Lee, Seung Soo; Lee, Youngjoo; Kim, Namkug; Kim, Seong Who; Byun, Jae Ho; Park, Seong Ho; Lee, Moon-Gyu; Ha, Hyun Kwon
2011-06-01
To compare the accuracy of four chemical shift magnetic resonance imaging (CS-MRI) analysis methods and MR spectroscopy (MRS) with and without T2 correction for fat quantification in the presence of excess iron. CS-MRI with six opposed- and in-phase acquisitions and MRS with five-echo acquisitions (TEs of 20, 30, 40, 50, 60 msec) were performed at 1.5 T on phantoms containing various fat fractions (FFs), on phantoms containing various iron concentrations, and in 18 patients with chronic liver disease. For CS-MRI, FFs were estimated with the dual-echo method, with two T2*-correction methods (triple- and multiecho), and with a multiinterference method that corrected for both T2* and spectral interference effects. For MRS, FF was estimated without T2 correction (single-echo MRS) and with T2 correction (multiecho MRS). In the phantoms, the T2*- or T2-correction methods for CS-MRI and MRS provided unbiased estimates of FF (mean bias, -1.1% to 0.5%) regardless of iron concentration, whereas the dual-echo method (-5.5% to -8.4%) and single-echo MRS (12.1% to 37.3%) produced large biases in FF. In patients, the FFs estimated with the triple-echo (R = 0.98), multiecho (R = 0.99), and multiinterference (R = 0.99) methods correlated more strongly with multiecho MRS FFs than did those from the dual-echo method (R = 0.86; P ≤ 0.011). The FFs estimated with the multiinterference method showed the closest agreement with multiecho MRS FFs (95% limits of agreement, -0.2 ± 1.1). T2*- or T2-correction methods effectively correct the confounding effects of iron, enabling accurate fat quantification across a wide range of iron concentrations. Spectral modeling of fat may further improve the accuracy of CS-MRI fat quantification. Copyright © 2011 Wiley-Liss, Inc.
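As a minimal illustration of why T2* correction matters for the dual-echo approach, the Python sketch below contrasts a dual-echo fat-fraction estimate with a triple-echo estimate that back-corrects T2* decay from two in-phase echoes. The echo times, signal values, and single-R2* assumption are ours, for illustration only; this is not the study's implementation.

    import numpy as np

    # Illustrative 1.5 T echo times (s) and signal magnitudes (a.u.):
    # opposed-phase at te1, in-phase at te2 and te3 (all values assumed).
    te1, te2, te3 = 2.3e-3, 4.6e-3, 9.2e-3
    s_op, s_ip1, s_ip2 = 80.0, 110.0, 90.0

    # Dual-echo estimate: ignores T2* decay, so iron-induced signal loss
    # between echoes is misread as a change in fat content.
    ff_dual = (s_ip1 - s_op) / (2.0 * s_ip1)

    # Triple-echo estimate: R2* from the two in-phase echoes, then each
    # signal is extrapolated back to TE = 0 before forming the fat fraction.
    r2star = np.log(s_ip1 / s_ip2) / (te3 - te2)
    s_ip0 = s_ip1 * np.exp(r2star * te2)
    s_op0 = s_op * np.exp(r2star * te1)
    ff_triple = (s_ip0 - s_op0) / (2.0 * s_ip0)
    print(f"dual-echo FF {ff_dual:.3f} vs triple-echo FF {ff_triple:.3f}")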
Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee
2013-01-01
Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field on breast density quantification was investigated in a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify volumetric breast density. First, standard fuzzy c-means (FCM) clustering was applied to raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. The breast densities measured with the three methods were then compared with the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fit for the glandular volume estimation. The left–right breast density correlation also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and FCM algorithms showed improved linear correlation. As a result, Pearson's r increased from 0.86 to 0.92 with bias field correction. Conclusions: The investigated CLIC method significantly increased the precision and accuracy of breast density quantification from breast MRI images by effectively correcting the bias field. A fully automated computerized algorithm for breast density quantification may have great potential in clinical MRI applications.
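For readers unfamiliar with the first step, the fragment below is a minimal fuzzy c-means loop on a 1-D vector of voxel intensities. It is a generic FCM sketch of ours, not the CLIC algorithm, which additionally estimates a multiplicative bias field during the iterations.

    import numpy as np

    def fcm(values, n_clusters=2, m=2.0, n_iter=100, seed=0):
        # Generic fuzzy c-means on a 1-D float array of intensities.
        rng = np.random.default_rng(seed)
        u = rng.random((len(values), n_clusters))
        u /= u.sum(axis=1, keepdims=True)      # memberships sum to 1 per voxel
        p = 2.0 / (m - 1.0)
        for _ in range(n_iter):
            w = u ** m
            centers = (w * values[:, None]).sum(axis=0) / w.sum(axis=0)
            d = np.abs(values[:, None] - centers) + 1e-12
            u = d ** -p / (d ** -p).sum(axis=1, keepdims=True)
        return u, centers

    # e.g. u, c = fcm(intensities); the fibroglandular fraction is then the
    # mean membership of whichever class is taken as glandular tissue.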
Jordan, Gregor; Onami, Ichio; Heinrich, Julia; Staack, Roland F
2017-11-01
Assessment of active drug exposure of biologics may be crucial for drug development. Typically, ligand-binding assay methods are used to provide free/active drug concentrations. To what extent hybrid LC-MS/MS procedures enable correct 'active' drug quantification is currently under consideration. Experimental & results: The relevance of appropriate extraction conditions was evaluated by a hybrid target-capture immuno-affinity LC-MS/MS method using total and free/active quality controls (QCs). Rapid extraction (10 min) provided correct results, whereas overnight incubation resulted in significant overestimation of the free/active drug (monoclonal antibody) concentration. Conventional total QCs were inappropriate for determining optimal method conditions, in contrast to free/active QCs. The 'free/active analyte QC concept' enables development of appropriate extraction conditions for correct active drug quantification by hybrid LC-MS/MS.
Impact of reconstruction parameters on quantitative I-131 SPECT
NASA Astrophysics Data System (ADS)
van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.
2016-07-01
Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density than TEW or no scatter correction: the quantification error relative to a dose calibrator derived measurement was <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction than with geometric Gaussian or no CDR modelling. Scatter correction had a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method for reconstructing accurate quantitative iodine-131 SPECT images.
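The TEW estimate named in point (2) is simple enough to state in a few lines. The sketch below applies the standard trapezoidal form to a single projection pixel; the window widths and counts are illustrative, not taken from this study.

    # Triple-energy-window (TEW) scatter estimate for one photopeak pixel.
    # c_lo, c_hi: counts in narrow sub-windows flanking the photopeak;
    # w_lo, w_hi, w_peak: window widths in keV (illustrative values).
    def tew_scatter(c_lo, c_hi, w_lo=3.0, w_hi=3.0, w_peak=58.0):
        return (c_lo / w_lo + c_hi / w_hi) * w_peak / 2.0

    counts_peak = 950.0
    primary = counts_peak - tew_scatter(c_lo=40.0, c_hi=25.0)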
New approach for the quantification of processed animal proteins in feed using light microscopy.
Veys, P; Baeten, V
2010-07-01
A revision of the European Union's total feed ban on animal proteins in feed will need robust quantification methods, especially for control analyses, if tolerance levels are to be introduced, as for fishmeal in ruminant feed. In 2006, a study conducted by the Community Reference Laboratory for Animal Proteins in feedstuffs (CRL-AP) demonstrated the deficiency of the official quantification method based on light microscopy, concluding that the method had to be revised. This paper puts forward an improved quantification method based on three elements: (1) the preparation of permanent slides with an optical adhesive that preserves all morphological markers of bones necessary for accurate identification and precise counting; (2) the use of a counting grid eyepiece reticle; and (3) new definitions of correction factors for the estimated portions of animal particles in the sediment. The revised quantification method was tested on feeds adulterated at different levels with bovine meat and bone meal (MBM) and fishmeal, and it proved straightforward to apply. The results obtained were very close to the expected contamination levels for both types of adulteration (MBM or fishmeal). Calculated values were not only repeatable, but also reproducible. The advantages of the new approach, including the benefits of the optical adhesive used for permanent slide mounting and the experimental conditions that must be met to implement the new method correctly, are discussed.
Mikkelsen, Mark; Singh, Krish D; Brealy, Jennifer A; Linden, David E J; Evans, C John
2016-11-01
The quantification of γ-aminobutyric acid (GABA) concentration using localised MRS suffers from partial volume effects related to differences in the intrinsic concentration of GABA in grey (GM) and white (WM) matter. These differences can be represented as a ratio between intrinsic GABA in GM and WM: r_M. Individual differences in GM tissue volume can therefore potentially drive apparent concentration differences. Here, a quantification method that corrects for these effects is formulated and empirically validated. Quantification using tissue water as an internal concentration reference has been described previously. Partial volume effects attributed to r_M can be accounted for by incorporating into this established method an additional multiplicative correction factor based on measured or literature values of r_M weighted by the proportion of GM and WM within tissue-segmented MRS volumes. Simulations were performed to test the sensitivity of this correction using different assumptions of r_M taken from previous studies. The tissue correction method was then validated by applying it to an independent dataset of in vivo GABA measurements using an empirically measured value of r_M. It was shown that incorrect assumptions of r_M can lead to overcorrection and inflation of GABA concentration measurements quantified in volumes composed predominantly of WM. For the independent dataset, GABA concentration was linearly related to GM tissue volume when only the water signal was corrected for partial volume effects. Performing a full correction that additionally accounts for partial volume effects ascribed to r_M successfully removed this dependence. With an appropriate assumption of the ratio of intrinsic GABA concentration in GM and WM, GABA measurements can be corrected for partial volume effects, potentially leading to a reduction in between-participant variance, increased power in statistical tests and better discriminability of true effects. Copyright © 2016 John Wiley & Sons, Ltd.
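To make the r_M correction concrete, here is a minimal sketch of the core rescaling under simplifying assumptions of ours: GABA-visible signal comes only from GM and WM, water referencing is already done, and r_M = 2.0 is an assumed literature-like value. The published correction additionally carries relaxation and tissue-water-visibility terms.

    def gm_equivalent_gaba(c_meas, f_gm, f_wm, r_m=2.0):
        # Solve c_meas = (f_gm*c_gm + f_wm*c_gm/r_m) / (f_gm + f_wm) for c_gm,
        # i.e. rescale a tissue-averaged estimate to a pure-GM-equivalent value.
        return c_meas * (f_gm + f_wm) / (f_gm + f_wm / r_m)

    print(gm_equivalent_gaba(1.8, f_gm=0.4, f_wm=0.6))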
Ning, Jia; Schubert, Tilman; Johnson, Kevin M; Roldán-Alzate, Alejandro; Chen, Huijun; Yuan, Chun; Reeder, Scott B
2018-06-01
To propose a simple method to correct the vascular input function (VIF) for inflow effects and to test whether the proposed method can provide more accurate VIFs for improved pharmacokinetic modeling. A spoiled gradient echo sequence-based inflow quantification and contrast agent concentration correction method was proposed. Simulations were conducted to illustrate the improvement in the accuracy of VIF estimation and pharmacokinetic fitting. Animal studies with dynamic contrast-enhanced MR scans were conducted before, 1 week after, and 2 weeks after portal vein embolization (PVE) was performed in the left portal circulation of pigs. The proposed method was applied to correct the VIFs for model fitting. Pharmacokinetic parameters fitted using corrected and uncorrected VIFs were compared between different lobes and visits. Simulation results demonstrated that the proposed method can improve the accuracy of VIF estimation and pharmacokinetic fitting. In the animal studies, pharmacokinetic fitting using corrected VIFs demonstrated changes in perfusion consistent with those expected after PVE, whereas perfusion estimates derived from uncorrected VIFs showed no significant changes. The proposed correction method improves the accuracy of VIFs and therefore provides more precise pharmacokinetic fitting. This method may be promising in improving the reliability of perfusion quantification. Magn Reson Med 79:3093-3102, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
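The correction rests on the standard spoiled-gradient-echo signal model; below is a generic sketch of how a gadolinium concentration estimate follows from it. The function names, the relaxivity value, and the omission of the paper's inflow term are our simplifications.

    import numpy as np

    # Steady-state SPGR signal and the usual DCE-MRI conversion from T1
    # shortening to contrast agent concentration (inflow effects NOT modeled).
    def spgr_signal(m0, t1, tr, flip_deg):
        e1 = np.exp(-tr / t1)
        a = np.deg2rad(flip_deg)
        return m0 * np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

    def concentration_from_t1(t1, t1_pre, r1=4.5):  # r1 in s^-1 mM^-1, assumed
        return (1.0 / t1 - 1.0 / t1_pre) / r1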
Samanipour, Saer; Dimitriou-Christidis, Petros; Gros, Jonas; Grange, Aureline; Samuel Arey, J
2015-01-02
Comprehensive two-dimensional gas chromatography (GC×GC) is widely used to separate and measure organic chemicals in complex mixtures. However, approaches for quantifying analytes in real, complex samples have not been critically assessed. We quantified 7 PAHs in a certified diesel fuel using GC×GC coupled to a flame ionization detector (FID), and we quantified 11 target chlorinated hydrocarbons in a lake water extract using GC×GC with an electron capture detector (μECD), further confirmed qualitatively by GC×GC with an electron capture negative chemical ionization time-of-flight mass spectrometer (ENCI-TOFMS). Target analyte peak volumes were determined using several existing baseline correction algorithms and peak delineation algorithms. Analyte quantifications were conducted using external standards and also using standard additions, enabling us to diagnose matrix effects. We then applied several chemometric tests to these data. We find that the choice of baseline correction algorithm and peak delineation algorithm strongly influences the reproducibility of analyte signal, the error of the calibration offset, the proportionality of integrated signal response, and the accuracy of quantifications. Additionally, the choice of baseline correction and peak delineation algorithm is essential for correctly discriminating analyte signal from unresolved complex mixture signal, and this is the chief consideration for controlling matrix effects during quantification. The diagnostic approaches presented here provide guidance for analyte quantification using GC×GC. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
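Because standard additions is the tool used above to diagnose matrix effects, a minimal version is sketched here; the spike levels and signals are invented for illustration.

    import numpy as np

    # Standard additions: spike the extract with known analyte amounts, fit
    # the response line, and read the native concentration off the x-intercept.
    added = np.array([0.0, 5.0, 10.0, 20.0])         # spike conc. (ng/mL)
    signal = np.array([120.0, 180.0, 245.0, 365.0])  # integrated peak volume

    slope, intercept = np.polyfit(added, signal, 1)
    c0 = intercept / slope   # estimated concentration in the unspiked extract
    # A slope differing from the external-standard calibration slope is
    # itself a symptom of matrix effects.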
Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna
2017-07-20
The main applications of 188Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize 188Re therapies, the accurate determination of the radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188Re images. In this study, we performed a series of phantom experiments to investigate the accuracy of activity quantification for 188Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in air, non-radioactive water (cold water) and water with activity (hot water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter and resolution recovery) was used. For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to the object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared with their true values. Finally, Monte Carlo simulations of a commercial γ-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors <10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold. However, the accuracy of activity quantification for objects segmented with the 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte Carlo simulations confirmed that TEW scatter correction applied to 188Re, although practical, yields only approximate estimates of the true scatter.
Grošev, Darko; Gregov, Marin; Wolfl, Miroslava Radić; Krstonošić, Branislav; Debeljuh, Dea Dundara
2018-06-07
To make quantitative methods of nuclear medicine more available, four centres in Croatia participated in a national intercomparison study, following the materials and methods used in the previous international study organized by the International Atomic Energy Agency (IAEA). The study task was to calculate the activities of four 133Ba sources (T1/2 = 10.54 years; Eγ = 356 keV) using planar and single-photon emission computed tomography (SPECT) or SPECT/CT acquisitions of the sources inside a water-filled cylindrical phantom. The sources had previously been calibrated by the US National Institute of Standards and Technology (NIST). The triple-energy window method was used for scatter correction. Planar studies were corrected for attenuation (AC) using the conjugate-view method. For SPECT/CT studies, data from X-ray computed tomography were used for attenuation correction (CT-AC), whereas for SPECT-only acquisitions, the Chang-AC method was applied. Using the lessons learned from the IAEA study, data were acquired according to the harmonized data acquisition protocol, and the acquired images were then processed using centralized data analysis. The accuracy of the activity quantification was evaluated as the ratio R between the calculated activity and the NIST value. For planar studies, R = 1.06 ± 0.08; for the SPECT/CT study using CT-AC, R = 1.00 ± 0.08; and for Chang-AC, R = 0.89 ± 0.12. The results are in accordance with those obtained within the larger IAEA study and confirm that the SPECT/CT method is the most appropriate for accurate activity quantification.
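For reference, the conjugate-view attenuation correction used for the planar studies reduces, in its simplest geometric-mean form, to a few lines. The attenuation coefficient, phantom thickness, and calibration factor below are illustrative, and source self-attenuation is ignored.

    import numpy as np

    # Conjugate-view (geometric-mean) activity estimate for one source.
    # i_ant, i_post: background-subtracted count rates (cps); mu: linear
    # attenuation coefficient (1/cm) at 356 keV; thickness in cm.
    def conjugate_view_activity(i_ant, i_post, mu, thickness, cps_per_mbq):
        geo_mean = np.sqrt(i_ant * i_post)
        attn = np.exp(-mu * thickness / 2.0)   # depth-independent correction
        return geo_mean / (attn * cps_per_mbq)

    a_mbq = conjugate_view_activity(1500.0, 1300.0, mu=0.11, thickness=20.0,
                                    cps_per_mbq=90.0)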
On the Confounding Effect of Temperature on Chemical Shift-Encoded Fat Quantification
Hernando, Diego; Sharma, Samir D.; Kramer, Harald; Reeder, Scott B.
2014-01-01
Purpose To characterize the confounding effect of temperature on chemical shift-encoded (CSE) fat quantification. Methods The proton resonance frequency of water, unlike that of triglycerides, depends on temperature. This leads to a temperature dependence of the spectral models of fat (relative to water) that are commonly used by CSE-MRI methods. Simulation analysis was performed for 1.5 Tesla CSE fat–water signals at various temperatures and echo time combinations. Oil–water phantoms were constructed and scanned at temperatures between 0 and 40°C using spectroscopy and CSE imaging at three echo time combinations. An explanted human liver, rejected for transplantation due to steatosis, was scanned using spectroscopy and CSE imaging. Fat–water reconstructions were performed using four different techniques: magnitude and complex fitting, with standard or temperature-corrected signal modeling. Results In all experiments, magnitude fitting with standard signal modeling resulted in large fat quantification errors. Errors were largest for echo time combinations near TEinit ≈ 1.3 ms, ΔTE ≈ 2.2 ms. Errors in fat quantification caused by temperature-related frequency shifts were smaller with complex fitting, and were avoided using a temperature-corrected signal model. Conclusion Temperature is a confounding factor for fat quantification. If not accounted for, it can result in large errors in fat quantification in phantom and ex vivo acquisitions.
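The mechanism is easy to state numerically: the water resonance drifts with temperature (roughly -0.01 ppm/°C) while triglyceride peaks do not, so the fat–water frequency offsets assumed by the signal model must be shifted. The sketch below does this for a nominal six-peak fat model; the coefficient and peak locations are generic literature-style values, not this paper's exact model.

    # Temperature-adjusted fat-water frequency offsets at 1.5 T (sketch).
    GAMMA_HZ_PER_T = 42.577e6
    B0 = 1.5  # tesla

    def fat_water_offsets_hz(temp_c,
                             fat_ppm=(-3.80, -3.40, -2.60, -1.94, -0.39, 0.60)):
        d_water_ppm = -0.01 * (temp_c - 37.0)   # water shift relative to 37 C
        return [(p - d_water_ppm) * GAMMA_HZ_PER_T * B0 * 1e-6 for p in fat_ppm]

    # e.g. at 20 C the main fat peak sits ~11 Hz further from water than at 37 C.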
Hines, Catherine D. G.; Hamilton, Gavin; Sirlin, Claude B.; McKenzie, Charles A.; Yu, Huanzhou; Brittain, Jean H.; Reeder, Scott B.
2011-01-01
Purpose: To prospectively compare an investigational version of a complex-based chemical shift–based fat fraction magnetic resonance (MR) imaging method with MR spectroscopy for the quantification of hepatic steatosis. Materials and Methods: This study was approved by the institutional review board and was HIPAA compliant. Written informed consent was obtained before all studies. Fifty-five patients (31 women, 24 men; age range, 24–71 years) were prospectively imaged at 1.5 T with quantitative MR imaging and single-voxel MR spectroscopy, each within a single breath hold. The effects of T2* correction, spectral modeling of fat, and magnitude fitting for eddy current correction on fat quantification with MR imaging were investigated by reconstructing fat fraction images from the same source data with different combinations of error correction. Single-voxel T2-corrected MR spectroscopy was used to measure fat fraction and served as the reference standard. All MR spectroscopy data were postprocessed at a separate institution by an MR physicist who was blinded to MR imaging results. Fat fractions measured with MR imaging and MR spectroscopy were compared statistically to determine the correlation (r2), with the slope and intercept as measures of agreement between MR imaging and MR spectroscopy fat fraction measurements, to determine whether MR imaging can help quantify fat, and to examine the importance of T2* correction, spectral modeling of fat, and eddy current correction. Two-sided t tests (significance level, P = .05) were used to determine whether estimated slopes and intercepts were significantly different from 1.0 and 0.0, respectively. Sensitivity and specificity for the classification of clinically significant steatosis were evaluated. Results: Overall, there was excellent correlation between MR imaging and MR spectroscopy for all reconstruction combinations. However, agreement was only achieved when T2* correction, spectral modeling of fat, and magnitude fitting for eddy current correction were used (r2 = 0.99; slope ± standard deviation = 1.00 ± 0.01, P = .77; intercept ± standard deviation = 0.2% ± 0.1, P = .19). Conclusion: T1-independent chemical shift–based water–fat separation MR imaging methods can accurately quantify fat over the entire liver, with MR spectroscopy as the reference standard, when T2* correction, spectral modeling of fat, and eddy current correction methods are used. © RSNA, 2011
NASA Astrophysics Data System (ADS)
Mérida, Inés; Reilhac, Anthonin; Redouté, Jérôme; Heckemann, Rolf A.; Costes, Nicolas; Hammers, Alexander
2017-04-01
In simultaneous PET-MR, attenuation maps are not directly available. Essential for absolute radioactivity quantification, they need to be derived from MR or PET data to correct for gamma photon attenuation by the imaged object. We evaluate a multi-atlas attenuation correction method for brain imaging (MaxProb) on static [18F]FDG PET and, for the first time, on dynamic PET, using the serotoninergic tracer [18F]MPPF. A database of 40 MR/CT image pairs (atlases) was used. The MaxProb method synthesises subject-specific pseudo-CTs by registering each atlas to the target subject space. Atlas CT intensities are then fused via label propagation and majority voting. Here, we compared these pseudo-CTs with the real CTs in a leave-one-out design, contrasting the MaxProb approach with a simplified single-atlas method (SingleAtlas). We evaluated the impact of pseudo-CT accuracy on reconstructed PET images, compared to PET data reconstructed with real CT, at the regional and voxel levels for the following: radioactivity images; time-activity curves; and kinetic parameters (non-displaceable binding potential, BPND). On static [18F]FDG, the mean bias for MaxProb ranged between 0 and 1% for 73 out of 84 regions assessed, and peaked at 2.5% for only one region. Statistical parametric map analysis of MaxProb-corrected PET data showed significant differences in less than 0.02% of the brain volume, whereas SingleAtlas-corrected data showed significant differences in 20% of the brain volume. On dynamic [18F]MPPF, most regional errors on BPND ranged from -1 to +3% (maximum bias 5%) for the MaxProb method. With SingleAtlas, errors were larger and had higher variability in most regions. PET quantification bias increased over the duration of the dynamic scan for SingleAtlas, but not for MaxProb. We show that this effect is due to the interaction of the temporally varying spatial heterogeneity of the tracer distribution with the degree of accuracy of the attenuation maps. This work demonstrates that inaccuracies in attenuation maps can induce bias in dynamic brain PET studies. Multi-atlas attenuation correction with MaxProb enables quantification on hybrid PET-MR scanners, eschewing the need for CT.
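The label-fusion step at the heart of MaxProb can be sketched generically. In the toy fragment below, co-registered atlas CTs are discretized into tissue classes, voxelwise majority voting picks a label, and a nominal HU value is assigned; the thresholds, class values, and intensity-based labelling are stand-ins of ours for the paper's label-propagation pipeline.

    import numpy as np

    # atlas_cts: (n_atlases, *volume) stack of atlas CTs already deformed
    # to the subject. Thresholds/HU values are illustrative.
    def fuse_pseudo_ct(atlas_cts, thresholds=(-500.0, 300.0),
                       class_hu=(-1000.0, 40.0, 1000.0)):
        labels = np.digitize(atlas_cts, thresholds)      # 0 air, 1 soft, 2 bone
        votes = np.stack([(labels == k).sum(axis=0) for k in range(3)])
        winner = votes.argmax(axis=0)                    # majority label/voxel
        return np.take(np.asarray(class_hu), winner)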
Surface smoothness: cartilage biomarkers for knee OA beyond the radiologist
NASA Astrophysics Data System (ADS)
Tummala, Sudhakar; Dam, Erik B.
2010-03-01
Fully automatic imaging biomarkers may allow quantification of pathophysiological processes that a radiologist would not be able to assess reliably. This can introduce new insight but is problematic to validate due to the lack of meaningful ground-truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated against clinically meaningful end goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method applied to tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations and, in addition, provide a diagnostic marker superior to the evaluated semi-manual markers.
Katoh, Chietsugu; Yoshinaga, Keiichiro; Klein, Ran; Kasai, Katsuhiko; Tomiyama, Yuuki; Manabe, Osamu; Naya, Masanao; Sakakibara, Mamoru; Tsutsui, Hiroyuki; deKemp, Robert A; Tamaki, Nagara
2012-08-01
Myocardial blood flow (MBF) estimation with (82)Rubidium ((82)Rb) positron emission tomography (PET) is technically difficult because of the high spillover between regions of interest, especially due to the long positron range. We sought to develop a new algorithm to reduce the spillover in image-derived blood activity curves, using non-uniform weighted least-squares fitting. Fourteen volunteers underwent imaging with both 3-dimensional (3D) (82)Rb and (15)O-water PET at rest and during pharmacological stress. Whole left ventricular (LV) (82)Rb MBF was estimated using a one-compartment model, including a myocardium-to-blood spillover correction, to estimate the corresponding blood input function Ca(t)_whole. Regional K1 values were calculated using this uniform global input function, which simplifies the equations and enables robust estimation of MBF. To assess the robustness of the modified algorithm, inter-operator repeatability of 3D (82)Rb MBF was compared with that of a previously established method. Whole-LV correlation of (82)Rb MBF with (15)O-water MBF was better (P < .01) with the modified spillover correction method (r = 0.92 vs r = 0.60). The modified method also yielded significantly improved inter-operator repeatability of regional MBF quantification (r = 0.89) versus the established method (r = 0.82) (P < .01). A uniform global input function can suppress LV spillover into the image-derived blood input function, resulting in improved precision for MBF quantification with 3D (82)Rb PET.
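As a generic illustration of the one-compartment model with spillover described above, the sketch below builds a model ROI curve from an input function. The discretization, toy input, and single spillover fraction are our simplifications, not the paper's fitting code.

    import numpy as np

    # One-tissue-compartment model: C_T(t) = K1 * Ca(t) conv exp(-k2*t),
    # plus a blood-to-myocardium spillover term.
    def model_roi_curve(t, ca, k1, k2, f_spill):
        dt = t[1] - t[0]                                 # uniform sampling
        ct = k1 * dt * np.convolve(ca, np.exp(-k2 * t))[: len(t)]
        return (1.0 - f_spill) * ct + f_spill * ca

    t = np.arange(0.0, 120.0, 1.0)                       # seconds
    ca = np.exp(-((t - 30.0) / 12.0) ** 2)               # toy input function
    roi = model_roi_curve(t, ca, k1=0.8 / 60, k2=0.15 / 60, f_spill=0.3)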
On the Performance of T2∗ Correction Methods for Quantification of Hepatic Fat Content
Reeder, Scott B.; Bice, Emily K.; Yu, Huanzhou; Hernando, Diego; Pineda, Angel R.
2014-01-01
Nonalcoholic fatty liver disease is the most prevalent chronic liver disease in Western societies. MRI can quantify liver fat, the hallmark feature of nonalcoholic fatty liver disease, so long as multiple confounding factors including T2∗ decay are addressed. Recently developed MRI methods that correct for T2∗ to improve the accuracy of fat quantification either assume a common T2∗ (single-T2∗) for better stability and noise performance or independently estimate the T2∗ for water and fat (dual-T2∗) for reduced bias, but with a noise performance penalty. In this study, the tradeoff between bias and variance for different T2∗ correction methods is analyzed using the Cramér-Rao bound analysis for biased estimators and is validated using Monte Carlo experiments. A noise performance metric for estimation of fat fraction is proposed. Cramér-Rao bound analysis for biased estimators was used to compute the metric at different echo combinations. Optimization was performed for six echoes and typical T2∗ values. This analysis showed that all methods have better noise performance with very short first echo times and an echo spacing of ∼π/2 for single-T2∗ correction, and ∼2π/3 for dual-T2∗ correction. Interestingly, when an echo spacing and first echo shift of ∼π/2 are used, methods without T2∗ correction have less than 5% bias in the estimates of fat fraction.
Meisamy, Sina; Hines, Catherine D G; Hamilton, Gavin; Sirlin, Claude B; McKenzie, Charles A; Yu, Huanzhou; Brittain, Jean H; Reeder, Scott B
2011-03-01
To prospectively compare an investigational version of a complex-based chemical shift-based fat fraction magnetic resonance (MR) imaging method with MR spectroscopy for the quantification of hepatic steatosis. This study was approved by the institutional review board and was HIPAA compliant. Written informed consent was obtained before all studies. Fifty-five patients (31 women, 24 men; age range, 24-71 years) were prospectively imaged at 1.5 T with quantitative MR imaging and single-voxel MR spectroscopy, each within a single breath hold. The effects of T2* correction, spectral modeling of fat, and magnitude fitting for eddy current correction on fat quantification with MR imaging were investigated by reconstructing fat fraction images from the same source data with different combinations of error correction. Single-voxel T2-corrected MR spectroscopy was used to measure fat fraction and served as the reference standard. All MR spectroscopy data were postprocessed at a separate institution by an MR physicist who was blinded to MR imaging results. Fat fractions measured with MR imaging and MR spectroscopy were compared statistically to determine the correlation (r2), with the slope and intercept as measures of agreement between MR imaging and MR spectroscopy fat fraction measurements, to determine whether MR imaging can help quantify fat, and to examine the importance of T2* correction, spectral modeling of fat, and eddy current correction. Two-sided t tests (significance level, P = .05) were used to determine whether estimated slopes and intercepts were significantly different from 1.0 and 0.0, respectively. Sensitivity and specificity for the classification of clinically significant steatosis were evaluated. Overall, there was excellent correlation between MR imaging and MR spectroscopy for all reconstruction combinations. However, agreement was only achieved when T2* correction, spectral modeling of fat, and magnitude fitting for eddy current correction were used (r2 = 0.99; slope ± standard deviation = 1.00 ± 0.01, P = .77; intercept ± standard deviation = 0.2% ± 0.1, P = .19). T1-independent chemical shift-based water-fat separation MR imaging methods can accurately quantify fat over the entire liver, with MR spectroscopy as the reference standard, when T2* correction, spectral modeling of fat, and eddy current correction methods are used. © RSNA, 2011.
Chandra, A; Rana, J; Li, Y
2001-08-01
A method has been established and validated for the identification and quantification of individual, as well as total, anthocyanins by HPLC and LC/ES-MS in botanical raw materials used in the herbal supplement industry. The anthocyanins were separated and identified on the basis of their respective M(+) (cation) using LC/ES-MS. Separated anthocyanins were individually calculated against one commercially available anthocyanin external standard (cyanidin-3-glucoside chloride) and expressed as its equivalents. The amount of each anthocyanin calculated as an external standard equivalent was then multiplied by a molecular-weight correction factor to afford its specific quantity. Experimental procedures and the use of molecular-weight correction factors are substantiated and validated using Balaton tart cherry and elderberry as templates. Cyanidin-3-glucoside chloride has been widely used in the botanical industry to calculate total anthocyanins. In our studies on tart cherry and elderberry, its use as an external standard followed by the use of molecular-weight correction factors should provide relatively accurate results for total anthocyanins, because of the presence of cyanidin as their major anthocyanidin backbone. The method proposed here is simple and has a direct sample preparation procedure without any solid-phase extraction. It enables the selection and use of commercially available anthocyanins as external standards for the quantification of specific anthocyanins in the sample matrix, irrespective of their commercial availability as analytical standards. It can be used as a template and applied to similar quantification in several anthocyanin-containing raw materials for routine quality control procedures, thus providing consistency in the analytical testing of botanical raw materials used for manufacturing efficacious and true-to-the-label nutritional supplements.
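The molecular-weight correction amounts to one multiplication, shown below with the MW of the external standard and, as an assumed example, cyanidin-3-rutinoside chloride; it presumes the equal-molar-response logic described in the abstract.

    # Convert a cyanidin-3-glucoside chloride (C3G) equivalent into the
    # analyte's own mass units via a molecular-weight correction factor.
    MW_C3G_CL = 484.84                 # g/mol, cyanidin-3-glucoside chloride

    def corrected_amount(c3g_equivalent_mg, mw_analyte):
        return c3g_equivalent_mg * (mw_analyte / MW_C3G_CL)

    # e.g. cyanidin-3-rutinoside chloride (MW ~631.0 g/mol, assumed):
    amount_mg = corrected_amount(12.5, 631.0)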
Fei, Baowei; Yang, Xiaofeng; Nye, Jonathon A.; Aarsvold, John N.; Raghunath, Nivedita; Cervo, Morgan; Stark, Rebecca; Meltzer, Carolyn C.; Votaw, John R.
2012-01-01
Purpose: Combined MR/PET is a relatively new, hybrid imaging modality. A human MR/PET prototype system consisting of a Siemens 3T Trio MR and a brain PET insert was installed and tested at our institution. Its present design does not offer measured attenuation correction (AC) using traditional transmission imaging. This study covers the development of quantification tools, including MR-based AC, for combined MR/PET brain imaging. Methods: The developed quantification tools include image registration, segmentation, classification, and MR-based AC. These components were integrated into a single scheme for processing MR/PET data. The segmentation method is multiscale and based on the Radon transform of brain MR images. It was developed to segment the skull on T1-weighted MR images. A modified fuzzy C-means classification scheme was developed to classify brain tissue into gray matter, white matter, and cerebrospinal fluid. Classified tissue is assigned an attenuation coefficient so that AC factors can be generated. PET emission data are then reconstructed using a three-dimensional ordered-subsets expectation maximization method with the MR-based AC map. Ten subjects had separate MR and PET scans. The PET with [11C]PIB was acquired using a high-resolution research tomograph (HRRT). MR-based AC was compared with transmission (TX)-based AC on the HRRT. Seventeen volumes of interest were drawn manually on each subject image to compare the PET activities between the MR-based and TX-based AC methods. Results: For skull segmentation, the overlap ratio between our segmented results and the ground truth is 85.2 ± 2.6%. Attenuation correction results from the ten subjects show that the difference between the MR- and TX-based methods was <6.5%. Conclusions: MR-based AC compared favorably with conventional transmission-based AC. Quantitative tools including registration, segmentation, classification, and MR-based AC have been developed for use in combined MR/PET.
Impact of time-of-flight PET on quantification errors in MR imaging-based attenuation correction.
Mehranian, Abolfazl; Zaidi, Habib
2015-04-01
Time-of-flight (TOF) PET/MR imaging is an emerging technology in which TOF offers great capabilities to improve image quality and lesion detectability. We assessed, for the first time, the impact of TOF image reconstruction on PET quantification errors induced by MR imaging-based attenuation correction (MRAC) using simulation and clinical PET/CT studies. Standard 4-class attenuation maps were derived by segmentation of CT images of 27 patients undergoing PET/CT examinations into background air, lung, soft-tissue, and fat tissue classes, followed by the assignment of predefined attenuation coefficients to each class. For each patient, 4 PET images were reconstructed: non-TOF and TOF, each corrected for attenuation using reference CT-based attenuation correction and the resulting 4-class MRAC maps. The relative errors of the non-TOF and TOF MRAC reconstructions were computed with respect to their reference CT-based attenuation correction reconstructions. The bias was evaluated locally using volumes of interest (VOIs) defined on lesions and normal tissues, and globally using CT-derived tissue classes containing all voxels in a given tissue. The impact of TOF on reducing the errors induced by metal-susceptibility and respiratory-phase mismatch artifacts was also evaluated using clinical and simulation studies. Our results show that TOF PET can remarkably reduce attenuation correction artifacts and quantification errors in the lungs and bone tissues. Using classwise analysis, it was found that the non-TOF MRAC method results in an error of -3.4% ± 11.5% in the lungs and -21.8% ± 2.9% in bones, whereas its TOF counterpart reduced the errors to -2.9% ± 7.1% and -15.3% ± 2.3%, respectively. The VOI-based analysis revealed that the non-TOF and TOF methods resulted in an average overestimation of 7.5% and 3.9% in or near lung lesions (n = 23) and underestimation of less than 5% for soft tissue and in or near bone lesions (n = 91). Simulation results showed that as TOF resolution improves, artifacts and quantification errors are substantially reduced. TOF PET substantially reduces artifacts and significantly improves the quantitative accuracy of standard MRAC methods. Therefore, MRAC should be less of a concern on future TOF PET/MR scanners with improved timing resolution. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
Lien, Stina K; Kvitvang, Hans Fredrik Nyvold; Bruheim, Per
2012-07-20
GC-MS analysis of silylated metabolites is a sensitive method that covers important metabolite groups such as sugars, amino acids and non-amino organic acids, and it has become one of the most important analytical methods for exploring the metabolome. Absolute quantitative GC-MS analysis of silylated metabolites poses a challenge, as different metabolites have different derivatization kinetics and their silyl derivatives have varying stability. This report describes the development of a targeted GC-MS/MS method for the quantification of metabolites. Internal standards for each individual metabolite were obtained by derivatization of a mixture of standards with deuterated N-methyl-N-trimethylsilyltrifluoroacetamide (d9-MSTFA), and this solution was spiked into MSTFA-derivatized samples prior to GC-MS/MS analysis. The derivatization and spiking protocol needed optimization to ensure that the behaviour of labelled compound responses in the spiked sample correctly reflected the behaviour of unlabelled compound responses. Using labelled and unlabelled MSTFA in this way enabled normalization of metabolite responses by the response of their deuterated counterparts (i.e. individual correction). Such individual correction of metabolite responses reproducibly resulted in significantly higher precision than traditional data correction strategies when tested on samples both with and without serum and urine matrices. The developed method is thus a valuable contribution to the field of absolute quantitative metabolomics. Copyright © 2012 Elsevier B.V. All rights reserved.
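The 'individual correction' is a per-metabolite internal-standard ratio, as in the toy fragment below; the metabolite names and peak areas are invented.

    # Normalize each metabolite's response by its own d9-derivatized
    # counterpart spiked into the sample (individual correction).
    peak_area = {"alanine-TMS": 8.4e5, "succinate-TMS": 3.1e5}
    peak_area_d9 = {"alanine-d9TMS": 7.9e5, "succinate-d9TMS": 3.3e5}

    response_ratio = {
        m: peak_area[m] / peak_area_d9[m.replace("-TMS", "-d9TMS")]
        for m in peak_area
    }
    # Quantify by calibrating response_ratio against standard concentrations.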
Jacchia, Sara; Nardini, Elena; Savini, Christian; Petrillo, Mauro; Angers-Loustau, Alexandre; Shim, Jung-Hyun; Trijatmiko, Kurniawan; Kreysa, Joachim; Mazzara, Marco
2015-02-18
In this study, we developed, optimized, and in-house validated a real-time PCR method for the event-specific detection and quantification of Golden Rice 2, a genetically modified rice with provitamin A in the grain. We optimized and evaluated the performance of the taxon (targeting rice Phospholipase D α2 gene)- and event (targeting the 3' insert-to-plant DNA junction)-specific assays that compose the method as independent modules, using haploid genome equivalents as unit of measurement. We verified the specificity of the two real-time PCR assays and determined their dynamic range, limit of quantification, limit of detection, and robustness. We also confirmed that the taxon-specific DNA sequence is present in single copy in the rice genome and verified its stability of amplification across 132 rice varieties. A relative quantification experiment evidenced the correct performance of the two assays when used in combination.
Kangas, Michael J; Burks, Raychelle M; Atwater, Jordyn; Lukowicz, Rachel M; Garver, Billy; Holmes, Andrea E
2018-02-01
With the increasing availability of digital imaging devices, colorimetric sensor arrays are rapidly becoming a simple yet effective tool for the identification and quantification of various analytes. Colorimetric arrays combine data from many colorimetric sensors, and the multidimensional nature of the resulting data necessitates chemometric analysis. Herein, an 8-sensor colorimetric array was used to analyze selected acidic and basic samples (0.5–10 M) to determine which chemometric methods are best suited for classification and quantification of analytes within clusters. PCA, HCA, and LDA were used to visualize the data set. All three methods showed well-separated clusters for each of the acid or base analytes and moderate separation between analyte concentrations, indicating that the sensor array can be used to identify and quantify samples. Furthermore, PCA could be used to determine which sensors showed the most effective analyte identification. LDA, KNN, and HQI were used for identification of analyte and concentration. HQI and KNN correctly identified the analytes in all cases, while LDA identified 95 of 96 analytes correctly. Additional studies demonstrated that controlling for solvent and image effects was unnecessary for all chemometric methods utilized in this study.
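A minimal chemometric pipeline of the kind described is easy to assemble with scikit-learn; the random matrix below merely stands in for the 8-sensor RGB responses, so this shows the workflow, not the paper's results.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X = rng.random((96, 24))             # 96 samples x (8 sensors * RGB)
    y = rng.integers(0, 8, size=96)      # analyte/concentration class labels

    scores = PCA(n_components=2).fit_transform(X)      # cluster visualization
    lda = LinearDiscriminantAnalysis().fit(X, y)       # supervised classifier
    knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)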
NASA Astrophysics Data System (ADS)
Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik
2001-05-01
Emphysema is characterized by destruction of lung tissue with development of small or large holes within the lung. These areas will have Hounsfield values (HU) approaching -1000. It is possible to detect and quantify such areas using a simple density mask technique. The edge-enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts, however. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step is to apply inverse filtering to the image, removing much of the effect of the edge-enhancement reconstruction algorithm. The next step involves computation of the antero-posterior density gradient caused by gravity and correction for it. Motion artefacts are corrected in a third step by use of normalized averaging, thresholding and region growing. Twenty volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density mask technique, it was not possible to separate persons with disease from those without. Our algorithm improved the separation of the two groups considerably. Our algorithm needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema by HRCT.
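The density-mask baseline that the algorithm improves upon is one line of array code; the -950 HU cutoff below is a commonly used value, not necessarily the one from this work.

    import numpy as np

    # Density-mask emphysema index: fraction of segmented-lung voxels below
    # a Hounsfield-unit threshold.
    def emphysema_index(lung_hu, threshold=-950.0):
        return float((np.asarray(lung_hu) < threshold).mean())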
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu
We use functional, Fréchet, derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions, as opposed to their parameters as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard–Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high-pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.
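A first-order functional correction of this kind is a single inner product between the sensitivity kernel and the change in the pair potential; all arrays below are illustrative stand-ins, not the paper's data.

    import numpy as np

    # Predict a property for a perturbed pair potential from a reference
    # run's functional sensitivity dA/dphi(r), without re-simulating.
    r = np.linspace(0.8, 3.0, 200)              # pair distances (sigma units)
    sens = np.exp(-(r - 1.1) ** 2 / 0.05)       # toy sensitivity kernel
    dphi = 0.02 * np.sin(3.0 * r)               # phi_new - phi_ref (toy)

    a_ref = -7.5                                # property from reference run
    a_new = a_ref + float((sens * dphi).sum() * (r[1] - r[0]))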
Silva-Rodríguez, Jesús; Aguiar, Pablo; Sánchez, Manuel; Mosquera, Javier; Luna-Vega, Víctor; Cortés, Julia; Garrido, Miguel; Pombar, Miguel; Ruibal, Alvaro
2014-05-01
Current procedure guidelines for whole body [18F]fluoro-2-deoxy-D-glucose (FDG)-positron emission tomography (PET) state that studies with visible dose extravasations should be rejected for quantification protocols. Our work is focused on the development and validation of methods for estimating extravasated doses in order to correct standard uptake value (SUV) quantification for this effect in clinical routine. One thousand three hundred sixty-seven consecutive whole body FDG-PET studies were visually inspected looking for extravasation cases. Two methods for estimating the extravasated dose were proposed and validated in different scenarios using Monte Carlo simulations. All visible extravasations were retrospectively evaluated using a manual ROI-based method. In addition, the 50 patients with the highest extravasated doses were also evaluated using a threshold-based method. Simulation studies showed that the proposed methods for estimating extravasated doses allow us to compensate for the impact of extravasations on SUV values with an error below 5%. The quantitative evaluation of patient studies revealed that paravenous injection is a relatively frequent effect (18%), with a small fraction of patients presenting considerable extravasations ranging from 1% to a maximum of 22% of the injected dose. A criterion based on the extravasated volume and maximum concentration was established in order to identify the fraction of patients that might be corrected for the paravenous injection effect. The authors propose the use of a manual ROI-based method for estimating the effectively administered FDG dose and then correcting SUV quantification in those patients fulfilling the proposed criterion.
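The SUV rescaling implied by such an estimate is a one-line dose ratio, sketched below with variable names of ours.

    # Rescale SUV when part of the injected dose stayed at the injection site.
    def corrected_suv(suv, injected_dose_mbq, extravasated_dose_mbq):
        effective_dose = injected_dose_mbq - extravasated_dose_mbq
        return suv * injected_dose_mbq / effective_dose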
Noise suppressed partial volume correction for cardiac SPECT/CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Chung; Liu, Chi, E-mail: chi.liu@yale.edu
Purpose: Partial volume correction (PVC) methods typically improve quantification at the expense of increased image noise and reduced reproducibility. In this study, the authors developed a novel voxel-based PVC method that incorporates anatomical knowledge to improve quantification while suppressing noise for cardiac SPECT/CT imaging. Methods: In the proposed method, the SPECT images were first reconstructed using anatomical-based maximum a posteriori (AMAP) with Bowsher's prior to penalize noise while preserving boundaries. A sequential voxel-by-voxel PVC approach (Yang's method) was then applied to the AMAP reconstruction using a template response. This template response was obtained by forward projecting a template derived from a contrast-enhanced CT image, and then reconstructed using AMAP to model the partial volume effects (PVEs) introduced by both the system resolution and the smoothing applied during reconstruction. To evaluate the proposed noise-suppressed PVC (NS-PVC), the authors first simulated two types of cardiac SPECT studies: a 99mTc-tetrofosmin myocardial perfusion scan and a 99mTc-labeled red blood cell (RBC) scan on a dedicated cardiac multiple-pinhole SPECT/CT, at both high and low count levels. The authors then applied the proposed method to a canine equilibrium blood pool study following injection with 99mTc-RBCs at different count levels by rebinning the list-mode data into shorter acquisitions. The proposed method was compared to MLEM reconstruction without PVC, two conventional PVC methods, including Yang's method and multitarget correction (MTC) applied to the MLEM reconstruction, and AMAP reconstruction without PVC. Results: The results showed that Yang's method improved quantification but yielded increased noise and reduced reproducibility in the regions with higher activity. MTC corrected for PVE on high-count data with amplified noise, and yielded the worst performance among all the methods tested on low-count data. AMAP effectively suppressed noise and reduced the spill-in effect in the low-activity regions. However, it was unable to reduce the spill-out effect in high-activity regions. NS-PVC yielded superior performance in terms of both quantitative assessment and visual image quality while improving reproducibility. Conclusions: The results suggest that NS-PVC may be a promising PVC algorithm for application in low-dose protocols, and in gated and dynamic cardiac studies with low counts.
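For orientation, the Yang-style voxelwise correction referred to above can be sketched as dividing the observed image by the ratio of a PSF-blurred activity template to the template itself; this generic form, with an assumed Gaussian PSF, is ours and omits the paper's AMAP-specific template response.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def yang_pvc(observed, template, psf_sigma_vox):
        # template: piecewise-constant assumed activity map from anatomy.
        blurred = gaussian_filter(template, psf_sigma_vox)
        ratio = np.where(blurred > 0, template / np.maximum(blurred, 1e-9), 1.0)
        return observed * ratio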
López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón
2009-10-01
The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced, but at the cost of a loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study attempts to show, with respect to immunohistochemically stained nuclei, which morphometric parameters may be altered by different levels of JPEG compression and the implications of these alterations for automated nuclear counts, and further develops a method for correcting this discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in the TIFF images and in those of the different compression levels using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.
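A minimal sketch of the described correction-factor approach: fit a linear model between roundness measured on uncompressed and compressed versions of the same objects, then invert it for new measurements, one model per JPEG compression level. The numeric values below are hypothetical placeholders, not data from the study.

```python
import numpy as np

# Paired roundness measurements of the same objects in uncompressed TIFF
# and JPEG-compressed images (hypothetical values for illustration)
roundness_tiff = np.array([0.91, 0.85, 0.78, 0.88, 0.95, 0.82])
roundness_jpeg = np.array([0.84, 0.77, 0.70, 0.80, 0.89, 0.74])

# Fit roundness_tiff ~ a * roundness_jpeg + b for this compression level
a, b = np.polyfit(roundness_jpeg, roundness_tiff, 1)

def corrected_roundness(measured_jpeg):
    """Map a roundness value measured on a compressed image back to the
    value expected on the uncompressed original."""
    return a * measured_jpeg + b

print(corrected_roundness(0.80))
```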
Attenuation correction of emission PET images with average CT: Interpolation from breath-hold CT
NASA Astrophysics Data System (ADS)
Huang, Tzung-Chi; Zhang, Geoffrey; Chen, Chih-Hao; Yang, Bang-Hung; Wu, Nien-Yun; Wang, Shyh-Jen; Wu, Tung-Hsin
2011-05-01
Misregistration resulting from the difference in temporal resolution between PET and CT scans occurs frequently in PET/CT imaging and causes distortion in tumor quantification in PET. Several papers have reported that respiration cine average CT (CACT) for PET attenuation correction effectively reduces this misalignment. However, the radiation dose to the patient from a four-dimensional CT scan is relatively high. In this study, we propose a method to interpolate respiratory CT images over a respiratory cycle from inhalation and exhalation breath-hold CT images, and to use the average CT from the generated CT set for PET attenuation correction. The radiation dose to the patient is reduced using this method. Six cancer patients with lesions at various sites underwent routine free-breath helical CT (HCT), respiration CACT, interpolated average CT (IACT), and 18F-FDG PET. Deformable image registration was used to interpolate the middle phases of a respiratory cycle based on the end-inspiration and end-expiration breath-hold CT scans. The average CT image was calculated from the eight interpolated CT image sets of middle respiratory phases and the two original inspiration and expiration CT images. The PET images were then reconstructed three times, using HCT, CACT, and IACT for attenuation correction. Misalignment in PET/CT was reduced when either CACT or IACT was used for attenuation correction. The difference in standard uptake value (SUV) from tumors in PET images was largest between the use of HCT and CACT, and smallest between the use of CACT and IACT. Besides an improvement in tumor quantification similar to that of CACT, using IACT for PET attenuation correction reduces the radiation dose to the patient.
Quantitative proton magnetic resonance spectroscopy without water suppression
NASA Astrophysics Data System (ADS)
Özdemir, M. S.; DeDeene, Y.; Fieremans, E.; Lemahieu, I.
2009-06-01
The suppression of the abundant water signal has traditionally been employed to decrease the dynamic range of the NMR signal in proton MRS (1H MRS) in vivo. When using this approach, if the intent is to utilize the water signal as an internal reference for the absolute quantification of metabolites, additional measurements are required for the acquisition of the water signal. This can be prohibitively time-consuming and is not desired clinically. Additionally, traditional water suppression can lead to metabolite alterations. This can be overcome by performing quantitative 1H MRS without water suppression. However, non-water-suppressed spectra suffer from gradient-induced frequency modulations, resulting in sidebands in the spectrum. Sidebands may overlap with the metabolites, which renders spectral analysis and quantification problematic. In this paper, we performed absolute quantification of metabolites without water suppression. Sidebands were removed by utilizing the phase of an external reference signal with a single resonance to observe the time-varying static field fluctuations induced by gradient vibrations, and by deconvolving this phase contamination from the desired NMR signal. Metabolite concentrations were determined after sideband correction by calibrating the metabolite signal intensities against the recorded water signal. The method was evaluated by phantom and in vivo measurements in the human brain. The maximum systematic error for the quantified metabolite concentrations was found to be 10.8%, showing the feasibility of quantification after sideband correction.
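A minimal numeric sketch of the described sideband removal, assuming the external reference signal has already been demodulated so that its residual phase tracks only the gradient-vibration-induced field fluctuations. Function and variable names are illustrative, not from the paper.

```python
import numpy as np

def remove_sidebands(fid, reference_fid):
    """Deconvolve gradient-vibration-induced phase modulation (sketch).

    fid:           complex non-water-suppressed FID of the voxel of interest
    reference_fid: complex FID of an external single-resonance reference,
                   acquired with the same gradient scheme and demodulated
                   to 0 Hz so only the fluctuation phase remains
    The time-varying phase of the reference tracks the B0 fluctuations;
    dividing it out suppresses the sidebands in the spectrum.
    """
    phase = np.angle(reference_fid)          # instantaneous phase errors
    corrected = fid * np.exp(-1j * phase)    # deconvolve the contamination
    return np.fft.fftshift(np.fft.fft(corrected))
```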
Parsons, Teresa L.; Marzinke, Mark A.; Hoang, Thuy; Bliven-Sizemore, Erin; Weiner, Marc; Mac Kenzie, William R.; Dorman, Susan E.
2014-01-01
The quantification of antituberculosis drug concentrations in multinational trials currently requires the collection of modest blood volumes, centrifugation, aliquoting of plasma, freezing, and keeping samples frozen during shipping. We prospectively enrolled healthy individuals into the Tuberculosis Trials Consortium Study 29B, a phase I dose escalation study of rifapentine, a rifamycin under evaluation in tuberculosis treatment trials. We developed a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for quantifying rifapentine in whole blood on dried blood spots (DBS) to facilitate pharmacokinetic/pharmacodynamic analyses in clinical trials. Paired plasma and whole-blood samples were collected by venipuncture, and whole blood was spotted on Whatman protein saver 903 cards. The methods were optimized for plasma and then validated for DBS. The analytical measuring range for quantification of rifapentine and its metabolite was 50 to 80,000 ng/ml in whole-blood DBS. The analyte was stable on the cards for 11 weeks with a desiccant at room temperature and protected from light. The method concordance for paired plasma and whole-blood DBS samples was determined after correcting for participant hematocrit or for population-based estimates of bias from Bland-Altman plots. The application of either correction factor resulted in acceptable correlation between plasma and whole-blood DBS (Passing-Bablok regression corrected for hematocrit: y = 0.98x + 356). Concentrations of rifapentine may be determined from whole-blood DBS collected via venipuncture after normalization in order to account for the dilutional effects of red blood cells. Additional studies are focused on the application of this methodology to capillary blood collected by finger stick. The simplicity of processing, storage, shipping, and low blood volume makes whole-blood DBS attractive for rifapentine pharmacokinetic evaluations, especially in international and pediatric trials. PMID:25182637
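The hematocrit normalization mentioned above amounts to undoing the dilution of plasma by red cells. A minimal sketch, assuming the analyte resides essentially in the plasma fraction and ignoring the study's empirical Bland-Altman bias factor:

```python
def dbs_to_plasma(conc_dbs_ng_ml, hematocrit):
    """Estimate a plasma concentration from a dried-blood-spot measurement.

    Assumes the analyte is confined largely to plasma, so the whole-blood
    value is diluted by the red-cell volume fraction (hematocrit).
    hematocrit is a fraction, e.g. 0.42.
    """
    return conc_dbs_ng_ml / (1.0 - hematocrit)

# Example: 1,200 ng/ml in DBS at Ht 0.40 -> ~2,000 ng/ml plasma equivalent
print(dbs_to_plasma(1200.0, 0.40))
```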
Quantitative Chemical Shift-Encoded MRI Is an Accurate Method to Quantify Hepatic Steatosis
Kühn, Jens-Peter; Hernando, Diego; Mensel, Birger; Krüger, Paul C.; Ittermann, Till; Mayerle, Julia; Hosten, Norbert; Reeder, Scott B.
2014-01-01
Purpose: To compare the accuracy of liver fat quantification using a three-echo chemical shift-encoded magnetic resonance imaging (MRI) technique without and with correction for confounders, with spectroscopy (MRS) as the reference standard. Materials and Methods: Fifty patients (23 women, mean age 56.6 ± 13.2 years) with fatty liver disease were enrolled. Patients underwent T2-corrected single-voxel MRS and a three-echo chemical shift-encoded gradient echo (GRE) sequence at 3.0T. MRI fat fraction (FF) was calculated without and with T2* and T1 correction and multispectral modeling of fat and compared with MRS-FF using linear regression. Results: The spectroscopic range of liver fat was 0.11%-38.7%. Excellent correlation between MRS-FF and MRI-FF was observed when using T2* correction (R2 = 0.96). With use of T2* correction alone, the slope was significantly different from 1 (1.16 ± 0.03, P < 0.001) and the intercept was different from 0 (1.14% ± 0.50%, P = 0.023). The slope was also significantly different from 1.0 when no T1 correction was used (P = 0.001). When T2*, T1, and the spectral complexity of fat were addressed, the results showed equivalence between fat quantification using MRI and MRS (slope: 1.02 ± 0.03, P = 0.528; intercept: 0.26% ± 0.46%, P = 0.572). Conclusion: Complex three-echo chemical shift-encoded MRI is equivalent to MRS for quantifying liver fat, but only with correction for T2* decay and T1 recovery and use of spectral modeling of fat. This is necessary because T2* decay, T1 recovery, and the multispectral complexity of fat are processes which may otherwise bias the measurements. PMID:24123655
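A minimal sketch of a triple-echo fat-fraction calculation with T2* correction, in the spirit of the correction described here (without the T1 correction and multispectral fat modeling that the study shows are also needed): two in-phase echoes estimate T2*, all signals are extrapolated back to TE = 0, and a Dixon-style water/fat separation follows. Echo ordering, times, and names are assumptions.

```python
import numpy as np

def triple_echo_ff(s_op, s_ip1, s_ip2, te_op, te_ip1, te_ip2):
    """Triple-echo fat fraction with T2* correction (sketch).

    Two in-phase echoes (te_ip1 < te_ip2) give T2*; signals are then
    extrapolated to TE = 0 before the Dixon water/fat calculation.
    """
    t2star = (te_ip2 - te_ip1) / np.log(s_ip1 / s_ip2)
    ip0 = s_ip1 * np.exp(te_ip1 / t2star)   # in-phase signal at TE = 0
    op0 = s_op * np.exp(te_op / t2star)     # opposed-phase signal at TE = 0
    water = (ip0 + op0) / 2.0
    fat = (ip0 - op0) / 2.0
    return 100.0 * fat / (water + fat)

# Example with hypothetical echo times (ms) and signal magnitudes
print(triple_echo_ff(s_op=80.0, s_ip1=120.0, s_ip2=100.0,
                     te_op=1.15, te_ip1=2.3, te_ip2=4.6))
```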
Gabrani-Juma, Hanif; Clarkin, Owen J; Pourmoghaddas, Amir; Driscoll, Brandon; Wells, R Glenn; deKemp, Robert A; Klein, Ran
2017-01-01
Simple and robust techniques are lacking to assess performance of flow quantification using dynamic imaging. We therefore developed a method to qualify flow quantification technologies using a physical compartment exchange phantom and image analysis tool. We validate and demonstrate utility of this method using dynamic PET and SPECT. Dynamic image sequences were acquired on two PET/CT and a cardiac dedicated SPECT (with and without attenuation and scatter corrections) systems. A two-compartment exchange model was fit to image derived time-activity curves to quantify flow rates. Flowmeter measured flow rates (20-300 mL/min) were set prior to imaging and were used as reference truth to which image derived flow rates were compared. Both PET cameras had excellent agreement with truth ( [Formula: see text]). High-end PET had no significant bias (p > 0.05) while lower-end PET had minimal slope bias (wash-in and wash-out slopes were 1.02 and 1.01) but no significant reduction in precision relative to high-end PET (<15% vs. <14% limits of agreement, p > 0.3). SPECT (without scatter and attenuation corrections) slope biases were noted (0.85 and 1.32) and attributed to camera saturation in early time frames. Analysis of wash-out rates from non-saturated, late time frames resulted in excellent agreement with truth ( [Formula: see text], slope = 0.97). Attenuation and scatter corrections did not significantly impact SPECT performance. The proposed phantom, software and quality assurance paradigm can be used to qualify imaging instrumentation and protocols for quantification of kinetic rate parameters using dynamic imaging.
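A minimal sketch of fitting a two-compartment exchange model to an image-derived time-activity curve, assuming uniform time sampling and a known input curve; scipy's curve_fit recovers the wash-in and wash-out rate constants that the phantom study compares against flowmeter truth. Names and starting values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def tissue_tac(t, k_in, k_out, c_in):
    """One-tissue exchange model: dC/dt = k_in*C_in(t) - k_out*C(t).

    Solved by convolution: C(t) = k_in * int_0^t C_in(s) exp(-k_out(t-s)) ds.
    Assumes uniformly sampled time points t.
    """
    dt = t[1] - t[0]
    kernel = np.exp(-k_out * t)
    return k_in * np.convolve(c_in, kernel)[: len(t)] * dt

def fit_exchange(t, c_in, c_measured):
    """Estimate (wash-in, wash-out) rates from a measured tissue curve."""
    model = lambda t_, k_in, k_out: tissue_tac(t_, k_in, k_out, c_in)
    popt, _ = curve_fit(model, t, c_measured, p0=(0.5, 0.5), bounds=(0, 10))
    return popt
```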
Leynes, Andrew P; Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep S; Shanbhag, Dattesh D; Seo, Youngho; Hope, Thomas A; Larson, Peder E Z
2018-05-01
Accurate quantification of uptake on PET images depends on accurate attenuation correction in reconstruction. Current MR-based attenuation correction methods for body PET use a fat and water map derived from a 2-echo Dixon MRI sequence in which bone is neglected. Ultrashort-echo-time or zero-echo-time (ZTE) pulse sequences can capture bone information. We propose the use of patient-specific multiparametric MRI consisting of Dixon MRI and proton-density-weighted ZTE MRI to directly synthesize pseudo-CT images with a deep learning model: we call this method ZTE and Dixon deep pseudo-CT (ZeDD CT). Methods: Twenty-six patients were scanned using an integrated 3-T time-of-flight PET/MRI system. Helical CT images of the patients were acquired separately. A deep convolutional neural network was trained to transform ZTE and Dixon MR images into pseudo-CT images. Ten patients were used for model training, and 16 patients were used for evaluation. Bone and soft-tissue lesions were identified, and the SUVmax was measured. The root-mean-squared error (RMSE) was used to compare the MR-based attenuation correction with the ground-truth CT attenuation correction. Results: In total, 30 bone lesions and 60 soft-tissue lesions were evaluated. The RMSE in PET quantification was reduced by a factor of 4 for bone lesions (10.24% for Dixon PET and 2.68% for ZeDD PET) and by a factor of 1.5 for soft-tissue lesions (6.24% for Dixon PET and 4.07% for ZeDD PET). Conclusion: ZeDD CT produces natural-looking and quantitatively accurate pseudo-CT images and reduces error in pelvic PET/MRI attenuation correction compared with standard methods. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
Kim, Song Soo; Seo, Joon Beom; Kim, Namkug; Chae, Eun Jin; Lee, Young Kyung; Oh, Yeon Mok; Lee, Sang Do
2014-01-01
To determine the improvement of emphysema quantification with density correction and to determine the optimal site to use for air density correction on volumetric computed tomography (CT). Seventy-eight CT scans of COPD patients (GOLD II-IV, smoking history 39.2±25.3 pack-years) were obtained from several single-vendor 16-MDCT scanners. After density measurement of the aorta, tracheal air, and external air, volumetric CT density correction was conducted (two reference values: air, -1,000 HU; blood, +50 HU). Using in-house software, the emphysema index (EI) and mean lung density (MLD) were calculated. Differences in air densities, MLD and EI before and after density correction were evaluated (paired t-test). Correlations between those parameters and FEV1 and FEV1/FVC were compared (age- and sex-adjusted partial correlation analysis). Measured densities (HU) of tracheal and external air differed significantly (-990 ± 14, -1016 ± 9, P<0.001). MLD and EI on the original CT data and after density correction using tracheal and external air also differed significantly (MLD: -874.9 ± 27.6 vs. -882.3 ± 24.9 vs. -860.5 ± 26.6; EI: 16.8 ± 13.4 vs. 21.1 ± 14.5 vs. 9.7 ± 10.5, respectively, P<0.001). The correlation coefficients between CT quantification indices and FEV1 and FEV1/FVC increased after density correction. The tracheal air correction showed better results than the external air correction. Density correction of volumetric CT data can improve the correlation between emphysema quantification and PFT results. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
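A minimal sketch of the two-point density correction and the emphysema index: measured air and blood values are linearly mapped to their reference values (-1,000 HU and +50 HU), after which the low-attenuation fraction is recomputed. The -950 HU threshold and all numbers are illustrative assumptions, not values from the study.

```python
import numpy as np

def density_correct(hu, air_measured, blood_measured,
                    air_ref=-1000.0, blood_ref=50.0):
    """Linearly rescale CT numbers so that measured air and blood match
    their reference values."""
    scale = (blood_ref - air_ref) / (blood_measured - air_measured)
    return (hu - air_measured) * scale + air_ref

def emphysema_index(lung_hu, threshold=-950.0):
    """Percentage of lung voxels below the emphysema threshold."""
    return 100.0 * np.mean(lung_hu < threshold)

# Example with tracheal air measured at -990 HU and aortic blood at +45 HU
lung = np.random.normal(-870.0, 40.0, size=100000)   # synthetic lung voxels
corrected = density_correct(lung, air_measured=-990.0, blood_measured=45.0)
print(emphysema_index(lung), emphysema_index(corrected))
```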
O' Doherty, Jim; Schleyer, Paul
2017-12-01
Simultaneous cardiac perfusion studies are an increasing trend in PET-MR imaging. During dynamic PET imaging, the introduction of gadolinium-based MR contrast agents (GBCA) at high concentrations during a dual injection of GBCA and PET radiotracer may cause increased attenuation of the PET signal, and thus errors in quantification of PET images. We thus aimed to calculate the change in linear attenuation coefficient (LAC) of a mixture of PET radiotracer and increasing concentrations of GBCA in solution and, furthermore, to investigate whether this change in LAC produced a measurable effect on the image-based PET activity concentration when attenuation corrected by three different AC strategies. We performed simultaneous PET-MR imaging of a phantom in a static scenario using a fixed activity of 40 MBq [18F]-NaF, water, and an increasing GBCA concentration from 0 to 66 mM (based on an assumed maximum possible concentration of GBCA in the left ventricle in a clinical study). This simulated a range of clinical concentrations of GBCA. We investigated two methods to calculate the LAC of the solution mixture at 511 keV: (1) a mathematical mixture rule and (2) CT imaging of each concentration step and subsequent conversion to LAC at 511 keV. This comparison showed that the ranges of LAC produced by both methods are equivalent, with an increase in LAC of the mixed solution of approximately 2% over the range of 0-66 mM. We then applied three different attenuation correction methods to the PET data: (1) each PET scan at a specific millimolar concentration of GBCA corrected by its corresponding CT scan, (2) each PET scan corrected by a CT scan with no GBCA present (i.e., at 0 mM GBCA), and (3) a manually generated attenuation map, whereby all CT voxels in the phantom at 0 mM were replaced by LAC = 0.1 cm-1. All attenuation correction methods (1-3) were accurate to the true measured activity concentration within 5%, and there were no trends in image-based activity concentrations upon increasing the GBCA concentration of the solution. The presence of a high GBCA concentration (representing a worst-case scenario in dynamic cardiac studies) in solution with PET radiotracer produces a minimal effect on attenuation-corrected PET quantification.
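The mathematical mixture rule referred to above can be sketched as a volume-weighted sum of component linear attenuation coefficients. The water LAC at 511 keV (~0.096 cm-1) is a standard value; the GBCA component value below is a hypothetical placeholder chosen only to illustrate the roughly 2% effect reported.

```python
def mixture_lac(volume_fractions, lacs):
    """Volume-weighted mixture rule for the linear attenuation coefficient.

    volume_fractions: fraction of each component, summing to 1
    lacs:             LAC of each pure component at 511 keV, in 1/cm
    """
    assert abs(sum(volume_fractions) - 1.0) < 1e-6
    return sum(f * mu for f, mu in zip(volume_fractions, lacs))

# Water at 511 keV is ~0.096 1/cm; a small admixture of concentrated GBCA
# (hypothetical LAC of 0.30 1/cm) changes the mixture LAC only slightly.
print(mixture_lac([0.99, 0.01], [0.096, 0.30]))   # ~0.098, about +2%
```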
Dieringer, Matthias A.; Deimling, Michael; Santoro, Davide; Wuerfel, Jens; Madai, Vince I.; Sobesky, Jan; von Knobelsdorff-Brenkenhoff, Florian; Schulz-Menger, Jeanette; Niendorf, Thoralf
2014-01-01
Introduction: Visual but subjective reading of longitudinal relaxation time (T1) weighted magnetic resonance images is commonly used for the detection of brain pathologies. For this non-quantitative measure, diagnostic quality depends on hardware configuration, imaging parameters, radio frequency transmission field (B1+) uniformity, as well as observer experience. Parametric quantification of the tissue T1 relaxation parameter offsets the propensity for these effects, but is typically time consuming. For this reason, this study examines the feasibility of rapid 2D T1 quantification using a variable flip angle (VFA) approach at magnetic field strengths of 1.5 Tesla, 3 Tesla, and 7 Tesla. These efforts include validation in phantom experiments and application for brain T1 mapping. Methods: T1 quantification included simulations of the Bloch equations to correct for slice profile imperfections, and a correction for B1+. Fast gradient echo acquisitions were conducted using three adjusted flip angles for the proposed T1 quantification approach, which was benchmarked against slice profile uncorrected 2D VFA and an inversion-recovery spin-echo based reference method. Brain T1 mapping was performed in six healthy subjects, one multiple sclerosis patient, and one stroke patient. Results: Phantom experiments showed a mean T1 estimation error of (-63±1.5)% for slice profile uncorrected 2D VFA and (0.2±1.4)% for the proposed approach compared to the reference method. Scan time for single slice T1 mapping including B1+ mapping could be reduced to 5 seconds using an in-plane resolution of (2×2) mm2, which equals a scan time reduction of more than 99% compared to the reference method. Conclusion: Our results demonstrate that rapid 2D T1 quantification using a variable flip angle approach is feasible at 1.5T/3T/7T. It represents a valuable alternative for rapid T1 mapping due to the gain in speed versus conventional approaches. This progress may serve to enhance the capabilities of parametric MR based lesion detection and brain tissue characterization. PMID:24621588
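The VFA approach rests on the linearized spoiled-gradient-echo signal equation; below is a minimal sketch with a B1+ scale factor applied to the nominal flip angles. The Bloch-simulation-based slice-profile correction used in the study is not included, and the synthetic check at the bottom is purely illustrative.

```python
import numpy as np

def vfa_t1(signals, flip_deg, tr_ms, b1_scale=1.0):
    """Variable-flip-angle T1 estimate via the linearized SPGR equation.

    S/sin(a) = E1 * S/tan(a) + M0*(1 - E1),  with E1 = exp(-TR/T1).
    b1_scale multiplies the nominal flip angles to correct for B1+ errors.
    """
    alpha = np.deg2rad(flip_deg) * b1_scale
    y = signals / np.sin(alpha)
    x = signals / np.tan(alpha)
    e1, _ = np.polyfit(x, y, 1)      # slope of the linear fit is E1
    return -tr_ms / np.log(e1)

# Synthetic check: T1 = 1000 ms, TR = 10 ms, three flip angles
t1_true, tr = 1000.0, 10.0
a = np.deg2rad([4.0, 10.0, 18.0])
e1 = np.exp(-tr / t1_true)
s = (1 - e1) * np.sin(a) / (1 - e1 * np.cos(a))
print(vfa_t1(s, [4.0, 10.0, 18.0], tr))   # ~1000 ms
```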
LCC demons with divergence term for liver MRI motion correction
NASA Astrophysics Data System (ADS)
Oh, Jihun; Martin, Diego; Skrinjar, Oskar
2010-03-01
Contrast-enhanced liver MR image sequences acquired at multiple times before and after contrast administration have been shown to be critically important for the diagnosis and monitoring of liver tumors and may be used for the quantification of liver inflammation and fibrosis. However, over multiple acquisitions, the liver moves and deforms due to patient and respiratory motion. In order to analyze contrast agent uptake, one first needs to correct for liver motion. In this paper we present a method for the motion correction of dynamic contrast-enhanced liver MR images. For this purpose we use a modified version of the Local Correlation Coefficient (LCC) Demons non-rigid registration method. Since the liver is nearly incompressible, its displacement field has small divergence. For this reason we add a divergence term to the energy that is minimized in the LCC Demons method. We applied the method to four sequences of contrast-enhanced liver MR images. Each sequence had a pre-contrast scan and seven post-contrast scans. For each post-contrast scan we corrected for the liver motion relative to the pre-contrast scan. Quantitative evaluation showed that the proposed method improved the liver alignment relative to the non-corrected and translation-corrected scans, and visual inspection showed no visible misalignment between the motion-corrected contrast-enhanced scans and the pre-contrast scan.
Dapic, Irena; Kobetic, Renata; Brkljacic, Lidija; Kezic, Sanja; Jakasa, Ivone
2018-02-01
The free fatty acids (FFAs) are one of the major components of the lipids in the stratum corneum (SC), the uppermost layer of the skin. Relative composition of FFAs has been proposed as a biomarker of the skin barrier status in patients with atopic dermatitis (AD). Here, we developed an LC-ESI-MS/MS method for simultaneous quantification of a range of FFAs with long and very long chain length in the SC collected by adhesive tape (D-Squame). The method, based on derivatization with 2-bromo-1-methylpyridinium iodide and 3-carbinol-1-methylpyridinium iodide, allowed highly sensitive detection and quantification of FFAs using multiple reaction monitoring. For the quantification, we applied a surrogate analyte approach and internal standardization using isotope labeled derivatives of FFAs. Adhesive tapes showed the presence of several FFAs, which are also present in the SC, a problem encountered in previous studies. Therefore, the levels of FFAs in the SC were corrected using C12:0, which was present on the adhesive tape, but not detected in the SC. The method was applied to SC samples from patients with atopic dermatitis and healthy subjects. Quantification using multiple reaction monitoring allowed sufficient sensitivity to analyze FFAs of chain lengths C16-C28 in the SC collected on only one tape strip. Copyright © 2017 John Wiley & Sons, Ltd.
Quantification of fibre polymerization through Fourier space image analysis
Nekouzadeh, Ali; Genin, Guy M.
2011-01-01
Quantification of changes in the total length of randomly oriented and possibly curved lines appearing in an image is a necessity in a wide variety of biological applications. Here, we present an automated approach based upon Fourier space analysis. Scaled, band-pass filtered power spectral densities of greyscale images are integrated to provide a quantitative measurement of the total length of lines of a particular range of thicknesses appearing in an image. A procedure is presented to correct for changes in image intensity. The method is most accurate for two-dimensional processes with fibres that do not occlude one another. PMID:24959096
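A minimal sketch of the described Fourier-space measurement: integrate the band-pass-filtered power spectral density over the annulus of spatial frequencies corresponding to the fibre thickness range of interest. The normalization and the crude mean subtraction below stand in for the paper's scaling and intensity-correction procedures.

```python
import numpy as np

def fibre_length_metric(image, r_min, r_max):
    """Band-pass-integrated power spectral density (sketch).

    Integrates the 2D power spectrum over an annulus of spatial
    frequencies; the result scales with the total length of lines of the
    corresponding thickness range appearing in the image.
    """
    img = image - image.mean()                 # crude intensity correction
    psd = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(yy - ny / 2, xx - nx / 2)     # radial frequency index
    band = (r >= r_min) & (r <= r_max)
    return psd[band].sum() / psd.sum()         # normalized band energy
```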
Semi-automatic knee cartilage segmentation
NASA Astrophysics Data System (ADS)
Dam, Erik B.; Folkesson, Jenny; Pettersen, Paola C.; Christiansen, Claus
2006-03-01
Osteoarthritis (OA) is a very common age-related cause of pain and reduced range of motion. A central effect of OA is wear-down of the articular cartilage that otherwise ensures smooth joint motion. Quantification of cartilage breakdown is central in monitoring disease progression, and therefore cartilage segmentation is required. Recent advances allow automatic cartilage segmentation with high accuracy in most cases. However, the automatic methods still fail in some problematic cases. For clinical studies, even if a few failing cases are averaged out in the overall results, they reduce the mean accuracy and precision and thereby necessitate larger/longer studies. Since the severe OA cases are often the most problematic for the automatic methods, there is even a risk that the quantification will introduce a bias in the results. Therefore, interactive inspection and correction of these problematic cases is desirable. For diagnosis in individuals, this is even more crucial, since the diagnosis will otherwise simply fail. We introduce and evaluate a semi-automatic cartilage segmentation method combining an automatic pre-segmentation with an interactive step that allows inspection and correction. The automatic step consists of voxel classification based on supervised learning. The interactive step combines a watershed transformation of the original scan with the posterior probability map from the classification step at sub-voxel precision. We evaluate the method for the task of segmenting the tibial cartilage sheet from low-field magnetic resonance imaging (MRI) of knees. The evaluation shows that the combined method allows accurate and highly reproducible correction of the segmentation of even the worst cases in approximately ten minutes of interaction.
Rosing, H.; Hillebrand, M. J. X.; Blesson, S.; Mengesha, B.; Diro, E.; Hailu, A.; Schellens, J. H. M.; Beijnen, J. H.
2016-01-01
To facilitate future pharmacokinetic studies of combination treatments against leishmaniasis in remote regions in which the disease is endemic, a simple cheap sampling method is required for miltefosine quantification. The aims of this study were to validate a liquid chromatography-tandem mass spectrometry method to quantify miltefosine in dried blood spot (DBS) samples and to validate its use with Ethiopian patients with visceral leishmaniasis (VL). Since hematocrit (Ht) levels are typically severely decreased in VL patients, returning to normal during treatment, the method was evaluated over a range of clinically relevant Ht values. Miltefosine was extracted from DBS samples using a simple method of pretreatment with methanol, resulting in >97% recovery. The method was validated over a calibration range of 10 to 2,000 ng/ml, and accuracy and precision were within ±11.2% and ≤7.0% (≤19.1% at the lower limit of quantification), respectively. The method was accurate and precise for blood spot volumes between 10 and 30 μl and for Ht levels of 20 to 35%, although a linear effect of Ht levels on miltefosine quantification was observed in the bioanalytical validation. DBS samples were stable for at least 162 days at 37°C. Clinical validation of the method using paired DBS and plasma samples from 16 VL patients showed a median observed DBS/plasma miltefosine concentration ratio of 0.99, with good correlation (Pearson's r = 0.946). Correcting for patient-specific Ht levels did not further improve the concordance between the sampling methods. This successfully validated method to quantify miltefosine in DBS samples was demonstrated to be a valid and practical alternative to venous blood sampling that can be applied in future miltefosine pharmacokinetic studies with leishmaniasis patients, without Ht correction. PMID:26787691
Hayashi, Tatsuya; Saitoh, Satoshi; Takahashi, Junji; Tsuji, Yoshinori; Ikeda, Kenji; Kobayashi, Masahiro; Kawamura, Yusuke; Fujii, Takeshi; Inoue, Masafumi; Miyati, Tosiaki; Kumada, Hiromitsu
2017-04-01
The two-point Dixon method for magnetic resonance imaging (MRI) is commonly used to non-invasively measure fat deposition in the liver. The aim of the present study was to assess the usefulness of the MRI-fat fraction (MRI-FF) using the two-point Dixon method based on the non-alcoholic fatty liver disease activity score. This retrospective study included 106 patients who underwent liver MRI and MR spectroscopy, and 201 patients who underwent liver MRI and histological assessment. The relationship between MRI-FF and MR spectroscopy-fat fraction was used to estimate the corrected MRI-FF, accounting for the multiple spectral peaks of hepatic fat. Then, a color FF map was generated with the corrected MRI-FF based on the non-alcoholic fatty liver disease activity score. We defined FF variability as the standard deviation of FF in regions of interest. Uniformity of hepatic fat was visually graded on a three-point scale using both gray-scale and color FF maps. Confounding effects of histology (iron, inflammation and fibrosis) on corrected MRI-FF were assessed by multiple linear regression. The linear correlations between MRI-FF and MR spectroscopy-fat fraction, and between corrected MRI-FF and histological steatosis, were strong (R2 = 0.90 and R2 = 0.88, respectively). Liver fat variability significantly increased with visual fat uniformity grade using both of the maps (ρ = 0.67-0.69, both P < 0.001). Hepatic iron, inflammation and fibrosis had no significant confounding effects on the corrected MRI-FF (all P > 0.05). The two-point Dixon method and the gray-scale or color FF maps based on the non-alcoholic fatty liver disease activity score were useful for fat quantification in the liver of patients without severe iron deposition. © 2016 The Japan Society of Hepatology.
NASA Astrophysics Data System (ADS)
Hubert, Maxime; Pacureanu, Alexandra; Guilloud, Cyril; Yang, Yang; da Silva, Julio C.; Laurencin, Jerome; Lefebvre-Joud, Florence; Cloetens, Peter
2018-05-01
In X-ray tomography, ring-shaped artifacts present in the reconstructed slices are an inherent problem degrading the global image quality and hindering the extraction of quantitative information. To overcome this issue, we propose a strategy for suppression of ring artifacts originating from the coherent mixing of the incident wave and the object. We discuss the limits of validity of the empty beam correction in the framework of a simple formalism. We then deduce a correction method based on two-dimensional random sample displacement, with minimal cost in terms of spatial resolution, acquisition, and processing time. The method is demonstrated on bone tissue and on a hydrogen electrode of a ceramic-metallic solid oxide cell. Compared to the standard empty beam correction, we obtain high quality nanotomography images revealing detailed object features. The resulting absence of artifacts allows straightforward segmentation and posterior quantification of the data.
Díaz, Gloria; González, Fabio A; Romero, Eduardo
2009-04-01
Visual quantification of parasitemia in thin blood films is a very tedious, subjective and time-consuming task. This study presents an original method for quantification and classification of erythrocytes in stained thin blood films infected with Plasmodium falciparum. The proposed approach is composed of three main phases: a preprocessing step, which corrects luminance differences; a segmentation step, which uses the normalized RGB color space to classify pixels as either erythrocyte or background and is followed by an Inclusion-Tree representation that structures the pixel information into objects, from which erythrocytes are found; and a two-step classification process that identifies infected erythrocytes and differentiates the infection stage using a trained bank of classifiers. Additionally, user intervention is allowed when the approach cannot make a proper decision. Four hundred fifty malaria images were used for training and evaluating the method. Automatic identification of infected erythrocytes showed a specificity of 99.7% and a sensitivity of 94%. The infection stage was determined with an average sensitivity of 78.8% and an average specificity of 91.2%.
Tuerk, Andreas; Wiktorin, Gregor; Güler, Serhat
2017-05-01
Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (read "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model, and thereby neutralize, the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization, resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq, state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression, with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We further observe improved repeatability across laboratory sites, with a relative increase in R2 between 8% and 44% and reduced standard deviation.
Issues in quantification of registered respiratory gated PET/CT in the lung
NASA Astrophysics Data System (ADS)
Cuplov, Vesna; Holman, Beverley F.; McClelland, Jamie; Modat, Marc; Hutton, Brian F.; Thielemans, Kris
2018-01-01
PET/CT quantification of lung tissue is limited by several difficulties: the lung density and local volume changes during respiration, the anatomical mismatch between PET and CT, and the relative contributions of tissue, air and blood to the PET signal (the tissue fraction effect). Air fraction correction (AFC) has been shown to improve PET image quantification in the lungs. Methods to correct for the movement and anatomical mismatch involve respiratory gating and image registration techniques. While conventional registration methods only account for spatial mismatch, the Jacobian determinant of the deformable registration transformation field can be used to estimate local volume changes and could therefore potentially be used to correct (i.e. Jacobian Correction, JC) the PET signal for changes in concentration due to local volume changes. This work aims to investigate the relationship between variations in the lung due to respiration, specifically density, tracer concentration and local volume changes. In particular, we study the effect of AFC and JC on PET quantitation after registration of respiratory gated PET/CT patient data. Six patients suffering from lung cancer with solitary pulmonary nodules underwent 18F-FDG PET/cine-CT. The PET data were gated into six respiratory gates using displacement gating based on a real-time position management (RPM) signal and reconstructed with matched gated CT. The PET tracer concentration and tissue density were extracted from registered gated PET and CT images before and after corrections (AFC or JC) and compared to the values from the reference images. Before correction, we observed a linear correlation between the PET tracer concentration values and density. Across all gates and patients, the maximum relative change in PET tracer concentration before (after) AFC was found to be 16.2% (4.1%), and the maximum relative change in tissue density and PET tracer concentration before (after) JC was found to be 17.1% (5.5%) and 16.2% (6.8%) respectively. Overall our results show that both AFC and JC largely explain the observed changes in PET tracer activity over the respiratory cycle. We also speculate that a second order effect is related to change in fluid content, but this needs further investigation. Consequently, either AFC or JC is recommended when combining lung PET images from different gates to reduce noise.
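A minimal sketch of air fraction correction as described: the per-voxel air fraction is estimated from CT density on a linear scale between air (-1000 HU) and soft tissue (0 HU), and the PET concentration is divided by the remaining tissue fraction. The clamp value is an arbitrary safeguard against division blow-up, not a parameter from the paper.

```python
import numpy as np

def air_fraction_correction(pet, ct_hu):
    """Air fraction correction of lung PET uptake (sketch).

    The fractional air volume per voxel is estimated from CT density,
    assuming a linear scale between air (-1000 HU) and tissue (0 HU).
    Dividing by the tissue fraction removes the dilution of the signal
    by intravoxel air.
    """
    air_fraction = np.clip(-ct_hu / 1000.0, 0.0, 1.0)
    tissue_fraction = np.maximum(1.0 - air_fraction, 0.05)  # avoid blow-up
    return pet / tissue_fraction
```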
Assessment of spill flow emissions on the basis of measured precipitation and waste water data
NASA Astrophysics Data System (ADS)
Hochedlinger, Martin; Gruber, Günter; Kainz, Harald
2005-09-01
Combined sewer overflows (CSOs) are substantial contributors to the total emissions into surface water bodies. The emitted pollution results from dry-weather waste water loads, surface runoff pollution and from the remobilisation of sewer deposits and sewer slime during storm events. One way to estimate overflow loads is to calculate them with load quantification models. Input data for these models are pollution concentrations, e.g. Total Chemical Oxygen Demand (COD tot), Total Suspended Solids (TSS) or Soluble Chemical Oxygen Demand (COD sol), rainfall series and flow measurements for model calibration and validation. Reliable input data are essential for modelling overflow loads; otherwise, poor results are inevitable. In this paper, corrections of precipitation measurements and of online sewer measurements are presented to satisfy the load quantification model requirements described above. The main focus is on tipping bucket gauge measurements and their correction. The results demonstrate the importance of these corrections, given their effect on load quantification modelling, and show the difference between corrected and uncorrected data for storm events with high rain intensities.
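Tipping-bucket gauges systematically under-register high rain intensities because water is lost while the bucket tips; a common dynamic-calibration correction is a power law fitted per gauge. A minimal sketch with placeholder coefficients that would have to be determined by laboratory calibration of the specific instrument:

```python
def correct_tipping_bucket(intensity_mm_h, a=1.09, b=1.05):
    """Dynamic-calibration correction for tipping-bucket rain intensity.

    A power-law fit I_true = a * I_measured**b is a common correction;
    the coefficients here are placeholders, not values from the paper.
    """
    return a * intensity_mm_h ** b

# A 60 mm/h measured burst corresponds to a somewhat higher true intensity
print(correct_tipping_bucket(60.0))
```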
Bezrukov, Ilja; Schmidt, Holger; Gatidis, Sergios; Mantlik, Frédéric; Schäfer, Jürgen F; Schwenzer, Nina; Pichler, Bernd J
2015-07-01
Pediatric imaging is regarded as a key application for combined PET/MR imaging systems. Because existing MR-based attenuation-correction methods were not designed specifically for pediatric patients, we assessed the impact of 2 potentially influential factors: inter- and intrapatient variability of attenuation coefficients and anatomic variability. Furthermore, we evaluated the quantification accuracy of 3 methods for MR-based attenuation correction without (SEGbase) and with bone prediction using an adult and a pediatric atlas (SEGwBONEad and SEGwBONEpe, respectively) on PET data of pediatric patients. The variability of attenuation coefficients between and within pediatric (5-17 y, n = 17) and adult (27-66 y, n = 16) patient collectives was assessed on volumes of interest (VOIs) in CT datasets for different tissue types. Anatomic variability was assessed on SEGwBONEad/pe attenuation maps by computing mean differences to CT-based attenuation maps for regions of bone tissue, lungs, and soft tissue. PET quantification was evaluated on VOIs with physiologic uptake and on 80% isocontour VOIs with elevated uptake in the thorax and abdomen/pelvis. Inter- and intrapatient variability of the bias was assessed for each VOI group and method. Statistically significant differences in mean VOI Hounsfield unit values and linear attenuation coefficients between adult and pediatric collectives were found in the lungs and femur. The prediction of attenuation maps using the pediatric atlas showed a reduced error in bone tissue and better delineation of bone structure. Evaluation of PET quantification accuracy showed statistically significant mean errors in mean standardized uptake values of -14% ± 5% and -23% ± 6% in bone marrow and femur-adjacent VOIs with physiologic uptake for SEGbase, which could be reduced to 0% ± 4% and -1% ± 5% using SEGwBONEpe attenuation maps. Bias in soft-tissue VOIs was less than 5% for all methods. Lung VOIs showed high SDs in the range of 15% for all methods. For VOIs with elevated uptake, mean and SD were less than 5% except in the thorax. The use of a dedicated atlas for the pediatric patient collective resulted in improved attenuation map prediction in osseous regions and reduced interpatient bias variation in femur-adjacent VOIs. For the lungs, in which intrapatient variation was higher for the pediatric collective, a patient- or group-specific attenuation coefficient might improve attenuation map accuracy. Mean errors of -14% and -23% in bone marrow and femur-adjacent VOIs can affect PET quantification in these regions when bone tissue is ignored. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
NASA Astrophysics Data System (ADS)
Schramm, G.; Maus, J.; Hofheinz, F.; Petr, J.; Lougovski, A.; Beuthien-Baumann, B.; Platzek, I.; van den Hoff, J.
2014-06-01
The aim of this paper is to describe a new automatic method for compensation of metal-implant-induced segmentation errors in MR-based attenuation maps (MRMaps) and to evaluate the quantitative influence of those artifacts on the reconstructed PET activity concentration. The developed method uses a PET-based delineation of the patient contour to compensate for metal-implant-caused signal voids in the MR scan that is segmented for PET attenuation correction. PET emission data of 13 patients with metal implants examined in a Philips Ingenuity PET/MR were reconstructed with the vendor-provided method for attenuation correction (MRMaporig, PETorig) and additionally with a method for attenuation correction (MRMapcor, PETcor) developed by our group. MRMaps produced by both methods were visually inspected for segmentation errors. The segmentation errors in MRMaporig were classified into four classes (L1 and L2 artifacts inside the lung and B1 and B2 artifacts inside the remaining body, depending on the assigned attenuation coefficients). The average relative SUV differences (ε_rel^av) between PETorig and PETcor of all regions showing wrong attenuation coefficients in MRMaporig were calculated. Additionally, relative SUVmean differences (ε_rel) of tracer accumulations in hot focal structures inside or in the vicinity of these regions were evaluated. MRMaporig showed erroneous attenuation coefficients inside the regions affected by metal artifacts and inside the patients' lung in all 13 cases. In MRMapcor, all regions with metal artifacts, except for the sternum, were filled with the soft-tissue attenuation coefficient and the lung was correctly segmented in all patients. MRMapcor only showed small residual segmentation errors in eight patients. ε_rel^av (mean ± standard deviation) was (-56 ± 3)% for B1, (-43 ± 4)% for B2, (21 ± 18)% for L1, and (120 ± 47)% for L2 regions. ε_rel (mean ± standard deviation) of hot focal structures was (-52 ± 12)% in B1, (-45 ± 13)% in B2, (19 ± 19)% in L1, and (51 ± 31)% in L2 regions. Consequently, metal-implant-induced artifacts severely disturb MR-based attenuation correction and SUV quantification in PET/MR. The developed algorithm is able to compensate for these artifacts and improves SUV quantification accuracy distinctly.
Porra, Luke; Swan, Hans; Ho, Chien
2015-08-01
Introduction: Acoustic Radiation Force Impulse (ARFI) quantification measures shear wave velocities (SWVs) within the liver. It is a reliable method for predicting the severity of liver fibrosis and has the potential to assess fibrosis in any part of the liver, but previous research has found ARFI quantification in the right lobe more accurate than in the left lobe. A lack of standardised applied transducer force when performing ARFI quantification in the left lobe of the liver may account for some of this inaccuracy. The research hypothesis of the present study was that an increase in applied transducer force would result in an increase in measured SWVs. Methods: ARFI quantification within the left lobe of the liver was performed in a group of healthy volunteers (n = 28). During each examination, each participant was subjected to ARFI quantification at six different levels of transducer force applied to the epigastric abdominal wall. Results: A repeated measures ANOVA test showed that ARFI quantification was significantly affected by applied transducer force (p = 0.002). Significant pairwise comparisons using Bonferroni correction for multiple comparisons showed that with an increase in applied transducer force, there was a decrease in SWVs. Conclusion: Applied transducer force has a significant effect on SWVs within the left lobe of the liver, and it may explain some of the less accurate and less reliable results in previous studies where transducer force was not taken into consideration. Future studies in the left lobe of the liver should take this into account and control for applied transducer force.
Kurzhunov, Dmitry; Borowiak, Robert; Reisert, Marco; Özen, Ali Caglar; Bock, Michael
2018-05-16
To provide a data post-processing method that corrects for partial volume effects (PVE) and fast T2* decay in dynamic 17O MRI for the mapping of cerebral metabolic rates of oxygen consumption (CMRO2). CMRO2 is altered in neurodegenerative diseases and tumors and can be measured after 17O gas inhalation using dynamic 17O MRI. CMRO2 quantification is difficult because of PVE. To correct for PVE, a direct estimation of the MR images (DIESIS) method is proposed and applied to 4 dynamic 17O MRI data sets of a healthy volunteer acquired on a 3T MRI system. With DIESIS, 17O MR signal time curves in selected regions were directly estimated based on parcellation of a coregistered 1H MPRAGE image. Profile likelihood analysis of the DIESIS method showed identifiability of CMRO2. In white matter (WM), DIESIS reduced CMRO2 from 0.97 ± 0.25 µmol/g tissue/min with Kaiser-Bessel gridding reconstruction to 0.85 ± 0.21 µmol/g tissue/min, whereas in gray matter (GM) it increased from 1.3 ± 0.31 µmol/g tissue/min to 1.86 ± 0.36 µmol/g tissue/min; both values are closer to the literature values from 15O-PET studies. DIESIS provided an increased separation of CMRO2 values in GM and WM brain regions and corrected for partial volume effects in 17O MRI inhalation experiments. DIESIS could also be applied to more heterogeneous tissues such as glioblastomas if subregions of the tumor can be represented as additional parcels. © 2018 International Society for Magnetic Resonance in Medicine.
Van, Anh T.; Weidlich, Dominik; Kooijman, Hendrick; Hock, Andreas; Rummeny, Ernst J.; Gersing, Alexandra; Kirschke, Jan S.; Karampinos, Dimitrios C.
2018-01-01
Purpose: To perform in vivo isotropic-resolution diffusion tensor imaging (DTI) of lumbosacral and sciatic nerves with a phase-navigated diffusion-prepared (DP) 3D turbo spin echo (TSE) acquisition and modified reconstruction incorporating intershot phase-error correction, and to investigate the improvement in image quality and diffusion quantification with the proposed phase correction. Methods: Phase-navigated DP 3D TSE included magnitude stabilizers to minimize motion and eddy-current effects on the signal magnitude. Phase navigation of motion-induced phase errors was introduced before readout in 3D TSE. DTI of lower back nerves was performed in vivo using 3D TSE and single-shot echo planar imaging (ss-EPI) in 13 subjects. Diffusion data were phase-corrected per kz plane with respect to T2-weighted data. The effect of motion-induced phase errors on DTI quantification was assessed for 3D TSE and compared with ss-EPI. Results: Non-phase-corrected 3D TSE resulted in artifacts in diffusion-weighted images and overestimated DTI parameters in the sciatic nerve (mean diffusivity [MD] = 2.06 ± 0.45). Phase correction of 3D TSE DTI data resulted in reductions in all DTI parameters (MD = 1.73 ± 0.26) of statistical significance (P ≤ 0.001) and in closer agreement with ss-EPI DTI parameters (MD = 1.62 ± 0.21). Conclusion: DP 3D TSE with phase correction allows distortion-free isotropic diffusion imaging of lower back nerves with robustness to motion-induced artifacts and DTI quantification errors. Magn Reson Med 80:609-618, 2018. © 2018 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. PMID:29380414
Schönbichler, S A; Bittner, L K H; Weiss, A K H; Griesser, U J; Pallua, J D; Huck, C W
2013-08-01
The aim of this study was to evaluate the ability of near-infrared chemical imaging (NIR-CI), near-infrared (NIR), Raman and attenuated-total-reflectance infrared (ATR-IR) spectroscopy to quantify three polymorphic forms (I, II, III) of furosemide in ternary powder mixtures. For this purpose, partial least-squares (PLS) regression models were developed, and different data preprocessing algorithms such as normalization, standard normal variate (SNV), multiplicative scatter correction (MSC) and 1st to 3rd derivatives were applied to reduce the influence of systematic disturbances. The performance of the methods was evaluated by comparison of the standard error of cross-validation (SECV), R2, and the ratio of performance to deviation (RPD). Limits of detection (LOD) and limits of quantification (LOQ) of all methods were determined. For NIR-CI, a SECVcorr-spec and a SECVsingle-pixel corrected were calculated to assess the loss of accuracy incurred by taking advantage of the spatial information. NIR-CI showed a SECVcorr-spec (SECVsingle-pixel corrected) of 2.82% (3.71%), 3.49% (4.65%), and 4.10% (5.06%) for forms I, II, and III. NIR had a SECV of 2.98%, 3.62%, and 2.75%, and Raman reached 3.25%, 3.08%, and 3.18%. The SECVs of the ATR-IR models were 7.46%, 7.18%, and 12.08%. This study proves that NIR-CI, NIR, and Raman are well suited to quantify forms I-III of furosemide in ternary mixtures. Because of the pressure-dependent conversion of form II to form I, ATR-IR was found to be less appropriate for an accurate quantification of the mixtures. In this study, the capability of NIR-CI for the quantification of polymorphic ternary mixtures was compared with conventional spectroscopic techniques for the first time. For this purpose, a new way of spectra selection was chosen, and two kinds of SECVs were calculated to achieve a better comparability of NIR-CI to NIR, Raman, and ATR-IR. Copyright © 2013 Elsevier B.V. All rights reserved.
Bezrukov, Ilja; Schmidt, Holger; Mantlik, Frédéric; Schwenzer, Nina; Brendle, Cornelia; Schölkopf, Bernhard; Pichler, Bernd J
2013-10-01
Hybrid PET/MR systems have recently entered clinical practice. Thus, the accuracy of MR-based attenuation correction in simultaneously acquired data can now be investigated. We assessed the accuracy of 4 methods of MR-based attenuation correction in lesions within soft tissue, bone, and MR susceptibility artifacts: 2 segmentation-based methods (SEG1, provided by the manufacturer, and SEG2, a method with atlas-based susceptibility artifact correction); an atlas- and pattern recognition-based method (AT&PR), which also used artifact correction; and a new method combining AT&PR and SEG2 (SEG2wBONE). Attenuation maps were calculated for the PET/MR datasets of 10 patients acquired on a whole-body PET/MR system, allowing for simultaneous acquisition of PET and MR data. Eighty percent iso-contour volumes of interest were placed on lesions in soft tissue (n = 21), in bone (n = 20), near bone (n = 19), and within or near MR susceptibility artifacts (n = 9). Relative mean volume-of-interest differences were calculated with CT-based attenuation correction as a reference. For soft-tissue lesions, none of the methods revealed a significant difference in PET standardized uptake value relative to CT-based attenuation correction (SEG1, -2.6% ± 5.8%; SEG2, -1.6% ± 4.9%; AT&PR, -4.7% ± 6.5%; SEG2wBONE, 0.2% ± 5.3%). For bone lesions, underestimation of PET standardized uptake values was found for all methods, with minimized error for the atlas-based approaches (SEG1, -16.1% ± 9.7%; SEG2, -11.0% ± 6.7%; AT&PR, -6.6% ± 5.0%; SEG2wBONE, -4.7% ± 4.4%). For lesions near bone, underestimations of lower magnitude were observed (SEG1, -12.0% ± 7.4%; SEG2, -9.2% ± 6.5%; AT&PR, -4.6% ± 7.8%; SEG2wBONE, -4.2% ± 6.2%). For lesions affected by MR susceptibility artifacts, quantification errors could be reduced using the atlas-based artifact correction (SEG1, -54.0% ± 38.4%; SEG2, -15.0% ± 12.2%; AT&PR, -4.1% ± 11.2%; SEG2wBONE, 0.6% ± 11.1%). For soft-tissue lesions, none of the evaluated methods showed statistically significant errors. For bone lesions, significant underestimations of -16% and -11% occurred for methods in which bone tissue was ignored (SEG1 and SEG2). In the present attenuation correction schemes, uncorrected MR susceptibility artifacts typically result in reduced attenuation values, potentially leading to highly reduced PET standardized uptake values, rendering lesions indistinguishable from background. While AT&PR and SEG2wBONE show accurate results in both soft tissue and bone, SEG2wBONE uses a two-step approach for tissue classification, which increases the robustness of prediction and can be applied retrospectively if more precision in bone areas is needed.
NASA Astrophysics Data System (ADS)
Wiemker, Rafael; Opfer, Roland; Bülow, Thomas; Rogalla, Patrik; Steinberg, Amnon; Dharaiya, Ekta; Subramanyan, Krishna
2007-03-01
Computer aided quantification of emphysema in high resolution CT data is based on identifying low attenuation areas below clinically determined Hounsfield thresholds. However, emphysema quantification is prone to error, since a gravity effect can influence the mean attenuation of healthy lung parenchyma by up to ±50 HU between ventral and dorsal lung areas. Comparing ultra-low-dose (7 mAs) and standard-dose (70 mAs) CT scans of each patient, we show that the ventrodorsal gravity effect is patient specific but reproducible. It can be measured and corrected in an unsupervised way using robust fitting of a linear function.
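A minimal sketch of the correction idea, assuming the gravity effect is modelled as a linear function of the ventrodorsal coordinate and fitted robustly (here with scikit-learn's HuberRegressor; the threshold and synthetic data are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)
y_vd = rng.uniform(0, 250, 5000)                    # ventrodorsal coordinate (mm)
hu = -850 + 0.2 * y_vd + rng.normal(0, 30, 5000)    # synthetic gravity gradient

fit = HuberRegressor().fit(y_vd.reshape(-1, 1), hu) # robust linear fit
trend = fit.predict(y_vd.reshape(-1, 1))
hu_corr = hu - (trend - trend.mean())               # remove gradient, keep mean

laa = np.mean(hu_corr < -950) * 100                 # % voxels below threshold
print(f"slope = {fit.coef_[0]:.3f} HU/mm, LAA%-950 = {laa:.1f}%")
```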
Demeke, Tigst; Dobnik, David
2018-07-01
The number of genetically modified organisms (GMOs) on the market is steadily increasing. Because of regulation of cultivation and trade of GMOs in several countries, there is pressure for their accurate detection and quantification. Today, DNA-based approaches are more popular for this purpose than protein-based methods, and real-time quantitative PCR (qPCR) is still the gold standard in GMO analytics. However, digital PCR (dPCR) offers several advantages over qPCR, making this new technique appealing also for GMO analysis. This critical review focuses on the use of dPCR for the purpose of GMO quantification and addresses parameters which are important for achieving accurate and reliable results, such as the quality and purity of DNA and reaction optimization. Three critical factors are explored and discussed in more depth: correct classification of partitions as positive, correctly determined partition volume, and dilution factor. This review could serve as a guide for all laboratories implementing dPCR. Most of the parameters discussed are applicable to fields other than purely GMO testing. Graphical abstract: There are generally three different options for absolute quantification of genetically modified organisms (GMOs) using digital PCR: droplet- or chamber-based and droplets in chambers. All have in common the distribution of reaction mixture into several partitions, which are all subjected to PCR and scored at the end-point as positive or negative. Based on these results, GMO content can be calculated.
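The Poisson statistics behind the absolute quantification mentioned in the graphical abstract can be made concrete in a few lines; the partition counts, volume, and dilution below are illustrative numbers:

```python
import math

def dpcr_copies_per_ul(n_positive, n_total, partition_volume_ul, dilution_factor):
    """Concentration from end-point partition counts via the Poisson correction."""
    neg_fraction = (n_total - n_positive) / n_total
    lam = -math.log(neg_fraction)              # mean copies per partition
    return lam / partition_volume_ul * dilution_factor

# e.g. 12,000 positive droplets out of 20,000 of 0.85 nL each, 10x dilution
print(f"{dpcr_copies_per_ul(12000, 20000, 0.85e-3, 10):.0f} copies/uL")
```

The -ln term corrects for partitions holding more than one copy, which is exactly why correct partition classification and accurate partition volume matter so much.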
NASA Astrophysics Data System (ADS)
Benítez, Hernán D.; Ibarra-Castanedo, Clemente; Bendada, AbdelHakim; Maldague, Xavier; Loaiza, Humberto; Caicedo, Eduardo
2008-01-01
It is well known that thermographic non-destructive testing methods based on thermal contrast are strongly affected by non-uniform heating at the surface; hence, the results obtained from these methods depend considerably on the chosen reference point. The differential absolute contrast (DAC) method was developed to eliminate the need to determine a reference point, defining the thermal contrast with respect to an ideal sound area instead. Although very useful at early times, DAC accuracy decreases when the heat front approaches the sample rear face. We propose a new DAC version that explicitly introduces the sample thickness using thermal quadrupoles theory, and we show that the new DAC range of validity extends to long times while preserving validity at short times. This new contrast is used for defect quantification in composite, Plexiglas™ and aluminum samples.
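For context, a sketch of the classical semi-infinite DAC that the quadrupole-based version extends: the sound-area response at time t is extrapolated from an early reference time via the 1D T ∝ 1/√t solution, so no physical sound region is needed. The thickness term added by the paper's quadrupole formulation is deliberately not modelled here.

```python
import numpy as np

def dac_classic(thermogram, times, t_prime_index):
    """thermogram: (n_frames, H, W) surface temperature rise after the flash.
    Returns the differential absolute contrast for each frame and pixel."""
    t = times.reshape(-1, 1, 1)
    tp = times[t_prime_index]
    # semi-infinite estimate of the sound-area response at each later time
    sound_estimate = thermogram[t_prime_index] * np.sqrt(tp / t)
    return thermogram - sound_estimate
```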
Worbs, Sylvia; Fiebig, Uwe; Zeleny, Reinhard; Schimmel, Heinz; Rummel, Andreas; Luginbühl, Werner; Dorner, Brigitte G.
2015-01-01
In the framework of the EU project EQuATox, a first international proficiency test (PT) on the detection and quantification of botulinum neurotoxins (BoNT) was conducted. Sample materials included BoNT serotypes A, B and E spiked into buffer, milk, meat extract and serum. Different methods were applied by the participants, combining different principles of detection, identification and quantification. Based on qualitative assays, 95% of all results reported were correct. Successful strategies for BoNT detection were based on a combination of complementary immunological, MS-based and functional methods or on suitable functional in vivo/in vitro approaches (mouse bioassay, hemidiaphragm assay and Endopep-MS assay). Quantification of BoNT/A, BoNT/B and BoNT/E was performed by 48% of participating laboratories. It turned out that precise quantification of BoNT was difficult, resulting in a substantial scatter of quantitative data. This was especially true for results obtained by the mouse bioassay, which is currently considered the “gold standard” for BoNT detection. The results clearly demonstrate the urgent need for certified BoNT reference materials and the development of methods replacing animal testing. In this context, the BoNT PT provided the valuable information that both the Endopep-MS assay and the hemidiaphragm assay delivered quantitative results superior to the mouse bioassay. PMID:26703724
Demeke, Tigst; Ratnayaka, Indira; Phan, Anh
2009-01-01
The quality of DNA affects the accuracy and repeatability of quantitative PCR results. Different DNA extraction and purification methods were compared for quantification of Roundup Ready (RR) soybean (event 40-3-2) by real-time PCR. DNA was extracted using cetyltrimethylammonium bromide (CTAB), the DNeasy Plant Mini Kit, and the Wizard Magnetic DNA purification system for food. CTAB-extracted DNA was also purified using the Zymo (DNA Clean & Concentrator 25 kit), Qtip 100 (Qiagen Genomic-Tip 100/G), and QIAEX II Gel Extraction Kit. The CTAB extraction method provided the largest amount of DNA, and the Zymo purification kit resulted in the highest percentage of DNA recovery. The Abs260/280 and Abs260/230 ratios were less than the expected values for some of the DNA extraction and purification methods used, indicating the presence of substances that could inhibit PCR reactions. Real-time quantitative PCR results were affected by the DNA extraction and purification methods used. Further purification or dilution of the CTAB DNA was required for successful quantification of RR soybean. Less variability of quantitative PCR results was observed among experiments and replications for DNA extracted and/or purified by the CTAB, CTAB+Zymo, CTAB+Qtip 100, and DNeasy methods. Correct and repeatable results for real-time PCR quantification of RR soybean were achieved using CTAB DNA purified with the Zymo and Qtip 100 methods.
Mannheim, Julia G; Schmid, Andreas M; Pichler, Bernd J
2017-12-01
Non-invasive in vivo positron emission tomography (PET) provides high detection sensitivity in the nano- to picomolar range and, in addition to other advantages, the possibility to absolutely quantify the acquired data. The present study focuses on the comparison of transmission data acquired with an X-ray computed tomography (CT) scanner or a Co-57 source for the Inveon small animal PET scanner (Siemens Healthcare, Knoxville, TN, USA), and determines their influence on quantification accuracy and the partial volume effect (PVE). A special focus was the impact of the performed calibration on quantification accuracy. Phantom measurements were carried out to determine the quantification accuracy, the influence of the object size on the quantification, and the PVE for different sphere sizes, along the field of view and for different contrast ratios. An influence of the emission activity on the Co-57 transmission measurements was discovered (deviations of measured from true activity of up to 24.06%), whereas no influence of the emission activity on the CT attenuation correction was identified (deviations <3% between measured and true activity). The quantification accuracy was substantially influenced by the applied calibration factor and by the object size. The PVE demonstrated a dependency on the sphere size, the position within the field of view, the reconstruction and correction algorithms, and the count statistics. Depending on the reconstruction algorithm, only ∼30-40% of the true activity within a small sphere could be resolved. The iterative 3D reconstruction algorithms showed substantially higher recovery values than the analytical and 2D iterative reconstruction algorithms (up to 70.46% and 80.82% recovery for the smallest and largest sphere, respectively, using iterative 3D reconstruction algorithms). The transmission measurement (CT or Co-57 source) used to correct for attenuation did not severely influence the PVE. The analysis of the quantification accuracy and the PVE revealed an influence of the object size, the reconstruction algorithm and the applied corrections. In particular, the influence of the emission activity during transmission measurements performed with a Co-57 source must be considered. To obtain comparable results, also among different scanner configurations, standardization of the acquisition (imaging parameters as well as applied reconstruction and correction protocols) is necessary.
Quantitative magnetic resonance spectroscopy at 3T based on the principle of reciprocity.
Zoelch, Niklaus; Hock, Andreas; Henning, Anke
2018-05-01
Quantification of magnetic resonance spectroscopy signals using the phantom replacement method requires an adequate correction of differences between the acquisition of the reference signal in the phantom and the measurement in vivo. Applying the principle of reciprocity, sensitivity differences can be corrected at low field strength by measuring the RF transmitter gain needed to obtain a certain flip angle in the measured volume. However, at higher field strength the transmit sensitivity may differ from the reception sensitivity, leading to incorrectly estimated concentrations. To address this issue, a quantification approach based on the principle of reciprocity for use at 3T is proposed and validated thoroughly. In this approach, the RF transmitter gain is determined automatically using a volume-selective power optimization and complemented with information from relative reception sensitivity maps derived from contrast-minimized images to correct differences in transmission and reception sensitivity. In this way, a reliable measure of the local sensitivity was obtained. The proposed method is used to derive in vivo concentrations of brain metabolites and tissue water in two studies with different coil sets in a total of 40 healthy volunteers. Resulting molar concentrations are compared with results using internal water referencing (IWR) and Electric REference To access In vivo Concentrations (ERETIC). With the proposed method, changes in coil loading and regional sensitivity due to B1 inhomogeneities are successfully corrected, as demonstrated in phantom and in vivo measurements. For the tissue water content, coefficients of variation between 2% and 3.5% were obtained (0.6-1.4% in a single subject). The coefficients of variation of the three major metabolites ranged from 3.4% to 14.5%. In general, the derived concentrations agree well with values estimated with IWR. Hence, the presented method is a valuable alternative to IWR, without the need for additional hardware such as ERETIC and with potential advantages in diseased tissue. Copyright © 2018 John Wiley & Sons, Ltd.
Liu, Ruolin; Dickerson, Julie
2017-11-01
We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but share the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing with large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracy. In an evaluation on a real data set, the transcript expression estimated by Strawberry showed the highest correlation with NanoString probe counts, an independent experimental measure of transcript expression. Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.
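A toy version of the kind of EM read assignment used in such quantification modules (fragment lengths, sequencing bias, and the latent class structure are omitted; this is a generic sketch, not Strawberry's actual code):

```python
import numpy as np

def em_abundance(compat, n_iter=100):
    """compat: (n_reads, n_transcripts) 0/1 read-transcript compatibility."""
    n_tx = compat.shape[1]
    theta = np.full(n_tx, 1.0 / n_tx)            # initial abundances
    for _ in range(n_iter):
        w = compat * theta                       # E-step: weight by abundance
        w /= w.sum(axis=1, keepdims=True)        # fractional read assignment
        theta = w.sum(axis=0)                    # M-step: expected counts
        theta /= theta.sum()
    return theta

compat = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]])
print(em_abundance(compat))                      # relative transcript abundances
```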
Quantitative chemical shift-encoded MRI is an accurate method to quantify hepatic steatosis.
Kühn, Jens-Peter; Hernando, Diego; Mensel, Birger; Krüger, Paul C; Ittermann, Till; Mayerle, Julia; Hosten, Norbert; Reeder, Scott B
2014-06-01
To compare the accuracy of liver fat quantification using a three-echo chemical shift-encoded magnetic resonance imaging (MRI) technique without and with correction for confounders, with spectroscopy (MRS) as the reference standard. Fifty patients (23 women, mean age 56.6 ± 13.2 years) with fatty liver disease were enrolled. Patients underwent T2-corrected single-voxel MRS and a three-echo chemical shift-encoded gradient echo (GRE) sequence at 3.0T. MRI fat fraction (FF) was calculated without and with T2* and T1 correction and multispectral modeling of fat and compared with MRS-FF using linear regression. The spectroscopic range of liver fat was 0.11%-38.7%. Excellent correlation between MRS-FF and MRI-FF was observed when using T2* correction (R² = 0.96). With use of T2* correction alone, the slope was significantly different from 1 (1.16 ± 0.03, P < 0.001) and the intercept was different from 0 (1.14% ± 0.50%, P < 0.023). The slope was also significantly different from 1.0 when no T1 correction was used (P = 0.001). When T2*, T1, and the spectral complexity of fat were addressed, the results showed equivalence between fat quantification using MRI and MRS (slope: 1.02 ± 0.03, P = 0.528; intercept: 0.26% ± 0.46%, P = 0.572). Complex three-echo chemical shift-encoded MRI is equivalent to MRS for quantifying liver fat, but only with correction for T2* decay and T1 recovery and use of spectral modeling of fat. This is necessary because T2* decay, T1 recovery, and the multispectral complexity of fat are processes that may otherwise bias the measurements. Copyright © 2013 Wiley Periodicals, Inc.
Teuho, Jarmo; Saunavaara, Virva; Tolvanen, Tuula; Tuokkola, Terhi; Karlsson, Antti; Tuisku, Jouni; Teräs, Mika
2017-10-01
In PET, corrections for photon scatter and attenuation are essential for visual and quantitative consistency. MR attenuation correction (MRAC) is generally conducted by image segmentation and assignment of discrete attenuation coefficients, which offer limited accuracy compared with CT attenuation correction. Potential inaccuracies in MRAC may affect scatter correction, because the attenuation image (μ-map) is used in single scatter simulation (SSS) to calculate the scatter estimate. We assessed the impact of MRAC to scatter correction using 2 scatter-correction techniques and 3 μ-maps for MRAC. Methods: The tail-fitted SSS (TF-SSS) and a Monte Carlo-based single scatter simulation (MC-SSS) algorithm implementations on the Philips Ingenuity TF PET/MR were used with 1 CT-based and 2 MR-based μ-maps. Data from 7 subjects were used in the clinical evaluation, and a phantom study using an anatomic brain phantom was conducted. Scatter-correction sinograms were evaluated for each scatter correction method and μ-map. Absolute image quantification was investigated with the phantom data. Quantitative assessment of PET images was performed by volume-of-interest and ratio image analysis. Results: MRAC did not result in large differences in scatter algorithm performance, especially with TF-SSS. Scatter sinograms and scatter fractions did not reveal large differences regardless of the μ-map used. TF-SSS showed slightly higher absolute quantification. The differences in volume-of-interest analysis between TF-SSS and MC-SSS were 3% at maximum in the phantom and 4% in the patient study. Both algorithms showed excellent correlation with each other with no visual differences between PET images. MC-SSS showed a slight dependency on the μ-map used, with a difference of 2% on average and 4% at maximum when a μ-map without bone was used. Conclusion: The effect of different MR-based μ-maps on the performance of scatter correction was minimal in non-time-of-flight 18F-FDG PET/MR brain imaging. The SSS algorithm was not affected significantly by MRAC. The performance of the MC-SSS algorithm is comparable but not superior to TF-SSS, warranting further investigations of algorithm optimization and performance with different radiotracers and time-of-flight imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Vosough, Maryam; Salemi, Amir
2007-08-15
In the present work, two second-order calibration methods, the generalized rank annihilation method (GRAM) and multivariate curve resolution-alternating least squares (MCR-ALS), have been applied to standard addition data matrices obtained by gas chromatography-mass spectrometry (GC-MS) to characterize and quantify four unsaturated fatty acids, cis-9-hexadecenoic acid (C16:1omega7c), cis-9-octadecenoic acid (C18:1omega9c), cis-11-eicosenoic acid (C20:1omega9) and cis-13-docosenoic acid (C22:1omega9), in fish oil, considering matrix interferences. With these methods, the peak area does not need to be directly measured and predictions are more accurate. Because of the non-trilinear nature of GC-MS data matrices, MCR-ALS and GRAM were first applied to uncorrected data matrices. In comparison to MCR-ALS, biased and imprecise concentrations (%R.S.D. = 27.3) were obtained using GRAM without correcting the retention-time shift. As trilinearity is an essential requirement for implementing GRAM, the data need to be corrected. Multivariate rank alignment objectively corrects the run-to-run retention time variations between a sample GC-MS data matrix and a standard addition GC-MS data matrix. The two second-order algorithms were then compared with each other. Both algorithms provided similar mean predictions, pure concentration profiles and spectral profiles. The results were validated using standard mass spectra of the target compounds. In addition, some of the quantification results were compared with the concentration values obtained using selected mass chromatograms. Because the classical univariate method of determining analyte peak areas fails in the case of strong peak overlap and matrix effects, the "second-order advantage" solved this problem successfully.
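The core of an MCR-ALS iteration for a data matrix D ≈ C Sᵀ can be sketched in a few lines; real implementations add further constraints (unimodality, closure), convergence checks, and an initial spectral estimate, e.g. from purest-variable selection:

```python
import numpy as np

def mcr_als(D, S0, n_iter=50):
    """D: (n_times, n_channels) data; S0: (n_channels, k) initial spectra."""
    S = S0.copy()
    for _ in range(n_iter):
        C = np.linalg.lstsq(S, D.T, rcond=None)[0].T  # concentration profiles
        C = np.clip(C, 0, None)                       # non-negativity
        S = np.linalg.lstsq(C, D, rcond=None)[0].T    # spectral profiles
        S = np.clip(S, 0, None)
    return C, S
```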
Heats of Mixing Using an Isothermal Titration Calorimeter: Associated Thermal Effects
de Rivera, Manuel Rodríguez; Socorro, Fabiola; Matos, José S.
2009-01-01
The correct determination of the energy generated or absorbed in the sample cell of an Isothermal Titration Calorimeter (ITC) requires a thorough analysis of the calorimetric signal. This means the identification and quantification of any thermal effect inherent to the working method. In this work, we review several thermal effects, studied in our previous work, which appear when an ITC is used for measuring the heats of mixing of liquids in a continuous mode. These effects are due to: (i) the difference between the temperature of the injected liquid and the temperature of the mixture during the mixing process, (ii) the increase of the liquid volume located in the mixing cell and (iii) the stirring velocity. In addition, methods for identifying and quantifying these effects are suggested. PMID:19742175
Pérez-Castaño, Estefanía; Sánchez-Viñas, Mercedes; Gázquez-Evangelista, Domingo; Bagur-González, M Gracia
2018-01-15
This paper describes and discusses the application of chromatographic fingerprints of trimethylsilyl (TMS)-4,4'-desmethylsterol derivatives (obtained from an off-line HPLC-GC-FID system) for the quantification of extra virgin olive oil in commercial vinaigrettes, salad dressings and in-house reference materials (i-HRM) using two different Partial Least Squares Regression (PLS-R) multivariate quantification methods. Different data pre-processing strategies were applied, the full sequence being: (i) internal normalization; (ii) sampling based on the Nyquist theorem; (iii) internal correlation optimized shifting (icoshift); (iv) baseline correction; (v) mean centering; and (vi) zone selection. The first model corresponds to a matrix of dimensions 'n×911' variables and the second one to a matrix of dimensions 'n×431' variables. Notably, the two proposed PLS-R models allow the quantification of extra virgin olive oil in binary blends, foodstuffs, etc., when the percentage present is greater than 25%. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Behtani, A.; Bouazzouni, A.; Khatir, S.; Tiachacht, S.; Zhou, Y.-L.; Abdel Wahab, M.
2017-05-01
In this paper, the problem of using measured modal parameters to detect and locate damage in laminated composite beam structures with four layers of graphite/epoxy [0°/90₂°/0°] is investigated. A technique based on the residual force method is applied to the laminated composite structure with different boundary conditions. The results for several damage cases demonstrate that, using the residual force method as a damage index, damage locations can be identified correctly and damage extents can be estimated as well.
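The residual force index itself is compact: substituting measured (damaged) modal parameters into the eigenproblem of the intact model leaves a residual that concentrates at the damaged degrees of freedom. A minimal sketch, assuming the intact stiffness and mass matrices K and M are available:

```python
import numpy as np

def residual_force(K, M, omega_measured, phi_measured):
    """Residual force vectors, one column per measured mode:
    R_j = (K - omega_j^2 * M) @ phi_j, near zero for the intact structure."""
    R = np.empty_like(phi_measured)
    for j, w in enumerate(omega_measured):
        R[:, j] = (K - w**2 * M) @ phi_measured[:, j]
    return R

# damage index per DOF, e.g. the residual norm across modes:
# index = np.linalg.norm(residual_force(K, M, w_meas, phi_meas), axis=1)
```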
Kim, Dahan; Curthoys, Nikki M.; Parent, Matthew T.; Hess, Samuel T.
2015-01-01
Multi-colour localization microscopy has enabled sub-diffraction studies of colocalization between multiple biological species and quantification of their correlation at length scales previously inaccessible with conventional fluorescence microscopy. However, bleed-through, or misidentification of probe species, creates false colocalization and artificially increases certain types of correlation between two imaged species, affecting the reliability of information provided by colocalization and quantified correlation. Despite the potential risk of these artefacts of bleed-through, neither the effect of bleed-through on correlation nor methods of its correction in correlation analyses has been systematically studied at typical rates of bleed-through reported to affect multi-colour imaging. Here, we present a reliable method of bleed-through correction applicable to image rendering and correlation analysis of multi-colour localization microscopy. Application of our bleed-through correction shows our method accurately corrects the artificial increase in both types of correlations studied (Pearson coefficient and pair correlation), at all rates of bleed-through tested, in all types of correlations examined. In particular, anti-correlation could not be quantified without our bleed-through correction, even at rates of bleed-through as low as 2%. Demonstrated with dichroic-based multi-colour FPALM here, our presented method of bleed-through correction can be applied to all types of localization microscopy (PALM, STORM, dSTORM, GSDIM, etc.), including both simultaneous and sequential multi-colour modalities, provided the rate of bleed-through can be reliably determined. PMID:26185614
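One simple way to realize such a correction on rendered two-colour images, treating them as a linear mixture and inverting the 2x2 mixing matrix, is sketched below. This linear-unmixing form is an illustrative simplification of per-localization correction; the bleed-through rates a_ab and a_ba are assumed to have been measured from single-species control samples.

```python
import numpy as np

def unmix_two_colour(img_a, img_b, a_ab, a_ba):
    """a_ab: fraction of species A misidentified as B; a_ba: the reverse."""
    M = np.array([[1 - a_ab, a_ba],
                  [a_ab, 1 - a_ba]])            # columns: true A, true B
    stack = np.stack([img_a, img_b])            # (2, H, W) measured channels
    true = np.tensordot(np.linalg.inv(M), stack, axes=1)
    return np.clip(true[0], 0, None), np.clip(true[1], 0, None)
```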
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laberge, S; Beauregard, J; Archambault, L
2016-06-15
Purpose: Textural biomarkers as a tool for quantifying intratumoral heterogeneity hold great promise for diagnosis and early assessment of treatment response in prostate cancer. However, spill-in counts from the bladder uptake are suspected to have an impact on the textural measurements of the prostate volume. This work proposes a correction method for the FCh-PET bladder uptake and investigates its impact on intraprostatic textural properties. Methods: Two patients with PC received pre-treatment dynamic FCh-PET scans reconstructed at four time points (interval: 2 min), for which prostate and bladder contours were obtained. Projection bins affected by bladder uptake were determined by forward-projection. For each time point and axial position, virtual sinograms were obtained and affected bins replaced by a weighted combination of original values and values interpolated using cubic spline from non-affected bins of the current and adjacent projection angles. The process was optimized using a genetic algorithm in terms of minimization of the root-mean-square error (RMSE) within the bladder between the corrected dynamic time point volume and a reference initial uptake volume. Finally, the impact of the bladder uptake correction on the prostate region was investigated using two standard SUV metrics (1) and three texture metrics (2): 1) SUVmax, SUVmean; 2) Contrast, Homogeneity, Coarseness. Results: Without bladder uptake correction, SUVmax and SUVmean were on average overestimated in the prostate by 0%, 0%, 33.2%, 51.2%, and 3.6%, 6.0%, 2.9%, 3.2%, for each time point respectively. Contrast varied by −9.1%, −6.7%, +40.4%, +107.7%, and Homogeneity and Coarseness by +4.5%, +1.8%, −8.8%, −14.8% and +1.0%, +0.5%, −9.5%, +0.9%. Conclusion: We proposed a method for FCh-PET bladder uptake correction and showed an impact on the quantification of the prostate signal. This method achieved a large reduction of intra-prostatic SUVmax while minimizing the impact on SUVmean. Further investigation is necessary to interpret changes in textural features. SL acknowledges partial support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290).
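An illustrative version of the in-painting step for one virtual sinogram row is given below; the blending weight w stands in for the kind of parameter the genetic algorithm optimizes and is a fixed assumption here.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def inpaint_row(row, affected_mask, w=0.8):
    """Replace bladder-affected bins by a blend of the original values and a
    cubic-spline interpolation from the unaffected bins of the row."""
    bins = np.arange(row.size)
    spline = CubicSpline(bins[~affected_mask], row[~affected_mask])
    out = row.copy()
    out[affected_mask] = (1 - w) * row[affected_mask] + w * spline(bins[affected_mask])
    return out
```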
NASA Astrophysics Data System (ADS)
Akamatsu, G.; Ikari, Y.; Ohnishi, A.; Nishida, H.; Aita, K.; Sasaki, M.; Yamamoto, Y.; Sasaki, M.; Senda, M.
2016-08-01
Amyloid PET is useful for early and/or differential diagnosis of Alzheimer’s disease (AD). Quantification of amyloid deposition using PET has been employed to improve diagnosis and to monitor AD therapy, particularly in research. Although MRI is often used for segmentation of gray matter and for spatial normalization into standard Montreal Neurological Institute (MNI) space where region-of-interest (ROI) template is defined, 3D MRI is not always available in clinical practice. The purpose of this study was to examine the feasibility of PET-only amyloid quantification with an adaptive template and a pre-defined standard ROI template that has been empirically generated from typical cases. A total of 68 subjects who underwent brain 11C-PiB PET were examined. The 11C-PiB images were non-linearly spatially normalized to the standard MNI T1 atlas using the same transformation parameters of MRI-based normalization. The automatic-anatomical-labeling-ROI (AAL-ROI) template was applied to the PET images. All voxel values were normalized by the mean value of cerebellar cortex to generate the SUVR-scaled images. Eleven typical positive images and eight typical negative images were normalized and averaged, respectively, and were used as the positive and negative template. Positive and negative masks which consist of voxels with SUVR ⩾1.7 were extracted from both templates. Empirical PiB-prone ROI (EPP-ROI) was generated by subtracting the negative mask from the positive mask. The 11C-PiB image of each subject was non-rigidly normalized to the positive and negative template, respectively, and the one with higher cross-correlation was adopted. The EPP-ROI was then inversely transformed to individual PET images. We evaluated differences of SUVR between standard MRI-based method and PET-only method. We additionally evaluated whether the PET-only method would correctly categorize 11C-PiB scans as positive or negative. Significant correlation was observed between the SUVRs obtained with AAL-ROI and those with EPP-ROI when MRI-based normalization was used, the latter providing higher SUVR. When EPP-ROI was used, MRI-based method and PET-only method provided almost identical SUVR. All 11C-PiB scans were correctly categorized into positive and negative using a cutoff value of 1.7 as compared to visual interpretation. The 11C-PiB SUVR were 2.30 ± 0.24 and 1.25 ± 0.11 for the positive and negative images. PET-only amyloid quantification method with adaptive templates and EPP-ROI can provide accurate, robust and simple amyloid quantification without MRI.
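The final quantification step reduces to a ratio of ROI means; a minimal sketch with the paper's 1.7 cutoff, assuming boolean ROI masks already aligned to the PET volume:

```python
import numpy as np

def suvr(pet, target_mask, cerebellum_mask, cutoff=1.7):
    """SUVR = mean uptake in target ROI / mean cerebellar cortex uptake."""
    value = pet[target_mask].mean() / pet[cerebellum_mask].mean()
    return value, ("positive" if value >= cutoff else "negative")
```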
NASA Astrophysics Data System (ADS)
Mihalcescu, Irina; Van-Melle Gateau, Mathilde; Chelli, Bernard; Pinel, Corinne; Ravanat, Jean-Luc
2015-12-01
The intrinsic green autofluorescence of an Escherichia coli culture has long been overlooked and empirically corrected in green fluorescent protein (GFP) reporter experiments. We show here, using complementary methods of fluorescence analysis and HPLC, that this autofluorescence arises principally from flavins secreted into the external medium. The cells secrete roughly 10 times more than what they keep inside. We next show that the secreted flavin fluorescence can be used as a complementary method for measuring cell concentration, particularly when the classical method, based on optical density measurements, starts to fail. We also demonstrate that the same external flavins limit the dynamic range of GFP quantification and can lead to a false impression of a lower global dynamic range of expression than what really occurs. Finally, we evaluate different autofluorescence correction methods to extract the real GFP signal.
Brouckaert, D; Uyttersprot, J-S; Broeckx, W; De Beer, T
2017-09-01
Calibration transfer of partial least squares (PLS) quantification models is established between two Raman spectrometers located at two liquid detergent production plants. As full recalibration of existing calibration models is time-consuming, labour-intensive and costly, it is investigated whether mathematical correction methods requiring only a handful of standardization samples can overcome the dissimilarities in spectral response observed between the two measurement systems. Univariate and multivariate standardization approaches are investigated, ranging from simple slope/bias correction (SBC), local centring (LC) and single wavelength standardization (SWS) to more complex direct standardization (DS) and piecewise direct standardization (PDS). The results of these five calibration transfer methods are compared reciprocally, as well as with regard to a full recalibration. Four PLS quantification models, each predicting the concentration of one of the four main ingredients in the studied liquid detergent composition, were the targets of the transfer. Accuracy profiles are established from the original and transferred quantification models for validation purposes. A reliable representation of the calibration models' performance before and after transfer is thus established, based on β-expectation tolerance intervals. For each transferred model, it is investigated whether every future routine measurement will be close enough to the unknown true value of the sample. From this validation, it is concluded that instrument standardization is successful for three out of four investigated calibration models using multivariate (DS and PDS) transfer approaches. The fourth transferred PLS model could not be validated over the investigated concentration range, due to a lack of precision of the slave instrument. Comparing these transfer results to a full recalibration on the slave instrument allows comparison of the predictive power of both Raman systems and leads to the formulation of guidelines for further standardization projects. It is concluded that it is essential to evaluate the performance of the slave instrument prior to transfer, even when it is theoretically identical to the master apparatus. Copyright © 2017 Elsevier B.V. All rights reserved.
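The simplest of the evaluated transfer methods, slope/bias correction, fits a univariate linear map between master and slave predictions on the standardization samples; a minimal sketch:

```python
import numpy as np

def fit_sbc(pred_slave_std, pred_master_std):
    """Fit slope and bias from predictions on shared standardization samples."""
    slope, bias = np.polyfit(pred_slave_std, pred_master_std, deg=1)
    return slope, bias

def apply_sbc(pred_slave, slope, bias):
    return slope * pred_slave + bias
```

DS and PDS generalize this idea from a scalar map on predictions to a (piecewise) linear map on the spectra themselves.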
A modified TEW approach to scatter correction for In-111 and Tc-99m dual-isotope small-animal SPECT.
Prior, Paul; Timmins, Rachel; Petryk, Julia; Strydhorst, Jared; Duan, Yin; Wei, Lihui; Glenn Wells, R
2016-10-01
In dual-isotope (Tc-99m/In-111) small-animal single-photon emission computed tomography (SPECT), quantitative accuracy of Tc-99m activity measurements is degraded due to the detection of Compton-scattered photons in the Tc-99m photopeak window, which originate from the In-111 emissions (cross talk) and from the Tc-99m emission (self-scatter). The standard triple-energy window (TEW) estimates the total scatter (self-scatter and cross talk) using one scatter window on either side of the Tc-99m photopeak window, but the estimate is biased due to the presence of unscattered photons in the scatter windows. The authors present a modified TEW method to correct for total scatter that compensates for this bias and evaluate the method in phantoms and in vivo. The number of unscattered Tc-99m and In-111 photons present in each scatter-window projection is estimated based on the number of photons detected in the photopeak of each isotope, using the isotope-dependent energy resolution of the detector. The camera-head-specific energy resolutions for the 140 keV Tc-99m and 171 keV In-111 emissions were determined experimentally by separately sampling the energy spectra of each isotope. Each sampled spectrum was fit with a Linear + Gaussian function. The fitted Gaussian functions were integrated across each energy window to determine the proportion of unscattered photons from each emission detected in the scatter windows. The method was first tested and compared to the standard TEW in phantoms containing Tc-99m:In-111 activity ratios between 0.15 and 6.90. True activities were determined using a dose calibrator, and SPECT activities were estimated from CT-attenuation-corrected images with and without scatter-correction. The method was then tested in vivo in six rats using In-111-liposome and Tc-99m-tetrofosmin to generate cross talk in the area of the myocardium. The myocardium was manually segmented using the SPECT and CT images, and partial-volume correction was performed using a template-based approach. The rat heart was counted in a well-counter to determine the true activity. In the phantoms without correction for Compton-scatter, Tc-99m activity quantification errors as high as 85% were observed. The standard TEW method quantified Tc-99m activity with an average accuracy of -9.0% ± 0.7%, while the modified TEW was accurate within 5% of truth in phantoms with Tc-99m:In-111 activity ratios ≥0.52. Without scatter-correction, In-111 activity was quantified with an average accuracy of 4.1%, and there was no dependence of accuracy on the activity ratio. In rat myocardia, uncorrected images were overestimated by an average of 23% ± 5%, and the standard TEW had an accuracy of -13.8% ± 1.6%, while the modified TEW yielded an accuracy of -4.0% ± 1.6%. Cross talk and self-scatter were shown to produce quantification errors in phantoms as well as in vivo. The standard TEW provided inaccurate results due to the inclusion of unscattered photons in the scatter windows. The modified TEW improved the scatter estimate and reduced the quantification errors in phantoms and in vivo.
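In schematic form, the modification removes the predicted unscattered counts from each scatter window before forming the usual trapezoidal TEW estimate; the unscattered-count terms would come from integrating the fitted Gaussian photopeaks, using the measured camera-specific energy resolutions:

```python
def tew_scatter(c_low, c_high, w_low, w_high, w_peak,
                unscattered_low=0.0, unscattered_high=0.0):
    """Scatter counts estimated inside the photopeak window (modified TEW).
    With the last two arguments left at zero this is the standard TEW."""
    c_low_s = max(c_low - unscattered_low, 0.0)
    c_high_s = max(c_high - unscattered_high, 0.0)
    return 0.5 * (c_low_s / w_low + c_high_s / w_high) * w_peak
```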
Yokoo, Takeshi; Bydder, Mark; Hamilton, Gavin; Middleton, Michael S.; Gamst, Anthony C.; Wolfson, Tanya; Hassanein, Tarek; Patton, Heather M.; Lavine, Joel E.; Schwimmer, Jeffrey B.; Sirlin, Claude B.
2009-01-01
Purpose: To assess the accuracy of four fat quantification methods at low-flip-angle multiecho gradient-recalled-echo (GRE) magnetic resonance (MR) imaging in nonalcoholic fatty liver disease (NAFLD) by using MR spectroscopy as the reference standard. Materials and Methods: In this institutional review board–approved, HIPAA-compliant prospective study, 110 subjects (29 with biopsy-confirmed NAFLD, 50 overweight and at risk for NAFLD, and 31 healthy volunteers) (mean age, 32.6 years ± 15.6 [standard deviation]; range, 8–66 years) gave informed consent and underwent MR spectroscopy and GRE MR imaging of the liver. Spectroscopy involved a long repetition time (to suppress T1 effects) and multiple echo times (to estimate T2 effects); the reference fat fraction (FF) was calculated from T2-corrected fat and water spectral peak areas. Imaging involved a low flip angle (to suppress T1 effects) and multiple echo times (to estimate T2* effects); imaging FF was calculated by using four analysis methods of progressive complexity: dual echo, triple echo, multiecho, and multiinterference. All methods except dual echo corrected for T2* effects. The multiinterference method corrected for multiple spectral interference effects of fat. For each method, the accuracy for diagnosis of fatty liver, as defined with a spectroscopic threshold, was assessed by estimating sensitivity and specificity; fat-grading accuracy was assessed by comparing imaging and spectroscopic FF values by using linear regression. Results: Dual-echo, triple-echo, multiecho, and multiinterference methods had a sensitivity of 0.817, 0.967, 0.950, and 0.983 and a specificity of 1.000, 0.880, 1.000, and 0.880, respectively. On the basis of regression slope and intercept, the multiinterference (slope, 0.98; intercept, 0.91%) method had high fat-grading accuracy without statistically significant error (P > .05). Dual-echo (slope, 0.98; intercept, −2.90%), triple-echo (slope, 0.94; intercept, 1.42%), and multiecho (slope, 0.85; intercept, −0.15%) methods had statistically significant error (P < .05). Conclusion: Relaxation- and interference-corrected fat quantification at low-flip-angle multiecho GRE MR imaging provides high diagnostic and fat-grading accuracy in NAFLD. © RSNA, 2009 PMID:19221054
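For reference, the dual-echo estimator in its usual magnitude form, which omits the T2* correction and is therefore the method most easily biased by relaxation effects (arrays are in-phase and opposed-phase magnitude images; this generic form is a sketch, not the paper's exact implementation):

```python
import numpy as np

def dual_echo_fat_fraction(s_ip, s_op):
    """Percent fat fraction from in-phase and opposed-phase signals."""
    return 100.0 * (s_ip - s_op) / (2.0 * s_ip)
```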
Automated lobar quantification of emphysema in patients with severe COPD.
Revel, Marie-Pierre; Faivre, Jean-Baptiste; Remy-Jardin, Martine; Deken, Valérie; Duhamel, Alain; Marquette, Charles-Hugo; Tacelli, Nunzia; Bakai, Anne-Marie; Remy, Jacques
2008-12-01
Automated lobar quantification of emphysema has not yet been evaluated. Unenhanced 64-slice MDCT was performed in 47 patients evaluated before bronchoscopic lung-volume reduction. CT images reconstructed with a standard (B20) and high-frequency (B50) kernel were analyzed using a dedicated prototype software (MevisPULMO) allowing lobar quantification of emphysema extent. Lobar quantification was obtained following (a) a fully automatic delineation of the lobar limits by the software and (b) a semiautomatic delineation with manual correction of the lobar limits when necessary, and was compared with the visual scoring of emphysema severity per lobe. No statistically significant difference existed between automated and semiautomated lobar quantification (p > 0.05 in the five lobes), with differences ranging from 0.4 to 3.9%. The agreement between the two methods (intraclass correlation coefficient, ICC) was excellent for the left upper lobe (ICC = 0.94), left lower lobe (ICC = 0.98), and right lower lobe (ICC = 0.80). The agreement was good for the right upper lobe (ICC = 0.68) and moderate for the middle lobe (ICC = 0.53). The Bland and Altman plots confirmed these results. A good agreement was observed between the software and visually assessed lobar predominance of emphysema (kappa 0.78; 95% CI 0.64-0.92). Automated and semiautomated lobar quantifications of emphysema are concordant and show good agreement with visual scoring.
Quantification of Liver Fat in the Presence of Iron Overload
Horng, Debra E.; Hernando, Diego; Reeder, Scott B.
2017-01-01
Purpose To evaluate the accuracy of R2* models (1/T2* = R2*) for chemical shift-encoded magnetic resonance imaging (CSE-MRI)-based proton density fat-fraction (PDFF) quantification in patients with fatty liver and iron overload, using MR spectroscopy (MRS) as the reference standard. Materials and Methods Two Monte Carlo simulations were implemented to compare the root-mean-squared-error (RMSE) performance of single-R2* and dual-R2* correction in a theoretical liver environment with high iron. Fatty liver was defined as hepatic PDFF >5.6% based on MRS; only subjects with fatty liver were considered for analyses involving fat. From a group of 40 patients with known/suspected iron overload, nine patients were identified at 1.5T, and 13 at 3.0T with fatty liver. MRS linewidth measurements were used to estimate R2* values for water and fat peaks. PDFF was measured from CSE-MRI data using single-R2* and dual-R2* correction with magnitude and complex fitting. Results Spectroscopy-based R2* analysis demonstrated that the R2* of water and fat remain close in value, both increasing as iron overload increases: linear regression between R2*W and R2*F resulted in slope = 0.95 [0.79–1.12] (95% limits of agreement) at 1.5T and slope = 0.76 [0.49–1.03] at 3.0T. MRI-PDFF using dual-R2* correction had severe artifacts. MRI-PDFF using single-R2* correction had good agreement with MRS-PDFF: Bland–Altman analysis resulted in −0.7% (bias) ± 2.9% (95% limits of agreement) for magnitude-fit and −1.3% ± 4.3% for complex-fit at 1.5T, and −1.5% ± 8.4% for magnitude-fit and −2.2% ± 9.6% for complex-fit at 3.0T. Conclusion Single-R2* modeling enables accurate PDFF quantification, even in patients with iron overload. PMID:27405703
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlier, Thomas, E-mail: thomas.carlier@chu-nantes.fr; Willowson, Kathy P.; Fourkal, Eugene
Purpose: 90Y-positron emission tomography (PET) imaging is becoming a recognized modality for postinfusion quantitative assessment following radioembolization therapy. However, the extremely low counts and high random fraction associated with 90Y-PET may significantly impair both qualitative and quantitative results. The aim of this work was to study image quality and noise level in relation to the quantification and bias performance of two types of Siemens PET scanners when imaging 90Y and to compare experimental results with clinical data from two types of commercially available 90Y microspheres. Methods: Data were acquired on both Siemens Biograph TruePoint [non-time-of-flight (TOF)] and Biograph mCT (TOF) PET/CT scanners. The study was conducted in three phases. The first aimed to assess quantification and bias for different reconstruction methods according to random fraction and number of true counts in the scan. The NEMA 1994 PET phantom was filled with water with one cylindrical insert left empty (air) and the other filled with a solution of 90Y. The phantom was scanned for 60 min in the PET/CT scanner every one or two days. The second phase used the NEMA 2001 PET phantom to derive noise and image quality metrics. The spheres and the background were filled with a 90Y solution in an 8:1 contrast ratio and four 30 min acquisitions were performed over a one week period. Finally, data from 32 patients (8 treated with Therasphere® and 24 with SIR-Spheres®) were retrospectively reconstructed and activity in the whole field of view and the liver was compared to the theoretical injected activity. Results: The contribution of both bremsstrahlung and LSO trues was found to be negligible, allowing data to be decay corrected to obtain correct quantification. In general, the recovered activity for all reconstruction methods was stable over the range studied, with a small bias appearing at extremely high random fraction and low counts for iterative algorithms. Point spread function (PSF) correction and TOF reconstruction in general reduce background variability and noise and increase recovered concentration. Results for patient data indicated a good correlation between the expected and PET reconstructed activities. A linear relationship between the expected and the measured activities in the organ of interest was observed for all reconstruction methods used: a linearity coefficient of 0.89 ± 0.05 for the Biograph mCT and 0.81 ± 0.05 for the Biograph TruePoint. Conclusions: Due to the low counts and high random fraction, accurate image quantification of 90Y during selective internal radionuclide therapy is affected by random coincidence estimation, scatter correction, and any positivity constraint of the algorithm. Nevertheless, phantom and patient studies showed that the impact of the number of true and random coincidences on quantitative results was limited as long as ordinary Poisson ordered-subsets expectation maximization reconstruction algorithms with random smoothing are used. Adding PSF correction and TOF information to the reconstruction greatly improves the image quality in terms of bias, variability, noise reduction, and detectability. In the patient studies, the total activity in the field of view was in general accurately measured by the Biograph mCT and slightly overestimated by the Biograph TruePoint.
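The decay correction mentioned in the results is elementary but worth making explicit, given the long 90Y half-life (about 64.1 h is assumed below) relative to typical post-infusion scan delays:

```python
import math

def decay_correct_to_infusion(activity_at_scan, hours_since_infusion,
                              half_life_h=64.1):
    """Reference a measured 90Y activity back to the infusion time."""
    return activity_at_scan * math.exp(math.log(2) * hours_since_infusion / half_life_h)
```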
NASA Astrophysics Data System (ADS)
Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude
2017-10-01
In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone has been established thanks to a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) to air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue only AC map and, finally, the CT derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm corresponding to the PET scanner intrinsic resolution. As expected TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR derived AC methods, reducing the quantification error between the MRAC corrected PET image and the reference CTAC corrected PET image.
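A schematic of the pseudo-CT construction, with placeholder slope, intercept and tissue values standing in for the paper's fitted relationship (only the structure, fixed classes plus a linear ZTE-to-HU map inside bone, is taken from the description above):

```python
import numpy as np

def zte_to_pseudo_ct(zte_norm, bone_mask, air_mask,
                     slope=-2000.0, intercept=2000.0):
    """Pseudo-CT in HU from histogram-normalized ZTE intensities."""
    hu = np.full(zte_norm.shape, 40.0)     # soft tissue (placeholder HU)
    hu[air_mask] = -1000.0                 # air
    hu[bone_mask] = slope * zte_norm[bone_mask] + intercept  # continuous bone
    return hu
```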
Idilman, Ilkay S; Keskin, Onur; Celik, Azim; Savas, Berna; Elhan, Atilla Halil; Idilman, Ramazan; Karcaaltincaba, Musturay
2016-03-01
Many imaging methods have been defined for quantification of hepatic steatosis in non-alcoholic fatty liver disease (NAFLD). However, studies comparing the efficiency of magnetic resonance imaging-proton density fat fraction (MRI-PDFF), magnetic resonance spectroscopy (MRS), and liver histology for quantification of liver fat content are limited. To compare the efficiency of MRI-PDFF and MRS in the quantification of liver fat content in individuals with NAFLD, a total of 19 NAFLD patients underwent MRI-PDFF, MRS, and liver biopsy for quantification of liver fat content. The MR examinations were performed on a 1.5 T HDx MRI system. The MRI protocol included T1-independent volumetric multi-echo gradient-echo imaging with T2* correction and spectral fat modeling, and MRS with the STEAM technique. A close correlation was observed between liver MRI-PDFF- and histology-determined steatosis (r = 0.743, P < 0.001) and between liver MRS- and histology-determined steatosis (r = 0.712, P < 0.001), with no superiority between them (z = 0.19, P = 0.849). For quantification of hepatic steatosis, a high correlation was observed between the two MRI methods (r = 0.986, P < 0.001). MRI-PDFF and MRS accurately differentiated moderate/severe steatosis from mild/no hepatic steatosis (P = 0.007 and 0.013, respectively), with no superiority between them (AUC(MRI-PDFF) = 0.881 ± 0.0856 versus AUC(MRS) = 0.857 ± 0.0924, P = 0.461). Both MRI-PDFF and MRS can be used for accurate quantification of hepatic steatosis. © The Foundation Acta Radiologica 2015.
Croker, Denise M; Hennigan, Michelle C; Maher, Anthony; Hu, Yun; Ryder, Alan G; Hodnett, Benjamin K
2012-04-07
Diffraction and spectroscopic methods were evaluated for quantitative analysis of binary powder mixtures of FII(6.403) and FIII(6.525) piracetam. The two polymorphs of piracetam could be distinguished using powder X-ray diffraction (PXRD), Raman and near-infrared (NIR) spectroscopy. The results demonstrated that Raman and NIR spectroscopy are most suitable for quantitative analysis of this polymorphic mixture. When the spectra were treated with a combination of multiplicative scatter correction (MSC) and second derivative data pretreatments, the partial least squares (PLS) regression models gave a root mean square error of calibration (RMSEC) of 0.94% and 0.99%, respectively. FIII(6.525) demonstrated some preferred orientation in PXRD analysis, making PXRD the least preferred method of quantification. Copyright © 2012 Elsevier B.V. All rights reserved.
Beuthien-Baumann, B
2018-05-01
Positron emission tomography (PET) is a nuclear medicine procedure applied predominantly in oncological diagnostics. In the form of modern hybrid machines, such as PET/computed tomography (PET/CT) and PET/magnetic resonance imaging (PET/MRI), it has found wide acceptance and availability. PET is more than just another imaging technique: it is a functional method that, in addition to the distribution pattern of the radiopharmaceutical, offers the capability for quantification, and its results are used for therapeutic decisions. A profound knowledge of the principles of PET, including the correct indications, patient preparation, and possible artifacts, is mandatory for the correct interpretation of PET results.
Sequencing small genomic targets with high efficiency and extreme accuracy
Schmitt, Michael W.; Fox, Edward J.; Prindle, Marc J.; Reid-Bayliss, Kate S.; True, Lawrence D.; Radich, Jerald P.; Loeb, Lawrence A.
2015-01-01
The detection of minority variants in mixed samples demands methods for enrichment and accurate sequencing of small genomic intervals. We describe an efficient approach based on sequential rounds of hybridization with biotinylated oligonucleotides, enabling more than one-million-fold enrichment of genomic regions of interest. In conjunction with error-correcting double-stranded molecular tags, our approach enables the quantification of mutations in individual DNA molecules. PMID:25849638
Chaudhry, Waseem; Hussain, Nasir; Ahlberg, Alan W.; Croft, Lori B.; Fernandez, Antonio B.; Parker, Mathew W.; Swales, Heather H.; Slomka, Piotr J.; Henzlova, Milena J.; Duvall, W. Lane
2016-01-01
Background A stress-first myocardial perfusion imaging (MPI) protocol saves time, is cost effective, and decreases radiation exposure. A limitation of this protocol is the requirement for physician review of the stress images to determine the need for rest images. This hurdle could be eliminated if an experienced technologist and/or automated computer quantification could make this determination. Methods Images from consecutive patients who were undergoing a stress-first MPI with attenuation correction at two tertiary care medical centers were prospectively reviewed independently by a technologist and cardiologist blinded to clinical and stress test data. Their decision on the need for rest imaging along with automated computer quantification of perfusion results was compared with the clinical reference standard of an assessment of perfusion images by a board-certified nuclear cardiologist that included clinical and stress test data. Results A total of 250 patients (mean age 61 years and 55% female) who underwent a stress-first MPI were studied. According to the clinical reference standard, 42 (16.8%) and 208 (83.2%) stress-first images were interpreted as “needing” and “not needing” rest images, respectively. The technologists correctly classified 229 (91.6%) stress-first images as either “needing” (n = 28) or “not needing” (n = 201) rest images. Their sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 66.7%, 96.6%, 80.0%, and 93.5%, respectively. An automated stress TPD score ≥1.2 was associated with optimal sensitivity and specificity and correctly classified 179 (71.6%) stress-first images as either “needing” (n = 31) or “not needing” (n = 148) rest images. Its sensitivity, specificity, PPV, and NPV were 73.8%, 71.2%, 34.1%, and 93.1%, respectively. In a model whereby the computer or technologist could correct for the other's incorrect classification, 242 (96.8%) stress-first images were correctly classified. The composite sensitivity, specificity, PPV, and NPV were 83.3%, 99.5%, 97.2%, and 96.7%, respectively. Conclusion Technologists and automated quantification software had a high degree of agreement with the clinical reference standard for determining the need for rest images in a stress-first imaging protocol. Utilizing an experienced technologist and automated systems to screen stress-first images could expand the use of stress-first MPI to sites where the cardiologist is not immediately available for interpretation. PMID:26566774
Automatic detection of cardiac cycle and measurement of the mitral annulus diameter in 4D TEE images
NASA Astrophysics Data System (ADS)
Graser, Bastian; Hien, Maximilian; Rauch, Helmut; Meinzer, Hans-Peter; Heimann, Tobias
2012-02-01
Mitral regurgitation is a widespread problem. For successful surgical treatment, quantification of the mitral annulus, especially its diameter, is essential. Time-resolved 3D transesophageal echocardiography (TEE) is suitable for this task, yet manual measurement in four dimensions is extremely time consuming, underscoring the need for automatic quantification methods. The method we propose is capable of automatically detecting the cardiac cycle phase (systole or diastole) for each time step and measuring the mitral annulus diameter. This is done using total variation noise filtering, the graph cut segmentation algorithm, and morphological operators. An evaluation was performed using expert measurements on 4D TEE data of 13 patients. The cardiac cycle phase was detected correctly on 78% of all images, and the mitral annulus diameter was measured with an average error of 3.08 mm. Its fully automatic processing makes the method easy to use in the clinical workflow, and it provides the surgeon with helpful information.
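The processing chain above (total variation filtering, segmentation, morphological clean-up) can be approximated with off-the-shelf tools. The sketch below substitutes a simple threshold for the paper's graph cut step and assumes a 2D slice; it shows the shape of the pipeline, not the authors' implementation, and the parameters are illustrative.

    import numpy as np
    from skimage.restoration import denoise_tv_chambolle
    from skimage.morphology import binary_opening, binary_closing, disk

    def segment_slice(img, weight=0.1, thresh=0.5):
        """TV-denoise a TEE slice, threshold it (stand-in for graph cut),
        and clean the mask with morphological operators."""
        smooth = denoise_tv_chambolle(img, weight=weight)
        mask = smooth > thresh * smooth.max()
        mask = binary_opening(mask, disk(2))   # drop speckle
        mask = binary_closing(mask, disk(2))   # close small gaps
        return mask

    slice_ = np.random.rand(128, 128).astype(np.float32)  # placeholder data
    print(segment_slice(slice_).sum(), "foreground pixels")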
Inoue, Koichi; Miyazaki, Yasuto; Unno, Keiko; Min, Jun Zhe; Todoroki, Kenichiro; Toyo'oka, Toshimasa
2016-01-01
In this study, we developed a stable isotope dilution hydrophilic interaction liquid chromatography-tandem mass spectrometry (HILIC-MS/MS) technique for the accurate and simultaneous quantification of glutamic acid (Glu), glutamine (Gln), pyroglutamic acid (pGlu), γ-aminobutyric acid (GABA) and theanine in mouse brain tissues. The quantification of these analytes was accomplished using stable isotope internal standards and the HILIC separation mode to fully correct for intramolecular cyclization during electrospray ionization. Linear calibrations were obtained with high coefficients of correlation (r(2) > 0.999, range from 10 pmol/mL to 50 mol/mL). To study theanine intake, Glu, Gln, pGlu, GABA and theanine were determined in hippocampus and central cortex tissues using the developed method. In the hippocampus, the concentrations of Glu and pGlu were significantly reduced during theanine intake, whereas the concentration of GABA increased. This result indicates that transited theanine affects the metabolic balance of Glu analogs in the hippocampus. Copyright © 2015 John Wiley & Sons, Ltd.
Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib
2016-04-15
In quantitative PET/MR imaging, attenuation correction (AC) of PET data is markedly challenged by the need to derive accurate attenuation maps from MR images. A number of strategies have been developed for MRI-guided attenuation correction with different degrees of success. In this work, we compare the quantitative performance of three generic AC methods, including standard 3-class MR segmentation-based, advanced atlas-registration-based and emission-based approaches in the context of brain time-of-flight (TOF) PET/MRI. Fourteen patients referred for diagnostic MRI and (18)F-FDG PET/CT brain scans were included in this comparative study. For each study, PET images were reconstructed using four different attenuation maps derived from CT-based AC (CTAC) serving as reference, standard 3-class MR segmentation, atlas-registration and emission-based AC methods. To generate 3-class attenuation maps, T1-weighted MRI images were segmented into background air, fat and soft-tissue classes, followed by assignment of constant linear attenuation coefficients of 0, 0.0864 and 0.0975 cm(-1) to each class, respectively. A robust atlas-registration-based AC method was developed for pseudo-CT generation using local weighted fusion of atlases based on their morphological similarity to target MR images. Our recently proposed MRI-guided maximum likelihood reconstruction of activity and attenuation (MLAA) algorithm was employed to estimate the attenuation map from TOF emission data. The performance of the different AC algorithms in terms of prediction of bones and quantification of PET tracer uptake was objectively evaluated with respect to reference CTAC maps and CTAC-PET images. Qualitative evaluation showed that the MLAA-AC method could sparsely estimate bones and accurately differentiate them from air cavities. It was found that the atlas-AC method can accurately predict bones, with variable errors in defining air cavities. Quantitative assessment of bone extraction accuracy based on the Dice similarity coefficient (DSC) showed that MLAA-AC and atlas-AC resulted in mean DSC values of 0.79 and 0.92, respectively, in all patients. The MLAA-AC and atlas-AC methods predicted mean linear attenuation coefficients of 0.107 and 0.134 cm(-1), respectively, for the skull, compared to the reference CTAC mean value of 0.138 cm(-1). The evaluation of the relative change in tracer uptake within 32 distinct regions of the brain with respect to CTAC PET images showed that the 3-class MRAC, MLAA-AC and atlas-AC methods resulted in quantification errors of -16.2 ± 3.6%, -13.3 ± 3.3% and 1.0 ± 3.4%, respectively. Linear regression and Bland-Altman concordance plots showed that both the 3-class MRAC and MLAA-AC methods result in a significant systematic bias in PET tracer uptake, while the atlas-AC method results in a negligible bias. The standard 3-class MRAC method significantly underestimated cerebral PET tracer uptake. While current state-of-the-art MLAA-AC methods look promising, they were unable to noticeably reduce quantification errors in the context of brain imaging. Conversely, the proposed atlas-AC method provided the most accurate attenuation maps, and thus the lowest quantification bias. Copyright © 2016 Elsevier Inc. All rights reserved.
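Bone-extraction accuracy above is reported as the Dice similarity coefficient, DSC = 2|A∩B| / (|A| + |B|), between a predicted bone mask and the CT-derived reference. A minimal sketch:

    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient between two boolean masks."""
        a, b = np.asarray(a, bool), np.asarray(b, bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    pred = np.array([[1, 1, 0], [0, 1, 0]])
    ref  = np.array([[1, 0, 0], [0, 1, 1]])
    print(round(dice(pred, ref), 3))  # 0.667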
2010-01-01
throughout the entire 3D volume, which made quantification of the different tissues in the breast possible. The peaks representing glandular and fat in...coefficients. Keywords: tissue quantification, absolute attenuation coefficient, scatter correction, computed tomography, tomography... tissue types. 1-4 Accurate measurements of the quantification and differentiation of numerous tissues can be useful to identify disease from
Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging
NASA Astrophysics Data System (ADS)
Eldib, Mootaz; Bini, Jason; Robson, Philip M.; Calcagno, Claudia; Faul, David D.; Tsoumpas, Charalampos; Fayad, Zahi A.
2015-06-01
The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3% with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use.
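The core of the method above is registering a CT-derived coil attenuation map to the UTE image of the coil. The sketch below shows only a rigid stage with SimpleITK under assumed file names; the non-rigid refinement and the scatter correction described in the abstract are omitted, so this is a starting-point sketch rather than the authors' implementation.

    import SimpleITK as sitk

    fixed = sitk.ReadImage("ute_coil.nii.gz", sitk.sitkFloat32)        # hypothetical paths
    moving = sitk.ReadImage("ct_coil_mu_map.nii.gz", sitk.sitkFloat32)

    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)

    rigid = reg.Execute(fixed, moving)
    # Resample the mu-map into UTE space for attenuation correction.
    mu_in_ute = sitk.Resample(moving, fixed, rigid, sitk.sitkLinear, 0.0)
    sitk.WriteImage(mu_in_ute, "coil_mu_map_registered.nii.gz")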
Huo, Yinghe; Vincken, Koen L; van der Heijde, Desiree; de Hair, Maria J H; Lafeber, Floris P; Viergever, Max A
2017-11-01
Objective: Wrist joint space narrowing is a main radiographic outcome of rheumatoid arthritis (RA). Yet, automatic radiographic wrist joint space width (JSW) quantification for RA patients has not been widely investigated. The aim of this paper is to present an automatic method to quantify the JSW of three wrist joints that are least affected by bone overlapping and are frequently involved in RA. These joints are located around the scaphoid bone, viz. the multangular-navicular, capitate-navicular-lunate, and radiocarpal joints. Methods: The joint space around the scaphoid bone is detected by using consecutive searches of separate path segments, where each segment location aids in constraining the subsequent one. For joint margin delineation, first the boundary not affected by X-ray projection is extracted, followed by a backtrace process to obtain the actual joint margin. The accuracy of the quantified JSW is evaluated by comparison with the manually obtained ground truth. Results: Two of the 50 radiographs used for evaluation of the method did not yield a correct path through all three wrist joints. The delineated joint margins of the remaining 48 radiographs were used for JSW quantification. It was found that 90% of the joints had a JSW deviating less than 20% from the mean JSW of manual indications, with the mean JSW error less than 10%. Conclusion: The proposed method is able to automatically quantify the JSW of radiographic wrist joints reliably. The proposed method may aid clinical researchers to study the progression of wrist joint damage in RA studies.
Reduction of Solvent Effect in Reverse Phase Gradient Elution LC-ICP-MS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, Patrick Allen
2005-12-17
Quantification in liquid chromatography (LC) is becoming very important as more researchers are using LC, not as an analytical tool itself, but as a sample introduction system for other analytical instruments. The ability of LC instrumentation to quickly separate a wide variety of compounds makes it ideal for analysis of complex mixtures. For elemental speciation, LC is joined with inductively coupled plasma mass spectrometry (ICP-MS) to separate and detect metal-containing organic compounds in complex mixtures, such as biological samples. Often, the solvent gradients required to perform complex separations will cause matrix effects within the plasma. This limits the sensitivity of the ICP-MS and the quantification methods available for use in such analyses. Traditionally, isotope dilution has been the method of choice for LC-ICP-MS quantification. The use of naturally abundant isotopes of a single element in quantification corrects for most of the effects that LC solvent gradients produce within the plasma. However, not all elements of interest in speciation studies have multiple naturally occurring isotopes, and polyatomic interferences for a given isotope can develop within the plasma, depending on the solvent matrix. This is the case for reverse phase LC separations, where increasing amounts of organic solvent are required. For such separations, an alternative to isotope dilution for quantification is needed. To this end, a new method was developed using the Apex-Q desolvation system (ESI, Omaha, NE) to couple LC instrumentation with an ICP-MS device. The desolvation power of the system allowed greater concentrations of methanol to be introduced to the plasma prior to destabilization than with direct methanol injection into the plasma. Studies were performed, using simulated and actual linear methanol gradients, to find analyte-internal standard (AIS) pairs whose ratio remains consistent (deviations ± 10%) over methanol concentration ranges of 5%-35% (simulated) and 8%-32% (actual). Quadrupole (low resolution) and sector field (high resolution) ICP-MS instrumentation were utilized in these studies. Once an AIS pair is determined, quantification studies can be performed. First, an analysis is performed by adding both elements of the AIS pair post-column while performing the gradient elution without sample injection. A comparison of the ratio of the measured intensities to the atomic ratio of the two standards is used to determine a correction factor that can be used to account for the matrix effects caused by the mobile phase. Then, organic and/or biological molecules containing one of the two elements in the AIS pair are injected into the LC column. A gradient method is used to vary the methanol-water mixture in the mobile phase and to separate out the compounds in a given sample. A standard solution of the second ion in the AIS pair is added continuously post-column. By comparing the ratio of the measured intensities to the atomic ratio of the eluting compound and internal standard, the concentration of the injected compound can be determined.
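In the scheme just described, the measured intensity ratio of the AIS pair from a no-injection gradient run is compared with the known atomic ratio to derive a correction factor, which then scales the analyte/IS ratio during sample runs. A schematic calculation with invented numbers:

    def correction_factor(measured_ratio, atomic_ratio):
        """Matrix-effect correction from a no-injection gradient run."""
        return atomic_ratio / measured_ratio

    def analyte_conc(i_analyte, i_is, conc_is, cf):
        """Concentration from the analyte/IS intensity ratio, corrected."""
        return cf * (i_analyte / i_is) * conc_is

    cf = correction_factor(measured_ratio=0.92, atomic_ratio=1.00)
    # Illustrative intensities and a 10 ng/mL internal standard.
    print(round(analyte_conc(i_analyte=4.1e5, i_is=5.0e5, conc_is=10.0, cf=cf), 2))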
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunter, Chad R. R. N.; Kemp, Robert A. de, E-mail: RAdeKemp@ottawaheart.ca; Klein, Ran
Purpose: Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET–CT misalignment. Methods: A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. Results: In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. Simulations also indicated that the magnitude of MBF errors resulting from motion in the superior/inferior and anterior/posterior directions was similar (up to 250%). Body motion effects were more detrimental for higher resolution PET imaging (2 vs 10 mm full-width at half-maximum), and for motion occurring during the mid-to-late time-frames. Motion correction of the reconstructed dynamic image series resulted in significant reduction in MBF errors, but did not account for the residual PET–CTAC misalignment artifacts. MBF bias was reduced further using global partial-volume correction, and using dynamic alignment of the PET projection data to the CT scan for accurate attenuation correction during image reconstruction. Conclusions: Patient body motion can produce MBF estimation errors up to 500%. To reduce these errors, new motion correction algorithms must be effective in identifying motion in the left/right direction, and in the mid-to-late time-frames, since these conditions produce the largest errors in MBF, particularly for high resolution PET imaging. Ideally, motion correction should be done before or during image reconstruction to eliminate PET-CTAC misalignment artifacts.
Grain growth prediction based on data assimilation by implementing 4DVar on multi-phase-field model
NASA Astrophysics Data System (ADS)
Ito, Shin-ichi; Nagao, Hiromichi; Kasuya, Tadashi; Inoue, Junya
2017-12-01
We propose a method to predict grain growth based on data assimilation by using a four-dimensional variational method (4DVar). When implemented on a multi-phase-field model, the proposed method allows us to calculate the predicted grain structures and uncertainties in them that depend on the quality and quantity of the observational data. We confirm through numerical tests involving synthetic data that the proposed method correctly reproduces the true phase-field assumed in advance. Furthermore, it successfully quantifies uncertainties in the predicted grain structures, where such uncertainty quantifications provide valuable information to optimize the experimental design.
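4DVar estimates the initial state by minimizing a cost functional that penalizes departure from a background guess plus the misfit to observations distributed over the assimilation window. The toy sketch below uses a scalar logistic growth model as a stand-in for the multi-phase-field model; everything here (model, noise levels, values) is illustrative only, not the authors' system.

    import numpy as np
    from scipy.optimize import minimize

    def model(x0, nsteps, dt=0.1):
        """Forward model: explicit-Euler logistic growth (toy stand-in)."""
        xs = [x0]
        for _ in range(nsteps):
            xs.append(xs[-1] + dt * xs[-1] * (1.0 - xs[-1]))
        return np.array(xs)

    def cost(x, xb, sig_b, y_obs, t_obs, sig_o):
        """4DVar cost: background term + observation misfit over the window."""
        traj = model(x[0], int(t_obs.max()))
        jb = 0.5 * ((x[0] - xb) / sig_b) ** 2
        jo = 0.5 * np.sum(((traj[t_obs] - y_obs) / sig_o) ** 2)
        return jb + jo

    rng = np.random.default_rng(0)
    truth = model(0.2, 50)
    t_obs = np.arange(5, 50, 5)
    y_obs = truth[t_obs] + rng.normal(0.0, 0.02, t_obs.size)  # synthetic data
    res = minimize(cost, x0=[0.3], args=(0.3, 0.1, y_obs, t_obs, 0.02))
    print("estimated initial state:", res.x[0])  # recovers ~0.2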
Methodological aspects of multicenter studies with quantitative PET.
Boellaard, Ronald
2011-01-01
Quantification of whole-body FDG PET studies is affected by many physiological and physical factors. Much of the variability in reported standardized uptake value (SUV) data seen in the literature results from the variability in methodology applied among these studies, i.e., due to the use of different scanners, acquisition and reconstruction settings, region of interest strategies, SUV normalization, and/or corrections methods. To date, the variability in applied methodology prohibits a proper comparison and exchange of quantitative FDG PET data. Consequently, the promising role of quantitative PET has been demonstrated in several monocentric studies, but these published results cannot be used directly as a guideline for clinical (multicenter) trials performed elsewhere. In this chapter, the main causes affecting whole-body FDG PET quantification and strategies to minimize its inter-institute variability are addressed.
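Much of the SUV variability discussed above enters through the normalization itself. For reference, the body-weight-normalized SUV is the tissue activity concentration divided by the decay-corrected injected dose per body weight; a minimal sketch with illustrative values (F-18 half-life 109.8 min):

    import math

    def suv_bw(c_tissue_bq_ml, dose_bq, weight_g, t_min=0.0, half_life_min=109.8):
        """Body-weight SUV; decay-corrects the injected dose to scan time.
        c_tissue in Bq/mL, dose in Bq, weight in g (1 g ~ 1 mL of tissue)."""
        dose_at_scan = dose_bq * math.exp(-math.log(2) * t_min / half_life_min)
        return c_tissue_bq_ml / (dose_at_scan / weight_g)

    # 370 MBq injected, 75 kg patient, 60 min uptake time.
    print(round(suv_bw(c_tissue_bq_ml=12e3, dose_bq=370e6, weight_g=75e3, t_min=60), 2))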
Technical Note: Deep learning based MRAC using rapid ultra-short echo time imaging.
Jang, Hyungseok; Liu, Fang; Zhao, Gengyan; Bradshaw, Tyler; McMillan, Alan B
2018-05-15
In this study, we explore the feasibility of a novel framework for MR-based attenuation correction for PET/MR imaging based on deep learning via convolutional neural networks, which enables fully automated and robust estimation of a pseudo CT image based on ultrashort echo time (UTE), fat, and water images obtained by a rapid MR acquisition. MR images for MRAC are acquired using dual echo ramped hybrid encoding (dRHE), where both UTE and out-of-phase echo images are obtained within a short single acquisition (35 sec). Tissue labeling of air, soft tissue, and bone in the UTE image is accomplished via a deep learning network that was pre-trained with T1-weighted MR images. UTE images are used as input to the network, which was trained using labels derived from co-registered CT images. The tissue labels estimated by deep learning are refined by a conditional random field based correction. The soft tissue labels are further separated into fat and water components using the two-point Dixon method. The estimated bone, air, fat, and water images are then assigned appropriate Hounsfield units, resulting in a pseudo CT image for PET attenuation correction. To evaluate the proposed MRAC method, PET/MR imaging of the head was performed on 8 human subjects, where Dice similarity coefficients of the estimated tissue labels and relative PET errors were evaluated through comparison to a registered CT image. Dice coefficients for air (within the head), soft tissue, and bone labels were 0.76±0.03, 0.96±0.006, and 0.88±0.01. In PET quantification, the proposed MRAC method produced relative PET errors less than 1% within most brain regions. The proposed MRAC method utilizing deep learning with transfer learning and an efficient dRHE acquisition enables reliable PET quantification with accurate and rapid pseudo CT generation. This article is protected by copyright. All rights reserved.
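The two-point Dixon separation used above relies on water and fat signals adding in-phase and subtracting out-of-phase, so W = (IP + OP)/2 and F = (IP - OP)/2. A bare-bones sketch on magnitude images; real implementations must additionally correct B0-induced phase errors:

    import numpy as np

    def two_point_dixon(in_phase, out_phase):
        """Naive water/fat separation from in-phase and out-of-phase images.
        Ignores B0 phase errors, which practical methods must handle."""
        water = 0.5 * (in_phase + out_phase)
        fat = 0.5 * (in_phase - out_phase)
        return water, fat

    ip = np.array([[2.0, 1.0], [1.5, 0.4]])   # toy pixel values
    op = np.array([[1.0, 1.0], [0.5, 0.2]])
    w, f = two_point_dixon(ip, op)
    print(w, f, sep="\n")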
Schryvers, D; Salje, E K H; Nishida, M; De Backer, A; Idrissi, H; Van Aert, S
2017-05-01
The present contribution reviews recent quantification work on atom displacements, atom site occupations and the level of crystallinity in various systems, based on aberration-corrected HR(S)TEM images. Depending on the case studied, picometer-range precision for individual distances can be obtained, boundary widths determined at the unit-cell level, or statistical evolutions of the fractions of ordered areas calculated. In all of these cases, these quantitative measures imply new routes for the applications of the respective materials. Copyright © 2017 Elsevier B.V. All rights reserved.
Wigg, Jonathan P.; Zhang, Hong; Yang, Dong
2015-01-01
Introduction In-vivo imaging of choroidal neovascularization (CNV) has been increasingly recognized as a valuable tool in the investigation of age-related macular degeneration (AMD) in both clinical and basic research applications. Arguably the most widely utilised model replicating AMD is laser-generated CNV by rupture of Bruch's membrane in rodents. Heretofore, CNV evaluation via in-vivo imaging techniques has been hamstrung by the lack of an appropriate rodent fundus camera and of a standardised analysis method. The aim of this study was to establish a simple, quantifiable method of fluorescein fundus angiogram (FFA) image analysis for CNV lesions. Methods Laser was applied to 32 Brown Norway rats; FFA images were taken using a rodent-specific fundus camera (Micron III, Phoenix Laboratories) over 3 weeks and compared to conventional ex-vivo CNV assessment. FFA images acquired with fluorescein administered by intraperitoneal injection and intravenous injection were compared and shown to greatly influence lesion properties. Utilising commonly used software packages, FFA images were assessed for CNV and chorioretinal burn lesion area by manually outlining the maximum border of each lesion and normalising against the optic nerve head. Net fluorescence above background and the derived value of area-corrected lesion intensity were calculated. Results CNV lesions of rats treated with anti-VEGF antibody were significantly smaller in normalised lesion area (p<0.001) and fluorescent intensity (p<0.001) than those of the PBS-treated controls two weeks post laser. The calculated area-corrected lesion intensity was significantly smaller (p<0.001) in anti-VEGF-treated animals at 2 and 3 weeks post laser. The results obtained using FFA correlated with, and were confirmed by, conventional lesion area measurements from isolectin-stained choroidal flatmounts, where lesions of anti-VEGF-treated rats were significantly smaller at 2 weeks (p = 0.049) and 3 weeks (p<0.001) post laser. Conclusion The presented method of in-vivo FFA quantification of CNV, including acquisition variable corrections, using the Micron III system and common-use software establishes a reliable method for detecting and quantifying CNV, enabling longitudinal studies, and represents an important alternative to conventional CNV quantification methods. PMID:26024231
Liao, Hsiao-Wei; Chen, Guan-Yuan; Wu, Ming-Shiang; Liao, Wei-Chih; Lin, Ching-Hung; Kuo, Ching-Hua
2017-02-03
Quantitative metabolomics has become much more important in clinical research in recent years. Individual differences in matrix effects (MEs) and the injection order effect are two major factors that reduce the quantification accuracy in liquid chromatography-electrospray ionization-mass spectrometry-based (LC-ESI-MS) metabolomics studies. This study proposed a postcolumn infused-internal standard (PCI-IS) combined with a matrix normalization factor (MNF) strategy to improve the analytical accuracy of quantitative metabolomics. The PCI-IS combined with the MNF method was applied for a targeted metabolomics study of amino acids (AAs). D8-Phenylalanine was used as the PCI-IS, and it was postcolumn-infused into the ESI interface for calibration purposes. The MNF was used to bridge the AA response in a standard solution with the plasma samples. The MEs caused signal changes that were corrected by dividing the AA signal intensities by the PCI-IS intensities after adjustment with the MNF. After the method validation, we evaluated the method applicability for breast cancer research using 100 plasma samples. The quantification results revealed that the 11 tested AAs exhibit an accuracy between 88.2 and 110.7%. The principal component analysis score plot revealed that the injection order effect can be successfully removed, and most of the within-group variation of the tested AAs decreased after the PCI-IS correction. Finally, targeted metabolomics studies on the AAs showed that tryptophan was expressed more in malignant patients than in the benign group. We anticipate that a similar approach can be applied to other endogenous metabolites to facilitate quantitative metabolomics studies.
A universal real-time PCR assay for the quantification of group-M HIV-1 proviral load.
Malnati, Mauro S; Scarlatti, Gabriella; Gatto, Francesca; Salvatori, Francesca; Cassina, Giulia; Rutigliano, Teresa; Volpi, Rosy; Lusso, Paolo
2008-01-01
Quantification of human immunodeficiency virus type-1 (HIV-1) proviral DNA is increasingly used to measure the HIV-1 cellular reservoirs, a helpful marker to evaluate the efficacy of antiretroviral therapeutic regimens in HIV-1-infected individuals. Furthermore, the proviral DNA load represents a specific marker for the early diagnosis of perinatal HIV-1 infection and might be predictive of HIV-1 disease progression independently of plasma HIV-1 RNA levels and CD4(+) T-cell counts. The high degree of genetic variability of HIV-1 poses a serious challenge for the design of a universal quantitative assay capable of detecting all the genetic subtypes within the main (M) HIV-1 group with similar efficiency. Here, we describe a highly sensitive real-time PCR protocol that allows for the correct quantification of virtually all group-M HIV-1 strains with a higher degree of accuracy compared with other methods. The protocol involves three stages, namely DNA extraction/lysis, cellular DNA quantification and HIV-1 proviral load assessment. Owing to the robustness of the PCR design, this assay can be performed on crude cellular extracts, and therefore it may be suitable for the routine analysis of clinical samples even in developing countries. An accurate quantification of the HIV-1 proviral load can be achieved within 1 d from blood withdrawal.
Consistency of flow quantifications in tridirectional phase-contrast MRI
NASA Astrophysics Data System (ADS)
Unterhinninghofen, R.; Ley, S.; Dillmann, R.
2009-02-01
Tridirectionally encoded phase-contrast MRI is a technique to non-invasively acquire time-resolved velocity vector fields of blood flow. These may not only be used to analyze pathological flow patterns, but also to quantify flow at arbitrary positions within the acquired volume. In this paper we examine the validity of this approach by analyzing the consistency of related quantifications instead of comparing it with an external reference measurement. Datasets of the thoracic aorta were acquired from 6 pigs, 1 healthy volunteer and 3 patients with artificial aortic valves. Using in-house software an elliptical flow quantification plane was placed manually at 6 positions along the descending aorta where it was rotated to 5 different angles. For each configuration flow was computed based on the original data and data that had been corrected for phase offsets. Results reveal that quantifications are more dependent on changes in position than on changes in angle. Phase offset correction considerably reduces this dependency. Overall consistency is good with a maximum variation coefficient of 9.9% and a mean variation coefficient of 7.2%.
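Flow through a quantification plane is obtained by integrating the through-plane velocity component over the plane's area, Q = Σ v⊥ ΔA. A minimal sketch, assuming the velocity vectors have already been projected onto the plane normal and the lumen has been masked:

    import numpy as np

    def flow_ml_per_s(v_normal_cm_s, pixel_area_cm2, mask):
        """Flow through a plane: sum of through-plane velocities times pixel
        area; 1 cm^3/s = 1 mL/s."""
        return float(np.sum(v_normal_cm_s[mask]) * pixel_area_cm2)

    v = np.full((4, 4), 20.0)                      # cm/s, toy uniform profile
    lumen = np.zeros((4, 4), bool)
    lumen[1:3, 1:3] = True                         # 4-pixel lumen mask
    print(flow_ml_per_s(v, pixel_area_cm2=0.04, mask=lumen))  # 3.2 mL/s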
Onofrejová, Lucia; Farková, Marta; Preisler, Jan
2009-04-13
The application of an internal standard in quantitative analysis is desirable in order to correct for variations in sample preparation and instrumental response. In mass spectrometry of organic compounds, the internal standard is preferably labelled with a stable isotope, such as (18)O, (15)N or (13)C. In this study, a method for the quantification of fructo-oligosaccharides using matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI TOF MS) was proposed and tested on raftilose, a partially hydrolysed inulin with a degree of polymerisation of 2-7. The tetrasaccharide nystose, which is chemically identical to the raftilose tetramer, was used as an internal standard rather than an isotope-labelled analyte. Two mathematical approaches used for data processing, conventional calculations and artificial neural networks (ANN), were compared. The conventional data processing relies on the assumption that a constant oligomer dispersion profile will change after the addition of the internal standard, and on some simple numerical calculations. On the other hand, ANN was found to compensate for a non-linear MALDI response and variations in the oligomer dispersion profile with raftilose concentration. As a result, the application of ANN led to lower quantification errors and excellent day-to-day repeatability compared to the conventional data analysis. The developed method is feasible for MS quantification of raftilose in the range of 10-750 pg with errors below 7%. The content of raftilose was determined in dietary cream; the application can be extended to other similar polymers. It should be stressed that no special optimisation of the MALDI process was carried out. A common MALDI matrix and sample preparation were used and only the basic parameters, such as sampling and laser energy, were optimised prior to quantification.
Yoshinaga, Kazuaki; Obi, Junji; Nagai, Toshiharu; Iioka, Hiroyuki; Yoshida, Akihiko; Beppu, Fumiaki; Gotoh, Naohiro
2017-03-01
In the present study, the resolution parameters and correction factors (CFs) of triacylglycerol (TAG) standards were estimated by gas chromatography-flame ionization detector (GC-FID) to achieve the precise quantification of the TAG composition in edible fats and oils. Forty seven TAG standards comprising capric acid, lauric acid, myristic acid, pentadecanoic acid, palmitic acid, palmitoleic acid, stearic acid, oleic acid, linoleic acid, and/or linolenic acid were analyzed, and the CFs of these TAGs were obtained against tripentadecanoyl glycerol as the internal standard. The capillary column was Ultra ALLOY + -65 (30 m × 0.25 mm i.d., 0.10 μm thickness) and the column temperature was programmed to rise from 250°C to 360°C at 4°C/min and then hold for 25 min. The limit of detection (LOD) and limit of quantification (LOQ) values of the TAG standards were > 0.10 mg and > 0.32 mg per 100 mg fat and oil, respectively, except for LnLnLn, and the LOD and LOQ values of LnLnLn were 0.55 mg and 1.84 mg per 100 mg fat and oil, respectively. The CFs of TAG standards decreased with increasing total acyl carbon number and degree of desaturation of TAG molecules. Also, there were no remarkable differences in the CFs between TAG positional isomers such as 1-palmitoyl-2-oleoyl-3-stearoyl-rac-glycerol, 1-stearoyl-2-palmitoyl-3-oleoyl-rac-glycerol, and 1-palmitoyl-2-stearoyl-3-oleoyl-rac-glycerol, which cannot be separated by GC-FID. Furthermore, this method was able to predict the CFs of heterogeneous (AAB- and ABC-type) TAGs from the CFs of homogenous (AAA-, BBB-, and CCC-type) TAGs. In addition, the TAG composition in cocoa butter, palm oil, and canola oil was determined using CFs, and the results were found to be in good agreement with those reported in the literature. Therefore, the GC-FID method using CFs can be successfully used for the quantification of TAG molecular species in natural fats and oils.
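Quantification with the internal standard and the CFs reduces to a ratio calculation: the amount of each TAG species is its peak area relative to the internal standard's, scaled by the species' CF and the spiked standard amount. A schematic example with invented areas and an illustrative CF:

    def tag_amount_mg(area_tag, area_is, amount_is_mg, cf):
        """Amount of a TAG species from GC-FID peak areas, using a correction
        factor (CF) referenced to the internal standard."""
        return cf * (area_tag / area_is) * amount_is_mg

    # Tripentadecanoyl glycerol internal standard, 5 mg spiked (illustrative).
    print(round(tag_amount_mg(area_tag=8.2e5, area_is=6.5e5,
                              amount_is_mg=5.0, cf=1.08), 2))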
Normalized Polarization Ratios for the Analysis of Cell Polarity
Shimoni, Raz; Pham, Kim; Yassin, Mohammed; Ludford-Menting, Mandy J.; Gu, Min; Russell, Sarah M.
2014-01-01
The quantification and analysis of molecular localization in living cells is increasingly important for elucidating biological pathways, and new methods are rapidly emerging. The quantification of cell polarity has generated much interest recently, and ratiometric analysis of fluorescence microscopy images provides one means to quantify cell polarity. However, detection of fluorescence, and the ratiometric measurement, is likely to be sensitive to acquisition settings and image processing parameters. Using imaging of EGFP-expressing cells and computer simulations of variations in fluorescence ratios, we characterized the dependence of ratiometric measurements on processing parameters. This analysis showed that image settings alter polarization measurements; and that clustered localization is more susceptible to artifacts than homogeneous localization. To correct for such inconsistencies, we developed and validated a method for choosing the most appropriate analysis settings, and for incorporating internal controls to ensure fidelity of polarity measurements. This approach is applicable to testing polarity in all cells where the axis of polarity is known. PMID:24963926
Consideration of Kaolinite Interference Correction for Quartz Measurements in Coal Mine Dust
Lee, Taekhee; Chisholm, William P.; Kashon, Michael; Key-Schwartz, Rosa J.; Harper, Martin
2015-01-01
Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed “deviation,” not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected. PMID:23767881
Dippold, Michaela A; Boesel, Stefanie; Gunina, Anna; Kuzyakov, Yakov; Glaser, Bruno
2014-03-30
Amino sugars build up microbial cell walls and are important components of soil organic matter. To evaluate their sources and turnover, δ(13)C analysis of soil-derived amino sugars by liquid chromatography was recently suggested. However, amino sugar δ(13)C determination remains challenging due to (1) a strong matrix effect, (2) CO2-binding by alkaline eluents, and (3) strongly different chromatographic behavior and concentrations of basic and acidic amino sugars. To overcome these difficulties we established an ion chromatography-oxidation-isotope ratio mass spectrometry method to improve and facilitate soil amino sugar analysis. After acid hydrolysis of soil samples, the extract was purified from salts and other components impeding chromatographic resolution. The amino sugar concentrations and δ(13)C values were determined by coupling an ion chromatograph to an isotope ratio mass spectrometer. The accuracy and precision of quantification and δ(13)C determination were assessed. Internal standards enabled correction for losses during analysis, with a relative standard deviation <6%. The higher magnitude peaks of basic than of acidic amino sugars required an amount-dependent correction of δ(13)C values. This correction improved the accuracy of the determination of δ(13)C values to <1.5‰ and the precision to <0.5‰ for basic and acidic amino sugars in a single run. This method enables parallel quantification and δ(13)C determination of basic and acidic amino sugars in a single chromatogram due to the advantages of coupling an ion chromatograph to the isotope ratio mass spectrometer. Small adjustments of sample amount and injection volume are necessary to optimize precision and accuracy for individual soils. Copyright © 2014 John Wiley & Sons, Ltd.
Bliem, Rupert; Schauer, Sonja; Plicka, Helga; Obwaller, Adelheid; Sommer, Regina; Steinrigl, Adolf; Alam, Munirul; Reischer, Georg H.; Farnleitner, Andreas H.
2015-01-01
Vibrio cholerae is a severe human pathogen and a frequent member of aquatic ecosystems. Quantification of V. cholerae in environmental water samples is therefore fundamental for ecological studies and health risk assessment. Beside time-consuming cultivation techniques, quantitative PCR (qPCR) has the potential to provide reliable quantitative data and offers the opportunity to quantify multiple targets simultaneously. A novel triplex qPCR strategy was developed in order to simultaneously quantify toxigenic and nontoxigenic V. cholerae in environmental water samples. To obtain quality-controlled PCR results, an internal amplification control was included. The qPCR assay was specific, highly sensitive, and quantitative across the tested 5-log dynamic range down to a method detection limit of 5 copies per reaction. Repeatability and reproducibility were high for all three tested target genes. For environmental application, global DNA recovery (GR) rates were assessed for drinking water, river water, and water from different lakes. GR rates ranged from 1.6% to 76.4% and were dependent on the environmental background. Uncorrected and GR-corrected V. cholerae abundances were determined in two lakes with extremely high turbidity. Uncorrected abundances ranged from 4.6 × 10² to 2.3 × 10⁴ cell equivalents liter⁻¹, whereas GR-corrected abundances ranged from 4.7 × 10³ to 1.6 × 10⁶ cell equivalents liter⁻¹. GR-corrected qPCR results were in good agreement with an independent cell-based direct detection method but were up to 1.6 log higher than cultivation-based abundances. We recommend the newly developed triplex qPCR strategy as a powerful tool to simultaneously quantify toxigenic and nontoxigenic V. cholerae in various aquatic environments for ecological studies as well as for risk assessment programs. PMID:25724966
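The recovery correction above amounts to dividing the measured copy number by the global DNA recovery rate determined for that water matrix. A one-line sketch with values in the reported range (purely illustrative pairing of abundance and recovery):

    def gr_corrected(cell_equiv_per_l, recovery_fraction):
        """Correct a qPCR abundance for global DNA recovery (GR)."""
        return cell_equiv_per_l / recovery_fraction

    # e.g. 4.6e2 cell equivalents/L measured at 10% recovery -> 4.6e3 corrected
    print(f"{gr_corrected(4.6e2, 0.10):.1e}")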
Effects of Regularisation Priors and Anatomical Partial Volume Correction on Dynamic PET Data
NASA Astrophysics Data System (ADS)
Caldeira, Liliana L.; Silva, Nuno da; Scheins, Jürgen J.; Gaens, Michaela E.; Shah, N. Jon
2015-08-01
Dynamic PET provides temporal information about the tracer uptake. However, each PET frame has usually low statistics, resulting in noisy images. Furthermore, PET images suffer from partial volume effects. The goal of this study is to understand the effects of prior regularisation on dynamic PET data and subsequent anatomical partial volume correction. The Median Root Prior (MRP) regularisation method was used in this work during reconstruction. The quantification and noise in image-domain and time-domain (time-activity curves) as well as the impact on parametric images is assessed and compared with Ordinary Poisson Ordered Subset Expectation Maximisation (OP-OSEM) reconstruction with and without Gaussian filter. This study shows the improvement in PET images and time-activity curves (TAC) in terms of noise as well as in the parametric images when using prior regularisation in dynamic PET data. Anatomical partial volume correction improves the TAC and consequently, parametric images. Therefore, the use of MRP with anatomical partial volume correction is of interest for dynamic PET studies.
Rieger, Benedikt; Zimmer, Fabian; Zapp, Jascha; Weingärtner, Sebastian; Schad, Lothar R
2017-11-01
To develop an implementation of the magnetic resonance fingerprinting (MRF) paradigm for quantitative imaging using echo-planar imaging (EPI) for simultaneous assessment of T1 and T2*. The proposed MRF method (MRF-EPI) is based on the acquisition of 160 gradient-spoiled EPI images with rapid, parallel-imaging accelerated, Cartesian readout and a measurement time of 10 s per slice. Contrast variation is induced using an initial inversion pulse, and varying the flip angles, echo times, and repetition times throughout the sequence. Joint quantification of T1 and T2* is performed using dictionary matching with integrated B1+ correction. The quantification accuracy of the method was validated in phantom scans and in vivo in 6 healthy subjects. Joint T1 and T2* parameter maps acquired with MRF-EPI in phantoms are in good agreement with reference measurements, showing deviations under 5% and 4% for T1 and T2*, respectively. In vivo baseline images were visually free of artifacts. In vivo relaxation times are in good agreement with gold-standard techniques (deviation T1: 4 ± 2%, T2*: 4 ± 5%). The visual quality was comparable to the in vivo gold standard, despite substantially shortened scan times. The proposed MRF-EPI method provides fast and accurate T1 and T2* quantification. This approach offers a rapid supplement to the non-Cartesian MRF portfolio, with potentially increased usability and robustness. Magn Reson Med 78:1724-1733, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
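MRF dictionary matching selects, for each voxel, the simulated signal evolution with the highest normalized inner product against the measured fingerprint; the matched entry's parameters (here T1 and T2*) populate the maps. A minimal sketch with a random stand-in dictionary; a real dictionary would come from Bloch simulation of the sequence:

    import numpy as np

    def match_fingerprint(signal, dictionary, params):
        """Return parameters of the dictionary atom best matching `signal`.
        dictionary: (n_atoms, n_frames); params: (n_atoms, ...) lookup."""
        d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
        s = signal / np.linalg.norm(signal)
        return params[int(np.argmax(d @ s))]

    rng = np.random.default_rng(0)
    dic = rng.standard_normal((5000, 160))                   # 160 frames, as in MRF-EPI
    t1_t2s = rng.uniform([200, 10], [3000, 100], (5000, 2))  # ms, illustrative grid
    print(match_fingerprint(dic[1234], dic, t1_t2s))         # recovers atom 1234's params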
Lee, Noah; Laine, Andrew F; Smith, R Theodore
2007-01-01
Fundus auto-fluorescence (FAF) images with hypo-fluorescence indicate geographic atrophy (GA) of the retinal pigment epithelium (RPE) in age-related macular degeneration (AMD). Manual quantification of GA is time consuming and prone to inter- and intra-observer variability. Automatic quantification is important for determining disease progression and facilitating clinical diagnosis of AMD. In this paper we describe a hybrid segmentation method for GA quantification that identifies hypo-fluorescent GA regions and distinguishes them from other interfering retinal vessel structures. First, we employ background illumination correction exploiting a non-linear adaptive smoothing operator. Then, we use the level set framework to perform segmentation of hypo-fluorescent areas. Finally, we present an energy function combining morphological scale-space analysis with a geometric model-based approach to perform segmentation refinement of false positive hypo-fluorescent areas due to interfering retinal structures. The clinically apparent areas of hypo-fluorescence were drawn by an expert grader and compared on a pixel-by-pixel basis to our segmentation results. The mean sensitivity and specificity from the ROC analysis were 0.89 and 0.98, respectively.
Boncyk, Wayne C.; Markham, Brian L.; Barker, John L.; Helder, Dennis
1996-01-01
The Landsat-7 Image Assessment System (IAS), part of the Landsat-7 Ground System, will calibrate and evaluate the radiometric and geometric performance of the Enhanced Thematic Mapper Plus (ETM +) instrument. The IAS incorporates new instrument radiometric artifact correction and absolute radiometric calibration techniques which overcome some limitations to calibration accuracy inherent in historical calibration methods. Knowledge of ETM + instrument characteristics gleaned from analysis of archival Thematic Mapper in-flight data and from ETM + prelaunch tests allow the determination and quantification of the sources of instrument artifacts. This a priori knowledge will be utilized in IAS algorithms designed to minimize the effects of the noise sources before calibration, in both ETM + image and calibration data.
Suhr, Anna Catharina; Vogeser, Michael; Grimm, Stefanie H
2016-05-30
For reliable quantitative analysis of endogenous analytes in complex biological samples by isotope dilution LC-MS/MS, the creation of appropriate calibrators is a challenge, since analyte-free authentic material is in general not available. Thus, surrogate matrices are often used to prepare calibrators and controls. However, currently employed validation protocols do not include specific experiments to verify the suitability of a surrogate matrix calibration for quantification of authentic matrix samples. The aim of the study was the development of a novel validation experiment to test whether surrogate matrix based calibrators enable correct quantification of authentic matrix samples. The key element of the novel validation experiment is the inversion of nonlabelled analytes and their stable isotope labelled (SIL) counterparts in respect to their functions, i.e. the SIL compound is the analyte and the nonlabelled substance is employed as internal standard. As a consequence, both surrogate and authentic matrix are analyte-free regarding the SIL analytes, which allows a comparison of both matrices. We called this approach the Isotope Inversion Experiment. As the figure of merit we defined the accuracy of inverse quality controls in authentic matrix quantified by means of a surrogate matrix calibration curve. As a proof-of-concept application a LC-MS/MS assay addressing six corticosteroids (cortisol, cortisone, corticosterone, 11-deoxycortisol, 11-deoxycorticosterone, and 17-OH-progesterone) was chosen. The integration of the Isotope Inversion Experiment in the validation protocol for the steroid assay was successfully realized. The accuracy results of the inverse quality controls were satisfactory overall. As a consequence, the suitability of a surrogate matrix calibration for quantification of the targeted steroids in human serum as authentic matrix could be successfully demonstrated. The Isotope Inversion Experiment fills a gap in the validation process for LC-MS/MS assays quantifying endogenous analytes. We consider it a valuable and convenient tool to evaluate the correct quantification of authentic matrix samples based on a calibration curve in surrogate matrix. Copyright © 2016 Elsevier B.V. All rights reserved.
Sedgewick, Gerald J.; Ericson, Marna
2015-01-01
Obtaining digital images of color brightfield microscopy is an important aspect of biomedical research and the clinical practice of diagnostic pathology. Although the field of digital pathology has had tremendous advances in whole-slide imaging systems, little effort has been directed toward standardizing color brightfield digital imaging to maintain image-to-image consistency and tonal linearity. Using a single camera and microscope to obtain digital images of three stains, we show that microscope and camera systems inherently produce image-to-image variation. Moreover, we demonstrate that post-processing with a widely used raster graphics editor software program does not completely correct for session-to-session inconsistency. We introduce a reliable method for creating consistent images with a hardware/software solution (ChromaCal™; Datacolor Inc., NJ) along with its features for creating color standardization, preserving linear tonal levels, providing automated white balancing and setting automated brightness to consistent levels. The resulting image consistency using this method will also streamline mean density and morphometry measurements, as images are easily segmented and single thresholds can be used. We suggest that this is a superior method for color brightfield imaging, which can be used for quantification and can be readily incorporated into workflows. PMID:25575568
Varrone, Andrea; Dickson, John C; Tossici-Bolt, Livia; Sera, Terez; Asenbaum, Susanne; Booij, Jan; Kapucu, Ozlem L; Kluge, Andreas; Knudsen, Gitte M; Koulibaly, Pierre Malick; Nobili, Flavio; Pagani, Marco; Sabri, Osama; Vander Borght, Thierry; Van Laere, Koen; Tatsch, Klaus
2013-01-01
Dopamine transporter (DAT) imaging with [(123)I]FP-CIT (DaTSCAN) is an established diagnostic tool in parkinsonism and dementia. Although qualitative assessment criteria are available, DAT quantification is important for research and for completion of a diagnostic evaluation. One critical aspect of quantification is the availability of normative data, considering possible age and gender effects on DAT availability. The aim of the European Normal Control Database of DaTSCAN (ENC-DAT) study was to generate a large database of [(123)I]FP-CIT SPECT scans in healthy controls. SPECT data from 139 healthy controls (74 men, 65 women; age range 20-83 years, mean 53 years) acquired in 13 different centres were included. Images were reconstructed using the ordered-subset expectation-maximization algorithm without correction (NOACSC), with attenuation correction (AC), and with both attenuation and scatter correction using the triple-energy window method (ACSC). Region-of-interest analysis was performed using the BRASS software (caudate and putamen), and the Southampton method (striatum). The outcome measure was the specific binding ratio (SBR). A significant effect of age on SBR was found for all data. Gender had a significant effect on SBR in the caudate and putamen for the NOACSC and AC data, and only in the left caudate for the ACSC data (BRASS method). Significant effects of age and gender on striatal SBR were observed for all data analysed with the Southampton method. Overall, there was a significant age-related decline in SBR of between 4 % and 6.7 % per decade. This study provides a large database of [(123)I]FP-CIT SPECT scans in healthy controls across a wide age range and with balanced gender representation. Higher DAT availability was found in women than in men. An average age-related decline in DAT availability of 5.5 % per decade was found for both genders, in agreement with previous reports. The data collected in this study may serve as a reference database for nuclear medicine centres and for clinical trials using [(123)I]FP-CIT SPECT as the imaging marker.
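The specific binding ratio used throughout is the background-subtracted striatal concentration over the background (non-displaceable) concentration, SBR = (C_striatum - C_ref)/C_ref. A minimal sketch, with a crude age adjustment based on the ~5.5% per decade decline reported above; the count values are invented:

    def sbr(c_target, c_reference):
        """Specific binding ratio from mean ROI counts."""
        return (c_target - c_reference) / c_reference

    def expected_sbr(sbr_young, decades, decline_per_decade=0.055):
        """Crude age adjustment assuming a linear ~5.5%/decade decline."""
        return sbr_young * (1.0 - decline_per_decade * decades)

    print(round(sbr(c_target=95.0, c_reference=30.0), 2))   # 2.17
    print(round(expected_sbr(2.17, decades=3), 2))          # 1.81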
Open-path FTIR data reduction algorithm with atmospheric absorption corrections: the NONLIN code
NASA Astrophysics Data System (ADS)
Phillips, William; Russwurm, George M.
1999-02-01
This paper describes the progress made to date in developing, testing, and refining a data reduction computer code, NONLIN, that alleviates many of the difficulties experienced in the analysis of open path FTIR data. Among the problems that currently affect FTIR open path data quality are: the inability to obtain a true I0 (background) spectrum, spectral interferences of atmospheric gases such as water vapor and carbon dioxide, and matching the spectral resolution and shift of the reference spectra to a particular field instrument. This algorithm is based on a non-linear fitting scheme and is therefore not constrained by many of the assumptions required for the application of linear methods such as classical least squares (CLS). As a result, a more realistic mathematical model of the spectral absorption measurement process can be employed in the curve fitting process. Applications of the algorithm have proven successful in circumventing open path data reduction problems. However, recent studies, by one of the authors, of the temperature and pressure effects on atmospheric absorption indicate there exist temperature and water partial pressure effects that should be incorporated into the NONLIN algorithm for accurate quantification of gas concentrations. This paper investigates the sources of these phenomena. As a result of this study, a partial pressure correction has been employed in the NONLIN computer code. Two typical field spectra are examined to determine what effect the partial pressure correction has on gas quantification.
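The non-linear fitting at the heart of such a code avoids the linear-absorbance assumption of CLS by fitting transmittance directly via Beer-Lambert, T(ν) = exp(-σ(ν)·c·L). The sketch below fits a single-gas concentration with scipy; the cross-section values and path length are invented placeholders, not NONLIN's actual model.

    import numpy as np
    from scipy.optimize import curve_fit

    sigma = np.array([0.8, 2.0, 3.5, 2.0, 0.8])   # cross-section (toy, per ppm*m)
    path_m = 100.0                                 # open path length

    def transmittance(_, conc_ppm):
        """Beer-Lambert transmittance for one gas; `_` is a dummy x-axis."""
        return np.exp(-sigma * conc_ppm * path_m * 1e-4)

    true_c = 12.0
    noise = 1 + 0.01 * np.random.default_rng(1).standard_normal(5)
    meas = transmittance(None, true_c) * noise     # synthetic measurement
    (c_fit,), _ = curve_fit(transmittance, np.arange(5), meas, p0=[1.0])
    print(round(c_fit, 2))  # ~12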
Hautvast, Gilion L T F; Salton, Carol J; Chuang, Michael L; Breeuwer, Marcel; O'Donnell, Christopher J; Manning, Warren J
2012-05-01
Quantitative analysis of short-axis functional cardiac magnetic resonance images can be performed using automatic contour detection methods. The resulting myocardial contours must be reviewed and possibly corrected, which can be time-consuming, particularly when performed across all cardiac phases. We quantified the impact of manual contour corrections on both analysis time and quantitative measurements obtained from left ventricular short-axis cine images acquired from 1555 participants of the Framingham Heart Study Offspring cohort using computer-aided contour detection methods. The total analysis time for a single case was 7.6 ± 1.7 min for an average of 221 ± 36 myocardial contours per participant. This included 4.8 ± 1.6 min for manual contour correction of 2% of all automatically detected endocardial contours and 8% of all automatically detected epicardial contours. However, the impact of these corrections on global left ventricular parameters was limited, introducing differences of 0.4 ± 4.1 mL for end-diastolic volume, -0.3 ± 2.9 mL for end-systolic volume, 0.7 ± 3.1 mL for stroke volume, and 0.3 ± 1.8% for ejection fraction. We conclude that left ventricular functional parameters can be obtained in under 5 min from short-axis functional cardiac magnetic resonance images using automatic contour detection methods. Manual correction more than doubles analysis time, with minimal impact on left ventricular volumes and ejection fraction. Copyright © 2011 Wiley Periodicals, Inc.
Pekar, Heidi; Westerberg, Erik; Bruno, Oscar; Lääne, Ants; Persson, Kenneth M; Sundström, L Fredrik; Thim, Anna-Maria
2016-01-15
Freshwater blooms of cyanobacteria (blue-green algae) in source waters are generally composed of several different strains with the capability to produce a variety of toxins. The major exposure routes for humans are direct contact with recreational waters and ingestion of drinking water not efficiently treated. The ultra-high pressure liquid chromatography tandem mass spectrometry based analytical method presented here allows simultaneous analysis of 22 cyanotoxins from different toxin groups, including anatoxins, cylindrospermopsins, nodularin and microcystins, in raw water and drinking water. The use of reference standards enables correct identification of the toxins as well as precise quantification; owing to matrix effects, recovery correction is required. The multi-toxin group method presented here does not compromise sensitivity despite the large number of analytes. The limit of quantification was set to 0.1 μg/L for 75% of the cyanotoxins in drinking water and 0.5 μg/L for all cyanotoxins in raw water, which is compliant with the WHO guidance value for microcystin-LR. The matrix effects experienced during analysis were reasonable for most analytes, considering the large volume injected into the mass spectrometer. The time of analysis, including lysing of cell-bound toxins, is less than three hours. Furthermore, the method was tested in Swedish source waters and infiltration ponds, providing evidence of the presence of anatoxin, homo-anatoxin, cylindrospermopsin and several variants of microcystins for the first time in Sweden and proving its usefulness. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
Chang, Guoping; Chang, Tingting; Pan, Tinsu; Clark, John W; Mawlawi, Osama R
2010-12-01
Respiratory motion artifacts and partial volume effects (PVEs) are two degrading factors that affect the accuracy of image quantification in PET/CT imaging. In this article, the authors propose a joint motion and PVE correction approach (JMPC) to improve PET quantification by simultaneously correcting for respiratory motion artifacts and PVE in patients with lung/thoracic cancer. The objective of this article is to describe this approach and evaluate its performance using phantom and patient studies. The proposed joint correction approach incorporates a model of motion blurring, PVE, and object size/shape. A motion blurring kernel (MBK) is then estimated from the deconvolution of the joint model, while the activity concentration (AC) of the tumor is estimated from the normalization of the derived MBK. To evaluate the performance of this approach, two phantom studies and eight patient studies were performed. In the phantom studies, two motion waveforms, a linear sinusoidal and a circular motion, were used to control the motion of a sphere, while in the patient studies, all participants were instructed to breathe regularly. For the phantom studies, the resultant MBK was compared to the true MBK by measuring a correlation coefficient between the two kernels. The measured sphere AC derived from the proposed method was compared to the true AC as well as the ACs in images exhibiting PVE only and images exhibiting both PVE and motion blurring. For the patient studies, the resultant MBK was compared to the motion extent derived from a 4D-CT study, while the measured tumor AC was compared to the AC in images exhibiting both PVE and motion blurring. For the phantom studies, the estimated MBK approximated the true MBK with an average correlation coefficient of 0.91. The tumor ACs following the joint correction technique were similar to the true AC with an average difference of 2%. Furthermore, the tumor ACs on the PVE-only images and on the images with both motion blur and PVE were, on average, 75% and 47.5% (10%) of the true AC, respectively, for the linear (circular) motion phantom study. For the patient studies, the maximum and mean AC/SUV on the PET images following the joint correction were, on average, increased by 125.9% and 371.6%, respectively, when compared to the PET images with both PVE and motion. The motion extents measured from the derived MBK and 4D-CT exhibited an average difference of 1.9 mm. The proposed joint correction approach can improve the accuracy of PET quantification by simultaneously compensating for respiratory motion artifacts and PVE in lung/thoracic PET/CT imaging.
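One plausible reading of the deconvolution step, sketched under strong simplifications (1D signals, a known motionless tumor model, Wiener-style regularization; none of these details are taken from the paper itself):

```python
import numpy as np

def estimate_mbk(measured, static_model, eps=1e-3):
    """Fourier deconvolution of measured ≈ static_model ⊗ mbk, with
    Wiener-style regularization (eps) to tame small frequencies."""
    M = np.fft.fft(measured)
    S = np.fft.fft(static_model)
    return np.real(np.fft.ifft(M * np.conj(S) / (np.abs(S) ** 2 + eps)))

# 1D toy: a unit-amplitude motionless tumor model (shape ⊗ system PSF)
# and a measured profile blurred by motion; the AC scale is absorbed by
# the kernel, so normalizing (here, summing) the kernel recovers the AC.
x = np.linspace(-20, 20, 256)
static_model = np.exp(-x ** 2 / 8.0)
box = (np.abs(x) < 5).astype(float)
true_mbk = 3.0 * box / box.sum()          # unit-area motion blur, AC = 3.0
measured = np.real(np.fft.ifft(np.fft.fft(static_model) * np.fft.fft(true_mbk)))

mbk = estimate_mbk(measured, static_model)
print(mbk.sum())                          # ≈ 3.0, the tumor AC
```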
Su, Yi; Blazey, Tyler M; Owen, Christopher J; Christensen, Jon J; Friedrichsen, Karl; Joseph-Mathurin, Nelly; Wang, Qing; Hornbeck, Russ C; Ances, Beau M; Snyder, Abraham Z; Cash, Lisa A; Koeppe, Robert A; Klunk, William E; Galasko, Douglas; Brickman, Adam M; McDade, Eric; Ringman, John M; Thompson, Paul M; Saykin, Andrew J; Ghetti, Bernardino; Sperling, Reisa A; Johnson, Keith A; Salloway, Stephen P; Schofield, Peter R; Masters, Colin L; Villemagne, Victor L; Fox, Nick C; Förster, Stefan; Chen, Kewei; Reiman, Eric M; Xiong, Chengjie; Marcus, Daniel S; Weiner, Michael W; Morris, John C; Bateman, Randall J; Benzinger, Tammie L S
2016-01-01
Amyloid imaging plays an important role in the research and diagnosis of dementing disorders. Substantial variation in quantitative methods to measure brain amyloid burden exists in the field. The aim of this work is to investigate the impact of methodological variations on the quantification of amyloid burden using data from the Dominantly Inherited Alzheimer's Network (DIAN), an autosomal dominant Alzheimer's disease population. Cross-sectional and longitudinal [11C]-Pittsburgh Compound B (PiB) PET imaging data from the DIAN study were analyzed. Four candidate reference regions were investigated for estimation of brain amyloid burden. A regional spread function based technique was also investigated for the correction of partial volume effects. Cerebellar cortex, brain-stem, and white matter regions all had stable tracer retention during the course of disease. Partial volume correction consistently improved sensitivity to group differences and longitudinal changes over time. White matter referencing improved statistical power in detecting longitudinal changes in relative tracer retention; however, the reason for this improvement is unclear and requires further investigation. Full dynamic acquisition and kinetic modeling improved statistical power, although they may add cost and time. Several technical variations to amyloid burden quantification were examined in this study. Partial volume correction emerged as the strategy that most consistently improved statistical power for the detection of both longitudinal changes and across-group differences. For the autosomal dominant Alzheimer's disease population with PiB imaging, utilizing brainstem as a reference region with partial volume correction may be optimal for current interventional trials. Further investigation of technical issues in quantitative amyloid imaging in different study populations using different amyloid imaging tracers is warranted.
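For reference, the SUVR computation that underlies the reference-region comparisons above is a simple ratio of regional means; the ROI values below are illustrative, not DIAN data.

```python
import numpy as np

def suvr(target_means, reference_mean):
    """Standardized uptake value ratio: regional tracer retention
    divided by retention in a reference region (cerebellar cortex,
    brainstem, or white matter in the study above)."""
    return np.asarray(target_means, float) / reference_mean

cortical_rois = [1.9, 2.3, 2.1]                 # mean PiB uptake per ROI
print(suvr(cortical_rois, reference_mean=1.0))  # brainstem-referenced SUVRs
```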
NASA Astrophysics Data System (ADS)
Bodnar, Victoria; Ganeev, Alexander; Gubal, Anna; Solovyev, Nikolay; Glumov, Oleg; Yakobson, Viktor; Murin, Igor
2018-07-01
A pulsed direct current glow discharge time-of-flight mass spectrometry (GD TOF MS) method for the quantification of fluorine in insoluble crystal materials, with fluorine-doped potassium titanyl phosphate (KTP) KTiOPO4:KF as an example, has been proposed. The following parameters were optimized: repelling pulse delay, discharge duration, discharge voltage, and pressure in the discharge cell. Effective ionization of fluorine in the space between sampler and skimmer under short repelling pulse delay, related to the high-energy electron impact at the discharge front, has been demonstrated. A combination of instrumental and mathematical correction approaches was used to correct for the interferences of 38Ar2+ and 1H316O+ on 19F+. To maintain surface conductivity in the dielectric KTP crystals and ensure their effective sputtering in the combined hollow cathode cell, a silver suspension applied by the dip-coating method was employed. Fluorine quantification was performed using relative sensitivity factors. Analysis of a reference material and scanning electron microscopy-energy dispersive X-ray spectroscopy were used for validation. The fluorine limit of detection by pulsed direct current GD TOF MS was 0.01 mass%. Analysis of real samples showed that fluorine appears to be inhomogeneously distributed in the crystals; depth profiling of F, K, O, and P was therefore performed to evaluate the crystals' non-stoichiometry. The approaches developed allow fluorine quantification in insoluble dielectric materials with minimal sample preparation and sample destruction, as well as depth profiling to assess crystal non-stoichiometry.
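A hedged sketch of the two quantification steps named above: interference correction at m/z 19 followed by relative-sensitivity-factor scaling. How the interference contributions and the matrix reference signal are obtained is instrument-specific, and all numbers here are illustrative.

```python
def correct_f19(i_mz19, i_ar38_2plus, i_h3o_plus):
    """Subtract the modelled isobaric contributions of 38Ar2+ and
    1H316O+ from the raw m/z 19 signal. How those contributions are
    estimated (e.g., from interference-free Ar and O/H signals) is
    instrument-specific; here they are passed in directly."""
    return i_mz19 - i_ar38_2plus - i_h3o_plus

def concentration_by_rsf(i_analyte, i_matrix, c_matrix, rsf):
    """Relative-sensitivity-factor quantification: the analyte ion
    current is scaled to a matrix reference ion current and corrected
    by an empirically determined RSF. A generic GDMS formula, assumed
    here rather than quoted from the paper."""
    return rsf * (i_analyte / i_matrix) * c_matrix

i_f = correct_f19(i_mz19=5.2e4, i_ar38_2plus=1.1e4, i_h3o_plus=0.4e4)
print(concentration_by_rsf(i_f, i_matrix=9.0e6, c_matrix=100.0, rsf=2.3))
# -> mass% of F, illustrative numbers only
```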
DOE Office of Scientific and Technical Information (OSTI.GOV)
West, Bradley M.; Stuckelberger, Michael; Jeffries, April
2017-01-01
The study of a multilayered and multicomponent system by spatially resolved X-ray fluorescence microscopy poses unique challenges in achieving accurate quantification of elemental distributions. This is particularly true for the quantification of materials with high X-ray attenuation coefficients, depth-dependent composition variations and thickness variations. A widely applicable procedure for use after spectrum fitting and quantification is described. This procedure corrects the elemental distribution from the measured fluorescence signal, taking into account attenuation of the incident beam and generated fluorescence from multiple layers, and accounts for sample thickness variations. Deriving from Beer–Lambert's law, formulae are presented in a general integral form and numerically applicable framework. Here, the procedure is applied using experimental data from a solar cell with a Cu(In,Ga)Se2 absorber layer, measured at two separate synchrotron beamlines with varied measurement geometries. This example shows the importance of these corrections in real material systems, which can change the interpretation of the measured distributions dramatically.
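For a single uniform layer, the Beer–Lambert correction reduces to a closed-form factor; the sketch below (attenuation coefficients, thickness, and beam/detector angles are illustrative assumptions) divides a measured signal by that factor. Multilayer stacks multiply in additional exponential terms, as the integral formulae in the paper do.

```python
import numpy as np

def attenuation_correction_factor(mu_in, mu_out, t,
                                  theta_in=np.pi / 2, theta_out=np.pi / 4):
    """Beer-Lambert correction for fluorescence generated throughout a
    uniform layer of thickness t: the measured signal equals the
    unattenuated signal times
        F = (1 - exp(-chi * t)) / (chi * t),
        chi = mu_in/sin(theta_in) + mu_out/sin(theta_out),
    where mu_in attenuates the incident beam and mu_out the outgoing
    fluorescence. Geometry angles here are assumptions for illustration."""
    chi = mu_in / np.sin(theta_in) + mu_out / np.sin(theta_out)
    x = chi * t
    return (1.0 - np.exp(-x)) / x

measured = 1500.0   # fluorescence counts (illustrative)
corrected = measured / attenuation_correction_factor(mu_in=300.0, mu_out=450.0,
                                                     t=2e-4)   # cm^-1, cm
print(corrected)
```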
Hawkins, Liam J; Storey, Kenneth B
2017-01-01
Common Western-blot imaging systems have previously been adapted to measure signals from luminescent microplate assays. This can be a cost saving measure, as Western-blot imaging systems are common laboratory equipment and could substitute for a dedicated luminometer if one is not otherwise available. One previously unrecognized limitation is that the signals captured by the cameras in these systems are not equal for all wells. Signals are dependent on the angle of incidence to the camera, and thus the location of the well on the microplate. Here we show that:
• The position of a well on a microplate significantly affects the signal captured by a common Western-blot imaging system from a luminescent assay.
• The effect of well position can easily be corrected for (see the sketch below).
• This method can be applied to commercially available luminescent assays, allowing for high-throughput quantification of a wide range of biological processes and biochemical reactions.
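A minimal sketch of the correction idea, assuming a reference plate in which every well holds the same luminescent solution is available to estimate per-well capture efficiency (the abstract does not specify the exact reference scheme):

```python
import numpy as np

def position_correction(sample_plate, reference_plate):
    """Divide each well by its relative capture efficiency, estimated
    from a reference plate where every well held the same solution.
    Factors are normalized to the brightest well; this mirrors the
    flat-field idea described above, though the reference scheme
    itself is an assumption."""
    factors = reference_plate / reference_plate.max()
    return sample_plate / factors

rng = np.random.default_rng(0)
ref = 1.0 - 0.2 * rng.random((8, 12))    # angle-dependent falloff (synthetic)
sample = ref * 1000.0                    # identical wells, uneven capture

corrected = position_correction(sample, ref)
print(sample.std(), corrected.std())     # spread collapses after correction
```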
Cornejo-Aragón, Luz G; Santos-Cuevas, Clara L; Ocampo-García, Blanca E; Chairez-Oria, Isaac; Diaz-Nieto, Lorenza; García-Quiroz, Janice
2017-01-01
The aim of this study was to develop a semi-automatic image processing algorithm (AIPA) based on the simultaneous information provided by X-ray and radioisotopic images to determine the biokinetic models of Tc-99m radiopharmaceuticals from quantification of image radiation activity in murine models. These radioisotopic images were obtained by a CCD (charge-coupled device) camera coupled to an ultrathin phosphorous screen in a preclinical multimodal imaging system (Xtreme, Bruker). The AIPA consisted of different image processing methods for background, scattering and attenuation correction in the activity quantification. A set of parametric identification algorithms was used to obtain the biokinetic models that characterize the interaction between different tissues and the radiopharmaceuticals considered in the study. The set of biokinetic models corresponded to the Tc-99m biodistribution observed in different ex vivo studies, confirming the contribution of the semi-automatic image processing technique developed in this study.
MR-assisted PET motion correction in simultaneous PET/MRI studies of dementia subjects.
Chen, Kevin T; Salcedo, Stephanie; Chonde, Daniel B; Izquierdo-Garcia, David; Levine, Michael A; Price, Julie C; Dickerson, Bradford C; Catana, Ciprian
2018-03-08
Subject motion in positron emission tomography (PET) studies leads to image blurring and artifacts; simultaneously acquired magnetic resonance imaging (MRI) data provide a means for motion correction (MC) in integrated PET/MRI scanners. The purpose of this observational study was to assess the effect of realistic head motion and MR-based MC on static [18F]-fluorodeoxyglucose (FDG) PET images in dementia patients. Thirty dementia subjects were recruited and scanned on a 3T hybrid PET/MR scanner, where EPI-based and T1-weighted sequences were acquired simultaneously with the PET data. Head motion parameters estimated from high temporal resolution MR volumes were used for PET MC. The MR-based MC method was compared to PET frame-based MC methods in which motion parameters were estimated by coregistering 5-minute frames before and after accounting for the attenuation-emission mismatch. The relative changes in standardized uptake value ratios (SUVRs) between the PET volumes processed with the various MC methods, without MC, and the PET volumes with simulated motion were compared in relevant brain regions. The absolute value of the regional SUVR relative change was assessed with pairwise paired t-tests at the P = 0.05 level, comparing the values obtained through different MR-based MC processing methods as well as across different motion groups. The intraregion voxelwise variability of regional SUVRs obtained through different MR-based MC processing methods was also assessed with pairwise paired t-tests at the P = 0.05 level. MC had a greater impact on PET data quantification in subjects with larger amplitude motion (higher than 18% in the medial orbitofrontal cortex), and greater changes were generally observed for the MR-based MC method compared to the frame-based methods. Furthermore, a mean relative change of ∼4% was observed after MC even at the group level, suggesting the importance of routinely applying this correction. The intraregion voxelwise variability of regional SUVRs was also decreased using MR-based MC. All comparisons were significant at the P = 0.05 level. Incorporating temporally correlated MR data to account for intraframe motion has a positive impact on FDG PET image quality and data quantification in dementia patients. Level of Evidence: 3. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.
Hegstad, S; Havnen, H; Helland, A; Spigset, O; Frost, J
2018-03-01
To distinguish between legal and illegal consumption of amphetamine, reliable analytical methods for chiral separation of the R- and S-enantiomers of amphetamine in biological specimens are required. In this regard, supercritical fluid chromatography (SFC) has several potential advantages over liquid chromatography, including rapid separation of enantiomers due to the low viscosity and high diffusivity of supercritical carbon dioxide, the main component of the SFC mobile phase. A method for enantiomeric separation and quantification of R- and S-amphetamine in urine was developed and validated using ultra-high performance supercritical fluid chromatography-tandem mass spectrometry (UHPSFC-MS/MS). Sample preparation prior to UHPSFC-MS/MS analysis was performed by a semi-automatic solid phase extraction method. The UHPSFC-MS/MS method used a Chiralpak AD-3 column with a mobile phase consisting of CO2 and 0.2% cyclohexylamine in 2-propanol. The injection volume was 2 μL and the run-time was 6 min. MS/MS detection was performed with positive electrospray ionization and two multiple reaction monitoring transitions (m/z 136.1 > 119.0 and m/z 136.1 > 91.0). The calibration range was 50-10,000 ng/mL for each enantiomer. The between-assay relative standard deviations were in the range of 3.7-7.6%. Recovery was 92-93% and matrix effects ranged from 100 to 104%, corrected with internal standard. After development and validation, the method has been successfully implemented in routine use at our laboratory for both separation and quantification of R/S-amphetamine, and has proved to be a reliable and useful tool for distinguishing intake of R- and S-amphetamine in authentic patient samples. Copyright © 2018 Elsevier B.V. All rights reserved.
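The internal-standard correction mentioned above amounts to calibrating on analyte/IS peak-area ratios; a minimal sketch with illustrative numbers (not the validation data):

```python
import numpy as np

# Internal-standard calibration, as used above to correct matrix effects:
# the analyte/IS peak-area ratio is regressed against concentration,
# then unknowns are read off the fitted line.
conc = np.array([50, 250, 1000, 5000, 10000], dtype=float)   # ng/mL
area_ratio = np.array([0.052, 0.26, 1.01, 5.1, 9.9])         # analyte/IS

slope, intercept = np.polyfit(conc, area_ratio, 1)

def quantify(ratio):
    """Back-calculate concentration from an observed area ratio."""
    return (ratio - intercept) / slope

print(quantify(2.4))   # ng/mL of R- or S-amphetamine in an unknown
```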
QR in Child Grammar: Evidence from Antecedent-Contained Deletion
ERIC Educational Resources Information Center
Syrett, Kristen; Lidz, Jeffrey
2009-01-01
We show that 4-year-olds assign the correct interpretation to antecedent-contained deletion (ACD) sentences because they have the correct representation of these structures. This representation involves Quantifier Raising (QR) of a Quantificational Noun Phrase (QNP) that must move out of the site of the verb phrase in which it is contained to…
Anatomical-based partial volume correction for low-dose dedicated cardiac SPECT/CT
NASA Astrophysics Data System (ADS)
Liu, Hui; Chan, Chung; Grobshtein, Yariv; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Stacy, Mitchel R.; Sinusas, Albert J.; Liu, Chi
2015-09-01
Due to the limited spatial resolution, the partial volume effect has been a major factor degrading quantitative accuracy in emission tomography systems. This study aims to investigate the performance of several anatomical-based partial volume correction (PVC) methods for a dedicated cardiac SPECT/CT system (GE Discovery NM/CT 570c) with focused field-of-view over a clinically relevant range of high and low count levels for two different radiotracer distributions. These PVC methods include perturbation geometry transfer matrix (pGTM); pGTM followed by multi-target correction (MTC); pGTM with known concentration in the blood pool, with and without subsequent MTC; and our newly proposed methods, which apply the MTC method iteratively, with the mean values in all regions estimated and updated from the MTC-corrected images at each step of the iterative process. The NCAT phantom was simulated for cardiovascular imaging with 99mTc-tetrofosmin, a myocardial perfusion agent, and 99mTc-red blood cell (RBC), a pure intravascular imaging agent. Images were acquired at six different count levels to investigate the performance of the PVC methods at both high and low count levels for low-dose applications. We performed two large animal in vivo cardiac imaging experiments following injection of 99mTc-RBC for evaluation of intramyocardial blood volume (IMBV). The simulation results showed that our proposed iterative methods provide performance superior to that of other existing PVC methods in terms of image quality, quantitative accuracy, and reproducibility (standard deviation), particularly for low-count data. The iterative approaches are robust for both 99mTc-tetrofosmin perfusion imaging and 99mTc-RBC imaging of IMBV and blood pool activity, even at low count levels. The animal study results indicated the effectiveness of PVC in correcting the overestimation of IMBV due to blood pool contamination. In conclusion, the iterative PVC methods can achieve more accurate quantification, particularly for low-count cardiac SPECT studies, typically obtained from low-dose protocols, gated studies, and dynamic applications.
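The GTM family that pGTM extends solves a small linear system relating observed regional means to true ones; a minimal sketch with an illustrative 3-region transfer matrix (values are assumptions, not from the study):

```python
import numpy as np

# Geometric-transfer-matrix (GTM) partial volume correction: G[i, j] is
# the fraction of region j's true activity observed in region i,
# obtained in practice by forward-projecting region masks through the
# system resolution. The matrix and means below are illustrative.
G = np.array([[0.80, 0.15, 0.05],    # myocardium
              [0.10, 0.85, 0.05],    # blood pool
              [0.05, 0.10, 0.85]])   # background
observed_means = np.array([4.2, 2.9, 1.1])

true_means = np.linalg.solve(G, observed_means)
print(true_means)   # PVC-corrected regional activity concentrations
```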
Sensitivity of Chemical Shift-Encoded Fat Quantification to Calibration of Fat MR Spectrum
Wang, Xiaoke; Hernando, Diego; Reeder, Scott B.
2015-01-01
Purpose To evaluate the impact of different fat spectral models on proton density fat-fraction (PDFF) quantification using chemical shift-encoded (CSE) MRI. Material and Methods Simulations and in vivo imaging were performed. In a simulation study, spectral models of fat were compared pairwise. Comparison of magnitude fitting and mixed fitting was performed over a range of echo times and fat fractions. In vivo acquisitions from 41 patients were reconstructed using 7 published spectral models of fat. T2-corrected STEAM-MRS was used as reference. Results Simulations demonstrate that imperfectly calibrated spectral models of fat result in biases that depend on echo times and fat fraction. Mixed fitting is more robust against this bias than magnitude fitting. Multi-peak spectral models showed much smaller differences among themselves than when compared to the single-peak spectral model. In vivo studies show all multi-peak models agree better (for mixed fitting, slope ranged from 0.967–1.045 using linear regression) with reference standard than the single-peak model (for mixed fitting, slope=0.76). Conclusion It is essential to use a multi-peak fat model for accurate quantification of fat with CSE-MRI. Further, fat quantification techniques using multi-peak fat models are comparable and no specific choice of spectral model is shown to be superior to the rest. PMID:25845713
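Why the spectral model matters can be seen in a few lines: with the field map and R2* ignored for clarity, the echo signal is linear in the water and fat amplitudes once a fat spectrum is fixed. The six-peak amplitudes and frequencies below are approximate published liver-fat values at 1.5 T, quoted here as assumptions rather than the calibration used in the paper.

```python
import numpy as np

# s_n = W + F * sum_p alpha_p * exp(2j*pi*f_p*t_n)  (field map, R2* omitted)
f_ppm = np.array([-3.80, -3.40, -2.60, -1.94, -0.39, 0.60])   # ppm vs water
alpha = np.array([0.087, 0.693, 0.128, 0.004, 0.039, 0.048])  # approx. weights
f_hz = f_ppm * 42.58e6 * 1.5 * 1e-6                           # Hz at 1.5 T

TE = np.arange(1, 7) * 1.6e-3                                 # six echoes (s)
c = np.exp(2j * np.pi * np.outer(TE, f_hz)) @ alpha           # fat phasor per TE

W_true, F_true = 0.8, 0.2
s = W_true + F_true * c                                       # noiseless signal

A = np.column_stack([np.ones_like(c), c])                     # linear in (W, F)
(W, F), *_ = np.linalg.lstsq(A, s, rcond=None)
print(abs(F) / (abs(W) + abs(F)))   # PDFF ~ 0.20 with the matched model;
                                    # a single-peak model biases this estimate
```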
Mann, Steve D.; Perez, Kristy L.; McCracken, Emily K. E.; Shah, Jainil P.; Wong, Terence Z.; Tornai, Martin P.
2012-01-01
A pilot study is underway to quantify in vivo the uptake and distribution of Tc-99m Sestamibi in subjects without previous history of breast cancer using a dedicated SPECT-CT breast imaging system. Subjects undergoing diagnostic parathyroid imaging studies were consented and imaged as part of this IRB-approved breast imaging study. For each of the seven subjects, one randomly selected breast was imaged prone-pendant using the dedicated, compact breast SPECT-CT system underneath the shielded patient support. Iteratively reconstructed and attenuation and/or scatter corrected images were coregistered; CT images were segmented into glandular and fatty tissue by three different methods; the average concentration of Sestamibi was determined from the SPECT data using the CT-based segmentation and previously established quantification techniques. Very minor differences between the segmentation methods were observed, and the results indicate an average image-based in vivo Sestamibi concentration of 0.10 ± 0.16 μCi/mL with no preferential uptake by glandular or fatty tissues. PMID:22956950
Unice, Kenneth M; Kreider, Marisa L; Panko, Julie M
2012-11-08
Pyrolysis(pyr)-GC/MS analysis of characteristic thermal decomposition fragments has been previously used for qualitative fingerprinting of organic sources in environmental samples. A quantitative pyr-GC/MS method based on characteristic tire polymer pyrolysis products was developed for tread particle quantification in environmental matrices including soil, sediment, and air. The feasibility of quantitative pyr-GC/MS analysis of tread was confirmed in a method evaluation study using artificial soil spiked with known amounts of cryogenically generated tread. Tread concentration determined by blinded analyses was highly correlated (r2 ≥ 0.88) with the known tread spike concentration. Two critical refinements to the initial pyrolysis protocol were identified including use of an internal standard and quantification by the dimeric markers vinylcyclohexene and dipentene, which have good specificity for rubber polymer with no other appreciable environmental sources. A novel use of deuterated internal standards of similar polymeric structure was developed to correct the variable analyte recovery caused by sample size, matrix effects, and ion source variability. The resultant quantitative pyr-GC/MS protocol is reliable and transferable between laboratories.
Confocal quantification of cis-regulatory reporter gene expression in living sea urchin.
Damle, Sagar; Hanser, Bridget; Davidson, Eric H; Fraser, Scott E
2006-11-15
Quantification of GFP reporter gene expression at the single-cell level in living sea urchin embryos can now be accomplished by a new method of confocal laser scanning microscopy (CLSM). Eggs injected with a tissue-specific GFP reporter DNA construct were grown to gastrula stage and their fluorescence recorded as a series of contiguous Z-section slices that spanned the entire embryo. To measure the depth-dependent signal decay seen in the successive slices of an image stack, the eggs were coinjected with a freely diffusible internal fluorescent standard, rhodamine dextran. The measured rhodamine fluorescence was used to generate a computational correction for the depth-dependent loss of GFP fluorescence per slice. The intensity of GFP fluorescence was converted to the number of GFP molecules using a conversion constant derived from CLSM imaging of eggs injected with a measured quantity of GFP protein. The outcome is a validated method for accurately counting GFP molecules in given cells in reporter gene transfer experiments, as we demonstrate by use of an expression construct expressed exclusively in skeletogenic cells.
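A minimal sketch of the two-step scheme described above: per-slice correction by the rhodamine-derived decay, followed by conversion to molecule counts. Array shapes, the top-slice reference, and the conversion constant are assumptions for illustration.

```python
import numpy as np

def correct_gfp_stack(gfp_slices, rhodamine_slices, gfp_per_molecule):
    """Per-slice depth correction using the co-injected, uniformly
    distributed rhodamine dextran: each Z slice's rhodamine signal,
    relative to the top slice, estimates that slice's depth-dependent
    attenuation, which is divided out of the GFP signal before
    converting intensity to molecule counts with an imaging-calibrated
    constant."""
    decay = rhodamine_slices / rhodamine_slices[0]   # per-slice attenuation
    corrected = gfp_slices / decay
    return corrected.sum() / gfp_per_molecule        # molecules in the stack

gfp = np.array([900.0, 700.0, 480.0, 300.0])         # illustrative intensities
rho = np.array([1.00, 0.78, 0.55, 0.34]) * 2000.0
print(correct_gfp_stack(gfp, rho, gfp_per_molecule=1e-3))
```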
SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.
Nik, S J; Thing, R S; Watts, R; Meyer, J
2012-06-01
To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region of material quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For the given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m materials was generated and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantification. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations. © 2012 American Association of Physicists in Medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadava, G; Imai, Y; Hsieh, J
2014-06-15
Purpose: Quantitative accuracy of Iodine Hounsfield Unit (HU) in conventional single-kVp scanning is susceptible to beam-hardening effects. Dual-energy CT has unique capabilities of quantification using monochromatic CT images, but this scanning mode requires the availability of a state-of-the-art CT scanner and, therefore, is limited in routine clinical practice. The purpose of this work was to develop a beam-hardening correction (BHC) for single-kVp CT that can linearize Iodine projections at any nominal energy, apply this approach to study Iodine response with respect to keV, and compare with dual-energy based monochromatic images obtained from material decomposition using 80 kVp and 140 kVp. Methods: Tissue characterization phantoms (Gammex Inc.), containing solid-Iodine inserts of different concentrations, were scanned using a GE multi-slice CT scanner at 80, 100, 120, and 140 kVp. A model-based BHC algorithm was developed in which Iodine was estimated using re-projection of the image volume and corrected through an iterative process. In the correction, the re-projected Iodine was linearized using a polynomial mapping between monochromatic path-lengths at various nominal energies (40 to 140 keV) and physically modeled polychromatic path-lengths. The beam-hardening-corrected 80 kVp and 140 kVp images (linearized approximately at the effective energy of the beam) were used for dual-energy material decomposition in a Water-Iodine basis-pair, followed by generation of monochromatic images. Characterization of Iodine HU and noise in the images obtained from single-kVp with BHC at various nominal keV, and in the corresponding dual-energy monochromatic images, was carried out. Results: The Iodine HU vs. keV responses from single-kVp with BHC and from dual-energy monochromatic images were found to be very similar, indicating that single-kVp data may be used to create material-specific monochromatic equivalents using model-based projection linearization. Conclusion: This approach may enable quantification of Iodine contrast enhancement and potential reduction in injected contrast without using dual-energy scanning. However, in general, dual-energy scanning has unique value in material characterization and quantification, and its value cannot be discounted. GE Healthcare Employee.
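The linearization step lends itself to a compact sketch: fit a polynomial mapping from modelled polychromatic path lengths to the monochromatic path lengths expected at a chosen nominal energy, then apply it to measured Iodine projections. The quadratic "response" below is a toy stand-in, not the authors' physics model.

```python
import numpy as np

L_mono = np.linspace(0, 5, 50)                 # ideal path lengths (cm)
L_poly = 1.15 * L_mono - 0.06 * L_mono ** 2    # toy beam-hardened response

coeffs = np.polyfit(L_poly, L_mono, deg=3)     # polychromatic -> monochromatic

def linearize(projection):
    """Apply the fitted mapping to measured Iodine projection values."""
    return np.polyval(coeffs, projection)

print(linearize(np.array([0.5, 2.0, 4.0])))    # beam-hardening-corrected values
```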
Dias, Philipe A; Dunkel, Thiemo; Fajado, Diego A S; Gallegos, Erika de León; Denecke, Martin; Wiedemann, Philipp; Schneider, Fabio K; Suhr, Hajo
2016-06-11
In the activated sludge process, problems of filamentous bulking and foaming can occur due to overgrowth of certain filamentous bacteria. Nowadays, these microorganisms are typically monitored by means of light microscopy, commonly combined with staining techniques. As drawbacks, these methods are susceptible to human error and subjectivity and are limited by the use of discontinuous microscopy. The in situ microscope appears to be a suitable tool for continuous monitoring of filamentous bacteria, providing real-time examination and automated analysis and eliminating sampling, preparation and transport of samples. In this context, a proper image processing algorithm is proposed for automated recognition and measurement of filamentous objects. Unlike previous studies in the literature, this work introduces a method for real-time evaluation of images without any staining, phase-contrast or dilution techniques. Moreover, we introduce an algorithm which estimates the total extended filament length based on geodesic distance calculation. For a period of twelve months, samples from an industrial activated sludge plant were collected weekly and imaged without any prior conditioning, replicating real environment conditions. Trends of filament growth rate, the most important parameter for decision making, are correctly identified. For reference images whose filaments were marked by specialists, the algorithm correctly recognized 72 % of the filament pixels, with a false positive rate of at most 14 %. An average execution time of 0.7 s per image was achieved. Experiments have shown that the designed algorithm provided a suitable quantification of filaments when compared with human perception and standard methods. The algorithm's average execution time proved its suitability for being optimally mapped into a computational architecture to provide real-time monitoring.
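A simple stand-in for the skeleton-based length estimation (the paper's geodesic-distance calculation is not reproduced here): skeletonize the segmented mask with scikit-image and sum inter-pixel steps, weighting diagonal neighbours by sqrt(2).

```python
import numpy as np
from skimage.morphology import skeletonize

def total_filament_length(binary_mask, pixel_size_um=1.0):
    """Approximate total extended filament length: skeletonize the
    segmented filament mask, then sum the lengths of skeleton edges,
    counting 4-neighbour steps as 1 and diagonal steps as sqrt(2).
    Each edge is seen from both endpoints, hence the 0.5 factor."""
    skel = skeletonize(binary_mask)
    ys, xs = np.nonzero(skel)
    length = 0.0
    for y, x in zip(ys, xs):
        neigh = skel[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        n4 = (skel[y, max(x - 1, 0):x + 2].sum()
              + skel[max(y - 1, 0):y + 2, x].sum() - 2 * skel[y, x])
        n8 = neigh.sum() - skel[y, x] - n4
        length += 0.5 * (n4 * 1.0 + n8 * np.sqrt(2))
    return length * pixel_size_um

mask = np.zeros((64, 64), bool)
mask[10:50, 30:33] = True            # a 40-pixel-long vertical filament
print(total_filament_length(mask))   # ~39-41 depending on skeleton ends
```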
Cross, Russell; Olivieri, Laura; O'Brien, Kendall; Kellman, Peter; Xue, Hui; Hansen, Michael
2016-02-25
Traditional cine imaging for cardiac functional assessment requires breath-holding, which can be problematic in some situations. Free-breathing techniques have relied on multiple averages or real-time imaging, producing images that can be spatially and/or temporally blurred. To overcome this, methods have been developed to acquire real-time images over multiple cardiac cycles, which are subsequently motion corrected and reformatted to yield a single image series displaying one cardiac cycle with high temporal and spatial resolution. Application of these algorithms has required significant additional reconstruction time. The use of distributed computing was recently proposed as a way to improve clinical workflow with such algorithms. In this study, we have deployed a distributed computing version of motion corrected re-binning reconstruction for free-breathing evaluation of cardiac function. Twenty five patients and 25 volunteers underwent cardiovascular magnetic resonance (CMR) for evaluation of left ventricular end-systolic volume (ESV), end-diastolic volume (EDV), and end-diastolic mass. Measurements using motion corrected re-binning were compared to those using breath-held SSFP and to free-breathing SSFP with multiple averages, and were performed by two independent observers. Pearson correlation coefficients and Bland-Altman plots tested agreement across techniques. Concordance correlation coefficient and Bland-Altman analysis tested inter-observer variability. Total scan plus reconstruction times were tested for significant differences using paired t-test. Measured volumes and mass obtained by motion corrected re-binning and by averaged free-breathing SSFP compared favorably to those obtained by breath-held SSFP (r = 0.9863/0.9813 for EDV, 0.9550/0.9685 for ESV, 0.9952/0.9771 for mass). Inter-observer variability was good with concordance correlation coefficients between observers across all acquisition types suggesting substantial agreement. Both motion corrected re-binning and averaged free-breathing SSFP acquisition and reconstruction times were shorter than breath-held SSFP techniques (p < 0.0001). On average, motion corrected re-binning required 3 min less than breath-held SSFP imaging, a 37% reduction in acquisition and reconstruction time. The motion corrected re-binning image reconstruction technique provides robust cardiac imaging that can be used for quantification that compares favorably to breath-held SSFP as well as multiple average free-breathing SSFP, but can be obtained in a fraction of the time when using cloud-based distributed computing reconstruction.
PaCeQuant: A Tool for High-Throughput Quantification of Pavement Cell Shape Characteristics.
Möller, Birgit; Poeschl, Yvonne; Plötner, Romina; Bürstenbinder, Katharina
2017-11-01
Pavement cells (PCs) are the most frequently occurring cell type in the leaf epidermis and play important roles in leaf growth and function. In many plant species, PCs form highly complex jigsaw-puzzle-shaped cells with interlocking lobes. Understanding of their development is of high interest for plant science research because of their importance for leaf growth and hence for plant fitness and crop yield. Studies of PC development, however, are limited, because robust methods are lacking that enable automatic segmentation and quantification of PC shape parameters suitable to reflect their cellular complexity. Here, we present our new ImageJ-based tool, PaCeQuant, which provides a fully automatic image analysis workflow for PC shape quantification. PaCeQuant automatically detects cell boundaries of PCs from confocal input images and enables manual correction of automatic segmentation results or direct import of manually segmented cells. PaCeQuant simultaneously extracts 27 shape features that include global, contour-based, skeleton-based, and PC-specific object descriptors. In addition, we included a method for classification and analysis of lobes at two-cell junctions and three-cell junctions, respectively. We provide an R script for graphical visualization and statistical analysis. We validated PaCeQuant by extensive comparative analysis to manual segmentation and existing quantification tools and demonstrated its usability to analyze PC shape characteristics during development and between different genotypes. PaCeQuant thus provides a platform for robust, efficient, and reproducible quantitative analysis of PC shape characteristics that can easily be applied to study PC development in large data sets. © 2017 American Society of Plant Biologists. All Rights Reserved.
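Two of the global descriptors such tools extract, circularity and solidity, can be computed directly with scikit-image; low values of either flag the lobed, jigsaw-puzzle shapes described above. The elliptical mask below is synthetic, used only to exercise the computation.

```python
import numpy as np
from skimage.measure import regionprops, label
from skimage.draw import ellipse

# Circularity (4*pi*A/P^2) and solidity (area / convex area) for a
# synthetic cell mask; lobed pavement cells score low on both.
mask = np.zeros((200, 200), dtype=np.uint8)
rr, cc = ellipse(100, 100, 60, 35)
mask[rr, cc] = 1

props = regionprops(label(mask))[0]
circularity = 4 * np.pi * props.area / props.perimeter ** 2
print(round(circularity, 3), round(props.solidity, 3))
```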
Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques.
Hofmann, Matthias; Pichler, Bernd; Schölkopf, Bernhard; Beyer, Thomas
2009-03-01
Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT attenuation correction, however, is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data.
Overbeek, Thérèse J M; van Boxtel, Anton; Westerink, Joyce H D M
2012-09-01
The literature shows large inconsistencies in respiratory sinus arrhythmia (RSA) responses to induced emotional states. This may be caused by differences in emotion induction methods, RSA quantification, and non-emotional demands of the situation. In 83 healthy subjects, we studied RSA responses to pictures and film fragments eliciting six different discrete emotions relative to neutral baseline stimuli. RSA responses were quantified in the time and frequency domain and were additionally corrected for differences in mean heart rate and respiration rate, resulting in eight different RSA response measures. Subjective ratings of emotional stimuli and facial electromyographic responses indicated that pictures and film fragments elicited the intended emotions. Although RSA measures showed various emotional effects, responses were quite heterogeneous and frequently nonsignificant. They were substantially influenced by methodological factors, in particular time vs. frequency domain response measures, correction for changes in respiration rate, use of pictures vs. film fragments, and sex of participants. Copyright © 2012 Elsevier B.V. All rights reserved.
Quantifying color variation: Improved formulas for calculating hue with segment classification
Smith, Stacey D.
2014-01-01
• Premise of the study: Differences in color form a major component of biological variation, and quantifying these differences is the first step to understanding their evolutionary and ecological importance. One common method for measuring color variation is segment classification, which uses three variables (chroma, hue, and brightness) to describe the height and shape of reflectance curves. This study provides new formulas for calculating hue (the variable that describes the “type” of color) to give correct values in all regions of color space.
• Methods and Results: Reflectance spectra were obtained from the literature, and chroma, hue, and brightness were computed for each spectrum using the original formulas as well as the new formulas. Only the new formulas result in correct values in the blue-green portion of color space.
• Conclusions: Use of the new formulas for calculating hue will result in more accurate color quantification for a broad range of biological applications. PMID:25202612
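A hedged sketch of the quadrant issue: if hue is taken as the angle of the vector formed by red-minus-green and yellow-minus-blue relative segment sums (the four-equal-segment construction is an assumption here, not quoted from the paper), a two-argument arctangent keeps the angle correct in every region of color space, including blue-green, where a one-argument arctangent loses sign information.

```python
import numpy as np

def segment_hue(reflectance):
    """Hue from four equal spectral segments (blue, green, yellow, red):
    LM = red - green and MS = yellow - blue relative segment sums; hue
    is the angle of (LM, MS). atan2 preserves the quadrant everywhere."""
    r = np.asarray(reflectance, float)
    b, g, y, red = [seg.sum() / r.sum() for seg in np.array_split(r, 4)]
    lm, ms = red - g, y - b
    return np.degrees(np.arctan2(ms, lm))

wl = np.linspace(400, 700, 300)
blue_green = np.exp(-((wl - 500) ** 2) / 2e3)   # reflectance peak at 500 nm
print(segment_hue(blue_green))                  # correct quadrant for blue-green
```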
Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W
2012-09-07
A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one-dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standard samples. However, the SVD-BC technique was found to introduce noise which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
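In the spirit of SVD-BC (the exact rank selection and preprocessing are not specified here and are assumptions), the background subspace can be taken from the left singular vectors of a set of blank chromatograms and projected out of the sample:

```python
import numpy as np

def svd_background_correct(sample, blanks, k=2):
    """Remove the background subspace spanned by the first k left
    singular vectors of a (points x n_blanks) matrix of blank
    chromatograms; the sample keeps only its component orthogonal to
    that subspace. The rank k is data-dependent."""
    U, _, _ = np.linalg.svd(blanks, full_matrices=False)
    B = U[:, :k]
    return sample - B @ (B.T @ sample)

t = np.linspace(0, 1, 500)
b1 = 2.0 * t + 0.5 * np.sin(3 * t)        # background shapes (synthetic)
b2 = 1.8 * t + 0.6 * np.sin(3 * t)
b3 = 2.2 * t + 0.4 * np.sin(3 * t)
blanks = np.column_stack([b1, b2, b3])
peak = np.exp(-((t - 0.6) ** 2) / 2e-4)   # narrow chromatographic peak

corrected = svd_background_correct(b1 + peak, blanks, k=2)
print(corrected.max())                    # close to the spiked peak height of 1
```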
NASA Astrophysics Data System (ADS)
Gokce, Emine; Shuford, Christopher M.; Franck, William L.; Dean, Ralph A.; Muddiman, David C.
2011-12-01
Normalization of spectral counts (SpCs) in label-free shotgun proteomic approaches is important to achieve reliable relative quantification. Three different SpC normalization methods, total spectral count (TSpC) normalization, normalized spectral abundance factor (NSAF) normalization, and normalization to selected proteins (NSP) were evaluated based on their ability to correct for day-to-day variation between gel-based sample preparation and chromatographic performance. Three spectral counting data sets obtained from the same biological conidia sample of the rice blast fungus Magnaporthe oryzae were analyzed by 1D gel and liquid chromatography-tandem mass spectrometry (GeLC-MS/MS). Equine myoglobin and chicken ovalbumin were spiked into the protein extracts prior to 1D-SDS-PAGE as internal protein standards for NSP. The correlation between SpCs of the same proteins across the different data sets was investigated. We report that TSpC normalization and NSAF normalization yielded almost ideal slopes of unity for normalized SpC versus average normalized SpC plots, while NSP did not afford effective corrections of the unnormalized data. Furthermore, when utilizing TSpC normalization prior to relative protein quantification, t-testing and fold-change revealed the cutoff limits for determining real biological change to be a function of the absolute number of SpCs. For instance, we observed the variance decreased as the number of SpCs increased, which resulted in a higher propensity for detecting statistically significant, yet artificial, change for highly abundant proteins. Thus, we suggest applying higher confidence level and lower fold-change cutoffs for proteins with higher SpCs, rather than using a single criterion for the entire data set. By choosing appropriate cutoff values to maintain a constant false positive rate across different protein levels (i.e., SpC levels), it is expected this will reduce the overall false negative rate, particularly for proteins with higher SpCs.
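The two normalizations that performed well above have one-line definitions; a minimal sketch (protein lengths and counts are illustrative):

```python
import numpy as np

def tspc_normalize(spc):
    """Total-spectral-count normalization: scale each run so all runs
    share the same total SpC (here, the mean total across runs)."""
    spc = np.asarray(spc, float)                 # proteins x runs
    totals = spc.sum(axis=0)
    return spc * (totals.mean() / totals)

def nsaf(spc, lengths):
    """Normalized spectral abundance factor, per run:
    NSAF_i = (SpC_i / L_i) / sum_j (SpC_j / L_j)."""
    saf = np.asarray(spc, float) / np.asarray(lengths, float)[:, None]
    return saf / saf.sum(axis=0)

spc = np.array([[120, 150], [30, 36], [8, 12]])   # 3 proteins, 2 runs
print(tspc_normalize(spc))
print(nsaf(spc, lengths=[450, 300, 900]))
```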
Shen, Xiaomeng; Hu, Qiang; Li, Jun; Wang, Jianmin; Qu, Jun
2015-10-02
Comprehensive and accurate evaluation of data quality and false-positive biomarker discovery is critical to direct the method development/optimization for quantitative proteomics, which nonetheless remains challenging largely due to the high complexity and unique features of proteomic data. Here we describe an experimental null (EN) method to address this need. Because the method experimentally measures the null distribution (either technical or biological replicates) using the same proteomic samples, the same procedures and the same batch as the case-vs-control experiment, it correctly reflects the collective effects of technical variability (e.g., variation/bias in sample preparation, LC-MS analysis, and data processing) and project-specific features (e.g., characteristics of the proteome and biological variation) on the performances of quantitative analysis. To show a proof of concept, we employed the EN method to assess the quantitative accuracy and precision and the ability to quantify subtle ratio changes between groups using different experimental and data-processing approaches and in various cellular and tissue proteomes. It was found that choices of quantitative features, sample size, experimental design, data-processing strategies, and quality of chromatographic separation can profoundly affect quantitative precision and accuracy of label-free quantification. The EN method was also demonstrated as a practical tool to determine the optimal experimental parameters and rational ratio cutoff for reliable protein quantification in specific proteomic experiments, for example, to identify the necessary number of technical/biological replicates per group that affords sufficient power for discovery. Furthermore, we assessed the ability of the EN method to estimate levels of false-positives in the discovery of altered proteins, using two concocted sample sets mimicking proteomic profiling using technical and biological replicates, respectively, where the true-positives/negatives are known and span a wide concentration range. It was observed that the EN method correctly reflects the null distribution in a proteomic system and accurately measures the false altered-protein discovery rate (FADR). In summary, the EN method provides a straightforward, practical, and accurate alternative to statistics-based approaches for the development and evaluation of proteomic experiments and can be universally adapted to various types of quantitative techniques.
Multiplex Detection of Toxigenic Penicillium Species.
Rodríguez, Alicia; Córdoba, Juan J; Rodríguez, Mar; Andrade, María J
2017-01-01
Multiplex PCR-based methods for simultaneous detection and quantification of different mycotoxin-producing Penicillia are useful tools in food safety programs. These rapid and sensitive techniques allow corrective actions to be taken during food processing or storage to avoid the accumulation of mycotoxins. In this chapter, three multiplex PCR-based methods to detect at least patulin- and ochratoxin A-producing Penicillia are detailed. Two of them are multiplex real-time PCR methods suitable for monitoring and quantifying toxigenic Penicillium, using either the nonspecific dye SYBR Green or specific hydrolysis probes (TaqMan). All of them successfully use the same target genes involved in the biosynthesis of such mycotoxins for designing primers and/or probes.
Erratum: A Comparison of Closures for Stochastic Advection-Diffusion Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarman, Kenneth D.; Tartakovsky, Alexandre M.
2015-01-01
This note corrects an error in the authors' article [SIAM/ASA J. Uncertain. Quantif., 1 (2013), pp. 319–347] in which the cited work [Neuman, Water Resour. Res., 29(3) (1993), pp. 633–645] was incorrectly represented and attributed. Concentration covariance equations presented in our article as new were in fact previously derived in the latter work. In the original abstract, the phrase ". . . we propose a closed-form approximation to two-point covariance as a measure of uncertainty . . ." should be replaced by the phrase ". . . we study a closed-form approximation to two-point covariance, previously derived in [Neuman 1993], as a measure of uncertainty." The primary results in our article (the analytical and numerical comparison of existing closure methods for specific example problems) are not changed by this correction.
2017-02-02
Accurate virus quantification is sought, but a perfect method still eludes the scientific community. Electron … provides morphology data and counts all viral particles, including partial or noninfectious particles; however, EM methods … consistent, reproducible virus quantification method called Scanning Transmission Electron Microscopy – Virus Quantification (STEM-VQ), which simplifies
Chaudhry, Waseem; Hussain, Nasir; Ahlberg, Alan W; Croft, Lori B; Fernandez, Antonio B; Parker, Mathew W; Swales, Heather H; Slomka, Piotr J; Henzlova, Milena J; Duvall, W Lane
2017-06-01
A stress-first myocardial perfusion imaging (MPI) protocol saves time, is cost effective, and decreases radiation exposure. A limitation of this protocol is the requirement for physician review of the stress images to determine the need for rest images. This hurdle could be eliminated if an experienced technologist and/or automated computer quantification could make this determination. Images from consecutive patients who were undergoing a stress-first MPI with attenuation correction at two tertiary care medical centers were prospectively reviewed independently by a technologist and cardiologist blinded to clinical and stress test data. Their decision on the need for rest imaging along with automated computer quantification of perfusion results was compared with the clinical reference standard of an assessment of perfusion images by a board-certified nuclear cardiologist that included clinical and stress test data. A total of 250 patients (mean age 61 years and 55% female) who underwent a stress-first MPI were studied. According to the clinical reference standard, 42 (16.8%) and 208 (83.2%) stress-first images were interpreted as "needing" and "not needing" rest images, respectively. The technologists correctly classified 229 (91.6%) stress-first images as either "needing" (n = 28) or "not needing" (n = 201) rest images. Their sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 66.7%, 96.6%, 80.0%, and 93.5%, respectively. An automated stress TPD score ≥1.2 was associated with optimal sensitivity and specificity and correctly classified 179 (71.6%) stress-first images as either "needing" (n = 31) or "not needing" (n = 148) rest images. Its sensitivity, specificity, PPV, and NPV were 73.8%, 71.2%, 34.1%, and 93.1%, respectively. In a model whereby the computer or technologist could correct for the other's incorrect classification, 242 (96.8%) stress-first images were correctly classified. The composite sensitivity, specificity, PPV, and NPV were 83.3%, 99.5%, 97.2%, and 96.7%, respectively. Technologists and automated quantification software had a high degree of agreement with the clinical reference standard for determining the need for rest images in a stress-first imaging protocol. Utilizing an experienced technologist and automated systems to screen stress-first images could expand the use of stress-first MPI to sites where the cardiologist is not immediately available for interpretation.
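The four reported figures follow directly from confusion counts given in the abstract (the technologists correctly classified 28 of 42 "needing" and 201 of 208 "not needing" studies):

```python
def screening_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV for a 'needs rest imaging'
    classifier, the four figures reported in the study above."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Technologist reading: fn = 42 - 28 = 14, fp = 208 - 201 = 7
print(screening_metrics(tp=28, fp=7, tn=201, fn=14))
# -> sensitivity 0.667, specificity 0.966, ppv 0.800, npv 0.935
```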
Microfluidic immunosensor for rapid and highly-sensitive salivary cortisol quantification.
Pinto, V; Sousa, P; Catarino, S O; Correia-Neves, M; Minas, G
2017-04-15
This paper presents a novel poly(dimethylsiloxane) (PDMS) microfluidic immunosensor that integrates a complementary metal-oxide-semiconductor (CMOS) optical detection system for a rapid and highly-sensitive quantification of salivary cortisol. The simple and non-invasive method of saliva sampling provides an interesting alternative to blood, allowing fast sampling at short intervals, which is relevant for many clinical diagnostic applications. The developed approach is based on the covalent immobilization of a coating antibody (Ab), a polyclonal anti-IgG, onto a treated PDMS surface. The coating Ab binds the capture Ab, an IgG specific for cortisol, allowing its correct orientation. Horseradish peroxidase (HRP)-labelled cortisol is added to compete with the cortisol in the sample for the capture Ab binding sites. The HRP-labelled cortisol bound to the capture Ab is measured through the reaction of the HRP enzyme with the tetramethylbenzidine (TMB) substrate. The cortisol quantification is performed by colorimetric detection of HRP-labelled cortisol, through optical absorption at 450 nm, using a CMOS silicon photodiode as the photodetector. Under the optimized conditions presented here, e.g., microfluidic channel geometry, immobilization method and immunoassay conditions, the immunosensor shows a linear range of detection between 0.01 and 20 ng/mL, a limit of detection (LOD) of 18 pg/mL and an analysis time of 35 min, featuring great potential for point-of-care applications requiring continuous monitoring of salivary cortisol levels during a circadian cycle. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Al-Bagawi, A. H.; Ahmad, W.; Saigl, Z. M.; Alwael, H.; Al-Harbi, E. A.; El-Shahawi, M. S.
2017-12-01
The most common problems in the spectrophotometric determination of various complex species originate from background spectral interference. Thus, the present study aimed to overcome the spectral matrix interference for the precise analysis and speciation of mercury(II) in water by dual-wavelength β-correction spectrophotometry using 4-(2-thiazolylazo) resorcinol (TAR) as the chromogenic reagent. The principle was based on measuring the correct absorbance of the formed complex of mercury(II) ions with the TAR reagent at 547 nm (λmax). Under optimized conditions, a linear dynamic range of 0.1-2.0 μg mL−1 with a correlation coefficient (R²) of 0.997 was obtained, with a lower limit of detection (LOD) of 0.024 μg mL−1 and limit of quantification (LOQ) of 0.081 μg mL−1. The values of RSD and relative error (RE) obtained for the β-correction method and single-wavelength spectrophotometry were 1.3, 1.32% and 4.7, 5.9%, respectively. The method was validated in tap and sea water against data obtained from inductively coupled plasma-optical emission spectrometry (ICP-OES) using Student's t and F tests. The developed methodology satisfactorily overcomes the spectral interference in the trace determination and speciation of mercury(II) ions in water.
Sensitivity estimation in time-of-flight list-mode positron emission tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herraiz, J. L.; Sitek, A., E-mail: sarkadiu@gmail.com
Purpose: An accurate quantification of the images in positron emission tomography (PET) requires knowing the actual sensitivity at each voxel, which represents the probability that a positron emitted in that voxel is finally detected as a coincidence of two gamma rays in a pair of detectors in the PET scanner. This sensitivity depends on the characteristics of the acquisition, as it is affected by the attenuation of the annihilation gamma rays in the body and possible variations of the sensitivity of the scanner detectors. In this work, the authors propose a new approach to handle time-of-flight (TOF) list-mode PET data, which allows performing either or both a self-attenuation correction and a self-normalization correction based on emission data only. Methods: The authors derive the theory using a fully Bayesian statistical model of complete data. The authors perform an initial evaluation of algorithms derived from that theory and proposed in this work using numerical 2D list-mode simulations with different TOF resolutions and total numbers of detected coincidences. Effects of randoms and scatter are not simulated. Results: The authors found that the proposed algorithms successfully correct for unknown attenuation and scanner normalization for simulated 2D list-mode TOF-PET data. Conclusions: A new method is presented that can be used for corrections for attenuation and normalization (sensitivity) using TOF list-mode data.
Soriano, Brian D; Tam, Lei-Ting T; Lu, Hsieng S; Valladares, Violeta G
2012-01-01
Recombinant proteins expressed in Escherichia coli are often produced as unfolded, inactive forms accumulated in inclusion bodies. Redox-coupled thiols are typically employed in the refolding process in order to catalyze the formation of correct disulfide bonds at maximal folding efficiency. These thiols and the recombinant proteins can form mixed disulfide bonds to generate thiol-protein adducts. In this work, we apply a fluorescence-based assay for the quantification of cysteine and cysteamine adducts as observed in E. coli-derived proteins. The thiols are released by reduction of the adducted protein, collected and labeled with a fluorescent reagent, 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate. The derivatized thiols are separated by reversed-phase HPLC and can be accurately quantified after method optimization. The estimated thiol content represents the total amount of adducted forms present in the analyzed samples. The limit of quantification (LOQ) was established; specifically, the lowest amount of quantifiable cysteine adduction is 30 picograms and the lowest amount of quantifiable cysteamine adduction is 60 picograms. The assay is useful for quantification of adducts in final purified products as well as in-process samples from various purification steps. The assay indicates that the purification process accomplishes a decrease in cysteine adduction from 0.19 nmol adduct/nmol protein to 0.03 nmol adduct/nmol protein as well as a decrease in cysteamine adduction from 0.24 nmol adduct/nmol protein to 0.14 nmol adduct/nmol protein. Copyright © 2011. Published by Elsevier B.V.
Liu, Chia-Ying; Redheuil, Alban; Ouwerkerk, Ronald; Lima, Joao A. C.; Bluemke, David A.
2011-01-01
Proton MR spectroscopy (1H-MRS) has been used for in vivo quantification of intracellular triglycerides within the sarcolemma. The purpose of this study was to assess whether breath-hold dual-echo in- and out-of-phase MRI at 3.0 T can quantify the fat content of the myocardium. Biases, including T1, T2*, and noise, that confound the calculation of the fat fraction were carefully corrected. Thirty-four of 46 participants had both MRI and MRS data. The fat fractions from MRI showed a strong correlation with fat fractions from MRS (r = 0.78; P < 0.05). The mean myocardial fat fraction for all 34 subjects was 0.7 ± 0.5% (range: 0.11–3%) assessed with MRS and 1.04 ± 0.4% (range: 0.32–2.44%) assessed with in- and out-of-phase MRI (P < 0.05). Scanning times were less than 15 sec for Dixon imaging, plus an additional minute for the acquisition used for calculation, and 15-20 min for MRS. The average postprocessing time was 3 min for MRS and 5 min for MRI, including T2* measurement. We conclude that the dual-echo method provides a rapid means to detect and quantify myocardial fat content in vivo. Correction/adjustment for field inhomogeneity using three or more echoes seems crucial for the dual-echo approach. PMID:20373390
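The dual-echo estimate referenced here has a simple closed form; a minimal sketch, ignoring the T1, T2*, and noise corrections that the study applies on top, illustrates the basic calculation:

```python
import numpy as np

def dual_echo_fat_fraction(s_in, s_out):
    """Uncorrected dual-echo fat fraction from in-phase (s_in) and
    opposed-phase (s_out) signal magnitudes: FF = (IP - OP) / (2 * IP).
    The study additionally corrects for T1 bias, T2* decay, and noise,
    which this sketch omits."""
    s_in = np.asarray(s_in, dtype=float)
    s_out = np.asarray(s_out, dtype=float)
    return (s_in - s_out) / (2.0 * s_in)

# A voxel with 2% fat shows a small in/opposed-phase difference.
print(dual_echo_fat_fraction(100.0, 96.0))   # -> 0.02
```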
NASA Astrophysics Data System (ADS)
Batista Florindo, Joao; Landini, Gabriel; Almeida Filho, Humberto; Martinez Bruno, Odemir
2015-09-01
Here we propose a method for the analysis of the stomata distribution patterns on the surface of plant leaves. We also investigate how light exposure during growth can affect stomata distribution and the plasticity of leaves. Understanding foliar plasticity (the ability of leaves to modify their structural organization to adapt to changing environmental resources) is a fundamental problem in Agricultural and Environmental Sciences. Most published work on quantification of stomata has concentrated on descriptions of their density per unit of leaf area; however, density alone does not provide a complete description of the problem and leaves several unanswered questions (e.g. whether the stomata patterns change across various areas of the leaf, or how the patterns change under varying observational scales). We used two approaches here, namely multiscale fractal dimension and complex networks, as a means to provide a description of the complexity of these distributions. In the experiments, we used 18 samples from the plant Tradescantia zebrina grown under three different conditions (4 hours of artificial light each day, 24 hours of artificial light each day, and sunlight) for a total of 69 days. The network descriptors were capable of correctly discriminating the different conditions in 88% of cases, while the fractal descriptors discriminated 83% of the samples. This is a significant improvement over the correct classification rates achieved when using only stomata density (56% of the samples).
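The multiscale fractal dimension used by the authors is not specified here; as a hedged illustration of the general idea, a basic box-counting estimate on a binary stomata map looks like this:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Generic box-counting estimate of fractal dimension for a binary
    image (True where a stoma is present). Illustrative only; the paper
    uses a multiscale fractal dimension, not necessarily this estimator."""
    counts = []
    for s in sizes:
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # Slope of log(count) vs log(1/size) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(0)
stomata = rng.random((256, 256)) < 0.02   # toy point pattern
print(box_counting_dimension(stomata))
```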
Devonshire, Alison S; O'Sullivan, Denise M; Honeyborne, Isobella; Jones, Gerwyn; Karczmarczyk, Maria; Pavšič, Jernej; Gutteridge, Alice; Milavec, Mojca; Mendoza, Pablo; Schimmel, Heinz; Van Heuverswyn, Fran; Gorton, Rebecca; Cirillo, Daniela Maria; Borroni, Emanuele; Harris, Kathryn; Barnard, Marinus; Heydenrych, Anthenette; Ndusilo, Norah; Wallis, Carole L; Pillay, Keshree; Barry, Thomas; Reddington, Kate; Richter, Elvira; Mozioğlu, Erkan; Akyürek, Sema; Yalçınkaya, Burhanettin; Akgoz, Muslum; Žel, Jana; Foy, Carole A; McHugh, Timothy D; Huggett, Jim F
2016-08-03
Real-time PCR (qPCR) based methods, such as the Xpert MTB/RIF, are increasingly being used to diagnose tuberculosis (TB). While qualitative methods are adequate for diagnosis, the therapeutic monitoring of TB patients requires quantitative methods currently performed using smear microscopy. The potential use of quantitative molecular measurements for therapeutic monitoring has been investigated but findings have been variable and inconclusive. The lack of an adequate reference method and reference materials is a barrier to understanding the source of such disagreement. Digital PCR (dPCR) offers the potential for an accurate method for quantification of specific DNA sequences in reference materials which can be used to evaluate quantitative molecular methods for TB treatment monitoring. To assess a novel approach for the development of quality assurance materials we used dPCR to quantify specific DNA sequences in a range of prototype reference materials and evaluated accuracy between different laboratories and instruments. The materials were then also used to evaluate the quantitative performance of qPCR and Xpert MTB/RIF in eight clinical testing laboratories. dPCR was found to provide results in good agreement with the other methods tested and to be highly reproducible between laboratories without calibration even when using different instruments. When the reference materials were analysed with qPCR and Xpert MTB/RIF by clinical laboratories, all laboratories were able to correctly rank the reference materials according to concentration; however, there was a marked difference in the measured magnitude. TB is a disease where quantification of the pathogen could lead to better patient management, and qPCR methods offer the potential to rapidly perform such analysis. However, our findings suggest that when precisely characterised materials are used to evaluate qPCR methods, the measurement result variation is too high to determine whether molecular quantification of Mycobacterium tuberculosis would provide a clinically useful readout. The methods described in this study provide a means by which the technical performance of quantitative molecular methods can be evaluated independently of clinical variability to improve the accuracy of measurement results. These will assist in ultimately increasing the likelihood that such approaches could be used to improve patient management of TB.
Lin, Shu; Wein, Samuel; Gonzales-Cope, Michelle; Otte, Gabriel L.; Yuan, Zuo-Fei; Afjehi-Sadat, Leila; Maile, Tobias; Berger, Shelley L.; Rush, John; Lill, Jennie R.; Arnott, David; Garcia, Benjamin A.
2014-01-01
To facilitate accurate histone variant and post-translational modification (PTM) quantification via mass spectrometry, we present a library of 93 synthetic peptides using Protein-Aqua™ technology. The library contains 55 peptides representing different modified forms from histone H3 peptides, 23 peptides representing H4 peptides, 5 peptides representing canonical H2A peptides, 8 peptides representing H2A.Z peptides, and peptides for both macroH2A and H2A.X. The PTMs on these peptides include lysine mono- (me1), di- (me2), and tri-methylation (me3); lysine acetylation; arginine me1; serine/threonine phosphorylation; and N-terminal acetylation. The library was subjected to chemical derivatization with propionic anhydride, a widely employed protocol for histone peptide quantification. Subsequently, the detection efficiencies were quantified using mass spectrometry extracted ion chromatograms. The library yields a wide spectrum of detection efficiencies, with more than 1700-fold difference between the peptides with the lowest and highest efficiencies. In this paper, we describe the impact of different modifications on peptide detection efficiencies and provide a resource to correct for detection biases among the 93 histone peptides. In brief, there is no correlation between detection efficiency and molecular weight, hydrophobicity, basicity, or modification type. The same types of modifications may have very different effects on detection efficiencies depending on their positions within a peptide. We also observed antagonistic effects between modifications. In a study of mouse trophoblast stem cells, we utilized the detection efficiencies of the peptide library to correct for histone PTM/variant quantification. For most histone peptides examined, the corrected data did not change the biological conclusions but did alter the relative abundance of these peptides. For a low-abundant histone H2A variant, macroH2A, the corrected data led to a different conclusion than the uncorrected data. The peptide library and detection efficiencies presented here may serve as a resource to facilitate studies in the epigenetics and proteomics fields. PMID:25000943
Hunter, Chad R R N; Klein, Ran; Beanlands, Rob S; deKemp, Robert A
2016-04-01
Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET-CT misalignment. A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. Simulations also indicated that the magnitude of MBF errors resulting from motion in the superior/inferior and anterior/posterior directions was similar (up to 250%). Body motion effects were more detrimental for higher resolution PET imaging (2 vs 10 mm full-width at half-maximum), and for motion occurring during the mid-to-late time-frames. Motion correction of the reconstructed dynamic image series resulted in significant reduction in MBF errors, but did not account for the residual PET-CTAC misalignment artifacts. MBF bias was reduced further using global partial-volume correction, and using dynamic alignment of the PET projection data to the CT scan for accurate attenuation correction during image reconstruction. Patient body motion can produce MBF estimation errors up to 500%. To reduce these errors, new motion correction algorithms must be effective in identifying motion in the left/right direction, and in the mid-to-late time-frames, since these conditions produce the largest errors in MBF, particularly for high resolution PET imaging. Ideally, motion correction should be done before or during image reconstruction to eliminate PET-CTAC misalignment artifacts.
Kellman, Peter; Hansen, Michael S; Nielles-Vallespin, Sonia; Nickander, Jannike; Themudo, Raquel; Ugander, Martin; Xue, Hui
2017-04-07
Quantification of myocardial blood flow requires knowledge of the amount of contrast agent in the myocardial tissue and the arterial input function (AIF) driving the delivery of this contrast agent. Accurate quantification is challenged by the lack of linearity between the measured signal and contrast agent concentration. This work characterizes sources of non-linearity and presents a systematic approach to accurate measurements of contrast agent concentration in both blood and myocardium. A dual sequence approach with separate pulse sequences for AIF and myocardial tissue allowed separate optimization of parameters for blood and myocardium. A systems approach to the overall design was taken to achieve linearity between signal and contrast agent concentration. Conversion of signal intensity values to contrast agent concentration was achieved through a combination of surface coil sensitivity correction, Bloch simulation based look-up table correction, and, in the case of the AIF measurement, correction of T2* losses. Validation of signal correction was performed in phantoms, and values for peak AIF concentration and myocardial flow are provided for 29 normal subjects for rest and adenosine stress. For phantoms, the measured fits were within 5% for both AIF and myocardium. In healthy volunteers the peak [Gd] was 3.5 ± 1.2 mmol/L for stress and 4.4 ± 1.2 mmol/L for rest. The T2* in the left ventricle blood pool at peak AIF was approximately 10 ms. The peak-to-valley ratio was 5.6 for the raw signal intensities without correction, and 8.3 for the look-up-table (LUT) corrected AIF, which represents approximately 48% correction. Without T2* correction the myocardial blood flow estimates are overestimated by approximately 10%. The signal-to-noise ratio of the myocardial signal at peak enhancement (1.5 T) was 17.7 ± 6.6 at stress and the peak [Gd] was 0.49 ± 0.15 mmol/L. The estimated perfusion flow was 3.9 ± 0.38 and 1.03 ± 0.19 ml/min/g using the BTEX model and 3.4 ± 0.39 and 0.95 ± 0.16 ml/min/g using a Fermi model, for stress and rest, respectively. A dual sequence for myocardial perfusion cardiovascular magnetic resonance and AIF measurement has been optimized for quantification of myocardial blood flow. A validation in phantoms was performed to confirm that the signal conversion to gadolinium concentration was linear. The proposed sequence was integrated with a fully automatic in-line solution for pixel-wise mapping of myocardial blood flow and evaluated in adenosine stress and rest studies in N = 29 normal healthy subjects. Reliable perfusion mapping was demonstrated and produced estimates with low variability.
Moncayo, S; Manzoor, S; Rosales, J D; Anzano, J; Caceres, J O
2017-10-01
The present work focuses on the development of a fast and cost-effective method based on Laser Induced Breakdown Spectroscopy (LIBS) for the quality control, traceability and detection of adulteration of milk. Two adulteration cases were studied: a qualitative analysis for the discrimination between different milk blends, and quantification of melamine in adulterated toddler milk powder. Principal Component Analysis (PCA) and neural networks (NN) were used to analyze the LIBS spectra, obtaining a correct classification rate of 98% with 100% robustness. For the quantification of melamine, two methodologies were developed: univariate analysis using the CN emission band, and a multivariate calibration NN model, obtaining correlation coefficient (R²) values of 0.982 and 0.999, respectively. The results of LIBS coupled with chemometric analysis are discussed in terms of the technique's potential use in the food industry to perform quality control of this dairy product. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wolrath, H; Forsum, U; Larsson, P G; Borén, H
2001-11-01
The presence of various amines in vaginal fluid from women with malodorous vaginal discharge has been reported before. Previous investigations have used several techniques to identify the amines. However, an optimized quantification, together with a sensitive analysis method in connection with a diagnostic procedure for vaginal discharge, including the syndrome of bacterial vaginosis as defined by the accepted "gold standard," has not been done before. We now report a sensitive gas chromatographic and mass spectrometric method for identifying the amines isobutylamine, phenethylamine, putrescine, cadaverine, and tyramine in vaginal fluid. We used weighed samples of vaginal fluid to obtain a correct quantification. In addition, a proper diagnosis was obtained using Gram-stained smears of the vaginal fluid that were Nugent scored according to the method of Nugent et al. (R. P. Nugent et al., J. Clin. Microbiol., 29:297-301, 1991). We found that putrescine, cadaverine, and tyramine occurred in high concentrations in vaginal fluid from 24 women with Nugent scores between 7 and 10. These amines either were not found or were found only in very low concentrations in vaginal fluid from women with Nugent scores of 0 to 3. There is a strong correlation between bacterial vaginosis and the presence of putrescine, cadaverine, and tyramine in high concentrations in vaginal fluid.
Monakhova, Yulia B; Randel, Gabriele; Diehl, Bernd W K
2016-09-01
Recent classification of Aloe vera whole-leaf extract by the International Agency for Research on Cancer as a possible carcinogen to humans, as well as the continuous adulteration of authentic A. vera material, have generated renewed interest in controlling A. vera. The existing NMR spectroscopic method for the analysis of A. vera, which is based on a routine developed at Spectral Service, was extended. Apart from aloverose, glucose, malic acid, lactic acid, citric acid, whole-leaf material (WLM), acetic acid, fumaric acid, sodium benzoate, and potassium sorbate, the quantification of Mg(2+), Ca(2+), and fructose is possible with the addition of a Cs-EDTA solution to the sample. The proposed methodology was automated; it includes phasing, baseline correction, deconvolution (based on the Lorentzian function), integration, quantification, and reporting. The NMR method was applied to 41 A. vera preparations in the form of liquid A. vera juice and solid A. vera powder. The advantages of the new NMR methodology over the previous method are discussed. Correlation between the new and standard NMR methodologies was significant for aloverose, glucose, malic acid, lactic acid, citric acid, and WLM (P < 0.0001, R(2) = 0.99). NMR was found to be suitable for the automated simultaneous quantitative determination of 13 parameters in A. vera.
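As an illustration of the deconvolution step, a minimal sketch fits a single Lorentzian line to a toy spectral region with scipy; the automated routine described above additionally handles phasing, baseline correction, and multi-peak spectra:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, area, x0, gamma):
    # Lorentzian line shape with integrated area `area`, center `x0`,
    # and half-width at half-maximum `gamma`.
    return (area / np.pi) * gamma / ((x - x0) ** 2 + gamma ** 2)

# Toy spectral region: one peak plus noise. The actual routine also
# phases and baseline-corrects the spectrum before deconvolution.
rng = np.random.default_rng(0)
x = np.linspace(4.0, 4.4, 400)                        # ppm axis
y = lorentzian(x, 1.0, 4.2, 0.01) + rng.normal(0.0, 0.2, x.size)
popt, _ = curve_fit(lorentzian, x, y, p0=(1.0, 4.2, 0.02))
print("integrated area:", popt[0])                    # proportional to amount
```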
van der Linden, Maarten; Westerlaken, Geertje H A; van der Vlist, Michiel; van Montfrans, Joris; Meyaard, Linde
2017-07-26
A wide variety of microbial and inflammatory factors induce DNA release from neutrophils as neutrophil extracellular traps (NETs). Consensus on the kinetics and mechanism of NET release has been hindered by the lack of distinctive methods to specifically quantify NET release in time. Here, we validate and refine a semi-automatic live imaging approach for quantification of NET release. Importantly, our approach is able to correct for neutrophil input and distinguishes NET release from neutrophil death by other means, aspects that are lacking in many NET quantification methods. Real-time visualization shows that opsonized S. aureus rapidly induces cell death by toxins, while actual NET formation occurs after 90 minutes, similar to the kinetics of NET release by immune complexes and PMA. Inhibition of SYK, PI3K and mTORC2 attenuates NET release upon challenge with physiological stimuli but not with PMA. In contrast, neutrophils from chronic granulomatous disease patients show decreased NET release only in response to PMA. With this refined method, we conclude that NET release in primary human neutrophils is dependent on the SYK-PI3K-mTORC2 pathway and that PMA stimulation should be regarded as mechanistically distinct from NET formation induced by natural triggers.
den Braver, Michiel W; Vermeulen, Nico P E; Commandeur, Jan N M
2017-03-01
Modification of cellular macromolecules by reactive drug metabolites is considered to play an important role in the initiation of tissue injury by many drugs. Detection and identification of reactive intermediates is often performed by analyzing the conjugates formed after trapping by glutathione (GSH). Although the sensitivity of modern mass spectrometric methods is extremely high, absolute quantification of GSH-conjugates is critically dependent on the availability of authentic references. Although 1H NMR is currently the method of choice for quantification of metabolites formed biosynthetically, its intrinsically low sensitivity can be a limiting factor in quantification of GSH-conjugates, which generally are formed at low levels. In the present study, a simple but sensitive and generic method for absolute quantification of GSH-conjugates is presented. The method is based on quantitative alkaline hydrolysis of GSH-conjugates and subsequent quantification of glutamic acid and glycine by HPLC after precolumn derivatization with o-phthaldialdehyde/N-acetylcysteine (OPA/NAC). Because of the lower stability of the glycine OPA/NAC derivative, quantification of the glutamic acid OPA/NAC derivative appeared most suitable for quantification of GSH-conjugates. The novel method was used to quantify the concentrations of GSH-conjugates of diclofenac, clozapine and acetaminophen; quantification was consistent with 1H NMR, but with a more than 100-fold lower detection limit for absolute quantification. Copyright © 2017. Published by Elsevier B.V.
Counteracting structural errors in ensemble forecast of influenza outbreaks.
Pei, Sen; Shaman, Jeffrey
2017-10-13
For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate are substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models.
Motosugi, Utaroh; Hernando, Diego; Wiens, Curtis; Bannas, Peter; Reeder, Scott B.
2017-01-01
Purpose: To determine whether high signal-to-noise ratio (SNR) acquisitions improve the repeatability of liver proton density fat fraction (PDFF) measurements using confounder-corrected chemical shift-encoded magnetic resonance (MR) imaging (CSE-MRI). Materials and Methods: Eleven fat-water phantoms were scanned with 8 different protocols with varying SNR. After repositioning the phantoms, the same scans were repeated to evaluate the test-retest repeatability. Next, an in vivo study was performed with 20 volunteers and 28 patients scheduled for liver magnetic resonance imaging (MRI). Two CSE-MRI protocols with standard and high SNR were repeated to assess test-retest repeatability. MR spectroscopy (MRS)-based PDFF was acquired as a standard of reference. The standard deviation (SD) of the difference (Δ) of PDFF measured in the two repeated scans was used to assess repeatability. The correlation between PDFF from CSE-MRI and MRS was calculated to assess accuracy. The SD of Δ and correlation coefficients of the two protocols (standard- and high-SNR) were compared using the F-test and t-test, respectively. Two reconstruction algorithms (complex-based and magnitude-based) were used for both the phantom and in vivo experiments. Results: The phantom study demonstrated that higher SNR improved the repeatability for both complex- and magnitude-based reconstruction. Similarly, the in vivo study demonstrated that the repeatability of the high-SNR protocol (SD of Δ = 0.53 for complex-based and 0.85 for magnitude-based fitting) was significantly higher than that of the standard-SNR protocol (0.77 for complex-based fitting, P < 0.001; 0.94 for magnitude-based fitting, P = 0.003). No significant difference was observed in the accuracy between standard- and high-SNR protocols. Conclusion: Higher SNR improves the repeatability of fat quantification using confounder-corrected CSE-MRI. PMID:28190853
Dieckmeyer, Michael; Ruschke, Stefan; Eggers, Holger; Kooijman, Hendrik; Rummeny, Ernst J; Kirschke, Jan S; Baum, Thomas; Karampinos, Dimitrios C
2017-10-01
To remove the confounding effect of unsuppressed fat on the imaging-based apparent diffusion coefficient (ADC) of the vertebral bone marrow water component when using spectrally selective fat suppression, and to compare and validate the proposed quantification strategy against diffusion-weighted magnetic resonance spectroscopy (DW-MRS). Twelve subjects underwent diffusion-weighted imaging (DWI) and DW-MRS of the vertebral bone marrow. A theoretical model was developed to take into account and correct the effects of residual fat on ADC, incorporating additional measurements of proton density fat fraction (PDFF) and water T2 (T2w). Uncorrected and corrected DWI-based ADC were compared with DW-MRS-based ADC using the Bland-Altman method. There was a systematic bias equal to 0.118 ± 0.116 × 10⁻³ mm²/s between DWI and DW-MRS when no correction was performed. Taking into account measured PDFF and a constant T2w reduced the bias to 0.006 ± 0.128 × 10⁻³ mm²/s. Using the proposed approach with both individually measured PDFF and T2w reduced both the bias and the limits of agreement between DWI and DW-MRS (0.018 ± 0.065 × 10⁻³ mm²/s). By taking into account the presence of residual fat in a modified signal model that incorporates additional individual measurements of PDFF and T2w, good agreement of imaging-based ADC with MRS-based ADC can be achieved in vertebral bone marrow. Magn Reson Med 78:1432-1441, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
McCracken, Katherine E.; Angus, Scott V.; Reynolds, Kelly A.; Yoon, Jeong-Yeol
2016-06-01
Smartphone image-based sensing of microfluidic paper analytical devices (μPADs) offers low-cost and mobile evaluation of water quality. However, consistent quantification is a challenge due to variable environmental, paper, and lighting conditions, especially across large multi-target μPADs. Compensations must be made for variations between images to achieve reproducible results without a separate lighting enclosure. We thus developed a simple method using triple-reference point normalization and a fast-Fourier transform (FFT)-based pre-processing scheme to quantify consistent reflected light intensity signals under variable lighting and channel conditions. This technique was evaluated using various light sources, lighting angles, imaging backgrounds, and imaging heights. Further testing evaluated its handling of absorbance, quenching, and relative scattering intensity measurements from assays detecting four water contaminants - Cr(VI), total chlorine, caffeine, and E. coli K12 - at similar wavelengths using the green channel of RGB images. Between assays, this algorithm reduced error from μPAD surface inconsistencies and cross-image lighting gradients. Although the algorithm could not completely remove the anomalies arising from point shadows within channels or some non-uniform background reflections, it still afforded order-of-magnitude quantification and stable assay specificity under these conditions, offering one route toward improving smartphone quantification of μPAD assays for in-field water quality monitoring.
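The exact triple-reference scheme is not detailed in the abstract; a hedged sketch of the general idea, fitting a linear map from three on-device reference pads with assumed known values, is:

```python
import numpy as np

def normalize_intensity(sample, refs_measured, refs_true):
    """Sketch of a three-reference-point normalization: fit a linear map
    from measured reference intensities to their known values, then apply
    it to the sample reading. The exact scheme in the paper (and its FFT
    pre-processing step) may differ; this only illustrates the idea of
    correcting per-image lighting with on-device references."""
    a, b = np.polyfit(refs_measured, refs_true, 1)  # least-squares line
    return a * np.asarray(sample) + b

# Example: white/gray/black reference pads read under dim lighting.
print(normalize_intensity(120.0, [200.0, 110.0, 30.0], [255.0, 128.0, 0.0]))
```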
Unice, Kenneth M.; Kreider, Marisa L.; Panko, Julie M.
2012-01-01
Pyrolysis (pyr)-GC/MS analysis of characteristic thermal decomposition fragments has been previously used for qualitative fingerprinting of organic sources in environmental samples. A quantitative pyr-GC/MS method based on characteristic tire polymer pyrolysis products was developed for tread particle quantification in environmental matrices including soil, sediment, and air. The feasibility of quantitative pyr-GC/MS analysis of tread was confirmed in a method evaluation study using artificial soil spiked with known amounts of cryogenically generated tread. Tread concentration determined by blinded analyses was highly correlated (r² ≥ 0.88) with the known tread spike concentration. Two critical refinements to the initial pyrolysis protocol were identified, including use of an internal standard and quantification by the dimeric markers vinylcyclohexene and dipentene, which have good specificity for rubber polymer with no other appreciable environmental sources. A novel use of deuterated internal standards of similar polymeric structure was developed to correct the variable analyte recovery caused by sample size, matrix effects, and ion source variability. The resultant quantitative pyr-GC/MS protocol is reliable and transferable between laboratories. PMID:23202830
Choi, Sol Ji; Jung, Mun Yhung
2017-04-01
We have developed a simple and fast sample preparation technique, in combination with gas chromatography-tandem mass spectrometry (GC-MS/MS), for the quantification of 2-methylimidazole (2-MeI) and 4-methylimidazole (4-MeI) in colas and dark beers. The conventional sample preparation technique for GC-MS requires laborious and time-consuming steps consisting of sample concentration, pH adjustment, ion pair extraction, centrifugation, back-extraction, centrifugation, derivatization, and extraction. Our sample preparation technique consists of only 2 steps (in situ derivatization and extraction) and requires less than 3 min. This method provided high linearity, low limit of detection and limit of quantification, high recovery, and high intra- and interday repeatability. It was found that the internal standard method with a diluted stable isotope (4-MeI-d6) and 2-ethylimidazole (2-EI) could not correctly compensate for the matrix effects. Thus, the standard addition technique was used for the quantification of 2- and 4-MeI. The established method was successfully applied to colas and dark beers for the determination of 2-MeI and 4-MeI. The 4-MeI contents in colas and dark beers ranged from 8 to 319 μg/L and from trace to 417 μg/L, respectively. A small quantity (0 to 8 μg/L) of 2-MeI was found only in dark beers. The contents of 4-MeI (22 μg/L) in colas obtained from fast food restaurants were significantly lower than those (177 μg/L) in canned or bottled colas. © 2017 Institute of Food Technologists®.
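Standard addition quantifies the unknown by spiking known analyte amounts and extrapolating the response line to zero signal; a minimal sketch with hypothetical spike data:

```python
import numpy as np

def standard_addition_conc(added, signal):
    """Regress signal on spiked concentration; for a positive slope m and
    intercept b, the unknown's concentration is the magnitude of the
    x-intercept, i.e. b / m."""
    m, b = np.polyfit(added, signal, 1)
    return b / m

# Hypothetical spikes of 0-30 ug/L of 4-MeI added to a cola extract.
added = np.array([0.0, 10.0, 20.0, 30.0])
signal = np.array([4.1, 6.0, 8.2, 10.1])     # detector response
print(f"estimated 4-MeI: {standard_addition_conc(added, signal):.1f} ug/L")
```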
Beekman, Chantal; Janson, Anneke A; Baghat, Aabed; van Deutekom, Judith C; Datson, Nicole A
2018-01-01
Duchenne muscular dystrophy (DMD) is a neuromuscular disease characterized by progressive weakness of the skeletal and cardiac muscles. This X-linked disorder is caused by open reading frame disrupting mutations in the DMD gene, resulting in strong reduction or complete absence of dystrophin protein. In order to use dystrophin as a supportive or even surrogate biomarker in clinical studies on investigational drugs aiming at correcting the primary cause of the disease, the ability to reliably quantify dystrophin expression in muscle biopsies of DMD patients pre- and post-treatment is essential. Here we demonstrate the application of the ProteinSimple capillary immunoassay (Wes) method, a gel- and blot-free method requiring less sample, antibody and time to run than a conventional Western blot assay. We optimized dystrophin quantification by Wes using 2 different antibodies and found it to be highly sensitive, reproducible and quantitative over a large dynamic range. Using a healthy control muscle sample as a reference and α-actinin as a protein loading/muscle content control, a panel of skeletal muscle samples consisting of 31 healthy controls, 25 Becker muscular dystrophy (BMD) and 17 DMD samples was subjected to Wes analysis. In healthy controls dystrophin levels varied 3- to 5-fold between the highest and lowest muscle samples, with the reference sample representing the average of all 31 samples. In BMD muscle samples dystrophin levels ranged from 10% to 90%, with an average of 33% of the healthy muscle average, while for the DMD samples the average dystrophin level was 1.3%, ranging from 0.7% to 7% of the healthy muscle average. In conclusion, Wes is a suitable, efficient and reliable method for quantification of dystrophin expression as a biomarker in DMD clinical drug development.
Morphology supporting function: attenuation correction for SPECT/CT, PET/CT, and PET/MR imaging
Lee, Tzu C.; Alessio, Adam M.; Miyaoka, Robert M.; Kinahan, Paul E.
2017-01-01
Both SPECT, and in particular PET, are unique in medical imaging for their high sensitivity and direct link to a physical quantity, i.e. radiotracer concentration. This gives PET and SPECT imaging unique capabilities for accurately monitoring disease activity for the purposes of clinical management or therapy development. However, to achieve a direct quantitative connection between the underlying radiotracer concentration and the reconstructed image values several confounding physical effects have to be estimated, notably photon attenuation and scatter. With the advent of dual-modality SPECT/CT, PET/CT, and PET/MR scanners, the complementary CT or MR image data can enable these corrections, although there are unique challenges for each combination. This review covers the basic physics underlying photon attenuation and scatter and summarizes technical considerations for multimodal imaging with regard to PET and SPECT quantification and methods to address the challenges for each multimodal combination. PMID:26576737
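For photon attenuation specifically, the correction for a single line of response reduces to the exponential of a line integral of attenuation coefficients; a minimal sketch under assumed 511 keV soft-tissue values:

```python
import numpy as np

def attenuation_correction_factor(mu, step_cm):
    """Attenuation correction factor for one PET line of response (LOR):
    the detected coincidence rate is reduced by exp(-integral of mu along
    the LOR), so the correction multiplies measured counts by the inverse.
    `mu` holds 511 keV linear attenuation coefficients (1/cm) sampled at
    `step_cm` intervals along the LOR (e.g., derived from CT or MR)."""
    line_integral = np.sum(mu) * step_cm
    return np.exp(line_integral)

# Example: 20 cm of soft tissue (mu ~ 0.096 /cm at 511 keV, an assumed
# textbook value).
mu_samples = np.full(200, 0.096)       # 200 samples, 0.1 cm apart
print(attenuation_correction_factor(mu_samples, 0.1))   # ~ e^1.92 = 6.8
```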
Quantifying color variation: Improved formulas for calculating hue with segment classification.
Smith, Stacey D
2014-03-01
Differences in color form a major component of biological variation, and quantifying these differences is the first step to understanding their evolutionary and ecological importance. One common method for measuring color variation is segment classification, which uses three variables (chroma, hue, and brightness) to describe the height and shape of reflectance curves. This study provides new formulas for calculating hue (the variable that describes the "type" of color) to give correct values in all regions of color space. • Reflectance spectra were obtained from the literature, and chroma, hue, and brightness were computed for each spectrum using the original formulas as well as the new formulas. Only the new formulas result in correct values in the blue-green portion of color space. • Use of the new formulas for calculating hue will result in more accurate color quantification for a broad range of biological applications.
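A generic sketch of segment classification conveys the idea; using atan2 for hue is quadrant-correct by construction, though the paper's corrected formulas may differ in detail:

```python
import numpy as np

def segment_classification(reflectance):
    """Generic Endler-style segment classification sketch: split the
    visible spectrum into four equal segments (B1..B4) and compute
    brightness, chroma, and hue. atan2 gives a quadrant-correct hue in
    all regions of color space; treat this only as an illustration of
    the approach, not the paper's exact corrected formulas."""
    r = np.asarray(reflectance, dtype=float)
    b1, b2, b3, b4 = (seg.sum() for seg in np.array_split(r, 4))
    brightness = r.sum()
    lm = (b4 - b2) / brightness      # long-minus-medium wavelength axis
    ms = (b3 - b1) / brightness      # medium-minus-short wavelength axis
    chroma = np.hypot(lm, ms)
    hue = np.degrees(np.arctan2(lm, ms)) % 360.0
    return brightness, chroma, hue

wl = np.linspace(400, 700, 100)              # nm, for the toy spectrum
refl = np.exp(-((wl - 650.0) / 40.0) ** 2)   # toy "red" reflectance curve
print(segment_classification(refl))
```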
High-throughput real-time quantitative reverse transcription PCR.
Bookout, Angie L; Cummins, Carolyn L; Mangelsdorf, David J; Pesola, Jean M; Kramer, Martha F
2006-02-01
Extensive detail on the application of the real-time quantitative polymerase chain reaction (QPCR) for the analysis of gene expression is provided in this unit. The protocols are designed for high-throughput, 384-well-format instruments, such as the Applied Biosystems 7900HT, but may be modified to suit any real-time PCR instrument. QPCR primer and probe design and validation are discussed, and three relative quantitation methods are described: the standard curve method, the efficiency-corrected ΔCt method, and the comparative cycle time, or ΔΔCt, method. In addition, a method is provided for absolute quantification of RNA in unknown samples. RNA standards are subjected to RT-PCR in the same manner as the experimental samples, thus accounting for the reaction efficiencies of both procedures. This protocol describes the production and quantitation of synthetic RNA molecules for real-time and non-real-time RT-PCR applications.
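The comparative ΔΔCt method, for example, assumes near-perfect amplification efficiency, so relative expression is 2 raised to the negative ΔΔCt; a minimal sketch:

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Comparative cycle time (delta-delta Ct) method, assuming ~100%
    PCR efficiency (the amplicon doubles each cycle): fold change =
    2 ** -((Ct_tgt,tr - Ct_ref,tr) - (Ct_tgt,ctl - Ct_ref,ctl))."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(d_ct_treated - d_ct_control)

# Example: target drops ~4-fold relative to a reference gene control.
print(ddct_fold_change(26.0, 15.0, 24.0, 15.0))   # -> 0.25
```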
[Archaeology and criminology--Strengths and weaknesses of interdisciplinary cooperation].
Bachhiesl, Christian
2015-01-01
Interdisciplinary cooperation of archaeology and criminology is often focussed on the scientific methods applied in both fields of knowledge. In combination with the humanistic methods traditionally used in archaeology, the finding of facts can be enormously increased and the subsequent hermeneutic deduction of human behaviour in the past can take place on a more solid basis. Thus, interdisciplinary cooperation offers direct and indirect advantages. But it can also cause epistemological problems, if the weaknesses and limits of one method are to be corrected by applying methods used in other disciplines. This may result in the application of methods unsuitable for the problem to be investigated so that, in a way, the methodological and epistemological weaknesses of two disciplines potentiate each other. An example of this effect is the quantification of qualia. These epistemological reflections are compared with the interdisciplinary approach using the concrete case of the "Eulau Crime Scene".
Adamski, Mateusz G; Gumann, Patryk; Baird, Alison E
2014-01-01
Over the past decade rapid advances have occurred in the understanding of RNA expression and its regulation. Quantitative polymerase chain reaction (qPCR) has become the gold standard for quantifying gene expression. Microfluidic next-generation, high-throughput qPCR now permits the detection of transcript copy number in thousands of reactions simultaneously, dramatically increasing the sensitivity over standard qPCR. Here we present a gene expression analysis method applicable to both standard qPCR and high-throughput qPCR. This technique is adjusted to the input sample quantity (e.g., the number of cells) and is independent of control gene expression. It is efficiency-corrected and, with the use of a universal reference sample (commercial complementary DNA (cDNA)), permits the normalization of results between different batches and between different instruments--regardless of potential differences in transcript amplification efficiency. Modifications of the input quantity method include (1) the achievement of absolute quantification and (2) a non-efficiency-corrected analysis. When compared to other commonly used algorithms, the input quantity method proved to be valid. This method is of particular value for clinical studies of whole blood and circulating leukocytes, where cell counts are readily available.
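A hedged sketch of an efficiency-corrected, input-adjusted measure of this kind (illustrative of the approach described, not the authors' exact algorithm):

```python
def input_quantity_expression(ct_sample, ct_reference, efficiency,
                              cells_sample, cells_reference):
    """Sketch of an efficiency-corrected, input-adjusted measure: express
    the sample relative to a universal reference cDNA using the assay's
    measured amplification efficiency E (fold amplification per cycle,
    2.0 when perfect), then scale by the cell counts used as input.
    Illustrative only; the published algorithm may differ in detail."""
    relative_expression = efficiency ** (ct_reference - ct_sample)
    return relative_expression * (cells_reference / cells_sample)

# Example: sample amplifies 2 cycles earlier than the reference cDNA but
# used twice the cell input; assay efficiency measured at 1.95.
print(input_quantity_expression(24.0, 26.0, 1.95, 2e5, 1e5))  # ~1.9
```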
Simultaneous quantitative analysis of nine vitamin D compounds in human blood using LC-MS/MS.
Abu Kassim, Nur Sofiah; Gomes, Fabio P; Shaw, Paul Nicholas; Hewavitharana, Amitha K
2016-01-01
It has been suggested that each member of the family of vitamin D compounds may have different function(s). Therefore, selective quantification of each compound is important in clinical research. Attempts at the development and validation of a method for the simultaneous determination of 12 vitamin D compounds in human blood, using precolumn derivatization followed by LC-MS/MS, are described. Internal standard calibration with 12 stable-isotope-labeled analogs was used to correct for matrix effects in the MS detector. Nine vitamin D compounds were quantifiable in blood samples, with detection limits at femtomole levels. Serum (compared with plasma) was found to be a more suitable sample type, and protein precipitation (compared with saponification) a more effective extraction method, for the vitamin D assay.
Embedded Model Error Representation and Propagation in Climate Models
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.
2017-12-01
Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about the representation and quantification of structural or model errors. The lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than added as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Moreover, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in the UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.
Yanq, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu
2015-02-01
To develop a more precise and accurate method, and to identify a procedure for measuring whether an acupoint has been correctly located. On the face, we used acupoint locations from different acupuncture experts and obtained the most precise and accurate values of acupoint location based on a consistency information fusion algorithm, through a virtual simulation of the facial orientation coordinate system. Because of inconsistencies in each acupuncture expert's original data, systematic error affects the general weight calculation. First, we corrected each expert's own systematic error in acupoint location, to obtain a rational quantification of the degree of consistent support for each expert's acupoint location and to obtain pointwise variable-precision fusion results, raising the fusion of every expert's acupoint location error to pointwise variable precision. Then, we more effectively used the measured characteristics of the different acupuncture experts' acupoint locations, to improve the efficiency of measurement information utilization and the precision and accuracy of acupoint location. Based on applying the consistency-matrix pointwise fusion method to the acupuncture experts' acupoint location values, each expert's acupoint location information could be calculated, and the most precise and accurate values of each expert's acupoint location could be obtained.
Gong, Kuang; Yang, Jaewon; Kim, Kyungsang; El Fakhri, Georges; Seo, Youngho; Li, Quanzheng
2018-05-23
Positron Emission Tomography (PET) is a functional imaging modality widely used in neuroscience studies. To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, PET attenuation correction is challenging because Magnetic Resonance (MR) images do not reflect attenuation coefficients directly. To address this issue, we present deep neural network methods to derive the continuous attenuation coefficients for brain PET imaging from MR images. With only Dixon MR images as the network input, the existing U-net structure was adopted, and analysis using forty patient data sets shows that it is superior to other Dixon-based methods. When both Dixon and zero echo time (ZTE) images are available, we propose a modified U-net structure, named GroupU-net, to make efficient use of both Dixon and ZTE information through group convolution modules as the network goes deeper. Quantitative analysis based on fourteen real patient data sets demonstrates that both network approaches perform better than the standard methods, and that the proposed network structure further reduces the PET quantification error compared to the U-net structure. © 2018 Institute of Physics and Engineering in Medicine.
Long term measurement of lake evaporation using a pontoon mounted Eddy Covariance system
NASA Astrophysics Data System (ADS)
McGowan, H. A.; McGloin, R.; McJannet, D.; Burn, S.
2011-12-01
Accurate quantification of evaporation from water storages is essential for the design of water management and allocation policy that aims to balance demands for water without compromising the sustainability of future water resources, particularly during periods of prolonged and severe drought. Precise measurement of evaporation from lakes and dams, however, presents significant research challenges. These include the design and installation of measurement platforms that can withstand a range of wind and wave conditions, accurate determination of the evaporation measurement footprint, and the influence of changing water levels. In this paper we present results from a two-year-long deployment of a pontoon-mounted Eddy Covariance (EC) system on a 17.2 ha irrigation reservoir in southeast Queensland, Australia. The EC unit included a CSAT-3 sonic anemometer (Campbell Scientific, Utah, United States) and a Li-Cor CS7500 open-path H2O/CO2 infrared gas analyzer (LiCor, Nebraska, United States) at a height of 2.2 m, a net radiometer (CNR1, Kipp & Zonen, Netherlands) at a height of 1.2 m, and a humidity and temperature probe (HMP45C, Vaisala, Finland) at 2.3 m. The EC unit was controlled by a Campbell Scientific CR3000 data logger, with flux measurements made at 10 Hz and block-averaged values logged every 15 minutes. Power to the EC system was from mounted solar panels that charged deep-cycle lead-acid batteries, while communication was via a cellphone data link. The pontoon was fitted with a weighted central beam and gimbal ring system that allowed self-levelling of the instrumentation and minimized dynamic influences on measurements (McGowan et al 2010; Wiebe et al 2011). EC measurements were corrected for tilt errors using the double rotation method for coordinate rotation described by Wilczak et al. (2001). High and low frequency attenuation of the measured co-spectrum was corrected using Massman's (2000) method for estimating frequency response corrections, while measurements were corrected for density fluctuations using the method of Webb-Pearman-Leuning (Webb et al. 1980). The evaporation measurement footprint over the reservoir was determined using the SCADIS one-and-a-half-order turbulence closure footprint model (Sogachev and Lloyd, 2004). Comparison of EC-measured evaporation rates shows excellent agreement with independent measurement of evaporation by scintillometer under a wide range of conditions (McJannet et al 2011). They confirm that pontoon-mounted EC systems offer a robust, highly portable, reliable and cost-effective approach for accurate quantification of evaporation from reservoirs.
Highly multiplexed targeted proteomics using precise control of peptide retention time.
Gallien, Sebastien; Peterman, Scott; Kiyonami, Reiko; Souady, Jamal; Duriez, Elodie; Schoen, Alan; Domon, Bruno
2012-04-01
Large-scale proteomics applications using SRM analysis on triple quadrupole mass spectrometers present new challenges to LC-MS/MS experimental design. Despite the automation of building large-scale LC-SRM methods, the increased number of targeted peptides can compromise the balance between sensitivity and selectivity. To facilitate large target numbers, time-scheduled SRM transition acquisition is performed. Previously published results have demonstrated that incorporation of a well-characterized set of synthetic peptides enables chromatographic characterization of the elution profile for most endogenous peptides. We have extended this application of peptide trainer kits not only to build SRM methods but to facilitate real-time elution profile characterization that enables automated adjustment of the scheduled detection windows. Incorporation of dynamic retention time adjustments better facilitates targeted assays lasting several days without the need for constant supervision. This paper provides an overview of how the dynamic retention correction approach identifies and corrects for commonly observed LC variations. This adjustment dramatically improves robustness in targeted discovery experiments as well as routine quantification experiments. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Li, Ming; Josephs, Ralf D; Daireaux, Adeline; Choteau, Tiphaine; Westwood, Steven; Wielgosz, Robert I; Li, Hongmei
2018-06-04
Peptides are an increasingly important group of biomarkers and pharmaceuticals. The accurate purity characterization of peptide calibrators is critical for the development of reference measurement systems for laboratory medicine and quality control of pharmaceuticals. The peptides used for these purposes are increasingly produced through peptide synthesis. Various approaches (for example mass balance, amino acid analysis, qNMR, and nitrogen determination) can be applied to accurately value assign the purity of peptide calibrators. However, all purity assessment approaches require a correction for structurally related peptide impurities in order to avoid biases. Liquid chromatography coupled to high resolution mass spectrometry (LC-hrMS) has become the key technique for the identification and accurate quantification of structurally related peptide impurities in intact peptide calibrator materials. In this study, LC-hrMS-based methods were developed and validated in-house for the identification and quantification of structurally related peptide impurities in a synthetic human C-peptide (hCP) material, which served as a study material for an international comparison looking at the competencies of laboratories to perform peptide purity mass fraction assignments. More than 65 impurities were identified, confirmed, and accurately quantified by using LC-hrMS. The total mass fraction of all structurally related peptide impurities in the hCP study material was estimated to be 83.3 mg/g with an associated expanded uncertainty of 3.0 mg/g (k = 2). The calibration hierarchy concept used for the quantification of individual impurities is described in detail.
Uncertainty quantification in Eulerian-Lagrangian models for particle-laden flows
NASA Astrophysics Data System (ADS)
Fountoulakis, Vasileios; Jacobs, Gustaaf; Udaykumar, Hs
2017-11-01
A common approach to ameliorate the computational burden in simulations of particle-laden flows is to use a point-particle based Eulerian-Lagrangian model, which traces individual particles in their Lagrangian frame and models particles as mathematical points. The particle motion is determined by the Stokes drag law, which is empirically corrected for Reynolds number, Mach number, and other parameters. The empirical corrections are subject to uncertainty. Treating them as random variables renders the coupled system of PDEs and ODEs stochastic. An approach to quantify the propagation of this parametric uncertainty to the particle solution variables is proposed. The approach is based on averaging of the governing equations and allows for estimation of the first moments of the quantities of interest. We demonstrate the feasibility of the proposed uncertainty quantification methodology on one-dimensional linear and nonlinear Eulerian-Lagrangian systems. This research is supported by AFOSR under Grant FA9550-16-1-0008.
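As a concrete illustration of the setting described above, the sketch below integrates a point-particle momentum equation with an uncertain drag correction factor and propagates that uncertainty by plain Monte Carlo sampling; the drag form, parameter values, and sampling approach are illustrative assumptions, not the averaging method proposed in the paper.

```python
import numpy as np

# Point particle with corrected Stokes drag:
#   du_p/dt = f * (u_f - u_p) / tau_p,
# where the empirical correction f is treated as a random variable.

def particle_velocity(f_factor, u_fluid=1.0, tau_p=0.05, t_end=0.05, dt=1e-4):
    """Integrate the particle momentum ODE with explicit Euler."""
    u_p, t = 0.0, 0.0
    while t < t_end:
        u_p += dt * f_factor * (u_fluid - u_p) / tau_p
        t += dt
    return u_p

# Sample the uncertain drag correction and estimate the first two
# moments of the particle velocity at t_end.
rng = np.random.default_rng(0)
samples = rng.normal(loc=1.0, scale=0.1, size=2000)  # uncertain f
u_samples = np.array([particle_velocity(f) for f in samples])
print(f"mean u_p = {u_samples.mean():.4f}, std = {u_samples.std():.4f}")
```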
Scanning electron microscope image signal-to-noise ratio monitoring for micro-nanomanipulation.
Marturi, Naresh; Dembélé, Sounkalo; Piat, Nadine
2014-01-01
As an imaging system, the scanning electron microscope (SEM) plays an important role in autonomous micro-nanomanipulation applications. In the sub-micrometer range and at high scanning speeds, the images produced by the SEM are noisy and need to be evaluated or corrected beforehand. In this article, the quality of images produced by a tungsten gun SEM has been evaluated by quantifying the image signal-to-noise ratio (SNR). To determine the SNR, an efficient online monitoring method was developed based on nonlinear filtering of a single image. Using this method, the quality of images produced by a tungsten gun SEM was monitored under different experimental conditions. The results demonstrate the method's efficiency in SNR quantification and illustrate how imaging quality evolves in the SEM. © 2014 Wiley Periodicals, Inc.
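The following is a minimal sketch of the single-image idea: a nonlinear (median) filter supplies a noise-free signal estimate, the residual is treated as noise, and SNR follows as a variance ratio. This is an illustrative stand-in under stated assumptions, not the authors' exact filtering algorithm.

```python
import numpy as np
from scipy.ndimage import median_filter

def estimate_snr(image, size=3):
    """Single-image SNR via nonlinear filtering: the median-filtered
    image approximates the noise-free signal; the residual is noise."""
    image = image.astype(float)
    signal = median_filter(image, size=size)  # nonlinear signal estimate
    noise = image - signal                    # residual treated as noise
    return signal.var() / noise.var()

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 255, 256), (256, 1))   # synthetic ramp image
noisy = clean + rng.normal(scale=10.0, size=clean.shape)
print(f"estimated SNR: {estimate_snr(noisy):.1f}")
```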
Analyzing the uncertainty of suspended sediment load prediction using sequential data assimilation
NASA Astrophysics Data System (ADS)
Leisenring, Marc; Moradkhani, Hamid
2012-10-01
A first step in understanding the impacts of sediment and controlling the sources of sediment is to quantify the mass loading. Since mass loading is the product of flow and concentration, the quantification of loads first requires the quantification of runoff volume. Using the National Weather Service's SNOW-17 and the Sacramento Soil Moisture Accounting (SAC-SMA) models, this study employed particle-filter-based Bayesian data assimilation methods to predict seasonal snow water equivalent (SWE) and runoff within a small watershed in the Lake Tahoe Basin located in California, USA. A procedure was developed to scale the variance multipliers (a.k.a. hyperparameters) for model parameters and predictions based on the accuracy of the mean predictions relative to the ensemble spread. In addition, an online bias correction algorithm based on the lagged average bias was implemented to detect and correct for systematic bias in model forecasts prior to updating with the particle filter. Both of these methods significantly improved the performance of the particle filter without requiring excessively wide prediction bounds. The flow ensemble was linked to a nonlinear regression model that was used to predict suspended sediment concentrations (SSCs) based on runoff rate and time of year. Runoff volumes and SSCs were then combined to produce an ensemble of suspended sediment load estimates. Annual suspended sediment loads for the 5 years of simulation were finally computed along with 95% prediction intervals that account for uncertainty in both the SSC regression model and the flow rate estimates. Understanding the uncertainty associated with annual suspended sediment load predictions is critical for making sound watershed management decisions aimed at maintaining the exceptional clarity of Lake Tahoe. The computational methods developed and applied in this research could assist with similar studies where it is important to quantify the predictive uncertainty of pollutant load estimates.
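A minimal sketch of the online bias correction idea follows, assuming the bias estimate is simply the mean of the last few forecast errors; the window length and numbers are illustrative assumptions, not the study's configuration.

```python
import numpy as np

def corrected_forecast(raw_forecast, past_errors, lag=5):
    """Remove systematic bias estimated as the lagged average of the
    most recent raw forecast errors (forecast minus observation)."""
    if len(past_errors) == 0:
        return raw_forecast
    bias = np.mean(past_errors[-lag:])
    return raw_forecast - bias

errors = []  # running record of raw forecast errors
for raw, obs in [(10.2, 9.0), (11.1, 9.8), (10.7, 9.5), (10.9, 9.9)]:
    fc = corrected_forecast(raw, errors)
    errors.append(raw - obs)   # update the bias record with the raw error
    print(f"raw={raw:.1f}  corrected={fc:.2f}  obs={obs:.1f}")
```

The corrected forecast, rather than the raw one, would then enter the particle filter update.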
Gambarota, Giulio; Hitti, Eric; Leporq, Benjamin; Saint-Jalmes, Hervé; Beuf, Olivier
2017-01-01
Tissue perfusion measurements using intravoxel incoherent motion (IVIM) diffusion-MRI are of interest for investigations of liver pathologies. A confounding factor in perfusion quantification is the partial volume between liver tissue and large blood vessels. The aim of this study was to assess and correct for this partial volume effect in the estimation of the perfusion fraction. MRI experiments were performed at 3 Tesla with a diffusion-MRI sequence at 12 b-values. Diffusion signal decays in liver were analyzed using the non-negative least squares (NNLS) method and the biexponential fitting approach. In some voxels, the NNLS analysis yielded a very fast-decaying component that was assigned to partial volume with the blood flowing in large vessels. Partial volume correction was performed by biexponential curve fitting, where the first data point (b = 0 s/mm²) was eliminated in voxels with a very fast-decaying component. Biexponential fitting with partial volume correction yielded parametric maps with perfusion fraction values smaller than biexponential fitting without partial volume correction. The results of the current study indicate that the NNLS analysis in combination with biexponential curve fitting makes it possible to correct for partial volume effects originating from blood flow in IVIM perfusion fraction measurements. Magn Reson Med 77:310-317, 2017. © 2016 Wiley Periodicals, Inc.
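A minimal sketch of the correction step is given below: in a voxel flagged as containing a very fast-decaying component, the b = 0 s/mm² point is dropped before fitting the normalized biexponential IVIM model S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D). The b-values, parameters, and noiseless synthetic data are illustrative assumptions, not the study's protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Normalized biexponential IVIM signal model."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

b_values = np.array([0, 10, 20, 40, 80, 150, 300, 500, 700, 800, 900, 1000.0])
signal = ivim(b_values, 0.15, 0.05, 0.0012)  # synthetic normalized decay

# Partial volume correction: drop the first data point (b = 0) in a
# voxel with a very fast-decaying component, then fit the remainder.
popt, _ = curve_fit(ivim, b_values[1:], signal[1:],
                    p0=(0.1, 0.03, 0.001),
                    bounds=([0, 0.005, 1e-4], [0.6, 0.5, 0.005]))
print(f"perfusion fraction f = {popt[0]:.3f}")
```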
Tran, Ngoc Han; Chen, Hongjie; Do, Thanh Van; Reinhard, Martin; Ngo, Huu Hao; He, Yiliang; Gin, Karina Yew-Hoong
2016-10-01
A robust and sensitive analytical method was developed for the simultaneous analysis of 21 target antimicrobials in different environmental water samples. Both single-SPE and tandem-SPE cartridge systems were investigated to simultaneously extract multiple classes of antimicrobials. Experimental results showed good extraction efficiencies (84.5-105.6%) for the vast majority of the target analytes when extraction was performed using the tandem SPE cartridge (SB+HR-X) system at an extraction pH of 3.0. HPLC-MS/MS parameters were optimized for simultaneous analysis of all the target analytes in a single injection. Quantification of target antimicrobials in water samples was accomplished using 15 isotopically labeled internal standards (ILISs), which allowed efficient compensation for losses of target analytes during sample preparation, correction of matrix effects during HPLC-MS/MS, and correction of instrument fluctuations in MS/MS signal intensity. The method quantification limit (MQL) for most target analytes based on SPE was below 5 ng/L for surface waters, 10 ng/L for treated wastewater effluents, and 15 ng/L for raw wastewater. The method was successfully applied to detect and quantify the occurrence of the target analytes in raw influent, treated effluent, and surface water samples. Copyright © 2016 Elsevier B.V. All rights reserved.
Performance of a malaria microscopy image analysis slide reading device
2012-01-01
Background Viewing Plasmodium in Romanovsky-stained blood has long been considered the gold standard for diagnosis and a cornerstone in management of the disease. This method, however, requires a subjective evaluation by trained, experienced diagnosticians, and establishing proficiency of diagnosis is fraught with many challenges. Reported here is an evaluation of a diagnostic system (a “device” consisting of a microscope, a scanner, and a computer algorithm) that evaluates scanned images of standard Giemsa-stained slides and reports species and parasitaemia. Methods The device was challenged with two independent tests: a 55-slide expert slide reading test, the composition of which has been published by the World Health Organization (“WHO55” test), and a second test in which slides were made from a sample of consenting subjects participating in a malaria incidence survey conducted in Equatorial Guinea (EGMIS test). These subjects' blood was tested by malaria RDT as well as having the blood smear diagnosis unequivocally determined by a worldwide panel of a minimum of six reference microscopists. Only slides with unequivocal microscopic diagnoses were used for the device challenge, n = 119. Results On the WHO55 test, the device scored a “Level 4” using the WHO published grading scheme. Broken down by more traditional analysis parameters, this result translated to 89% sensitivity and 70% specificity. Species were correctly identified in 61% of the slides, and the quantification of parasites fell within the acceptable range of the validated parasitaemia in 10% of the cases. On the EGMIS test it scored 100% sensitivity and 94% specificity, with 64% of the species correct and 45% of the parasitaemia within an acceptable range. A pooled analysis of the 174 slides used for both tests resulted in an overall 92% sensitivity and 90% specificity, with 61% of species and 19% of quantifications correct. Conclusions In its current manifestation, the device performs at a level comparable to that of many human slide readers. Because its use requires minimal additional equipment and it uses standard stained slides as starting material, its widespread adoption may eliminate the current uncertainty about the quality of microscopic diagnoses worldwide.
Spatial resolution properties of motion-compensated tomographic image reconstruction methods.
Chun, Se Young; Fessler, Jeffrey A
2012-07-01
Many motion-compensated image reconstruction (MCIR) methods have been proposed to correct for subject motion in medical imaging. MCIR methods incorporate motion models to improve image quality by reducing motion artifacts and noise. This paper analyzes the spatial resolution properties of MCIR methods and shows that nonrigid local motion can lead to nonuniform and anisotropic spatial resolution for conventional quadratic regularizers. This undesirable property is akin to the known effects of interactions between heteroscedastic log-likelihoods (e.g., Poisson likelihood) and quadratic regularizers. This effect may lead to quantification errors in small or narrow structures (such as small lesions or rings) of reconstructed images. This paper proposes novel spatial regularization design methods for three different MCIR methods that account for known nonrigid motion. We develop MCIR regularization designs that provide approximately uniform and isotropic spatial resolution and that match a user-specified target spatial resolution. Two-dimensional PET simulations demonstrate the performance and benefits of the proposed spatial regularization design methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhou; Adams, Rachel M; Chourey, Karuna
2012-01-01
A variety of quantitative proteomics methods have been developed, including label-free, metabolic labeling, and isobaric chemical labeling using iTRAQ or TMT. Here, these methods were compared in terms of the depth of proteome coverage, quantification accuracy, precision, and reproducibility using a high-performance hybrid mass spectrometer, LTQ Orbitrap Velos. Our results show that (1) the spectral counting method provides the deepest proteome coverage for identification, but its quantification performance is worse than labeling-based approaches, especially in quantification reproducibility; (2) metabolic labeling and isobaric chemical labeling are capable of accurate, precise, and reproducible quantification and provide deep proteome coverage for quantification, with isobaric chemical labeling surpassing metabolic labeling in terms of quantification precision and reproducibility; and (3) iTRAQ and TMT perform similarly in all aspects compared in the current study using a CID-HCD dual scan configuration. Based on the unique advantages of each method, we provide guidance for selection of the appropriate method for a quantitative proteomics study.
Confidence estimation for quantitative photoacoustic imaging
NASA Astrophysics Data System (ADS)
Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena
2018-02-01
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
NASA Astrophysics Data System (ADS)
Wind, L.; Szymanski, W. W.
2002-06-01
Figure 3 of this paper has not printed correctly. Specifically, the character ψ is missing five times. The correct figure is reproduced below. The electronic version is unaffected. Figure 3. Schematic diagram of the lp detector system. The angle subtended by the cone of light that will be detected is constant and is determined by the focal length of the lens and the radius of the pinhole. To the left of the position indicated by z* the lp geometry behaves in the same way as the open detector geometry.
Quantification of 18F-fluorocholine kinetics in patients with prostate cancer.
Verwer, Eline E; Oprea-Lager, Daniela E; van den Eertwegh, Alfons J M; van Moorselaar, Reindert J A; Windhorst, Albert D; Schwarte, Lothar A; Hendrikse, N Harry; Schuit, Robert C; Hoekstra, Otto S; Lammertsma, Adriaan A; Boellaard, Ronald
2015-03-01
Choline kinase is upregulated in prostate cancer, resulting in increased (18)F-fluoromethylcholine uptake. This study used pharmacokinetic modeling to validate the use of simplified methods for quantification of (18)F-fluoromethylcholine uptake in a routine clinical setting. Forty-minute dynamic PET/CT scans were acquired after injection of 204 ± 9 MBq of (18)F-fluoromethylcholine from 8 patients with histologically proven metastasized prostate cancer. Plasma input functions were obtained using continuous arterial blood sampling as well as image-derived methods. Manual arterial blood samples were used for calibration and for correction of plasma-to-blood ratio and metabolites. Time-activity curves were derived from volumes of interest in all visually detectable lymph node metastases. (18)F-fluoromethylcholine kinetics were studied by nonlinear regression fitting of several single- and 2-tissue plasma input models to the time-activity curves. Model selection was based on the Akaike information criterion and measures of robustness. In addition, the performance of several simplified methods, such as the standardized uptake value (SUV), was assessed. Best fits were obtained using an irreversible compartment model with a blood volume parameter. Parent fractions were 0.12 ± 0.4 after 20 min, necessitating individual metabolite corrections. Correspondence between venous and arterial parent fractions was low, as determined by the intraclass correlation coefficient (0.61). Results for image-derived input functions obtained from volumes of interest in blood-pool structures distant from tissues of high (18)F-fluoromethylcholine uptake correlated well with those for the blood-sampling input functions (R² = 0.83). SUV showed poor correlation with parameters derived from full quantitative kinetic analysis (R² < 0.34). In contrast, lesion activity concentration normalized to the integral of the blood activity concentration over time (SUVAUC) showed good correlation (R² = 0.92 for metabolite-corrected plasma; 0.65 for whole-blood activity concentrations). SUV cannot be used to quantify (18)F-fluoromethylcholine uptake. A clinical compromise could be SUVAUC derived from 2 consecutive static PET scans, one centered on a large blood-pool structure during 0-30 min after injection to obtain the blood activity concentrations and the other a whole-body scan at 30 min after injection to obtain lymph node activity concentrations. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
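To make the contrast between the two simplified metrics concrete, here is a minimal sketch of SUV versus SUVAUC; all activities, doses, and the synthetic blood curve are illustrative assumptions, not patient data from the study.

```python
import numpy as np

def suv(lesion_kbq_ml, injected_mbq, weight_kg):
    """Standardized uptake value, assuming tissue density ~1 g/mL."""
    return lesion_kbq_ml / (injected_mbq * 1000.0 / (weight_kg * 1000.0))

def suv_auc(lesion_kbq_ml, t_min, blood_kbq_ml):
    """Lesion activity normalized to the blood time-activity integral."""
    auc = np.trapz(blood_kbq_ml, t_min)  # kBq*min/mL over 0-30 min
    return lesion_kbq_ml / auc

t = np.linspace(0, 30, 61)               # minutes after injection
blood = 50.0 * np.exp(-t / 8.0) + 5.0    # kBq/mL, synthetic blood curve
print(f"SUV    = {suv(12.0, 200.0, 80.0):.2f}")
print(f"SUVAUC = {suv_auc(12.0, t, blood):.4f} per (kBq*min/mL)")
```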
Bord, N; Crétier, G; Rocca, J-L; Bailly, C; Souchez, J-P
2004-09-01
Alkanolamines such as diethanolamine (DEA) and N-methyldiethanolamine (MDEA) are used in desulfurization processes in crude oil refineries. These compounds may be found in process waters following an accidental contamination. The analysis of alkanolamines in refinery process waters is very difficult due to the high ammonium concentration of the samples. This paper describes a method for the determination of DEA in high-ammonium-concentration refinery process waters using capillary electrophoresis (CE) with indirect UV detection. The same method can be used for the determination of MDEA. Best results were achieved with a background electrolyte (BGE) comprising 10 mM histidine adjusted to pH 5.0 with acetic acid. The development of this electrolyte and the analytical performance are discussed. Quantification was performed using internal standardization, in which triethanolamine (TEA) was used as the internal standard. A matrix effect due to the high ammonium content was highlighted, and standard addition was therefore used. The developed method was characterized in terms of repeatability of migration times and corrected peak areas, linearity, and accuracy. The limits of detection (LODs) and quantification (LOQs) obtained were 0.2 and 0.7 ppm, respectively. The CE method was applied to the determination of DEA or MDEA in refinery process waters spiked with known amounts of analytes, and it gave excellent results, since the uncertainties obtained were 8 and 5%, respectively.
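A minimal sketch of quantification by standard addition follows: the internal-standard-corrected response is regressed on the spiked concentration, and the unknown concentration is the magnitude of the x-intercept, |intercept/slope|. The data are illustrative assumptions, not measurements from the paper.

```python
import numpy as np

spiked = np.array([0.0, 1.0, 2.0, 4.0])        # ppm DEA added to sample
response = np.array([0.42, 0.63, 0.84, 1.26])  # corrected peak area ratio

# Linear fit of response versus spiked concentration.
slope, intercept = np.polyfit(spiked, response, 1)
c_unknown = intercept / slope                  # x-intercept magnitude
print(f"DEA in sample ≈ {c_unknown:.2f} ppm")
```

Because every calibration point is measured in the sample matrix itself, the matrix effect caused by the high ammonium content is built into the slope rather than biasing the result.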
Aghili, Zahra; Nasirizadeh, Navid; Divsalar, Adeleh; Shoeibi, Shahram; Yaghmaei, Parichehreh
2017-09-15
Genetically modified organisms (GMOs) have entered our food chain, and their detection in market products remains a major challenge for scientists. Among the detection/quantification methods developed, electrochemical nanobiosensors have attracted the most attention, as they combine the advantages of nanomaterials, electrochemical methods, and biosensors. In this research, a novel and sensitive electrochemical nanobiosensor for GMO detection/quantification was developed using nanomaterials, namely exfoliated graphene oxide and gold nano-urchins, to modify a screen-printed carbon electrode, together with a specific DNA probe and hematoxylin as the electrochemical indicator. Application times and component concentrations were optimized, and correct assembly of the nanobiosensor was verified by several reliable methods, e.g., field-emission scanning electron microscopy, cyclic voltammetry, and electrochemical impedance spectroscopy. The results showed that the linear range of the sensor was 40.0-1100.0 femtomolar and the limit of detection was calculated as 13.0 femtomolar. The biosensor also showed good selectivity for the target DNA over non-specific sequences, was cost- and time-effective, and could be used in real sample environments of DNA extracted from GMO products, comparing favorably with previously published methods. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Fu, Yi; Yu, Guoqiang; Levine, Douglas A.; Wang, Niya; Shih, Ie-Ming; Zhang, Zhen; Clarke, Robert; Wang, Yue
2015-09-01
Most published copy number datasets on solid tumors were obtained from specimens comprised of mixed cell populations, for which the varying tumor-stroma proportions are unknown or unreported. The inability to correct for signal mixing represents a major limitation on the use of these datasets for subsequent analyses, such as discerning deletion types or detecting driver aberrations. We describe the BACOM2.0 method with enhanced accuracy and functionality to normalize copy number signals, detect deletion types, estimate tumor purity, quantify true copy numbers, and calculate average-ploidy value. While BACOM has been validated and used with promising results, subsequent BACOM analysis of the TCGA ovarian cancer dataset found that the estimated average tumor purity was lower than expected. In this report, we first show that this lowered estimate of tumor purity is the combined result of imprecise signal normalization and parameter estimation. Then, we describe effective allele-specific absolute normalization and quantification methods that can enhance BACOM applications in many biological contexts while in the presence of various confounders. Finally, we discuss the advantages of BACOM in relation to alternative approaches. Here we detail this revised computational approach, BACOM2.0, and validate its performance in real and simulated datasets.
Mato Abad, Virginia; Quirós, Alicia; García-Álvarez, Roberto; Loureiro, Javier Pereira; Alvarez-Linera, Juan; Frank, Ana; Hernández-Tamames, Juan Antonio
2014-01-01
1H-MRS variability increases due to normal aging and also as a result of atrophy in grey and white matter caused by neurodegeneration. In this work, an automatic process was developed to integrate data from spectra and high-resolution anatomical images to quantify metabolites, taking into account tissue partial volumes within the voxel of interest and avoiding the additional spectral acquisitions otherwise required for partial volume correction. To evaluate this method, we used a cohort of 135 subjects (47 male and 88 female, aged between 57 and 99 years) classified into 4 groups: 38 healthy participants, 20 amnesic mild cognitive impairment patients, 22 multi-domain mild cognitive impairment patients, and 55 Alzheimer's disease patients. Our findings suggest that knowing the voxel composition of white and grey matter and cerebrospinal fluid is necessary to avoid partial volume variations in a single-voxel study and to reduce part of the variability found in metabolite quantification, particularly in studies involving elderly patients and neurodegenerative diseases. The proposed method facilitates the use of 1H-MRS techniques in statistical studies of Alzheimer's disease, because it provides more accurate quantitative measurements, reduces inter-subject variability, and improves statistical results when performing group comparisons.
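A minimal sketch of the core correction idea follows: tissue fractions from segmented anatomical images rescale the apparent metabolite concentration, since CSF contributes water but essentially no metabolite signal. The simple rescaling relation and the numbers are illustrative assumptions, not the paper's full quantification pipeline.

```python
def pv_corrected_concentration(c_apparent, f_gm, f_wm, f_csf):
    """Rescale an apparent concentration to the tissue (non-CSF)
    fraction of the MRS voxel; fractions must sum to 1."""
    tissue_fraction = f_gm + f_wm
    assert abs(tissue_fraction + f_csf - 1.0) < 1e-6, "fractions must sum to 1"
    return c_apparent / tissue_fraction

# Voxel segmented as 45% GM, 35% WM, 20% CSF:
print(pv_corrected_concentration(8.0, 0.45, 0.35, 0.20))  # -> 10.0 (a.u.)
```

In an atrophied brain the CSF fraction grows, so skipping this step systematically underestimates metabolite concentrations, which is exactly the elderly-cohort variability the study targets.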
Domingos Alves, Renata; Romero-González, Roberto; López-Ruiz, Rosalía; Jiménez-Medina, M L; Garrido Frenich, Antonia
2016-11-01
An analytical method based on a modified QuPPe (quick polar pesticide) extraction procedure coupled with liquid chromatography-tandem mass spectrometry (LC-MS/MS) was evaluated for the determination of four polar compounds (chlorate, fosetyl-Al, maleic hydrazide, and perchlorate) in nutraceutical products obtained from soy. Experimental extraction conditions, including solvent, acidification, time, and clean-up sorbents, were varied. Acidified acetonitrile (1% formic acid, v/v) was used as the extraction solvent instead of methanol (conventional QuPPe), which yields a doughy mixture that cannot be injected into the LC. Clean-up and derivatization steps were avoided. For analysis, several stationary phases were evaluated, and Hypercarb (porous graphitic carbon) provided the best results. The optimized method was validated; recoveries ranged between 46 and 119%, and correction factors can be used for quantification purposes, bearing in mind that inter-day precision was equal to or lower than 17%. Limits of quantification (LOQs) ranged from 4 to 100 μg kg⁻¹. Soy-based nutraceutical products were analyzed, and chlorate was detected in five samples at concentrations between 63 and 1642 μg kg⁻¹. Graphical abstract: Analysis of polar compounds in soy-based nutraceutical products.
Guo, Yingkun; Zheng, Hairong; Sun, Phillip Zhe
2015-01-01
Chemical exchange saturation transfer (CEST) MRI is a versatile imaging method that probes the chemical exchange between bulk water and exchangeable protons. CEST imaging indirectly detects dilute labile protons via bulk water signal changes following selective saturation of exchangeable protons, which offers substantial sensitivity enhancement and has sparked numerous biomedical applications. Over the past decade, CEST imaging techniques have rapidly evolved due to contributions from multiple domains, including the development of CEST mathematical models, innovative contrast agent designs, sensitive data acquisition schemes, efficient field inhomogeneity correction algorithms, and quantitative CEST (qCEST) analysis. The CEST system that underlies the apparent CEST-weighted effect, however, is complex. The experimentally measurable CEST effect depends not only on parameters such as CEST agent concentration, pH and temperature, but also on relaxation rate, magnetic field strength and more importantly, experimental parameters including repetition time, RF irradiation amplitude and scheme, and image readout. Thorough understanding of the underlying CEST system using qCEST analysis may augment the diagnostic capability of conventional imaging. In this review, we provide a concise explanation of CEST acquisition methods and processing algorithms, including their advantages and limitations, for optimization and quantification of CEST MRI experiments.
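As one concrete example of a routine CEST quantification step, the sketch below computes the magnetization transfer ratio asymmetry, MTRasym(Δω) = [S(-Δω) - S(+Δω)]/S0, from a Z-spectrum; the offsets, signal values, and interpolation are illustrative assumptions rather than data or code from this review.

```python
import numpy as np

offsets = np.array([-4, -3, -2, -1, 0, 1, 2, 3, 4.0])  # ppm from water
z = np.array([0.92, 0.90, 0.88, 0.60, 0.05, 0.58, 0.82, 0.86, 0.90])  # S/S0

def mtr_asym(offsets, z, delta):
    """MTRasym at +/- delta ppm from an S0-normalized Z-spectrum
    (assumes the spectrum is already B0-corrected)."""
    s_neg = np.interp(-delta, offsets, z)
    s_pos = np.interp(+delta, offsets, z)
    return s_neg - s_pos

print(f"MTRasym(3.5 ppm) = {mtr_asym(offsets, z, 3.5):.3f}")
```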
Rondel, Caroline; Marcato-Romain, Claire-Emmanuelle; Girbal-Neuhauser, Elisabeth
2013-05-15
A colorimetric assay based on the conventional anthrone reaction was investigated for specific quantification of uronic acids (UA) in the presence of neutral sugars and/or proteins. Absorbance scanning of glucose (Glu) and glucuronic acid (GlA) was performed after the reaction with anthrone, and a double absorbance reading was made, at 560 nm and at 620 nm, in order to quantify the UA and neutral sugars separately. The assay was implemented on binary or ternary solutions containing Glu, GlA, and bovine serum albumin (BSA) in order to validate its specificity towards sugars and to check for possible interference from other biochemical components such as proteins. Statistical analysis indicated that this assay provided correct quantification of uronic sugars from 50 to 400 mg/l and of neutral sugars from 20 to 80 mg/l, in the presence of proteins at concentrations reaching 600 mg/l. The proposed protocol can be of great interest for the simultaneous determination of uronic and neutral sugars in complex biological samples. In particular, it can be used to correctly quantify the Extracellular Polymeric Substances (EPS) isolated from the biological matrix of many bacterial aggregates, even in the presence of an EPS extractant such as EDTA. Copyright © 2013 Elsevier Ltd. All rights reserved.
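The dual-wavelength reading amounts to a two-component linear system: with calibration coefficients for GlA and Glu at 560 nm and 620 nm, the two absorbances yield both concentrations at once. The coefficients and absorbances in this sketch are illustrative assumptions, not the paper's calibration values.

```python
import numpy as np

# Rows: wavelengths (560, 620 nm); columns: species (GlA, Glu).
# Entries are calibration slopes in absorbance units per (mg/L).
K = np.array([[0.0040, 0.0008],
              [0.0012, 0.0075]])
A = np.array([0.92, 0.51])            # measured absorbances at 560, 620 nm

c_gla, c_glu = np.linalg.solve(K, A)  # solve the 2x2 Beer-Lambert system
print(f"uronic acid ≈ {c_gla:.0f} mg/L, neutral sugar ≈ {c_glu:.0f} mg/L")
```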
Sun, Rongrong; Wang, Yuanyuan
2008-11-01
Predicting the spontaneous termination of atrial fibrillation (AF) leads not only to a better understanding of the mechanisms of the arrhythmia but also to improved treatment of sustained AF. A novel method is proposed to characterize the AF based on the structure and quantification of the recurrence plot (RP) in order to predict the termination of the AF. The RP of the electrocardiogram (ECG) signal is first obtained, and eleven features are extracted to characterize its three basic patterns. Then the sequential forward search (SFS) algorithm and the Davies-Bouldin criterion are utilized to select the feature subset that can predict the AF termination effectively. Finally, the multilayer perceptron (MLP) neural network is applied to predict the AF termination. An AF database which includes one training set and two testing sets (A and B) of Holter ECG recordings is studied. Experimental results show that 97% of testing set A and 95% of testing set B are correctly classified. This demonstrates that the algorithm has the ability to predict the spontaneous termination of the AF effectively.
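A minimal sketch of recurrence plot construction from a one-dimensional signal is shown below, using time-delay embedding and a distance threshold: R[i, j] = 1 when embedded states i and j lie within ε of each other. The embedding parameters and the quantile-based threshold rule are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def recurrence_plot(x, dim=3, delay=2, eps_quantile=0.1):
    """Binary recurrence matrix of a 1-D signal via time-delay embedding."""
    n = len(x) - (dim - 1) * delay
    # Stack delayed copies of x into dim-dimensional state vectors.
    states = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    d = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    eps = np.quantile(d, eps_quantile)  # threshold from the distance distribution
    return (d <= eps).astype(int)

t = np.linspace(0, 8 * np.pi, 400)
rp = recurrence_plot(np.sin(t))
print(f"recurrence rate = {rp.mean():.3f}")  # one of the simplest RP features
```

Quantitative features such as the recurrence rate, determinism (diagonal-line structure), and laminarity (vertical-line structure) are the kind of descriptors that can then feed a classifier.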
Shivali, Garg; Praful, Lahorkar; Vijay, Gadgil
2012-01-01
Fourier transform infrared (FT-IR) spectroscopy is a technique widely used for detection and quantification of various chemical moieties. This paper describes the use of FT-IR spectroscopy for the quantification of total lactones present in Inula racemosa and Andrographis paniculata, and the validation of this method. Dried and powdered I. racemosa roots and A. paniculata plants were extracted with ethanol, and the extracts were dried to remove ethanol completely. The ethanol extract was analysed in a KBr pellet by FT-IR spectroscopy. The FT-IR spectroscopy method was validated and compared with a known spectrophotometric method for quantification of lactones in A. paniculata. By FT-IR spectroscopy, the amount of total lactones was found to be 2.12 ± 0.47% (n = 3) in I. racemosa and 8.65 ± 0.51% (n = 3) in A. paniculata. The method showed results comparable with those of a known spectrophotometric method used for quantification of such lactones: 8.42 ± 0.36% (n = 3) in A. paniculata. Limits of detection and quantification were 1 µg and 10 µg, respectively, for isoalantolactone, and 1.5 µg and 15 µg, respectively, for andrographolide. Recoveries were over 98%, with good intra- and interday repeatability: RSD ≤ 2%. The FT-IR spectroscopy method proved linear, accurate, precise, and specific, with low limits of detection and quantification, for estimation of total lactones, and is less tedious than the UV spectrophotometric method for the compounds tested. This validated FT-IR spectroscopy method is readily applicable for the quality control of I. racemosa and A. paniculata. Copyright © 2011 John Wiley & Sons, Ltd.
TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics
Röst, Hannes L.; Liu, Yansheng; D’Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C.; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi
2016-01-01
Large scale, quantitative proteomic studies have become essential for the analysis of clinical cohorts, large perturbation experiments, and systems biology studies. While next-generation mass spectrometric techniques such as SWATH-MS have substantially increased throughput and reproducibility, ensuring consistent quantification of thousands of peptide analytes across multiple LC-MS/MS runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we have developed the TRIC software, which utilizes fragment ion data to perform cross-run alignment, consistent peak-picking, and quantification for high-throughput targeted proteomics. TRIC uses a graph-based alignment strategy based on non-linear retention time correction to integrate peak elution information from all LC-MS/MS runs acquired in a study. When compared to state-of-the-art SWATH-MS data analysis, the algorithm was able to reduce the identification error by more than 3-fold at constant recall, while correcting for highly non-linear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem (iPS) cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups, and it substantially increased the quantitative completeness and biological information in the data, providing insights into protein dynamics of iPS cells. Overall, this study demonstrates the importance of consistent quantification in highly challenging experimental setups and proposes an algorithm to automate this task, constituting the last missing piece in a pipeline for automated analysis of massively parallel targeted proteomics datasets.
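The core idea behind this kind of cross-run alignment can be sketched as a smooth, nonlinear retention time (RT) mapping fitted on anchor peptides observed in both runs and then used to transfer expected RTs. The LOWESS smoother, data, and parameters below are illustrative assumptions; TRIC's actual implementation differs in detail.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)
rt_ref = np.sort(rng.uniform(5, 115, 60))             # anchor RTs, reference run
rt_run = rt_ref + 2.0 + 0.5 * np.sin(rt_ref / 15.0)   # nonlinear drift in new run
rt_run += rng.normal(scale=0.1, size=rt_ref.size)     # measurement noise

# Fit the new run's RT as a smooth function of the reference RT.
fit = lowess(rt_run, rt_ref, frac=0.3, return_sorted=True)

# Predict where a peptide with reference RT 60.0 should elute in this run.
predicted = np.interp(60.0, fit[:, 0], fit[:, 1])
print(f"expected RT in aligned run: {predicted:.2f} min")
```

A nonlinear fit like this, rather than a global linear shift, is what allows alignment to hold up when chromatographic drift varies along the gradient.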
On correct evaluation techniques of brightness enhancement effect measurement data
NASA Astrophysics Data System (ADS)
Kukačka, Leoš; Dupuis, Pascal; Motomura, Hideki; Rozkovec, Jiří; Kolář, Milan; Zissis, Georges; Jinno, Masafumi
2017-11-01
This paper aims to establish confidence intervals for the quantification of brightness enhancement effects resulting from the use of pulsing bright light. It is found that the methods used so far may yield significant bias in the published results, overestimating or underestimating the enhancement effect. The authors propose to use a linear algebra method called total least squares. Using an example dataset, it is shown that this method does not yield biased results. The statistical significance of the results is also computed. Over an observation set, it is concluded that the currently used linear algebra methods present many patterns of noise sensitivity, and that changing algorithm details leads to inconsistent results. It is thus recommended to use the method with the lowest noise sensitivity. Moreover, it is shown that this method also permits one to obtain an estimate of the confidence interval. This paper neither aims to publish results about a particular experiment nor to draw any particular conclusion about the existence or nonexistence of the brightness enhancement effect.
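A minimal sketch of total least squares for a straight-line fit is given below, solved via the SVD of the augmented data matrix so that noise in both variables is accounted for; the data are illustrative assumptions, not the paper's measurements.

```python
import numpy as np

def tls_line(x, y):
    """Fit y ≈ a*x + b by total least squares via SVD."""
    A = np.column_stack([x, np.ones_like(x), y])
    # The right-singular vector with the smallest singular value spans
    # the null direction (v0, v1, v2) with x*v0 + v1 + y*v2 ≈ 0.
    _, _, vt = np.linalg.svd(A)
    v = vt[-1]
    a, b = -v[0] / v[2], -v[1] / v[2]
    return a, b

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50) + rng.normal(scale=0.2, size=50)  # noisy x
y = 2.0 * x + 1.0 + rng.normal(scale=0.2, size=50)           # noisy y
a, b = tls_line(x, y)
print(f"slope = {a:.3f}, intercept = {b:.3f}")
```

Unlike ordinary least squares, which attributes all noise to y, this formulation stays unbiased when the regressor itself is noisy, which is the failure mode the paper attributes to earlier evaluation methods.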
NASA Astrophysics Data System (ADS)
Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong
2018-05-01
In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) for detection of different elements (copper, barium, and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models using the new method were 27.47 (copper), 37.15 (barium), and 39.70 (chromium) mg/kg, showing prediction performance comparable to GA and SPA.
Mollet, Pieter; Keereman, Vincent; Bini, Jason; Izquierdo-Garcia, David; Fayad, Zahi A; Vandenberghe, Stefaan
2014-02-01
Quantitative PET imaging relies on accurate attenuation correction. Recently, there has been growing interest in combining state-of-the-art PET systems with MR imaging in a sequential or fully integrated setup. As CT becomes unavailable for these systems, an alternative approach to the CT-based reconstruction of attenuation coefficients (μ values) at 511 keV must be found. Deriving μ values directly from MR images is difficult because MR signals are related to the proton density and relaxation properties of tissue. Therefore, most research groups focus on segmentation or atlas registration techniques. Although studies have shown that these methods provide viable solutions in particular applications, some major drawbacks limit their use in whole-body PET/MR. Previously, we used an annulus-shaped PET transmission source inside the field of view of a PET scanner to measure attenuation coefficients at 511 keV. In this work, we describe the use of this method in studies of patients with the sequential time-of-flight (TOF) PET/MR scanner installed at the Icahn School of Medicine at Mount Sinai, New York, NY. Five human PET/MR and CT datasets were acquired. The transmission-based attenuation correction method was compared with conventional CT-based attenuation correction and the 3-segment, MR-based attenuation correction available on the TOF PET/MR imaging scanner. The transmission-based method overcame most problems related to the MR-based technique, such as truncation artifacts of the arms, segmentation artifacts in the lungs, and imaging of cortical bone. Additionally, the TOF capabilities of the PET detectors allowed the simultaneous acquisition of transmission and emission data. Compared with the MR-based approach, the transmission-based method provided average improvements in PET quantification of 6.4%, 2.4%, and 18.7% in volumes of interest inside the lung, soft tissue, and bone tissue, respectively. In conclusion, a transmission-based technique with an annulus-shaped transmission source will be more accurate than a conventional MR-based technique for measuring attenuation coefficients at 511 keV in future whole-body PET/MR studies.
Randrianjatovo, I; Girbal-Neuhauser, E; Marcato-Romain, C-E
2015-06-01
Biofilms are ecosystems of closely associated bacteria encapsulated in an extracellular matrix mainly composed of polysaccharides and proteins. A novel approach was developed for in situ quantification of extracellular proteins (ePN) in various bacterial biofilms using epicocconone, a natural fluorescent compound that binds amine residues of proteins. Six commercial proteins were tested for their reaction with epicocconone, and bovine serum albumin (BSA) was selected for assay optimization. The optimized protocol, performed as a microassay, allowed protein amounts from as low as 0.7 μg to as high as 50 μg per well to be detected. Addition of monosaccharides or polysaccharides (glucose, dextran, or alginate) to the standard BSA solutions (0 to 250 μg ml(-1)) showed little or no sugar interference up to 2000 μg ml(-1), thus providing an assessment of the specificity of epicocconone for proteins. The optimized protocol was then applied to three different biofilms, and in situ quantification of ePN showed contrasting protein amounts of 22.1 ± 3.1, 38.3 ± 7.1, and 0.3 ± 0.1 μg equivalent BSA of proteins for 48-h biofilms of Pseudomonas aeruginosa, Bacillus licheniformis, and Weissella confusa, respectively. Possible interference of global matrix compounds with the in situ quantification of proteins was also investigated by applying the standard addition method (SAM). Low error percentages were obtained, indicating correct quantification of both the ePN and the added proteins. For the first time, a specific and sensitive assay has been developed for in situ determination of ePN produced by bacterial cells. This advance should lead to an accurate, rapid tool for further protein labelling and microscopic observation of the extracellular matrix of biofilms.
Chen, Litong; Flynn, Dan F B; Jing, Xin; Kühn, Peter; Scholten, Thomas; He, Jin-Sheng
2015-01-01
As CO2 concentrations continue to rise and drive global climate change, much effort has been put into estimating soil carbon (C) stocks and their dynamics over time. However, the inconsistent methods employed by researchers hamper the comparability of such works, creating a pressing need to standardize the methods for soil organic C (SOC) quantification. Here, we collected 712 soil samples from 36 sites of alpine grasslands on the Tibetan Plateau covering different soil depths and vegetation and soil types. We used an elemental analyzer for soil total C (STC) and an inorganic carbon analyzer for soil inorganic C (SIC), and then defined the difference between STC and SIC as SOCCNS. In addition, we employed the modified Walkley-Black (MWB) method, hereafter SOCMWB. Our results showed a strong correlation between SOCCNS and SOCMWB across the data set, given the application of a correction factor of 1.103. Soil depth and soil type significantly influenced the recovery, defined as the ratio of SOCMWB to SOCCNS, and the recovery was closely associated with soil carbonate content and pH value as well. The differences in recovery between alpine meadow and steppe were largely driven by soil pH. In addition, a relatively strong correlation between SOCCNS and STC was also found, suggesting that it is feasible to estimate SOCCNS stocks from STC data across the Tibetan grasslands. Therefore, our results suggest that, in order to accurately estimate absolute SOC stocks and their changes in the Tibetan alpine grasslands, adequate correction of the modified WB measurements is essential, with proper consideration of the effects of soil type, vegetation, soil pH, and soil depth.
Anizan, Nadège; Carlier, Thomas; Hindorf, Cecilia; Barbet, Jacques; Bardiès, Manuel
2012-02-13
Noninvasive multimodality imaging is essential for preclinical evaluation of the biodistribution and pharmacokinetics of radionuclide therapy and for monitoring tumor response. Imaging with nonstandard positron emission tomography (PET) isotopes such as 124I is promising in that context but requires accurate activity quantification. The decay scheme of 124I requires optimization of both acquisition settings and correction processing. The PET scanner investigated in this study was the Inveon PET/CT system dedicated to small animal imaging. The noise equivalent count rate (NECR), the scatter fraction (SF), and the gamma-prompt fraction (GF) were used to determine the best acquisition parameters for mouse- and rat-sized phantoms filled with 124I. An image-quality phantom as specified by the National Electrical Manufacturers Association NU 4-2008 protocol was acquired and reconstructed with two-dimensional filtered back projection, 2D ordered-subset expectation maximization (2DOSEM), and 3DOSEM with maximum a posteriori (3DOSEM/MAP) algorithms, with and without attenuation correction, scatter correction, and gamma-prompt correction (weighted uniform distribution subtraction). Optimal energy windows were established for the rat phantom (390 to 550 keV) and the mouse phantom (400 to 590 keV) by combining the NECR, SF, and GF results. The coincidence time window had no significant impact on the NECR curve variation. The activity concentration of 124I measured in the uniform region of the image-quality phantom was underestimated by 9.9% for the 3DOSEM/MAP algorithm with attenuation and scatter corrections, and by 23% with the gamma-prompt correction. Attenuation, scatter, and gamma-prompt corrections decreased the residual signal in the cold insert. The optimal energy windows were chosen with the NECR, SF, and GF evaluation. Nevertheless, image quality and activity quantification assessments were required to establish the most suitable reconstruction algorithm and corrections for 124I small animal imaging.
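The NECR used here to rank acquisition settings is the standard figure of merit NECR = T²/(T + S + R), with T, S, and R the true, scattered, and random coincidence rates; the sketch below applies it to two hypothetical energy windows. The count rates are illustrative assumptions, not measured 124I data.

```python
def necr(trues, scatters, randoms):
    """Noise-equivalent count rate, NECR = T^2 / (T + S + R), in cps."""
    return trues**2 / (trues + scatters + randoms)

# Compare two candidate energy windows for a mouse-sized phantom:
print(f"window A: {necr(12000.0, 4000.0, 3000.0):.0f} cps")
print(f"window B: {necr(11000.0, 2500.0, 2000.0):.0f} cps")
```

A wider window collects more trues but also more scatter and gamma-prompt contamination, so the window maximizing NECR (together with SF and GF) is the natural choice.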
qPCR-based mitochondrial DNA quantification: Influence of template DNA fragmentation on accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, Christopher B., E-mail: Christopher.jackson@insel.ch; Gallati, Sabina, E-mail: sabina.gallati@insel.ch; Schaller, Andre, E-mail: andre.schaller@insel.ch
2012-07-06
Highlights: Serial qPCR accurately determines the fragmentation state of any given DNA sample; it demonstrates different preservation of the nuclear and mitochondrial genomes; it provides a diagnostic tool to validate the integrity of bioptic material; and it excludes degradation-induced erroneous quantification. -- Abstract: Real-time PCR (qPCR) is the method of choice for quantification of mitochondrial DNA (mtDNA) by relative comparison of a nuclear to a mitochondrial locus. Quantitatively abnormal mtDNA content is indicative of mitochondrial disorders and is mostly confined to specific tissues. Thus, handling of degradation-prone bioptic material is inevitable. We established a serial qPCR assay based on increasing amplicon size to measure the degradation status of any DNA sample. Using this approach we can exclude erroneous mtDNA quantification due to degraded samples (e.g., long post-excision time, autolytic processes, freeze-thaw cycles) and ensure reliable measurement of abnormal DNA content (e.g., depletion) in non-degraded patient material. By preparing degraded DNA under controlled conditions using sonication and DNaseI digestion, we show that erroneous quantification is due to the different preservation qualities of the nuclear and the mitochondrial genome. This disparate degradation of the two genomes results in over- or underestimation of mtDNA copy number in degraded samples. Moreover, as analysis of defined archival tissue would allow the molecular pathomechanism of mitochondrial disorders presenting with abnormal mtDNA content to be pinpointed, we compared fresh frozen (FF) with formalin-fixed paraffin-embedded (FFPE) skeletal muscle tissue of the same sample. By extrapolation of the measured decay constants for nuclear DNA (λ_nDNA) and mtDNA (λ_mtDNA), we present an approach to possibly correct measurements in degraded samples in the future. To our knowledge this is the first time the different degradation impact on the two genomes is demonstrated, and the impact of DNA degradation on quantification of mtDNA copy number is systematically evaluated.
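The underlying relative quantification can be sketched as follows: assuming ideal amplification efficiency and a single-copy nuclear reference locus, the mtDNA copy number per diploid cell follows from the Ct difference between the nuclear and mitochondrial assays. The Ct values and the perfect-efficiency assumption are illustrative.

```python
def mtdna_copies_per_cell(ct_nuclear, ct_mito, efficiency=2.0):
    """Copies of mtDNA per cell relative to a single-copy nuclear
    locus (two alleles per diploid cell), assuming equal and ideal
    amplification efficiency for both assays."""
    delta_ct = ct_nuclear - ct_mito
    return 2.0 * efficiency**delta_ct

print(f"{mtdna_copies_per_cell(ct_nuclear=24.8, ct_mito=15.2):.0f} copies/cell")
```

Because the estimate rests entirely on the Ct difference between two loci, any difference in how well the two genomes survive degradation directly biases it, which is exactly the effect the serial (amplicon-size-graded) qPCR assay is designed to detect.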
Accurate determination of brain metabolite concentrations using ERETIC as external reference.
Zoelch, Niklaus; Hock, Andreas; Heinzer-Schweizer, Susanne; Avdievitch, Nikolai; Henning, Anke
2017-08-01
Magnetic Resonance Spectroscopy (MRS) can provide in vivo metabolite concentrations in standard concentration units if a reliable reference signal is available. For 1H MRS in the human brain, typically the signal from the tissue water is used as the (internal) reference signal. However, a concentration determination based on the tissue water signal most often requires a reliable estimate of the water concentration present in the investigated tissue. Especially in clinically interesting cases, this estimation might be difficult. To avoid assumptions about the water in the investigated tissue, the Electric REference To access In vivo Concentrations (ERETIC) method has been proposed. In this approach, the metabolite signal is compared with a reference signal acquired in a phantom, and potential coil-loading differences are corrected using a synthetic reference signal. The aim of this study, conducted with a transceiver quadrature head coil, was to increase the accuracy of the ERETIC method by correcting the influence of spatial B1 inhomogeneities and to simplify quantification with ERETIC by incorporating an automatic phase correction for the ERETIC signal. Transmit field (B1+) differences are minimized with a volume-selective power optimization, whereas reception sensitivity changes are corrected using contrast-minimized images of the brain and by adapting the voxel location in the phantom measurement closely to the position measured in vivo. By applying the proposed B1 correction scheme, the mean metabolite concentrations determined with ERETIC in 21 healthy subjects at three different positions agree with concentrations derived with the tissue water signal as reference. In addition, brain water concentrations determined with ERETIC were in agreement with estimates derived using tissue segmentation and literature values for relative water densities. Based on these results, the ERETIC method presented here is a valid tool to derive in vivo metabolite concentrations, with potential advantages compared with internal water referencing in diseased tissue. Copyright © 2017 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Johnstone, Samuel; Hourigan, Jeremy; Gallagher, Christopher
2013-05-01
Heterogeneous concentrations of α-producing nuclides in apatite have been recognized through a variety of methods. The presence of zonation in apatite complicates both traditional α-ejection corrections and diffusive models, both of which operate under the assumption of homogeneous concentrations. In this work we develop a method for measuring radial concentration profiles of 238U and 232Th in apatite by laser ablation ICP-MS depth profiling. We then focus on one application of this method, removing bias introduced by applying inappropriate α-ejection corrections. Formal treatment of laser ablation ICP-MS depth profile calibration for apatite includes construction and calibration of matrix-matched standards and quantification of rates of elemental fractionation. From this we conclude that matrix-matched standards provide more robust monitors of fractionation rate and concentrations than doped silicate glass standards. We apply laser ablation ICP-MS depth profiling to apatites from three unknown populations and small, intact crystals of Durango fluorapatite. Accurate and reproducible Durango apatite dates suggest that prolonged exposure to laser drilling does not impact cooling ages. Intracrystalline concentrations vary by at least a factor of 2 in the majority of the samples analyzed, but concentration variation only exceeds 5x in 5 grains and 10x in 1 out of the 63 grains analyzed. Modeling of synthetic concentration profiles suggests that for concentration variations of 2x and 10x individual homogeneous versus zonation dependent α-ejection corrections could lead to age bias of >5% and >20%, respectively. However, models based on measured concentration profiles only generated biases exceeding 5% in 13 of the 63 cases modeled. Application of zonation dependent α-ejection corrections did not significantly reduce the age dispersion present in any of the populations studied. This suggests that factors beyond homogeneous α-ejection corrections are the dominant source of overdispersion in apatite (U-Th)/He cooling ages.
Keller, Sune H; Sibomana, Merence; Olesen, Oline V; Svarer, Claus; Holm, Søren; Andersen, Flemming L; Højgaard, Liselotte
2012-03-01
Many authors have reported the importance of motion correction (MC) for PET. Patient motion during scanning disturbs kinetic analysis and degrades resolution. In addition, using a misaligned transmission scan for attenuation and scatter correction may produce regional quantification bias in the reconstructed emission images. The purpose of this work was the development of quality control (QC) methods for MC procedures based on external motion tracking (EMT) for human scanning, using an optical motion tracking system. Two scans with minor motion and five with major motion (as reported by the optical motion tracking system) were selected from (18)F-FDG scans acquired on a PET scanner. The motion was measured as the maximum displacement of the markers attached to the subject's head and was considered major if larger than 4 mm and minor if less than 2 mm. After allowing a 40- to 60-min uptake time after tracer injection, we acquired a 6-min transmission scan, followed by a 40-min emission list-mode scan. Each emission list-mode dataset was divided into 8 frames of 5 min. The reconstructed time-framed images were aligned to a selected reference frame using either EMT or the AIR (automated image registration) software. The following 3 QC methods were used to evaluate the EMT and AIR MC: a method using the ratio between 2 regions of interest with gray matter voxels (GM) and white matter voxels (WM), called GM/WM; mutual information; and cross correlation. The results of the 3 QC methods were in agreement with one another and with a visual subjective inspection of the image data. Before MC, the QC measures varied significantly in scans with major motion and displayed limited variation in scans with minor motion. The variation was significantly reduced and the measures improved after MC with AIR, whereas EMT MC performed less well. The 3 presented QC methods produced similar results and are useful for evaluating tracer-independent external-tracking motion-correction methods for human brain scans.
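One of the three QC measures named above, mutual information between an aligned frame and the reference frame, can be sketched from a joint intensity histogram as below; the synthetic frames and histogram settings are illustrative assumptions, not the study's data or implementation.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0                                   # avoid log(0)
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

rng = np.random.default_rng(5)
ref = rng.random((64, 64))
aligned = ref + rng.normal(scale=0.05, size=ref.shape)   # well aligned
shifted = np.roll(ref, 5, axis=0)                        # misaligned
print(f"MI aligned:    {mutual_information(ref, aligned):.3f}")
print(f"MI misaligned: {mutual_information(ref, shifted):.3f}")
```

Higher mutual information after motion correction indicates better frame-to-reference alignment, which is how such a measure serves as a tracer-independent QC score.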
Recent application of quantification II in Japanese medical research.
Suzuki, T; Kudo, A
1979-01-01
Hayashi's Quantification II is a method of multivariate discriminant analysis for handling attribute data as predictor variables. It is very useful in the medical research field for estimation, diagnosis, prognosis, evaluation of epidemiological factors, and other problems based on multiplicity of attribute data. In Japan, this method is so well known that most computer program packages include the Hayashi Quantification, but the method still seems to be unfamiliar to researchers outside Japan. In view of this situation, we introduce 19 selected articles on recent applications of Quantification II in Japanese medical research. In reviewing these papers, special mention is made of how well the findings provided by the method satisfied the researchers. At the same time, some recommendations are made about terminology and program packages. A brief discussion of the background of the quantification methods is also given, with special reference to the Behaviormetric Society of Japan.
Procedure for the Selection and Validation of a Calibration Model I-Description and Application.
Desharnais, Brigitte; Camirand-Lemyre, Félix; Mireault, Pascal; Skinner, Cameron D
2017-05-01
Calibration model selection is required for all quantitative methods in toxicology and, more broadly, in bioanalysis. This typically involves selecting the equation order (quadratic or linear) and the weighting factor that correctly model the data. An incorrect selection of the calibration model will degrade quality control (QC) accuracy, with errors of up to 154%. Unfortunately, simple tools to perform this selection and tests to validate the resulting model are lacking. We present a stepwise, analyst-independent scheme for selection and validation of calibration models. The success rate of this scheme is on average 40% higher than a traditional "fit and check the QC accuracy" method of selecting the calibration model. Moreover, the process was completely automated through a script (available in Supplemental Data 3) running in RStudio (free, open-source software). The need for weighting was assessed through an F-test using the variances of the upper limit of quantification and lower limit of quantification replicate measurements. When weighting was required, the choice between 1/x and 1/x2 was determined by calculating which option generated the smallest spread of weighted normalized variances. Finally, the model order was selected through a partial F-test. The chosen calibration model was validated through Cramer-von Mises or Kolmogorov-Smirnov normality testing of the standardized residuals. Performance of the different tests was assessed using 50 simulated data sets per possible calibration model (e.g., linear-no weight, quadratic-no weight, linear-1/x, etc.). This first of two papers describes the tests, procedures, and outcomes of the developed procedure using real LC-MS-MS results for the quantification of cocaine and naltrexone. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
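The heteroscedasticity check at the heart of the scheme can be sketched as an F-test comparing replicate variances at the upper and lower limits of quantification (ULOQ, LLOQ); a significant variance ratio calls for a weighted calibration model. The replicate responses below are illustrative assumptions, not data from the paper.

```python
import numpy as np
from scipy import stats

lloq = np.array([0.021, 0.019, 0.023, 0.020, 0.022])   # LLOQ replicate responses
uloq = np.array([9.8, 10.4, 10.1, 9.5, 10.6])          # ULOQ replicate responses

# One-sided F-test: is the ULOQ variance larger than the LLOQ variance?
f_stat = np.var(uloq, ddof=1) / np.var(lloq, ddof=1)
dfn = dfd = len(uloq) - 1
p_value = stats.f.sf(f_stat, dfn, dfd)
print(f"F = {f_stat:.1f}, p = {p_value:.2e}")
if p_value < 0.05:
    print("variances differ: use a weighted model (1/x or 1/x^2)")
```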
Objective automated quantification of fluorescence signal in histological sections of rat lens.
Talebizadeh, Nooshin; Hagström, Nanna Zhou; Yu, Zhaohua; Kronschläger, Martin; Söderberg, Per; Wählby, Carolina
2017-08-01
Visual quantification and classification of fluorescent signals is the gold standard in microscopy. The purpose of this study was to develop an automated method to delineate cells and to quantify expression of fluorescent signal of biomarkers in each nucleus and cytoplasm of lens epithelial cells in a histological section. A region of interest representing the lens epithelium was manually demarcated in each input image. Thereafter, individual cell nuclei within the region of interest were automatically delineated based on watershed segmentation and thresholding with an algorithm developed in Matlab™. Fluorescence signal was quantified within nuclei, cytoplasms and juxtaposed backgrounds. The classification of cells as labelled or not labelled was based on comparison of the fluorescence signal within cells with local background. The classification rule was thereafter optimized as compared with visual classification of a limited dataset. The performance of the automated classification was evaluated by asking 11 independent blinded observers to classify all cells (n = 395) in one lens image. Time consumed by the automatic algorithm and visual classification of cells was recorded. On an average, 77% of the cells were correctly classified as compared with the majority vote of the visual observers. The average agreement among visual observers was 83%. However, variation among visual observers was high, and agreement between two visual observers was as low as 71% in the worst case. Automated classification was on average 10 times faster than visual scoring. The presented method enables objective and fast detection of lens epithelial cells and quantification of expression of fluorescent signal with an accuracy comparable with the variability among visual observers. © 2017 International Society for Advancement of Cytometry.
Tran, Ngoc Han; Hu, Jiangyong; Ong, Say Leong
2013-09-15
A high-throughput method for the simultaneous determination of 24 pharmaceuticals and personal care products (PPCPs), endocrine disrupting chemicals (EDCs) and artificial sweeteners (ASs) was developed. The method was based on a single-step solid phase extraction (SPE) coupled with high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) and isotope dilution. In this study, a single-step SPE procedure was optimized for simultaneous extraction of all target analytes. Good recoveries (≥ 70%) were observed for all target analytes when extraction was performed using Chromabond(®) HR-X (500 mg, 6 mL) cartridges under acidic conditions (pH 2). HPLC-MS/MS parameters were optimized for the simultaneous analysis of 24 PPCPs, EDCs and ASs in a single injection. Quantification was performed using 13 isotopically labeled internal standards (ILIS), which efficiently corrects for analyte losses during the SPE procedure, matrix effects during HPLC-MS/MS analysis, and fluctuations in MS/MS signal intensity due to the instrument. The method quantification limit (MQL) for most of the target analytes was below 10 ng/L in all water samples. The method was successfully applied to the simultaneous determination of PPCPs, EDCs and ASs in raw wastewater, surface water and groundwater samples collected in a local catchment area in Singapore. In conclusion, the developed method provides a valuable tool for investigating the occurrence, behavior, transport, and fate of PPCPs, EDCs and ASs in the aquatic environment. Copyright © 2013 Elsevier B.V. All rights reserved.
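The core isotope-dilution arithmetic is simple: the analyte/ILIS peak-area ratio, scaled by the spiked ILIS amount and a calibration response factor, gives the analyte amount, so first-order losses and matrix effects cancel. A toy calculation with hypothetical numbers (not from the paper):

```python
def concentration_ng_per_L(area_analyte, area_ilis, ilis_spike_ng, sample_L,
                           response_factor=1.0):
    """Analyte amount = (A_analyte / A_ILIS) * spiked ILIS amount / RF."""
    amount_ng = (area_analyte / area_ilis) * ilis_spike_ng / response_factor
    return amount_ng / sample_L

# Example: 250 ng of labeled standard spiked into a 0.5 L water sample.
print(concentration_ng_per_L(0.8e5, 1.6e5, 250, 0.5))  # -> 250.0 ng/L
```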
Revisiting the Logan plot to account for non-negligible blood volume in brain tissue.
Schain, Martin; Fazio, Patrik; Mrzljak, Ladislav; Amini, Nahid; Al-Tawil, Nabil; Fitzer-Attas, Cheryl; Bronzova, Juliana; Landwehrmeyer, Bernhard; Sampaio, Christina; Halldin, Christer; Varrone, Andrea
2017-08-18
Reference tissue-based quantification of brain PET data does not typically include correction for signal originating from blood vessels, which is known to result in biased outcome measures. The extent of the bias depends on the amount of radioactivity in the blood vessels. In this study, we revisit the well-established Logan plot and derive alternative formulations that provide estimates of distribution volume ratios (DVRs) corrected for the signal originating from the vasculature. New expressions for the Logan plot based on the arterial input function and on reference tissue were derived, which included explicit terms for whole-blood radioactivity. The new methods were evaluated using PET data acquired with [11C]raclopride and [18F]MNI-659. The two-tissue compartment model (2TCM), with which signal originating from blood can be explicitly modeled, was used as a gold standard. DVR values obtained for [11C]raclopride using either the blood-based or the reference tissue-based Logan plot were systematically underestimated compared to the 2TCM, and for [18F]MNI-659 a proportionality bias was observed, i.e., the bias varied across regions. The biases disappeared when optimal blood-signal correction was used for the respective tracer, although for [18F]MNI-659 a small but systematic overestimation of DVR was still observed. The new method appears to remove the bias introduced by the absence of blood-volume correction in regular graphical analysis and can be considered in clinical studies. Further studies are, however, required to derive a generic mapping between plasma and whole-blood radioactivity levels.
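For context, a sketch of the conventional reference-tissue Logan plot (without the blood-volume terms the paper derives): after time t*, the slope of cumulative target activity over instantaneous target activity, plotted against the same quantity for the reference region, estimates DVR. Inputs are assumed 1D numpy arrays sampled at times `t` (minutes); names and t* are illustrative.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def logan_dvr(t, ct, cref, t_star=30.0):
    int_ct = cumulative_trapezoid(ct, t, initial=0.0)
    int_cref = cumulative_trapezoid(cref, t, initial=0.0)
    late = t >= t_star
    y = int_ct[late] / ct[late]
    x = int_cref[late] / ct[late]
    slope, intercept = np.polyfit(x, y, 1)
    return slope  # DVR estimate (biased if blood volume is neglected)
```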
Vincenti, Gabriella; Masci, Pier Giorgio; Rutz, Tobias; De Blois, Jonathan; Prša, Milan; Jeanrenaud, Xavier; Schwitter, Juerg; Monney, Pierre
2017-07-27
To quantify mitral regurgitation (MR) with CMR, the regurgitant volume can be calculated as the difference between the left ventricular (LV) stroke volume (SV) measured with Simpson's method and the reference SV, i.e. the right ventricular SV (RVSV) in patients without tricuspid regurgitation. However, in patients with prominent mitral valve prolapse (MVP), Simpson's method may underestimate the LV end-systolic volume (LVESV), as it only considers the volume located between the apex and the mitral annulus and neglects the ventricular volume that is displaced into the left atrium but contained within the prolapsed mitral leaflets at end systole. This may lead to an underestimation of LVESV and, consequently, an overestimation of LVSV and of mitral regurgitation. The aim of the present study was to assess the impact of prominent MVP on MR quantification by CMR. In patients with MVP (and no more than trace tricuspid regurgitation), MR was quantified by calculating the regurgitant volume as the difference between LVSV and RVSV. LVSV_uncorr was calculated conventionally as LV end-diastolic volume (LVEDV) minus LVESV. A corrected LVESV_corr was calculated as the LVESV plus the prolapsed volume, i.e. the volume between the mitral annulus and the prolapsing mitral leaflets. The two methods were compared with respect to MR grading. MR grades were defined as absent or trace, mild (5-29% regurgitant fraction (RF)), moderate (30-49% RF), or severe (≥50% RF). In 35 patients (44.0 ± 23.0 y, 14 males, 20 patients with MR), the prolapsed volume was 16.5 ± 8.7 ml. The two methods were concordant in only 12 (34%) patients, as the uncorrected method indicated a one-grade-higher MR severity in 23 (66%) patients. For the uncorrected/corrected method, the distribution of MR grades as absent-trace (0 vs 11), mild (20 vs 18), moderate (11 vs 5), and severe (4 vs 1) was significantly different (p < 0.001). In the subgroup without MR, LVSV_corr was not significantly different from RVSV (difference: 2.5 ± 4.7 ml, p = 0.11 vs 0), whereas a systematic overestimation was observed with LVSV_uncorr (difference: 16.9 ± 9.1 ml, p = 0.0007 vs 0). Also, RVSV was highly correlated with aortic forward flow (n = 24, R² = 0.97, p < 0.001). For patients with severe bileaflet prolapse, correction of the LVSV for the prolapse volume is suggested, as it modified the assessment of MR severity by one grade in a large proportion of patients.
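A toy worked example of the correction, with hypothetical volumes in mL (not patient data from the study): adding the prolapsed volume to LVESV lowers LVSV and hence the regurgitant fraction, which can shift the MR grade by one category.

```python
lvedv, lvesv, rvsv, prolapsed_vol = 150.0, 60.0, 75.0, 16.5

lvsv_uncorr = lvedv - lvesv                    # 90.0 mL
lvsv_corr = lvedv - (lvesv + prolapsed_vol)    # 73.5 mL

rf_uncorr = (lvsv_uncorr - rvsv) / lvsv_uncorr * 100  # ~16.7% -> mild
rf_corr = (lvsv_corr - rvsv) / lvsv_corr * 100        # ~-2.0% -> absent/trace
```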
Buiarelli, Francesca; Coccioli, Franco; Jasionowska, Renata; Terracciano, Alessandro
2008-09-01
A fast and accurate micellar electrokinetic capillary chromatography method was developed for the quality control of pharmaceutical preparations containing cold remedies such as acetaminophen, salicylamide, caffeine, phenylephrine, pseudoephedrine, norephedrine and chlorpheniramine. The method optimization was carried out on a Beckman P/ACE System MDQ instrument. Baseline separation of the seven analytes was achieved in an uncoated fused-silica capillary (internal diameter (ID) = 50 microm) using a tris-borate background electrolyte (BGE; 20 mM, pH = 8.5) containing 30 mM sodium dodecyl sulphate. On-line UV detection at 214 nm was performed and the applied voltage was 10 kV. The operating temperature was 25 degrees C. After optimization of the experimental conditions, the proposed method was validated. The evaluated parameters were: precision of migration time and of corrected peak area ratio, linearity range, limit of detection, limit of quantification, accuracy (recovery), ruggedness and applicability. The method was then successfully applied to the analysis of three pharmaceutical preparations containing some of the analytes listed above.
Quantification of Training and Competition Loads in Endurance Sports: Methods and Applications.
Mujika, Iñigo
2017-04-01
Training quantification is basic to evaluate an endurance athlete's responses to training loads, ensure adequate stress/recovery balance, and determine the relationship between training and performance. Quantifying both external and internal workload is important, because external workload does not measure the biological stress imposed by the exercise sessions. Generally used quantification methods include retrospective questionnaires, diaries, direct observation, and physiological monitoring, often based on the measurement of oxygen uptake, heart rate, and blood lactate concentration. Other methods in use in endurance sports include speed measurement and the measurement of power output, made possible by recent technological advances such as power meters in cycling and triathlon. Among subjective methods of quantification, rating of perceived exertion stands out because of its wide use. Concurrent assessments of the various quantification methods allow researchers and practitioners to evaluate stress/recovery balance, adjust individual training programs, and determine the relationships between external load, internal load, and athletes' performance. This brief review summarizes the most relevant external- and internal-workload-quantification methods in endurance sports and provides practical examples of their implementation to adjust the training programs of elite athletes in accordance with their individualized stress/recovery balance.
Kwon, Young-Hoo; Casebolt, Jeffrey B
2006-07-01
One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.
NASA Astrophysics Data System (ADS)
Dumouchel, Tyler; Thorn, Stephanie; Kordos, Myra; DaSilva, Jean; Beanlands, Rob S. B.; deKemp, Robert A.
2012-07-01
Quantification in cardiac mouse positron emission tomography (PET) imaging is limited by the imaging spatial resolution. Spillover of left ventricle (LV) myocardial activity into adjacent organs results in partial volume (PV) losses, leading to underestimation of myocardial activity. A PV correction method was developed to restore the accuracy of the activity distribution for FDG mouse imaging. The PV correction model was based on convolving an LV image estimate with a 3D point spread function. The LV model was described regionally by a five-parameter profile comprising myocardial, background and blood activities, which were separated into three compartments by the endocardial radius and the myocardium wall thickness. The PV correction was tested with digital simulations and a physical 3D mouse LV phantom. In vivo cardiac FDG mouse PET imaging was also performed. Following imaging, the mice were sacrificed and the tracer biodistribution in the LV and liver tissue was measured using a gamma counter. The PV correction algorithm improved recovery from 50% to within 5% of the truth for the simulated and measured phantom data, and improved image uniformity by 5-13%. The PV correction algorithm improved the mean myocardial LV recovery from 0.56 (0.54) to 1.13 (1.10) without (with) scatter and attenuation corrections. The mean image uniformity was improved from 26% (26%) to 17% (16%) without (with) scatter and attenuation corrections applied. Scatter and attenuation corrections were not observed to significantly impact PV-corrected myocardial recovery or image uniformity. The image-based PV correction algorithm can increase the accuracy of PET image activity and improve the uniformity of the activity distribution in normal mice. The algorithm may be applied using different tracers, in transgenic models that affect myocardial uptake, or in different species, provided there is sufficient image quality and similar contrast between the myocardium and surrounding structures.
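A minimal sketch of the forward model at the heart of such a correction: an idealized LV activity image convolved with a 3D Gaussian point spread function reproduces the PV-blurred PET image. The fitting loop that adjusts the five profile parameters is omitted, the LV is simplified to a spherical shell, and all values below are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def lv_model(shape, r_endo, wall, a_myo, a_blood, a_bkg, voxel_mm=0.4):
    """Spherical-shell LV phantom: blood pool inside r_endo, myocardium
    between r_endo and r_endo + wall, background outside."""
    z, y, x = np.indices(shape)
    c = (np.array(shape) - 1) / 2.0
    r = np.sqrt((z - c[0])**2 + (y - c[1])**2 + (x - c[2])**2) * voxel_mm
    img = np.full(shape, a_bkg, dtype=float)
    img[r < r_endo + wall] = a_myo
    img[r < r_endo] = a_blood
    return img

truth = lv_model((64, 64, 64), r_endo=1.5, wall=1.0,
                 a_myo=1.0, a_blood=0.2, a_bkg=0.1)
fwhm_mm = 1.8                                # assumed scanner resolution
sigma_vox = fwhm_mm / 2.355 / 0.4
blurred = gaussian_filter(truth, sigma_vox)  # simulated PV-degraded image
```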
Staack, Roland F; Jordan, Gregor; Heinrich, Julia
2012-02-01
For every drug development program, it needs to be decided whether discrimination between free and total drug concentrations is required to accurately describe the drug's pharmacokinetic behavior. This perspective describes the application of mathematical simulation approaches to guide this initial decision based on available knowledge about target biology, binding kinetics and expected drug concentrations. We provide generic calculations that can be used to estimate the necessity of free drug quantification for different drug molecules. In addition, mathematical approaches are used to simulate various assay conditions in bioanalytical ligand-binding assays: it is demonstrated that, owing to the noncovalent interaction between the binding partners and typical assay-related interferences with the equilibrium, correct quantification of the free drug concentration is highly challenging and requires careful design of the different assay procedure steps.
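A generic 1:1 equilibrium-binding calculation of the kind such simulations build on: given total drug, total target and the dissociation constant K_D, the free drug concentration follows from the quadratic mass-balance solution. Units are arbitrary but must be consistent; the numbers are made up.

```python
import math

def free_drug(d_total, t_total, kd):
    """Free drug for 1:1 binding: Df^2 + (Tt - Dt + Kd)*Df - Kd*Dt = 0."""
    b = t_total - d_total + kd
    return 0.5 * (-b + math.sqrt(b * b + 4.0 * kd * d_total))

# At 100 nM total drug, 10 nM target and Kd = 1 nM, ~90.1 nM of drug is free.
print(free_drug(100.0, 10.0, 1.0))
```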
NASA Astrophysics Data System (ADS)
Wright, K. E.; Popa, K.; Pöml, P.
2018-01-01
Transmutation nuclear fuels contain weight percentage quantities of actinide elements, including Pu, Am and Np. Because of the complex spectra presented by actinide elements using electron probe microanalysis (EPMA), it is necessary to have relatively pure actinide element standards to facilitate overlap correction and accurate quantitation. Synthesis of actinide oxide standards is complicated by their multiple oxidation states, which can result in inhomogeneous standards or standards that are not stable at atmospheric conditions. Synthesis of PuP4 results in a specimen that exhibits stable oxidation-reduction chemistry and is sufficiently homogenous to serve as an EPMA standard. This approach shows promise as a method for producing viable actinide standards for microanalysis.
Quantitative proteome analysis using isobaric peptide termini labeling (IPTL).
Arntzen, Magnus O; Koehler, Christian J; Treumann, Achim; Thiede, Bernd
2011-01-01
The quantitative comparison of proteome level changes across biological samples has become an essential feature in proteomics that remains challenging. We have recently introduced isobaric peptide termini labeling (IPTL), a novel strategy for isobaric quantification based on the derivatization of peptide termini with complementary isotopically labeled reagents. Unlike non-isobaric quantification methods, sample complexity at the MS level is not increased, providing improved sensitivity and protein coverage. The distinguishing feature of IPTL when comparing it to more established isobaric labeling methods (iTRAQ and TMT) is the presence of quantification signatures in all sequence-determining ions in MS/MS spectra, not only in the low mass reporter ion region. This makes IPTL a quantification method that is accessible to mass spectrometers with limited capabilities in the low mass range. Also, the presence of several quantification points in each MS/MS spectrum increases the robustness of the quantification procedure.
Cho, Kyung Jin; Müller, Jacobus H; Erasmus, Pieter J; DeJour, David; Scheffer, Cornie
2014-01-01
Segmentation and computer-assisted design tools have the potential to test the validity of simulated surgical procedures, e.g., trochleoplasty. A repeatable measurement method for three-dimensional femur models that enables quantification of knee parameters of the distal femur is presented. Fifteen healthy knees are analysed using the method to provide a training set for an artificial neural network. The aim is to use this artificial neural network for the prediction of parameter values that describe the shape of a normal trochlear groove geometry. This is achieved by feeding the artificial neural network with the unaffected parameters of a dysplastic knee. Four dysplastic knees (Types A through D) are virtually redesigned by morphing the groove geometries based on the shape suggested by the artificial neural network. Each of the four resulting shapes is analysed and compared to its initial dysplastic shape in terms of three anteroposterior dimensions: lateral, central and medial. For the four knees, the trochlear depth is increased, the ventral trochlear prominence reduced and the sulcus angle corrected to within published normal ranges. The results show that a lateral facet elevation is inadequate, whereas a sulcus-deepening or depression trochleoplasty is more beneficial for correcting trochlear dysplasia.
CORRECTING ENERGY EXPENDITURES FOR FATIGUE AND EXCESS POST-EXERCISE OXYGEN CONSUMPTION
The EPA's human exposure and dose models often require a quantification of oxygen consumption for a simulated individual. Oxygen consumption is dependent on the individual's current level of physical activity (PA), which is determined from activity diaries selected from the Conso...
Letcher, R J; Li, H X; Chu, S G
2005-01-01
Hydroxylated metabolites of polychlorinated biphenyls (HO-PCBs) and pentachlorophenol (PCP) are halogenated phenolic compounds that are increasingly common as environmental contaminants, mainly in the blood of wildlife and humans. A methodology based on high-performance liquid chromatography (reversed-phase)-electrospray (negative) ionization-tandem quadrupole mass spectrometry (LC-ESI(-)-MS-MS) in the selected ion monitoring or multiple reaction monitoring modes was developed for HO-PCB and PCP determination in blood plasma and serum. For 11 environmentally relevant HO-PCB congeners and PCP spiked into fetal calf serum, quantitative assessments, including matrix effects on ESI(-) suppression/enhancement, showed process (recovery) efficiencies of 73% to 89% without internal standard (IS) correction and 88% to 103% with IS correction; method limits of quantification ranged from 1 to 50 pg/g (wet weight). The developed LC-ESI(-)-MS methodology gave results similar to those of GC-MS- and GC-ECD-based approaches for HO-PCB identification and quantification in the plasma of polar bear (Ursus maritimus) from the Canadian arctic. LC-ESI(-)-MS identified four HO-PCB congeners [4'-HO-2,2',4,6,6'-pentachlorobiphenyl (4'-HO-CB104), 4-HO-2,3,3',4',5-pentachlorobiphenyl (4-HO-CB107), 4-HO-2,3,3',5,5',6-hexachlorobiphenyl (4-HO-CB165) and 3'-HO-2,2',3,4,4',5,5'-heptachlorobiphenyl (3'-HO-CB180)], and 14 additional tetra- to hepta-chlorinated HO-PCB isomers in the polar bear plasma.
Mandija, Stefano; Petrov, Petar I; Neggers, Sebastian F W; Luijten, Peter R; van den Berg, Cornelis A T
2016-11-01
Transcranial magnetic stimulation (TMS) is an emerging technique that allows non-invasive neurostimulation. However, the correct validation of electromagnetic models of typical TMS coils and the correct assessment of the incident TMS field (B_TMS) produced by standard TMS stimulators are still lacking. Such a validation can be performed by mapping the B_TMS produced by a realistic TMS setup. In this study, we show that MRI can provide precise quantification of the magnetic field produced by a realistic TMS coil and a clinically used TMS stimulator in the region in which neurostimulation occurs. Measurements of the phase accumulation created by TMS pulses applied during a tailored MR sequence were performed in a phantom. Dedicated hardware was developed to synchronize a typical, clinically used TMS setup with a 3-T MR scanner. For comparison purposes, electromagnetic simulations of B_TMS were performed. MR-based measurements allow the mapping and quantification of B_TMS starting 2.5 cm from the TMS coil. For closer regions, the intra-voxel dephasing induced by B_TMS prohibits TMS field measurements. For 1% TMS output, the maximum measured value was ~0.1 mT. The simulations reflect the experimental data quantitatively. These measurements can be used to validate electromagnetic models of TMS coils, to guide TMS coil positioning, and for dosimetry and quality assessment of concurrent TMS-MRI studies without the need for crude methods, such as motor threshold, for stimulation dose determination. Copyright © 2016 John Wiley & Sons, Ltd.
A multifractal approach to space-filling recovery for PET quantification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willaime, Julien M. Y., E-mail: julien.willaime@siemens.com; Aboagye, Eric O.; Tsoumpas, Charalampos
2014-11-01
Purpose: A new image-based methodology is developed for estimating the apparent space-filling properties of an object of interest in PET imaging without need for a robust segmentation step and used to recover accurate estimates of total lesion activity (TLA). Methods: A multifractal approach and the fractal dimension are proposed to recover the apparent space-filling index of a lesion (tumor volume, TV) embedded in nonzero background. A practical implementation is proposed, and the index is subsequently used with mean standardized uptake value (SUVmean) to correct TLA estimates obtained from approximate lesion contours. The methodology is illustrated on fractal and synthetic objects contaminated by partial volume effects (PVEs), validated on realistic (18)F-fluorodeoxyglucose PET simulations and tested for its robustness using a clinical (18)F-fluorothymidine PET test-retest dataset. Results: TLA estimates were stable for a range of resolutions typical in PET oncology (4-6 mm). By contrast, the space-filling index and intensity estimates were resolution dependent. TLA was generally recovered within 15% of ground truth on postfiltered PET images affected by PVEs. Volumes were recovered within 15% variability in the repeatability study. Results indicated that TLA is a more robust index than other traditional metrics such as SUVmean or TV measurements across imaging protocols. Conclusions: The fractal procedure reported here is proposed as a simple and effective computational alternative to existing methodologies which require the incorporation of image preprocessing steps (i.e., partial volume correction and automatic segmentation) prior to quantification.
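An illustrative box-counting estimate of the fractal (space-filling) dimension of a binary lesion mask; the paper's multifractal implementation differs, but this conveys the core idea. `mask` is assumed to be a 2D boolean numpy array with at least one foreground pixel per box size.

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    counts = []
    for s in sizes:
        # Count boxes of side s containing at least one foreground pixel.
        h = mask.shape[0] // s * s
        w = mask.shape[1] // s * s
        blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # Dimension is the slope of log N(s) against log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```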
Suárez, Inmaculada; Coto, Baudilio
2015-08-14
Average molecular weights and polydispersity indexes are among the most important parameters considered in polymer characterization. Usually, gel permeation chromatography (GPC) and multi-angle light scattering (MALS) are used for this determination, but GPC values are overestimated due to the dispersion introduced by the column separation. Several procedures have been proposed to correct this effect, usually involving more complex calibration processes. In this work, a new calculation method that includes diffusion effects is considered. An equation for the concentration profile due to diffusion effects along the GPC column was taken to be a Fickian function, and polystyrene narrow standards were used to determine effective diffusion coefficients. The molecular weight distribution function of mono- and polydisperse polymers was interpreted as a sum of several Fickian functions representing a sample formed by only a few kinds of polymer chains with specific molecular weights and diffusion coefficients. The proposed model accurately fits the concentration profile along the whole elution time range, as checked by the computed standard deviation. Molecular weights obtained by this new method are similar to those obtained by MALS or traditional GPC, while polydispersity index values are intermediate between those obtained by traditional GPC combined with the Universal Calibration method and by the MALS method. Pearson and Lin coefficients show an improvement in the correlation of polydispersity index values determined by GPC and MALS when diffusion coefficients and the new method are used. Copyright © 2015 Elsevier B.V. All rights reserved.
Spatial Normalization of Reverse Phase Protein Array Data
Kaushik, Poorvi; Molinelli, Evan J.; Miller, Martin L.; Wang, Weiqing; Korkut, Anil; Liu, Wenbin; Ju, Zhenlin; Lu, Yiling; Mills, Gordon; Sander, Chris
2014-01-01
Reverse phase protein arrays (RPPA) are an efficient, high-throughput, cost-effective method for the quantification of specific proteins in complex biological samples. The quality of RPPA data may be affected by various sources of error. One of these, spatial variation, is caused by uneven exposure of different parts of an RPPA slide to the reagents used in protein detection. We present a method for the determination and correction of systematic spatial variation in RPPA slides using positive control spots printed on each slide. The method uses a simple bi-linear interpolation technique to obtain a surface representing the spatial variation occurring across the dimensions of a slide. This surface is used to calculate correction factors that can normalize the relative protein concentrations of the samples on each slide. Adoption of the method results in increased agreement between technical and biological replicates of various tumor- and cell-line-derived samples. Further, in data from a study of the melanoma cell line SKMEL-133, several slides that had previously been rejected because they had a coefficient of variation (CV) greater than 15% are rescued by reduction of the CV below this threshold in each case. The method is implemented in the R statistical programming language. It is compatible with MicroVigene and SuperCurve, packages commonly used in RPPA data analysis. The method is made available, along with suggestions for implementation, at http://bitbucket.org/rppa_preprocess/rppa_preprocess/src. PMID:25501559
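A sketch of the interpolation idea in Python rather than the authors' R package: a smooth surface interpolated through the positive-control intensities yields per-spot correction factors. The names `ctrl_xy` (control-spot coordinates), `ctrl_val` (their intensities) and the use of scipy's griddata are assumptions; griddata returns NaN outside the convex hull of the controls.

```python
import numpy as np
from scipy.interpolate import griddata

def spatial_correction_factors(ctrl_xy, ctrl_val, sample_xy):
    # Interpolate the control response at each sample-spot position.
    surface = griddata(ctrl_xy, ctrl_val, sample_xy, method="linear")
    # Normalize so the mean control response corresponds to a factor of 1.
    return np.mean(ctrl_val) / surface

# Corrected sample intensity = raw intensity * correction factor at its spot.
```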
Aghayee, Samira; Winkowski, Daniel E.; Bowen, Zachary; Marshall, Erin E.; Harrington, Matt J.; Kanold, Patrick O.; Losert, Wolfgang
2017-01-01
The application of 2-photon laser scanning microscopy (TPLSM) techniques to measure the dynamics of cellular calcium signals in populations of neurons is an extremely powerful technique for characterizing neural activity within the central nervous system. The use of TPLSM on awake and behaving subjects promises new insights into how neural circuit elements cooperatively interact to form sensory perceptions and generate behavior. A major challenge in imaging such preparations is unavoidable animal and tissue movement, which leads to shifts in the imaging location (jitter). The presence of image motion can lead to artifacts, especially since quantification of TPLSM images involves analysis of fluctuations in fluorescence intensities for each neuron, determined from small regions of interest (ROIs). Here, we validate a new motion correction approach to compensate for motion of TPLSM images in the superficial layers of auditory cortex of awake mice. We use a nominally uniform fluorescent signal as a secondary signal to complement the dynamic signals from genetically encoded calcium indicators. We tested motion correction for single plane time lapse imaging as well as multiplane (i.e., volume) time lapse imaging of cortical tissue. Our procedure of motion correction relies on locating the brightest neurons and tracking their positions over time using established techniques of particle finding and tracking. We show that our tracking based approach provides subpixel resolution without compromising speed. Unlike most established methods, our algorithm also captures deformations of the field of view and thus can compensate e.g., for rotations. Object tracking based motion correction thus offers an alternative approach for motion correction, one that is well suited for real time spike inference analysis and feedback control, and for correcting for tissue distortions. PMID:28860973
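The authors track bright cells to estimate motion; as a simplified stand-in, this sketch estimates a rigid per-frame shift against a reference image by FFT cross-correlation on the static (uniform) fluorescence channel, then resamples the functional channel. Subpixel fitting and the deformation handling described in the paper are omitted; all names are illustrative.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def estimate_shift(frame, reference):
    """Integer (dy, dx) shift maximizing the circular cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(frame)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # Unwrap indices in the upper half-range to negative shifts.
    return [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]

def register_stack(static_stack, functional_stack, reference):
    out = np.empty_like(functional_stack)
    for i, (s_frame, f_frame) in enumerate(zip(static_stack, functional_stack)):
        dy, dx = estimate_shift(s_frame, reference)
        out[i] = nd_shift(f_frame, (dy, dx), order=1)
    return out
```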
Mehranian, Abolfazl; Arabi, Hossein; Zaidi, Habib
2016-03-01
Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems mandated the widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim to improve the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is a lack of standardized global mapping between the intensities of MRI signal and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bones and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified in three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification will be elaborated. Future prospects and potential clinical applications of these techniques and their integration in commercial systems will also be discussed.
[Imaging of diabetic osteopathy].
Patsch, J; Pietschmann, P; Schueller-Weidekamm, C
2015-04-01
Diabetic bone diseases are more than just osteoporosis in patients with diabetes mellitus (DM): a relatively high bone mineral density is paired with a paradoxically high risk of fragility fractures. Diabetics exhibit low bone turnover, osteocyte dysfunction, relative hypoparathyroidism and an accumulation of advanced glycation end products in the bone matrix. Besides typical insufficiency fractures, diabetics have a high risk of peripheral fractures of the lower extremities (e.g. metatarsal fractures). The correct interdisciplinary assessment of fracture risk in patients with DM is therefore a clinical challenge. There are two state-of-the-art imaging methods for the quantification of fracture risk: dual-energy X-ray absorptiometry (DXA) and quantitative computed tomography (QCT). Radiography, multidetector computed tomography (MDCT) and magnetic resonance imaging (MRI) are suitable for the detection of insufficiency fractures. Novel research imaging techniques, such as high-resolution peripheral quantitative computed tomography (HR-pQCT), provide non-invasive insights into the bone microarchitecture of the peripheral skeleton. Using MR spectroscopy, bone marrow composition can be studied. Both methods have been shown to be capable of discriminating between type 2 diabetic patients with and without prevalent fragility fractures and thus bear the potential to improve the current standard of care. Currently, both methods remain limited to clinical research applications. DXA and HR-pQCT are valid tools for the quantification of bone mineral density and the assessment of fracture risk in patients with DM, especially if interpreted in the context of clinical risk factors. Radiography, CT and MRI are suitable for the detection of insufficiency fractures.
Zhang, Mengliang; Harrington, Peter de B
2015-01-01
A multivariate partial least-squares (PLS) method was applied to the quantification of two complex polychlorinated biphenyl (PCB) commercial mixtures, Aroclor 1254 and 1260, in a soil matrix. PCBs in soil samples were extracted by headspace solid phase microextraction (SPME) and determined by gas chromatography/mass spectrometry (GC/MS). Decachlorinated biphenyl (deca-CB) was used as the internal standard. After baseline correction was applied, four data representations, including extracted ion chromatograms (EIC) for Aroclor 1254, EIC for Aroclor 1260, EIC for both Aroclors, and two-way data sets, were constructed for PLS-1 and PLS-2 calibrations and evaluated with respect to quantitative prediction accuracy. The PLS model was optimized with respect to the number of latent variables using cross-validation of the calibration data set. Validation of the method was performed with certified soil samples and real field soil samples, and the predicted concentrations for both Aroclors using EIC data sets agreed with the certified values. The linear range of the method was from 10 μg kg(-1) to 1000 μg kg(-1) for both Aroclor 1254 and 1260 in soil matrices, and the detection limit was 4 μg kg(-1) for Aroclor 1254 and 6 μg kg(-1) for Aroclor 1260. This holistic approach for the determination of mixtures in complex samples has broad application to environmental forensics and modeling. Copyright © 2014 Elsevier Ltd. All rights reserved.
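A minimal sketch of PLS-1 calibration on EIC data, with scikit-learn standing in for the chemometrics software used in the paper; the random placeholder data, matrix shapes and component range are all assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_cal = rng.random((20, 500))        # 20 standards x 500 EIC points (placeholder)
y_cal = rng.uniform(10, 1000, 20)    # known concentrations, ug/kg (placeholder)

# Choose the number of latent variables by cross-validation, then predict.
scores = [cross_val_score(PLSRegression(n_components=k), X_cal, y_cal,
                          cv=5, scoring="neg_mean_squared_error").mean()
          for k in range(1, 8)]
best_k = int(np.argmax(scores)) + 1
model = PLSRegression(n_components=best_k).fit(X_cal, y_cal)
y_pred = model.predict(X_cal[:3])
```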
Quantifying construction and demolition waste: An analytical review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Zezhou; Yu, Ann T.W., E-mail: bsannyu@polyu.edu.hk; Shen, Liyin
2014-09-15
Highlights: • Prevailing C and D waste quantification methodologies are identified and compared. • One specific methodology cannot fulfill all waste quantification scenarios. • A relevance tree for appropriate quantification methodology selection is proposed. • More attention should be paid to civil and infrastructural works. • Classified information is suggested for making an effective waste management plan. - Abstract: Quantifying construction and demolition (C and D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In the literature, various methods have been employed to quantify C and D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C and D waste quantification methodologies are identified, including the site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies are discussed and potential future research directions are suggested.
Han, Yongming; Chen, Antony; Cao, Junji; Fung, Kochy; Ho, Fai; Yan, Beizhan; Zhan, Changlin; Liu, Suixin; Wei, Chong; An, Zhisheng
2013-01-01
Quantifying elemental carbon (EC) content in geological samples is challenging due to interferences of crustal, salt, and organic material. Thermal/optical analysis, combined with acid pretreatment, represents a feasible approach. However, the consistency of various thermal/optical analysis protocols for this type of samples has never been examined. In this study, urban street dust and soil samples from Baoji, China were pretreated with acids and analyzed with four thermal/optical protocols to investigate how analytical conditions and optical correction affect EC measurement. The EC values measured with reflectance correction (ECR) were found always higher and less sensitive to temperature program than the EC values measured with transmittance correction (ECT). A high-temperature method with extended heating times (STN120) showed the highest ECT/ECR ratio (0.86) while a low-temperature protocol (IMPROVE-550), with heating time adjusted for sample loading, showed the lowest (0.53). STN ECT was higher than IMPROVE ECT, in contrast to results from aerosol samples. A higher peak inert-mode temperature and extended heating times can elevate ECT/ECR ratios for pretreated geological samples by promoting pyrolyzed organic carbon (PyOC) removal over EC under trace levels of oxygen. Considering that PyOC within filter increases ECR while decreases ECT from the actual EC levels, simultaneous ECR and ECT measurements would constrain the range of EC loading and provide information on method performance. Further testing with standard reference materials of common environmental matrices supports the findings. Char and soot fractions of EC can be further separated using the IMPROVE protocol. The char/soot ratio was lower in street dusts (2.2 on average) than in soils (5.2 on average), most likely reflecting motor vehicle emissions. The soot concentrations agreed with EC from CTO-375, a pure thermal method.
2014-01-01
Background Various computer-based methods exist for the detection and quantification of protein spots in two-dimensional gel electrophoresis images. Area-based methods are commonly used for spot quantification: an area is assigned to each spot, and the sum of the pixel intensities in that area, the so-called volume, is used as a measure of spot signal. Other methods use the optical density, i.e. the intensity of the most intense pixel of a spot, or calculate the volume from the parameters of a fitted function. Results In this study we compare the performance of different spot quantification methods using synthetic and real data. We propose a ready-to-use algorithm for spot detection and quantification that uses fitting of two-dimensional Gaussian function curves for the extraction of data from two-dimensional gel electrophoresis (2-DE) images. The algorithm implements fitting using logical compounds and is computationally efficient. The applicability of the compound fitting algorithm was evaluated for various simulated data and compared with other quantification approaches. We provide evidence that even if an incorrect bell-shaped function is used, the fitting method is superior to other approaches, especially when spots overlap. Finally, we validated the method with experimental data of urea-based 2-DE of Aβ peptides and re-analyzed published data sets. Our method showed higher precision and accuracy than other approaches when applied to exposure time series and standard gels. Conclusion Compound fitting as a quantification method for 2-DE spots shows several advantages over other approaches and can be combined with various spot detection methods. The algorithm was scripted in MATLAB (Mathworks) and is available as a supplemental file. PMID:24915860
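A sketch of fitting a single 2D Gaussian to one detected spot with scipy, the same principle as the compound fitting described above (which fits several Gaussians jointly to handle overlap, in MATLAB rather than Python). `patch` is a small 2D image region around a spot; the initial guesses are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0)**2 / (2 * sx**2)
                           + (y - y0)**2 / (2 * sy**2))) + offset).ravel()

def fit_spot(patch):
    y, x = np.indices(patch.shape)
    p0 = (patch.max() - patch.min(), patch.shape[1] / 2, patch.shape[0] / 2,
          2.0, 2.0, patch.min())
    popt, _ = curve_fit(gauss2d, (x, y), patch.ravel(), p0=p0)
    amp, _, _, sx, sy, _ = popt
    return 2 * np.pi * amp * sx * sy  # spot volume from fitted parameters
```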
Mahlke, C; Hernando, D; Jahn, C; Cigliano, A; Ittermann, T; Mössler, A; Kromrey, ML; Domaska, G; Reeder, SB; Kühn, JP
2016-01-01
Purpose To investigate the feasibility of estimating the proton-density fat fraction (PDFF) using a 7.1 Tesla magnetic resonance imaging (MRI) system and to compare the accuracy of liver fat quantification using different fitting approaches. Materials and Methods Fourteen leptin-deficient ob/ob mice and eight intact controls were examined in a 7.1 Tesla animal scanner using a 3-dimensional six-echo chemical shift-encoded pulse sequence. Confounder-corrected PDFF was calculated using magnitude fitting (magnitude data alone) and combined fitting (complex and magnitude data). Differences between fitting techniques were compared using Bland-Altman analysis. In addition, PDFFs derived with both reconstructions were correlated with histopathological fat content and triglyceride mass fraction using linear regression analysis. Results The PDFFs determined with both reconstructions correlated very strongly (r=0.91). However, a small mean bias between reconstructions demonstrated divergent results (3.9%; CI 2.7%-5.1%). For both reconstructions, there was a linear correlation with histopathology (combined fitting: r=0.61; magnitude fitting: r=0.64) and triglyceride content (combined fitting: r=0.79; magnitude fitting: r=0.70). Conclusion Liver fat quantification using the PDFF derived from MRI performed at 7.1 Tesla is feasible. PDFF has strong correlations with histopathologically determined fat and with triglyceride content. However, small differences between PDFF reconstruction techniques may impair the robustness and reliability of the biomarker at 7.1 Tesla. PMID:27197806
Rieger, Benedikt; Akçakaya, Mehmet; Pariente, José C; Llufriu, Sara; Martinez-Heras, Eloy; Weingärtner, Sebastian; Schad, Lothar R
2018-04-27
Magnetic resonance fingerprinting (MRF) is a promising method for fast simultaneous quantification of multiple tissue parameters. The objective of this study is to improve the coverage of MRF based on echo-planar imaging (MRF-EPI) by using a slice-interleaved acquisition scheme. For this, the MRF-EPI is modified to acquire several slices in a randomized interleaved manner, increasing the effective repetition time of the spoiled gradient-echo readout acquisition in each slice. Per-slice matching of the signal trace to a precomputed dictionary allows the generation of T1 and T2* maps with integrated B1+ correction. Subsequent compensation for the coil sensitivity profile and normalization to the cerebrospinal fluid additionally allow for quantitative proton density (PD) mapping. Numerical simulations are performed to optimize the number of interleaved slices. Quantification accuracy is validated in phantom scans and feasibility is demonstrated in vivo. Numerical simulations suggest the acquisition of four slices as a trade-off between quantification precision and scan time. Phantom results indicate good agreement with reference measurements (difference T1: -2.4 ± 1.1%, T2*: -0.5 ± 2.5%, PD: -0.5 ± 7.2%). In-vivo whole-brain coverage of T1, T2* and PD with 32 slices was acquired within 3:36 minutes, resulting in parameter maps of high visual quality and performance comparable with single-slice MRF-EPI at 4-fold scan-time reduction.
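The core of the per-voxel dictionary matching step, sketched with numpy under assumed shapes: each measured signal trace is compared to precomputed simulated traces (one per T1/T2* candidate pair) via the normalized inner product, and the best-matching atom's parameters are assigned to the voxel.

```python
import numpy as np

def match_fingerprints(signals, dictionary, params):
    """signals: (n_voxels, n_timepoints); dictionary: (n_atoms, n_timepoints);
    params: (n_atoms, 2) holding the T1 and T2* of each dictionary atom."""
    d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    corr = np.abs(signals @ d_norm.conj().T)        # (n_voxels, n_atoms)
    best = np.argmax(corr, axis=1)
    # The matched scaling is proportional to PD, up to the coil-sensitivity
    # and CSF-normalization factors described above.
    pd = corr[np.arange(len(signals)), best]
    t1, t2star = params[best, 0], params[best, 1]
    return t1, t2star, pd
```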
Solassol, J; Burcia, V; Costes, V; Lacombe, J; Mange, A; Barbotte, E; de Verbizier, D; Cartier, C; Makeieff, M; Crampette, L; Boulle, N; Maudelonde, T; Guerrier, B; Garrel, R
2009-01-01
Background: Molecular diagnosis has been proposed to enhance the intra-operative diagnosis of sentinel lymph node (SLN) invasion in head and neck squamous cell carcinoma (HNSCC). Although cytokeratin (CK) mRNA quantification with real-time reverse transcriptase-PCR (QRT–PCR) has produced encouraging results, the more discriminating markers remain to be identified. Methods: Pemphigus vulgaris antigen (PVA), squamous cell carcinoma antigen (SCCA), and CK17 mRNA were quantified using QRT–PCR, and the results were compared with an extensive histopathological examination of the entire SLNs on 78 SLNs harvested from 22 patients with HNSCC. Results: SCCA and CK17 quantification showed significantly higher mRNA values for macrometastases (MAs) than for either negative or isolated tumour cell (ITC) SLNs (P<0.01). Pemphigus vulgaris antigen allowed the discrimination of all MAs and micrometastases from both negative and ITC SLNs (P<0.001). For the neck staging of patients, considering metastatic vs non-metastatic status, receiver-operating characteristic curve analysis found areas under the curve of 93.8, 97.9, and 100% for CK17, SCCA, and PVA, respectively. With PVA, a cutoff value of 562 copies per 100 ng of cDNA permitted the correct distinction between patients with positive as opposed to negative neck nodes in all cases. Conclusion: PVA seems to be a highly promising marker for accurate intra-operative SLN staging in HNSCC by QRT–PCR. PMID:19997107
NASA Astrophysics Data System (ADS)
Bourgeat, Pierrick; Dore, Vincent; Fripp, Jurgen; Villemagne, Victor L.; Rowe, Chris C.; Salvado, Olivier
2015-03-01
With the advances in PET tracers for β-Amyloid (Aβ) detection in neurodegenerative diseases, automated quantification methods are desirable. For clinical use, there is a great need for a PET-only quantification method, as MR images are not always available. In this paper, we validate a previously developed PET-only quantification method against MR-based quantification using 6 tracers: 18F-Florbetaben (N=148), 18F-Florbetapir (N=171), 18F-NAV4694 (N=47), 18F-Flutemetamol (N=180), 11C-PiB (N=381) and 18F-FDG (N=34). The results show an overall mean absolute percentage error of less than 5% for each tracer. The method has been implemented as a remote service called CapAIBL (http://milxcloud.csiro.au/capaibl). PET images are uploaded to a cloud platform where they are spatially normalised to a standard template and quantified. A report containing global as well as local quantification, along with a surface projection of the β-Amyloid deposition, is automatically generated at the end of the pipeline and emailed to the user.
NASA Astrophysics Data System (ADS)
Lee, Hyun-Seok; Heun Kim, Sook; Jeong, Ji-Seon; Lee, Yong-Moon; Yim, Yong-Hyeon
2015-10-01
An element-based reductive approach provides an effective means of realizing International System of Units (SI) traceability for high-purity biological standards. Here, we develop, for the first time, an absolute protein quantification method using double isotope dilution (ID) inductively coupled plasma mass spectrometry (ICP-MS) combined with microwave-assisted acid digestion. We validated the method and applied it to certify the candidate protein certified reference material (CRM) of human growth hormone (hGH). The concentration of hGH was determined by analysing the total amount of sulfur in hGH. Next, size-exclusion chromatography was used with ICP-MS to characterize and quantify sulfur-containing impurities. By subtracting the contribution of sulfur-containing impurities from the total sulfur content in the hGH CRM, we obtained an SI-traceable certified value. The quantification result obtained with the present method based on sulfur analysis was in excellent agreement with the result determined via a well-established protein quantification method based on amino acid analysis using conventional acid hydrolysis combined with ID liquid chromatography-tandem mass spectrometry. The element-based protein quantification method developed here can be used generally for SI-traceable absolute quantification of proteins, especially pure-protein standards.
Sensitivity estimation in time-of-flight list-mode positron emission tomography.
Herraiz, J L; Sitek, A
2015-11-01
An accurate quantification of the images in positron emission tomography (PET) requires knowing the actual sensitivity at each voxel, which represents the probability that a positron emitted in that voxel is finally detected as a coincidence of two gamma rays in a pair of detectors in the PET scanner. This sensitivity depends on the characteristics of the acquisition, as it is affected by the attenuation of the annihilation gamma rays in the body and by possible variations in the sensitivity of the scanner detectors. In this work, the authors propose a new approach to handling time-of-flight (TOF) list-mode PET data, which allows performing either or both a self-attenuation correction and a self-normalization correction based on emission data only. The authors derive the theory using a fully Bayesian statistical model of complete data. The authors perform an initial evaluation of algorithms derived from that theory and proposed in this work using numerical 2D list-mode simulations with different TOF resolutions and total numbers of detected coincidences. Effects of randoms and scatter are not simulated. The authors found that the proposed algorithms successfully correct for unknown attenuation and scanner normalization for simulated 2D list-mode TOF-PET data. A new method is presented that can be used for corrections for attenuation and normalization (sensitivity) using TOF list-mode data.
NASA Astrophysics Data System (ADS)
Ma, Yupengxue; Gong, Xinning; He, Bangbang; Li, Xiaofei; Cao, Dianyu; Li, Junshuai; Xiong, Qing; Chen, Qiang; Chen, Bing Hui; Huo Liu, Qing
2018-04-01
The hydroxyl (OH) radical is one of the most important reactive species produced by plasma-liquid interactions, and the OH in the liquid phase (dissolved OH radical, OHdis) takes effect in many plasma-based applications due to its high reactivity. Therefore, quantification of the OHdis in a plasma-liquid system is of great importance, and a molecular probe method usually used for OHdis detection might be applied. Herein, we investigate the validity of using the molecular probe method to estimate the [OHdis] in the plasma-liquid system. Dimethyl sulfoxide is used as the molecular probe to estimate the [OHdis] in an air plasma-liquid system, and usually the [OHdis] is deduced by quantifying the OHdis-induced derivative, formaldehyde (HCHO). The analysis indicates that the true concentration of the OHdis should be estimated from the sum of three terms: the formed HCHO, the OHdis consumed by existing OH scavengers, and the H2O2 formed from the OHdis. The results show that the measured [HCHO] needs to be corrected, since HCHO consumption is not negligible in the plasma-liquid system. We conclude from the results and the analysis that the molecular probe method generally underestimates the [OHdis] in the plasma-liquid system. To obtain the true concentration of the OHdis in the plasma-liquid system, one needs to know the consumption behavior of the OHdis-induced derivatives, information on the OH scavengers (such as hydrated electrons and atomic hydrogen, besides the molecular probe), and the amount of H2O2 formed from the OHdis.
RNA-Skim: a rapid method for RNA-Seq quantification at transcript level
Zhang, Zhaojun; Wang, Wei
2014-01-01
Motivation: The RNA-Seq technique has been demonstrated as a revolutionary means for exploring the transcriptome because it provides deep coverage and base pair-level resolution. RNA-Seq quantification has proven to be an efficient alternative to the microarray technique in gene expression studies, and it is a critical component of RNA-Seq differential expression analysis. Most existing RNA-Seq quantification tools require the alignment of fragments to either a genome or a transcriptome, entailing a time-consuming and intricate alignment step. To improve the performance of RNA-Seq quantification, an alignment-free method, Sailfish, has recently been proposed to quantify transcript abundances using all k-mers in the transcriptome, demonstrating the feasibility of designing an efficient alignment-free method for transcriptome quantification. Even though Sailfish is substantially faster than alternative alignment-dependent methods such as Cufflinks, using all k-mers in the transcriptome impedes the scalability of the method. Results: We propose a novel RNA-Seq quantification method, RNA-Skim, which partitions the transcriptome into disjoint transcript clusters based on sequence similarity and introduces the notion of sig-mers, a special type of k-mer uniquely associated with each cluster. We demonstrate that the sig-mer counts within a cluster are sufficient for estimating transcript abundances with accuracy comparable to any state-of-the-art method. This enables RNA-Skim to perform transcript quantification on each cluster independently, reducing a complex optimization problem into smaller optimization tasks that can be run in parallel. As a result, RNA-Skim uses <4% of the k-mers and <10% of the CPU time required by Sailfish. It is able to finish transcriptome quantification in <10 min per sample using just a single thread on a commodity computer, which represents a >100× speedup over state-of-the-art alignment-based methods, while delivering comparable or higher accuracy. Availability and implementation: The software is available at http://www.csbio.unc.edu/rs. Contact: weiwang@cs.ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931995
Zhang, Aizhi; Wang, Quanlin; Mo, Shijie
2010-11-01
A method for the simultaneous determination of delta-9-tetrahydrocannabinol (THC), cannabidiol (CBD) and cannabinol (CBN) in edible oil was developed using ultra performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS). The target compounds were extracted with methanol, purified on an LC-Alumina-N solid phase extraction cartridge, and separated and detected by UPLC-MS/MS. Quantitative analysis was corrected by an isotope internal standard method using delta-9-THC-D3 as the internal standard. Average recoveries for the target compounds varied from 68.0% to 101.6%, with relative standard deviations ranging from 7.0% to 20.1% at three spiked levels. The limits of detection (LOD) of the method ranged from 0.06 to 0.17 microg/kg, and the limits of quantification (LOQ) were in the range of 0.20-0.52 microg/kg. The results showed that the method meets the requirements for the simultaneous determination of THC, CBD and CBN in edible oil.
Assessment of cardiac fibrosis: a morphometric method comparison for collagen quantification.
Schipke, Julia; Brandenberger, Christina; Rajces, Alexandra; Manninger, Martin; Alogna, Alessio; Post, Heiner; Mühlfeld, Christian
2017-04-01
Fibrotic remodeling of the heart is a frequent condition linked to various diseases and cardiac dysfunction. Collagen quantification is an important objective in cardiac fibrosis research; however, a variety of different histological methods are currently used that may differ in accuracy. Here, frequently applied collagen quantification techniques were compared. A porcine model of early stage heart failure with preserved ejection fraction was used as an example. Semiautomated threshold analyses were imprecise, mainly due to inclusion of noncollagen structures or failure to detect certain collagen deposits. In contrast, collagen assessment by automated image analysis and light microscopy (LM)-stereology was more sensitive. Depending on the quantification method, the amount of estimated collagen varied and influenced intergroup comparisons. PicroSirius Red, Masson's trichrome, and Azan staining protocols yielded similar results, whereas the measured collagen area increased with increasing section thickness. Whereas none of the LM-based methods showed significant differences between the groups, electron microscopy (EM)-stereology revealed a significant collagen increase between cardiomyocytes in the experimental group, but not at other localizations. In conclusion, in contrast to the staining protocol, section thickness and the quantification method being used directly influence the estimated collagen content and thus, possibly, intergroup comparisons. EM in combination with stereology is a precise and sensitive method for collagen quantification if certain prerequisites are considered. For subtle fibrotic alterations, consideration of collagen localization may be necessary. Among LM methods, LM-stereology and automated image analysis are appropriate to quantify fibrotic changes, the latter depending on careful control of algorithm and comparable section staining. NEW & NOTEWORTHY Direct comparison of frequently applied histological fibrosis assessment techniques revealed a distinct relation of measured collagen and utilized quantification method as well as section thickness. Besides electron microscopy-stereology, which was precise and sensitive, light microscopy-stereology and automated image analysis proved to be appropriate for collagen quantification. Moreover, consideration of collagen localization might be important in revealing minor fibrotic changes. Copyright © 2017 the American Physiological Society.
From cutting-edge pointwise cross-section to groupwise reaction rate: A primer
NASA Astrophysics Data System (ADS)
Sublet, Jean-Christophe; Fleming, Michael; Gilbert, Mark R.
2017-09-01
The nuclear research and development community has a history of using both integral and differential experiments to support accurate lattice-reactor, nuclear reactor criticality and shielding simulations, as well as verification and validation efforts for cross sections and emitted particle spectra. An important aspect of this type of analysis is the proper consideration of the contribution of the neutron spectrum in its entirety, with correct propagation of uncertainties and standard deviations derived from Monte Carlo simulations, to the local and total uncertainty in the simulated reaction rates (RRs), which usually apply to only one application at a time. This paper identifies deficiencies in the traditional treatment and discusses correct handling of RR uncertainty quantification and propagation, including details of the cross section components in the RR uncertainty estimates, which are verified for relevant applications. The methodology that rigorously captures the spectral shift and cross section contributions to the uncertainty in the RR is discussed, with quantified examples that demonstrate the importance of the proper treatment of the spectrum profile and cross section contributions to the uncertainty in the RR and subsequent response functions. The recently developed inventory code FISPACT-II, when connected to the processed nuclear data libraries TENDL-2015, ENDF/B-VII.1, JENDL-4.0u or JEFF-3.2, forms an enhanced multi-physics platform providing a wide variety of advanced simulation methods for modelling activation, transmutation and burnup protocols, and for simulating radiation damage source terms. The system has extended cutting-edge nuclear data forms and uncertainty quantification and propagation methods, which have been the subject of recent integral and differential fission, fusion and accelerator validation efforts. The simulation system is used to accurately and predictively probe, understand and underpin a modern and sustainable understanding of the nuclear physics that is so important for many areas of science and technology: advanced fission and fuel systems, magnetic and inertial confinement fusion, high-energy accelerator physics, medical applications, isotope production, Earth exploration, astrophysics and homeland security.
NASA Astrophysics Data System (ADS)
Saha, Abhijit; Deb, S. B.; Nagar, B. K.; Saxena, M. K.
An analytical methodology was developed for the precise quantification of ten trace rare earth elements (REEs), namely La, Ce, Pr, Nd, Sm, Eu, Tb, Dy, Ho, and Tm, in gadolinium aluminate (GdAlO3), employing ultrasonic nebulizer (USN)-desolvation-based inductively coupled plasma mass spectrometry (ICP-MS). A microwave digestion procedure was optimized for digesting 100 mg of the refractory oxide using a mixture of sulphuric acid (H2SO4), phosphoric acid (H3PO4) and water (H2O) with 1400 W power, a 10 min ramp and a 60 min hold time. The USN-desolvating sample introduction system was employed to enhance analyte sensitivities by minimizing oxide ion formation in the plasma. Studies on the effect of various matrix concentrations on the analyte intensities revealed that precise quantification of the analytes was possible at a matrix level of 250 mg L-1. The possibility of using indium as an internal standard was explored and applied to correct for matrix effects and variation in analyte sensitivity under the plasma operating conditions. Individual oxide ion formation yields were determined in matrix-matched solution and employed for correcting polyatomic interferences of light REE (LREE) oxide ions on the intensities of middle and heavy rare earth elements (MREEs and HREEs). Recoveries of ≥ 90% were achieved for the analytes employing the standard addition technique. Three real samples were analyzed for traces of REEs by the proposed method and cross-validated for Eu and Nd by isotope dilution mass spectrometry (IDMS). The results show no significant difference in the values at the 95% confidence level. The expanded uncertainty (coverage factor 1σ) in the determination of trace REEs in the samples was found to be between 3 and 8%. The instrument detection limits (IDLs) and the method detection limits (MDLs) for the ten REEs lie in the ranges 1-5 ng L-1 and 7-64 μg kg-1, respectively.
Hahn, Andreas; Nics, Lukas; Baldinger, Pia; Ungersböck, Johanna; Dolliner, Peter; Frey, Richard; Birkfellner, Wolfgang; Mitterhauser, Markus; Wadsak, Wolfgang; Karanikas, Georgios; Kasper, Siegfried; Lanzenberger, Rupert
2012-08-01
Image-derived input functions (IDIFs) represent a promising technique for a simpler and less invasive quantification of PET studies as compared to arterial cannulation. However, a number of limitations complicate the routine use of IDIFs in clinical research protocols, and the full substitution of manual arterial samples by venous ones has hardly been evaluated. This study aims for a direct validation of IDIFs and venous data for the quantification of serotonin-1A receptor binding (5-HT(1A)) with [carbonyl-(11)C]WAY-100635 before and after hormone treatment. Fifteen PET measurements with arterial and venous blood sampling were obtained from 10 healthy women, 8 scans before and 7 after eight weeks of hormone replacement therapy. Image-derived input functions were derived automatically from cerebral blood vessels, corrected for partial volume effects and combined with venous manual samples from 10 min onward (IDIF+VIF). Corrections for plasma/whole-blood ratio and metabolites were done separately with arterial and venous samples. 5-HT(1A) receptor quantification was achieved with arterial input functions (AIF) and IDIF+VIF using a two-tissue compartment model. Comparison between arterial and venous manual blood samples yielded excellent reproducibility. Variability (VAR) was less than 10% for whole-blood activity (p>0.4) and below 2% for plasma to whole-blood ratios (p>0.4). Variability was slightly higher for parent fractions (VARmax=24% at 5 min, p<0.05 and VAR<13% after 20 min, p>0.1) but still within previously reported values. IDIFs after partial volume correction had peak values comparable to AIFs (mean difference Δ=-7.6 ± 16.9 kBq/ml, p>0.1), whereas AIFs exhibited a delay (Δ=4 ± 6.4s, p<0.05) and higher peak width (Δ=15.9 ± 5.2s, p<0.001). Linear regression analysis showed strong agreement for 5-HT(1A) binding as obtained with AIF and IDIF+VIF at baseline (R(2)=0.95), after treatment (R(2)=0.93) and when pooling all scans (R(2)=0.93), with slopes and intercepts in the range of 0.97 to 1.07 and -0.05 to 0.16, respectively. In addition to the region of interest analysis, the approach yielded virtually identical results for voxel-wise quantification as compared to the AIF. Despite the fast metabolism of the radioligand, manual arterial blood samples can be substituted by venous ones for parent fractions and plasma to whole-blood ratios. Moreover, the combination of image-derived and venous input functions provides a reliable quantification of 5-HT(1A) receptors. This holds true for 5-HT(1A) binding estimates before and after treatment for both region of interest-based and voxel-wise modeling. Taken together, the approach provides less invasive receptor quantification with full independence of arterial cannulation. This offers great potential for routine use in clinical research protocols and encourages further investigation for other radioligands with different kinetic characteristics. Copyright © 2012 Elsevier Inc. All rights reserved.
A phase quantification method based on EBSD data for a continuously cooled microalloyed steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, H.; Wynne, B.P.; Palmiere, E.J.
2017-01-15
Mechanical properties of steels depend on the phase constitution of the final microstructure, which can be related to the processing parameters. Accurate quantification of the different phases is therefore necessary to investigate the relationships between processing parameters, final microstructures and mechanical properties. Point counting on micrographs observed by optical or scanning electron microscopy is widely used as a phase quantification method, with different phases discriminated according to their morphological characteristics. However, it is difficult to differentiate some phase constituents with similar morphology. In contrast, for EBSD-based phase quantification methods, parameters derived from the orientation information can be used for discrimination in addition to morphological characteristics. In this research, a phase quantification method based on EBSD data in the unit of grains was proposed to identify and quantify the complex phase constitutions of a microalloyed steel subjected to accelerated cooling. The characteristics of polygonal ferrite/quasi-polygonal ferrite, acicular ferrite and bainitic ferrite in terms of grain-averaged misorientation (GAM) angles, aspect ratios, high-angle grain boundary fractions and grain sizes were analysed and used to develop identification criteria for each phase. Comparing the results obtained by this EBSD-based method and point counting, it was found that the EBSD-based method can provide accurate and reliable phase quantification results for microstructures with relatively slow cooling rates. - Highlights: •A phase quantification method based on EBSD data in the unit of grains was proposed. •The critical grain area above which GAM angles are valid parameters was obtained. •Grain size and grain boundary misorientation were used to identify acicular ferrite. •High cooling rates deteriorate the accuracy of this EBSD-based method.
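The grain-wise identification logic described above can be sketched as a simple rule cascade. The thresholds below are hypothetical placeholders; the paper derives its actual criteria from the measured distributions of each parameter.

```python
# Sketch of grain-wise phase identification from EBSD-derived features.
# All threshold values are invented placeholders, not the paper's criteria.

def classify_grain(gam_deg: float, aspect_ratio: float,
                   hagb_fraction: float, size_um: float) -> str:
    if gam_deg < 0.6 and hagb_fraction > 0.5:
        return "polygonal/quasi-polygonal ferrite"  # low internal misorientation
    if aspect_ratio > 2.5 and size_um < 10.0:
        return "acicular ferrite"                   # fine, elongated grains
    return "bainitic ferrite"                       # remaining high-GAM grains

for grain in [(0.4, 1.3, 0.7, 18.0), (1.1, 3.2, 0.3, 6.0), (1.5, 1.8, 0.2, 25.0)]:
    print(classify_grain(*grain))
```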
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oehmigen, Mark, E-mail: mark.oehmigen@uni-due.de
Purpose: This study aims to develop, implement, and evaluate a 16-channel radiofrequency (RF) coil for integrated positron emission tomography/magnetic resonance (PET/MR) imaging of breast cancer. The RF coil is designed for optimized MR imaging performance and PET transparency, and attenuation correction (AC) is applied for accurate PET quantification. Methods: A 16-channel breast array RF coil was designed for integrated PET/MR hybrid imaging of breast cancer lesions. The RF coil features a lightweight rigid design and is positioned with a spacer at a defined position on the patient table of an integrated PET/MR system. Attenuation correction is performed by generating and applying a dedicated 3D CT-based template attenuation map. Repositioning accuracy of the RF coil on the system patient table while using the positioning frame was tested in repeated measurements using MR-visible markers. The MR, PET, and PET/MR imaging performances were systematically evaluated using modular breast phantoms. Attenuation correction of the RF coil was evaluated with difference measurements of the active breast phantoms filled with radiotracer in the PET detector with and without the RF coil in place, serving as a standard of reference measurement. The overall PET/MR imaging performance and PET quantification accuracy of the new 16-channel RF coil and its AC were then evaluated in first clinical examinations on ten patients with local breast cancer. Results: The RF breast array coil provides excellent signal-to-noise ratio and signal homogeneity across the volume of the breast phantoms in MR imaging and visualizes small structures in the phantoms down to 0.4 mm in plane. Difference measurements with PET revealed a global loss, and thus attenuation, of counts by 13% (mean value across the whole phantom volume) when the RF coil is placed in the PET detector. Local attenuation ranging from 0% in the middle of the phantoms up to 24% was detected in the peripheral regions of the phantoms at positions closer to attenuating hardware structures of the RF coil. The position accuracy of the RF coil on the patient table when using the positioning frame was determined to be well below 1 mm for all three spatial dimensions. This ensures perfect position match between the RF coil and its three-dimensional attenuation template during the PET data reconstruction process. When applying the CT-based AC of the RF coil, the global attenuation bias was mostly compensated to ±0.5% across the entire breast imaging volume. The patient study revealed high quality MR, PET, and combined PET/MR imaging of breast cancer. Quantitative activity measurements in all 11 breast cancer lesions of the ten patients resulted in a mean difference in SUVmax of 11.8% (minimum 3.2%; maximum 23.2%) between non-AC images and images with AC of the RF breast coil applied. This supports the quantitative results of the phantom study as well as successful attenuation correction of the RF coil. Conclusions: A 16-channel breast RF coil was designed for optimized MR imaging performance and PET transparency and was successfully integrated with its dedicated attenuation correction template into a whole-body PET/MR system. Systematic PET/MR imaging evaluation with phantoms and an initial study on patients with breast cancer provided excellent MR and PET image quality and accurate PET quantification.
NASA Astrophysics Data System (ADS)
Hou, X. D.; Jennett, N. M.
2017-11-01
Instrumented indentation is a convenient and increasingly rapid method of high-resolution mapping of surface properties. There is, however, significant untapped potential for the quantification of these properties, which can only be realized by solving a number of serious issues that affect the absolute values of mechanical properties obtained from small indentations. The three most pressing at present are the quantification of: the indentation size effect (ISE), residual stress, and pile-up and sink-in, which is itself affected by residual stress and the ISE. Hardness-based indentation mapping is unable to distinguish these effects. We describe a procedure that uses the elastic modulus as an internal reference and combines the information available from an indentation modulus map, a hardness map, and a determination of the ISE coefficient (using self-similar geometry indentation) to correct for the effects of stress, pile-up and the indentation size effect, leaving a quantified map of plastic damage and grain refinement hardening in a surface. This procedure is used to map the residual stress in a cross-section of the machined surface of a previously stress-free metal. The effect of surface grinding is compared to milling and is shown to cause different amounts of work hardening, increase in residual stress, and surface grain size reduction. The potential use of this procedure for mapping coatings in cross-section is discussed.
Dahab, Gamal M; Kheriza, Mohamed M; El-Beltagi, Hussien M; Fouda, Abdel-Motaal M; El-Din, Osama A Sharaf
2004-01-01
The precise quantification of fibrous tissue in liver biopsy sections is extremely important in the classification, diagnosis and grading of chronic liver disease, as well as in evaluating the response to antifibrotic therapy. Because recently described methods of digital image analysis of fibrosis in liver biopsy sections have major flaws, including the use of outdated techniques in image processing, inadequate precision and an inability to detect and quantify perisinusoidal fibrosis, we developed a new technique for computerized image analysis of liver biopsy sections based on Adobe Photoshop software. We prepared an experimental model of liver fibrosis involving treatment of rats with oral CCl4 for 6 weeks. After staining liver sections with Masson's trichrome, a series of computer operations was performed, including (i) reconstitution of seamless widefield images from a number of acquired fields of liver sections; (ii) image size and resolution adjustment; (iii) color correction; (iv) digital selection of a specified color range representing all fibrous tissue in the image; and (v) extraction and calculation. This technique is fully computerized, with no manual interference at any step, and thus could be very reliable for objectively quantifying any pattern of fibrosis in liver biopsy sections and for assessing the response to antifibrotic therapy. It could also be a valuable tool in the precise assessment of antifibrotic therapy in other tissues, regardless of the pattern of tissue or fibrosis.
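Steps (iv)-(v) amount to counting pixels that fall inside a selected colour range. A NumPy sketch of that idea follows; the colour thresholds and the random stand-in image are assumptions, since the original selection is performed interactively in Photoshop.

```python
import numpy as np

# Sketch of colour-range selection and area-fraction calculation for collagen
# (blue in Masson's trichrome). Thresholds are illustrative assumptions.

def collagen_fraction(rgb: np.ndarray) -> float:
    """rgb: H x W x 3 uint8 image; returns collagen / tissue pixel ratio."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    collagen = (b > 120) & (b - r > 30) & (b - g > 20)  # blue-dominant pixels
    tissue = (r + g + b) < 720                          # exclude white background
    return collagen.sum() / max(int(tissue.sum()), 1)

image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)  # stand-in
print(f"collagen area fraction: {collagen_fraction(image):.3f}")
```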
Droplet digital PCR technology promises new applications and research areas.
Manoj, P
2016-01-01
Digital polymerase chain reaction (dPCR) is used to quantify nucleic acids; its applications include the detection and precise quantification of low-level pathogens, rare genetic sequences and copy number variants, the quantification of rare mutations, and relative gene expression. Here, the PCR is performed in a large number of reaction chambers or partitions, and the reaction is carried out in each partition individually. This separation allows a more reliable collection and sensitive measurement of nucleic acids. Results are calculated by counting the partitions containing amplified target sequence (positive droplets) and the partitions in which there is no amplification (negative droplets). The mean number of target sequences is calculated using the Poisson algorithm; the Poisson correction compensates for the presence of more than one copy of the target gene in any droplet. The method provides highly reproducible, accurate and precise information and is less susceptible to inhibitors than qPCR. It has been demonstrated for studying variations in gene sequences, such as copy number variants and point mutations, for distinguishing differences in expression between nearly identical alleles, and for assessing clinically relevant genetic variations, and it is routinely used for clonal amplification of samples for NGS methods. dPCR enables more reliable prediction of tumor status and patient prognosis through absolute quantitation using reference normalizations. Rare mitochondrial DNA deletions associated with a range of diseases and disorders, as well as aging, can be accurately detected with droplet digital PCR.
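The Poisson correction reduces to one line: the mean number of copies per partition, λ, is recovered from the fraction of negative partitions as λ = -ln(N_neg/N_total). A sketch, assuming a droplet volume of about 0.85 nL (a platform-dependent value, not from the text):

```python
import math

# Poisson-corrected droplet digital PCR quantification; the droplet volume
# is an assumed, platform-dependent constant.

def copies_per_ul(n_positive: int, n_total: int, droplet_nl: float = 0.85) -> float:
    p_negative = (n_total - n_positive) / n_total
    lam = -math.log(p_negative)        # mean copies per droplet (Poisson)
    return lam / (droplet_nl * 1e-3)   # copies per microlitre of reaction mix

print(f"{copies_per_ul(4200, 15000):.0f} copies/uL")
```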
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toro, Javier, E-mail: jjtoroca@unal.edu.co; Requena, Ignacio, E-mail: requena@decsai.ugr.es; Duarte, Oscar, E-mail: ogduartev@unal.edu.co
In environmental impact assessment, qualitative methods are used because they are versatile and easy to apply. This methodology is based on evaluating the strength of the impact by grading a series of qualitative attributes that can be manipulated by the evaluator. The results thus obtained are not objective, and all too often impacts are eliminated that should be mitigated with corrective measures. However, qualitative methodology can be improved if the calculation of Impact Importance is based on the characteristics of environmental factors and project activities instead of on indicators assessed by evaluators. In this sense, this paper proposes the inclusion of the vulnerability of environmental factors and the potential environmental impact of project activities. For this purpose, the study described in this paper defined Total Impact Importance and specified a quantification procedure. The results obtained in the case study of oil drilling in Colombia reflect greater objectivity in the evaluation of impacts, as well as a positive correlation between impact values, the environmental characteristics at and near the project location, and the technical characteristics of project activities. -- Highlights: • The concept of vulnerability has been used to calculate Impact Importance in impact assessment. • This paper defined Total Impact Importance and specified a quantification procedure. • The method includes the characteristics of environmental factors and project activities. • The application has shown greater objectivity in the evaluation of impacts. • A better correlation between impact values, the environment and the project has been shown.
Multi-Scale Validation of a Nanodiamond Drug Delivery System and Multi-Scale Engineering Education
ERIC Educational Resources Information Center
Schwalbe, Michelle Kristin
2010-01-01
This dissertation has two primary concerns: (i) evaluating the uncertainty and prediction capabilities of a nanodiamond drug delivery model using Bayesian calibration and bias correction, and (ii) determining conceptual difficulties of multi-scale analysis from an engineering education perspective. A Bayesian uncertainty quantification scheme…
Is the Image Quality of I-124-PET Impaired by an Automatic Correction of Prompt Gammas?
Preylowski, Veronika; Schlögl, Susanne; Schoenahl, Frédéric; Jörg, Gerhard; Samnick, Samuel; Buck, Andreas K.; Lassmann, Michael
2013-01-01
Objectives: The aim of this study is to evaluate the quality of I-124 PET images with and without prompt gamma compensation (PGC) by comparing the recovery coefficients (RC), the signal-to-noise ratios (SNR) and the contrast to F-18 and Ga-68. Furthermore, the influence of the PGC on quantification and image quality is evaluated. Methods: Image quality was measured using the NEMA NU2-2001 PET/SPECT phantom, containing 6 spheres with diameters between 10 mm and 37 mm placed in water with different levels of background activity. Each sphere was filled with the same activity concentration, measured by an independently cross-calibrated dose calibrator. The “hot” sources were acquired with a full 3D PET/CT (Biograph mCT®, Siemens Medical USA). Acquisition times were 2 min for F-18 and Ga-68, and 10 min for I-124. For reconstruction an OSEM algorithm was applied. For I-124 the images were reconstructed with and without PGC. For the calculation of the RCs the activity concentrations in each sphere were determined; in addition, the influence of the background correction was studied. Results: The RCs of Ga-68 are the smallest (79%). I-124 reaches RCs (87% with PGC, 84% without PGC) similar to F-18 (84%), showing that the quantification of I-124 images is similar to F-18 and slightly better than Ga-68. With background activity, the contrast of the I-124 PGC images is similar to Ga-68 and F-18 scans. There was lower background activity in the I-124 images without PGC, which probably originates from an overcorrection of the scatter contribution. Consequently, the contrast without PGC was much higher than with PGC. As a consequence, PGC should be used for I-124. Conclusions: For I-124 there is only a slight influence of the PGC on quantification. However, there are considerable differences with respect to I-124 image quality. PMID:24014105
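For reference, the figures of merit compared above reduce to simple ratios of ROI statistics; the example values below are made up.

```python
# Recovery coefficient and contrast from ROI statistics (illustrative numbers).

def recovery_coefficient(measured_kbq_ml: float, true_kbq_ml: float) -> float:
    return measured_kbq_ml / true_kbq_ml

def contrast(sphere_mean: float, background_mean: float) -> float:
    return (sphere_mean - background_mean) / background_mean

print(recovery_coefficient(18.3, 21.0))  # hypothetical sphere measurement
print(contrast(18.3, 2.1))               # hypothetical background level
```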
Taverniers, Isabel; Van Bockstaele, Erik; De Loose, Marc
2004-03-01
Analytical real-time PCR technology is a powerful tool for implementation of the GMO labeling regulations enforced in the EU. The quality of analytical measurement data obtained by quantitative real-time PCR depends on the correct use of calibrator and reference materials (RMs). For GMO methods of analysis, the choice of appropriate RMs is currently under debate. So far, genomic DNA solutions from certified reference materials (CRMs) are most often used as calibrators for GMO quantification by means of real-time PCR. However, due to some intrinsic features of these CRMs, errors may be expected in the estimations of DNA sequence quantities. In this paper, two new real-time PCR methods are presented for Roundup Ready soybean, in which two types of plasmid DNA fragments are used as calibrators. Single-target plasmids (STPs) diluted in a background of genomic DNA were used in the first method. Multiple-target plasmids (MTPs) containing both sequences in one molecule were used as calibrators for the second method. Both methods simultaneously detect a promoter 35S sequence as GMO-specific target and a lectin gene sequence as endogenous reference target in a duplex PCR. For the estimation of relative GMO percentages both "delta C(T)" and "standard curve" approaches are tested. Delta C(T) methods are based on direct comparison of measured C(T) values of both the GMO-specific target and the endogenous target. Standard curve methods measure absolute amounts of target copies or haploid genome equivalents. A duplex delta C(T) method with STP calibrators performed at least as well as a similar method with genomic DNA calibrators from commercial CRMs. Besides this, high quality results were obtained with a standard curve method using MTP calibrators. This paper demonstrates that plasmid DNA molecules containing either one or multiple target sequences form perfect alternative calibrators for GMO quantification and are especially suitable for duplex PCR reactions.
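As a generic sketch of the delta C(T) idea tested above (not the paper's exact calibration): if both targets amplify with an assumed efficiency of 2.0, a sample's GMO percentage follows from its ΔCt relative to a calibrator of known GMO content. All numbers are illustrative.

```python
# Generic delta-Ct GMO estimate; efficiency and Ct values are assumptions.

def gmo_percent(dct_sample: float, dct_calibrator: float,
                calibrator_percent: float, efficiency: float = 2.0) -> float:
    """dct = Ct(35S GMO target) - Ct(lectin endogenous reference)."""
    return calibrator_percent * efficiency ** (dct_calibrator - dct_sample)

# Example: 1% Roundup Ready soy calibrator (dCt 6.8), sample dCt 5.2.
print(f"{gmo_percent(5.2, 6.8, 1.0):.2f} % GMO")
```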
Understanding the atmospheric measurement and behavior of perfluorooctanoic acid.
Webster, Eva M; Ellis, David A
2012-09-01
The recently reported quantification of the atmospheric sampling artifact for perfluorooctanoic acid (PFOA) was applied to existing gas and particle concentration measurements. Specifically, gas phase concentrations were increased by a factor of 3.5 and particle-bound concentrations by a factor of 0.1. The correlation constants in two particle-gas partition coefficient (K(QA)) estimation equations were determined for multiple studies with and without correcting for the sampling artifact. Correction for the sampling artifact gave correlation constants with improved agreement to those reported for other neutral organic contaminants, thus supporting the application of the suggested correction factors for perfluorinated carboxylic acids. Applying the corrected correlation constant to a recent multimedia modeling study improved model agreement with corrected, reported, atmospheric concentrations. This work confirms that there is sufficient partitioning to the gas phase to support the long-range atmospheric transport of PFOA. Copyright © 2012 SETAC.
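The correction itself is a pair of multiplicative factors, after which the particle-gas partition coefficient can be recomputed; the example concentrations and the TSP value below are illustrative assumptions.

```python
# Artifact correction for PFOA air measurements: gas phase scaled by 3.5,
# particle phase by 0.1, then K_QA = (particle/TSP) / gas. Numbers are made up.

def corrected_kqa(gas_pg_m3: float, particle_pg_m3: float,
                  tsp_ug_m3: float) -> float:
    gas = 3.5 * gas_pg_m3             # corrected gas-phase concentration
    particle = 0.1 * particle_pg_m3   # corrected particle-bound concentration
    return (particle / tsp_ug_m3) / gas  # K_QA in m3/ug

print(f"K_QA = {corrected_kqa(2.1, 0.6, 35.0):.2e} m3/ug")
```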
Quantification of liver fat in the presence of iron overload.
Horng, Debra E; Hernando, Diego; Reeder, Scott B
2017-02-01
To evaluate the accuracy of R2* models (R2* = 1/T2*) for chemical shift-encoded magnetic resonance imaging (CSE-MRI)-based proton density fat-fraction (PDFF) quantification in patients with fatty liver and iron overload, using MR spectroscopy (MRS) as the reference standard. Two Monte Carlo simulations were implemented to compare the root-mean-squared-error (RMSE) performance of single-R2* and dual-R2* correction in a theoretical liver environment with high iron. Fatty liver was defined as hepatic PDFF >5.6% based on MRS; only subjects with fatty liver were considered for analyses involving fat. From a group of 40 patients with known/suspected iron overload, nine patients were identified at 1.5T, and 13 at 3.0T, with fatty liver. MRS linewidth measurements were used to estimate R2* values for the water and fat peaks. PDFF was measured from CSE-MRI data using single-R2* and dual-R2* correction with magnitude and complex fitting. Spectroscopy-based R2* analysis demonstrated that the R2* of water (R2*W) and fat (R2*F) remain close in value, both increasing as iron overload increases: linear regression between R2*W and R2*F resulted in slope = 0.95 [0.79-1.12] (95% limits of agreement) at 1.5T and slope = 0.76 [0.49-1.03] at 3.0T. MRI-PDFF using dual-R2* correction had severe artifacts. MRI-PDFF using single-R2* correction had good agreement with MRS-PDFF: Bland-Altman analysis resulted in -0.7% (bias) ± 2.9% (95% limits of agreement) for magnitude-fit and -1.3% ± 4.3% for complex-fit at 1.5T, and -1.5% ± 8.4% for magnitude-fit and -2.2% ± 9.6% for complex-fit at 3.0T. Single-R2* modeling enables accurate PDFF quantification, even in patients with iron overload. J. Magn. Reson. Imaging 2017;45:428-439. © 2016 International Society for Magnetic Resonance in Medicine.
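A minimal sketch of single-R2* magnitude fitting, assuming a single fat peak at the 1.5 T fat-water shift of about -217 Hz; real CSE-MRI implementations, including those compared above, use a multi-peak spectral model of fat and careful initialization.

```python
import numpy as np
from scipy.optimize import curve_fit

DF_HZ = -217.0  # assumed main fat-water chemical shift at 1.5 T

def magnitude_model(te, water, fat, r2s):
    # Water and fat share one decay rate: the single-R2* model.
    return np.abs(water + fat * np.exp(2j * np.pi * DF_HZ * te)) * np.exp(-r2s * te)

te = np.array([1.2, 2.4, 3.6, 4.8, 6.0, 7.2]) * 1e-3   # echo times, s
truth = (80.0, 20.0, 120.0)                            # W, F, R2* (1/s)
signal = magnitude_model(te, *truth) + np.random.normal(0, 0.3, te.size)

(w, f, _), _ = curve_fit(magnitude_model, te, signal, p0=(50.0, 50.0, 50.0))
print(f"PDFF estimate: {100 * f / (w + f):.1f} %")  # truth: 20 %
```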
Uncertainty quantification in volumetric Particle Image Velocimetry
NASA Astrophysics Data System (ADS)
Bhattacharya, Sayantan; Charonko, John; Vlachos, Pavlos
2016-11-01
Particle Image Velocimetry (PIV) uncertainty quantification is challenging due to coupled sources of elemental uncertainty and complex data reduction procedures in the measurement chain. Recent developments in this field have led to uncertainty estimation methods for planar PIV; however, no framework exists for three-dimensional volumetric PIV. In volumetric PIV, the measurement uncertainty is a function of the reconstructed three-dimensional particle location, which in turn is very sensitive to the accuracy of the calibration mapping function. Furthermore, the iterative correction to the camera mapping function using triangulated particle locations in space (volumetric self-calibration) has its own associated uncertainty due to image noise and ghost particle reconstructions. Here we first quantify the uncertainty in the triangulated particle position, which is a function of particle detection and mapping function uncertainty. The location uncertainty is then combined with the three-dimensional cross-correlation uncertainty, which is estimated as an extension of the 2D PIV uncertainty framework. Finally, the overall measurement uncertainty is quantified using an uncertainty propagation equation. The framework is tested with both simulated and experimental cases. For the simulated cases, the variation of the estimated uncertainty with the elemental volumetric PIV error sources is also evaluated. The results show reasonable prediction of standard uncertainty with good coverage.
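As an illustration of the final propagation step, assuming the position and cross-correlation error contributions are independent so they combine in quadrature at the displacement level; the sensitivity coefficient and all numbers are placeholders, not the framework's actual terms.

```python
import math

# Toy propagation of volumetric PIV uncertainty: particle-position error is
# mapped to displacement error through an assumed sensitivity coefficient and
# combined in quadrature with the cross-correlation uncertainty.

def velocity_uncertainty(sigma_pos_px: float, pos_sensitivity: float,
                         sigma_corr_px: float, dt_s: float,
                         px_to_mm: float) -> float:
    sigma_disp_px = math.hypot(pos_sensitivity * sigma_pos_px, sigma_corr_px)
    return sigma_disp_px * px_to_mm / dt_s  # velocity uncertainty, mm/s

print(f"{velocity_uncertainty(0.4, 0.8, 0.12, 1e-3, 0.05):.1f} mm/s")
```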
Li, Belinda S Y; Wang, Hao; Gonen, Oded
2003-10-01
In localized brain proton MR spectroscopy ((1)H-MRS), metabolite levels are often expressed as ratios rather than as absolute concentrations. Frequently, the denominator is creatine [Cr], whose level is explicitly assumed to be stable in normal as well as in many pathologic states. The rationale is that ratios self-correct for imager and localization method differences, gain instabilities, regional susceptibility variations and partial volume effects. The implicit assumption is that these benefits are worth their cost: propagation of the individual variation of each of the ratio's components. To test this hypothesis, absolute levels of N-acetylaspartate [NAA], choline [Cho] and [Cr] were quantified in various regions of the brains of 8 volunteers, using 3-dimensional (3D) (1)H-MRS at 1.5 T. The results show that in over 50% of the approximately 2000 voxels examined, [NAA]/[Cr] and [Cho]/[Cr] exhibited higher coefficients of variation (CV) than [NAA] and [Cho] individually. Furthermore, in approximately 33% of these voxels, the ratios' CVs exceeded even the combined constituents' CVs. Consequently, basing metabolite quantification on ratios and assuming stable [Cr] introduces more variability into (1)H-MRS than it prevents. Therefore, its cost exceeds the benefit.
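The cost referred to above follows from first-order error propagation: for an independent numerator and denominator, relative variances add, so a ratio can never be less variable than its noisier constituent. A small sketch with assumed CVs:

```python
import math

# First-order CV of a ratio of independent quantities: CVs add in quadrature.

def ratio_cv(cv_numerator: float, cv_denominator: float) -> float:
    return math.hypot(cv_numerator, cv_denominator)

cv_naa, cv_cr = 0.08, 0.10  # assumed per-voxel CVs, for illustration only
print(f"CV([NAA]/[Cr]) ~ {ratio_cv(cv_naa, cv_cr):.3f} vs CV([NAA]) = {cv_naa}")
```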
Zhou, Yun; Sojkova, Jitka; Resnick, Susan M.; Wong, Dean F.
2012-01-01
Both the standardized uptake value ratio (SUVR) and the Logan plot result in biased distribution volume ratios (DVR) in ligand-receptor dynamic PET studies. The objective of this study is to use a recently developed relative equilibrium-based graphical plot (RE plot) method to improve and simplify the two commonly used methods for quantification of [11C]PiB PET. Methods: The overestimation of DVR in SUVR was analyzed theoretically using the Logan and the RE plots. A bias-corrected SUVR (bcSUVR) was derived from the RE plot. Seventy-eight [11C]PiB dynamic PET scans (66 from controls and 12 from mildly cognitively impaired (MCI) participants in the Baltimore Longitudinal Study of Aging (BLSA)) were acquired over 90 minutes. Regions of interest (ROIs) were defined on coregistered MRIs. Both the ROI and pixelwise time activity curves (TACs) were used to evaluate the estimates of DVR. DVRs obtained using the Logan plot applied to ROI TACs were used as a reference for comparison of DVR estimates. Results: Results from the theoretical analysis were confirmed by the human studies. ROI estimates from the RE plot and the bcSUVR were nearly identical to those from the Logan plot with ROI TACs. In contrast, ROI estimates from DVR images in frontal, temporal, parietal and cingulate regions and the striatum were underestimated by the Logan plot (controls 4-12%; MCI 9-16%) and overestimated by the SUVR (controls 8-16%; MCI 16-24%). This bias was higher in the MCI group than in controls (p < 0.01) but was not present when data were analyzed using either the RE plot or the bcSUVR. Conclusion: The RE plot improves pixel-wise quantification of [11C]PiB dynamic PET compared to the conventional Logan plot. The bcSUVR results in lower bias and higher consistency of DVR estimates compared to SUVR. The RE plot and the bcSUVR are practical quantitative approaches that improve the analysis of [11C]PiB studies. PMID:22414634
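For orientation, the reference DVR estimate used above (a Logan plot on ROI TACs) can be sketched in a few lines. The equilibration time t*, the neglect of the intercept's k2'-dependent term, and the synthetic TACs are all simplifying assumptions.

```python
import numpy as np

# Reference-region Logan plot: past an assumed t*, the slope of
# integral(target)/target vs integral(reference)/target estimates DVR.

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t), same length as y."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def logan_dvr(t_min, c_target, c_ref, t_star_min=30.0):
    x = cumtrapz(c_ref, t_min) / c_target
    y = cumtrapz(c_target, t_min) / c_target
    late = t_min >= t_star_min          # keep only the linear late segment
    slope, _ = np.polyfit(x[late], y[late], 1)
    return slope

t = np.linspace(1.0, 90.0, 30)                      # frame mid-times, min
c_ref = t * np.exp(-t / 60.0)                       # synthetic reference TAC
c_tgt = 1.5 * c_ref + 0.3 * t * np.exp(-t / 80.0)   # synthetic target TAC
print(f"DVR ~ {logan_dvr(t, c_tgt, c_ref):.2f}")
```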
Takeno, Shinya; Bamba, Takeshi; Nakazawa, Yoshihisa; Fukusaki, Eiichiro; Okazawa, Atsushi; Kobayashi, Akio
2008-04-01
Commercial development of trans-1,4-polyisoprene from Eucommia ulmoides Oliver (EU-rubber) requires specific knowledge on selection of high-rubber-content lines and establishment of agronomic cultivation methods for achieving maximum EU-rubber yield. The development can be facilitated by high-throughput and highly sensitive analytical techniques for EU-rubber extraction and quantification. In this paper, we described an efficient EU-rubber extraction method, and validated that the accuracy was equivalent to that of the conventional Soxhlet extraction method. We also described a highly sensitive quantification method for EU-rubber by Fourier transform infrared spectroscopy (FT-IR) and pyrolysis-gas chromatography/mass spectrometry (PyGC/MS). We successfully applied the extraction/quantification method for study of seasonal changes in EU-rubber content and molecular weight distribution.
Selective Distance-Based K+ Quantification on Paper-Based Microfluidics.
Gerold, Chase T; Bakker, Eric; Henry, Charles S
2018-04-03
In this study, paper-based microfluidic devices (μPADs) capable of K+ quantification in aqueous samples, as well as in human serum, using both colorimetric and distance-based methods are described. A lipophilic phase containing potassium ionophore I (valinomycin) was utilized to achieve highly selective quantification of K+ in the presence of Na+, Li+, and Mg2+ ions. Successful addition of a suspended lipophilic phase to a wax-printed paper-based device is described and offers a solution to current approaches that rely on organic solvents, which damage wax barriers. The approach provides an avenue for future alkali/alkaline quantification utilizing μPADs. Colorimetric spot tests allowed for K+ quantification from 0.1-5.0 mM using only 3.00 μL of sample solution. Selective distance-based quantification required small sample volumes (6.00 μL) and gave responses sensitive enough to distinguish between 1.0 and 2.5 mM of sample K+. μPADs using distance-based methods were also capable of differentiating between 4.3 and 6.9 mM K+ in human serum samples. Distance-based methods required no digital analysis, electronic hardware, or pumps; any steps required for quantification could be carried out with the naked eye.
Jiménez-Carvelo, Ana M; González-Casado, Antonio; Cuadros-Rodríguez, Luis
2017-03-01
A new analytical method for the quantification of olive oil and palm oil in blends with other vegetable edible oils (canola, safflower, corn, peanut, seeds, grapeseed, linseed, sesame and soybean) was developed using normal-phase liquid chromatography and chemometric tools. The procedure for obtaining the chromatographic fingerprint of the methyl-transesterified fraction of each blend is described. The multivariate quantification methods used were Partial Least Squares Regression (PLS-R) and Support Vector Regression (SVR). The quantification results were evaluated by several parameters, such as the Root Mean Square Error of Validation (RMSEV), Mean Absolute Error of Validation (MAEV) and Median Absolute Error of Validation (MdAEV). Notably, the chromatographic analysis in the proposed method takes only eight minutes; the results show the potential of the method and allow quantification of mixtures of olive oil and palm oil with other vegetable oils. Copyright © 2016 Elsevier B.V. All rights reserved.
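A sketch of the fingerprint-to-composition step with scikit-learn's PLS implementation; the array shapes and random data are placeholders for real chromatographic fingerprints and known blend compositions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# PLS-R on chromatographic fingerprints: rows are blends, columns are points
# of the fingerprint, targets are known olive-oil percentages (random here).

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))        # 40 calibration blends x 500 points
y = rng.uniform(0.0, 100.0, size=40)  # % olive oil in each blend

pls = PLSRegression(n_components=8)
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
rmsev = float(np.sqrt(np.mean((y - y_cv) ** 2)))  # RMSEV, as reported above
print(f"RMSEV: {rmsev:.1f} % olive oil")
```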
Lu, Tzong-Shi; Yiao, Szu-Yu; Lim, Kenneth; Jensen, Roderick V; Hsiao, Li-Li
2010-07-01
The identification of differences in protein expression resulting from methodical variations is an essential component of the interpretation of true, biologically significant results. We used the Lowry and Bradford methods, the two most commonly used methods for protein quantification, to assess whether differential protein expression is a result of true biological variation or of methodical variation. MATERIALS & METHODS: Differential protein expression patterns were assessed by western blot following protein quantification by the Lowry and Bradford methods. We observed significant variations in protein concentrations following assessment with the Lowry versus the Bradford method, using identical samples. Greater variations in protein concentration readings were observed over time and in samples with higher concentrations with the Bradford method. Identical samples quantified using the two methods yielded significantly different expression patterns on western blot. We show for the first time that the methodical variations observed in these protein assay techniques can potentially translate into differential protein expression patterns that can be falsely taken to be biologically significant. Our study therefore highlights the pivotal need to carefully consider methodical approaches to protein quantification in techniques that report quantitative differences.
Yan, Xiaowen; Yang, Limin; Wang, Qiuquan
2013-07-01
Much progress has been made in identification of the proteins in proteomes, and quantification of these proteins has attracted much interest. In addition to popular tandem mass spectrometric methods based on soft ionization, inductively coupled plasma mass spectrometry (ICPMS), a typical example of mass spectrometry based on hard ionization, usually used for analysis of elements, has unique advantages in absolute quantification of proteins by determination of an element with a definite stoichiometry in a protein or attached to the protein. In this Trends article, we briefly describe state-of-the-art ICPMS-based methods for quantification of proteins, emphasizing protein-labeling and element-tagging strategies developed on the basis of chemically selective reactions and/or biospecific interactions. Recent progress from protein to cell quantification by use of ICPMS is also discussed, and the possibilities and challenges of ICPMS-based protein quantification for universal, selective, or targeted quantification of proteins and cells in a biological sample are also discussed critically. We believe ICPMS-based protein quantification will become ever more important in targeted quantitative proteomics and bioanalysis in the near future.
Middelburg, T A; Hoy, C L; Neumann, H A M; Amelink, A; Robinson, D J
2015-07-01
Fluorescence measurements in the skin are strongly affected by absorption and scattering, but existing methods to correct for this are not applicable to superficial skin measurements. The first use of multiple-diameter single fiber reflectance (MDSFR) and single fiber fluorescence (SFF) spectroscopy in human skin was investigated. MDSFR spectroscopy allows quantification of the full optical properties of superficial skin (μa, μs' and γ), which can then be used to retrieve the corrected, intrinsic, fluorescence of a fluorophore, Qμa,x(f). Our goal was to investigate the importance of such correction for individual patients. We studied this in 22 patients undergoing photodynamic therapy (PDT) for actinic keratosis. The magnitude of the fluorescence correction was around a factor of 4 (for both autofluorescence and protoporphyrin IX). Moreover, it was variable between patients, and also within patients over the course of fractionated aminolevulinic acid PDT (range 2.7-7.5). Patients also varied in the amount of protoporphyrin IX synthesis, photobleaching percentages and resynthesis (>100× difference between the lowest and highest PpIX synthesis). The autofluorescence was lower in actinic keratosis than in contralateral normal skin (0.0032 versus 0.0052; P<0.0005). Our results clearly demonstrate the importance of correcting the measured fluorescence for optical properties, because these vary considerably between individual patients and also during PDT. Protoporphyrin IX synthesis and photobleaching kinetics allow monitoring of clinical PDT, which facilitates individual-based PDT dosing and improvement of clinical treatment protocols. Furthermore, the skin autofluorescence can be relevant for diagnostic use in the skin, but it may also be interesting because of its association with several internal diseases. Copyright © 2015 Japanese Society for Investigative Dermatology. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Spengler, D.; Kuester, T.; Frick, A.; Scheffler, D.; Kaufmann, H.
2013-10-01
Surface soil moisture content is one of the key variables used in many applications, especially in hydrology, meteorology and agriculture. Hyperspectral remote sensing provides effective methodologies for mapping soil moisture content over a broad area with different indices such as the NSMI [1,2] and the SMGM [3]. Both indices can achieve a high accuracy for soil samples unaffected by vegetation, but their accuracy is limited in the presence of vegetation, since increasing vegetation cover leads to non-linear variations of the indices. In this study, a new methodology for correcting the influence of vegetation on moisture indices is presented, consisting of several processing steps. First, hyperspectral reflectance data are classified in terms of crop type and growth stage. Second, based on these parameters, 3D plant models from a database are used to simulate typical canopy reflectance, considering variations in the canopy structure (e.g. plant density and distribution) and the soil moisture content for the actual solar illumination and sensor viewing angles. Third, a vegetation correction function is developed, based on the calculated soil moisture indices and vegetation indices of the simulated canopy reflectance data. Finally, this function is applied to hyperspectral image data. The method is tested on two hyperspectral image data sets of the AISA DUAL at the Fichtwald test site in Germany. The results show a significant improvement compared to the use of the NSMI index alone. Up to a vegetation cover of 75%, the correction function minimises the influence of vegetation cover significantly; for denser vegetation, the method cannot predict the soil moisture content with adequate quality. In summary, applying the method to locations sparsely to moderately overgrown with vegetation enables a significant improvement in the quantification of soil moisture and thus greatly expands the scope of the NSMI.
Quantification of seasonal biomass effects on cosmic-ray soil water content determination
NASA Astrophysics Data System (ADS)
Baatz, R.; Bogena, H. R.; Hendricks Franssen, H.; Huisman, J. A.; Qu, W.; Montzka, C.; Korres, W.; Vereecken, H.
2013-12-01
The novel cosmic-ray soil moisture probes (CRPs) measure neutron flux density close to the Earth's surface. High-energy cosmic rays penetrate the Earth's atmosphere and become moderated by terrestrial nuclei. Hydrogen is the most effective neutron moderator of all the chemical elements; therefore, the neutron flux density measured with a CRP at the Earth's surface correlates inversely with the hydrogen content in the CRP's footprint. A major contributor to the amount of hydrogen in the sensor's footprint is soil water content. The ability to measure changes in soil water content within the CRP footprint at a larger-than-point scale (~30 ha) and at high temporal resolution (hourly) makes these sensors an appealing measurement instrument for hydrologic modeling purposes. Recent developments focus on the identification and quantification of the major uncertainties inherent in CRP soil moisture measurements. In this study, a cosmic-ray soil moisture network for the Rur catchment in Western Germany is presented. It is proposed to correct the measured neutron flux density for above-ground biomass, yielding vegetation-corrected soil water content from the cosmic-ray measurements. The correction for above-ground water equivalents aims to remove biases in soil water content measurements at sites with high seasonal vegetation dynamics, such as agricultural fields. Above-ground biomass is estimated as a function of indices like NDVI and NDWI using regression equations. The regression equations were obtained with the help of literature information, ground-based control measurements, a crop growth model and globally available data from the Moderate Resolution Imaging Spectrometer (MODIS). The results show that above-ground biomass could be estimated well during the first half of the year. Seasonal changes in vegetation water content yielded biases in soil water content of ~0.05 cm3/cm3 that could be corrected for with the vegetation correction. The vegetation correction has particularly high potential when applied at long-term cosmic-ray monitoring sites and with the cosmic-ray rover.
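A sketch of where such a correction enters the standard CRP calibration function. The shape and coefficients (a0 = 0.0808, a1 = 0.372, a2 = 0.115) follow the commonly cited Desilets et al. (2010) form; the linear biomass scaling and its coefficient are placeholders for the regression actually developed in the study.

```python
# Desilets-type CRP calibration with a placeholder linear biomass correction
# of the neutron counts (beta is an invented coefficient, not the study's).

A0, A1, A2 = 0.0808, 0.372, 0.115

def soil_moisture(n_counts: float, n0: float,
                  biomass_kg_m2: float = 0.0, beta: float = 0.01) -> float:
    n_corrected = n_counts * (1.0 + beta * biomass_kg_m2)  # undo moderation
    return A0 / (n_corrected / n0 - A1) - A2               # volumetric SWC

print(f"{soil_moisture(1450.0, 2750.0, biomass_kg_m2=3.0):.3f} cm3/cm3")
```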
Rauniyar, Navin
2015-01-01
The parallel reaction monitoring (PRM) assay has emerged as an alternative method of targeted quantification. The PRM assay is performed in a high resolution and high mass accuracy mode on a mass spectrometer. This review presents the features that make PRM a highly specific and selective method for targeted quantification using quadrupole-Orbitrap hybrid instruments. In addition, this review discusses the label-based and label-free methods of quantification that can be performed with the targeted approach. PMID:26633379
NASA Astrophysics Data System (ADS)
Restaino, Stephen M.; White, Ian M.
2017-03-01
Surface enhanced Raman spectroscopy (SERS) provides significant improvements over conventional methods for single- and multi-analyte quantification. Specifically, the spectroscopic fingerprint provided by Raman scattering allows for a direct multiplexing potential far beyond that of fluorescence and colorimetry. Additionally, SERS has a comparatively low financial and spatial footprint compared with common fluorescence-based systems. Despite these advantages, SERS has remained largely an academic pursuit. In the field of biosensing, techniques to apply SERS to molecular diagnostics are constantly under development, but most often assay protocols are redesigned around the use of SERS as a quantification method, which ultimately complicates existing protocols. Our group has sought to rethink common SERS methodologies in order to produce translational technologies capable of allowing SERS to compete in the evolving, yet often inflexible, biosensing field. This work will discuss the development of two techniques for quantification of microRNA, a promising biomarker for homeostatic and disease conditions ranging from cancer to HIV. First, an inkjet-printed paper SERS sensor has been developed to allow on-demand production of a customizable and multiplexable single-step lateral flow assay for miRNA quantification. Second, as miRNAs commonly exist at relatively low concentrations, amplification methods (e.g. PCR) are required to facilitate quantification. This work presents a novel miRNA assay alongside a novel technique for quantification of nuclease-driven nucleic acid amplification strategies that will allow SERS to be used directly with common amplification strategies for quantification of miRNA and other nucleic acid biomarkers.
MEDOKADS - A 20 Year's Daily AVHRR Data Series for Analysis of Land Surface Properties
NASA Astrophysics Data System (ADS)
Koslowsky, D.; Billing, H.; Bolle, H.-J.
2009-04-01
To derive primary data products, like spectral reflectances or temperatures, from raw AVHRR data, it is necessary to correct for sensor degradation and changing hardware specifications, to re-sample the data into a grid of equal pixel size, and to perform geographical registration, cloud screening and normalization for illumination and observation geometry. A data set that resulted from the application of these corrections is the top-of-the-atmosphere Mediterranean Extended One-Km AVHRR Data Set (MEDOKADS), which now covers a period of 20 years. To study land surface processes, the obtained spectral data have to be combined, radiometric corrections for atmospheric effects and, in the case of temperature measurements, emissivity corrections have to be applied, and the variable overflight times have to be accounted for. Higher-level products, like vegetation indices, surface albedo and surface energy fluxes, are then generated by applying complex evaluation schemes. The ultimate goal is to provide the user community with problem-related information. This includes the quantification of changes and the determination of trends. Methods and tools to reach this goal, as well as their limitations, are discussed. To validate the data, extended field measurements have been performed, in which the scaling between local ground measurements and large-scale satellite data plays a major role. A major problem remains the application of atmospheric corrections, because the variable aerosol content is not well known. Supervision of the quality of the derived information leads to the concept of anchor stations, at which surface and atmospheric properties should be measured permanently.
Digital histology quantification of intra-hepatic fat in patients undergoing liver resection.
Parkin, E; O'Reilly, D A; Plumb, A A; Manoharan, P; Rao, M; Coe, P; Frystyk, J; Ammori, B; de Liguori Carino, N; Deshpande, R; Sherlock, D J; Renehan, A G
2015-08-01
High intra-hepatic fat (IHF) content is associated with insulin resistance, visceral adiposity, and increased morbidity and mortality following liver resection. However, in clinical practice, IHF is assessed indirectly by pre-operative imaging [for example, chemical-shift magnetic resonance (CS-MR)]. We used the opportunity in patients undergoing liver resection to quantify IHF by digital histology (D-IHF) and relate this to CT-derived anthropometrics, insulin-related serum biomarkers, and IHF estimated by CS-MR. A reproducible method for quantification of D-IHF using 7 histology slides was developed (inter- and intra-rater concordance: 0.97 and 0.98). In 35 patients undergoing resection for colorectal cancer metastases, we measured: CT-derived subcutaneous and visceral adipose tissue volumes, Homeostasis Model Assessment of Insulin Resistance (HOMA-IR), and fasting serum adiponectin, leptin and fetuin-A. We estimated relative IHF using CS-MR and developed prediction models for IHF using a factor-clustered approach. The multivariate linear regression models showed that D-IHF was best predicted by HOMA-IR (beta coefficient per doubling: 2.410, 95% CI: 1.093, 5.313) and adiponectin (beta per doubling: 0.197, 95% CI: 0.058, 0.667), but not by anthropometrics. MR-derived IHF correlated with D-IHF (rho: 0.626; p = 0.0001), but levels of agreement deviated for upper-range values (CS-MR over-estimated IHF: regression versus zero, p = 0.009); this could be adjusted for by a correction factor (CF: 0.7816). Our findings show IHF is associated with measures of insulin resistance, but not with measures of visceral adiposity. CS-MR over-estimated IHF in the upper range. Larger studies are indicated to test whether a correction of imaging-derived IHF estimates is valid. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Woodka, Marc D.; Brunschwig, Bruce S.; Lewis, Nathan S.
2008-03-01
Linear sensor arrays made from small molecule/carbon black composite chemiresistors placed in a low headspace volume chamber, with vapor delivered at low flow rates, allowed for the extraction of chemical information that significantly increased the ability of the sensor arrays to identify vapor mixture components and to quantify their concentrations. Each sensor sorbed vapors from the gas stream to various degrees. Similar to gas chromatography, species having high vapor pressures were separated from species having low vapor pressures. Instead of producing typical sensor responses representative of thermodynamic equilibrium between each sensor and an unchanging vapor phase, sensor responses varied depending on the position of the sensor in the chamber and the time from the beginning of the analyte exposure. This spatiotemporal (ST) array response provided information that was a function of time as well as of the position of the sensor in the chamber. The responses to pure analytes and to multi-component analyte mixtures comprised of hexane, decane, ethyl acetate, chlorobenzene, ethanol, and/or butanol, were recorded along each of the sensor arrays. Use of a non-negative least squares (NNLS) method for analysis of the ST data enabled the correct identification and quantification of the composition of 2-, 3-, 4- and 5-component mixtures from arrays using only 4 chemically different sorbent films and sensor training on pure vapors only. In contrast, when traditional time- and position-independent sensor response information was used, significant errors in mixture identification were observed. The ability to correctly identify and quantify constituent components of vapor mixtures through the use of such ST information significantly expands the capabilities of such broadly cross-reactive arrays of sensors.
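To make the NNLS analysis step concrete, the sketch below unmixes a synthetic spatiotemporal (ST) response vector against a pure-vapor training matrix. The matrix dimensions, noise level, and mixture composition are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of ST unmixing: responses of sensors at several
# positions/times are stacked into one vector, and the mixture composition is
# recovered by non-negative least squares (NNLS) using pure-vapor training data.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_features = 16   # e.g., 4 sensors x 4 sampling times along the chamber (assumed)
n_analytes = 6    # hexane, decane, ethyl acetate, chlorobenzene, ethanol, butanol

R = rng.random((n_features, n_analytes))            # training: pure-vapor ST responses
true_c = np.array([0.5, 0.0, 0.3, 0.0, 0.2, 0.0])   # a 3-component mixture
y = R @ true_c + 0.01 * rng.standard_normal(n_features)  # measured ST response

c_hat, residual = nnls(R, y)   # non-negativity keeps concentration estimates physical
print(np.round(c_hat, 2))      # components with c_hat > 0 are "identified"
```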
Zhang, Li; Wu, Yuhua; Wu, Gang; Cao, Yinglong; Lu, Changming
2014-10-01
Plasmid calibrators are increasingly applied for polymerase chain reaction (PCR) analysis of genetically modified organisms (GMOs). To evaluate the commutability between plasmid DNA (pDNA) and genomic DNA (gDNA) as calibrators, a plasmid molecule, pBSTopas, was constructed, harboring a Topas 19/2 event-specific sequence and a partial sequence of the rapeseed reference gene CruA. Assays of the pDNA showed limits of detection (five copies for both Topas 19/2 and CruA) and quantification (40 copies for Topas 19/2 and 20 for CruA) similar to those for the gDNA. Comparisons of plasmid and genomic standard curves indicated that the slopes, intercepts, and PCR efficiencies for pBSTopas were significantly different from those for CRM Topas 19/2 gDNA in quantitative analysis of GMOs. Three correction methods were used to calibrate the quantitative analysis of control samples using pDNA as the calibrator: model a, or coefficient value a (Cva); model b, or coefficient value b (Cvb); and the novel model c, or coefficient formula (Cf). Cva and Cvb gave similar estimated values for the control samples, and the quantitative bias of the low-concentration sample exceeded the acceptable range of ±25% in two of the four repeats. Using Cfs to normalize the Ct values of test samples, the estimated values were very close to the reference values (bias -13.27 to 13.05%). In the validation with control samples, model c was more appropriate than Cva or Cvb. The application of Cf allowed pBSTopas to substitute for Topas 19/2 gDNA as a calibrator to accurately quantify the GMO.
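The abstract does not give the explicit forms of Cva, Cvb, or Cf, so the sketch below shows only the generic standard-curve quantification that such corrections modify: fit Ct against log copy number, quantify an unknown, and apply a multiplicative correction coefficient. All Ct values and the coefficient are hypothetical.

```python
# Minimal sketch of standard-curve qPCR quantification with a multiplicative
# pDNA-to-gDNA correction coefficient (illustrative stand-in for Cva/Cvb).
import numpy as np

log_copies = np.log10([1e1, 1e2, 1e3, 1e4, 1e5])   # plasmid dilution series
ct = np.array([33.1, 29.8, 26.4, 23.1, 19.7])      # measured Ct values (invented)

slope, intercept = np.polyfit(log_copies, ct, 1)   # Ct = slope*log10(N) + intercept
efficiency = 10 ** (-1.0 / slope) - 1              # PCR efficiency from the slope

ct_unknown = 27.5
copies = 10 ** ((ct_unknown - intercept) / slope)  # copies from the standard curve

cv = 1.12   # assumed correction coefficient between plasmid and genomic curves
print(f"E = {efficiency:.2f}, corrected copy number = {copies / cv:.0f}")
```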
Quantification of adipose tissue in a rodent model of obesity
NASA Astrophysics Data System (ADS)
Johnson, David H.; Flask, Chris; Wan, Dinah; Ernsberger, Paul; Wilson, David L.
2006-03-01
Obesity is a global epidemic and a comorbidity for many diseases. We are using MRI to characterize obesity in rodents, especially with regard to visceral fat. Rats were scanned on a 1.5T clinical scanner, and a T1W, water-spoiled image (fat only) was divided by a matched T1W image (fat + water) to yield a ratio image related to the lipid content in each voxel. The ratio eliminated coil sensitivity inhomogeneity and gave flat values across a fat pad, except for outlier voxels (> 1.0) due to motion. Following sacrifice, fat pads were dissected and their volumes measured by displacement in canola oil. In our study of 6 lean (SHR), 6 dietary obese (SHR-DO), and 9 genetically obese rats (SHROB), significant differences in visceral fat volume were observed, with an average increase of 29 ± 16 ml due to diet and 84 ± 44 ml due to genetics relative to the lean control volume of 11 ± 4 ml. Subcutaneous fat increased 14 ± 8 ml due to diet and 198 ± 105 ml due to genetics relative to the lean control value of 7 ± 3 ml. Visceral fat strongly correlated between MRI and dissection (R2 = 0.94), but MRI detected over five times the subcutaneous fat found with error-prone dissection. Using a semi-automated image segmentation method on the ratio images, intra-subject variation was very low. Fat pad composition as estimated from ratio images consistently differentiated the strains, with SHROB having a greater lipid concentration in adipose tissues. Future work will include in vivo studies of diet versus genetics, identification of new phenotypes, and corrective measures for obesity; technical efforts will focus on correction for motion and automation in quantification.
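The ratio-image computation described above is simple enough to sketch directly: a fat-only (water-spoiled) volume divided voxelwise by the matched fat+water volume, with motion outliers (> 1.0) masked out. The arrays below are synthetic stand-ins for the acquired images.

```python
# Sketch of the lipid-content ratio image with outlier masking.
import numpy as np

fat_only = np.random.rand(64, 64) * 0.8    # placeholder for the water-spoiled image
fat_water = np.random.rand(64, 64) + 0.5   # placeholder for the fat+water image

ratio = np.divide(fat_only, fat_water, out=np.zeros_like(fat_only),
                  where=fat_water > 0)     # lipid-content ratio per voxel
ratio = np.ma.masked_greater(ratio, 1.0)   # discard motion outliers (> 1.0)
print(float(ratio.mean()))                 # mean lipid fraction over the slice
```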
Jeong, Hyun Cheol; Hong, Hee-Do; Kim, Young-Chan; Rhee, Young Kyoung; Choi, Sang Yoon; Kim, Kyung-Tack; Kim, Sung Soo; Lee, Young-Chul; Cho, Chang-Won
2015-01-01
Background: Maltol, a phenolic compound, is produced by the browning reaction during the high-temperature treatment of ginseng. Thus, maltol can be used as a marker for the quality control of various ginseng products manufactured by high-temperature treatment, including red ginseng. For the quantification of maltol in Korean ginseng products, an effective high-performance liquid chromatography-diode array detector (HPLC-DAD) method was developed. Materials and Methods: The HPLC-DAD method for maltol quantification, coupled with a liquid-liquid extraction (LLE) method, was developed and validated in terms of linearity, precision, and accuracy. The HPLC separation was performed on a C18 column. Results: The LLE methods and HPLC running conditions for maltol quantification were optimized. The calibration curve of maltol exhibited good linearity (R2 = 1.00). The limit of detection of maltol was 0.26 μg/mL, and the limit of quantification was 0.79 μg/mL. The relative standard deviations (RSDs) of the intra- and inter-day experiments were <1.27% and 0.61%, respectively. The results of the recovery test were 101.35–101.75%, with an RSD of 0.21–1.65%. The developed method was applied successfully to quantify maltol in three ginseng products manufactured by different methods. Conclusion: The results of validation demonstrated that the proposed HPLC-DAD method was useful for the quantification of maltol in various ginseng products. PMID:26246746
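The LOD/LOQ figures quoted above are commonly obtained from the calibration curve via the ICH-style formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the fit and S its slope; the sketch below illustrates this with invented data points (the paper's raw calibration data are not reproduced here).

```python
# Hedged sketch of LOD/LOQ estimation from an HPLC calibration curve.
import numpy as np

conc = np.array([1, 5, 10, 25, 50])                 # maltol standards, ug/mL (invented)
area = np.array([12.1, 60.4, 121.0, 302.7, 604.9])  # peak areas (invented)

S, b = np.polyfit(conc, area, 1)       # calibration slope and intercept
resid = area - (S * conc + b)
sigma = resid.std(ddof=2)              # residual standard deviation (n-2 dof)

print(f"LOD = {3.3 * sigma / S:.2f} ug/mL, LOQ = {10 * sigma / S:.2f} ug/mL")
```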
Artifacts Quantification of Metal Implants in MRI
NASA Astrophysics Data System (ADS)
Vrachnis, I. N.; Vlachopoulos, G. F.; Maris, T. G.; Costaridou, L. I.
2017-11-01
The presence of materials with different magnetic properties, such as metal implants, causes local distortion of the magnetic field, resulting in signal voids and pile-ups, i.e. susceptibility artifacts, in MRI. Quantitative and unbiased measurement of the artifact is a prerequisite for optimization of acquisition parameters. In this study, an image gradient based segmentation method is proposed for susceptibility artifact quantification. The method captures abrupt signal alterations by calculation of the image gradient. The artifact is then quantified in terms of its extent, as an image area percentage, by an automated cross entropy thresholding method. The proposed quantification method was tested in phantoms containing two orthopedic implants with significantly different magnetic permeabilities. The method was compared against a method proposed in the literature, considered as a reference, demonstrating moderate to good correlation (Spearman's rho = 0.62 and 0.802 for the titanium and stainless steel implants, respectively). The automated character of the proposed quantification method seems promising for MRI acquisition parameter optimization.
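A rough sketch of the pipeline follows: gradient-magnitude map, a global threshold, and the artifact extent as an image-area percentage. Note the threshold here is a simple mean-plus-SD placeholder, not the cross-entropy method named above, and the test image is synthetic.

```python
# Illustrative gradient-based artifact-extent quantification.
import numpy as np
from scipy import ndimage

img = np.zeros((128, 128)); img[40:80, 40:80] = 1.0   # mock signal void with sharp edges
grad = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))

thr = grad.mean() + grad.std()   # placeholder global threshold (not cross entropy)
artifact = grad > thr
percent_area = 100.0 * artifact.sum() / artifact.size
print(f"artifact extent: {percent_area:.1f}% of image area")
```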
Monte Carlo modeling of fluorescence in semi-infinite turbid media
NASA Astrophysics Data System (ADS)
Ong, Yi Hong; Finlay, Jarod C.; Zhu, Timothy C.
2018-02-01
The incident field size and the interplay of absorption and scattering can influence the in-vivo light fluence rate distribution and complicate the absolute quantification of fluorophore concentration in-vivo. In this study, we use Monte Carlo simulations to evaluate the effects of incident beam radius and optical properties on the fluorescence signal collected by an isotropic detector placed on the tissue surface. The optical properties at the excitation and emission wavelengths are assumed to be identical. We compute correction factors to correct the fluorescence intensity for variations due to incident field size and optical properties. The correction factors are fitted to a four-parameter empirical correction function, and the changes in each parameter are compared for various beam radii over a range of physiologically relevant tissue optical properties (μa = 0.1-1 cm-1, μs' = 5-40 cm-1).
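The paper's four-parameter functional form is not given in the abstract, so the sketch below is only an assumed illustration of the fitting step: a generic four-parameter function of the effective attenuation coefficient, μeff = sqrt(3 μa (μa + μs')), fitted to tabulated correction factors with synthetic values standing in for the Monte Carlo results.

```python
# Assumed-form fit of correction factors CF(mu_eff) = a + b*mu_eff**c*exp(-d*mu_eff).
import numpy as np
from scipy.optimize import curve_fit

def cf_model(mu_eff, a, b, c, d):
    return a + b * mu_eff**c * np.exp(-d * mu_eff)

mu_a = np.linspace(0.1, 1.0, 10)            # cm^-1, range quoted above
mu_s = np.linspace(5, 40, 10)               # cm^-1, reduced scattering
mu_eff = np.sqrt(3 * mu_a * (mu_a + mu_s))
cf = cf_model(mu_eff, 1.0, 0.5, 1.2, 0.3)   # synthetic "Monte Carlo" correction factors

popt, _ = curve_fit(cf_model, mu_eff, cf, p0=[1, 1, 1, 0.1], maxfev=5000)
print(np.round(popt, 3))                    # recovered four parameters
```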
Quantitative Proteomics via High Resolution MS Quantification: Capabilities and Limitations
Higgs, Richard E.; Butler, Jon P.; Han, Bomie; Knierman, Michael D.
2013-01-01
Recent improvements in the mass accuracy and resolution of mass spectrometers have led to renewed interest in label-free quantification using data from the primary mass spectrum (MS1) acquired from data-dependent proteomics experiments. The capacity for higher specificity quantification of peptides from samples enriched for proteins of biological interest offers distinct advantages for hypothesis generating experiments relative to immunoassay detection methods or prespecified peptide ions measured by multiple reaction monitoring (MRM) approaches. Here we describe an evaluation of different methods to post-process peptide level quantification information to support protein level inference. We characterize the methods by examining their ability to recover a known dilution of a standard protein in background matrices of varying complexity. Additionally, the MS1 quantification results are compared to a standard, targeted, MRM approach on the same samples under equivalent instrument conditions. We show the existence of multiple peptides with MS1 quantification sensitivity similar to the best MRM peptides for each of the background matrices studied. Based on these results we provide recommendations on preferred approaches to leveraging quantitative measurements of multiple peptides to improve protein level inference. PMID:23710359
Dankowska, A; Domagała, A; Kowalewski, W
2017-09-01
The potential of fluorescence and UV-Vis spectroscopies, as well as low- and mid-level fusion of data from both, for quantifying the concentrations of roasted Coffea arabica and Coffea canephora var. robusta in coffee blends was investigated. Principal component analysis (PCA) was used to reduce data multidimensionality. To calculate the level of undeclared addition, multiple linear regression (PCA-MLR) models were used, with a lowest root mean square error of calibration (RMSEC) of 3.6% and root mean square error of cross-validation (RMSECV) of 7.9%. Linear discriminant analysis (LDA) was applied to fluorescence intensities and UV spectra of Coffea arabica and Coffea canephora samples, and their mixtures, in order to examine classification ability. The best performance of PCA-LDA analysis was observed for fusion of UV and fluorescence intensity data at a wavelength interval of 60 nm. LDA showed that data fusion can achieve over 96% correct classifications (sensitivity) in the test set and 100% correct classifications in the training set with low-level data fusion. The corresponding results for the individual spectroscopies ranged from 90% (UV-Vis spectroscopy) to 77% (synchronous fluorescence) in the test set, and from 93% to 97% in the training set. The results demonstrate that fluorescence, UV, and visible spectroscopies complement each other in the quantification of roasted Coffea arabica and Coffea canephora var. robusta concentrations in blends. Copyright © 2017 Elsevier B.V. All rights reserved.
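Low-level data fusion followed by PCA-LDA, in the spirit of the workflow above, amounts to concatenating the two spectral blocks per sample before dimensionality reduction and classification. The sketch below uses random placeholders for the spectra and labels; feature counts and class structure are assumptions.

```python
# Low-level fusion (concatenation) + PCA + LDA classification sketch.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
uv = rng.random((60, 200))      # 60 blends x 200 UV-Vis wavelengths (mock)
fluo = rng.random((60, 150))    # 60 blends x 150 fluorescence points (mock)
X = np.hstack([uv, fluo])       # low-level fusion: simple feature concatenation
y = rng.integers(0, 3, 60)      # arabica / robusta / blend labels (mock)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
model.fit(Xtr, ytr)
print(f"test accuracy: {model.score(Xte, yte):.2f}")
```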
Non-invasive method for quantitative evaluation of exogenous compound deposition on skin.
Stamatas, Georgios N; Wu, Jeff; Kollias, Nikiforos
2002-02-01
Topical application of active compounds on skin is common to both the pharmaceutical and cosmetic industries. Quantification of the concentration of a compound deposited on the skin is important in determining the optimum formulation to deliver the pharmaceutical or cosmetic benefit. The most commonly used techniques to date are either invasive or not easily reproducible. In this study, we have developed a noninvasive alternative to these techniques based on spectrofluorimetry. A mathematical model based on diffusion approximation theory is utilized to correct fluorescence measurements for the attenuation caused by endogenous skin chromophore absorption. The limitation is that the compound of interest has to be either fluorescent itself or fluorescently labeled. We used the method to detect topically applied salicylic acid. Based on the mathematical model, a calibration curve was constructed that is independent of endogenous chromophore concentration. We utilized the method to localize salicylic acid in the epidermis and to follow its dynamics over a period of 3 days.
Pang, Susan; Cowen, Simon
2017-12-13
We describe a novel generic method to derive the unknown endogenous concentration of analyte within complex biological matrices (e.g. serum or plasma), based upon the relationship between the immunoassay signal response of a biological test sample spiked with known analyte concentrations and the log-transformed estimated total concentration. If the estimated total analyte concentration is correct, a portion of the sigmoid on a log-log plot is very close to linear, allowing the unknown endogenous concentration to be estimated using a numerical method. This approach obviates conventional relative quantification using an internal standard curve and the need for calibrant diluent, and takes into account the individual matrix interference on the immunoassay by spiking the test sample itself. The technique is based on the method of standard additions used for chemical analytes. Unknown endogenous analyte concentrations within even 2-fold diluted human plasma may be determined reliably using as few as four reaction wells.
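A minimal numerical sketch of the idea, under assumptions of our own (a mock monotone response model and four spike levels): guess the endogenous concentration c0, check how linear log(signal) versus log(c0 + spike) is, and pick the c0 that maximizes linearity.

```python
# Standard-additions-style estimation of an endogenous concentration.
import numpy as np

spikes = np.array([0.0, 10.0, 30.0, 100.0])   # added analyte, pg/mL (four wells)
true_c0 = 25.0
signal = 1000 * (spikes + true_c0) ** 0.8     # mock monotone immunoassay response

def linearity(c0):
    """Pearson r of log(signal) vs log(c0 + spike); r -> 1 at the correct c0."""
    x = np.log(c0 + spikes)
    y = np.log(signal)
    return np.corrcoef(x, y)[0, 1]

candidates = np.linspace(1, 100, 500)
best_c0 = candidates[np.argmax([linearity(c) for c in candidates])]
print(f"estimated endogenous concentration ~ {best_c0:.1f} pg/mL")
```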
Lasnon, Charline; Dugué, Audrey Emmanuelle; Briand, Mélanie; Dutoit, Soizic; Aide, Nicolas
2015-12-01
Tail vein injection under short anesthesia is the most commonly used route for administering radiopharmaceuticals. However, the small caliber of the vein in rodents may lead to tracer extravasation and thereby compromise the quantitative accuracy of PET. We aimed to evaluate a method for correction of interstitial radiotracer leakage in the context of pre-clinical therapeutic response assessment. In two separate studies involving 16 nude rats, a model of human ovarian cancer was xenografted and each animal was treated with a phosphoinositide 3-kinase/mammalian target of rapamycin inhibitor or used as a control. Tracer injections were performed via the tail vein by a single operator. Two observers qualitatively evaluated the resulting images and, if appropriate, drew a volume of interest (VOI) over the injection site to record extravasated activities. Uncorrected and corrected tumor mean standardized uptake values (SUVmean) were computed (corrected injected activity = calibrated activity - decay-corrected residual syringe activity - decay-corrected tail extravasated activity). Molecular analyses were taken as a gold standard. The frequency and magnitude of extravasation were analyzed, as well as the inter-observer agreement and the impact of the correction method on tumor uptake quantification. Extravasation never exceeded 20% of the injected dose but occurred in more than 50% of injections. It was independent of animal group and protocol time point, with p values of 1.00 and 0.61, respectively, in the first experiment and 0.47 and 0.13, respectively, in the second experiment. There was good inter-observer agreement for the qualitative analysis (kappa = 0.72) and moderate agreement for the quantitative analysis (ρc = 0.94). In both experiments, there was a significant difference between uncorrected and corrected SUVmean. Despite this significant difference, the mean percent differences between uncorrected and corrected SUVmean in the first and second experiments were -3.61% and -1.78%, respectively. Concerning therapy assessment, in both experiments, significant differences in median %SUVmean between control and treated groups were observed over all time points with both uncorrected and corrected data (p < 0.05). Although extravasation is common and can be reproducibly corrected, correction is probably not required for validation of response to drugs that induce large SUV changes. However, further studies are required to evaluate the impact of extravasation in situations where less marked metabolic responses are observed or important extravasations occur.
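The correction formula quoted in the abstract can be sketched directly; the example below assumes an F-18 tracer (half-life 109.77 min) and invents all activities and time stamps.

```python
# Decay-corrected injected-activity correction and SUV rescaling (illustrative).
import math

HALF_LIFE_MIN = 109.77                 # fluorine-18 (assumed tracer)
lam = math.log(2) / HALF_LIFE_MIN

def decay_to_injection(activity_mbq, minutes_after_injection):
    """Decay-correct a later activity measurement back to injection time."""
    return activity_mbq * math.exp(lam * minutes_after_injection)

calibrated = 30.0                                  # MBq at injection time (invented)
residual = decay_to_injection(0.8, 5.0)            # syringe, measured 5 min later
extravasated = decay_to_injection(1.5, 60.0)       # tail VOI on the 60-min image

injected = calibrated - residual - extravasated
suv_corrected = 2.1 * (30.0 / injected)            # SUV scales inversely with dose
print(f"corrected injected activity: {injected:.2f} MBq, SUVmean: {suv_corrected:.2f}")
```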
Ahn, Sung Hee; Bae, Yong Jin; Moon, Jeong Hee; Kim, Myung Soo
2013-09-17
We propose to divide matrix suppression in matrix-assisted laser desorption ionization into two parts, normal and anomalous. In quantification of peptides, the normal effect can be accounted for by constructing the calibration curve in the form of peptide-to-matrix ion abundance ratio versus concentration. The anomalous effect forbids reliable quantification and is noticeable when matrix suppression is larger than 70%. With this 70% rule, matrix suppression becomes a guideline for reliable quantification, rather than a nuisance. A peptide in a complex mixture can be quantified even in the presence of large amounts of contaminants, as long as matrix suppression is below 70%. The theoretical basis for the quantification method using a peptide as an internal standard is presented together with its weaknesses. A systematic method to improve quantification of high concentration analytes has also been developed.
Cai, Yicun; He, Yuping; Lv, Rong; Chen, Hongchao; Wang, Qiang; Pan, Liangwen
2017-01-01
Meat products often consist of meat from multiple animal species, and adulteration and mislabeling of food products can negatively affect consumers. Therefore, a cost-effective and reliable method for identification and quantification of animal species in meat products is required. In this study, we developed a duplex droplet digital PCR (dddPCR) detection and quantification system to simultaneously identify and quantify the source of meat in samples containing a mixture of beef (Bos taurus) and pork (Sus scrofa) in a single digital PCR reaction tube. Mixed meat samples of known composition were used to test the accuracy and applicability of this method. The limit of detection (LOD) and the limit of quantification (LOQ) of this detection and quantification system were also determined. We conclude that our dddPCR detection and quantification system is suitable for quality control and routine analyses of meat products.
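The underlying ddPCR arithmetic is standard Poisson statistics: λ = -ln(negatives/total) copies per droplet, divided by the droplet volume to give copies/µL. The droplet counts and nominal droplet volume below are invented for illustration.

```python
# Standard ddPCR copy-number math with made-up droplet counts.
import math

DROPLET_UL = 0.00085   # ~0.85 nL nominal droplet volume (assumed)

def copies_per_ul(positives, total):
    lam = -math.log((total - positives) / total)   # mean copies per droplet (Poisson)
    return lam / DROPLET_UL

beef = copies_per_ul(positives=2150, total=15000)
pork = copies_per_ul(positives=430, total=15000)
print(f"beef: {beef:.0f} cp/uL, pork fraction: {pork / (beef + pork):.1%}")
```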
Carlsson, Nils; Borde, Annika; Wölfel, Sebastian; Kerman, Björn; Larsson, Anette
2011-04-01
We investigated how the Bradford assay for measuring protein released from a drug formulation may be affected by concomitant release of a pharmaceutical polymer used to formulate the protein delivery device. The main result is that polymer-caused perturbations of the Coomassie dye absorbance at the Bradford monitoring wavelength (595 nm) can be identified and corrected by recording absorption spectra in the region of 350-850 nm. The pharmaceutical polymers Carbopol and chitosan illustrate two potential types of perturbation in the Bradford assay, whereas the third polymer, hydroxypropylmethylcellulose (HPMC), acts as a nonperturbing control. Carbopol increases the apparent absorbance at 595 nm because the polymer aggregates at the low pH of the Bradford protocol, causing a turbidity contribution that can be corrected quantitatively at 595 nm by measuring the sample absorbance at 850 nm, outside the dye absorption band. Chitosan is a cationic polymer under Bradford conditions; it interacts directly with the anionic Coomassie dye and perturbs its absorption spectrum, including at 595 nm. In this case, the Bradford method remains useful if the polymer concentration is known, but it should be used with caution in release studies where the polymer concentration may vary and needs to be measured independently. Copyright © 2010 Elsevier Inc. All rights reserved.
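For the Carbopol case, the correction reduces to a baseline subtraction; the sketch below uses invented readings and treats the 850 nm absorbance as the turbidity baseline, as described above.

```python
# Turbidity correction for the Bradford reading at 595 nm (illustrative values).
A595_raw, A850 = 0.62, 0.08        # sample absorbances at 595 nm and 850 nm

A595_corrected = A595_raw - A850   # 850 nm lies outside the Coomassie dye band
print(f"corrected Bradford absorbance at 595 nm: {A595_corrected:.2f}")
```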
A Non-Invasive Assessment of Cardiopulmonary Hemodynamics with MRI in Pulmonary Hypertension
Bane, Octavia; Shah, Sanjiv J.; Cuttica, Michael J.; Collins, Jeremy D.; Selvaraj, Senthil; Chatterjee, Neil R.; Guetter, Christoph; Carr, James C.; Carroll, Timothy J.
2015-01-01
Purpose: We propose a method for non-invasive quantification of hemodynamic changes in the pulmonary arteries resulting from pulmonary hypertension (PH). Methods: Using a two-element windkessel model and input parameters derived from standard MRI evaluation of flow, cardiac function, and valvular motion, we derive: pulmonary artery compliance (C), mean pulmonary artery pressure (mPAP), pulmonary vascular resistance (PVR), pulmonary capillary wedge pressure (PCWP), time-averaged intra-pulmonary pressure waveforms, and systolic (sPAP) and diastolic (dPAP) pulmonary artery pressures. MRI results were compared directly to reference standard values from right heart catheterization (RHC) obtained in a series of patients with suspected PH. Results: In 7 patients with suspected PH undergoing RHC, MRI, and echocardiography, there was no statistically significant difference (at p < 0.05) between parameters measured by MRI and RHC. Using standard clinical cutoffs to define PH (mPAP ≥ 25 mmHg), MRI correctly identified all patients as having pulmonary hypertension, and correctly distinguished between pulmonary arterial (mPAP ≥ 25 mmHg, PCWP < 15 mmHg) and venous hypertension (mPAP ≥ 25 mmHg, PCWP ≥ 15 mmHg) in 5 of 7 cases. Conclusions: We have developed a mathematical model capable of quantifying physiological parameters that reflect the severity of PH. PMID:26283577
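A two-element windkessel obeys C dP/dt = Q(t) - P/R. The toy integration below is in the spirit of the model above, but the parameter values and half-sine flow waveform are illustrative choices, not the paper's.

```python
# Forward-Euler integration of a two-element windkessel (toy parameters).
import numpy as np

R, C = 3.0, 2.0                   # resistance (mmHg*s/mL), compliance (mL/mmHg)
T, dt = 1.0, 1e-3                 # cardiac period (s), time step (s)
t = np.arange(0, 10 * T, dt)
Q = np.where((t % T) < 0.3, 80 * np.sin(np.pi * (t % T) / 0.3), 0.0)  # flow, mL/s

P = np.empty_like(t); P[0] = 15.0   # initial pulmonary artery pressure (mmHg)
for i in range(1, t.size):
    dPdt = (Q[i - 1] - P[i - 1] / R) / C
    P[i] = P[i - 1] + dt * dPdt

# Report systolic/diastolic pressures from the last (settled) cardiac cycle.
last = t > 9 * T
print(f"sPAP ~ {P[last].max():.0f} mmHg, dPAP ~ {P[last].min():.0f} mmHg")
```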
de Kinkelder, R; van der Veen, R L P; Verbaak, F D; Faber, D J; van Leeuwen, T G; Berendschot, T T J M
2011-01-01
Purpose Accurate assessment of the amount of macular pigment (MPOD) is necessary to investigate the role of carotenoids and their assumed protective functions. High repeatability and reliability are important to monitor patients in studies investigating the influence of diet and supplements on MPOD. We evaluated the Macuscope (Macuvision Europe Ltd., Lapworth, Solihull, UK), a recently introduced device for measuring MPOD using the technique of heterochromatic flicker photometry (HFP). We determined agreement with another HFP device (QuantifEye; MPS 9000 series: Tinsley Precision Instruments Ltd., Croydon, Essex, UK) and a fundus reflectance method. Methods The right eyes of 23 healthy subjects (mean age 33.9±15.1 years) were measured. We determined agreement with QuantifEye and correlation with a fundus reflectance method. Repeatability of QuantifEye was assessed in 20 other healthy subjects (mean age 32.1±7.3 years). Repeatability was also compared with measurements by a fundus reflectance method in 10 subjects. Results We found low agreement between test and retest measurements with Macuscope. The average difference and the limits of agreement were −0.041±0.32. We found high agreement between test and retest measurements of QuantifEye (−0.02±0.18) and the fundus reflectance method (−0.04±0.18). MPOD data obtained by Macuscope and QuantifEye showed poor agreement: −0.017±0.44. For Macuscope and the fundus reflectance method, the correlation coefficient was r=0.05 (P=0.83). A significant correlation of r=0.87 (P<0.001) was found between QuantifEye and the fundus reflectance method. Conclusions Because repeatability of Macuscope measurements was low (ie, wide limits of agreement) and MPOD values correlated poorly with the fundus reflectance method, and agreed poorly with QuantifEye, the tested Macuscope protocol seems less suitable for studying MPOD. PMID:21057522
Fluorescent quantification of melanin.
Fernandes, Bruno; Matamá, Teresa; Guimarães, Diana; Gomes, Andreia; Cavaco-Paulo, Artur
2016-11-01
Melanin quantification is reportedly performed by absorption spectroscopy, commonly at 405 nm. Here, we propose the implementation of fluorescence spectroscopy for melanin assessment. In a typical in vitro assay to assess melanin production in response to an external stimulus, absorption spectroscopy clearly overvalues melanin content. This method is also incapable of distinguishing non-melanotic/amelanotic control cells from those that are actually capable of performing melanogenesis. Therefore, fluorescence spectroscopy is the best method for melanin quantification as it proved to be highly specific and accurate, detecting even small variations in the synthesis of melanin. This method can also be applied to the quantification of melanin in more complex biological matrices like zebrafish embryos and human hair. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Collender, Philip A.; Kirby, Amy E.; Addiss, David G.; Freeman, Matthew C.; Remais, Justin V.
2015-01-01
Limiting the environmental transmission of soil-transmitted helminths (STH), which infect 1.5 billion people worldwide, will require sensitive, reliable, and cost effective methods to detect and quantify STH in the environment. We review the state of the art of STH quantification in soil, biosolids, water, produce, and vegetation with respect to four major methodological issues: environmental sampling; recovery of STH from environmental matrices; quantification of recovered STH; and viability assessment of STH ova. We conclude that methods for sampling and recovering STH require substantial advances to provide reliable measurements for STH control. Recent innovations in the use of automated image identification and developments in molecular genetic assays offer considerable promise for improving quantification and viability assessment. PMID:26440788
Delre, Antonio; Mønster, Jacob; Samuelsson, Jerker; Fredenslund, Anders M; Scheutz, Charlotte
2018-09-01
The tracer gas dispersion method (TDM) is a remote sensing method for quantifying fugitive emissions that relies on the controlled release of a tracer gas at the source, combined with concentration measurements of the tracer and target gas plumes. The TDM was tested at a wastewater treatment plant for plant-integrated methane emission quantification, using four analytical instruments simultaneously and four different tracer gases. Measurements performed using a combination of an analytical instrument and a tracer gas with a high ratio between the tracer gas release rate and instrument precision (a high release-precision ratio) resulted in well-defined plumes with a high signal-to-noise ratio and a high methane-to-tracer gas correlation factor. Measured methane emission rates differed by up to 18% from the mean value when measurements were performed using seven different instrument and tracer gas combinations. Analytical instruments with a high detection frequency and good precision were established as the most suitable for successful TDM application. The limitations of an instrument with poor precision could be overcome only to some extent by applying a higher tracer gas release rate. A sideways misplacement of the tracer gas release point of about 250 m resulted in an emission rate comparable to those obtained using a tracer gas correctly simulating the methane emission. Conversely, an upwind misplacement of about 150 m resulted in an emission rate overestimation of almost 50%, showing the importance of proper emission source simulation when applying the TDM. Copyright © 2018 Elsevier B.V. All rights reserved.
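The core TDM calculation scales the known tracer release rate by the ratio of the cross-plume integrated concentrations, converted by the two gases' molar masses. The sketch below invents Gaussian plume transects and assumes acetylene as the tracer.

```python
# TDM emission-rate estimate from synthetic plume transects.
import numpy as np

q_tracer = 2.0                    # assumed tracer (acetylene) release rate, kg/h
M_CH4, M_TRACER = 16.04, 26.04    # molar masses, g/mol

x = np.linspace(0, 500, 200)                   # transect distance, m
ch4 = 120 * np.exp(-((x - 250) / 60) ** 2)     # CH4 above background, ppb (mock)
tracer = 40 * np.exp(-((x - 250) / 60) ** 2)   # tracer above background, ppb (mock)

# On a common uniform grid, the ratio of plume integrals equals the ratio of sums.
ratio = ch4.sum() / tracer.sum()               # integrated plume ratio, mol/mol
q_ch4 = q_tracer * ratio * (M_CH4 / M_TRACER)  # kg/h
print(f"estimated CH4 emission: {q_ch4:.1f} kg/h")
```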
Application of Quantitative Analytical Electron Microscopy to the Mineral Content of Insect Cuticle
NASA Astrophysics Data System (ADS)
Rasch, Ron; Cribb, Bronwen W.; Barry, John; Palmer, Christopher M.
2003-04-01
Quantification of calcium in the cuticle of the fly larva Exeretonevra angustifrons was undertaken at the micron scale using wavelength dispersive X-ray microanalysis, analytical standards, and a full matrix correction. Calcium and phosphorus were found to be present in the exoskeleton in a ratio that indicates amorphous calcium phosphate. This was confirmed through electron diffraction of the calcium-containing tissue. Owing to the practical difficulties of measuring light elements, it is not uncommon in the field of entomology to neglect the use of matrix corrections when performing microanalysis of bulk insect specimens. To determine, firstly, whether such a strategy affects the outcome and, secondly, which matrix correction is preferable, φ(ρz) and ZAF matrix corrections were contrasted with each other and with no matrix correction. The best estimate of the mineral phase was found to be given by the φ(ρz) correction. When no correction was made, the ratio of Ca to P fell outside the range for amorphous calcium phosphate, possibly leading to flawed interpretation of the mineral form when used on its own.
Hanna, George B.
2018-01-01
Abstract: Proton transfer reaction time-of-flight mass spectrometry (PTR-ToF-MS) is a direct injection MS technique allowing for the sensitive, real-time detection, identification, and quantification of volatile organic compounds. When aiming to employ PTR-ToF-MS for targeted volatile organic compound analysis, some methodological questions must be addressed, such as the need to correctly identify product ions, or evaluating the quantitation accuracy. This work proposes a workflow for PTR-ToF-MS method development, addressing the main issues affecting the reliable identification and quantification of target compounds. We determined the fragmentation patterns of 13 selected compounds (aldehydes, fatty acids, phenols). Experiments were conducted under breath-relevant conditions (100% humid air) and within an extended range of reduced electric field values (E/N = 48-144 Td), obtained by changing the drift tube voltage. Reactivity was inspected using H3O+, NO+, and O2+ as primary ions. The results show that a relatively low (<90 Td) E/N often reduces fragmentation, enhancing sensitivity and identification capability, particularly in the case of aldehydes using NO+, where a 4-fold increase in sensitivity is obtained by means of drift voltage reduction. We developed a novel calibration methodology relying on diffusion tubes used as gravimetric standards. For each of the tested compounds, it was possible to define suitable conditions whereby the experimental error, defined as the difference between gravimetric measurements and calculated concentrations, was 8% or lower. PMID:29336521
Disease quantification on PET/CT images without object delineation
NASA Astrophysics Data System (ADS)
Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Wu, Caiyun; Fitzpatrick, Danielle; Winchell, Nicole; Schuster, Stephen J.; Torigian, Drew A.
2017-03-01
The derivation of quantitative information from images to make quantitative radiology (QR) clinically practical continues to face a major image analysis hurdle because of image segmentation challenges. This paper presents a novel approach to disease quantification (DQ) via positron emission tomography/computed tomography (PET/CT) images that explores how to decouple DQ methods from explicit dependence on object segmentation through the use of only object recognition results to quantify disease burden. The concept of an object-dependent disease map is introduced to express disease severity without performing explicit delineation and partial volume correction of either objects or lesions. The parameters of the disease map are estimated from a set of training image data sets. The idea is illustrated on 20 lung lesions and 20 liver lesions derived from 18F-2-fluoro-2-deoxy-D-glucose (FDG)-PET/CT scans of patients with various types of cancers and also on 20 NEMA PET/CT phantom data sets. Our preliminary results show that, on phantom data sets, "disease burden" can be estimated to within 2% of known absolute true activity. Notwithstanding the difficulty in establishing true quantification on patient PET images, our results achieve 8% deviation from "true" estimates, with slightly larger deviations for small and diffuse lesions where establishing ground truth becomes really questionable, and smaller deviations for larger lesions where ground truth set up becomes more reliable. We are currently exploring extensions of the approach to include fully automated body-wide DQ, extensions to just CT or magnetic resonance imaging (MRI) alone, to PET/CT performed with radiotracers other than FDG, and other functional forms of disease maps.
Liu, Junyan; Liu, Yang; Gao, Mingxia; Zhang, Xiangmin
2012-08-01
A facile proteomic quantification method, fluorescent labeling absolute quantification (FLAQ), was developed. Instead of using MS for quantification, FLAQ is a chromatography-based quantification combined with MS for identification. Multidimensional liquid chromatography (MDLC) with high-accuracy laser-induced fluorescence (LIF) detection and a tandem MS system were employed for FLAQ. Several requirements should be met for fluorescent labeling in MS identification: labeling completeness, minimal side-reactions, simple MS spectra, and no extra tandem MS fragmentation for structure elucidation. A fluorescent dye, 5-iodoacetamidofluorescein, was chosen to label proteins on all cysteine residues. The dye was compatible with the process of trypsin digestion and MALDI MS identification. Quantitative labeling was achieved with optimization of reaction conditions. A synthesized peptide and the model proteins BSA (35 cysteines) and OVA (five cysteines) were used to verify the completeness of labeling. Proteins were separated through MDLC and quantified based on fluorescence intensities, followed by MS identification. High accuracy (RSD% < 1.58) and wide linearity of quantification (1-10^5) were achieved by LIF detection. The limit of quantitation for the model protein was as low as 0.34 amol. A portion of the proteins in the human liver proteome was quantified and demonstrated using FLAQ. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Bhat, Himanshu; Sajja, Balasrinivasa Rao; Narayana, Ponnada A.
2006-11-01
Accurate quantification of the MRSI-observed regional distribution of metabolites involves relatively long processing times. This is particularly true when dealing with the large amounts of data typically acquired in multi-center clinical studies. To significantly shorten the processing time, an artificial neural network (ANN)-based approach was explored for quantifying phase-corrected (as opposed to magnitude) spectra. Specifically, a radial basis function neural network (RBFNN) was used in these studies. The method was tested on simulated data and on normal human brain data acquired at 3T. The N-acetyl aspartate (NAA)/creatine (Cr), choline (Cho)/Cr, glutamate + glutamine (Glx)/Cr, and myo-inositol (mI)/Cr ratios in normal subjects were compared with those from the line fitting (LF) technique and jMRUI-AMARES analysis, and with published values. The average NAA/Cr, Cho/Cr, Glx/Cr and mI/Cr ratios in normal controls were found to be 1.58 ± 0.13, 0.9 ± 0.08, 0.7 ± 0.17 and 0.42 ± 0.07, respectively. The corresponding ratios using the LF and jMRUI-AMARES methods were 1.6 ± 0.11, 0.95 ± 0.08, 0.78 ± 0.18, 0.49 ± 0.1 and 1.61 ± 0.15, 0.78 ± 0.07, 0.61 ± 0.18, 0.42 ± 0.13, respectively. These results agree with those published in the literature. Bland-Altman analysis indicated excellent agreement and minimal bias between the results obtained with RBFNN and the other methods. The computational time for the current method was 15 s, compared to approximately 10 min for the LF-based analysis.
Nirogi, Ramakrishna; Ajjala, Devender Reddy; Kandikere, Vishwottam; Aleti, Raghupathi; Pantangi, Hanumanth Rao; Srikakolapu, Surya Rao; Benade, Vijay; Bhyrapuneni, Gopinadh; Vurimindi, Himabindu
2013-01-01
A sensitive LC-MS/MS method was developed and validated for the quantification of almotriptan in rat brain and blood dialysates. Almotriptan is a 5HT1B/1D receptor agonist used for the treatment of migraine pain. The method consists of a rapid gradient elution program with 10 mM ammonium formate (pH 3) and acetonitrile on a Xbridge column. The MRM transitions monitored were m/z 336.2 → 58.1 for almotriptan and m/z 448.2 → 285.3 for the IS. The assay was linear in the range of 0.1-20 ng/ml, with acceptable precision and accuracy along with adequate sensitivity. The between-batch accuracy was in the range of 99.0-104.3%, with precision between 0.6% and 5.8%. Microdialysis is an important sampling technique, with the capability of capturing the concentrations of various analytes in different biofluids at a single time point. This method was applied to quantify brain and blood dialysate samples obtained from a microdialysis study of rats treated with almotriptan (10 mg/kg, p.o.). In vivo recovery experiments were performed to convert the dialysate concentrations into extracellular concentrations. Mean peak dialysate concentrations of almotriptan were found to be 152 ± 78 and 7.4 ± 1.0 ng/ml in blood and prefrontal cortex, respectively. The brain penetration of almotriptan is characterized by the AUCbrain/AUCblood ratio, found to be 0.07 ± 0.05. The results revealed the importance of measuring unbound almotriptan concentrations in the brain, rather than the blood, for understanding its PK/PD relationship. Copyright © 2013 Elsevier B.V. All rights reserved.
Semi-automated scar detection in delayed enhanced cardiac magnetic resonance images
NASA Astrophysics Data System (ADS)
Morisi, Rita; Donini, Bruno; Lanconelli, Nico; Rosengarden, James; Morgan, John; Harden, Stephen; Curzen, Nick
2015-06-01
Late enhancement cardiac magnetic resonance imaging (MRI) can precisely delineate myocardial scars. We present a semi-automated method for detecting scars in cardiac MRI. This model has the potential to improve routine clinical practice, since quantification is not currently offered due to time constraints. A first segmentation step was developed for extracting the target regions for potential scar and determining pre-candidate objects. Pattern recognition methods are then applied to the segmented images in order to detect the position of the myocardial scar. The database of late gadolinium enhancement (LE) cardiac MR images consists of 111 blocks of images acquired from 63 patients at the University Hospital Southampton NHS Foundation Trust (UK). At least one scar was present for each patient, and all scars were manually annotated by an expert. A group of images (around one third of the entire set) was used for training the system, which was subsequently tested on all the remaining images. Four different classifiers were trained (support vector machine (SVM), k-nearest neighbor (KNN), Bayesian, and feed-forward neural network) and their performance was evaluated using Free-response Receiver Operating Characteristic (FROC) analysis. Feature selection was implemented to analyze the importance of the various features. The proposed segmentation method allowed the region affected by the scar to be extracted correctly in 96% of the blocks of images. The SVM was shown to be the best classifier for our task, and our system reached an overall sensitivity of 80% with fewer than 7 false positives per patient. The method we present provides an effective tool for the detection of scars on cardiac MRI. This may be of value in clinical practice by permitting routine reporting of scar quantification.
Analytical-Based Partial Volume Recovery in Mouse Heart Imaging
NASA Astrophysics Data System (ADS)
Dumouchel, Tyler; deKemp, Robert A.
2011-02-01
Positron emission tomography (PET) is a powerful imaging modality that has the ability to yield quantitative images of tracer activity. Physical phenomena such as photon scatter, photon attenuation, random coincidences and spatial resolution limit quantification potential and must be corrected to preserve the accuracy of reconstructed images. This study focuses on correcting the partial volume effects that arise in mouse heart imaging when resolution is insufficient to resolve the true tracer distribution in the myocardium. The correction algorithm is based on fitting 1D profiles through the myocardium in gated PET images to derive myocardial contours along with blood, background and myocardial activity. This information is interpolated onto a 2D grid and convolved with the tomograph's point spread function to derive regional recovery coefficients enabling partial volume correction. The point spread function was measured by placing a line source inside a small animal PET scanner. PET simulations were created based on noise properties measured from a reconstructed PET image and on the digital MOBY phantom. The algorithm can estimate the myocardial activity to within 5% of the truth when different wall thicknesses, backgrounds and noise properties are encountered that are typical of healthy FDG mouse scans. The method also significantly improves partial volume recovery in simulated infarcted tissue. The algorithm offers a practical solution to the partial volume problem without the need for co-registered anatomic images and offers a basis for improved quantitative 3D heart imaging.
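The recovery-coefficient idea at the heart of this correction can be illustrated in 1D: a rectangular myocardial wall profile convolved with a Gaussian PSF, with the recovery coefficient taken as the ratio of the blurred peak to the true wall activity. Wall thickness and PSF width below are representative, not measured, values.

```python
# 1D partial-volume recovery-coefficient illustration.
import numpy as np
from scipy.ndimage import gaussian_filter1d

dx = 0.05                                   # mm per sample
x = np.arange(0, 20, dx)
wall = ((x > 9) & (x < 11)).astype(float)   # 2 mm myocardial wall, unit activity

fwhm_mm = 1.8                               # assumed scanner resolution (FWHM)
sigma = fwhm_mm / (2.355 * dx)              # FWHM -> Gaussian sigma, in samples
blurred = gaussian_filter1d(wall, sigma)    # PSF blurring of the true profile

rc = blurred.max() / wall.max()             # recovery coefficient
print(f"recovery coefficient: {rc:.2f}; corrected = measured / {rc:.2f}")
```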
Medrano-Gracia, Pau; Cowan, Brett R; Bluemke, David A; Finn, J Paul; Kadish, Alan H; Lee, Daniel C; Lima, Joao A C; Suinesiaputra, Avan; Young, Alistair A
2013-09-13
Cardiovascular imaging studies generate a wealth of data which is typically used only for individual study endpoints. By pooling data from multiple sources, quantitative comparisons can be made of regional wall motion abnormalities between different cohorts, enabling reuse of valuable data. Atlas-based analysis provides precise quantification of shape and motion differences between disease groups and normal subjects. However, subtle shape differences may arise due to differences in imaging protocol between studies. A mathematical model describing regional wall motion and shape was used to establish a coordinate system registered to the cardiac anatomy. The atlas was applied to data contributed to the Cardiac Atlas Project from two independent studies which used different imaging protocols: steady state free precession (SSFP) and gradient recalled echo (GRE) cardiovascular magnetic resonance (CMR). Shape bias due to imaging protocol was corrected using an atlas-based transformation which was generated from a set of 46 volunteers who were imaged with both protocols. Shape bias between GRE and SSFP was regionally variable, and was effectively removed using the atlas-based transformation. Global mass and volume bias was also corrected by this method. Regional shape differences between cohorts were more statistically significant after removing regional artifacts due to imaging protocol bias. Bias arising from imaging protocol can be both global and regional in nature, and is effectively corrected using an atlas-based transformation, enabling direct comparison of regional wall motion abnormalities between cohorts acquired in separate studies.
Comparison of sEMG processing methods during whole-body vibration exercise.
Lienhard, Karin; Cabasson, Aline; Meste, Olivier; Colson, Serge S
2015-12-01
The objective was to investigate the influence of surface electromyography (sEMG) processing methods on the quantification of muscle activity during whole-body vibration (WBV) exercises. sEMG activity was recorded while the participants performed squats on the platform with and without WBV. The spikes observed in the sEMG spectrum at the vibration frequency and its harmonics were deleted using state-of-the-art methods, i.e. (1) a band-stop filter, (2) a band-pass filter, and (3) spectral linear interpolation. The same filtering methods were applied on the sEMG during the no-vibration trial. The linear interpolation method showed the highest intraclass correlation coefficients (no vibration: 0.999, WBV: 0.757-0.979) with the comparison measure (unfiltered sEMG during the no-vibration trial), followed by the band-stop filter (no vibration: 0.929-0.975, WBV: 0.661-0.938). While both methods introduced a systematic bias (P < 0.001), the error increased with increasing mean values to a higher degree for the band-stop filter. After adjusting the sEMG(RMS) during WBV for the bias, the performance of the interpolation method and the band-stop filter was comparable. The band-pass filter was in poor agreement with the other methods (ICC: 0.207-0.697), unless the sEMG(RMS) was corrected for the bias (ICC ⩾ 0.931, %LOA ⩽ 32.3). In conclusion, spectral linear interpolation or a band-stop filter centered at the vibration frequency and its multiple harmonics should be applied to delete the artifacts in the sEMG signals during WBV. With the use of a band-stop filter it is recommended to correct the sEMG(RMS) for the bias as this procedure improved its performance. Copyright © 2015 Elsevier Ltd. All rights reserved.
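A simplified sketch of the spectral linear interpolation approach favored above: FFT bins around the vibration frequency and its harmonics are replaced by magnitudes interpolated from the neighboring bins, and the signal is rebuilt by inverse FFT. Frequencies, bandwidth, and the choice to retain the original phase are assumptions of this sketch, not the paper's exact procedure.

```python
# Spectral linear interpolation of WBV artifacts in a mock sEMG signal.
import numpy as np

fs, f_vib = 1000.0, 30.0                  # sampling rate and WBV frequency (Hz)
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(2)
emg = rng.standard_normal(t.size)         # stand-in for an sEMG recording
for h in (1, 2, 3):                       # add motion artifacts at harmonics
    emg += 2.0 * np.sin(2 * np.pi * h * f_vib * t)

spec = np.fft.rfft(emg)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for h in (1, 2, 3):
    band = np.abs(freqs - h * f_vib) <= 1.0           # +/- 1 Hz around harmonic
    mag = np.interp(freqs[band], freqs[~band], np.abs(spec[~band]))
    spec[band] = mag * np.exp(1j * np.angle(spec[band]))  # interpolated magnitude
emg_clean = np.fft.irfft(spec, n=t.size)

rms = lambda s: np.sqrt((s ** 2).mean())
print(f"RMS before: {rms(emg):.2f}, after: {rms(emg_clean):.2f}")
```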
Model Reduction via Principal Component Analysis and Markov Chain Monte Carlo (MCMC) Methods
NASA Astrophysics Data System (ADS)
Gong, R.; Chen, J.; Hoversten, M. G.; Luo, J.
2011-12-01
Geophysical and hydrogeological inverse problems often include a large number of unknown parameters, ranging from hundreds to millions, depending on the parameterization and the problem undertaken. This makes inverse estimation and uncertainty quantification very challenging, especially for problems in two- or three-dimensional spatial domains. Model reduction techniques have the potential to mitigate the curse of dimensionality by reducing the total number of unknowns while still describing complex subsurface systems adequately. In this study, we explore the use of principal component analysis (PCA) and Markov chain Monte Carlo (MCMC) sampling methods for model reduction through the use of synthetic datasets. We compare the performance of three different but closely related model reduction approaches: (1) PCA with geometric sampling (referred to as Method 1), (2) PCA with MCMC sampling (referred to as Method 2), and (3) PCA with MCMC sampling and inclusion of random effects (referred to as Method 3). We consider a simple convolution model with five unknown parameters, as our goal is to understand and visualize the advantages and disadvantages of each method by comparing the inversion results with the corresponding analytical solutions. We generated synthetic data with added noise and inverted them under two different situations: (1) the noisy data and the covariance matrix for PCA are consistent (referred to as the unbiased case), and (2) the noisy data and the covariance matrix are inconsistent (referred to as the biased case). In the unbiased case, comparison between the analytical solutions and the inversion results shows that all three methods provide good estimates of the true values, and Method 1 is computationally more efficient. In terms of uncertainty quantification, Method 1 performs poorly because of the relatively small number of samples obtained, Method 2 performs best, and Method 3 overestimates uncertainty due to the inclusion of random effects. In the biased case, however, only Method 3 correctly estimates all the unknown parameters, and both Methods 1 and 2 provide wrong values for the biased parameters. The synthetic case study demonstrates that if the covariance matrix for PCA is inconsistent with the true model, PCA with geometric or MCMC sampling will provide incorrect estimates.
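A compact sketch of the Method 2 idea follows: reduce a spatial parameter field to a few principal components and sample the PC coefficients with a basic Metropolis MCMC. The forward model (a convolution, echoing the study's setup), covariance choice, and all numbers are toy choices for illustration.

```python
# PCA model reduction + random-walk Metropolis sampling over PC coefficients.
import numpy as np

rng = np.random.default_rng(3)
n, k = 50, 3
xg = np.arange(n)
C = np.exp(-np.abs(xg[:, None] - xg[None, :]) / 10.0)   # smooth prior covariance
eigval, eigvec = np.linalg.eigh(C)
P = eigvec[:, -k:] * np.sqrt(eigval[-k:])               # k leading PC "loadings"

kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
def forward(m):                                         # toy convolution model
    return np.convolve(m, kernel, mode="same")

m_true = P @ np.array([1.5, -0.8, 0.5])
d_obs = forward(m_true) + 0.05 * rng.standard_normal(n)

def log_post(z):                                        # Gaussian likelihood + prior
    r = d_obs - forward(P @ z)
    return -0.5 * (r @ r) / 0.05**2 - 0.5 * (z @ z)

z, lp, samples = np.zeros(k), None, []
lp = log_post(z)
for _ in range(5000):                                   # Metropolis random walk
    z_new = z + 0.05 * rng.standard_normal(k)
    lp_new = log_post(z_new)
    if np.log(rng.random()) < lp_new - lp:
        z, lp = z_new, lp_new
    samples.append(z.copy())
print(np.mean(samples[2000:], axis=0))                  # posterior mean of PC weights
```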
A SIMPLE METHOD FOR THE EXTRACTION AND QUANTIFICATION OF PHOTOPIGMENTS FROM SYMBIODINIUM SPP.
John E. Rogers and Dragoslav Marcovich. Submitted. Simple Method for the Extraction and Quantification of Photopigments from Symbiodinium spp.. Limnol. Oceanogr. Methods. 19 p. (ERL,GB 1192).
We have developed a simple, mild extraction procedure using methanol which, when...
The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.
2017-11-27
Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.
Röst, Hannes L; Liu, Yansheng; D'Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi
2016-09-01
Next-generation mass spectrometric (MS) techniques such as SWATH-MS have substantially increased the throughput and reproducibility of proteomic analysis, but ensuring consistent quantification of thousands of peptide analytes across multiple liquid chromatography-tandem MS (LC-MS/MS) runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we developed TRIC (http://proteomics.ethz.ch/tric/), a software tool that utilizes fragment-ion data to perform cross-run alignment, consistent peak-picking and quantification for high-throughput targeted proteomics. TRIC reduced the identification error compared to a state-of-the-art SWATH-MS analysis without alignment by more than threefold at constant recall while correcting for highly nonlinear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups. Thus, TRIC fills a gap in the pipeline for automated analysis of massively parallel targeted proteomics data sets.
Laurie, Matthew T; Bertout, Jessica A; Taylor, Sean D; Burton, Joshua N; Shendure, Jay A; Bielas, Jason H
2013-08-01
Due to the high cost of failed runs and suboptimal data yields, quantification and determination of fragment size range are crucial steps in the library preparation process for massively parallel sequencing (or next-generation sequencing). Current library quality control methods commonly involve quantification using real-time quantitative PCR and size determination using gel or capillary electrophoresis. These methods are laborious and subject to a number of significant limitations that can make library calibration unreliable. Herein, we propose and test an alternative method for quality control of sequencing libraries using droplet digital PCR (ddPCR). By exploiting a correlation we have discovered between droplet fluorescence and amplicon size, we achieve the joint quantification and size determination of target DNA with a single ddPCR assay. We demonstrate the accuracy and precision of applying this method to the preparation of sequencing libraries.
Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong
2015-01-01
The study of complex proteomes places greater demands on mass spectrometry-based quantification methods. In this paper, we present a label-free mass spectrometry quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification, and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis that makes full use of tandem mass spectrometry (MS/MS) spectral counts, protein sequence length, shared peptides, and ion intensity. It adopts spectral counting for quantitative analysis and introduces a new method for shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, MS/MS total ion count coupled with spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant improve the accuracy of quantification with a better dynamic range.
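freeQuant's exact algorithm is not reproduced in the abstract, so the sketch below shows only a widely used spectral-count measure in the same spirit: the normalized spectral abundance factor, NSAF_i = (SpC_i/L_i) / Σ_j (SpC_j/L_j), which combines spectral counts SpC with protein lengths L. All counts and lengths are invented.

```python
# NSAF-style length-normalized spectral-count quantification (illustrative).
spectral_counts = {"ProtA": 120, "ProtB": 45, "ProtC": 8}
lengths = {"ProtA": 550, "ProtB": 210, "ProtC": 90}   # residues, invented

saf = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
total = sum(saf.values())
nsaf = {p: v / total for p, v in saf.items()}         # normalize to sum to 1
for p, v in nsaf.items():
    print(f"{p}: NSAF = {v:.3f}")
```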
Turner, Clare E; Russell, Bruce R; Gant, Nicholas
2015-11-01
Magnetic resonance spectroscopy (MRS) is an analytical procedure that can be used to non-invasively measure the concentration of a range of neural metabolites. Creatine is an important neurometabolite, and dietary supplementation offers therapeutic potential for neurological disorders with dysfunctional energetic processes. Neural creatine concentrations can be probed using proton MRS and quantified using a range of software packages based on different analytical methods. This experiment examines the differences in quantification performance of two commonly used analysis packages following a creatine supplementation strategy with potential therapeutic application. Human participants followed a seven-day dietary supplementation regime in a placebo-controlled, cross-over design interspersed with a five-week wash-out period. Spectroscopy data were acquired the day immediately following supplementation and analyzed with two commonly used software packages that employ vastly different quantification methods. Results demonstrate that neural creatine concentration was augmented following creatine supplementation when analyzed using the peak fitting method of quantification (105.9% ± 10.1). In contrast, no change in neural creatine levels was detected when analysis was conducted using the basis spectrum method of quantification (102.6% ± 8.6). The results suggest that software packages employing the peak fitting procedure for spectral quantification may be more sensitive to subtle changes in neural creatine concentration. The relative simplicity of the spectroscopy sequence and the data analysis procedure suggests that peak fitting procedures may be the most effective means of metabolite quantification when detection of subtle alterations in neural metabolites is necessary. The straightforward technique can be used on a clinical magnetic resonance imaging system. Copyright © 2015 Elsevier Inc. All rights reserved.
Lee, Dong-Hoon; Lee, Do-Wan; Henry, David; Park, Hae-Jin; Han, Bong-Soo; Woo, Dong-Cheol
2018-04-12
To evaluate the effects of signal intensity differences between the b0 image and diffusion tensor imaging (DTI) data in the image registration process. To correct signal intensity differences between the b0 image and the DTI data, a simple image intensity compensation (SIMIC) method, a b0 image re-calculation process from the DTI data, was applied before image registration. The re-calculated b0 image (b0_ext) from each diffusion direction was registered to the b0 image acquired during the MR scan (b0_nd) with two types of cost functions, and their transformation matrices were acquired. These transformation matrices were then used to register the DTI data. For quantification, Dice similarity coefficient (DSC) values, the diffusion scalar matrix, and quantified fibre numbers and lengths were calculated. The combined SIMIC method with two cost functions showed the highest DSC value (0.802 ± 0.007). Regarding diffusion scalar values and the numbers and lengths of fibres from the corpus callosum, superior longitudinal fasciculus, and cortico-spinal tract, using only normalised cross-correlation (NCC) showed a specific tendency toward lower values in these brain regions. Image-based distortion correction with SIMIC for DTI data would aid image analysis by accounting for signal intensity differences, as one additional option for DTI analysis. • We evaluated the effects of signal intensity differences on DTI registration. • A non-diffusion-weighted image re-calculation process from DTI data was applied. • SIMIC can minimise signal intensity differences in DTI registration.
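The DSC reported above is the standard overlap measure for two binary masks; for reference, a minimal numpy version (the masks below are hypothetical):

    import numpy as np

    def dice(a, b):
        """Dice similarity coefficient: 2|A & B| / (|A| + |B|)."""
        a, b = a.astype(bool), b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    mask_fixed = np.random.rand(64, 64, 30) > 0.5   # hypothetical brain masks
    mask_moved = np.random.rand(64, 64, 30) > 0.5
    print(dice(mask_fixed, mask_moved))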
NASA Astrophysics Data System (ADS)
Teuho, J.; Johansson, J.; Linden, J.; Saunavaara, V.; Tolvanen, T.; Teräs, M.
2014-01-01
Selection of reconstruction parameters affects image quantification in PET, with an additional contribution from the scanner-specific attenuation correction method. To achieve comparable results in inter- and intra-center comparisons, any existing quantitative differences should be identified and compensated for. In this study, a comparison between PET, PET/CT and PET/MR was performed using an anatomical brain phantom to identify and measure the bias caused by differences in reconstruction and attenuation correction methods, especially in PET/MR. Differences were estimated using visual, qualitative and quantitative analysis. The qualitative analysis consisted of a line profile analysis measuring the reproduction of anatomical structures and the contribution of the number of iterations to image contrast. The quantitative analysis consisted of measurement and comparison of 10 anatomical VOIs, with the HRRT considered as the reference. All scanners reproduced the main anatomical structures of the phantom adequately, although image contrast on the PET/MR was inferior when using a default clinical brain protocol. Image contrast was improved by increasing the number of iterations from 2 to 5 while using 33 subsets. Furthermore, a PET/MR-specific bias was detected, which resulted in underestimation of activity values in anatomical structures closest to the skull, due to the MR-derived attenuation map ignoring bone. Thus, further improvements to PET/MR reconstruction and attenuation correction could be achieved by optimizing RAMLA-specific reconstruction parameters and including bone in the attenuation template.
Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.
Rangan, Aaditya V; Cai, David
2007-02-01
We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models, for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of a single neuron's spike time via polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
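The integrating-factor idea, absorbing the stiff leak-plus-conductance term into an exponential so the voltage update stays stable at large time-steps, can be sketched for a generic conductance-based leaky I&F neuron (a textbook-style exponential update under assumed parameters, not the authors' full scheme with spike-spike corrections):

    import numpy as np

    # dv/dt = -g_tot(t) (v - v_eff(t)), with g_tot = g_L + g_E + g_I and
    # v_eff the conductance-weighted reversal potential. Holding g_tot and
    # v_eff fixed over one step dt gives an update that stays stable even
    # in high-conductance (stiff) states.

    def if_step(v, g_E, g_I, dt, g_L=0.05, V_L=-70.0, V_E=0.0, V_I=-80.0,
                V_th=-55.0, V_reset=-70.0):
        g_tot = g_L + g_E + g_I
        v_eff = (g_L * V_L + g_E * V_E + g_I * V_I) / g_tot
        v_new = v_eff + (v - v_eff) * np.exp(-g_tot * dt)
        spiked = v_new >= V_th
        return np.where(spiked, V_reset, v_new), spiked

    v = np.full(1000, -70.0)              # 1000 neurons, membrane voltage in mV
    g_E = 0.5 * np.random.rand(1000)      # hypothetical excitatory conductances
    v, spiked = if_step(v, g_E, g_I=0.1 * np.ones(1000), dt=1.0)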
Performance of a malaria microscopy image analysis slide reading device.
Prescott, William R; Jordan, Robert G; Grobusch, Martin P; Chinchilli, Vernon M; Kleinschmidt, Immo; Borovsky, Joseph; Plaskow, Mark; Torrez, Miguel; Mico, Maximo; Schwabe, Christopher
2012-05-06
Viewing Plasmodium in Romanovsky-stained blood has long been considered the gold standard for diagnosis and a cornerstone in management of the disease. This method, however, requires a subjective evaluation by trained, experienced diagnosticians, and establishing proficiency of diagnosis is fraught with many challenges. Reported here is an evaluation of a diagnostic system (a "device" consisting of a microscope, a scanner, and a computer algorithm) that evaluates scanned images of standard Giemsa-stained slides and reports species and parasitaemia. The device was challenged with two independent tests: a 55-slide expert slide-reading test whose composition has been published by the World Health Organization (the "WHO55" test), and a second test in which slides were made from a sample of consenting subjects participating in a malaria incidence survey conducted in Equatorial Guinea (the EGMIS test). These subjects' blood was tested by malaria RDT, and the blood smear diagnosis was unequivocally determined by a worldwide panel of at least six reference microscopists. Only slides with unequivocal microscopic diagnoses were used for the device challenge (n = 119). On the WHO55 test, the device scored "Level 4" on the WHO published grading scheme. Broken down by more traditional analysis parameters, this result translates to 89% sensitivity and 70% specificity. Species were correctly identified in 61% of the slides, and the quantification of parasites fell within the acceptable range of the validated parasitaemia in 10% of the cases. On the EGMIS test it scored 100% sensitivity and 94% specificity, with 64% of the species correct and 45% of the parasitaemia counts within an acceptable range. A pooled analysis of the 174 slides used for both tests resulted in an overall 92% sensitivity and 90% specificity, with 61% of species and 19% of quantifications correct. In its current manifestation, the device performs at a level comparable to that of many human slide readers. Because its use requires minimal additional equipment and it uses standard stained slides as starting material, its widespread adoption may eliminate the current uncertainty about the quality of microscopic diagnoses worldwide.
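For reference, the headline accuracy figures reduce to ordinary confusion-matrix arithmetic; a small helper with hypothetical counts (chosen only to roughly reproduce the pooled 92%/90% figures, not taken from the paper):

    def sens_spec(tp, fn, tn, fp):
        """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical split of the 174 pooled slides:
    print(sens_spec(tp=55, fn=5, tn=103, fp=11))   # ~ (0.92, 0.90)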
Yu, Yong-Jie; Wu, Hai-Long; Fu, Hai-Yan; Zhao, Juan; Li, Yuan-Na; Li, Shu-Fang; Kang, Chao; Yu, Ru-Qin
2013-08-09
Chromatographic background drift correction has been an important field of research in chromatographic analysis. In the present work, orthogonal spectral space projection for background drift correction of three-dimensional chromatographic data was described in detail and combined with parallel factor analysis (PARAFAC) to resolve overlapped chromatographic peaks and obtain the second-order advantage. This strategy was verified by simulated chromatographic data and afforded significant improvement in quantitative results. Finally, this strategy was successfully utilized to quantify eleven antibiotics in tap water samples. Compared with the traditional methodology of introducing excessive factors for the PARAFAC model to eliminate the effect of background drift, clear improvement in the quantitative performance of PARAFAC was observed after background drift correction by orthogonal spectral space projection. Copyright © 2013 Elsevier B.V. All rights reserved.
Quantification of mixed chimerism by real time PCR on whole blood-impregnated FTA cards.
Pezzoli, N; Silvy, M; Woronko, A; Le Treut, T; Lévy-Mozziconacci, A; Reviron, D; Gabert, J; Picard, C
2007-09-01
This study investigated the quantification of chimerism in sex-mismatched transplantations by quantitative real-time PCR (RQ-PCR) using FTA paper for blood sampling. First, we demonstrate that the quantification of DNA from EDTA-blood deposited on an FTA card is accurate and reproducible. Secondly, we show that the fraction of recipient cells detected by RQ-PCR was concordant between the FTA method and the salting-out method, the reference DNA extraction method. Furthermore, the sensitivity of detection of recipient cells is similar between the two methods. Our results show that this innovative method can be used for mixed chimerism (MC) assessment by RQ-PCR.
Biniarz, Piotr; Łukaszewicz, Marcin
2017-06-01
The rapid and accurate quantification of biosurfactants in biological samples is challenging. In contrast to the orcinol method for rhamnolipids, no simple biochemical method is available for the rapid quantification of lipopeptides. Various liquid chromatography (LC) methods are promising tools for relatively fast and exact quantification of lipopeptides. Here, we report strategies for the quantification of the lipopeptides pseudofactin and surfactin in bacterial cultures using different high- (HPLC) and ultra-performance liquid chromatography (UPLC) systems. We tested three strategies for sample pretreatment prior to LC analysis. In direct analysis (DA), bacterial cultures were injected directly and analyzed via LC. As a modification, we diluted the samples with methanol and detected an increase in lipopeptide recovery in the presence of methanol. Therefore, we suggest this simple modification as a tool for increasing the accuracy of LC methods. We also tested freeze-drying followed by solvent extraction (FDSE) as an alternative for the analysis of "heavy" samples. In FDSE, the bacterial cultures were freeze-dried, and the resulting powder was extracted with different solvents. Then, the organic extracts were analyzed via LC. Here, we determined the influence of the extracting solvent on lipopeptide recovery. HPLC methods allowed us to quantify pseudofactin and surfactin with run times of 15 and 20 min per sample, respectively, whereas UPLC quantification was as fast as 4 and 5.5 min per sample, respectively. Our methods provide highly accurate measurements and high recovery levels for lipopeptides. At the same time, UPLC-MS provides the possibility to identify lipopeptides and their structural isoforms.
Schmerberg, Claire M; Liang, Zhidan; Li, Lingjun
2015-01-21
Food consumption is an important behavior that is regulated by an intricate array of neuropeptides (NPs). Although many feeding-related NPs have been identified in mammals, precise mechanisms are unclear and difficult to study in mammals, as current methods are not highly multiplexed and require extensive a priori knowledge about analytes. New advances in data-independent acquisition (DIA) MS/MS and the open-source quantification software Skyline have opened up the possibility to identify hundreds of compounds and quantify them from a single DIA MS/MS run. An untargeted DIA MS(E) quantification method using Skyline software for multiplexed, discovery-driven quantification was developed and found to produce linear calibration curves for peptides at physiologically relevant concentrations using a protein digest as internal standard. By using this method, preliminary relative quantification of the crab Cancer borealis neuropeptidome (<2 kDa, 137 peptides from 18 families) was possible in microdialysates from 8 replicate feeding experiments. Of these NPs, 55 were detected with an average mass error below 10 ppm. The time-resolved profiles of relative concentration changes for 6 are shown, and there is great potential for the use of this method in future experiments to aid in correlation of NP changes with behavior. This work presents an unbiased approach to winnowing candidate NPs related to a behavior of interest in a functionally relevant manner, and demonstrates the success of such a UPLC-MS(E) quantification method using the open source software Skyline.
Gallo-Oller, Gabriel; Ordoñez, Raquel; Dotor, Javier
2018-06-01
Since its first description, Western blot has been widely used in molecular biology labs. It is a multistep method that allows the detection and/or quantification of proteins from simple to complex protein mixtures. The quantification step of a Western blot is critical for obtaining accurate and reproducible results. Because of the technical knowledge required for densitometry analysis, together with limited resource availability, standard office scanners are often used for image acquisition of developed Western blot films. Furthermore, the use of semi-quantitative software such as ImageJ (Java-based image-processing and analysis software) is clearly increasing in different scientific fields. In this work, we describe the use of an office scanner coupled with the ImageJ software, together with a new image background subtraction method, for accurate Western blot quantification. The proposed method represents an affordable, accurate and reproducible approach that can be used when resource availability is limited. Copyright © 2018 Elsevier B.V. All rights reserved.
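The densitometric step can be illustrated generically: integrate pixel intensities over a band region and subtract a background level estimated next to the band. The paper's specific subtraction scheme is not reproduced here; this numpy sketch uses hypothetical coordinates on an inverted scan (bands bright):

    import numpy as np

    def band_volume(img, band_rows, band_cols, bg_rows):
        """Integrated band intensity minus a local background estimated from
        rows just outside the band, over the same columns."""
        band = img[band_rows[0]:band_rows[1], band_cols[0]:band_cols[1]]
        bg = img[bg_rows[0]:bg_rows[1], band_cols[0]:band_cols[1]]
        return band.sum() - np.median(bg) * band.size

    scan = 255.0 - np.random.rand(400, 600) * 255.0  # hypothetical inverted scan
    print(band_volume(scan, (100, 140), (50, 150), (150, 170)))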
NASA Astrophysics Data System (ADS)
Mallah, Muhammad Ali; Sherazi, Syed Tufail Hussain; Bhanger, Muhammad Iqbal; Mahesar, Sarfaraz Ahmed; Bajeer, Muhammad Ashraf
2015-04-01
A transmission FTIR spectroscopic method was developed for the direct, inexpensive and fast quantification of paracetamol content in solid pharmaceutical formulations. In this method, paracetamol content is analyzed directly, without solvent extraction. KBr pellets were prepared for the acquisition of FTIR spectra in transmission mode. Two chemometric models, simple Beer's law and partial least squares, employed over the spectral region of 1800-1000 cm(-1) for quantification of paracetamol content, yielded a regression coefficient (R2) of 0.999. The limits of detection and quantification using FTIR spectroscopy were 0.005 mg g(-1) and 0.018 mg g(-1), respectively. An interference study was also performed to check the effect of the excipients; there was no significant interference from the sample matrix. The results clearly demonstrate the sensitivity of the transmission FTIR spectroscopic method for pharmaceutical analysis. The method is green in the sense that it does not require large volumes of hazardous solvents or long run times and avoids prior sample preparation.
Quantitative Method for Simultaneous Analysis of Acetaminophen and 6 Metabolites.
Lammers, Laureen A; Achterbergh, Roos; Pistorius, Marcel C M; Romijn, Johannes A; Mathôt, Ron A A
2017-04-01
Hepatotoxicity after ingestion of high-dose acetaminophen [N-acetyl-para-aminophenol (APAP)] is caused by the metabolites of the drug. To gain more insight into factors influencing susceptibility to APAP hepatotoxicity, quantification of APAP and its metabolites is important. A few methods have been developed to simultaneously quantify APAP and its most important metabolites; however, these methods require comprehensive sample preparation and long run times. The aim of this study was to develop and validate a simplified but sensitive method for the simultaneous quantification of acetaminophen, the main metabolites acetaminophen glucuronide and acetaminophen sulfate, and 4 cytochrome P450-mediated metabolites by using liquid chromatography with mass spectrometric (LC-MS) detection. The method was developed and validated for human plasma and entails a single sample preparation procedure, enabling quick processing of the samples, followed by an LC-MS method with a chromatographic run time of 9 minutes. The method was validated for selectivity, linearity, accuracy, imprecision, dilution integrity, recovery, process efficiency, ionization efficiency, and carryover effect. The method showed good selectivity without matrix interferences. For all analytes, the mean process efficiency was >86%, and the mean ionization efficiency was >94%. Furthermore, the accuracy was between 90.3% and 112% for all analytes, and the within- and between-run imprecision were <20% for the lower limit of quantification and <14.3% for the middle level and upper limit of quantification. The method presented here enables the simultaneous quantification of APAP and 6 of its metabolites. It is less time-consuming than previously reported methods because it requires only a single and simple sample preparation procedure followed by an LC-MS method with a short run time. Therefore, this analytical method provides a useful tool for both clinical and research purposes.
1H NMR quantification in very dilute toxin solutions: application to anatoxin-a analysis.
Dagnino, Denise; Schripsema, Jan
2005-08-01
A complete procedure is described for the extraction, detection and quantification of anatoxin-a in biological samples. Anatoxin-a is extracted from biomass by a routine acid-base extraction. The extract is analysed by GC-MS, without the need for derivatization, with a detection limit of 0.5 ng. A method was developed for the accurate quantification of anatoxin-a in the standard solution to be used for the calibration of the GC analysis. 1H NMR allowed the accurate quantification of microgram quantities of anatoxin-a. The accurate quantification of compounds in standard solutions is rarely discussed, but for compounds like anatoxin-a (toxins with prices in the range of a million dollars a gram), of which generally only milligram quantities or less are available, this factor in the quantitative analysis is certainly not trivial. The method that was developed can easily be adapted for the accurate quantification of other toxins in very dilute solutions.
Kroll, Tina; Elmenhorst, David; Matusch, Andreas; Wedekind, Franziska; Weisshaupt, Angela; Beer, Simone; Bauer, Andreas
2013-08-01
While the selective 5-hydroxytryptamine type 2a receptor (5-HT2AR) radiotracer [18F]altanserin is well established in humans, the present study evaluated its suitability for quantifying cerebral 5-HT2ARs with positron emission tomography (PET) in albino rats. Ten Sprague Dawley rats underwent 180 min PET scans with arterial blood sampling. Reference tissue methods were evaluated on the basis of invasive kinetic models with metabolite-corrected arterial input functions. In vivo 5-HT2AR quantification with PET was validated by in vitro autoradiographic saturation experiments in the same animals. Overall brain uptake of [18F]altanserin was reliably quantified by invasive and non-invasive models with the cerebellum as reference region shown by linear correlation of outcome parameters. Unlike in humans, no lipophilic metabolites occurred so that brain activity derived solely from parent compound. PET data correlated very well with in vitro autoradiographic data of the same animals. [18F]Altanserin PET is a reliable tool for in vivo quantification of 5-HT2AR availability in albino rats. Models based on both blood input and reference tissue describe radiotracer kinetics adequately. Low cerebral tracer uptake might, however, cause restrictions in experimental usage.
NASA Astrophysics Data System (ADS)
Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.
2015-03-01
Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.
Eldib, Mootaz; Bini, Jason; Calcagno, Claudia; Robson, Philip M; Mani, Venkatesh; Fayad, Zahi A
2014-02-01
Attenuation correction for magnetic resonance (MR) coils is a new challenge that came about with the development of combined MR and positron emission tomography (PET) imaging. This task is difficult because such coils are not directly visible on either PET or MR acquisitions with current combined scanners and are therefore not easily localized in the field of view. This issue becomes more evident when trying to localize flexible MR coils (eg, cardiac or body matrix coil) that change position and shape from patient to patient and from one imaging session to another. In this study, we proposed a novel method to localize and correct for the attenuation and scatter of a flexible MR cardiac coil, using MR fiducial markers placed on the surface of the coil to allow for accurate registration of a template computed tomography (CT)-based attenuation map. To quantify the attenuation properties of the cardiac coil, a uniform cylindrical water phantom injected with 18F-fluorodeoxyglucose (18F-FDG) was imaged on a sequential MR/PET system with and without the flexible cardiac coil. After establishing the need to correct for the attenuation of the coil, we tested the feasibility of several methods to register a precomputed attenuation map to correct for the attenuation. To accomplish this, MR and CT visible markers were placed on the surface of the cardiac flexible coil. Using only the markers as a driver for registration, the CT image was registered to the reference image through a combination of rigid and deformable registration. The accuracy of several methods was compared for the deformable registration, including B-spline, thin-plate spline, elastic body spline, and volume spline. Finally, we validated our novel approach both in phantom and patient studies. The findings from the phantom experiments indicated that the presence of the coil resulted in a 10% reduction in measured 18F-FDG activity when compared with the phantom-only scan. Local underestimation reached 22% in regions of interest close to the coil. Various registration methods were tested, and the volume spline was deemed to be the most accurate, as measured by the Dice similarity metric. The results of our phantom experiments showed that the bias in the 18F-FDG quantification introduced by the presence of the coil could be reduced by using our registration method. An overestimation of only 1.9% of the overall activity for the phantom scan with the coil attenuation map was measured when compared with the baseline phantom scan without coil. A local overestimation of less than 3% was observed in the ROI analysis when using the proposed method to correct for the attenuation of the flexible cardiac coil. Quantitative results from the patient study agreed well with the phantom findings. We presented and validated an accurate method to localize and register a CT-based attenuation map to correct for the attenuation and scatter of flexible MR coils. This method may be translated to clinical use to produce quantitatively accurate measurements with the use of flexible MR coils during MR/PET imaging.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-27
... DEPARTMENT OF AGRICULTURE [Docket Number: USDA-2013-0003] Science-Based Methods for Entity-Scale Quantification of Greenhouse Gas Sources and Sinks From Agriculture and Forestry Practices AGENCY: Office of the... of Agriculture (USDA) has prepared a report containing methods for quantifying entity-scale...
[DNA quantification of blood samples pre-treated with pyramidon].
Zhu, Chuan-Hong; Zheng, Dao-Li; Ni, Rao-Zhi; Wang, Hai-Sheng; Ning, Ping; Fang, Hui; Liu, Yan
2014-06-01
To study DNA quantification and STR typing of samples pre-treated with pyramidon. The blood samples of ten unrelated individuals were anticoagulated in EDTA, and blood stains were made on filter paper. The samples were divided into six experimental groups according to storage time after pre-treatment with pyramidon: 30 min, 1 h, 3 h, 6 h, 12 h and 24 h. DNA was extracted by three methods: magnetic bead-based extraction, the QIAcube DNA purification method, and the Chelex-100 method. DNA was quantified by fluorescent quantitative PCR, and STR typing was performed with PCR-STR fluorescent technology. For a given extraction method, the amount of DNA recovered decreased gradually with storage time after pre-treatment with pyramidon. For a given storage time, DNA yields differed significantly among the extraction methods. Full sixteen-locus STR profiles were obtained in 90.56% of samples. Pyramidon pre-treatment causes DNA degradation, but effective STR typing can still be achieved within 24 h. Magnetic bead-based extraction was the best method for DNA extraction and STR profiling.
Simple and rapid quantification of brominated vegetable oil in commercial soft drinks by LC–MS
Chitranshi, Priyanka; da Costa, Gonçalo Gamboa
2016-01-01
We report here a simple and rapid method for the quantification of brominated vegetable oil (BVO) in soft drinks based upon liquid chromatography–electrospray ionization mass spectrometry. Unlike previously reported methods, this novel method does not require hydrolysis, extraction or derivatization steps, but rather a simple “dilute and shoot” sample preparation. The quantification is conducted by mass spectrometry in selected ion recording mode and a single point standard addition procedure. The method was validated in the range of 5–25 μg/mL BVO, encompassing the legal limit of 15 μg/mL established by the US FDA for fruit-flavored beverages in the US market. The method was characterized by excellent intra- and inter-assay accuracy (97.3–103.4%) and very low imprecision [0.5–3.6% (RSD)]. The direct nature of the quantification, simplicity, and excellent statistical performance of this methodology constitute clear advantages in relation to previously published methods for the analysis of BVO in soft drinks. PMID:27451219
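The single-point standard-addition arithmetic behind the quantification is compact; a sketch with hypothetical selected-ion-recording peak areas:

    def standard_addition(resp_sample, resp_spiked, c_spike):
        """Single-point standard addition (no dilution assumed):
        C = C_spike * R_sample / (R_spiked - R_sample)."""
        return c_spike * resp_sample / (resp_spiked - resp_sample)

    # Hypothetical areas; c_spike is the BVO concentration (ug/mL) added
    # to the spiked aliquot:
    print(standard_addition(resp_sample=1.8e5, resp_spiked=3.9e5, c_spike=10.0))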
USDA-ARS's Scientific Manuscript database
High performance liquid chromatography of dabsyl derivatives of amino acids was employed for quantification of physiologic amino acids in cucurbits. This method is particularly useful because the dabsyl derivatives of glutamine and citrulline are sufficiently separated to allow quantification of ea...
Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?
Ershadi, Saba; Shayanfar, Ali
2018-03-22
The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters used to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium-sensitized analysis methods were calculated by different approaches, and the results were compared with the sensitivity parameter [lower limit of quantification (LLOQ)] of the U.S. Food and Drug Administration guidelines. The details of the calibration curves and the standard deviations of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of the LOD and LOQ values calculated by the various approaches with the LLOQ shows a considerable difference. This significant difference between the calculated LOD and LOQ values and the LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
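One common calibration-based definition compared in such studies is LOD = 3.3*sigma/S and LOQ = 10*sigma/S, with sigma the standard deviation of blank responses and S the calibration slope; a minimal sketch with hypothetical readings:

    import numpy as np

    def lod_loq(blank_responses, slope):
        """ICH-style estimates: LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
        sigma = np.std(blank_responses, ddof=1)
        return 3.3 * sigma / slope, 10.0 * sigma / slope

    blanks = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2]  # hypothetical blank signals
    print(lod_loq(blanks, slope=450.0))            # in calibration-curve units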
Han, Yongming; Chen, Antony; Cao, Junji; Fung, Kochy; Ho, Fai; Yan, Beizhan; Zhan, Changlin; Liu, Suixin; Wei, Chong; An, Zhisheng
2013-01-01
Quantifying elemental carbon (EC) content in geological samples is challenging due to interference from crustal, salt, and organic material. Thermal/optical analysis, combined with acid pretreatment, represents a feasible approach. However, the consistency of various thermal/optical analysis protocols for this type of samples has never been examined. In this study, urban street dust and soil samples from Baoji, China, were pretreated with acids and analyzed with four thermal/optical protocols to investigate how analytical conditions and optical correction affect EC measurement. The EC values measured with reflectance correction (ECR) were consistently higher and less sensitive to the temperature program than the EC values measured with transmittance correction (ECT). A high-temperature method with extended heating times (STN120) showed the highest ECT/ECR ratio (0.86), while a low-temperature protocol (IMPROVE-550), with heating time adjusted for sample loading, showed the lowest (0.53). STN ECT was higher than IMPROVE ECT, in contrast to results from aerosol samples. A higher peak inert-mode temperature and extended heating times can elevate ECT/ECR ratios for pretreated geological samples by promoting pyrolyzed organic carbon (PyOC) removal over EC under trace levels of oxygen. Considering that PyOC within the filter increases ECR and decreases ECT relative to the actual EC level, simultaneous ECR and ECT measurements would constrain the range of EC loading and provide information on method performance. Further testing with standard reference materials of common environmental matrices supports the findings. Char and soot fractions of EC can be further separated using the IMPROVE protocol. The char/soot ratio was lower in street dusts (2.2 on average) than in soils (5.2 on average), most likely reflecting motor vehicle emissions. The soot concentrations agreed with EC from CTO-375, a pure thermal method. PMID:24358286
de Castro, Eduardo da S G; Cassella, Ricardo J
2016-05-15
Reference methods for quality control of vaccines usually require treatment of the samples before analysis. These procedures are expensive, time-consuming, unhealthy and require careful manipulation of the sample, making them a potential source of analytical errors. This work proposes a novel method for the quality control of thermostabilizer samples of the yellow fever vaccine employing attenuated total reflectance Fourier transform infrared spectrometry (ATR-FTIR). The main advantage of the proposed method is the possibility of direct determination of the analytes (sodium glutamate and sorbitol) without any pretreatment of the samples. Operational parameters of the FTIR technique, such as the number of accumulated scans and nominal resolution, were evaluated. The best conditions for sodium glutamate were achieved when 64 scans were accumulated using a nominal resolution of 4 cm(-1). The measurements for sodium glutamate were performed at 1347 cm(-1) (baseline correction between 1322 and 1369 cm(-1)). In the case of sorbitol, the measurements were done at 890 cm(-1) (baseline correction between 825 and 910 cm(-1)) using a nominal resolution of 2 cm(-1) with 32 accumulated scans. In both cases, the quantitative variable was the band height. Recovery tests were performed in order to evaluate the accuracy of the method and recovery percentages in the range 93-106% were obtained. Also, the methods were compared with reference methods and no statistical differences were observed. The limits of detection and quantification for sodium glutamate were 0.20 and 0.62% (m/v), respectively, whereas for sorbitol they were 1 and 3.3% (m/v), respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
Taylor, Jonathan Christopher; Fenner, John Wesley
2017-11-29
Semi-quantification methods are well established in the clinic for assisted reporting of (123)I-ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. The machine learning algorithms were based on support vector machine classifiers with three different sets of features: (1) voxel intensities; (2) principal components of image voxel intensities; and (3) striatal binding ratios (SBRs) from the putamen and caudate. The semi-quantification methods were based on SBRs from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (1) the minimum of age-matched controls; (2) the mean minus 1/1.5/2 standard deviations of age-matched controls; (3) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); and (4) selection of the optimum operating point on the receiver operating characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92 and between 0.95 and 0.97 for local and PPMI data, respectively. Classification performance was lower for the local database than for the research database for both the semi-quantitative and machine learning algorithms. However, for both databases, the machine learning methods generated equal or higher mean accuracies (with lower variance) than any of the semi-quantification approaches. The gain in performance from using machine learning algorithms over semi-quantification was relatively small and may be insufficient, when considered in isolation, to offer significant advantages in the clinical context.
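A compact sketch of the machine-learning arm, an SVM on SBR features evaluated with stratified, repeated nested cross-validation, assuming scikit-learn and hypothetical arrays (the kernel and parameter grid are illustrative choices, not the paper's exact configuration):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                         cross_val_score)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 4))       # [putamen L/R, caudate L/R] SBRs
    y = rng.integers(0, 2, size=120)    # hypothetical diagnostic labels

    inner = GridSearchCV(SVC(kernel="linear"), param_grid={"C": [0.1, 1, 10]},
                         cv=5)                        # inner loop tunes C
    outer = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
    scores = cross_val_score(inner, X, y, cv=outer)   # nested 10-fold x 10
    print(scores.mean(), scores.std())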
Lau, Billy T; Ji, Hanlee P
2017-09-21
RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes, commonly in the form of random nucleotides, were recently introduced to improve gene expression measures by detecting amplification duplicates, but they are susceptible to errors generated during PCR and sequencing. This results in false-positive counts, leading to inaccurate transcriptome quantification, especially at low input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed that random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We describe the first study to use transposable molecular barcodes and their use for studying random-mer molecular barcode errors. The extensive errors found in random-mer molecular barcodes may warrant the use of error-correcting barcodes for transcriptome analysis as input amounts decrease.
Isak, I; Patel, M; Riddell, M; West, M; Bowers, T; Wijeyekoon, S; Lloyd, J
2016-08-01
Fourier transform infrared (FTIR) spectroscopy was used in this study for the rapid quantification of polyhydroxyalkanoates (PHA) in mixed and pure culture bacterial biomass. Three different statistical analysis methods (regression, partial least squares (PLS) and nonlinear) were applied to the FTIR data and the results were plotted against the PHA values measured with the reference gas chromatography technique. All methods predicted PHA content in mixed culture biomass with comparable efficiency, as indicated by similar residual values. The PHA content in these cultures ranged from low to medium concentration (0-44 wt% of dried biomass). However, for the analysis of the combined mixed and pure culture biomass, with PHA concentrations ranging from low to high (0-93% of dried biomass), the PLS method was the most efficient. This paper reports, for the first time, the use of a single calibration model constructed from a combination of mixed and pure cultures covering a wide PHA range for predicting PHA content in biomass. Currently, no universal method exists for processing FTIR data for polyhydroxyalkanoate (PHA) quantification. This study compares three different methods of analysing FTIR data for quantification of PHAs in biomass. A new data-processing approach was proposed and the results were compared against existing literature methods. Most publications report PHA quantification over a medium concentration range in pure cultures; in our study, the calibration curve encompassed both mixed and pure culture biomass containing a broader range of PHA. The resulting prediction model is useful for rapid quantification of a wider range of PHA content in biomass. © 2016 The Society for Applied Microbiology.
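Of the statistical treatments compared, the PLS branch is easily sketched with scikit-learn (the spectra and reference GC values below are synthetic placeholders; the component count would in practice be tuned by cross-validation):

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(1)
    X = rng.normal(size=(60, 900))   # FTIR absorbances (samples x wavenumbers)
    y = rng.uniform(0, 93, size=60)  # PHA wt% from reference GC

    pls = PLSRegression(n_components=8)
    y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
    print(np.sqrt(np.mean((y - y_cv) ** 2)))   # RMSECV in wt% PHA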
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siman, W.; Mikell, J. K.; Kappadath, S. C.
Purpose: To develop a practical background compensation (BC) technique to improve quantitative 90Y-bremsstrahlung single-photon emission computed tomography (SPECT)/computed tomography (CT) using a commercially available imaging system. Methods: All images were acquired using medium-energy collimation in six energy windows (EWs), ranging from 70 to 410 keV. The EWs were determined based on the signal-to-background ratio in planar images of an acrylic phantom of different thicknesses (2–16 cm) positioned below a 90Y source and set at different distances (15–35 cm) from a gamma camera. The authors adapted the widely used EW-based scatter-correction technique by modeling the BC as scaled images. The BC EW was determined empirically in SPECT/CT studies using an IEC phantom based on the sphere activity recovery and residual activity in the cold lung insert. The scaling factor was calculated from 20 clinical planar 90Y images. Reconstruction parameters were optimized in the same SPECT images for improved image quantification and contrast. A count-to-activity calibration factor was calculated from 30 clinical 90Y images. Results: The authors found that the most appropriate imaging EW range was 90–125 keV. BC was modeled as 0.53× images in the EW of 310–410 keV. The background-compensated clinical images had higher image contrast than uncompensated images. The maximum deviation of their SPECT calibration in clinical studies was lowest (<10%) for SPECT with attenuation correction (AC) and SPECT with AC + BC. Using the proposed SPECT-with-AC + BC reconstruction protocol, the authors found that the recovery coefficient of a 37-mm sphere (in a 10-mm volume of interest) increased from 39% to 90% and that the residual activity in the lung insert decreased from 44% to 14% relative to SPECT images with AC alone. Conclusions: The proposed EW-based BC model was developed for 90Y bremsstrahlung imaging. SPECT with AC + BC gave improved lesion detectability and activity quantification compared to SPECT with AC only. The proposed methodology can readily be used to tailor 90Y SPECT/CT acquisition and reconstruction protocols with different SPECT/CT systems for quantification and improved image quality in clinical settings.
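The scaled-window compensation amounts to subtracting a scaled copy of the upper-window data from the imaging-window data before reconstruction; a numpy sketch using the windows and scaling factor reported above (the projection arrays are hypothetical):

    import numpy as np

    def background_compensate(main_ew, upper_ew, k=0.53):
        """Subtract k x upper-window counts (310-410 keV) from the imaging
        window (90-125 keV), clipping negative values."""
        return np.clip(main_ew - k * upper_ew, 0.0, None)

    # Hypothetical projection data (angles x rows x cols):
    main = np.random.poisson(50.0, size=(60, 64, 64)).astype(float)
    upper = np.random.poisson(20.0, size=(60, 64, 64)).astype(float)
    corrected = background_compensate(main, upper)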
Magota, Keiichi; Shiga, Tohru; Asano, Yukari; Shinyama, Daiki; Ye, Jinghan; Perkins, Amy E; Maniawski, Piotr J; Toyonaga, Takuya; Kobayashi, Kentaro; Hirata, Kenji; Katoh, Chietsugu; Hattori, Naoya; Tamaki, Nagara
2017-12-01
In 3-dimensional PET/CT imaging of the brain with 15O-gas inhalation, high radioactivity in the face mask creates cold artifacts and affects the quantitative accuracy when scatter is corrected by conventional methods (e.g., single-scatter simulation [SSS] with tail-fitting scaling [TFS-SSS]). Here we examined the validity of a newly developed scatter-correction method that combines SSS with a scaling factor calculated by Monte Carlo simulation (MCS-SSS). Methods: We performed phantom experiments and patient studies. In the phantom experiments, a plastic bottle simulating a face mask was attached to a cylindric phantom simulating the brain. The cylindric phantom was filled with 18F-FDG solution (3.8-7.0 kBq/mL). The bottle was filled with nonradioactive air or various levels of 18F-FDG (0-170 kBq/mL). Images were corrected either by TFS-SSS or MCS-SSS using the CT data of the bottle filled with nonradioactive air. We compared the image activity concentration in the cylindric phantom with the true activity concentration. We also performed 15O-gas brain PET based on the steady-state method on patients with cerebrovascular disease to obtain quantitative images of cerebral blood flow and oxygen metabolism. Results: In the phantom experiments, a cold artifact was observed immediately next to the bottle on TFS-SSS images, where the image activity concentrations in the cylindric phantom were underestimated by 18%, 36%, and 70% at bottle radioactivity levels of 2.4, 5.1, and 9.7 kBq/mL, respectively. At higher bottle radioactivity, the image activity concentrations in the cylindric phantom were underestimated by more than 98%. For MCS-SSS, in contrast, the error was within 5% at each bottle radioactivity level, although the image showed slight high-activity artifacts around the bottle when the bottle contained significantly high radioactivity. In patient imaging with 15O2 and C15O2 inhalation, cold artifacts were observed on TFS-SSS images, whereas no artifacts were observed on any of the MCS-SSS images. Conclusion: MCS-SSS accurately corrected the scatter in 15O-gas brain PET when the 3-dimensional acquisition mode was used, preventing the generation of cold artifacts, which were observed immediately next to a face mask on TFS-SSS images. The MCS-SSS method will contribute to accurate quantitative assessments. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Chen, Xing; Pavan, Matteo; Heinzer-Schweizer, Susanne; Boesiger, Peter; Henning, Anke
2012-01-01
This report describes our efforts on quantification of tissue metabolite concentrations in mM by nuclear Overhauser enhanced and proton decoupled (13)C magnetic resonance spectroscopy and the Electric Reference To access In vivo Concentrations (ERETIC) method. Previous work showed that a calibrated synthetic magnetic resonance spectroscopy-like signal transmitted through an optical fiber and inductively coupled into a transmit/receive coil represents a reliable reference standard for in vivo (1)H magnetic resonance spectroscopy quantification on a clinical platform. In this work, we introduce a related implementation that enables simultaneous proton decoupling and ERETIC-based metabolite quantification and hence extends the applicability of the ERETIC method to nuclear Overhauser enhanced and proton decoupled in vivo (13)C magnetic resonance spectroscopy. In addition, ERETIC signal stability under the influence of simultaneous proton decoupling is investigated. The proposed quantification method was cross-validated against internal and external reference standards on human skeletal muscle. The ERETIC signal intensity stability was 100.65 ± 4.18% over 3 months including measurements with and without proton decoupling. Glycogen and unsaturated fatty acid concentrations measured with the ERETIC method were in excellent agreement with internal creatine and external phantom reference methods, showing a difference of 1.85 ± 1.21% for glycogen and 1.84 ± 1.00% for unsaturated fatty acid between ERETIC and creatine-based quantification, whereas the deviations between external reference and creatine-based quantification are 6.95 ± 9.52% and 3.19 ± 2.60%, respectively. Copyright © 2011 Wiley Periodicals, Inc.
Zhou, Yun; Sojkova, Jitka; Resnick, Susan M; Wong, Dean F
2012-04-01
Both the standardized uptake value ratio (SUVR) and the Logan plot result in biased distribution volume ratios (DVRs) in ligand-receptor dynamic PET studies. The objective of this study was to use a recently developed relative equilibrium-based graphical (RE) plot method to improve and simplify the 2 commonly used methods for quantification of (11)C-Pittsburgh compound B ((11)C-PiB) PET. The overestimation of DVR in SUVR was analyzed theoretically using the Logan and RE plots. A bias-corrected SUVR (bcSUVR) was derived from the RE plot. Seventy-eight (11)C-PiB dynamic PET scans (66 from controls and 12 from participants with mild cognitive impairment [MCI] from the Baltimore Longitudinal Study of Aging) were acquired over 90 min. Regions of interest (ROIs) were defined on coregistered MR images. Both the ROI and the pixelwise time-activity curves were used to evaluate the estimates of DVR. DVRs obtained using the Logan plot applied to ROI time-activity curves were used as a reference for comparison of DVR estimates. Results from the theoretic analysis were confirmed by the human studies. ROI estimates from the RE plot and the bcSUVR were nearly identical to those from the Logan plot with ROI time-activity curves. In contrast, ROI estimates from DVR images in frontal, temporal, parietal, and cingulate regions and the striatum were underestimated by the Logan plot (controls, 4%-12%; MCI, 9%-16%) and overestimated by the SUVR (controls, 8%-16%; MCI, 16%-24%). This bias was higher in the MCI group than in controls (P < 0.01) but was not present when data were analyzed using either the RE plot or the bcSUVR. The RE plot improves pixelwise quantification of (11)C-PiB dynamic PET compared with the conventional Logan plot. The bcSUVR results in lower bias and higher consistency of DVR estimates than SUVR. The RE plot and the bcSUVR are practical quantitative approaches that improve the analysis of (11)C-PiB studies.
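For orientation, a reference-tissue Logan estimate of DVR reduces to a late-time linear regression; a numpy sketch on hypothetical time-activity curves (the k2'-dependent term is neglected, as is common once the plot is linear after t*):

    import numpy as np

    def logan_dvr(t, ct, cref, t_star=30.0):
        """Regress int(ct)/ct on int(cref)/ct for t >= t_star; slope ~ DVR."""
        cum = lambda y: np.concatenate(
            ([0.0], np.cumsum(np.diff(t) * 0.5 * (y[1:] + y[:-1]))))
        m = t >= t_star
        slope, _ = np.polyfit(cum(cref)[m] / ct[m], cum(ct)[m] / ct[m], 1)
        return slope

    t = np.arange(0.0, 90.0, 2.0)                  # frame mid-times, minutes
    cref = t * np.exp(-t / 40.0)                   # hypothetical reference TAC
    ct = 1.5 * cref + 0.2 * t * np.exp(-t / 60.0)  # hypothetical target TAC
    print(logan_dvr(t, ct, cref))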
Alves, L P S; Almeida, A T; Cruz, L M; Pedrosa, F O; de Souza, E M; Chubatsu, L S; Müller-Santos, M; Valdameri, G
2017-01-16
The conventional method for quantification of polyhydroxyalkanoates, based on whole-cell methanolysis and gas chromatography (GC), is laborious and time-consuming. In this work, a method based on flow cytometry of Nile red-stained bacterial cells was established to quantify poly-3-hydroxybutyrate (PHB) production by the diazotrophic and plant-associated bacteria Herbaspirillum seropedicae and Azospirillum brasilense. The method consists of three steps: i) cell permeabilization, ii) Nile red staining, and iii) analysis by flow cytometry. The method was optimized step-by-step and can be carried out in less than 5 min. The final results showed a high correlation (R2=0.99) with a standard method based on methanolysis and GC. This method was successfully applied to the quantification of PHB in epiphytic bacteria isolated from rice roots.
Wang, Hanghang; Muehlbauer, Michael J.; O’Neal, Sara K.; Newgard, Christopher B.; Hauser, Elizabeth R.; Shah, Svati H.
2017-01-01
The field of metabolomics as applied to human disease and health is rapidly expanding. In recent efforts of metabolomics research, greater emphasis has been placed on quality control and method validation. In this study, we report an experience with quality control and a practical application of method validation. Specifically, we sought to identify and modify steps in gas chromatography-mass spectrometry (GC-MS)-based, non-targeted metabolomic profiling of human plasma that could influence metabolite identification and quantification. Our experimental design included two studies: (1) a limiting-dilution study, which investigated the effects of dilution on analyte identification and quantification; and (2) a concentration-specific study, which compared the optimal plasma extract volume established in the first study with the volume used in the current institutional protocol. We confirmed that contaminants, concentration, repeatability and intermediate precision are major factors influencing metabolite identification and quantification. In addition, we established methods for improved metabolite identification and quantification, which were summarized to provide recommendations for experimental design of GC-MS-based non-targeted profiling of human plasma. PMID:28841195
Rapid and Easy Protocol for Quantification of Next-Generation Sequencing Libraries.
Hawkins, Steve F C; Guest, Paul C
2018-01-01
The emergence of next-generation sequencing (NGS) over the last 10 years has increased the efficiency of DNA sequencing in terms of speed, ease, and price. However, exact quantification of an NGS library is crucial in order to obtain good data on sequencing platforms developed by the current market leader Illumina. Different approaches for DNA quantification are currently available, and the most commonly used are based on analysis of the physical properties of the DNA through spectrophotometric or fluorometric methods. Although these methods are technically simple, they do not allow exact quantification as can be achieved using a real-time quantitative PCR (qPCR) approach. A qPCR protocol for DNA quantification with applications in NGS library preparation studies is presented here. This can be applied in various fields of study, such as medical disorders resulting from nutritional programming disturbances.
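The qPCR quantification itself reduces to a standard curve, a linear fit of Cq against log10 concentration that is inverted for unknowns; a sketch with hypothetical dilution data:

    import numpy as np

    def fit_standard_curve(conc, cq):
        """Fit Cq = slope*log10(conc) + intercept; also return the
        amplification efficiency E = 10**(-1/slope) - 1."""
        slope, intercept = np.polyfit(np.log10(conc), cq, 1)
        return slope, intercept, 10.0 ** (-1.0 / slope) - 1.0

    def quantify(cq, slope, intercept):
        """Invert the standard curve to get concentration from a sample Cq."""
        return 10.0 ** ((cq - intercept) / slope)

    conc = np.array([10.0, 1.0, 0.1, 0.01])   # standard series, pM (hypothetical)
    cq = np.array([12.1, 15.5, 18.8, 22.2])   # measured Cq (hypothetical)
    slope, intercept, eff = fit_standard_curve(conc, cq)
    print(eff, quantify(17.0, slope, intercept))   # library concentration in pM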
NASA Astrophysics Data System (ADS)
Thomas, Benjamin A.; Cuplov, Vesna; Bousse, Alexandre; Mendes, Adriana; Thielemans, Kris; Hutton, Brian F.; Erlandsson, Kjell
2016-11-01
Positron emission tomography (PET) images are degraded by a phenomenon known as the partial volume effect (PVE). Approaches have been developed to reduce PVEs, typically through the utilisation of structural information provided by other imaging modalities such as MRI or CT. These methods, known as partial volume correction (PVC) techniques, reduce PVEs by compensating for the effects of the scanner resolution, thereby improving the quantitative accuracy. The PETPVC toolbox described in this paper comprises a suite of methods, both classic and more recent approaches, for applying PVC to PET data. Eight core PVC techniques are available. These core methods can be combined to create a total of 22 different PVC techniques. Simulated brain PET data are used to demonstrate the utility of the toolbox in idealised conditions, the effects of applying PVC with mismatched point-spread function (PSF) estimates, and the potential of novel hybrid PVC methods to improve the quantification of lesions. All anatomy-based PVC techniques achieve complete recovery of the PET signal in cortical grey matter (GM) when performed in idealised conditions. Applying deconvolution-based approaches results in incomplete recovery due to premature termination of the iterative process. PVC techniques are sensitive to PSF mismatch, causing a bias of up to 16.7% in GM recovery when over-estimating the PSF by 3 mm. The recovery of both GM and a simulated lesion was improved by combining two PVC techniques. The PETPVC toolbox has been written in C++, supports Windows, Mac and Linux operating systems, is open-source and publicly available.
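As one concrete member of the deconvolution-based family the toolbox covers, an iterative Van Cittert update with a Gaussian PSF fits in a few lines (scipy assumed; this generic sketch is not the PETPVC code, and the PSF width is illustrative):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def van_cittert(pet, fwhm_mm, voxel_mm, alpha=1.0, n_iter=10):
        """Iterative deconvolution f_{k+1} = f_k + alpha*(g - PSF*f_k);
        keeping n_iter small limits noise amplification."""
        sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / np.asarray(voxel_mm)
        f = pet.copy()
        for _ in range(n_iter):
            f = f + alpha * (pet - gaussian_filter(f, sigma))
        return f

    img = np.random.rand(64, 64, 32)   # hypothetical PET volume
    corrected = van_cittert(img, fwhm_mm=6.0, voxel_mm=(2.0, 2.0, 2.0))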
Ghedira, Rim; Papazova, Nina; Vuylsteke, Marnik; Ruttink, Tom; Taverniers, Isabel; De Loose, Marc
2009-10-28
GMO quantification, based on real-time PCR, relies on the amplification of an event-specific transgene assay and a species-specific reference assay. The uniformity of the nucleotide sequences targeted by both assays across various transgenic varieties is an important prerequisite for correct quantification. Single nucleotide polymorphisms (SNPs) frequently occur in the maize genome and might lead to nucleotide variation in regions used to design primers and probes for reference assays. Further, they may affect the annealing of the primer to the template and reduce the efficiency of DNA amplification. We assessed the effect of a minor DNA template modification, such as a single base pair mismatch in the primer attachment site, on real-time PCR quantification. A model system was used based on the introduction of artificial mismatches between the forward primer and the DNA template in the reference assay targeting the maize starch synthase (SSIIb) gene. The results show that the presence of a mismatch between the primer and the DNA template causes partial to complete failure of the amplification of the initial DNA template depending on the type and location of the nucleotide mismatch. With this study, we show that the presence of a primer/template mismatch affects the estimated total DNA quantity to a varying degree.
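The quantitative consequence of such a mismatch can be framed through the delay it induces in the quantification cycle: at per-cycle efficiency E, a delay of dCq cycles makes only (1 + E)**(-dCq) of the true template appear to be present. A one-line illustration (the two-cycle delay is hypothetical):

    def apparent_fraction(delta_cq, efficiency=1.0):
        """Apparent/true template ratio for a mismatch that delays
        amplification by delta_cq cycles (E = 1 means perfect doubling)."""
        return (1.0 + efficiency) ** (-delta_cq)

    print(apparent_fraction(2.0))   # 0.25: a 2-cycle delay looks like 25%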
A conceptually and computationally simple method for the definition, display, quantification, and comparison of the shapes of three-dimensional mathematical molecular models is presented. Molecular or solvent-accessible volume and surface area can also be calculated. Algorithms, ...
Shimizu, Eri; Kato, Hisashi; Nakagawa, Yuki; Kodama, Takashi; Futo, Satoshi; Minegishi, Yasutaka; Watanabe, Takahiro; Akiyama, Hiroshi; Teshima, Reiko; Furui, Satoshi; Hino, Akihiro; Kitta, Kazumi
2008-07-23
A novel type of quantitative competitive polymerase chain reaction (QC-PCR) system for the detection and quantification of Roundup Ready soybean (RRS) was developed. This system was designed to take advantage of a fully validated real-time PCR method used for the quantification of RRS in Japan. A plasmid was constructed as a competitor for the detection and quantification of the genetically modified soy RRS. The plasmid contained the construct-specific sequence of RRS and the taxon-specific sequence of lectin1 (Le1), each carrying a 21 bp oligonucleotide insertion. The plasmid DNA was used as a reference molecule instead of ground seeds, which enabled us to precisely and stably adjust the copy number of targets. The present study demonstrated that the novel plasmid-based QC-PCR method could be a simple and feasible alternative to the real-time PCR method used for the quantification of genetically modified organism contents.
Piñeiro, Zulema; Cantos-Villar, Emma; Palma, Miguel; Puertas, Belen
2011-11-09
A validated HPLC method with fluorescence detection for the simultaneous quantification of hydroxytyrosol and tyrosol in red wines is described. Detection conditions for both compounds were optimized (excitation at 279 and 278 nm and emission at 631 and 598 nm for hydroxytyrosol and tyrosol, respectively). The validation of the analytical method was based on selectivity, linearity, robustness, detection and quantification limits, repeatability, and recovery. The detection and quantification limits in red wines were set at 0.023 and 0.076 mg L(-1) for hydroxytyrosol and at 0.007 and 0.024 mg L(-1) for tyrosol, respectively. Precision values, both within-day and between-day (n = 5), remained below 3% for both compounds. In addition, a fractional factorial experimental design was developed to analyze the influence of six different conditions on the analysis. The final optimized HPLC-fluorescence method allowed the analysis of 30 nonpretreated Spanish red wines to evaluate their hydroxytyrosol and tyrosol contents.
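For context, detection and quantification limits of this kind are often estimated from the residual standard deviation and slope of a calibration line (the ICH 3.3σ/S and 10σ/S convention). A minimal sketch with hypothetical calibration data follows; it is not the validation data of this study.

```python
import numpy as np

# Hypothetical calibration data: concentration (mg/L) vs fluorescence area.
conc = np.array([0.05, 0.1, 0.5, 1.0, 2.0, 5.0])
area = np.array([11.0, 21.5, 104.0, 209.0, 415.0, 1042.0])

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
s_res = residuals.std(ddof=2)  # residual standard deviation of the fit

# ICH-style estimates: LOD = 3.3*s/slope, LOQ = 10*s/slope.
lod = 3.3 * s_res / slope
loq = 10.0 * s_res / slope
print(f"LOD = {lod:.3f} mg/L, LOQ = {loq:.3f} mg/L")
```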
Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra
2018-02-01
The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. To date, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed, which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), matrix effects, and recovery have to be approached differently. The highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Höhener, Patrick
2014-05-01
Chlorinated solvent spills at industrial and urban sites create groundwater plumes in which tetrachloro- and trichloroethene may degrade to their daughter compounds, the dichloroethenes, vinyl chloride and ethene. The assessment of degradation and natural attenuation at such sites may be based on the analysis and inverse modelling of concentration data, on the calculation of mass fluxes in transects, and/or on the analysis of stable isotope ratios in the ethenes. Relatively little work has investigated the possibility of using concentration ratios to gain information on degradation rates. The use of ratios has the advantage that dilution of a single sample with contaminant-free water does not matter. It will be shown that molar ratios of daughter to parent compounds measured along a plume streamline are a rapid and robust means of determining whether degradation rates increase or decrease along the degradation chain, and furthermore allow quantification of the relative magnitude of degradation rates compared with the rate of the parent compound. Furthermore, concentration ratios become constant in zones where degradation is absent, which makes it possible to delineate the extent of actively degrading zones. The assessment is possible for pure sources and also for mixed sources. A quantification method is proposed in order to estimate first-order degradation rates in zones of constant degradation activity. This quantification method includes corrections that are needed owing to longitudinal and transversal dispersivity. The method was tested on a number of real field sites from the literature. At the majority of these sites, the first-order degradation rates decreased along the degradation chain from tetrachloroethene to vinyl chloride, meaning that vinyl chloride often reached substantial concentrations. This is bad news for site owners because of the increased toxicity of vinyl chloride compared with its parent compounds.
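The behaviour of daughter:parent ratios described above can be reproduced with the textbook solution for sequential first-order decay along a streamline. The sketch below ignores the dispersivity corrections the author includes; the rate constants and travel times are hypothetical.

```python
import numpy as np

def daughter_parent_ratio(t, k_parent, k_daughter):
    """Molar daughter:parent ratio for sequential first-order decay
    (plug flow along a streamline, no dispersion, daughter absent at t=0)."""
    dk = k_daughter - k_parent
    if abs(dk) < 1e-12:  # degenerate case of equal rates
        return k_parent * t
    return k_parent / dk * (1.0 - np.exp(-dk * t))

# Hypothetical rates (1/yr): if the daughter degrades more slowly than the
# parent, the ratio grows along the flow path; where degradation stops, the
# ratio tends to a constant, which is how active zones are mapped.
for t in [1.0, 5.0, 20.0]:
    print(f"t = {t:4.1f} yr: TCE/PCE ratio = "
          f"{daughter_parent_ratio(t, k_parent=0.5, k_daughter=0.2):.2f}")
```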
Nava, Nicoletta; Chen, Fenghua; Wegener, Gregers; Popoli, Maurizio; Nyengaard, Jens Randel
2014-02-01
Communication between neurons is mediated by the release of neurotransmitter-containing vesicles from presynaptic terminals. Quantitative characterization of synaptic vesicles can be highly valuable for understanding mechanisms underlying synaptic function and plasticity. We performed a quantitative ultrastructural analysis of cortical excitatory synapses by means of a new, efficient method, as an alternative to three-dimensional (3D) reconstruction. Based on a hierarchical sampling strategy and unequivocal identification of the region of interest, serial sections from excitatory synapses of the medial prefrontal cortex (mPFC) of six Sprague-Dawley rats were acquired with a transmission electron microscope. Unbiased estimates of the total 3D volume of synaptic terminals were obtained with the Cavalieri estimator, and adequate correction factors for vesicle profile number estimation were applied for final vesicle quantification. Our analysis was based on 79 excitatory synapses of the nonperforated (NPS) and perforated (PS) subtypes. We found that the total numbers of docked and reserve-pool vesicles in PSs significantly exceeded those in NPSs (by 77% and 78%, respectively). These differences were found to be related to differences in size between the two subtypes (active zone area by 86%; bouton volume by 105%) rather than to postsynaptic density shape. Significant positive correlations were found between the number of docked and reserve-pool vesicles, between active zone area and docked vesicles, and between bouton volume and reserve-pool vesicles. Our method confirmed the large size of mPFC PSs and a linear correlation between presynaptic features typical of hippocampal synapses. Moreover, the greater number of docked vesicles in PSs may promote a high synaptic strength of these synapses. Copyright © 2013 Wiley Periodicals, Inc.
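The Cavalieri estimator used here reduces, in its point-counting form, to V = T · (a/p) · ΣP, where T is the section spacing, a/p the area associated with each grid point, and ΣP the total point count over all sections. A minimal sketch with purely illustrative numbers:

```python
# Cavalieri volume estimation: V = T * (a/p) * sum(P_i), where T is the
# section spacing, a/p the area per test point, and P_i the points hitting
# the structure on section i. All numbers below are illustrative.
section_spacing_um = 0.2                 # distance between sections (T)
area_per_point_um2 = 0.01                # grid area per point (a/p)
points_per_section = [3, 7, 12, 9, 4]    # counts over a synaptic bouton

volume_um3 = section_spacing_um * area_per_point_um2 * sum(points_per_section)
print(f"estimated bouton volume: {volume_um3:.3f} um^3")
```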
Zugaj, D; Chenet, A; Petit, L; Vaglio, J; Pascual, T; Piketty, C; Bourdes, V
2018-02-04
Currently, imaging technologies that can accurately assess or provide surrogate markers of the human cutaneous microvessel network are limited. Dynamic optical coherence tomography (D-OCT) allows the detection of blood flow in vivo and visualization of the skin microvasculature. However, image processing is necessary to correct images, filter artifacts, and exclude irrelevant signals. The objective of this study was to develop a novel image processing workflow to enhance the technical capabilities of D-OCT. This was a single-center, vehicle-controlled study including healthy volunteers aged 18-50 years. A capsaicin solution was applied topically on the subject's forearm to induce local inflammation. Measurements of the capsaicin-induced increase in dermal blood flow within the region of interest were performed by laser Doppler imaging (LDI) (the reference method) and D-OCT. Sixteen subjects were enrolled. A good correlation was shown between D-OCT and LDI using the image processing workflow. Therefore, D-OCT offers an easy-to-use alternative to LDI, with good repeatability, new robust morphological features (dermal-epidermal junction localization), and quantification of the distribution of vessel size and of changes in this distribution induced by capsaicin. The visualization of the vessel network was improved through block filtering and artifact removal. Moreover, the assessment of vessel size distribution allows a fine analysis of the vascular patterns. The newly developed image processing workflow enhances the technical capabilities of D-OCT for the accurate detection and characterization of microcirculation in the skin. A direct clinical application of this image processing workflow is the quantification of the effect of topical treatment on skin vascularization. © 2018 The Authors. Skin Research and Technology Published by John Wiley & Sons Ltd.
Yang, Qi; Duan, Jiangang; Fan, Zhaoyang; Qu, Xiaofeng; Xie, Yibin; Nguyen, Christopher; Du, Xiangying; Bi, Xiaoming; Li, Kuncheng; Ji, Xunming; Li, Debiao
2015-01-01
Background and Purpose: Early diagnosis of cerebral venous and sinus thrombosis (CVT) is currently a major clinical challenge. We proposed a novel MR black-blood thrombus imaging technique (MRBTI) for the detection and quantification of CVT. Methods: MRBTI was performed in 23 patients with proven CVT and 24 patients with negative CVT confirmed by conventional imaging techniques. Patients were divided into two groups based on the duration of clinical onset: ≤7 days (group 1) and between 7 and 30 days (group 2). The signal-to-noise ratio (SNR) was calculated for the detected thrombus, and the contrast-to-noise ratio (CNR) was measured between thrombus and lumen, and also between thrombus and brain tissue. The feasibility of using MRBTI for thrombus volume measurement was explored, and total thrombus volume was calculated for each patient. Results: In the 23 patients with proven CVT, MRBTI correctly identified 113 out of 116 segments, for a sensitivity of 97.4%. Thrombus SNR was 153 ± 57 and 261 ± 95 for group 1 (n = 10) and group 2 (n = 13), respectively (P < 0.01). Thrombus-to-lumen CNR was 149 ± 57 and 256 ± 94 for groups 1 and 2, and thrombus-to-brain-tissue CNR was 41 ± 36 and 120 ± 63 (P < 0.01), respectively. Quantification of thrombus volume was successfully conducted in all patients with CVT, and the mean thrombus volume was 10.5 ± 6.9 cc. Conclusions: The current findings support that, with effectively suppressed blood signal, MRBTI allows selective visualization of thrombus as opposed to indirect detection of venous flow perturbation and can be used as a promising first-line diagnostic imaging tool. PMID:26670082
Reiter, Rolf; Wetzel, Martin; Hamesch, Karim; Strnad, Pavel; Asbach, Patrick; Haas, Matthias; Siegmund, Britta; Trautwein, Christian; Hamm, Bernd; Klatt, Dieter; Braun, Jürgen; Sack, Ingolf; Tzschätzsch, Heiko
2018-01-01
Although it has been known for decades that patients with alpha1-antitrypsin deficiency (AATD) have an increased risk of cirrhosis and hepatocellular carcinoma, limited data exist on non-invasive imaging-based methods for assessing liver fibrosis, such as magnetic resonance elastography (MRE) and acoustic radiation force impulse (ARFI) quantification, and no data exist on 2D-shear wave elastography (2D-SWE). Therefore, the purpose of this study is to evaluate and compare the applicability of different elastography methods for the assessment of AATD-related liver fibrosis. Fifteen clinically asymptomatic AATD patients (11 homozygous PiZZ, 4 heterozygous PiMZ) and 16 matched healthy volunteers were examined using MRE and ARFI quantification. Additionally, patients were examined with 2D-SWE. A high correlation is evident for the shear wave speed (SWS) determined with the different elastography methods in AATD patients: 2D-SWE/MRE, ARFI quantification/2D-SWE, and ARFI quantification/MRE (R = 0.8587, 0.7425, and 0.6914, respectively; P ≤ 0.0089). Four AATD patients with pathologically increased SWS were consistently identified with all three methods (MRE, ARFI quantification, and 2D-SWE). The high correlation and consistent identification of patients with pathologically increased SWS using MRE, ARFI quantification, and 2D-SWE suggest that elastography has the potential to become a suitable imaging tool for the assessment of AATD-related liver fibrosis. These promising results provide motivation for further investigation of non-invasive assessment of AATD-related liver fibrosis using elastography.
Otani, Kyoko; Nakazono, Akemi; Salgo, Ivan S; Lang, Roberto M; Takeuchi, Masaaki
2016-10-01
Echocardiographic determination of left heart chamber volumetric parameters by using manual tracings during multiple beats is tedious in atrial fibrillation (AF). The aim of this study was to determine the usefulness of fully automated left chamber quantification software with single-beat three-dimensional transthoracic echocardiographic data sets in patients with AF. Single-beat full-volume three-dimensional transthoracic echocardiographic data sets were prospectively acquired during consecutive multiple cardiac beats (≥10 beats) in 88 patients with AF. In protocol 1, left ventricular volumes, left ventricular ejection fraction, and maximal left atrial volume were validated using automated quantification against the manual tracing method in identical beats in 10 patients. In protocol 2, automated quantification-derived averaged values from multiple beats were compared with the corresponding values obtained from the indexed beat in all patients. Excellent correlations of left chamber parameters between automated quantification and the manual method were observed (r = 0.88-0.98) in protocol 1. The time required for the analysis with the automated quantification method (5 min) was significantly less compared with the manual method (27 min) (P < .0001). In protocol 2, there were excellent linear correlations between the averaged left chamber parameters and the corresponding values obtained from the indexed beat (r = 0.94-0.99), and test-retest variability of left chamber parameters was low (3.5%-4.8%). Three-dimensional transthoracic echocardiography with fully automated quantification software is a rapid and reliable way to measure averaged values of left heart chamber parameters during multiple consecutive beats. Thus, it is a potential new approach for left chamber quantification in patients with AF in daily routine practice. Copyright © 2016 American Society of Echocardiography. Published by Elsevier Inc. All rights reserved.
90Y Liver Radioembolization Imaging Using Amplitude-Based Gated PET/CT.
Osborne, Dustin R; Acuff, Shelley; Neveu, Melissa; Kaman, Austin; Syed, Mumtaz; Fu, Yitong
2017-05-01
The use of PET/CT to monitor patients with hepatocellular carcinoma following 90Y radioembolization has increased; however, image quality is often poor because of low count efficiency and respiratory motion. Motion can be corrected using gating techniques, but at the expense of additional image noise. Amplitude-based gating has been shown to improve quantification in FDG PET, but few have used this technique in 90Y liver imaging. The patients shown in this work indicate that amplitude-based gating can be used in 90Y PET/CT liver imaging to provide motion-corrected images with higher estimates of activity concentration that may improve posttherapy dosimetry.
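In outline, amplitude-based gating keeps only the list-mode events acquired while the respiratory waveform lies inside a chosen amplitude window. The sketch below illustrates the idea on a synthetic trace; the waveform, the duty cycle, and the choice of the lowest-amplitude (end-expiration-like) window are assumptions for demonstration, not the vendor implementation used in this work.

```python
import numpy as np

def amplitude_gate(event_times, resp_times, resp_amplitude, duty_cycle=0.35):
    """Keep list-mode events acquired while the respiratory amplitude lies
    in the window covering the lowest `duty_cycle` fraction of samples."""
    amp_at_event = np.interp(event_times, resp_times, resp_amplitude)
    lo, hi = np.quantile(resp_amplitude, [0.0, duty_cycle])
    return (amp_at_event >= lo) & (amp_at_event <= hi)

# Hypothetical 0.25 Hz breathing trace and uniformly distributed events.
t = np.linspace(0, 60, 6000)
amp = np.sin(2 * np.pi * 0.25 * t)
events = np.random.default_rng(0).uniform(0, 60, 100000)
keep = amplitude_gate(events, t, amp)
print(f"accepted {keep.mean():.0%} of events")
```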
Development for 2D pattern quantification method on mask and wafer
NASA Astrophysics Data System (ADS)
Matsuoka, Ryoichi; Mito, Hiroaki; Toyoda, Yasutaka; Wang, Zhigang
2010-03-01
We have developed an effective method for two-dimensional metrology of masks and silicon wafers. The aim of this method is to evaluate the performance of the silicon pattern corresponding to a hotspot on a mask. The method adopts a metrology management system based on DBM (Design Based Metrology), using highly accurate contours created by the edge detection algorithms of mask CD-SEM and silicon CD-SEM. As semiconductor manufacturing moves towards ever smaller feature sizes, increasingly aggressive optical proximity correction (OPC) is needed to drive resolution enhancement technology (RET). In other words, there is a trade-off between highly precise RET and mask manufacturability, and this has a large impact on the semiconductor market, which centers on the mask business. Two-dimensional shape quantification offers an optimal solution to these problems. Although one-dimensional shape measurement has been performed with conventional techniques, two-dimensional shape management is needed in mass-production lines influenced by RET. We developed a technique for analyzing the distribution of shape-edge performance as a shape-management technique. In addition, silicon shapes produced on a mass-production line exhibit roughness, and the silicon shape varies; for this reason, quantification of the silicon shape is important in order to estimate the performance of a pattern. A common approach to such quantification is to average identical shapes in two dimensions and to evaluate the pattern on the basis of the averaged shape. In this study, we conducted experiments on a pattern-averaging method (Measurement Based Contouring) as a two-dimensional mask and silicon evaluation technique; that is, we observed identical positions on the mask and on the silicon, which makes it possible to analyze the edge variability at the same position with high precision. The results proved the detection accuracy and reliability of the method for two-dimensional pattern variability (mask and silicon) and showed that it is applicable to the following fields of mask quality management: estimation of the correlation between shape variability and the process margin; determination of the two-dimensional variability of a pattern; and verification of the performance of patterns at various kinds of hotspots. In this report, we introduce the experimental results and their application. We expect that mask measurement and shape control in mask production will make a large contribution to mask yield enhancement, and that DFM solutions for the mask quality control process will become ever more important. From this viewpoint, it is very important to observe the shape of the same location in design, mask, and silicon.
Standardless quantification by parameter optimization in electron probe microanalysis
NASA Astrophysics Data System (ADS)
Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.
2012-11-01
A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively.
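The core of such an approach is ordinary least-squares optimization of a parametric spectrum model. The sketch below fits a toy spectrum (linear background plus two Gaussian lines) with scipy; it is a schematic stand-in for POEMA's far more detailed analytical model, and all parameters are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

energy = np.linspace(0.0, 10.0, 500)  # keV, toy energy axis

def model(params, e):
    """Bremsstrahlung-like linear background plus two characteristic lines."""
    b0, b1, a1, mu1, a2, mu2, sig = params
    bg = np.clip(b0 + b1 * e, 0, None)
    line1 = a1 * np.exp(-0.5 * ((e - mu1) / sig) ** 2)
    line2 = a2 * np.exp(-0.5 * ((e - mu2) / sig) ** 2)
    return bg + line1 + line2

# Synthesize a noisy "measured" spectrum from known true parameters.
true = np.array([50.0, -4.0, 400.0, 3.7, 150.0, 6.4, 0.08])
rng = np.random.default_rng(1)
observed = rng.poisson(model(true, energy)).astype(float)

# Minimize the quadratic differences between model and measurement.
start = np.array([40.0, -3.0, 300.0, 3.6, 100.0, 6.5, 0.1])
fit = least_squares(lambda p: model(p, energy) - observed, start)
print("relative peak amplitude (toy proxy for concentration):",
      fit.x[2] / (fit.x[2] + fit.x[4]))
```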
Lamb Wave Damage Quantification Using GA-Based LS-SVM.
Sun, Fuqiang; Wang, Ning; He, Jingjing; Guan, Xuefei; Yang, Jinsong
2017-06-12
Lamb waves have been reported to be an efficient tool for non-destructive evaluation (NDE) in various application scenarios. However, accurate and reliable damage quantification using the Lamb wave method is still a practical challenge due to the complex underlying mechanisms of Lamb wave propagation and damage detection. This paper presents a Lamb wave damage quantification method using a least squares support vector machine (LS-SVM) and a genetic algorithm (GA). Three damage-sensitive features, namely, normalized amplitude, phase change, and correlation coefficient, were proposed to describe the changes in Lamb wave characteristics caused by damage. In view of commonly used data-driven methods, a GA-based LS-SVM model using the proposed three damage-sensitive features was implemented to evaluate the crack size. The GA was adopted to optimize the model parameters. The results of the GA-based LS-SVM were validated using coupon test data and lap joint component test data with naturally developed fatigue cracks. Cases with different loading conditions and manufacturers were also included to further verify the robustness of the proposed method for crack quantification.
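In outline, LS-SVM regression reduces training to a single linear system, and the GA only searches the hyperparameter space. A compact sketch follows, using synthetic feature/crack-size data and a stripped-down GA (selection plus Gaussian mutation, no crossover); it illustrates the approach, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = M[1:, 0] = 1.0
    M[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf(Xq, X, sigma) @ alpha + b

# Toy "damage features -> crack size" data (hypothetical noisy relation).
X = rng.uniform(0, 1, (40, 3))
y = 5 * X[:, 0] ** 2 + 2 * X[:, 1] - X[:, 2] + rng.normal(0, 0.1, 40)
Xtr, ytr, Xval, yval = X[:30], y[:30], X[30:], y[30:]

def fitness(genes):  # genes = (log10 gamma, log10 sigma)
    pred = lssvm_fit(Xtr, ytr, 10 ** genes[0], 10 ** genes[1])(Xval)
    return -np.mean((pred - yval) ** 2)  # higher is better

# Tiny GA: keep the best half, perturb to create children, repeat.
pop = rng.uniform(-2, 2, (20, 2))
for _ in range(25):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-10:]]
    pop = np.vstack([parents, parents + rng.normal(0, 0.2, parents.shape)])
best = pop[np.argmax([fitness(g) for g in pop])]
print(f"best gamma = {10**best[0]:.2f}, sigma = {10**best[1]:.2f}")
```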
Powder X-ray diffraction method for the quantification of cocrystals in the crystallization mixture.
Padrela, Luis; de Azevedo, Edmundo Gomes; Velaga, Sitaram P
2012-08-01
The solid state purity of cocrystals critically affects their performance. Thus, it is important to accurately quantify the purity of cocrystals in the final crystallization product. The aim of this study was to develop a powder X-ray diffraction (PXRD) quantification method for investigating the purity of cocrystals. The method developed was employed to study the formation of indomethacin-saccharin (IND-SAC) cocrystals by mechanochemical methods. Pure IND-SAC cocrystals were geometrically mixed with a 1:1 (w/w) mixture of indomethacin/saccharin in various proportions. An accurately measured amount (550 mg) of the mixture was used for the PXRD measurements. The most intense, non-overlapping, characteristic diffraction peak of IND-SAC was used to construct the calibration curve in the range 0-100% (w/w). This calibration model was validated and used to monitor the formation of IND-SAC cocrystals by liquid-assisted grinding (LAG). The IND-SAC cocrystal calibration curve showed excellent linearity (R(2) = 0.9996) over the entire concentration range, displaying limit of detection (LOD) and limit of quantification (LOQ) values of 1.23% (w/w) and 3.74% (w/w), respectively. Validation results showed excellent correlations between actual and predicted concentrations of IND-SAC cocrystals (R(2) = 0.9981). The accuracy and reliability of the PXRD quantification method depend on the methods of sample preparation and handling. The crystallinity of the IND-SAC cocrystals was higher when larger amounts of methanol were used in the LAG method. The PXRD quantification method is suitable and reliable for verifying the purity of cocrystals in the final crystallization product.
NASA Technical Reports Server (NTRS)
Benek, John A.; Luckring, James M.
2017-01-01
A NATO symposium held in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications were not known. The STO Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic problems of interest to NATO. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper presents an overview of the AVT-191 program content.
NASA Technical Reports Server (NTRS)
Benek, John A.; Luckring, James M.
2017-01-01
A NATO symposium held in Greece in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications were not clear. The NATO Science and Technology Organization Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic vehicle development problems. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper summarizes findings and lessons learned from the task group.
[Detection of recombinant-DNA in foods from stacked genetically modified plants].
Sorokina, E Iu; Chernyshova, O N
2012-01-01
A quantitative real-time multiplex polymerase chain reaction method was applied to the detection and quantification of MON863 and MON810 in the stacked genetically modified maize MON810×MON863. The limit of detection was approximately 0.1%. The accuracy of the quantification, measured as the bias from the accepted value, and the relative repeatability standard deviation, which measures intra-laboratory variability, were within 25% at each GM level. Method verification demonstrated that the MON863 and MON810 assays can be equally applied to the quantification of the respective events in the stacked MON810×MON863 maize.
Source separation on hyperspectral cube applied to dermatology
NASA Astrophysics Data System (ADS)
Mitra, J.; Jolivot, R.; Vabres, P.; Marzani, F. S.
2010-03-01
This paper proposes a method for quantification of the components underlying human skin that are assumed to be responsible for the effective reflectance spectrum of the skin over the visible wavelength range. The method is based on independent component analysis, assuming that the epidermal melanin and dermal haemoglobin absorbance spectra are independent of each other. The method extracts source spectra that correspond to the ideal absorbance spectra of melanin and haemoglobin. The noisy melanin spectrum is corrected using a polynomial fit, and the quantifications associated with it are re-estimated. The results produce feasible quantifications of each source component in the examined skin patch.
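A minimal sketch of this kind of source separation is shown below, using synthetic stand-ins for the melanin and haemoglobin absorbance spectra and scikit-learn's FastICA; the spectra, mixing concentrations, and noise level are all assumed for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
wl = np.linspace(450, 700, 251)  # visible wavelengths in nm

# Synthetic stand-ins for the two chromophore absorbance spectra.
melanin = np.exp(-(wl - 450) / 120.0)                    # smooth decay
haemoglobin = (np.exp(-0.5 * ((wl - 542) / 12) ** 2)
               + np.exp(-0.5 * ((wl - 577) / 10) ** 2))  # double peak
sources = np.vstack([melanin, haemoglobin])              # (2, n_wl)

# Each "pixel" mixes the two spectra with unknown concentrations.
concentrations = rng.uniform(0.1, 1.0, (500, 2))         # (n_pixels, 2)
observed = concentrations @ sources + rng.normal(0, 0.01, (500, len(wl)))

# Treat wavelengths as samples: ICA recovers the source spectra (up to
# sign/scale/order), and the mixing matrix gives per-pixel quantifications.
ica = FastICA(n_components=2, random_state=0)
est_spectra = ica.fit_transform(observed.T)  # (n_wl, 2) source spectra
est_conc = ica.mixing_                       # (n_pixels, 2) quantifications
print(est_spectra.shape, est_conc.shape)
```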
NASA Astrophysics Data System (ADS)
Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun
2015-01-01
Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N,N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group (a triazine ester), are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods were validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
Experimental validation of a multi-energy x-ray adapted scatter separation method
NASA Astrophysics Data System (ADS)
Sossin, A.; Rebuffel, V.; Tabary, J.; Létang, J. M.; Freud, N.; Verger, L.
2016-12-01
Both in radiography and computed tomography (CT), recently emerged energy-resolved x-ray photon counting detectors enable the identification and quantification of individual materials comprising the inspected object. However, the approaches used for these operations require highly accurate x-ray images. The accuracy of the images is severely compromised by the presence of scattered radiation, which leads to a loss of spatial contrast and, more importantly, a bias in radiographic material imaging and artefacts in CT. The aim of the present study was to experimentally evaluate a recently introduced partial attenuation spectral scatter separation approach (PASSSA) adapted for multi-energy imaging. For this purpose, a prototype x-ray system was used. Several radiographic acquisitions of an anthropomorphic thorax phantom were performed. Reference primary images were obtained via the beam-stop (BS) approach. The attenuation images acquired from PASSSA-corrected data showed a substantial increase in local contrast and internal structure contour visibility when compared to uncorrected images. A substantial reduction of scatter induced bias was also achieved. Quantitatively, the developed method proved to be in relatively good agreement with the BS data. The application of the proposed scatter correction technique lowered the initial normalized root-mean-square error (NRMSE) of 45% between the uncorrected total and the reference primary spectral images by a factor of 9, thus reducing it to around 5%.
Bayesian power spectrum inference with foreground and target contamination treatment
NASA Astrophysics Data System (ADS)
Jasche, J.; Lavaux, G.
2017-10-01
This work presents a joint and self-consistent Bayesian treatment of various foreground and target contaminations when inferring cosmological power spectra and three-dimensional density fields from galaxy redshift surveys. This is achieved by introducing additional block-sampling procedures for unknown coefficients of foreground and target contamination templates to the previously presented ARES framework for Bayesian large-scale structure analyses. As a result, the method infers jointly and fully self-consistently three-dimensional density fields, cosmological power spectra, luminosity-dependent galaxy biases, noise levels of the respective galaxy distributions, and coefficients for a set of a priori specified foreground templates. In addition, this fully Bayesian approach permits detailed quantification of correlated uncertainties amongst all inferred quantities and correctly marginalizes over observational systematic effects. We demonstrate the validity and efficiency of our approach in obtaining unbiased estimates of power spectra via applications to realistic mock galaxy observations that are subject to stellar contamination and dust extinction. While simultaneously accounting for galaxy biases and unknown noise levels, our method reliably and robustly infers three-dimensional density fields and corresponding cosmological power spectra from deep galaxy surveys. Furthermore, our approach correctly accounts for joint and correlated uncertainties between unknown coefficients of foreground templates and the amplitudes of the power spectrum. This effect amounts to correlations and anti-correlations of up to 10 per cent across wide ranges in Fourier space.
Kato, Megumi; Yamazaki, Taichi; Kato, Hisashi; Eyama, Sakae; Goto, Mari; Yoshioka, Mariko; Takatsu, Akiko
2015-01-01
To ensure the reliability of amino acid analyses, the National Metrology Institute of Japan of the National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) has developed high-purity certified reference materials (CRMs) for 17 proteinogenic amino acids. These CRMs are intended for use as primary reference materials to enable the traceable quantification of amino acids. The purity of the present CRMs was determined by two traceable methods: nonaqueous acidimetric titration and nitrogen determination by the Kjeldahl method. Since neither method can distinguish compounds with similar structures, such as amino acid-related impurities, the impurities were thoroughly quantified by combining several HPLC methods, and their contribution was subtracted from the purity obtained by each method. The property value of each amino acid was calculated as a weighted mean of the corrected purities from the two methods. The uncertainty of the property value was obtained by combining the measurement uncertainties of the two methods, the difference between the two methods, the uncertainty from the contribution of impurities, and the uncertainty derived from inhomogeneity. The uncertainty derived from instability was considered negligible based on stability monitoring of some CRMs. The certified value of each amino acid, i.e., the property value with its uncertainty, was given both with and without enantiomeric separation.
2011-01-01
Purpose: Eddy current induced velocity offsets are of concern for accuracy in cardiovascular magnetic resonance (CMR) volume flow quantification. However, currently known theoretical aspects of eddy current behavior have not led to effective guidelines for the optimization of flow quantification sequences. This study aimed at identifying correlations between protocol parameters and the resulting velocity error in clinical CMR flow measurements in a multi-vendor study. Methods: Nine 1.5T scanners of three different types/vendors were studied. Measurements were performed on a large stationary phantom. Starting from a clinical breath-hold flow protocol, several protocol parameters were varied. Acquisitions were made in three clinically relevant orientations. Additionally, a time delay between the bipolar gradient and read-out, asymmetric versus symmetric velocity encoding, and gradient amplitude and slew rate were studied in adapted sequences as exploratory measurements beyond the protocol. Image analysis determined the worst-case offset for a typical great-vessel flow measurement. Results: The results showed great variation in offset behavior among scanners (standard deviation among samples of 0.3, 0.4, and 0.9 cm/s for the three different scanner types), even for small changes in the protocol. In absolute terms, none of the tested protocol settings consistently reduced the velocity offsets below the critical level of 0.6 cm/s, either for all three orientations or for all three scanner types. In a multilevel linear model analysis, oblique aortic and pulmonary slices showed systematically higher offsets than transverse aortic slices (oblique aortic 0.6 cm/s, and pulmonary 1.8 cm/s, higher than transverse aortic). The exploratory measurements beyond the protocol yielded some new leads for further sequence development towards reduction of velocity offsets; however, those protocols were not always compatible with the time constraints of breath-hold imaging and were prone to flow-related artefacts. Conclusions: This study showed that with current systems there is no generic protocol that results in acceptable flow offset values. Protocol optimization would have to be performed on a per-scanner and per-protocol basis. Proper optimization might make accurate (transverse) aortic flow quantification possible for most scanners. Pulmonary flow quantification would still need further (offline) correction. PMID:21388521
Zhou, Bin; Chang, Jun; Wang, Ping; Li, Jie; Cheng, Dan; Zheng, Peng-Wu
2014-01-01
Morinda officinalis has long been used as a Yang-tonic agent in China, and its quality needs to be evaluated. A double-development high-performance thin-layer chromatography (HPTLC) method was established to simultaneously analyze, qualitatively and quantitatively, seven inulin-type oligosaccharides (DP = 3-9) in Morinda officinalis. The chromatography was performed on a silica gel 60 plate with n-butanol-isopropanol-water-acetic acid (7:5:2:1, v/v) as the mobile phase for both the first and second developments. The bands were visualized by reaction with aniline-diphenylamine-phosphoric acid solution, and quantification of the seven oligosaccharides was achieved by densitometry at 540 nm. The investigated standard sugars showed good linearity (R2 > 0.99) within the test ranges. The amounts of the seven oligosaccharides were calculated using relative correction factors (RCF). The developed TLC method can therefore be used for quality control of Morinda officinalis.
Shivanandan, Arun; Unnikrishnan, Jayakrishnan; Radenovic, Aleksandra
2015-01-01
Single Molecule Localization Microscopy techniques like PhotoActivated Localization Microscopy, with their sub-diffraction-limit spatial resolution, have been popularly used to characterize the spatial organization of membrane proteins by means of quantitative cluster analysis. However, such quantitative studies remain challenged by the techniques' inherent sources of error, such as a limited detection efficiency of less than 60%, due to incomplete photo-conversion, and a limited localization precision in the range of 10-30 nm, varying across the detected molecules, depending mainly on the number of photons collected from each. We provide analytical methods to estimate the effect of these errors in cluster analysis and to correct for them. These methods, based on the Ripley's L(r) - r or Pair Correlation Function popularly used by the community, can facilitate potentially breakthrough results in quantitative biology by providing a more accurate and precise quantification of protein spatial organization. PMID:25794150
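For reference, the uncorrected Ripley's statistic underlying these methods can be computed directly from the localization coordinates. The sketch below omits edge correction and the error-correction terms that are the paper's actual contribution; the point pattern is simulated complete spatial randomness (CSR).

```python
import numpy as np

def ripley_l_minus_r(points, radii, area):
    """Naive Ripley's L(r) - r (no edge correction) for 2-D point data.

    K(r) = area/(n(n-1)) * #{ordered pairs with distance <= r};
    L(r) = sqrt(K(r)/pi). Under CSR, L(r) - r is approximately 0,
    while clustering yields positive values.
    """
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d[np.diag_indices(n)] = np.inf  # exclude self-pairs
    k = np.array([area * (d <= r).sum() / (n * (n - 1)) for r in radii])
    return np.sqrt(k / np.pi) - radii

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1000, (500, 2))       # CSR "localizations" in nm
radii = np.array([20.0, 50.0, 100.0, 200.0])
print(ripley_l_minus_r(pts, radii, area=1000.0 * 1000.0))
```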
Ramsey, David J; Sunness, Janet S; Malviya, Poorva; Applegate, Carol; Hager, Gregory D; Handa, James T
2014-07-01
To develop a computer-based image segmentation method for standardizing the quantification of geographic atrophy (GA). The authors present an automated image segmentation method based on the fuzzy c-means clustering algorithm for the detection of GA lesions. The method is evaluated by comparing computerized segmentation against outlines of GA drawn by an expert grader for a longitudinal series of fundus autofluorescence images with paired 30° color fundus photographs for 10 patients. The automated segmentation method showed excellent agreement with an expert grader for fundus autofluorescence images, achieving a performance level of 94 ± 5% sensitivity and 98 ± 2% specificity on a per-pixel basis for the detection of GA area, but performed less well on color fundus photographs with a sensitivity of 47 ± 26% and specificity of 98 ± 2%. The segmentation algorithm identified 75 ± 16% of the GA border correctly in fundus autofluorescence images compared with just 42 ± 25% for color fundus photographs. The results of this study demonstrate a promising computerized segmentation method that may enhance the reproducibility of GA measurement and provide an objective strategy to assist an expert in the grading of images.
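The underlying fuzzy c-means step alternates between membership and centroid updates. Below is a minimal 1-D intensity version with simulated pixel values (GA appears hypoautofluorescent, i.e. dark, on fundus autofluorescence); the full pipeline in the paper operates on 2-D images with additional pre- and post-processing.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means on a 1-D intensity vector.

    u[i, k] is the membership of pixel i in cluster k; centers are
    membership-weighted means of the intensities.
    """
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, n_clusters, replace=False).astype(float)
    for _ in range(n_iter):
        dist = np.abs(x[:, None] - centers[None, :]) + 1e-9
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
        centers = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)
    return centers, u

# Hypothetical fundus autofluorescence intensities: dark GA pixels plus
# brighter background pixels.
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(40, 8, 2000), rng.normal(120, 15, 8000)])
centers, u = fuzzy_c_means(pixels)
ga_mask = u[:, np.argmin(centers)] > 0.5   # assign pixels to the dark cluster
print(f"centers = {np.sort(centers).round(1)}, GA fraction = {ga_mask.mean():.2f}")
```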
Vu, Dai Long; Ranglová, Karolína; Hájek, Jan; Hrouzek, Pavel
2018-05-01
Quantification of selenated amino acids currently relies on methods employing inductively coupled plasma mass spectrometry (ICP-MS). Although very accurate, these methods do not allow the simultaneous determination of standard amino acids, hampering comparison of the content of selenated versus non-selenated species such as methionine (Met) and selenomethionine (SeMet). This paper reports two approaches for the simultaneous quantification of Met and SeMet. In the first approach, standard enzymatic hydrolysis employing Protease XIV was applied for the preparation of samples. The second approach utilized methanesulfonic acid (MA) for the hydrolysis of samples, either in a reflux system or in a microwave oven, followed by derivatization with diethyl ethoxymethylenemalonate. The prepared samples were then analyzed by multiple reaction monitoring high performance liquid chromatography tandem mass spectrometry (MRM-HPLC-MS/MS). Both approaches provided platforms for the accurate determination of the selenium/sulfur substitution rate in Met. Moreover, the second approach also provided accurate simultaneous quantification of Met and SeMet with a low limit of detection, low limit of quantification and wide linearity range, comparable to the commonly used gas chromatography mass spectrometry (GC-MS) method or ICP-MS. The novel method was validated using certified reference material in conjunction with the GC-MS reference method. Copyright © 2018. Published by Elsevier B.V.
Testing 3D landform quantification methods with synthetic drumlins in a real digital elevation model
NASA Astrophysics Data System (ADS)
Hillier, John K.; Smith, Mike J.
2012-06-01
Metrics such as height and volume quantifying the 3D morphology of landforms are important observations that reflect and constrain Earth surface processes. Errors in such measurements are, however, poorly understood. A novel approach, using statistically valid 'synthetic' landscapes to quantify the errors, is presented. The utility of the approach is illustrated using a case study of 184 drumlins observed in Scotland as quantified from a Digital Elevation Model (DEM) by the 'cookie cutter' extraction method. To create the synthetic DEMs, observed drumlins were removed from the measured DEM and replaced by elongate 3D Gaussian ones of equivalent dimensions positioned randomly with respect to the 'noise' (e.g. trees) and regional trends (e.g. hills) that cause the errors. Then, errors in the cookie cutter extraction method were investigated by using it to quantify these 'synthetic' drumlins, whose location and size are known. Thus, the approach determines which key metrics are recovered accurately. For example, mean height of 6.8 m is recovered poorly, at 12.5 ± 0.6 (2σ) m, but mean volume is recovered correctly. Additionally, quantification methods can be compared: a variant on the cookie cutter using an un-tensioned spline induced about twice (×1.79) as much error. Finally, a previously reported statistically significant (p = 0.007) difference in mean volume between sub-populations of different ages, which may reflect formational processes, is demonstrated to be only 30-50% likely to exist in reality. Critically, the synthetic DEMs are demonstrated to realistically model parameter recovery, primarily because they are still almost entirely the original landscape. Results are insensitive to the exact method used to create the synthetic DEMs, and the approach could be readily adapted to assess a variety of landforms (e.g. craters, dunes and volcanoes).
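The key construction, superimposing an elongate 3D Gaussian 'drumlin' at a random position and orientation in a DEM, can be sketched as follows; the grid size, cell size, and drumlin dimensions are illustrative, not those of the Scottish case study.

```python
import numpy as np

def add_gaussian_drumlin(dem, cell_size, height, len_a, len_b, seed=None):
    """Superimpose an elongate Gaussian hill on a DEM (returns a copy).

    len_a/len_b are the along/across-flow 1-sigma half-widths in metres;
    position and azimuth are drawn at random, as in the synthetic-DEM test.
    """
    rng = np.random.default_rng(seed)
    ny, nx = dem.shape
    x0, y0 = rng.uniform(0, nx * cell_size), rng.uniform(0, ny * cell_size)
    theta = rng.uniform(0, np.pi)
    y, x = np.mgrid[0:ny, 0:nx] * cell_size
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return dem + height * np.exp(-0.5 * ((xr / len_a) ** 2 + (yr / len_b) ** 2))

# Hypothetical 5 m DEM of flat terrain plus one synthetic drumlin.
dem = np.zeros((400, 400))
synthetic = add_gaussian_drumlin(dem, cell_size=5.0, height=7.0,
                                 len_a=150.0, len_b=60.0, seed=42)
print(f"max added relief: {synthetic.max():.1f} m")
```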
Pyschik, Marcelina; Klein-Hitpaß, Marcel; Girod, Sabrina; Winter, Martin; Nowak, Sascha
2017-02-01
In this study, an optimized method using capillary electrophoresis (CE) with a capacitively coupled contactless conductivity detector (C4D) is presented for a new application field: the quantification of fluoride in commonly used lithium ion battery (LIB) electrolytes based on LiPF6 in organic carbonate solvents, and in ionic liquids (ILs) after contact with Li metal. Method development to find a suitable buffer and appropriate CE conditions for the quantification of fluoride is described. The fluoride concentrations determined in different LIB electrolyte samples were compared with results from an ion-selective electrode (ISE). The relative standard deviations (RSDs) and recovery rates for fluoride were obtained with very high accuracy by both methods, and the fluoride concentrations in the LIB electrolytes were in very good agreement between the two. In addition, the limit of detection (LOD) and limit of quantification (LOQ) were determined for the CE method. The CE method was also applied to the quantification of fluoride in ILs. In the fresh IL sample, the fluoride concentration was below the LOD, whereas in a sample of the IL mixed with Li metal it was possible to quantify the fluoride concentration. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos
2014-06-01
Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR, applied to dilutions of an integration standard and to samples from 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need for a standard dilution curve. Implementation of confidence interval (CI) estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
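The central Poisson step is short: if target copies are distributed randomly across replicates, the fraction of negative reactions estimates exp(-λ). A minimal sketch follows, with hypothetical replicate counts and a delta-method confidence interval (the paper's exact CI procedure may differ).

```python
import math

def poisson_quantify(n_negative, n_total):
    """Mean target copies per reaction from replicate +/- calls.

    If targets are Poisson-distributed across replicates, the fraction of
    negative reactions is exp(-lambda), so lambda = -ln(n_neg/n_total).
    A delta-method 95% CI on lambda is included.
    """
    p_neg = n_negative / n_total
    lam = -math.log(p_neg)
    se = math.sqrt((1 - p_neg) / (n_total * p_neg))  # SE of lambda
    return lam, (lam - 1.96 * se, lam + 1.96 * se)

# Hypothetical: 17 of 42 Alu-HIV PCR replicates are negative.
lam, ci = poisson_quantify(17, 42)
print(f"lambda = {lam:.2f} copies/reaction, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```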
Developing population models with data from marked individuals
Hae Yeong Ryu; Kevin T. Shoemaker; Eva Kneip; Anna Pidgeon; Patricia Heglund; Brooke Bateman; Thogmartin, Wayne E.; Reşit Akçakaya
2016-01-01
Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirement for fully specified population models, including demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources (notably, mark-recapture studies) remains complicated due to the lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark-recapture dataset. Unlike standard mark-recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark-recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
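Once such a model is specified, the projection step itself is straightforward. The sketch below runs a toy two-stage (juvenile/adult) stochastic matrix model with lognormal year-to-year variability; the vital rates, variability, and quasi-decline threshold are hypothetical and unrelated to the MAPS estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(n0, years, surv_juv, surv_ad, fecundity, cv=0.15, reps=1000):
    """Project a 2-stage (juvenile/adult) stochastic matrix model.

    Vital rates vary lognormally among years (sigma ~ cv as a rough
    approximation); survival draws are capped at 1.
    """
    finals = np.empty(reps)
    for r in range(reps):
        n = np.array(n0, dtype=float)
        for _ in range(years):
            sj = min(1.0, surv_juv * rng.lognormal(0, cv))
            sa = min(1.0, surv_ad * rng.lognormal(0, cv))
            f = fecundity * rng.lognormal(0, cv)
            A = np.array([[0.0, f],     # adults produce juveniles
                          [sj,  sa]])   # juveniles mature, adults survive
            n = A @ n
        finals[r] = n.sum()
    return finals

# Hypothetical vital rates for a small passerine population.
finals = project(n0=[50, 100], years=20,
                 surv_juv=0.3, surv_ad=0.55, fecundity=1.2)
print(f"median N(20) = {np.median(finals):.0f}, "
      f"P(decline) = {(finals < 150).mean():.2f}")
```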
NASA Astrophysics Data System (ADS)
Asselin, Marie-Claude; Cunningham, Vincent J.; Amano, Shigeko; Gunn, Roger N.; Nahmias, Claude
2004-03-01
A non-invasive alternative to arterial blood sampling for the generation of a blood input function for brain positron emission tomography (PET) studies is presented. The method aims to extract the dimensions of the blood vessel directly from PET images and to simultaneously correct the radioactivity concentration for partial volume and spillover. This involves simulating the tomographic imaging process to generate images of different blood vessel and background geometries and selecting the one that best fits, in a least-squares sense, the acquired PET image. A phantom experiment was conducted to validate the method, which was then applied to eight subjects injected with 6-[18F]fluoro-L-DOPA and one subject injected with [11C]CO-labelled red blood cells. In the phantom study, the diameters of syringes filled with an 11C solution and inserted into a water-filled cylinder were estimated with an accuracy of half a pixel (1 mm). The radioactivity concentration was recovered to 100 ± 4% in the 8.7 mm diameter syringe, the one that most closely approximated the superior sagittal sinus. In the human studies, the method systematically overestimated the calibre of the superior sagittal sinus by 2-3 mm compared with measurements made in magnetic resonance venograms of the same subjects. Sources of discrepancy related to the anatomy of the blood vessel were found not to be fundamental limitations to the applicability of the method to human subjects. This method has the potential to provide accurate quantification of blood radioactivity concentration from PET images without the need for blood samples, corrections for delay and dispersion, co-registered anatomical images, or manually defined regions of interest.
Uncertainty Quantification in Alchemical Free Energy Methods.
Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V
2018-06-12
Alchemical free energy methods have gained much importance recently from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods, implementing different accelerated sampling techniques and free energy estimators, are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation (an ensemble of independent MD simulations), which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of the molecular dynamics simulations performed.
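The ensemble recipe is simple to state: run k independent replicas, take the mean, and let the spread across replicas define the error. A minimal sketch with hypothetical replica free energies and a bootstrap confidence interval:

```python
import numpy as np

def ensemble_estimate(delta_gs, n_boot=10000, seed=0):
    """Mean and bootstrap 95% CI over an ensemble of replica estimates.

    `delta_gs` would come from independent MD replicas, each yielding one
    free energy estimate; the spread across replicas is the error measure.
    """
    rng = np.random.default_rng(seed)
    delta_gs = np.asarray(delta_gs, dtype=float)
    boots = rng.choice(delta_gs, (n_boot, len(delta_gs)), replace=True)
    return delta_gs.mean(), np.percentile(boots.mean(axis=1), [2.5, 97.5])

# Hypothetical ensemble of 10 replica binding free energies (kcal/mol).
dgs = [-7.9, -8.4, -7.6, -8.1, -8.8, -7.7, -8.2, -8.0, -8.5, -7.8]
mean, (lo, hi) = ensemble_estimate(dgs)
print(f"dG = {mean:.2f} kcal/mol, 95% CI [{lo:.2f}, {hi:.2f}]")
```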
Deconinck, E; Crevits, S; Baten, P; Courselle, P; De Beer, J
2011-04-05
A fully validated UHPLC method for the identification and quantification of folic acid in pharmaceutical preparations was developed. The starting conditions for development were derived from the HPLC conditions of a validated method and were tested on four different UHPLC columns: Grace Vision HT™ C18-P, C18, C18-HL and C18-B (2 mm × 100 mm, 1.5 μm). After selection of the stationary phase, the method was further optimised by testing two aqueous and two organic phases and by adapting it to a gradient method. The resulting method was fully validated based on its measurement uncertainty (accuracy profile) and robustness tests. A UHPLC method was thus obtained for the identification and quantification of folic acid in pharmaceutical preparations, which cuts analysis time and solvent consumption. Copyright © 2010 Elsevier B.V. All rights reserved.
Alves, L.P.S.; Almeida, A.T.; Cruz, L.M.; Pedrosa, F.O.; de Souza, E.M.; Chubatsu, L.S.; Müller-Santos, M.; Valdameri, G.
2017-01-01
The conventional method for quantification of polyhydroxyalkanoates based on whole-cell methanolysis and gas chromatography (GC) is laborious and time-consuming. In this work, a method based on flow cytometry of Nile red stained bacterial cells was established to quantify poly-3-hydroxybutyrate (PHB) production by the diazotrophic and plant-associated bacteria, Herbaspirillum seropedicae and Azospirillum brasilense. The method consists of three steps: i) cell permeabilization, ii) Nile red staining, and iii) analysis by flow cytometry. The method was optimized step-by-step and can be carried out in less than 5 min. The final results indicated a high correlation coefficient (R2=0.99) compared to a standard method based on methanolysis and GC. This method was successfully applied to the quantification of PHB in epiphytic bacteria isolated from rice roots. PMID:28099582
Rakesh Minocha; P. Thangavel; Om Parkash Dhankher; Stephanie Long
2008-01-01
The HPLC method presented here for the quantification of metal-binding thiols is considerably shorter than most previously published methods. It is a sensitive and highly reproducible method that separates monobromobimane tagged monothiols (cysteine, glutathione, γ-glutamylcysteine) along with polythiols (PC2, PC3...
Roger, B; Fernandez, X; Jeannot, V; Chahboun, J
2010-01-01
The essential oil obtained from iris rhizomes is one of the most precious raw materials for the perfume industry. Its fragrance is due to irones, which are gradually formed by oxidative degradation of iridals during rhizome ageing. The objective was to develop an alternative method allowing irone quantification in iris rhizomes using HS-SPME-GC. The development of the HS-SPME-GC method was guided by results obtained with a conventional method, i.e. solid-liquid extraction (SLE) followed by irone quantification by GC. Among several calibration methods tested, internal calibration gave the best results and was the least sensitive to the matrix effect. The proposed HS-SPME-GC method is as accurate and reproducible as the conventional SLE method. These two methods were used to monitor and compare irone concentrations in iris rhizomes that had been stored for 6 months to 9 years. Irone quantification in iris rhizomes can thus be achieved using HS-SPME-GC, and the method can be used for quality control of iris rhizomes. It offers the advantage of combining extraction and analysis in an automated device and thus allows a large number of rhizome batches to be analysed and compared in a limited amount of time. Copyright © 2010 John Wiley & Sons, Ltd.
López-García, Ester; Mastroianni, Nicola; Postigo, Cristina; Valcárcel, Yolanda; González-Alonso, Silvia; Barceló, Damia; López de Alda, Miren
2018-04-15
This work presents a fast, sensitive and reliable multi-residue methodology based on fat and protein precipitation and liquid chromatography-tandem mass spectrometry for the determination of common legal and illegal psychoactive drugs, and major metabolites, in breast milk. One-fourth of the 40 target analytes is investigated for the first time in this biological matrix. The method was validated in breast milk and also in various types of bovine milk, as tranquilizers are occasionally administered to food-producing animals. Absolute recoveries were satisfactory for 75% of the target analytes. The use of isotopically labeled compounds assisted in correcting analyte losses due to ionization suppression matrix effects (higher in whole milk than in the other investigated milk matrices) and ensured the reliability of the results. Average method limits of quantification ranged between 0.4 and 6.8 ng/mL. Application of the developed method showed the presence of caffeine in breast milk samples (12-179 ng/mL). Copyright © 2017 Elsevier Ltd. All rights reserved.
Christians, Stefan; van Treel, Nadine Denise; Bieniara, Gabriele; Eulig-Wien, Annika; Hanschmann, Kay-Martin; Giess, Siegfried
2016-07-01
Capillary zone electrophoresis (CZE) provides an alternative means of separating native proteins on the basis of their inherent electrophoretic mobilities. The major advantage of CZE is the quantification by UV detection, circumventing the drawbacks of staining and densitometry in the case of gel electrophoresis methods. The data of this validation study showed that CZE is a reliable assay for the determination of protein composition in therapeutic preparations of human albumin and human polyclonal immunoglobulins. Data obtained by CZE are in line with "historical" data obtained by the compendial method, provided that peak integration is performed without time correction. The focus here was to establish a rapid and reliable test to substitute the current gel based zone electrophoresis techniques for the control of protein composition of human immunoglobulins or albumins in the European Pharmacopoeia. We believe that the more advanced and modern CZE method described here is a very good alternative to the procedures currently described in the relevant monographs. Copyright © 2016 International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Castro, Marcelo A.; Williford, Joshua P.; Cota, Martin R.; MacLaren, Judy M.; Dardzinski, Bernard J.; Latour, Lawrence L.; Pham, Dzung L.; Butman, John A.
2016-03-01
Traumatic meningeal injury is a novel imaging marker of traumatic brain injury, which appears as enhancement of the dura on post-contrast T2-weighted FLAIR images, and is likely associated with inflammation of the meninges. Dynamic Contrast Enhanced MRI provides a better discrimination of abnormally perfused regions. A method to properly identify those regions is presented. Images of seventeen patients scanned within 96 hours of head injury with positive traumatic meningeal injury were normalized and aligned. The difference between the pre- and last post-contrast acquisitions was segmented and voxels in the higher class were spatially clustered. Spatial and morphological descriptors were used to identify the regions of enhancement: a) centroid; b) distance to the brain mask from external voxels; c) distance from internal voxels; d) size; e) shape. The method properly identified thirteen regions among all patients. The method failed in one case due to the presence of a large brain lesion that altered the mask boundaries. Most false detections were correctly rejected resulting in a sensitivity and specificity of 92.9% and 93.6%, respectively.
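A minimal sketch of the enhancement-region identification pipeline just described, assuming the normalized pre- and last post-contrast volumes and a brain mask are available as numpy arrays; the threshold, minimum cluster size, and surface-depth cut-off below are illustrative placeholders rather than the study's values:

    import numpy as np
    from scipy import ndimage

    def find_enhancing_regions(pre, post, brain_mask, min_size=50, max_depth=5.0):
        # difference between the pre- and last post-contrast acquisitions
        diff = post - pre
        vals = diff[brain_mask]
        # crude two-class split of the difference image (stand-in for the
        # segmentation step of the study)
        candidate = (diff > vals.mean() + 2.0 * vals.std()) & brain_mask
        # spatially cluster voxels assigned to the higher class
        labels, n = ndimage.label(candidate)
        # distance (in voxels) to the brain-mask boundary, a proxy for the
        # internal/external distance descriptors
        depth = ndimage.distance_transform_edt(brain_mask)
        regions = []
        for i in range(1, n + 1):
            cluster = labels == i
            if cluster.sum() < min_size:            # size descriptor
                continue
            if depth[cluster].mean() > max_depth:   # keep near-surface (dural) clusters
                continue
            regions.append(ndimage.center_of_mass(cluster))  # centroid descriptor
        return regions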
Nicolás, Paula; Lassalle, Verónica L; Ferreira, María L
2017-02-01
The aim of this manuscript was to study the application of a new method of protein quantification in Candida antarctica lipase B commercial solutions. Error sources associated to the traditional Bradford technique were demonstrated. Eight biocatalysts based on C. antarctica lipase B (CALB) immobilized onto magnetite nanoparticles were used. Magnetite nanoparticles were coated with chitosan (CHIT) and modified with glutaraldehyde (GLUT) and aminopropyltriethoxysilane (APTS). Later, CALB was adsorbed on the modified support. The proposed novel protein quantification method included the determination of sulfur (from protein in CALB solution) by means of Atomic Emission by Inductive Coupling Plasma (AE-ICP). Four different protocols were applied combining AE-ICP and classical Bradford assays, besides Carbon, Hydrogen and Nitrogen (CHN) analysis. The calculated error in protein content using the "classic" Bradford method with bovine serum albumin as standard ranged from 400 to 1200% when protein in CALB solution was quantified. These errors were calculated considering as "true protein content values" the results of the amount of immobilized protein obtained with the improved method. The optimum quantification procedure involved the combination of Bradford method, ICP and CHN analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Adal, Kedir M.; van Etten, Peter G.; Martinez, Jose P.; Rouwen, Kenneth; Vermeer, Koenraad A.; van Vliet, Lucas J.
2017-03-01
Automated detection and quantification of spatio-temporal retinal changes is an important step to objectively assess disease progression and treatment effects for dynamic retinal diseases such as diabetic retinopathy (DR). However, detecting retinal changes caused by early DR lesions such as microaneurysms and dot hemorrhages from longitudinal pairs of fundus images is challenging due to intra- and inter-image illumination variation between fundus images. This paper explores a method for automated detection of retinal changes from illumination-normalized fundus images using a deep convolutional neural network (CNN), and compares its performance with two other CNNs trained separately on color and green channel fundus images. Illumination variation was addressed by correcting for the variability in luminosity and contrast estimated from large-scale retinal regions. The CNN models were trained and evaluated on image patches extracted from a registered fundus image set collected from 51 diabetic eyes that were screened at two different time-points. The results show that using normalized images yields better performance than color and green channel images, suggesting that illumination normalization greatly facilitates CNNs to quickly and correctly learn distinctive local image features of DR related retinal changes.
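As a rough illustration of the illumination-normalization step, the sketch below estimates large-scale luminosity and contrast with wide Gaussian filters and normalizes the image with them; the kernel width is an assumed placeholder, not the value used in the paper.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def normalize_illumination(img, sigma=50.0, eps=1e-6):
        # slowly varying background luminosity over large retinal regions
        luminosity = gaussian_filter(img, sigma)
        # large-scale contrast: local standard deviation around the luminosity
        contrast = np.sqrt(gaussian_filter((img - luminosity) ** 2, sigma))
        return (img - luminosity) / (contrast + eps)

Patches from images normalized in this way would then replace the raw color or green-channel patches fed to the CNN.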
Nakajima, Kenichi; Matsumoto, Naoya; Kasai, Tokuo; Matsuo, Shinro; Kiso, Keisuke; Okuda, Koichi
2016-04-01
As a 2-year project of the Japanese Society of Nuclear Medicine working group activity, normal myocardial imaging databases were accumulated and summarized. Stress-rest gated and non-gated image sets were accumulated for myocardial perfusion imaging and could be used for perfusion defect scoring and normal left ventricular (LV) function analysis. For single-photon emission computed tomography (SPECT) with multi-focal collimator design, databases of supine and prone positions and computed tomography (CT)-based attenuation correction were created. The CT-based correction provided similar perfusion patterns between genders. In phase analysis of gated myocardial perfusion SPECT, a new approach for analyzing dyssynchrony, normal ranges of parameters for phase bandwidth, standard deviation and entropy were determined in four software programs. Although the results were not interchangeable, dependencies on gender, ejection fraction and volumes were common characteristics of these parameters. Standardization of (123)I-MIBG sympathetic imaging was performed regarding the heart-to-mediastinum ratio (HMR) using a calibration phantom method. The HMRs from any collimator type could be converted to values comparable with medium-energy collimators. Appropriate quantification based on common normal databases and standard technology could play a pivotal role in clinical practice and research.
USDA-ARS?s Scientific Manuscript database
Arbuscular mycorrhizal fungi (AMF) are well-known plant symbionts which provide enhanced phosphorus uptake as well as other benefits to their host plants. Quantification of mycorrhizal biomass and root colonization has traditionally been performed by root staining and microscopic examination methods...
The quantification of solute concentrations in laboratory aquifer models has been largely limited to the use of sampling ports, from which samples are collected for external analysis. One of the drawbacks to this method is that the act of sampling may disturb plume dynamics and ...
21 CFR 530.24 - Procedure for announcing analytical methods for drug residue quantification.
Code of Federal Regulations, 2011 CFR
2011-04-01
DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED), ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS, EXTRALABEL DRUG USE IN ANIMALS; Specific Provisions Relating to Extralabel Use of Animal and Human Drugs in Food-Producing Animals; § 530.24 Procedure for announcing analytical methods for drug residue quantification. (a...
Attenuation-emission alignment in cardiac PET∕CT based on consistency conditions
Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.
2010-01-01
Purpose: In cardiac PET and PET∕CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. [“Attenuation correction in PET using consistency information,” IEEE Trans. Nucl. Sci. 45, 3134–3141 (1998)] for stand-alone PET imaging. The process was evaluated with simulated data and measured patient data from multiple cardiac ammonia PET∕CT exams. The alignment procedure was applied to simulations of five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approached global minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and reduced the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations. The side-by-side review of the proposed aligned attenuation-emission maps and initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET∕CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256
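A highly simplified sketch of the alignment idea in 2D parallel-beam geometry: only the zeroth-order consistency condition is used (the total of the attenuation-corrected projections must be independent of angle), whereas the paper exploits the full Radon consistency conditions; the function names, optimizer choice, and shift-only transform are assumptions.

    import numpy as np
    from scipy import ndimage, optimize
    from skimage.transform import radon

    def align_attenuation_map(em_sino, mu_img, theta):
        """em_sino: measured (attenuated) emission sinogram, shape (n_s, n_theta);
        mu_img: CT-based attenuation map; theta: projection angles in degrees."""
        def cost(shift_rc):
            mu_shifted = ndimage.shift(mu_img, shift_rc, order=1)
            mu_sino = radon(mu_shifted, theta=theta)   # line integrals of mu
            corrected = em_sino * np.exp(mu_sino)      # attenuation correction
            totals = corrected.sum(axis=0)             # zeroth moment per angle
            return np.var(totals / totals.mean())      # should vanish when aligned
        res = optimize.minimize(cost, x0=[0.0, 0.0], method="Powell")
        return res.x  # estimated (row, col) misalignment of the attenuation map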
Zha, Kan; Busch, Stephen; Park, Cheolwoong; ...
2016-06-24
In-cylinder flow measurements are necessary to gain a fundamental understanding of swirl-supported, light-duty Diesel engine processes for high thermal efficiency and low emissions. Planar particle image velocimetry (PIV) can be used for non-intrusive, in situ measurement of swirl-plane velocity fields through a transparent piston. In order to keep the flow unchanged from all-metal engine operation, the geometry of the transparent piston must adapt the production-intent metal piston geometry. As a result, a temporally- and spatially-variant optical distortion is introduced to the particle images. Here, to ensure reliable measurement of particle displacements, this work documents a systematic exploration of optical distortion quantification and a hybrid back-projection procedure that combines ray-tracing-based geometric and in situ manual back-projection approaches.
Correction for Metastability in the Quantification of PID in Thin-film Module Testing: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hacke, Peter L; Johnston, Steven; Spataru, Sergiu
A fundamental change in the analysis for the accelerated stress testing of thin-film modules is proposed, whereby power changes due to metastability and other effects that may occur due to the thermal history are removed from the power measurement that we obtain as a function of the applied stress factor. The power of reference modules normalized to an initial state - undergoing the same thermal and light-exposure history but without the applied stress factor such as humidity or voltage bias - is subtracted from that of the stressed modules. For better understanding and appropriate application in standardized tests, the method is demonstrated and discussed for potential-induced degradation testing in view of the parallel-occurring but unrelated physical mechanisms that can lead to confounding power changes in the module.
NASA Astrophysics Data System (ADS)
Hasegawa, Bruce; Tang, H. Roger; Da Silva, Angela J.; Wong, Kenneth H.; Iwata, Koji; Wu, Max C.
2001-09-01
In comparison to conventional medical imaging techniques, dual-modality imaging offers the advantage of correlating anatomical information from X-ray computed tomography (CT) with functional measurements from single-photon emission computed tomography (SPECT) or with positron emission tomography (PET). The combined X-ray/radionuclide images from dual-modality imaging can help the clinician to differentiate disease from normal uptake of radiopharmaceuticals, and to improve diagnosis and staging of disease. In addition, phantom and animal studies have demonstrated that a priori structural information from CT can be used to improve quantification of tissue uptake and organ function by correcting the radionuclide data for errors due to photon attenuation, partial volume effects, scatter radiation, and other physical effects. Dual-modality imaging therefore is emerging as a method of improving the visual quality and the quantitative accuracy of radionuclide imaging for diagnosis of patients with cancer and heart disease.
Kang, Bo-Kyeong; Yu, Eun Sil; Lee, Seung Soo; Lee, Youngjoo; Kim, Namkug; Sirlin, Claude B; Cho, Eun Yoon; Yeom, Suk Keu; Byun, Jae Ho; Park, Seong Ho; Lee, Moon-Gyu
2012-06-01
The aims of this study were to assess the confounding effects of hepatic iron deposition, inflammation, and fibrosis on hepatic steatosis (HS) evaluation by magnetic resonance imaging (MRI) and magnetic resonance spectroscopy (MRS) and to assess the accuracies of MRI and MRS for HS evaluation, using histology as the reference standard. In this institutional review board-approved prospective study, 56 patients gave informed consents and underwent chemical-shift MRI and MRS of the liver on a 1.5-T magnetic resonance scanner. To estimate MRI fat fraction (FF), 4 analysis methods were used (dual-echo, triple-echo, multiecho, and multi-interference), and MRS FF was calculated with T2 correction. Degrees of HS, iron deposition, inflammation, and fibrosis were analyzed in liver resection (n = 37) and biopsy (n = 19) specimens. The confounding effects of histology on fat quantification were assessed by multiple linear regression analysis. Using the histologic degree of HS as the reference standard, the accuracies of each method in estimating HS and diagnosing an HS of 5% or greater were determined by linear regression and receiver operating characteristic analyses. Iron deposition significantly confounded estimations of FF by the dual-echo (P < 0.001) and triple-echo (P = 0.033) methods, whereas no histologic feature confounded the multiecho and multi-interference methods or MRS. The MRS (r = 0.95) showed the strongest correlation with histologic degree of HS, followed by the multiecho (r = 0.92), multi-interference (r = 0.91), triple-echo (r = 0.90), and dual-echo (r = 0.85) methods. For diagnosing HS, the areas under the curve tended to be higher for MRS (0.96) and the multiecho (0.95), multi-interference (0.95), and triple-echo (0.95) methods than for the dual-echo method (0.88) (P ≥ 0.13). The multiecho and multi-interference MRI methods and MRS can accurately quantify hepatic fat, with coexisting histologic abnormalities having no confounding effects.
Richardson, Keith; Denny, Richard; Hughes, Chris; Skilling, John; Sikora, Jacek; Dadlez, Michał; Manteca, Angel; Jung, Hye Ryung; Jensen, Ole Nørregaard; Redeker, Virginie; Melki, Ronald; Langridge, James I.; Vissers, Johannes P.C.
2013-01-01
A probability-based quantification framework is presented for the calculation of relative peptide and protein abundance in label-free and label-dependent LC-MS proteomics data. The results are accompanied by credible intervals and regulation probabilities. The algorithm takes into account data uncertainties via Poisson statistics modified by a noise contribution that is determined automatically during an initial normalization stage. Protein quantification relies on assignments of component peptides to the acquired data. These assignments are generally of variable reliability and may not be present across all of the experiments comprising an analysis. It is also possible for a peptide to be identified to more than one protein in a given mixture. For these reasons the algorithm accepts a prior probability of peptide assignment for each intensity measurement. The model is constructed in such a way that outliers of any type can be automatically reweighted. Two discrete normalization methods can be employed. The first method is based on a user-defined subset of peptides, while the second method relies on the presence of a dominant background of endogenous peptides for which the concentration is assumed to be unaffected. Normalization is performed using the same computational and statistical procedures employed by the main quantification algorithm. The performance of the algorithm will be illustrated on example data sets, and its utility demonstrated for typical proteomics applications. The quantification algorithm supports relative protein quantification based on precursor and product ion intensities acquired by means of data-dependent methods, originating from all common isotopically-labeled approaches, as well as label-free ion intensity-based data-independent methods. PMID:22871168
Psifidi, Androniki; Dovas, Chrysostomos; Banos, Georgios
2011-01-01
Background Single nucleotide polymorphisms (SNP) have proven to be powerful genetic markers for genetic applications in medicine, life science and agriculture. A variety of methods exist for SNP detection but few can quantify SNP frequencies when the mutated DNA molecules correspond to a small fraction of the wild-type DNA. Furthermore, there is no generally accepted gold standard for SNP quantification, and, in general, currently applied methods give inconsistent results in selected cohorts. In the present study we sought to develop a novel method for accurate detection and quantification of SNP in DNA pooled samples. Methods The development and evaluation of a novel Ligase Chain Reaction (LCR) protocol that uses a DNA-specific fluorescent dye to allow quantitative real-time analysis is described. Different reaction components and thermocycling parameters affecting the efficiency and specificity of LCR were examined. Several protocols, including gap-LCR modifications, were evaluated using plasmid standard and genomic DNA pools. A protocol of choice was identified and applied for the quantification of a polymorphism at codon 136 of the ovine PRNP gene that is associated with susceptibility to a transmissible spongiform encephalopathy in sheep. Conclusions The real-time LCR protocol developed in the present study showed high sensitivity, accuracy, reproducibility and a wide dynamic range of SNP quantification in different DNA pools. The limits of detection and quantification of SNP frequencies were 0.085% and 0.35%, respectively. Significance The proposed real-time LCR protocol is applicable when sensitive detection and accurate quantification of low copy number mutations in DNA pools is needed. Examples include oncogenes and tumour suppressor genes, infectious diseases, pathogenic bacteria, fungal species, viral mutants, drug resistance resulting from point mutations, and genetically modified organisms in food. PMID:21283808
Lao, Yexing; Yang, Cuiping; Zou, Wei; Gan, Manquan; Chen, Ping; Su, Weiwei
2012-05-01
The cryptand Kryptofix 2.2.2 is used extensively as a phase-transfer reagent in the preparation of [18F]fluoride-labelled radiopharmaceuticals. However, it has considerable acute toxicity. The aim of this study was to develop and validate a method for rapid (within 1 min), specific and sensitive quantification of Kryptofix 2.2.2 at trace levels. Chromatographic separations were carried out by rapid-resolution liquid chromatography (Agilent ZORBAX SB-C18 rapid-resolution column, 2.1 × 30 mm, 3.5 μm). Tandem mass spectra were acquired using a triple quadrupole mass spectrometer equipped with an electrospray ionization interface. Quantitative mass spectrometric analysis was conducted in positive ion mode and multiple reaction monitoring mode for the m/z 377.3 → 114.1 transition for Kryptofix 2.2.2. The external standard method was used for quantification. The method met the precision and efficiency requirements for PET radiopharmaceuticals, providing satisfactory results for specificity, matrix effect, stability, linearity (0.5-100 ng/ml, r(2)=0.9975), precision (coefficient of variation < 5%), accuracy (relative error < ± 3%), sensitivity (lower limit of quantification=0.5 ng) and detection time (<1 min). Fluorodeoxyglucose (n=6) was analysed, and the Kryptofix 2.2.2 content was found to be well below the maximum permissible levels approved by the US Food and Drug Administration. The developed method has a short analysis time (<1 min) and high sensitivity (lower limit of quantification=0.5 ng/ml) and can be successfully applied to rapid quantification of Kryptofix 2.2.2 at trace levels in fluorodeoxyglucose. This method could also be applied to other [18F]fluorine-labelled radiopharmaceuticals that use Kryptofix 2.2.2 as a phase-transfer reagent.
Using RT-PCR and bDNA assays to measure non-clade B HIV-1 subtype RNA.
Pasquier, C; Sandres, K; Salama, G; Puel, J; Izopet, J
1999-08-01
The performance of the new version of RT-PCR assay (Amplicor HIV-1 Monitor v1.5) was assessed. The quantification of non-B subtype HIV-1 plasma RNA (30A, 1C, 1D, 3E, 2F, 3G) obtained using Monitor v1.5 was compared to the former version of this assay (Monitor v1.0) and to the Quantiplex v2.0 bDNA assay. The new primers used in Monitor v1.5 were similar to the former version in both specificity and sensitivity. The new primers corrected the detection and quantification defect observed previously for HIV-1 non-B subtypes and gave slightly higher RNA concentrations than those measured using the bDNA assay (+0.39 log copies/ml).
Riss, Patrick J; Hong, Young T; Williamson, David; Caprioli, Daniele; Sitnikov, Sergey; Ferrari, Valentina; Sawiak, Steve J; Baron, Jean-Claude; Dalley, Jeffrey W; Fryer, Tim D; Aigbirhio, Franklin I
2011-01-01
The 5-hydroxytryptamine type 2a (5-HT2A) selective radiotracer [18F]altanserin has been subjected to a quantitative micro-positron emission tomography study in Lister Hooded rats. Metabolite-corrected plasma input modeling was compared with reference tissue modeling using the cerebellum as reference tissue. [18F]altanserin showed sufficient brain uptake in a distribution pattern consistent with the known distribution of 5-HT2A receptors. Full binding saturation and displacement was documented, and no significant uptake of radioactive metabolites was detected in the brain. Blood input as well as reference tissue models were equally appropriate to describe the radiotracer kinetics. [18F]altanserin is suitable for quantification of 5-HT2A receptor availability in rats. PMID:21750562
Risk and benefit of diffraction in Energy Dispersive X-ray fluorescence mapping
NASA Astrophysics Data System (ADS)
Nikonow, Wilhelm; Rammlmair, Dieter
2016-11-01
Energy dispersive X-ray fluorescence mapping (μ-EDXRF) is a fast and non-destructive method for chemical quantification and is therefore used in many scientific fields. The combination of spatial and chemical information is highly valuable for understanding geological processes. Problems occur with crystalline samples due to diffraction, which appears according to Bragg's law, depending on the energy of the X-ray beam, the incident angle and the crystal parameters. In the spectra these peaks can overlap with element peaks, suggesting higher element concentrations. The aim of this study is to investigate the effect of diffraction, the possibility of diffraction removal and potential geoscientific applications for X-ray mapping. In this work the μ-EDXRF M4 Tornado from Bruker was operated with a Rh-tube and polychromatic beam with two SDD detectors mounted each at ± 90° to the tube. Due to the polychromatic beam the Bragg condition is fulfilled for several mineral lattice planes. Since diffraction depends on the angle, it is shown that a novel correction approach can be applied by measuring from two different angles and calculating the minimum spectrum of both detectors, gaining a better limit of quantification for this method. Furthermore, it is possible to use the diffraction information for separation of differently oriented crystallites within a monomineralic aggregate and to obtain parameters like particle size distribution for the sample, as is done by thin-section image analysis in cross-polarized light. Only with μ-EDXRF can this be done on larger samples without preparation of thin sections.
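Because a Bragg reflection satisfies the diffraction condition for at most one of the two detector geometries while fluorescence lines appear identically in both, the correction reduces in essence to a per-channel minimum. A minimal sketch, assuming the spectra are numpy arrays with the energy channel on the last axis:

    import numpy as np

    def remove_diffraction(spectra_det1, spectra_det2):
        # Works for a single spectrum (n_channels,) or a whole map
        # (ny, nx, n_channels). Fluorescence peaks, present in both
        # detectors, survive; diffraction peaks, present in only one,
        # are suppressed.
        return np.minimum(spectra_det1, spectra_det2)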
NASA Astrophysics Data System (ADS)
Xu, Xiaochun; Sinha, Lagnojita; Singh, Aparna; Yang, Cynthia; Xiang, Jialing; Tichauer, Kenneth M.
2015-03-01
Immunofluorescence staining is a robust way to visualize the distribution of targeted biomolecules invasively in fixed tissues and tissue culture. Although these methods have been well established in fixed-tissue imaging for over 70 years, quantification of receptor concentration still simply assumes that the signal from the targeted fluorescent marker after incubation and sufficient rinsing is directly proportional to the concentration of targeted biomolecules, thus neglecting experimental inconsistencies in the incubation and rinsing procedures and assuming no nonspecific binding of the fluorescent markers. This work presents the first imaging approach capable of quantifying the concentration of cell surface receptor on cancer cells grown in vitro based on compartment modeling in a nondestructive way. The approach utilizes a dual-tracer protocol where any nonspecific retention or variability in incubation and rinsing of a receptor-targeted imaging agent is corrected by simultaneously imaging the retention of a chemically similar, "untargeted" imaging agent. Various compartment models were used to analyze the data in order to find the optimal procedure for extracting estimates of epidermal growth factor receptor (EGFR) concentration (a receptor overexpressed in many cancers and a key target for emerging molecular therapies) in tissue cultures with varying concentrations of human glioma cells (U251). Preliminary results demonstrated a need to model nonspecific binding of both the targeted and untargeted imaging agents used. The approach could be used to carry out the first repeated measures of cell surface receptor dynamics during 3D tumor mass development, in addition to the receptor response to therapies.
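A heavily simplified sketch of the dual-tracer principle: the untargeted agent accounts for delivery, rinsing variability, and nonspecific retention, so the excess retention of the targeted agent reflects specific binding. The integral-ratio estimator below is one published simplification of dual-tracer kinetic modeling, not the specific compartment models fitted in this work.

    import numpy as np

    def binding_potential(t, targeted, untargeted):
        # t: imaging time points; targeted/untargeted: mean retention curves
        # of the receptor-targeted and chemically similar untargeted agents
        area_t = np.trapz(targeted, t)
        area_u = np.trapz(untargeted, t)
        # excess targeted uptake relative to the untargeted reference,
        # proportional to receptor concentration under this simplification
        return (area_t - area_u) / area_u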
NASA Astrophysics Data System (ADS)
Migliozzi, D.; Nguyen, H. T.; Gijs, M. A. M.
2018-02-01
Immunohistochemistry (IHC) is one of the main techniques currently used in the clinics for biomarker characterization. It consists of colorimetric labeling with specific antibodies followed by microscopy analysis. The results are then used for diagnosis and therapeutic targeting. Well-known drawbacks of such protocols are their limited accuracy and precision, which prevent the clinicians from having quantitative and robust IHC results. With our work, we combined rapid microfluidic immunofluorescent staining with efficient image-based cell segmentation and signal quantification to increase the robustness of both the experimental and analytical protocols. The experimental protocol is very simple and based on fast fluidic exchange in a microfluidic chamber created on top of the formalin-fixed paraffin-embedded (FFPE) slide by clamping a silicon chip onto it with a polydimethylsiloxane (PDMS) sealing ring. The image-processing protocol is based on enhancement and subsequent thresholding of the local contrast of the obtained fluorescence image. As a case study, given that the human epidermal growth factor receptor 2 (HER2) protein is often used as a biomarker for breast cancer, we applied our method to HER2+ and HER2- cell lines. We report very fast (5 minutes) immunofluorescence staining of both HER2 and cytokeratin (a marker used to define the tumor region) on FFPE slides. The image-processing program can segment cells correctly and give a cell-based quantitative immunofluorescent signal. With this method, we found a reproducible, well-defined separation of the HER2-to-cytokeratin ratio between positive and negative control samples.
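A compact sketch of the image-analysis chain, with CLAHE standing in for the local-contrast enhancement and Otsu's method for the thresholding (the exact operators of the published pipeline are not reproduced here); it returns one mean signal per segmented cell:

    import numpy as np
    from skimage import exposure, filters, measure

    def cell_signals(fluor_img):
        img = exposure.rescale_intensity(fluor_img.astype(float), out_range=(0.0, 1.0))
        enhanced = exposure.equalize_adapthist(img)          # local-contrast enhancement
        mask = enhanced > filters.threshold_otsu(enhanced)   # threshold the contrast map
        labels = measure.label(mask)                         # cell segmentation
        return [r.mean_intensity
                for r in measure.regionprops(labels, intensity_image=img)]

Applying this to the HER2 and cytokeratin channels of the same field then yields the per-cell HER2-to-cytokeratin ratio used to separate the control samples.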
NASA Astrophysics Data System (ADS)
Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen
2017-06-01
This paper presents a Bayesian approach using a Metropolis-Hastings Markov Chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of the daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower reliable interval than the MLE confidence interval and thus a more precise estimation by using the related information from regional gage stations. The Bayesian MCMC method might therefore be more favorable for uncertainty analysis and risk management.
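For illustration, a minimal Metropolis-Hastings sampler for the simplest version of such a problem: the posterior of a mean flow rate under a normal likelihood with known noise and a flat prior. The full forecasting model in the paper is much richer; this only shows the sampler and how a credible interval would be read off the samples.

    import numpy as np

    def metropolis_hastings(flows, n_iter=20000, step=1.0, sigma=5.0, seed=0):
        rng = np.random.default_rng(seed)
        def log_post(mu):                        # flat prior: likelihood only
            return -0.5 * np.sum((flows - mu) ** 2) / sigma ** 2
        mu, samples = float(np.mean(flows)), []
        for _ in range(n_iter):
            prop = mu + step * rng.normal()      # random-walk proposal
            if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
                mu = prop                        # accept
            samples.append(mu)
        kept = np.array(samples[n_iter // 2:])   # discard burn-in
        return kept.mean(), np.percentile(kept, [2.5, 97.5])  # mean, 95% interval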
Plasma protein absolute quantification by nano-LC Q-TOF UDMSE for clinical biomarker verification
ILIES, MARIA; IUGA, CRISTINA ADELA; LOGHIN, FELICIA; DHOPLE, VISHNU MUKUND; HAMMER, ELKE
2017-01-01
Background and aims Proteome-based biomarker studies are targeting proteins that could serve as diagnostic, prognosis, and prediction molecules. In the clinical routine, immunoassays are currently used for the absolute quantification of such biomarkers, with the major limitation that only one molecule can be targeted per assay. The aim of our study was to test a mass spectrometry based absolute quantification method for the verification of plasma protein sets which might serve as reliable biomarker panels for the clinical practice. Methods Six EDTA plasma samples were analyzed after tryptic digestion using a high throughput data independent acquisition nano-LC Q-TOF UDMSE proteomics approach. Synthetic Escherichia coli standard peptides were spiked in each sample for the absolute quantification. Data analysis was performed using ProgenesisQI v2.0 software (Waters Corporation). Results Our method ensured absolute quantification of 242 non-redundant plasma proteins in a single run analysis. The dynamic range covered was 10(5). 86% were represented by classical plasma proteins. The overall median coefficient of variation was 0.36, while a set of 63 proteins was found to be highly stable. Absolute protein concentrations strongly correlated with values reviewed in the literature. Conclusions Nano-LC Q-TOF UDMSE proteomic analysis can be used for a simple and rapid determination of absolute amounts of plasma proteins. A large number of plasma proteins could be analyzed, while a wide dynamic range was covered with low coefficient of variation at protein level. The method proved to be a reliable tool for the quantification of protein panels for biomarker verification in the clinical practice. PMID:29151793
Loziuk, Philip L.; Sederoff, Ronald R.; Chiang, Vincent L.; Muddiman, David C.
2014-01-01
Quantitative mass spectrometry has become central to the field of proteomics and metabolomics. Selected reaction monitoring is a widely used method for the absolute quantification of proteins and metabolites. This method renders high specificity using several product ions measured simultaneously. With growing interest in quantification of molecular species in complex biological samples, confident identification and quantitation has been of particular concern. A method to confirm purity or contamination of product ion spectra has become necessary for achieving accurate and precise quantification. Ion abundance ratio assessments were introduced to alleviate some of these issues. Ion abundance ratios are based on the consistent relative abundance (RA) of specific product ions with respect to the total abundance of all product ions. To date, no standardized method of implementing ion abundance ratios has been established. Thresholds by which product ion contamination is confirmed vary widely and are often arbitrary. This study sought to establish criteria by which the relative abundance of product ions can be evaluated in an absolute quantification experiment. These findings suggest that evaluation of the absolute ion abundance for any given transition is necessary in order to effectively implement RA thresholds. Overall, the variation of the RA value was observed to be relatively constant beyond an absolute threshold ion abundance. Finally, these RA values were observed to fluctuate significantly over a 3 year period, suggesting that these values should be assessed as close as possible to the time at which data is collected for quantification. PMID:25154770
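A sketch of how such an ion-abundance-ratio purity check might look for one selected-reaction-monitoring transition set, following the findings above: RA values are only evaluated once the summed product-ion abundance clears an absolute threshold, and a measurement is flagged when an RA deviates from its reference by more than a tolerance. All thresholds here are illustrative.

    import numpy as np

    def check_transition_purity(intensities, reference_ra,
                                ra_tol=0.15, min_abundance=1e4):
        intensities = np.asarray(intensities, dtype=float)
        if intensities.sum() < min_abundance:
            # below this abundance the RA variation is too large to judge
            return "abundance too low: RA unreliable"
        ra = intensities / intensities.sum()   # relative abundance per product ion
        deviation = np.abs(ra - np.asarray(reference_ra)) / np.asarray(reference_ra)
        return "contaminated" if np.any(deviation > ra_tol) else "clean"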
2014-04-01
Barrier methods for critical exponent problems in geometric analysis and mathematical physics, J. Erway and M. Holst, submitted for publication. TR-14-33: A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems. Approved for public release, distribution is unlimited. April 2014. HDTRA1-09-1-0036. Donald Estep and Michael
Simple, Fast, and Sensitive Method for Quantification of Tellurite in Culture Media
Molina, Roberto C.; Burra, Radhika; Pérez-Donoso, José M.; Elías, Alex O.; Muñoz, Claudia; Montes, Rebecca A.; Chasteen, Thomas G.; Vásquez, Claudio C.
2010-01-01
A fast, simple, and reliable chemical method for tellurite quantification is described. The procedure is based on the NaBH4-mediated reduction of TeO3(2-) followed by the spectrophotometric determination of elemental tellurium in solution. The method is highly reproducible, is stable at different pH values, and exhibits linearity over a broad range of tellurite concentrations. PMID:20525868
Tey, Wei Keat; Kuang, Ye Chow; Ooi, Melanie Po-Leen; Khoo, Joon Joon
2018-03-01
Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator to the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses due to the uncertainties in human judgement. An automated quantification system for accurately measuring the amount of interstitial fibrosis in renal biopsy images is presented as a consistent basis of comparison among pathologists. The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural features extraction from the images. In particular, the renal glomerulus identification is based on a multiscale textural feature analysis and a support vector machine. The regions in the biopsy representing interstitial fibrosis are deduced through the elimination of non-interstitial fibrosis structures from the biopsy area. The experiments conducted evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect introduced by the automated quantification system on the pathologists' diagnosis. A 40-image ground truth dataset has been manually prepared by consulting an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists have demonstrated an average error of 9 percentage points in quantification result between the automated system and the pathologists' visual evaluation. Experiments investigating the variability in pathologists involving samples from 70 kidney patients also proved the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification. The accuracy of the proposed quantification system has been validated with the ground truth dataset and compared against the pathologists' quantification results. 
It has been shown that the correlation between different pathologists' estimation of interstitial fibrosis area has significantly improved, demonstrating the effectiveness of the quantification system as a diagnostic aide. Copyright © 2017 Elsevier B.V. All rights reserved.
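The final quantification step described above reduces to mask arithmetic once the non-fibrosis structures have been segmented. A minimal sketch, assuming boolean numpy masks for the biopsy area and for each eliminated structure class:

    import numpy as np

    def fibrosis_percentage(biopsy_mask, non_fibrosis_masks):
        fibrosis = biopsy_mask.copy()
        for m in non_fibrosis_masks:   # remove glomeruli, tubules, vessels, ...
            fibrosis &= ~m
        # interstitial fibrosis as a percentage of the total biopsy area
        return 100.0 * fibrosis.sum() / biopsy_mask.sum()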
Naveen, P.; Lingaraju, H. B.; Prasad, K. Shyam
2017-01-01
Mangiferin, a polyphenolic xanthone glycoside from Mangifera indica, is used as traditional medicine for the treatment of numerous diseases. The present study was aimed to develop and validate a reversed-phase high-performance liquid chromatography (RP-HPLC) method for the quantification of mangiferin from the bark extract of M. indica. RP-HPLC analysis was performed by isocratic elution with a low-pressure gradient using 0.1% formic acid: acetonitrile (87:13) as a mobile phase with a flow rate of 1.5 ml/min. The separation was done at 26°C using a Kinetex XB-C18 column as stationary phase and the detection wavelength at 256 nm. The proposed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification, and robustness by the International Conference on Harmonisation guidelines. In linearity, the excellent correlation coefficient more than 0.999 indicated good fitting of the curve and also good linearity. The intra- and inter-day precision showed < 1% of relative standard deviation of peak area indicated high reliability and reproducibility of the method. The recovery values at three different levels (50%, 100%, and 150%) of spiked samples were found to be 100.47, 100.89, and 100.99, respectively, and low standard deviation value < 1% shows high accuracy of the method. In robustness, the results remain unaffected by small variation in the analytical parameters, which shows the robustness of the method. Liquid chromatography–mass spectrometry analysis confirmed the presence of mangiferin with M/Z value of 421. The assay developed by HPLC method is a simple, rapid, and reliable for the determination of mangiferin from M. indica. SUMMARY The present study was intended to develop and validate an RP-HPLC method for the quantification of mangiferin from the bark extract of M. indica. The developed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification and robustness by International Conference on Harmonization guidelines. This study proved that the developed assay by HPLC method is a simple, rapid and reliable for the quantification of the mangiferin from M. indica. Abbreviations Used: M. indica: Mangifera indica, RP-HPLC: Reversed-phase high-performance liquid chromatography, M/Z: Mass to charge ratio, ICH: International conference on harmonization, % RSD: Percentage of relative standard deviation, ppm: Parts per million, LOD: Limit of detection, LOQ: Limit of quantification. PMID:28539748
NASA Astrophysics Data System (ADS)
Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.
2017-11-01
This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an alternative MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
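As a toy illustration of sampling a posterior with an SDE that is ergodic for it, the sketch below integrates the overdamped Langevin equation dX = grad log pi(X) dt + sqrt(2) dW with the explicit Euler-Maruyama scheme; the paper's own Itô SDE and its implicit Euler discretization (including the choice of the free parameters) are not reproduced here.

    import numpy as np

    def langevin_samples(grad_log_post, x0, dt=1e-3, n_steps=50000, seed=0):
        rng = np.random.default_rng(seed)
        x = np.atleast_1d(np.asarray(x0, dtype=float))
        out = np.empty((n_steps, x.size))
        for k in range(n_steps):
            # drift toward high posterior density plus Brownian forcing
            x = x + dt * grad_log_post(x) + np.sqrt(2.0 * dt) * rng.normal(size=x.size)
            out[k] = x
        return out  # e.g., estimate posterior moments from out[n_steps // 2:]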
Gokduman, Kurtulus; Avsaroglu, M Dilek; Cakiris, Aris; Ustek, Duran; Gurakan, G Candan
2016-03-01
The aim of the current study was to develop a new, rapid, sensitive and quantitative Salmonella detection method using a Real-Time PCR technique based on an inexpensive, easy to produce, convenient and standardized recombinant plasmid positive control. To achieve this, two recombinant plasmids were constructed as reference molecules by cloning the two most commonly used Salmonella-specific target gene regions, invA and ttrRSBC. The more rapid detection enabled by the developed method (21 h) compared to the traditional culture method (90 h) allows the quantitative evaluation of Salmonella (quantification limits of 10(1)CFU/ml and 10(0)CFU/ml for the invA target and the ttrRSBC target, respectively), as illustrated using milk samples. Three advantages illustrated by the current study demonstrate the potential of the newly developed method to be used in routine analyses in the medical, veterinary, food and water/environmental sectors: I--The method provides fast analyses including the simultaneous detection and determination of correct pathogen counts; II--The method is applicable to challenging samples, such as milk; III--The method's positive controls (recombinant plasmids) are reproducible in large quantities without the need to construct new calibration curves. Copyright © 2016 Elsevier B.V. All rights reserved.
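Quantification against such a recombinant-plasmid standard curve follows the usual log-linear relation between quantification cycle and copy number. A brief sketch with made-up calibration values (not the study's data):

    import numpy as np

    copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6])    # plasmid standards
    cq = np.array([33.1, 29.8, 26.4, 23.1, 19.8, 16.5])  # measured Cq values
    slope, intercept = np.polyfit(np.log10(copies), cq, 1)
    efficiency = 10.0 ** (-1.0 / slope) - 1.0            # ~1.0 means 100% efficiency

    def copies_from_cq(sample_cq):
        # invert the standard curve for an unknown sample
        return 10.0 ** ((sample_cq - intercept) / slope)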
Quantification of fungicides in snow-melt runoff from turf: A comparison of four extraction methods
USDA-ARS?s Scientific Manuscript database
A variety of pesticides are used to control diverse stressors to turf. These pesticides have a wide range in physical and chemical properties. The objective of this project was to develop an extraction and analysis method for quantification of chlorothalonil and PCNB (pentachloronitrobenzene), two p...
Chai, Liuying; Zhang, Jianwei; Zhang, Lili; Chen, Tongsheng
2015-03-01
Spectral measurement of fluorescence resonance energy transfer (FRET), spFRET, is a widely used FRET quantification method in living cells today. We set up a spectrometer-microscope platform that consists of a miniature fiber optic spectrometer and a widefield fluorescence microscope for the spectral measurement of absolute FRET efficiency (E) and acceptor-to-donor concentration ratio (R(C)) in single living cells. The microscope was used for guiding cells and the spectra were simultaneously detected by the miniature fiber optic spectrometer. Moreover, our platform has independent excitation and emission controllers, so different excitations can share the same emission channel. In addition, we developed a modified spectral FRET quantification method (mlux-FRET) for the multiple donors and multiple acceptors FRET construct (mD∼nA) sample, and we also developed a spectra-based 2-channel acceptor-sensitized FRET quantification method (spE-FRET). We implemented these modified FRET quantification methods on our platform to measure the absolute E and R(C) values of tandem constructs with different acceptor/donor stoichiometries in single living Huh-7 cells.
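The spectral core shared by such spFRET methods is a linear unmixing of each measured emission spectrum into donor and acceptor reference spectra; the fitted weights then enter the method-specific formulas for E and R(C), which are not reproduced here. A minimal unmixing sketch:

    import numpy as np

    def unmix_spectrum(measured, donor_ref, acceptor_ref):
        # least-squares decomposition of the measured spectrum into the
        # donor and acceptor emission reference spectra
        A = np.column_stack([donor_ref, acceptor_ref])
        weights, *_ = np.linalg.lstsq(A, measured, rcond=None)
        return weights  # (donor weight, acceptor weight)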
NASA Astrophysics Data System (ADS)
Yao, Rutao; Ma, Tianyu; Shao, Yiping
2008-08-01
This work is part of a feasibility study to develop SPECT imaging capability on a lutetium oxyorthosilicate (LSO) based animal PET system. The SPECT acquisition was enabled by inserting a collimator assembly inside the detector ring and acquiring data in singles mode. The same LSO detectors were used for both PET and SPECT imaging. The intrinsic radioactivity of (176)Lu in the LSO crystals, however, contaminates the SPECT data, and can generate image artifacts and introduce quantification error. The objectives of this study were to evaluate the effectiveness of an LSO background subtraction method, and to estimate the minimal detectable target activity (MDTA) of the image object for SPECT imaging. For LSO background correction, the LSO contribution in an image study was estimated based on a pre-measured long LSO background scan and subtracted prior to the image reconstruction. The MDTA was estimated in two ways. The empirical MDTA (eMDTA) was estimated from screening the tomographic images at different activity levels. The calculated MDTA (cMDTA) was estimated using a formula based on applying a modified Currie equation on an average projection dataset. Two simulated and two experimental phantoms with different object activity distributions and levels were used in this study. The results showed that LSO background adds concentric ring artifacts to the reconstructed image, and the simple subtraction method can effectively remove these artifacts; the effect of the correction was more visible when the object activity level was near or above the eMDTA. For the four phantoms studied, the cMDTA was consistently about five times the corresponding eMDTA. In summary, we implemented a simple LSO background subtraction method and demonstrated its effectiveness. The projection-based calculation formula yielded MDTA results that closely correlate with those obtained empirically and may have predictive value for imaging applications.
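A schematic of the two quantitative steps above, assuming projection data stored as numpy arrays: the LSO background measured in a long blank scan is scaled by scan duration and subtracted, and a Currie-style detection limit is computed from the background counts (the paper's calculated MDTA uses a modified Currie equation on an average projection dataset; the constants below are the textbook ones).

    import numpy as np

    def subtract_lso_background(proj, t_scan, bkg_proj, t_bkg):
        # scale the long background acquisition to the study duration
        return proj - bkg_proj * (t_scan / t_bkg)

    def currie_detection_limit(bkg_counts):
        # Currie detection limit L_D in counts for background B
        return 2.71 + 4.65 * np.sqrt(bkg_counts)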
Liu, Ruijuan; Wang, Mengmeng; Ding, Li
2014-10-01
Menadione (VK3), an essential fat-soluble naphthoquinone, plays very important physiological and pathological roles, but its detection and quantification are challenging. Herein, a new method was developed for quantification of VK3 in human plasma by liquid chromatography-tandem mass spectrometry (LC-MS/MS) after derivatization with 3-mercaptopropionic acid via Michael addition reaction. The derivative was identified by the mass spectra and the derivatization conditions were optimized by considering different parameters. The method demonstrated high sensitivity, with a low limit of quantification of 0.03 ng mL(-1) for VK3, which is about 33-fold better than that for the direct analysis of the underivatized compound. The method also had good precision and reproducibility. It was applied in the determination of basal VK3 in human plasma and a clinical pharmacokinetic study of menadiol sodium diphosphate. Furthermore, the quantification of VK3 using LC-MS/MS is reported in this paper for the first time, and it will provide an important strategy for further research on VK3 and menadione analogs. Copyright © 2014 Elsevier B.V. All rights reserved.
HPLC Quantification of astaxanthin and canthaxanthin in Salmonidae eggs.
Tzanova, Milena; Argirova, Mariana; Atanasov, Vasil
2017-04-01
Astaxanthin and canthaxanthin are naturally occurring antioxidants referred to as xanthophylls. They are used as food additives in fish farms to improve the organoleptic qualities of salmonid products and to prevent reproductive diseases. This study reports the development and single-laboratory validation of a rapid method for quantification of astaxanthin and canthaxanthin in eggs of rainbow trout (Oncorhynchus mykiss) and brook trout (Salvelinus fontinalis M.). An advantage of the proposed method is the perfect combination of selective extraction of the xanthophylls and analysis of the extract by high-performance liquid chromatography and photodiode array detection. The method validation was carried out in terms of linearity, accuracy, precision, recovery and limits of detection and quantification. The method was applied for simultaneous quantification of the two xanthophylls in eggs of rainbow trout and brook trout after their selective extraction. The results show that astaxanthin accumulations in salmonid fish eggs are larger than those of canthaxanthin. As the levels of these two xanthophylls affect fish fertility, this method can be used to improve the nutritional quality and to minimize the occurrence of the M74 syndrome in fish populations. Copyright © 2016 John Wiley & Sons, Ltd.
Lin, Qiuping; Huang, Xiaoqiong; Xu, Yue; Yang, Xiaoping
2016-01-01
Purpose Facial asymmetry often persists even after mandibular deviation is corrected by the bilateral sagittal split ramus osteotomy (BSSRO) operation, since the reference facial sagittal plane for the asymmetry analysis is usually set up before the mandibular menton (Me) point correction. Our aim is to develop a predictive and quantitative method to assess the true asymmetry of the mandible after a midline correction performed by a virtual BSSRO, and to verify its availability by evaluation of the post-surgical improvement. Patients and Methods A retrospective cohort study was conducted at the Hospital of Stomatology, Sun Yat-sen University (China) of patients with pure hemi-mandibular elongation (HE) from September 2010 through May 2014. Mandibular models were reconstructed from CBCT images of patients with pre-surgical orthodontic treatment. After mandibular de-rotation and midline alignment with virtual BSSRO, the elongated hemi-mandible was virtually mirrored along the facial sagittal plane. The residual asymmetry, defined by the superimposition and Boolean operation of the mirrored elongated side on the normal side, was calculated, including the volumetric differences and the length of transversal and vertical asymmetry discrepancy. For more specific evaluation, both sides of the hemi-mandible were divided into the symphysis and parasymphysis (SP), mandibular body (MB), and mandibular angle (MA) regions. Other clinical variables included deviation of the Me point, dental midline and molar relationship. The volumetric discrepancy between the two sides of the post-surgical hemi-mandible was also calculated to verify the availability of the virtual surgery. Paired t-tests were computed and the P value was set at .05. Results This study included 45 patients. The volume differences were 407.8±64.8 mm3, 2139.1±72.5 mm3, and 422.5±36.9 mm3; residual average transversal discrepancy, 1.9 mm, 1.0 mm, and 2.2 mm; average vertical discrepancy, 1.1 mm, 2.2 mm, and 2.2 mm (before virtual surgery). The post-surgical volumetric measurement showed no statistical differences between bilateral mandibular regions. Conclusions Mandibular asymmetry persists after Me point correction. A 3D quantification of mandibular residual asymmetry after Me point correction and mandible de-rotation with virtual BSSRO sets up a true reference mirror plane for comprehensive asymmetry assessment of bilateral mandibular structure, thereby providing accurate guidance for orthognathic surgical planning. PMID:27571364
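A voxel-level sketch of the residual-asymmetry measurement, assuming binary hemi-mandible masks already de-rotated and aligned so that the facial sagittal plane is the mid-plane of the last array axis; the Boolean operation chosen here (symmetric difference) is one plausible reading of the superimposition described above.

    import numpy as np

    def residual_asymmetry_volume(normal_side, elongated_side, voxel_mm3):
        mirrored = elongated_side[:, :, ::-1]          # mirror across sagittal plane
        diff = np.logical_xor(normal_side, mirrored)   # non-overlapping voxels
        return diff.sum() * voxel_mm3                  # volumetric discrepancy in mm3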
Cankar, Katarina; Štebih, Dejan; Dreo, Tanja; Žel, Jana; Gruden, Kristina
2006-01-01
Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs), quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and can also be degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure a high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve; therefore, similarity of PCR efficiency for the sample and the standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition, 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess the variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results; it was therefore chosen as the primary criterion by which to evaluate quality and performance on different matrixes and extraction techniques. The effect of PCR efficiency on the resulting GMO content is demonstrated. Conclusion The crucial influence of extraction technique and sample matrix properties on the results of GMO quantification is demonstrated. Appropriate extraction techniques for each matrix need to be determined to achieve accurate DNA quantification. Nevertheless, since it is impossible in the area of food and feed testing to define a matrix with fixed specificities, strict quality controls need to be introduced to monitor PCR. The results of our study are also applicable to other fields of quantitative testing by real-time PCR. PMID:16907967
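Since the quantification rests on a standard curve and on matching amplification efficiencies between sample and reference, the underlying arithmetic is easy to sketch. The following is a minimal illustration, not the authors' software; all Ct values, copy numbers, and function names are hypothetical.

```python
import numpy as np

# Hypothetical Ct values for a dilution series of certified reference material.
log10_copies = np.log10([1e5, 1e4, 1e3, 1e2, 1e1])
ct_standard = np.array([18.1, 21.5, 24.9, 28.4, 31.8])

# Fit the standard curve Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(log10_copies, ct_standard, 1)

# Amplification efficiency: E = 10^(-1/slope) - 1 (1.0 corresponds to 100%).
efficiency = 10 ** (-1.0 / slope) - 1.0

def copies_from_ct(ct):
    """Read the copy number of an unknown sample off the standard curve."""
    return 10 ** ((ct - intercept) / slope)

# GMO content as the ratio of transgene to endogenous-gene copy numbers.
gmo_percent = 100 * copies_from_ct(27.0) / copies_from_ct(20.3)
print(f"efficiency = {efficiency:.2f}, GMO content = {gmo_percent:.2f}%")
```

A sample whose own amplification efficiency deviates from that of the standards (e.g., due to matrix-derived inhibitors) violates the assumption behind this read-off, which is exactly why PCR efficiency is used as the primary quality criterion here.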
Correlated displacement-T2 MRI by means of a Pulsed Field Gradient-Multi Spin Echo Method.
Windt, Carel W; Vergeldt, Frank J; Van As, Henk
2007-04-01
A method for correlated displacement-T2 imaging is presented. A Pulsed Field Gradient-Multi Spin Echo (PFG-MSE) sequence is used to record T2-resolved propagators on a voxel-by-voxel basis, making it possible to perform single-voxel correlated displacement-T2 analyses. In spatially heterogeneous media the method thus gives access to sub-voxel information about displacement and T2 relaxation. The sequence is demonstrated using a number of flow-conducting model systems: a tube with flowing water of variable intrinsic T2 values, mixing fluids of different T2 values in an "X"-shaped connector, and an intact living plant. PFG-MSE can be applied to yield information about the relation between flow, pore size and exchange behavior, and can aid volume flow quantification by making it possible to correct for T2 relaxation during the displacement labeling period Delta in PFG displacement imaging methods. Correlated displacement-T2 imaging can be of special interest for a number of research subjects, such as the flow of liquids and mixtures of liquids, or liquids and solids, moving through microscopic conduits of different sizes (e.g., plants, porous media, bioreactors, biomats).
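The T2 correction mentioned above amounts to undoing the mono-exponential decay accrued during the labeling period Delta. A minimal sketch, assuming a known voxel-wise T2 from the multi-spin-echo train; the function name and numbers are illustrative:

```python
import numpy as np

def t2_corrected_signal(measured, delta_ms, t2_ms):
    """Undo mono-exponential T2 decay accrued during the labeling period Delta.

    The transverse magnetization decays as exp(-Delta/T2), so multiplying the
    measured propagator amplitude by exp(+Delta/T2) recovers the
    relaxation-free signal used for volume flow quantification.
    """
    return np.asarray(measured) * np.exp(delta_ms / t2_ms)

# Example: a voxel measured with Delta = 50 ms and a fitted T2 of 120 ms.
print(t2_corrected_signal(0.62, delta_ms=50.0, t2_ms=120.0))
```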
Mawson, Deborah H; Jeffrey, Keon L; Teale, Philip; Grace, Philip B
2018-06-19
A rapid, accurate and robust method for the determination of catechin (C), epicatechin (EC), gallocatechin (GC), epigallocatechin (EGC), catechin gallate (Cg), epicatechin gallate (ECg), gallocatechin gallate (GCg) and epigallocatechin gallate (EGCg) concentrations in human plasma has been developed. The method utilises protein precipitation following enzyme hydrolysis, with chromatographic separation and detection using reversed-phase liquid chromatography - tandem mass spectrometry (LC-MS/MS). Traditional issues such as lengthy chromatographic run times, sample and extract stability, and lack of suitable internal standards have been addressed. The method has been evaluated using a comprehensive validation procedure, confirming linearity over appropriate concentration ranges, and inter-/intra-batch precision and accuracies within suitable thresholds (precisions within 13.8% and accuracies within 12.4%). Recoveries of analytes were found to be consistent between different matrix samples, were compensated for using suitable internal markers, and were within the performance of the instrumentation used. Similarly, chromatographic interferences were corrected using the selected internal markers. Stability of all analytes in matrix is demonstrated over 32 days and throughout the extraction conditions. This method is suitable for high-throughput sample analysis studies. This article is protected by copyright. All rights reserved.
Provost, Karine; Leblond, Antoine; Gauthier-Lemire, Annie; Filion, Édith; Bahig, Houda; Lord, Martin
2017-09-01
Planar perfusion scintigraphy with 99mTc-labeled macroaggregated albumin is often used for pretherapy quantification of regional lung perfusion in lung cancer patients, particularly those with poor respiratory function. However, subdividing lung parenchyma into rectangular regions of interest, as done on planar images, is a poor reflection of true lobar anatomy. New tridimensional methods using SPECT and SPECT/CT have been introduced, including semiautomatic lung segmentation software. The present study evaluated inter- and intraobserver agreement on quantification using SPECT/CT software and compared the results for regional lung contribution obtained with SPECT/CT and planar scintigraphy. Methods: Thirty lung cancer patients underwent ventilation-perfusion scintigraphy with 99mTc-macroaggregated albumin and 99mTc-Technegas. The regional lung contribution to perfusion and ventilation was measured on both planar scintigraphy and SPECT/CT using semiautomatic lung segmentation software by 2 observers. Interobserver and intraobserver agreement for the SPECT/CT software was assessed using the intraclass correlation coefficient, Bland-Altman plots, and absolute differences in measurements. Measurements from planar and tridimensional methods were compared using the paired-sample t test and mean absolute differences. Results: Intraclass correlation coefficients were in the excellent range (above 0.9) for both interobserver and intraobserver agreement using the SPECT/CT software. Bland-Altman analyses showed very narrow limits of agreement. Absolute differences were below 2.0% in 96% of both interobserver and intraobserver measurements. There was a statistically significant difference between planar and SPECT/CT methods (P < 0.001) for quantification of perfusion and ventilation for all right lung lobes, with a maximal mean absolute difference of 20.7% for the right middle lobe. There was no statistically significant difference in quantification of perfusion and ventilation for the left lung lobes using either method; however, absolute differences reached 12.0%. The total right and left lung contributions were similar for the two methods, with a mean difference of 1.2% for perfusion and 2.0% for ventilation. Conclusion: Quantification of regional lung perfusion and ventilation using SPECT/CT-based lung segmentation software is highly reproducible. This tridimensional method yields statistically significant differences in measurements for right lung lobes when compared with planar scintigraphy. We recommend that SPECT/CT-based quantification be used for all lung cancer patients undergoing pretherapy evaluation of regional lung function. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
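For reference, the Bland-Altman limits of agreement used here to assess observer agreement reduce to a short calculation. A minimal sketch with invented lobar perfusion percentages, not the study's measurements:

```python
import numpy as np

def bland_altman_limits(a, b):
    """Mean bias and 95% limits of agreement between two observers."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)   # 95% limits assume ~normal diffs
    return bias, bias - half_width, bias + half_width

# Illustrative inter-observer lobar perfusion percentages (not study data).
obs1 = [18.2, 9.5, 24.1, 27.3, 20.9]
obs2 = [18.0, 9.9, 23.8, 27.6, 20.7]
print(bland_altman_limits(obs1, obs2))
```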
A multi-center study benchmarks software tools for label-free proteome quantification
Gillet, Ludovic C; Bernhardt, Oliver M.; MacLean, Brendan; Röst, Hannes L.; Tate, Stephen A.; Tsou, Chih-Chiang; Reiter, Lukas; Distler, Ute; Rosenberger, George; Perez-Riverol, Yasset; Nesvizhskii, Alexey I.; Aebersold, Ruedi; Tenzer, Stefan
2016-01-01
The consistent and accurate quantification of proteins by mass spectrometry (MS)-based proteomics depends on the performance of instruments, acquisition methods and data analysis software. In collaboration with the software developers, we evaluated OpenSWATH, SWATH2.0, Skyline, Spectronaut and DIA-Umpire, five of the most widely used software methods for processing data from SWATH-MS (sequential window acquisition of all theoretical fragment ion spectra), a method that uses data-independent acquisition (DIA) for label-free protein quantification. We analyzed high-complexity test datasets from hybrid proteome samples of defined quantitative composition acquired on two different MS instruments using different SWATH isolation window setups. For consistent evaluation we developed LFQbench, an R package to calculate metrics of precision and accuracy in label-free quantitative MS, and report the identification performance, robustness and specificity of each software tool. Our reference datasets enabled developers to improve their software tools. After optimization, all tools provided highly convergent identification and reliable quantification performance, underscoring their robustness for label-free quantitative proteomics. PMID:27701404
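The core precision and accuracy metrics of such a benchmark are simple to state: for a hybrid-proteome species spiked at a known ratio, accuracy is the deviation of the observed log-ratios from the expected value, and precision is their spread. A minimal Python sketch of that idea (LFQbench itself is an R package; all numbers below are invented):

```python
import numpy as np

def lfq_metrics(measured_ratios, expected_ratio):
    """Accuracy and precision of log2 H:L ratios for one spiked species.

    Accuracy: median deviation of observed log2 ratios from the expected one.
    Precision: standard deviation of the observed log2 ratios.
    """
    log_r = np.log2(measured_ratios)
    accuracy = np.median(log_r) - np.log2(expected_ratio)
    precision = np.std(log_r, ddof=1)
    return accuracy, precision

# Illustrative protein-level ratios for a species spiked at 2:1 (not real data).
print(lfq_metrics([2.1, 1.9, 2.3, 1.8, 2.0], expected_ratio=2.0))
```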
Accurate proteome-wide protein quantification from high-resolution 15N mass spectra
2011-01-01
In quantitative mass spectrometry-based proteomics, the metabolic incorporation of a single source of 15N-labeled nitrogen has many advantages over using stable isotope-labeled amino acids. However, the lack of a robust computational framework for analyzing the resulting spectra has impeded wide use of this approach. We have addressed this challenge by introducing a new computational methodology for analyzing 15N spectra in which quantification is integrated with identification. Application of this method to an Escherichia coli growth transition reveals significant improvement in quantification accuracy over previous methods. PMID:22182234
Jiang, Tingting; Dai, Yongmei; Miao, Miao; Zhang, Yue; Song, Chenglin; Wang, Zhixu
2015-07-01
To evaluate the usefulness and efficiency of a novel dietary method among urban pregnant women. Sixty-one pregnant women were recruited from the ward and provided with a meal accurately weighed before cooking. The meal was photographed from three different angles before and after eating. The subjects were also interviewed for 24 h dietary recall by the investigators. Food weighing, image quantification and 24 h dietary recall were conducted by investigators from three different groups, and the results were isolated from each other. Food consumption was analyzed on the basis of classification and total summation. Nutrient intake from the meal was calculated for each subject. The data obtained from the dietary recall and the image quantification were compared with the actual values. Correlation and regression analyses were carried out on values between the weighing method and image quantification as well as dietary recall. In total, twenty-three kinds of food, including rice, vegetables, fish, meats and soy bean curd, were included in the experimental meal for the study. Compared with data from 24 h dietary recall (r = 0.413, P < 0.05), food weights estimated by image quantification (r = 0.778, P < 0.05, n = 308) correlated more strongly with the weighed data and showed a more concentrated linear distribution. The mean absolute difference between image quantification and the weighing method across all foods was 77.23 ± 56.02 (P < 0.05, n = 61), much smaller than the difference between 24 h recall and the weighing method (172.77 ± 115.18). Values of almost all nutrients, including energy, protein, fat, carbohydrate, vitamin A, vitamin C, calcium, iron and zinc, calculated from image-quantified food weights were closer to the weighed data than those from 24 h dietary recall (P < 0.01). Bland-Altman analysis showed that the majority of the measurements of nutrient intake were scattered along the mean difference line and close to the line of equality (difference = 0). The plots show fairly good agreement between estimated and actual food consumption, indicating that the differences (including the outliers) were random, did not exhibit any systematic bias, and were consistent over different levels of mean food amount. In addition, the questionnaire showed that fifty-six pregnant women considered image quantification less time-consuming and burdensome than 24 h recall, and fifty-eight of them would like to use image quantification to know their dietary status. The novel instant-photography method (image quantification) for dietary assessment is more effective than conventional 24 h dietary recall and can yield food intake values close to weighed data.
Quantification of taurine in energy drinks using ¹H NMR.
Hohmann, Monika; Felbinger, Christine; Christoph, Norbert; Wachter, Helmut; Wiest, Johannes; Holzgrabe, Ulrike
2014-05-01
The consumption of so-called energy drinks is increasing, especially among adolescents. These beverages commonly contain considerable amounts of the amino sulfonic acid taurine, which is associated with a multitude of physiological effects. The customary method to control the legal limit of taurine in energy drinks is LC-UV/vis with postcolumn derivatization using ninhydrin. In this paper we describe the quantification of taurine in energy drinks by ¹H NMR as an alternative to existing methods of quantification. Variation of pH values revealed the separation of a distinct taurine signal in ¹H NMR spectra, which was used for integration and quantification. Quantification was performed using external calibration (R² > 0.9999; linearity verified by Mandel's fitting test with a 95% confidence level) and PULCON. Taurine concentrations in 20 different energy drinks were analyzed using both ¹H NMR and LC-UV/vis. The deviation between ¹H NMR and LC-UV/vis results was always below the expanded measurement uncertainty of 12.2% for the LC-UV/vis method (95% confidence level) and was at worst 10.4%. Given the close agreement with LC-UV/vis data and adequate recovery rates (ranging between 97.1% and 108.2%), ¹H NMR measurement presents a suitable method to quantify taurine in energy drinks. Copyright © 2013 Elsevier B.V. All rights reserved.
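External calibration as used here boils down to a linear fit of signal integral against standard concentration, inverted for the unknown. A minimal sketch with invented integrals (the alternative PULCON route, which references a single external spectrum, is not shown):

```python
import numpy as np

# Illustrative calibration: integrals of the taurine 1H signal for external
# standards of known concentration (all values are made up).
conc_g_per_l = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
integral = np.array([0.98, 2.01, 4.05, 5.97, 8.02])

# Linear calibration: integral = slope * concentration + intercept.
slope, intercept = np.polyfit(conc_g_per_l, integral, 1)
r_squared = np.corrcoef(conc_g_per_l, integral)[0, 1] ** 2

def taurine_concentration(sample_integral):
    """Invert the calibration line for an unknown sample."""
    return (sample_integral - intercept) / slope

print(f"R^2 = {r_squared:.5f}, sample = {taurine_concentration(5.1):.2f} g/L")
```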
Louwagie, Mathilde; Kieffer-Jaquinod, Sylvie; Dupierris, Véronique; Couté, Yohann; Bruley, Christophe; Garin, Jérôme; Dupuis, Alain; Jaquinod, Michel; Brun, Virginie
2012-07-06
Accurate quantification of pure peptides and proteins is essential for biotechnology, clinical chemistry, proteomics, and systems biology. The reference method to quantify peptides and proteins is amino acid analysis (AAA). This consists of an acidic hydrolysis followed by chromatographic separation and spectrophotometric detection of amino acids. Although widely used, this method displays some limitations, in particular the need for large amounts of starting material. Driven by the need to quantify isotope-dilution standards used for absolute quantitative proteomics, particularly stable isotope-labeled (SIL) peptides and PSAQ proteins, we developed a new AAA assay (AAA-MS). This method requires neither derivatization nor chromatographic separation of amino acids. It is based on rapid microwave-assisted acidic hydrolysis followed by high-resolution mass spectrometry analysis of amino acids. Quantification is performed by comparing MS signals from labeled amino acids (SIL peptide- and PSAQ-derived) with those of unlabeled amino acids originating from co-hydrolyzed NIST standard reference materials. For both SIL peptides and PSAQ standards, AAA-MS quantification results were consistent with classical AAA measurements. Compared to AAA assay, AAA-MS was much faster and was 100-fold more sensitive for peptide and protein quantification. Finally, thanks to the development of a labeled protein standard, we also extended AAA-MS analysis to the quantification of unlabeled proteins.
Walker, S. Hunter; Taylor, Amber D.; Muddiman, David C.
2013-01-01
The INLIGHT strategy for the sample preparation, data analysis, and relative quantification of N-linked glycans is presented. Glycans are derivatized with either natural (L) or stable-isotope labeled (H) hydrazide reagents and analyzed using reversed phase liquid chromatography coupled online to a Q Exactive mass spectrometer. A simple glycan ladder, maltodextrin, is first used to demonstrate the relative quantification strategy in samples with negligible analytical and biological variability. It is shown that, after a molecular weight correction for isotopic overlap and a post-acquisition normalization of the data to account for systematic variability, a plot of the experimental H:L ratio vs. the calculated H:L ratio exhibits a correlation of unity for maltodextrin samples mixed in different ratios. We also demonstrate that the INLIGHT approach can quantify species over four orders of magnitude in ion abundance. The INLIGHT strategy is further demonstrated in pooled human plasma, where it is shown that the post-acquisition normalization is more effective than using a single spiked-in internal standard. Finally, changes in glycosylation can be detected in complex biological matrices spiked with a glycoprotein. The ability to spike in a glycoprotein and detect change at the glycan level validates both the sample preparation and the data analysis strategy, making INLIGHT an invaluable relative quantification strategy for the field of glycomics. PMID:23860851
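A post-acquisition normalization of H:L ratios of the kind described can be as simple as a global median scaling. A minimal sketch of that one step, assuming the isotopic-overlap correction has already been applied; the intensities and the function name are hypothetical:

```python
import numpy as np

def normalize_hl_ratios(h_intensities, l_intensities):
    """Post-acquisition normalization of H:L glycan ratios.

    Global median scaling of the raw H:L ratios is one simple way to remove
    systematic variability (e.g., unequal mixing of the H and L channels).
    The published strategy also corrects for isotopic overlap beforehand.
    """
    ratios = np.asarray(h_intensities, float) / np.asarray(l_intensities, float)
    return ratios / np.median(ratios)

# Illustrative channel intensities for five glycans (not real data).
print(normalize_hl_ratios([210, 95, 400, 130, 58], [100, 50, 210, 60, 30]))
```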
Gibby, Jacob T; Njeru, Dennis K; Cvetko, Steve T; Heiny, Eric L; Creer, Andrew R; Gibby, Wendell A
We correlate and evaluate the accuracy of accepted anthropometric methods of percent body fat (%BF) quantification, namely, hydrostatic weighing (HW) and air displacement plethysmography (ADP), with two automatic adipose tissue quantification methods using computed tomography (CT). Twenty volunteer subjects (14 men, 6 women) received head-to-toe CT scans. Hydrostatic weighing and ADP were obtained from 17 and 12 subjects, respectively. The CT data were converted using two separate algorithms, the Schneider method and the Beam method, which map Hounsfield units to their respective tissue densities. The overall mass and %BF of both methods were compared with HW and ADP. When comparing ADP to CT data using the Schneider method and Beam method, correlations were r = 0.9806 and 0.9804, respectively. Paired t tests indicated there were no statistically significant biases. Additionally, observed average differences in %BF between ADP and the Schneider method and the Beam method were 0.38% and 0.77%, respectively. The %BF measured from ADP, the Schneider method, and the Beam method all had significantly higher mean differences when compared with HW (3.05%, 2.32%, and 1.94%, respectively). We have shown that total body mass correlates remarkably well with both the Schneider method and Beam method of mass quantification. Furthermore, %BF calculated with the Schneider method and Beam method CT algorithms correlates remarkably well with ADP. The application of these CT algorithms has utility in further research to accurately stratify risk factors with periorgan, visceral, and subcutaneous types of adipose tissue, and has the potential for significant clinical application.
Amarathunga, J P; Schuetz, M A; Yarlagadda, K V D; Schmutz, B
2015-04-01
Intramedullary nailing is the standard fixation method for displaced diaphyseal fractures of the tibia. Selection of the correct nail insertion point is important for axial alignment of bone fragments and to avoid iatrogenic fractures. However, the standard entry point (SEP) may not always optimise the bone-nail fit due to geometric variations of bones. This study aimed to investigate the optimal entry point for a given bone-nail pair using the fit quantification software tool previously developed by the authors. The misfit was quantified for 20 bones with two nail designs (ETN and ETN-Proximal Bend) for the SEP and five entry points located 5 mm and 10 mm away from the SEP. The SEP was the optimal entry point for 50% of the bones used. For the remaining bones, the optimal entry point was located 5 mm away from the SEP, which improved the overall fit by 40% on average. However, entry points 10 mm away from the SEP doubled the misfit. The optimised bone-nail fit can be achieved through the SEP and within the range of a 5 mm radius, except posteriorly. The study results suggest that the optimal entry point should be selected by considering the fit during insertion and not only at the final position. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Wang, Jinpeng; Wei, Ren; Tian, Yaoqi; Yang, Na; Xu, Xueming; Zimmermann, Wolfgang; Jin, Zhengyu
2015-05-20
Large-ring cyclodextrins (LR-CDs) have a number of intriguing properties for potential use in the pharmaceutical and food industries. To date, no colorimetric method has been reported for LR-CD content quantification. In this study, triple wavelength colorimetry (TWC) and orthogonal-function spectrophotometry (OFS) have been successfully applied to determine ingredient concentrations in a mixture of amylose and LR-CDs. Both TWC and OFS yielded precise amylose content data in good agreement with expected values. For quantification of LR-CD content, OFS provided a higher accuracy than TWC, which resulted in a slight over-determination. As a comparison, single-wavelength colorimetry performed at the corresponding absorption maximum led to a significant over-determination of both amylose and LR-CD contents. The validity of TWC and OFS allowed their application for discriminative detection of the cyclization and total activity of a 4-α-glucanotransferase (4αGTase) from Thermus aquaticus regarding the synthesis of LR-CDs and the conversion of amylose to small molecules, respectively. High pressure size exclusion chromatography analysis of the post-reaction mixtures following 4αGTase-catalyzed conversion of amylose revealed the presence of linear malto-oligosaccharides in the LR-CD fraction. By introducing a correction factor, the interference caused by linear malto-oligosaccharides was eliminated for a more accurate determination of LR-CD cyclization activity. Copyright © 2015 Elsevier Ltd. All rights reserved.
Neudecker, Denise; Taddeucci, Terry Nicholas; Haight, Robert Cameron; ...
2016-01-06
The spectrum of neutrons emitted promptly after 239Pu(n,f)—a so-called prompt fission neutron spectrum (PFNS)—is a quantity of high interest, for instance, for reactor physics and global security. However, there are only a few experimental data sets available that are suitable for evaluations. In addition, some of those data sets differ by more than their 1-σ uncertainty boundaries. We present the results of MCNP studies indicating that these differences are partly caused by underestimated multiple scattering contributions, over-corrected background, and inconsistent deconvolution methods. A detailed uncertainty quantification for suitable experimental data was undertaken including these effects, and test-evaluations were performed with the improved uncertainty information. The test-evaluations illustrate that the inadequately estimated effects and detailed uncertainty quantification have an impact on the evaluated PFNS and associated uncertainties, as well as on the neutron multiplicity of selected critical assemblies. A summary of the data and documentation needed to improve the quality of the experimental database is provided based on the results of the simulations and test-evaluations. Furthermore, given the possibly substantial distortion of the PFNS by multiple scattering and background effects, special care should be taken to reduce these effects in future measurements, e.g., by measuring the 239Pu PFNS as a ratio to either the 235U or 252Cf PFNS.
Doumayrou, Juliette; Sheber, Melissa; Bonning, Bryony C; Miller, W Allen
2017-02-01
Pea enation mosaic virus 1 (PEMV1) and Pea enation mosaic virus 2 (PEMV2) are two viruses in an obligate symbiosis that cause pea enation mosaic disease, mainly in plants of the Fabaceae family. This virus system is a valuable model for investigating plant virus replication, movement and vector transmission. Here we describe growth conditions, virus detection methods, and virus accumulation behavior. To measure the accumulation and movement of PEMV1 and PEMV2 in plants during the course of infection, we developed a quantitative real-time one-step reverse transcription PCR procedure using SYBR® Green technology. Viral primers were designed to anneal to conserved but distinct regions in the RNA-dependent RNA polymerase gene of each virus. Moreover, normalization of viral accumulation was performed to correct for sample-to-sample variation by designing primers to two different Pisum sativum housekeeping genes: actin and β-tubulin. Transcript levels for these housekeeping genes did not change significantly in response to PEMV infection. Conditions were established for maximum PCR efficiency for each gene, and quantification used Qubit® technology. Both viruses reached maximum accumulation around 21 days post-inoculation of pea plants. These results provide valuable tools and knowledge to allow reproducible studies of this emerging model virus complex. Copyright © 2016 Elsevier B.V. All rights reserved.
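Normalizing the viral signal to housekeeping genes such as actin and β-tubulin is conventionally a ΔCt-style calculation. A minimal sketch, assuming known amplification efficiencies; the Ct values below are invented, not data from the study:

```python
def relative_viral_load(ct_virus, ct_reference, eff_virus=1.0, eff_ref=1.0):
    """Viral RNA level normalized to a housekeeping gene.

    Relative quantity = (1+Ev)^(-Ct_virus) / (1+Er)^(-Ct_ref); with 100%
    efficiencies (E = 1.0) this reduces to 2^(Ct_ref - Ct_virus).
    """
    return (1 + eff_virus) ** (-ct_virus) / (1 + eff_ref) ** (-ct_reference)

# Illustrative Ct pairs at two time points; the ratio gives the fold change
# in normalized viral load between the time points.
early = relative_viral_load(ct_virus=28.7, ct_reference=19.1)
late = relative_viral_load(ct_virus=22.4, ct_reference=18.9)
print(f"fold change = {late / early:.1f}")
```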
Lu, Aiming; Atkinson, Ian C.; Vaughn, J. Thomas; Thulborn, Keith R.
2011-12-01
The rapid biexponential transverse relaxation of the sodium MR signal from brain tissue requires efficient k-space sampling for quantitative imaging in a time that is acceptable for human subjects. The flexible twisted projection imaging (flexTPI) sequence has been shown to be suitable for quantitative sodium imaging with an ultra-short echo time to minimize signal loss. The fidelity of the k-space center location is affected by the readout gradient timing errors on the three physical axes, which is known to cause image distortion for projection-based acquisitions. This study investigated the impact of these timing errors on the voxel-wise accuracy of the tissue sodium concentration (TSC) bioscale measured with the flexTPI sequence. Our simulations show greater than 20% spatially varying quantification errors when the gradient timing errors are larger than 10 μs on all three axes. The quantification is more tolerant of gradient timing errors on the Z-axis. An existing method was used to measure the gradient timing errors with <1 μs error. The gradient timing error measurement is shown to be RF coil dependent, and timing error differences of up to ~16 μs have been observed between different RF coils used on the same scanner. The measured timing errors can be corrected prospectively or retrospectively to obtain accurate TSC values.
Hrdlička, Aleš; Hegrová, Jitka; Novotný, Karel; Kanický, Viktor; Prochazka, David; Novotný, Jan; Modlitbová, Pavlína; Sládková, Lucia; Pořízka, Pavel; Kaiser, Jozef
2018-04-01
A LIBS system operating at 532 nm was optimized and used for sulfur determination in concrete samples. The influence of a He atmosphere in a gas-tight chamber (1000-200 mbar) on the S I 921.29 nm line sensitivity, signal-to-background ratio and signal-to-noise ratio was studied at gate delays of 100-2000 ns. A wide range of gate delays, from 500 to about 1000 ns, and pressures from several hundred mbar up to atmospheric pressure can be used for the desired detection of sulfur. The LIBS quantification was done using a simple calibration method. A synthetic limestone enriched with defined amounts of sodium sulfate was newly employed for direct quantification of S in concrete. This powder material was pressed into pellets and ablated with the LIBS system. The average content of sulfur as SO3 in the samples was 0.41-0.70 wt% by LIBS and 0.43-0.61 wt% by a reference standard procedure employing gravimetry and Inductively Coupled Plasma Triple Quad Mass Spectrometry (ICP-QQQMS). The uncertainty of the LIBS results also covers the dispersion of the points about the calibration line and ranges from 16 to 28% at the 95% probability level. The uncertainty of the ICP-QQQMS results was almost 10%. No correction for the different signal response of the limestone and the concrete was necessary.
Dong, Tao; Yu, Liang; Gao, Difeng; Yu, Xiaochen; Miao, Chao; Zheng, Yubin; Lian, Jieni; Li, Tingting; Chen, Shulin
2015-12-01
Accurate determination of fatty acid contents is routinely required in microalgal and yeast biofuel studies. A method of rapid in situ fatty acid methyl ester (FAME) derivatization directly from wet fresh microalgal and yeast biomass was developed in this study. This method does not require prior solvent extraction or dehydration. FAMEs were prepared with a sequential alkaline hydrolysis (15 min at 85 °C) and acidic esterification (15 min at 85 °C) process. The resulting FAMEs were extracted into n-hexane and analyzed using gas chromatography. The effects of each processing parameter (temperature, reaction time, and water content) upon lipid quantification in the alkaline hydrolysis step were evaluated with a full factorial design. This method could tolerate water content up to 20% (v/v) of the total reaction volume, which equaled up to 1.2 mL of water in biomass slurry (with 0.05-25 mg of fatty acid). There were no significant differences in FAME quantification (p>0.05) between the standard AOAC 991.39 method and the proposed wet in situ FAME preparation method. This fatty acid quantification method is applicable to fresh wet biomass of a wide range of microalgae and yeast species.
[Progress in stable isotope labeled quantitative proteomics methods].
Zhou, Yuan; Shan, Yichu; Zhang, Lihua; Zhang, Yukui
2013-06-01
Quantitative proteomics is an important research field in the post-genomics era. There are two strategies for proteome quantification: label-free methods and stable isotope labeling methods, the latter having become the most important strategy for quantitative proteomics at present. In the past few years, a number of quantitative methods have been developed, supporting rapid progress in biological research. In this work, we discuss progress in stable isotope labeling methods for quantitative proteomics, covering both relative and absolute quantification, and then offer our perspective on the outlook for proteome quantification methods.
Tsukahara, Keita; Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Nishimaki-Mogami, Tomoko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi
2016-01-01
A real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) soybean event, MON87701. First, a standard plasmid for MON87701 quantification was constructed. The conversion factor (Cf) required to calculate the amount of genetically modified organism (GMO) was experimentally determined for a real-time PCR instrument. The determined Cf for the real-time PCR instrument was 1.24. For the evaluation of the developed method, a blind test was carried out in an inter-laboratory trial. The trueness and precision were evaluated as the bias and the reproducibility relative standard deviation (RSDr), respectively. The determined biases and RSDr values were less than 30% and 13%, respectively, at all evaluated concentrations. The limit of quantitation of the method was 0.5%, and the developed method should thus be applicable to practical analyses for the detection and quantification of MON87701.
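The role of the conversion factor Cf is to turn a measured copy-number ratio into the weight-based GMO percentage that legislation refers to. A minimal sketch of that arithmetic, using the Cf of 1.24 reported above; the copy numbers and function name are hypothetical:

```python
def gmo_weight_percent(event_copies, endogenous_copies, cf=1.24):
    """Convert a copy-number ratio to a GMO weight percentage.

    Cf is the event/endogenous copy-number ratio measured in pure GM seed,
    so dividing the sample's copy ratio by Cf rescales it to a weight basis.
    """
    return 100.0 * (event_copies / endogenous_copies) / cf

# Illustrative copy numbers from a real-time PCR run (hypothetical values).
print(f"{gmo_weight_percent(3.1e3, 2.4e5):.2f} % MON87701")
```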
Takabatake, Reona; Onishi, Mari; Koiwa, Tomohiro; Futo, Satoshi; Minegishi, Yasutaka; Akiyama, Hiroshi; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Furui, Satoshi; Kitta, Kazumi
2013-01-01
A novel real-time polymerase chain reaction (PCR)-based quantitative screening method was developed for three genetically modified soybeans: RRS, A2704-12, and MON89788. The 35S promoter (P35S) of cauliflower mosaic virus is introduced into RRS and A2704-12 but not MON89788. We therefore designed a screening method combining the quantification of P35S with the event-specific quantification of MON89788. The conversion factor (Cf) required to convert the amount of a genetically modified organism (GMO) from a copy number ratio to a weight ratio was determined experimentally. The trueness and precision were evaluated as the bias and the reproducibility relative standard deviation (RSDR), respectively. The determined RSDR values for the method were less than 25% for both targets. We consider that the developed method is suitable for the simple detection and approximate quantification of GMOs.
Wang, Hongrui; Wang, Cheng; Wang, Ying; ...
2017-04-05
This paper presents a Bayesian approach using the Metropolis-Hastings Markov chain Monte Carlo (MCMC) algorithm and applies it to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower reliable interval than the MLE confidence interval and thus a more precise estimation by using the related information from regional gage stations. As a result, the Bayesian MCMC method may be more favorable for uncertainty analysis and risk management.
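For orientation, a random-walk Metropolis-Hastings sampler fits in a dozen lines. The sketch below targets a toy posterior for a mean flow rate from synthetic Gaussian observations under a flat prior; it is not the watershed model of the paper, and all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_hastings(log_post, x0, n_iter=5000, step=0.5):
    """Random-walk Metropolis-Hastings sampler for a 1-D posterior."""
    samples, x, lp = np.empty(n_iter), x0, log_post(x0)
    for k in range(n_iter):
        prop = x + step * rng.standard_normal()   # symmetric proposal
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        samples[k] = x
    return samples

# Toy posterior: mean daily flow rate given Gaussian observations with
# known sigma and a flat prior (synthetic data, not the paper's model).
obs = rng.normal(12.0, 3.0, size=50)
log_post = lambda mu: -0.5 * np.sum((obs - mu) ** 2) / 3.0 ** 2
samples = metropolis_hastings(log_post, x0=10.0)
print(np.percentile(samples[1000:], [2.5, 50, 97.5]))  # 95% credible interval
```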
Kumberger, Peter; Durso-Cain, Karina; Uprichard, Susan L; Dahari, Harel; Graw, Frederik
2018-04-17
Mathematical models based on ordinary differential equations (ODE) that describe the population dynamics of viruses and infected cells have been an essential tool to characterize and quantify viral infection dynamics. Although an important aspect of viral infection is the dynamics of viral spread, which includes transmission by cell-free virions and direct cell-to-cell transmission, models used so far have either ignored cell-to-cell transmission completely or accounted for this process by simple mass-action kinetics between infected and uninfected cells. In this study, we show that the simple mass-action approach falls short when describing viral spread in a spatially defined environment. Using simulated data, we present a model extension that allows correct quantification of cell-to-cell transmission dynamics within a monolayer of cells. By considering the decreasing proportion of cells that can contribute to cell-to-cell spread as infection progresses, our extension accounts for the transmission dynamics on a single-cell level while still remaining applicable to standard population-based experimental measurements. While the ability to infer the proportion of cells infected by either of the transmission modes depends on the viral diffusion rate, the improved estimates obtained using our novel approach emphasize the need to correctly account for spatial aspects when analyzing viral spread.
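For context, the baseline ODE model with both transmission routes, where cell-to-cell spread enters as the simple mass-action term the authors argue against for spatial settings, can be sketched as follows; all parameter values are illustrative only:

```python
from scipy.integrate import solve_ivp

def viral_spread(t, y, beta_free, beta_cc, p, c, delta):
    """Target-cell model with cell-free (beta_free*T*V) and mass-action
    cell-to-cell (beta_cc*T*I) transmission terms."""
    T, I, V = y
    dT = -beta_free * T * V - beta_cc * T * I
    dI = beta_free * T * V + beta_cc * T * I - delta * I
    dV = p * I - c * V
    return [dT, dI, dV]

# Illustrative parameters; the paper's extension replaces the mass-action
# cell-to-cell term with a spatially informed correction.
sol = solve_ivp(viral_spread, (0, 10), [1e5, 1.0, 0.0],
                args=(1e-7, 1e-6, 10.0, 3.0, 0.5))
print(sol.y[:, -1])  # target cells, infected cells, virus at t = 10
```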
Culligan, Patrick J; Littman, Paul M; Salamon, Charbel G; Priestley, Jennifer L; Shariati, Amir
2010-11-01
We sought to track objective and subjective outcomes ≥1 year after placement of a transvaginal mesh system to correct prolapse. This was a retrospective cohort study of 120 women who received a transvaginal mesh procedure (Avaulta Solo, CR Bard Inc, Covington, GA). Outcomes were pelvic organ prolapse quantification values; Pelvic Floor Distress Inventory, Short Form 20/Pelvic Floor Impact Questionnaire, Short Form 7 scores; and a surgical satisfaction survey. "Surgical failure" was defined as pelvic organ prolapse quantification point >0 and/or any report of vaginal bulge. Of 120 patients, 116 (97%) were followed up for a mean of 14.4 months (range, 12-30). In all, 74 patients had only anterior mesh, 21 only posterior mesh, and 21 both meshes. The surgical cure rate was 81%. Surgical failure was more common if preoperative point C was ≥+2 (35% vs 16%; P = .04). Mesh erosion and de novo pain occurred in 11.7% and 3.3%, respectively. Pelvic Floor Distress Inventory, Short Form 20/Pelvic Floor Impact Questionnaire, Short Form 7 scores improved (P < .01). Objective and subjective improvements occurred at ≥1 year, yet failure rates were high when preoperative point C was ≥+2. Copyright © 2010 Mosby, Inc. All rights reserved.
Li, Maoyin; Butka, Emily; Wang, Xuemin
2014-10-10
Soybean seeds are an important source of vegetable oil and biomaterials. The content of individual triacylglycerol (TAG) species in soybean seeds is difficult to quantify in an accurate and rapid way. The present study establishes an approach to quantify TAG species in soybean seeds utilizing electrospray ionization tandem mass spectrometry with multiple neutral loss scans. Ten neutral loss scans were performed to detect the fatty acyl chains of TAG, including palmitic (P, 1650), linolenic (Ln, 1853), linoleic (L, 1852), oleic (O, 1851), stearic (S, 1850), eicosadienoic (2052), gadoleic (2051), arachidic (2050), erucic (2251), and behenic (2250). The abundances of the ten fatty acyl chains at 46 TAG masses (mass-to-charge ratio, m/z) were determined after isotopic deconvolution and correction by adjustment factors at each TAG mass. The direct sample infusion and multiple internal standards correction allowed rapid and accurate quantification of TAG species. Ninety-three TAG species were resolved and their levels were determined. The most abundant TAG species were LLL, OLL, LLLn, PLL, OLLn, OOL, POL, and SLL. Many new species were detected and quantified. As a result, this shotgun lipidomics approach should facilitate the study of TAG metabolism and genetic breeding of soybean seeds for desirable TAG content and composition.
Hatzoglou, C.; Radiguet, B.; Pareige, P.
2017-08-01
Oxide Dispersion Strengthened (ODS) steels are promising candidates for future nuclear reactors, partly due to the fine dispersion of nanoparticles they contain. Until now, there has been no consensus as to the nature of the nanoparticles, because their analysis pushes the techniques to their limits and consequently introduces some artefacts. In this study, the artefacts that occur during atom probe tomography analysis are quantified. This quantification reveals that the particle morphology, chemical composition and atomic density are biased. A model is suggested to correct these artefacts in order to obtain a fine and accurate characterization of the nanoparticles. The model is based on a volume fraction calculation and an analytical expression of the atomic density. The studied ODS steel then reveals nanoparticles composed of Y, Ti and O, with a core/shell structure. The shell is rich in Cr, and the Cr content of the shell depends on that of the matrix by a factor of 1.5. This study also shows that 15% of the atoms initially in the particles are not detected during the analysis; this affects only O atoms. The particle stoichiometry evolves from YTiO2 for the smallest particles observed (<2 nm) to Y2TiO5 for the largest (>8 nm).
MRI-guided brain PET image filtering and partial volume correction
Yan, Jianhua; Chu-Shern Lim, Jason; Townsend, David W.
2015-02-01
Positron emission tomography (PET) image quantification is a challenging problem due to the limited spatial resolution of the acquired data and the resulting partial volume effects (PVE), which depend on the size of the structure studied in relation to the spatial resolution and which may lead to over- or underestimation of the true tissue tracer concentration. In addition, it is usually necessary to perform image smoothing, either during image reconstruction or afterwards, to achieve a reasonable signal-to-noise ratio. Typically, isotropic Gaussian filtering (GF) is used for this purpose; however, the noise suppression comes at the cost of deteriorated spatial resolution. As hybrid imaging devices such as PET/MRI have become available, the complementary information derived from high-definition morphologic images can be used to improve the quality of PET images. In this study, we first propose an MRI-guided PET filtering method by adapting a recently proposed local linear model, and then incorporate PVE into the model to obtain a new partial volume correction (PVC) method that requires no parcellation of the MRI. Both the new filtering and the PVC are voxel-wise, non-iterative methods. The performance of the proposed methods was investigated with a simulated dynamic FDG brain dataset and 18F-FDG brain data of a cervical cancer patient acquired with a simultaneous hybrid PET/MR scanner. The initial simulation results demonstrate that MRI-guided PET image filtering produces less noisy images than traditional GF, and that bias and coefficient of variation can be further reduced by MRI-guided PET PVC. Moreover, structures are much better delineated with MRI-guided PET PVC in the real brain data.
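One concrete instance of filtering with a local linear model is the guided filter, in which the output is locally an affine function of the guidance image. The sketch below is a generic He-et-al.-style guided filter steered by an MRI image, offered as an illustration of the idea rather than the authors' exact method (which further builds the PVE into the model):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Smooth `src` (e.g., a PET slice) using `guide` (e.g., an MRI slice).

    Within each local window the output is modeled as a*guide + b; the
    coefficients are fit by local means/covariances and then averaged.
    """
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    a = cov_gs / (var_g + eps)          # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

# Example: denoise a synthetic noisy "PET" slice with a sharp "MRI" guide.
rng = np.random.default_rng(1)
mri = np.kron(np.arange(16.0).reshape(4, 4), np.ones((16, 16)))
pet = mri / 4 + rng.normal(0, 0.5, mri.shape)
smoothed = guided_filter(mri, pet)
```

Because the edges come from the guide, this suppresses noise without the resolution loss of an isotropic Gaussian filter, which is the motivation stated above.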
Chen, Fangfang; Gong, Zhiyuan; Kelly, Barry C
2015-02-27
A sensitive analytical method based on liquid-liquid extraction (LLE) and liquid chromatography tandem mass spectrometry (LC-MS/MS) was developed for rapid analysis of 11 pharmaceuticals and personal care products (PPCPs) in fish plasma micro-aliquots (~20 μL). Target PPCPs included bisphenol A, carbamazepine, diclofenac, fluoxetine, gemfibrozil, ibuprofen, naproxen, risperidone, sertraline, simvastatin and triclosan. A relatively quicker and cheaper LLE procedure exhibited analyte recoveries comparable to solid-phase extraction. Rapid separation and analysis of target compounds in fish plasma extracts was achieved by employing a high-efficiency C-18 HPLC column (Agilent Poroshell 120 SB-C18, 2.1 mm × 50 mm, 2.7 μm) and fast polarity switching, enabling effective monitoring of positive and negative ions in a single 9 min run. With the exception of bisphenol A, which exhibited relatively high background contamination, method detection limits of individual PPCPs ranged between 0.15 and 0.69 pg/μL, while method quantification limits were between 0.05 and 2.3 pg/μL. Mean matrix effect (ME) values ranged between 65 and 156% for the various target analytes. Isotope dilution quantification using isotopically labelled internal surrogates was utilized to correct for signal suppression or enhancement and analyte losses during sample preparation. The method was evaluated by analysis of 20 μL plasma micro-aliquots collected from zebrafish (Danio rerio) in a laboratory bioaccumulation study, which included control group fish (no exposure) as well as fish exposed to environmentally relevant concentrations of PPCPs. Using the developed LC-MS/MS-based method, concentrations of the studied PPCPs were consistently detected in the low pg/μL (ppb) range. The method may be useful for investigations requiring fast, reliable concentration measurements of PPCPs in fish plasma. In particular, it may be applicable for in situ contaminant biomonitoring, as well as bioaccumulation and toxicology studies employing small fishes with low blood compartment volumes. Copyright © 2015 Elsevier B.V. All rights reserved.
Lowering the quantification limit of the Qubit™ RNA HS assay using RNA spike-in.
Li, Xin; Ben-Dov, Iddo Z; Mauro, Maurizio; Williams, Zev
2015-05-06
RNA quantification is often a prerequisite for most RNA analyses such as RNA sequencing. However, the relatively low sensitivity and large sample consumption of traditional RNA quantification methods such as UV spectrophotometry, and even of the much more sensitive fluorescence-based RNA quantification assays such as the Qubit™ RNA HS Assay, are often inadequate for measuring minute levels of RNA isolated from limited cell and tissue samples and biofluids. Thus, there is a pressing need for a more sensitive method to reliably and robustly detect trace levels of RNA without interference from DNA. To improve the quantification limit of the Qubit™ RNA HS Assay, we spiked in a known quantity of RNA to achieve the minimum reading required by the assay. Samples containing trace amounts of RNA were then added to the spike-in and measured as a reading increase over the RNA spike-in baseline. We determined the accuracy and precision of reading increases between 1 and 20 pg/μL, as well as RNA specificity in this range, and compared them to those of RiboGreen®, another sensitive fluorescence-based RNA quantification assay. We then applied the Qubit™ Assay with RNA spike-in to quantify plasma RNA samples. The RNA spike-in improved the quantification limit of the Qubit™ RNA HS Assay 5-fold, from 25 pg/μL down to 5 pg/μL, while maintaining high specificity to RNA. This enabled quantification of RNA with original concentrations as low as 55.6 pg/μL, compared with 250 pg/μL for the standard assay, and decreased sample consumption from 5 to 1 ng. Plasma RNA samples that were not measurable by the Qubit™ RNA HS Assay were measurable by our modified method. The Qubit™ RNA HS Assay with RNA spike-in is able to quantify RNA with high specificity at a 5-fold lower concentration and uses 5-fold less sample than the standard Qubit™ Assay.
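The spike-in measurement itself is simple arithmetic: the sample concentration is recovered from the reading increase over the spike-in baseline, scaled by the dilution into the assay tube. A minimal sketch with hypothetical readings and volumes; consult the assay protocol for the actual volumes:

```python
def sample_rna_conc(reading_with_sample, spike_baseline,
                    sample_volume_ul, assay_volume_ul=200.0):
    """Back-calculate the original sample concentration (pg/uL).

    The reading increase over the spike-in baseline is the sample's
    contribution to the assay-tube concentration; rescaling by the
    assay/sample volume ratio undoes the dilution. Volumes here are
    hypothetical placeholders, not the published protocol values.
    """
    increase = reading_with_sample - spike_baseline
    return increase * assay_volume_ul / sample_volume_ul

print(sample_rna_conc(32.0, 27.0, sample_volume_ul=1.0))  # -> pg/uL
```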
Sfetsas, Themistoklis; Michailof, Chrysa; Lappas, Angelos; Li, Qiangyi; Kneale, Brian
2011-05-27
Pyrolysis oils have attracted a lot of interest as liquid energy carriers and general sources of chemicals. In this work, gas chromatography with flame ionization detection (GC-FID) and two-dimensional gas chromatography with time-of-flight mass spectrometry (GC×GC-TOFMS) were used to provide both qualitative and quantitative results for the analysis of three different pyrolysis oils. The chromatographic methods and parameters were optimized, and solvent choice and separation restrictions are discussed. Pyrolysis oil samples were diluted in a suitable organic solvent and analyzed by GC×GC-TOFMS. An average of 300 compounds were detected and identified in all three samples using the ChromaTOF (Leco) software. The deconvoluted spectra were compared with the NIST library for correct matching. Group-type classification was performed with the ChromaTOF software. The quantification of 11 selected compounds was performed by means of a multiple-point external calibration curve. Afterwards, the pyrolysis oils were extracted with water, and the aqueous phase was analyzed both by GC-FID and, after a change of solvent, by GC×GC-TOFMS. As previously, the selected compounds were quantified by both techniques by means of multiple-point external calibration curves. The parameters of the calibration curves were calculated by weighted linear regression analysis. The limit of detection, limit of quantitation and linearity range for each standard compound with each method are presented. The power of GC×GC-TOFMS for efficient mapping of pyrolysis oil is indisputable, and the possibility of using it for quantification has been demonstrated as well. On the other hand, GC-FID analysis provides reliable results that allow rapid screening of pyrolysis oil. To the best of our knowledge, very few papers have reported quantification attempts on pyrolysis oil samples using GC×GC-TOFMS, most of which make use of the internal standard method. This work provides the ground for further analysis of pyrolysis oils of diverse sources for a rational design of both their production and utilization processes. Copyright © 2010 Elsevier B.V. All rights reserved.
Magnetic Resonance Fingerprinting of Adult Brain Tumors: Initial Experience
Badve, Chaitra; Yu, Alice; Dastmalchian, Sara; Rogers, Matthew; Ma, Dan; Jiang, Yun; Margevicius, Seunghee; Pahwa, Shivani; Lu, Ziang; Schluchter, Mark; Sunshine, Jeffrey; Griswold, Mark; Sloan, Andrew; Gulani, Vikas
2016-01-01
Background Magnetic resonance fingerprinting (MRF) allows rapid simultaneous quantification of T1 and T2 relaxation times. This study assesses the utility of MRF in differentiating between common types of adult intra-axial brain tumors. Methods MRF acquisition was performed in 31 patients with untreated intra-axial brain tumors: 17 glioblastomas, 6 WHO grade II lower-grade gliomas and 8 metastases. T1 and T2 of the solid tumor (ST), immediate peritumoral white matter (PW), and contralateral white matter (CW) were summarized within each region of interest. Statistical comparisons on mean, standard deviation, skewness and kurtosis were performed using a univariate Wilcoxon rank sum test across the various tumor types. Bonferroni correction was used to correct for multiple comparisons. Multivariable logistic regression analysis was performed for discrimination between glioblastomas and metastases, and the area under the receiver operator curve (AUC) was calculated. Results Mean T2 values could differentiate solid tumor regions of lower-grade gliomas from metastases (mean±sd: 172±53 ms and 105±27 ms, respectively, p = 0.004, significant after Bonferroni correction). Mean T1 of PW surrounding lower-grade gliomas differed from that of PW around glioblastomas (mean±sd: 1066±218 ms and 1578±331 ms, respectively, p = 0.004, significant after Bonferroni correction). Logistic regression analysis revealed that mean T2 of ST offered the best separation between glioblastomas and metastases, with an AUC of 0.86 (95% CI 0.69–1.00, p < 0.0001). Conclusion MRF allows rapid simultaneous T1, T2 measurement in brain tumors and surrounding tissues. MRF-based relaxometry can identify quantitative differences between solid-tumor regions of lower-grade gliomas and metastases, and between peritumoral regions of glioblastomas and lower-grade gliomas. PMID:28034994
Psifidi, Androniki; Dovas, Chrysostomos; Banos, Georgios
2011-01-19
Single nucleotide polymorphisms (SNPs) have proven to be powerful genetic markers for applications in medicine, life science and agriculture. A variety of methods exist for SNP detection, but few can quantify SNP frequencies when the mutated DNA molecules correspond to a small fraction of the wild-type DNA. Furthermore, there is no generally accepted gold standard for SNP quantification, and, in general, currently applied methods give inconsistent results in selected cohorts. In the present study we sought to develop a novel method for accurate detection and quantification of SNPs in pooled DNA samples. The development and evaluation of a novel Ligase Chain Reaction (LCR) protocol that uses a DNA-specific fluorescent dye to allow quantitative real-time analysis is described. Different reaction components and thermocycling parameters affecting the efficiency and specificity of LCR were examined. Several protocols, including gap-LCR modifications, were evaluated using plasmid standards and genomic DNA pools. A protocol of choice was identified and applied for the quantification of a polymorphism at codon 136 of the ovine PRNP gene that is associated with susceptibility to a transmissible spongiform encephalopathy in sheep. The real-time LCR protocol developed in the present study showed high sensitivity, accuracy, reproducibility and a wide dynamic range of SNP quantification in different DNA pools. The limits of detection and quantification of SNP frequencies were 0.085% and 0.35%, respectively. The proposed real-time LCR protocol is applicable when sensitive detection and accurate quantification of low copy number mutations in DNA pools are needed. Examples include oncogenes and tumour suppressor genes, infectious diseases, pathogenic bacteria, fungal species, viral mutants, drug resistance resulting from point mutations, and genetically modified organisms in food.
Pocock, Tessa; Król, Marianna; Huner, Norman P A
2004-01-01
Chlorophylls and carotenoids are functionally important pigment molecules in photosynthetic organisms. Methods for the determination of chlorophylls a and b, beta-carotene, neoxanthin, and the pigments involved in photoprotective cycles, such as the xanthophylls, are discussed. These cycles involve the reversible de-epoxidation of violaxanthin into antheraxanthin and zeaxanthin, as well as the reversible de-epoxidation of lutein-5,6-epoxide into lutein. This chapter describes pigment extraction procedures for higher plants and green algae. Methods for determination and quantification using high-performance liquid chromatography (HPLC) are described, as well as methods for the separation and purification of pigments for use as standards using thin-layer chromatography (TLC). In addition, several spectrophotometric methods for the quantification of chlorophylls a and b are described.
Metering error quantification under voltage and current waveform distortion
Wang, Tao; Wang, Jia; Xie, Zhi; Zhang, Ran
2017-09-01
As more and more renewable energy sources and distorting loads are integrated into the power grid, voltage and current waveform distortion causes metering errors in smart meters. Because such distortion degrades metering accuracy and fairness, the combined energy metering error is an important subject of study. In this paper, after comparing theoretical metering values with recorded values under different meter modes for linear and nonlinear loads, a method for quantifying the metering mode error under waveform distortion is proposed. Based on the metering and time-division multiplier principles, a method for quantifying the metering accuracy error is also proposed. By analyzing the mode and accuracy errors, a comprehensive error analysis method suitable for renewable energy sources and nonlinear loads is presented. The proposed method has been validated by simulation.
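The contrast between a meter that multiplies the full sampled waveforms (the time-division multiplier principle) and a mode that credits only the fundamental can be reproduced numerically. A minimal sketch with synthetic distorted waveforms, not the paper's recorded data:

```python
import numpy as np

f0, fs = 50.0, 10_000.0                     # fundamental and sampling frequency
t = np.arange(0.0, 0.2, 1.0 / fs)           # 10 full cycles at 50 Hz

# Distorted voltage and current: fundamental plus a 3rd harmonic
# (illustrative waveforms with made-up amplitudes and phase).
v = 230 * np.sqrt(2) * (np.sin(2 * np.pi * f0 * t)
                        + 0.10 * np.sin(2 * np.pi * 3 * f0 * t))
i = 10 * np.sqrt(2) * (np.sin(2 * np.pi * f0 * t)
                       + 0.30 * np.sin(2 * np.pi * 3 * f0 * t + 0.5))

p_true = np.mean(v * i)          # time-division multiplier: average of v(t)*i(t)
p_fundamental = 230 * 10         # meter mode crediting only the fundamental
mode_error_pct = 100 * (p_fundamental - p_true) / p_true
print(f"true P = {p_true:.1f} W, mode error = {mode_error_pct:.2f}%")
```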
A simple and fast method for extraction and quantification of cryptophyte phycoerythrin.
Thoisen, Christina; Hansen, Benni Winding; Nielsen, Søren Laurentius
2017-01-01
The microalgal pigment phycoerythrin (PE) is of commercial interest as a natural colorant in food and cosmetics, as well as a fluoroprobe for laboratory analysis. Several methods for extraction and quantification of PE are available, but they typically involve various extraction buffers, repetitive freeze-thaw cycles and liquid nitrogen, making the extraction procedures complicated. A simple method for extraction of PE from cryptophytes is described using standard laboratory materials and equipment. The cryptophyte cells on the filters were disrupted at -80 °C, phosphate buffer was added for extraction at 4 °C, and absorbance was then measured. The cryptophyte Rhodomonas salina was used as a model organism. •Simple method for extraction and quantification of phycoerythrin from cryptophytes.•Minimal usage of equipment and chemicals, and low labor costs.•Applicable for industrial and biological purposes.
Witte, Anna Kristina; Fister, Susanne; Mester, Patrick; Schoder, Dagmar; Rossmanith, Peter
2016-11-01
Fast and reliable pathogen detection is an important issue for human health. Since conventional microbiological methods are rather slow, there is growing interest in detection and quantification using molecular methods. Droplet digital polymerase chain reaction (ddPCR) is a relatively new PCR method for absolute and accurate quantification without external standards. Using the Listeria monocytogenes-specific prfA assay, we focused on the questions of whether the assay was directly transferable to ddPCR and whether ddPCR was suitable for samples derived from heterogeneous matrices, such as foodstuffs, which often include inhibitors and a non-target bacterial background flora. Although the prfA assay showed suboptimal cluster formation, use of ddPCR for quantification of L. monocytogenes from pure bacterial cultures, artificially contaminated cheese, and naturally contaminated foodstuff was satisfactory over a relatively broad dynamic range. Moreover, the results demonstrated an outstanding detection limit of one copy. However, while poorer DNA quality, such as that resulting from longer storage, can impair ddPCR, prfA ddPCR with an internal amplification control (IAC) integrated into the genome of L. monocytogenes ΔprfA showed even slightly better quantification over a broader dynamic range. Graphical Abstract Evaluating the absolute quantification potential of ddPCR targeting Listeria monocytogenes prfA.
Gaubert, Alexandra; Jeudy, Jérémy; Rougemont, Blandine; Bordes, Claire; Lemoine, Jérôme; Casabianca, Hervé; Salvador, Arnaud
2016-07-01
In a stricter legislative context, greener detergent formulations are being developed. Synthetic surfactants are frequently replaced by bio-sourced surfactants and/or used at lower concentrations in combination with enzymes. In this paper, an LC-MS/MS method was developed for the identification and quantification of enzymes in laundry detergents. Prior to the LC-MS/MS analyses, a specific sample preparation protocol was developed to deal with the matrix complexity (high surfactant percentages). For each enzyme family commonly used in detergent formulations (protease, amylase, cellulase, and lipase), specific peptides were identified on a high-resolution platform. An LC-MS/MS method was then developed in selected reaction monitoring (SRM) mode for the light peptides and their corresponding heavy analogues. The method was linear over the peptide concentration ranges of 25-1000 ng/mL for protease and lipase, 50-1000 ng/mL for amylase, and 5-1000 ng/mL for cellulase in both water and laundry detergent matrices. Application of the developed analytical strategy to commercial laundry detergents enabled enzyme identification and absolute quantification. For the first time, identification and absolute quantification of enzymes in laundry detergent were achieved by LC-MS/MS in a single run. Graphical Abstract: Identification and quantification of enzymes by LC-MS/MS.
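Quantification against spiked heavy peptides follows the usual stable-isotope-dilution logic: assuming the light and heavy forms ionize identically, the analyte concentration scales with the light/heavy peak-area ratio. A minimal sketch with hypothetical peak areas:

```python
# Sketch of stable-isotope-dilution quantification in SRM mode: the analyte
# ("light") peptide is quantified against a spiked, isotopically labeled
# ("heavy") analogue of known concentration. Peak areas are hypothetical.
def light_concentration(area_light: float, area_heavy: float,
                        heavy_conc_ng_ml: float) -> float:
    """Assuming equal ionization efficiency of the light/heavy pair,
    concentration scales with the light/heavy peak-area ratio."""
    return (area_light / area_heavy) * heavy_conc_ng_ml

# e.g. a protease signature peptide with 250 ng/mL heavy standard spiked in
print(light_concentration(area_light=8.4e5, area_heavy=5.6e5,
                          heavy_conc_ng_ml=250.0))   # -> 375 ng/mL
```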
Paul B. Alaback; Duncan C. Lutes
1997-01-01
Methods for the quantification of coarse woody debris volume and the description of spatial patterning were studied in the Tenderfoot Creek Experimental Forest, Montana. The line-transect method was found to be an accurate, unbiased estimator of down-debris volume (>10 cm diameter) on 1/4-hectare fixed-area plots when perpendicular lines were used. The Fischer...
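The line-transect (line-intersect) estimator referred to here is commonly written as V = π²Σd²/(8L). A small sketch of the calculation with hypothetical piece diameters follows; the study's exact variant is not specified beyond the use of perpendicular lines:

```python
import math

def cwd_volume_m3_per_ha(diams_cm, transect_m):
    """Line-intersect estimate of down coarse woody debris volume
    (Van Wagner's estimator): V = pi^2 * sum(d_i^2) / (8 L), with d_i the
    diameter of each piece where it crosses the transect of length L."""
    d2 = sum((d / 100.0) ** 2 for d in diams_cm)           # cm -> m, squared
    return math.pi ** 2 * d2 / (8 * transect_m) * 10_000   # m^3/m^2 -> m^3/ha

# Hypothetical tally: four pieces crossing a 100 m transect
print(f"{cwd_volume_m3_per_ha([12, 18, 25, 40], transect_m=100):.1f} m^3/ha")
```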
Kuich, P. Henning J. L.; Hoffmann, Nils; Kempa, Stefan
2015-01-01
A current bottleneck in GC-MS metabolomics is the processing of raw machine data into a final data matrix that contains the quantities of identified metabolites in each sample. While many bioinformatics tools are available to aid the initial steps of the process, their use requires both significant technical expertise and subsequent manual validation of identifications and alignments if high data quality is desired. The manual validation is tedious and time consuming, becoming prohibitively so as sample numbers increase. We have, therefore, developed Maui-VIA, a solution based on a visual interface that allows experts and non-experts to simultaneously and quickly process, inspect, and correct large numbers of GC-MS samples. It allows for the visual inspection of identifications and alignments, facilitating a unique and, thanks to its visualization and keyboard shortcuts, very fast interaction with the data. Maui-VIA thus fills an important niche by (1) optimizing the component of data processing that is currently most labor intensive and (2) lowering the threshold of expertise required to process GC-MS data. Maui-VIA projects are initiated with baseline-corrected raw data, peak lists, and a database of metabolite spectra and retention indices used for identification. It provides functionality for retention index calculation, a targeted library search, a visual annotation, alignment, and correction interface, metabolite quantification, and export of the final data matrix. The high quality of data produced by Maui-VIA is illustrated by comparison to data obtained manually by an expert using vendor software on a previously published dataset concerning the response of Chlamydomonas reinhardtii to salt stress. In conclusion, Maui-VIA enables fast, confident, high-quality processing and validation of large numbers of GC-MS samples by non-experts. PMID:25654076
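Of the steps Maui-VIA automates, retention index calculation is the most formulaic. The sketch below shows the standard linear (temperature-programmed) retention index computed against bracketing n-alkane standards; Maui-VIA's internal implementation may differ, and all retention times here are hypothetical:

```python
import bisect

def linear_retention_index(rt, alkane_rts, alkane_carbons):
    """Temperature-programmed (linear) retention index: interpolate the
    analyte's retention time between bracketing n-alkane standards,
    RI = 100 * (n + (rt - rt_n) / (rt_{n+1} - rt_n))."""
    i = bisect.bisect_right(alkane_rts, rt) - 1
    i = max(0, min(i, len(alkane_rts) - 2))        # clamp to a valid bracket
    n, rt_n, rt_n1 = alkane_carbons[i], alkane_rts[i], alkane_rts[i + 1]
    return 100 * (n + (rt - rt_n) / (rt_n1 - rt_n))

# Analyte at 7.8 min between C12 (6.9 min) and C13 (8.4 min) -> RI = 1260
print(linear_retention_index(7.8, [5.2, 6.9, 8.4, 9.8], [11, 12, 13, 14]))
```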
Magnetic anisotropy in the Kitaev model systems Na2IrO3 and RuCl3
NASA Astrophysics Data System (ADS)
Chaloupka, Jiří; Khaliullin, Giniyat
2016-08-01
We study the ordered moment direction in the extended Kitaev-Heisenberg model relevant to honeycomb lattice magnets with strong spin-orbit coupling. We utilize numerical diagonalization and analyze the exact cluster ground states using a particular set of spin-coherent states, obtaining thereby quantum corrections to the magnetic anisotropy beyond conventional perturbative methods. It is found that the quantum fluctuations strongly modify the moment direction obtained at a classical level and are thus crucial for a precise quantification of the interactions. The results show that the moment direction is a sensitive probe of the model parameters in real materials. Focusing on the experimentally relevant zigzag phases of the model, we analyze the currently available neutron-diffraction and resonant x-ray-diffraction data on Na2IrO3 and RuCl3 and discuss the parameter regimes plausible in these Kitaev-Heisenberg model systems.
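For reference, one common parametrization of the extended Kitaev-Heisenberg model studied in this class of papers is the following (the paper's precise parameter set may differ):

```latex
% Extended Kitaev-Heisenberg model on the honeycomb lattice, with
% bond-dependent exchange on the three bond types gamma:
\mathcal{H} = \sum_{\langle ij \rangle \in \gamma}
    \Bigl[ J\, \mathbf{S}_i \cdot \mathbf{S}_j
         + K\, S_i^{\gamma} S_j^{\gamma}
         + \Gamma \bigl( S_i^{\alpha} S_j^{\beta} + S_i^{\beta} S_j^{\alpha} \bigr) \Bigr]
```

Here (α, β, γ) is the permutation of (x, y, z) fixed by the bond type, J is the Heisenberg exchange, K the Kitaev exchange, and Γ the symmetric off-diagonal exchange.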
Strategy for determination of LOD and LOQ values--some basic aspects.
Uhrovčík, Jozef
2014-02-01
The paper is devoted to the evaluation of limit of detection (LOD) and limit of quantification (LOQ) values in the concentration domain using four different approaches: the 3σ and 10σ approaches, the ULA2 approach, the PBA approach, and the MDL approach. Brief theoretical analyses of all of these approaches are given, together with directions for their practical use. Calculations and correct calibration design are exemplified using electrothermal atomic absorption spectrometry for the determination of lead in a drinking water sample. These validation parameters reached 1.6 μg L(-1) (LOD) and 5.4 μg L(-1) (LOQ) using the 3σ and 10σ approaches. To obtain relevant values of analyte concentration, the influence of calibration design and measurement methodology was examined. The preferred technique proved to be preconcentration of the analyte on the surface of the graphite cuvette (boost cycle). © 2013 Elsevier B.V. All rights reserved.
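The 3σ and 10σ approaches reduce to a one-line calculation once the blank noise and calibration slope are known. A minimal sketch (the σ and slope values below are hypothetical, chosen to land near the reported limits):

```python
def lod_loq_3s_10s(sigma_blank: float, slope: float) -> tuple:
    """3-sigma / 10-sigma limits in concentration units:
    LOD = 3*sigma/S and LOQ = 10*sigma/S, where sigma is the standard
    deviation of blank measurements and S the calibration slope."""
    return 3 * sigma_blank / slope, 10 * sigma_blank / slope

# Hypothetical ETAAS calibration: sigma = 0.0008 a.u., slope = 0.0015 a.u. per ug/L
lod, loq = lod_loq_3s_10s(0.0008, 0.0015)
print(f"LOD = {lod:.1f} ug/L, LOQ = {loq:.1f} ug/L")   # 1.6 and 5.3 ug/L
```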
Misra, Ankita; Shukla, Pushpendra Kumar; Kumar, Bhanu; Chand, Jai; Kushwaha, Poonam; Khalid, Md.; Singh Rawat, Ajay Kumar; Srivastava, Sharad
2017-01-01
Background: Gloriosa superba L. (Colchicaceae) is used as adjuvant therapy in gout for its potential antimitotic activity due to its high content of colchicine alkaloids. Objective: This study aimed to develop an easy, cheap, precise, and accurate validated high-performance thin-layer chromatography (HPTLC) method for simultaneous quantification of the bioactive alkaloids colchicine and gloriosine in G. superba L. and to identify its elite chemotype(s) from the Sikkim Himalayas (India). Methods: The HPTLC method was developed using a mobile phase of chloroform:acetone:diethylamine (5:4:1) at a λmax of 350 nm. Results: Five germplasms were collected from the targeted region, and on morpho-anatomical inspection, no significant variation was observed among them. Quantification data reveal that the content of colchicine (Rf: 0.72) ranges from 0.035% to 0.150% and that of gloriosine (Rf: 0.61) from 0.006% to 0.032% (dry wt. basis). Linearity of the method was obtained in the concentration range of 100-400 ng/spot of marker(s), exhibiting regression coefficients of 0.9987 (colchicine) and 0.9983 (gloriosine), with optimum recoveries of 97.79% ± 3.86% and 100.023% ± 0.01%, respectively. Limits of detection and quantification were 6.245 and 18.926 ng (colchicine) and 8.024 and 24.316 ng (gloriosine), respectively. Two germplasms, NBG-27 and NBG-26, were found to be elite chemotypes for both markers. Conclusion: The developed method is validated in terms of accuracy, recovery, and precision as per the ICH guidelines (2005) and can be adopted for the simultaneous quantification of colchicine and gloriosine in phytopharmaceuticals. In addition, this study is relevant for exploring chemotypic variability in metabolite content for commercial and medicinal purposes. PMID:29142436
Bihan, Kevin; Sauzay, Chloé; Goldwirt, Lauriane; Charbonnier-Beaupel, Fanny; Hulot, Jean-Sebastien; Funck-Brentano, Christian; Zahr, Noël
2015-02-01
Vemurafenib (Zelboraf) is a kinase inhibitor that selectively targets the activated BRAF V600E mutant and is indicated for the treatment of advanced BRAF mutation-positive melanoma. We developed a simple method for vemurafenib quantification using liquid chromatography-tandem mass spectrometry, and also performed a stability study of vemurafenib in human plasma. (13)C(6)-vemurafenib was used as the internal standard. A single-step protein precipitation was used for plasma sample preparation. Chromatography was performed on an Acquity UPLC system (Waters) with chromatographic separation on an Acquity UPLC BEH C18 column (2.1 × 50 mm, 1.7-μm particle size; Waters). Quantification was performed by multiple reaction monitoring of the following transitions: m/z 488.2 → 381.0 for vemurafenib and m/z 494.2 → 387.0 for the internal standard. The method was linear over the range from 1.0 to 100.0 mcg/mL. The lower limit of quantification was 0.1 mcg/mL for vemurafenib in plasma. Vemurafenib remained stable for 1 month at all levels tested, whether stored at room temperature (20 °C), at +4 °C, or at -20 °C. This method was used successfully to perform a plasma pharmacokinetic study of vemurafenib in a patient at steady state after oral administration. This liquid chromatography-tandem mass spectrometry method for vemurafenib quantification in human plasma is simple, rapid, specific, sensitive, accurate, precise, and reliable.
Uncertainty Quantification for Robust Control of Wind Turbines using Sliding Mode Observer
NASA Astrophysics Data System (ADS)
Schulte, Horst
2016-09-01
A new method for quantifying model uncertainty for robust wind turbine control using sliding-mode techniques is presented, with the objective of improving active load mitigation. The approach is based on the so-called equivalent output injection signal, which corresponds to the average behavior of the discontinuous switching term that establishes and maintains motion on a sliding surface. The injection signal is evaluated directly to obtain estimates of the uncertainty bounds of external disturbances and parameter uncertainties. The applicability of the proposed method is illustrated by quantifying the uncertainties of a four degree-of-freedom model of the NREL 5MW reference turbine.
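The equivalent-output-injection idea can be shown on a first-order toy system (a sketch, not the 4-DOF turbine model): with a sufficiently large switching gain, low-pass filtering the discontinuous term recovers the unknown disturbance, whose magnitude then bounds the uncertainty:

```python
import numpy as np

# Minimal sketch on a first-order plant: a sliding-mode observer's switching
# term, once low-pass filtered, yields the "equivalent output injection",
# which tracks the unknown disturbance and thus bounds its magnitude.
dt, a, rho, tau = 1e-4, 2.0, 5.0, 5e-3     # step, plant pole, gain, filter const
t = np.arange(0.0, 2.0, dt)
d = 1.5 * np.sin(2 * np.pi * 1.0 * t)      # unknown disturbance (|d| < rho)

x = x_hat = inj_filt = 0.0
est = np.empty_like(t)
for k in range(len(t)):
    inj = rho * np.sign(x - x_hat)           # discontinuous switching term
    x += dt * (-a * x + d[k])                # plant driven by the disturbance
    x_hat += dt * (-a * x_hat + inj)         # sliding-mode observer
    inj_filt += dt / tau * (inj - inj_filt)  # low-pass: equivalent injection
    est[k] = inj_filt

print(f"estimated disturbance bound: {np.abs(est[len(t)//2:]).max():.2f} (true 1.50)")
```

Once sliding is established (the error held at zero), the average of the switching term must equal the disturbance, which is why the filtered injection serves as its estimate.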
NASA Astrophysics Data System (ADS)
Buongiorno, J.; Lloyd, K. G.; Shumaker, A.; Schippers, A.; Webster, G.; Weightman, A.; Turner, S.
2015-12-01
Nearly 75% of the Earth's surface is covered by marine sediment, home to an estimated 2.9 × 10^29 microbial cells. A substantial impediment to understanding the abundance and distribution of cells within marine sediment is the lack of a consistent and reliable method for their taxon-specific quantification. Catalyzed reporter deposition fluorescence in situ hybridization (CARD-FISH) provides taxon-specific enumeration, but this process requires passing a large enzyme through cell membranes, decreasing its precision relative to general cell counts using a small DNA stain. In 2015, Yamaguchi et al. developed FISH hybridization chain reaction (FISH-HCR) as an in situ whole-cell detection method for environmental microorganisms. FISH-HCR amplifies the fluorescent signal, as does CARD-FISH, but it allows for milder cell permeabilization methods that may prevent yield loss. To compare FISH-HCR to CARD-FISH, we examined bacterial and archaeal cell counts within two sediment cores, Lille Belt (~78 meters deep) and Landsort Deep (90 meters deep), retrieved from the Baltic Sea Basin during IODP Expedition 347. Preliminary analysis shows that CARD-FISH counts are below the quantification limit at most depths in both cores. By contrast, quantification of cells was possible with FISH-HCR at all examined depths. Where CARD-FISH quantification was above the limit of detection, FISH-HCR counts from the same sediment sample were up to 11-fold higher for Bacteria and 3-fold higher for Archaea. Further, FISH-HCR counts closely follow the trends of shipboard counts, indicating that FISH-HCR may better reflect cellular abundance within marine sediment than other quantification methods, including qPCR. Using FISH-HCR, we found that archaeal cell counts were on average greater than bacterial cell counts, but within the same order of magnitude.
Eriksen, Jane N; Madsen, Pia L; Dragsted, Lars O; Arrigoni, Eva
2017-02-01
An improved UHPLC-DAD-based method was developed and validated for quantification of major carotenoids present in spinach, serum, chylomicrons, and feces. Separation was achieved with gradient elution within 12.5 min for six dietary carotenoids and the internal standard, echinenone. The proposed method provides, for all standard components, resolution > 1.1, linearity covering the target range (R > 0.99), LOQ < 0.035 mg/L, and intraday and interday RSDs < 2 and 10%, respectively. Suitability of the method was tested on biological matrices. Method precision (RSD%) for carotenoid quantification in serum, chylomicrons, and feces was below 10% for intra- and interday analysis, except for lycopene. Method accuracy was consistent with mean recoveries ranging from 78.8 to 96.9% and from 57.2 to 96.9% for all carotenoids, except for lycopene, in serum and feces, respectively. Additionally, an interlaboratory validation study on spinach at two institutions showed no significant differences in lutein or β-carotene content, when evaluated on four occasions.
Nahar, Limon Khatun; Cordero, Rosa Elena; Nutt, David; Lingford-Hughes, Anne; Turton, Samuel; Durant, Claire; Wilson, Sue; Paterson, Sue
2016-01-01
A highly sensitive and fully validated method was developed for the quantification of baclofen in human plasma. After adjusting the pH of the plasma samples using a phosphate buffer solution (pH 4), baclofen was purified using mixed-mode (C8/cation exchange) solid-phase extraction (SPE) cartridges. Endogenous water-soluble compounds and lipids were removed from the cartridges before the samples were eluted and concentrated. The samples were analyzed using triple-quadrupole liquid chromatography-tandem mass spectrometry (LC-MS/MS) with triggered dynamic multiple reaction monitoring mode for simultaneous quantification and confirmation. The assay was linear from 25 to 1,000 ng/mL (r2 > 0.999; n = 6). Intraday (n = 6) and interday (n = 15) imprecisions (% relative standard deviation) were <5%, and the average recovery was 30%. The limit of detection of the method was 5 ng/mL, and the limit of quantification was 25 ng/mL. Plasma samples from healthy male volunteers (n = 9, median age: 22) given two single oral doses of baclofen (10 and 60 mg) on nonconsecutive days were analyzed to demonstrate method applicability. PMID:26538544
Arashida, Naoko; Nishimoto, Rumi; Harada, Masashi; Shimbo, Kazutaka; Yamada, Naoyuki
2017-02-15
Amino acids and their related metabolites play important roles in various physiological processes and have consequently become biomarkers for diseases. However, accurate quantification methods have only been established for major compounds, such as amino acids and a limited number of target metabolites. We previously reported a highly sensitive high-throughput method for the simultaneous quantification of amines using 3-aminopyridyl-N-succinimidyl carbamate as a derivatization reagent combined with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Herein, we report the successful development of a practical and accurate LC-MS/MS method to analyze low concentrations of 40 physiological amines in 19 min. Thirty-five of these amines showed good linearity, limits of quantification, accuracy, precision, and recovery characteristics in plasma, with scheduled selected reaction monitoring acquisitions. Plasma samples from 10 healthy volunteers were evaluated using our newly developed method. The results revealed that 27 amines were detected in one of the samples, and that 24 of these compounds could be quantified. Notably, this new method successfully quantified metabolites with high accuracy across three orders of magnitude, with lowest and highest averaged concentrations of 31.7 nM (for spermine) and 18.3 μM (for α-aminobutyric acid), respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
Barco, Sebastiano; Castagnola, Elio; Moscatelli, Andrea; Rudge, James; Tripodi, Gino; Cangemi, Giuliana
2017-10-25
In this paper we show the development and validation of a volumetric absorptive microsampling (VAMS™)-LC-MS/MS method for the simultaneous quantification of four antibiotics: piperacillin-tazobactam, meropenem, linezolid, and ceftazidime in 10 μL human blood. The novel VAMS-LC-MS/MS method has been compared with a dried blood spot (DBS)-based method in terms of the impact of hematocrit (HCT) on accuracy, reproducibility, recovery, and matrix effect. Antibiotics were extracted from VAMS and DBS by protein precipitation with methanol after a re-hydration step at 37 °C for 10 min. LC-MS/MS was carried out on a Thermo Scientific™ TSQ Quantum™ Access MAX triple quadrupole coupled to an Accela™ UHPLC system. The VAMS-LC-MS/MS method is selective, precise, and reproducible. In contrast to DBS, it allows accurate quantification without any HCT influence. It has been applied to samples from pediatric patients under therapy. VAMS is a valid alternative sampling strategy for the quantification of antibiotics and is valuable in support of clinical PK/PD studies and, consequently, therapeutic drug monitoring (TDM) in pediatrics. Copyright © 2017 Elsevier B.V. All rights reserved.
GMO quantification: valuable experience and insights for the future.
Milavec, Mojca; Dobnik, David; Yang, Litao; Zhang, Dabing; Gruden, Kristina; Zel, Jana
2014-10-01
Cultivation and marketing of genetically modified organisms (GMOs) have been unevenly adopted worldwide. To facilitate international trade and to provide information to consumers, labelling requirements have been set up in many countries. Quantitative real-time polymerase chain reaction (qPCR) is currently the method of choice for detection, identification, and quantification of GMOs. Its performance has been critically assessed and requirements for method performance have been set. Nevertheless, challenges remain, such as measuring the quantity and quality of DNA, and determining the qPCR efficiency, possible sequence mismatches, characteristics of taxon-specific genes, and appropriate units of measurement, as these are all potential sources of measurement uncertainty. To overcome these problems and to cope with the continuous increase in the number and variety of GMOs, new approaches are needed. Statistical strategies of quantification have already been proposed and expanded with the development of digital PCR. The first attempts have been made to use next-generation sequencing for quantitative purposes as well, although accurate quantification of GMO content with this technology remains a challenge for the future, especially for mixed samples. New approaches are also needed for the quantification of stacked events and, potentially, of organisms produced by new plant breeding techniques.
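In qPCR-based GMO quantification, GM content is commonly expressed as the ratio of event-specific to taxon-specific copy numbers, each back-calculated from its own standard curve. A minimal sketch with hypothetical curve parameters:

```python
def copies_from_cq(cq: float, intercept: float, slope: float) -> float:
    """Back-calculate copy number from a qPCR standard curve,
    Cq = intercept + slope * log10(copies)."""
    return 10 ** ((cq - intercept) / slope)

def gm_percent(cq_event: float, cq_taxon: float,
               curve_event=(40.0, -3.32), curve_taxon=(38.5, -3.35)) -> float:
    """GM content as the copy-number ratio of the event-specific target to
    the taxon-specific reference gene. Curve parameters are hypothetical;
    a slope of -3.32 corresponds to 100% PCR efficiency."""
    event = copies_from_cq(cq_event, *curve_event)
    taxon = copies_from_cq(cq_taxon, *curve_taxon)
    return 100.0 * event / taxon

print(f"{gm_percent(cq_event=31.5, cq_taxon=24.8):.2f} % GM")   # ~3 % GM
```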
Bostijn, N; Hellings, M; Van Der Veen, M; Vervaet, C; De Beer, T
2018-07-12
Ultraviolet (UV) spectroscopy was evaluated as an innovative Process Analytical Technology (PAT) tool for the in-line, real-time quantitative determination of low-dosed active pharmaceutical ingredients (APIs) in a semi-solid (gel) and a liquid (suspension) pharmaceutical formulation during their batch production processes. The performance of this new PAT tool was compared with a more established PAT method based on Raman spectroscopy. In-line UV measurements were carried out with an immersion probe, while for the Raman measurements a non-contact PhAT probe was used. For both formulations, an in-line API quantification model was developed and validated for each spectroscopic technique. The known API concentrations (Y) were correlated with the corresponding in-line collected preprocessed spectra (X) through partial least squares (PLS) regression. Each quantification method was validated by calculating an accuracy profile on the basis of the validation experiments, and the measurement uncertainty was determined from the data generated for the accuracy profiles. From the accuracy profiles of the UV- and Raman-based quantification methods for the gel, it was concluded that at the target API concentration of 2% (w/w), 95 out of 100 future routine measurements given by the Raman method will not deviate more than 10% (relative error) from the true API concentration, whereas the UV method exceeded the 10% acceptance limits. For the liquid formulation, the Raman method was not able to quantify the API in the low-dosed suspension (0.09% (w/w) API); in contrast, the in-line UV method quantified it adequately. This study demonstrated that UV spectroscopy can be adopted as a novel in-line PAT technique for low-dose quantification in pharmaceutical processes. Importantly, neither spectroscopic technique was superior for both formulations: the Raman method was more accurate in quantifying the API in the gel (2% (w/w) API), while the UV method performed better for API quantification in the suspension (0.09% (w/w) API). Copyright © 2018 Elsevier B.V. All rights reserved.
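The calibration step described (preprocessed spectra X regressed onto known API concentrations Y by PLS) can be sketched with scikit-learn on synthetic data; the component count, noise level, and spectra below are all stand-ins for the real in-line measurements:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Sketch of the PLS calibration step (synthetic data; the real model is
# built from preprocessed in-line UV or Raman spectra of calibration batches).
rng = np.random.default_rng(0)
n_samples, n_wavelengths = 40, 200
conc = rng.uniform(1.5, 2.5, n_samples)                   # API % (w/w), around the 2% target
pure = np.sin(np.linspace(0, 3 * np.pi, n_wavelengths))   # stand-in pure-component spectrum
X = np.outer(conc, pure) + rng.normal(0, 0.02, (n_samples, n_wavelengths))

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, X, conc, cv=5).ravel()
rmsecv = np.sqrt(np.mean((pred - conc) ** 2))
print(f"RMSECV = {rmsecv:.3f} % (w/w)")

pls.fit(X, conc)                                          # final model for in-line use
new_spectrum = 2.1 * pure + rng.normal(0, 0.02, n_wavelengths)
print(f"predicted API: {pls.predict(new_spectrum[None, :])[0, 0]:.2f} % (w/w)")
```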
Lautié, Emmanuelle; Rasse, Catherine; Rozet, Eric; Mourgues, Claire; Vanhelleputte, Jean-Paul; Quetin-Leclercq, Joëlle
2013-02-01
The aim of this study was to determine whether fast microwave-assisted extraction could be an alternative to conventional Soxhlet extraction for the quantification of rotenone in yam bean seeds by SPE and HPLC-UV. An experimental design was used to determine the optimal conditions for the microwave extraction. The quantification values for three accessions from two species of yam bean seeds were then compared between the two extraction methods. A microwave extraction of 11 min at 55 °C using methanol/dichloromethane (50:50) extracted rotenone as efficiently as, or more efficiently than, the 8-h Soxhlet method, and was less sensitive to moisture content. The selectivity, precision, trueness, accuracy, and limit of quantification of the method with microwave extraction were also demonstrated. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Idilman, Ilkay S; Keskin, Onur; Elhan, Atilla Halil; Idilman, Ramazan; Karcaaltincaba, Musturay
2014-05-01
To determine the utility of sequential MRI-estimated proton density fat fraction (MRI-PDFF) for quantification of longitudinal changes in liver fat content in individuals with nonalcoholic fatty liver disease (NAFLD). A total of 18 consecutive individuals (M/F: 10/8, mean age: 47.7±9.8 years) diagnosed with NAFLD, who underwent sequential PDFF calculations for the quantification of hepatic steatosis at two different time points, were included in the study. All patients underwent T1-independent volumetric multi-echo gradient-echo imaging with T2* correction and spectral fat modeling. A close correlation for quantification of hepatic steatosis between the initial MRI-PDFF and liver biopsy was observed (rs=0.758, p<0.001). The median interval between the two sequential MRI-PDFF measurements was 184 days. From baseline to the end of the follow-up period, serum GGT level and homeostasis model assessment score improved significantly (p=0.015 and p=0.006, respectively), whereas BMI and serum AST and ALT levels decreased slightly. MRI-PDFF improved significantly (p=0.004). A good correlation between the two sequential MRI-PDFF calculations was observed (rs=0.714, p=0.001). In linear regression analyses, only the change in serum ALT level had a significant effect on the change in MRI-PDFF (r2=38.6%, p=0.006). An improvement of at least 5.9% in MRI-PDFF was needed to normalize an abnormal ALT level. Improvement in MRI-PDFF was associated with improvement in biochemical parameters in patients whose MRI-PDFF improved (p<0.05). MRI-PDFF can be used for the quantification of longitudinal changes in hepatic steatosis, and changes in serum ALT levels significantly reflected changes in MRI-PDFF in patients with NAFLD.
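For context, PDFF is defined as the fat fraction of the total confounder-corrected proton signal estimated from the multi-echo fit:

```latex
% Proton-density fat fraction from confounder-corrected fat (F) and
% water (W) proton signals:
\mathrm{PDFF} = \frac{F}{F + W} \times 100\%
```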
Da Silva, Eric; Kirkham, Brian; Heyd, Darrick V; Pejović-Milić, Ana
2013-10-01
Plaster of Paris (poP, CaSO4·½H2O) is the standard phantom material used for the calibration of in vivo X-ray fluorescence (IVXRF)-based systems of bone metal quantification (i.e., bone strontium and lead). Calibration of IVXRF systems employs a coherent normalization procedure, which requires applying a coherent correction factor (CCF) to the data, calculated as the ratio of the relativistic form factors of the phantom material and bone mineral. Various issues have been raised as to the suitability of poP for the calibration of IVXRF systems of bone metal quantification, including its chemical purity and its chemical difference from bone mineral (a calcium phosphate). This work describes the preparation of a chemically pure hydroxyapatite phantom material of known composition and stoichiometry, proposed for calibrating IVXRF systems of bone strontium and lead quantification as a replacement for poP. The issue of contamination by the analyte was resolved by preparing pure Ca(OH)2 by hydroxide precipitation, which brought strontium and lead levels to <0.7 and <0.3 μg/g Ca, respectively. HAp phantoms were prepared from known quantities of chemically pure Ca(OH)2, CaHPO4·2H2O prepared from pure Ca(OH)2, the analyte, and a HPO4(2-)-containing setting solution. The final crystal structure of the material was found to be similar to that of the bone mineral component of NIST SRM 1486 (bone meal), as determined by powder X-ray diffraction.
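In symbols, the coherent correction factor described above is simply the ratio stated in the text (a restatement, with the momentum-transfer dependence made explicit):

```latex
% Coherent correction factor: ratio of the relativistic form factors (RFF)
% of the phantom material and bone mineral at momentum transfer q.
\mathrm{CCF}(q) = \frac{F_{\mathrm{RFF}}^{\mathrm{phantom}}(q)}{F_{\mathrm{RFF}}^{\mathrm{bone}}(q)}
```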