Dual Approach To Superquantile Estimation And Applications To Density Fitting
2016-06-01
Incorporates additional constraints to improve the fidelity of density estimates in tail regions. The investigation is limited to data with heavy tails, where risk quantification is typically the most difficult. Demonstrations are provided in the form of samples of various heavy-tailed distributions. Subject terms: probability density estimation, epi-splines, optimization, risk quantification.
Zorko, Benjamin; Korun, Matjaž; Mora Canadas, Juan Carlos; Nicoulaud-Gouin, Valerie; Chyly, Pavol; Blixt Buhr, Anna Maria; Lager, Charlotte; Aquilonius, Karin; Krajewski, Pawel
2016-07-01
Several methods for reporting outcomes of gamma-ray spectrometric measurements of environmental samples for dose calculations are presented and discussed. The measurement outcomes can be reported as primary measurement results, primary measurement results modified according to the quantification limit, best estimates obtained by the Bayesian posterior (ISO 11929), best estimates obtained by a procedure resembling shifting of the probability density distribution, or according to the procedure recommended by the European Commission (EC). The annual dose is calculated from the arithmetic average under any of these five procedures. It was shown that the primary measurement results modified according to the quantification limit can lead to an underestimation of the annual dose, whereas the best estimates lead to an overestimation. The annual doses calculated from measurement outcomes obtained according to the EC's recommended procedure, which does not account for the uncertainties, fluctuate between under- and overestimation, depending on the frequency of measurement results larger than the limit of detection. In the extreme case, when no measurement results above the detection limit occur, the average over primary measurement results modified according to the quantification limit underestimates the average over primary measurement results by about 80%. The average over best estimates calculated according to the shifting procedure overestimates the average over primary measurement results by 35%, the average obtained by the Bayesian posterior by 85%, and the treatment according to the EC recommendation by 89%. Copyright © 2016 Elsevier Ltd. All rights reserved.
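The gap between these reporting conventions comes down to what value is substituted for results below the quantification limit before averaging. A minimal sketch, with invented activity values and an invented LOQ (not the study's data):

```python
# How the substitution rule for below-LOQ results shifts the average.
# Activity values and the LOQ are invented; None marks a result below
# the limit of quantification.
LOQ = 2.0

def average(results, below_loq_value):
    """Mean after substituting below_loq_value for below-LOQ results:
    0 -> lower bound, LOQ/2 -> middle bound, LOQ -> upper bound."""
    values = [r if r is not None else below_loq_value for r in results]
    return sum(values) / len(values)

measurements = [1.2, None, 3.4, None, 0.8, None]  # three results below LOQ

lower  = average(measurements, 0.0)
middle = average(measurements, LOQ / 2.0)
upper  = average(measurements, LOQ)
# With half the results censored, the bounds already differ by ~2x.
```

With half the results below the LOQ, the substitution rules spread the average by roughly a factor of two, which is the effect the abstract quantifies for the extreme all-below-LOQ case.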
Lesion Quantification in Dual-Modality Mammotomography
NASA Astrophysics Data System (ADS)
Li, Heng; Zheng, Yibin; More, Mitali J.; Goodale, Patricia J.; Williams, Mark B.
2007-02-01
This paper describes a novel x-ray/SPECT dual-modality breast imaging system that provides 3D structural and functional information. While only a limited number of views on one side of the breast can be acquired due to mechanical and time constraints, we developed a technique to compensate for the limited-angle artifact in reconstructed images and to accurately estimate both lesion size and radioactivity concentration. Various angular sampling strategies were evaluated using both simulated and experimental data. It was demonstrated that quantification of lesion size to an accuracy of 10% and of radioactivity to an accuracy of 20% is feasible from limited-angle data acquired with clinically practical dosage and acquisition time.
NASA Astrophysics Data System (ADS)
Schwabe, O.; Shehab, E.; Erkoyuncu, J.
2015-08-01
The lack of defensible methods for quantifying cost-estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates, drawing on a literature review, an evaluation of publicly funded projects such as those within the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized by their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art), and those suggested for future exploration (state-of-future). The insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and on the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and the number of whole product life cycle phases can thereby be considered as defining multi-dimensional probability fields admitting various families of uncertainty quantification metrics, with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of theoretically grounded guidance for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis for future work in this field.
Ostry, Vladimir; Malir, Frantisek; Dofkova, Marcela; Skarkova, Jarmila; Pfohl-Leszkowicz, Annie; Ruprich, Jiri
2015-09-10
Ochratoxin A is a nephrotoxic and renal carcinogenic mycotoxin and is a common contaminant of various food commodities. Eighty-six kinds of foodstuffs (1032 food samples) were collected in 2011-2013. High-performance liquid chromatography with fluorescence detection was used for ochratoxin A determination. The limit of quantification of the method varied between 0.01 and 0.2 μg/kg depending on the food matrix. The most exposed population is children aged 4-6 years old. Globally for this group, the maximum ochratoxin A dietary exposure for the "average consumer" was estimated at 3.3 ng/kg bw/day (lower bound, treating analytical values below the limit of quantification as 0) and 3.9 ng/kg bw/day (middle bound, treating analytical values below the limit of quantification as 1/2 the limit of quantification). Important sources of exposure for this group include grain-based products, confectionery, meat products and fruit juice. The dietary intake for "high consumers" in the 4-6 years old group was estimated from grains and grain-based products at 19.8 ng/kg bw/day (middle bound), from tea at 12.0 ng/kg bw/day (middle bound) and from confectionery at 6.5 ng/kg bw/day (middle bound). For men aged 18-59 years old, beer was the main contributor, with an intake of 2.60 ng/kg bw/day ("high consumers", middle bound). Tea and grain-based products were identified as the main contributors to dietary exposure in women aged 18-59 years old. Coffee and wine were identified as higher contributors to OTA intake in women aged 18-59 years old compared to the other population groups.
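The lower-bound/middle-bound convention used throughout this abstract can be written out directly. A hedged sketch with hypothetical concentrations, intakes, LOQ, and body weight (not the study's survey data):

```python
# Lower-bound / middle-bound dietary exposure estimate. All numbers are
# hypothetical stand-ins, not the study's survey data.
foods = {
    # food: (OTA concentration in ug/kg, or None if below LOQ; intake kg/day)
    "bread":  (0.15, 0.10),
    "cereal": (None, 0.04),
    "juice":  (0.05, 0.20),
}
LOQ_UG_KG = 0.10
BODY_WEIGHT_KG = 20.0  # child

def exposure_ng_kg_bw(sub_for_below_loq):
    total_ug = sum((c if c is not None else sub_for_below_loq) * kg
                   for c, kg in foods.values())
    return total_ug * 1000.0 / BODY_WEIGHT_KG  # ug -> ng, per kg bw per day

lower_bound  = exposure_ng_kg_bw(0.0)            # below-LOQ treated as 0
middle_bound = exposure_ng_kg_bw(LOQ_UG_KG / 2)  # treated as LOQ/2
```

The gap between the two bounds grows with the share of foods reported below the LOQ, which is why the abstract quotes both.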
Adrait, Annie; Lebert, Dorothée; Trauchessec, Mathieu; Dupuis, Alain; Louwagie, Mathilde; Masselon, Christophe; Jaquinod, Michel; Chevalier, Benoît; Vandenesch, François; Garin, Jérôme; Bruley, Christophe; Brun, Virginie
2012-06-06
Enterotoxin A (SEA) is a staphylococcal virulence factor which is suspected to worsen septic shock prognosis. However, the presence of SEA in the blood of sepsis patients has never been demonstrated. We have developed a mass spectrometry-based assay for the targeted and absolute quantification of SEA in serum. To enhance sensitivity and specificity, we combined an immunoaffinity-based sample preparation with mass spectrometry analysis in the selected reaction monitoring (SRM) mode. Absolute quantification of SEA was performed using the PSAQ™ method (Protein Standard Absolute Quantification), which uses a full-length isotope-labeled SEA as internal standard. The lower limit of detection (LLOD) and lower limit of quantification (LLOQ) were estimated at 352 pg/mL and 1057 pg/mL, respectively. SEA recovery after immunocapture was determined to be 7.8 ± 1.4%. Therefore, we assumed that less than 1 femtomole of each SEA proteotypic peptide was injected on the liquid chromatography column before SRM analysis. From a 6-point titration experiment, quantification accuracy was determined to be 77%, and precision at the LLOQ was lower than 5%. With this sensitive PSAQ-SRM assay, we expect to help decipher the pathophysiological role of SEA in severe sepsis. This article is part of a Special Issue entitled: Proteomics: The clinical link. Copyright © 2011 Elsevier B.V. All rights reserved.
Quantifying errors without random sampling.
Phillips, Carl V; LaPole, Luwanna M
2003-06-12
All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
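The Monte Carlo approach the authors describe amounts to drawing each uncertain input from a judged distribution and propagating the draws through the calculation. A minimal sketch with invented distributions (not the foodborne-illness inputs of the paper):

```python
# Minimal Monte Carlo propagation of several non-sampling error sources.
# Distributions and parameters are invented for illustration.
import random

random.seed(0)

def simulate_incidence():
    # Each uncertain input gets a judged distribution, not a standard error.
    reported = random.lognormvariate(10.0, 0.2)   # reported case count
    underreporting = random.uniform(10.0, 40.0)   # true cases per reported
    foodborne_frac = random.uniform(0.8, 1.0)     # fraction truly foodborne
    return reported * underreporting * foodborne_frac

draws = sorted(simulate_incidence() for _ in range(20000))
median = draws[len(draws) // 2]
lo = draws[int(0.025 * len(draws))]   # 2.5th percentile
hi = draws[int(0.975 * len(draws))]   # 97.5th percentile
# (lo, hi) is the combined uncertainty interval across all three sources.
```

The 2.5th-97.5th percentile range of the draws is the combined uncertainty interval; with the stdlib `random` module this needs no specialized software.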
Hame, Yrjo; Angelini, Elsa D; Hoffman, Eric A; Barr, R Graham; Laine, Andrew F
2014-07-01
The extent of pulmonary emphysema is commonly estimated from CT scans by computing the proportional area of voxels below a predefined attenuation threshold. However, the reliability of this approach is limited by several factors that affect the CT intensity distributions in the lung. This work presents a novel method for emphysema quantification, based on parametric modeling of intensity distributions and a hidden Markov measure field model to segment emphysematous regions. The framework adapts to the characteristics of an image to ensure a robust quantification of emphysema under varying CT imaging protocols and differences in parenchymal intensity distributions due to factors such as inspiration level. Compared to standard approaches, the presented model involves a larger number of parameters, most of which can be estimated from data, to handle the variability encountered in lung CT scans. The method was applied to a longitudinal data set of 87 subjects and a total of 365 scans acquired with varying imaging protocols. The resulting emphysema estimates had very high intra-subject correlation values. By reducing sensitivity to changes in imaging protocol, the method provides a more robust estimate than standard approaches. The generated emphysema delineations promise advantages for regional analysis of emphysema extent and progression.
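The "standard approach" the method is compared against is simple to state: the percentage of lung voxels below a fixed attenuation threshold, commonly −950 HU at full inspiration. A toy sketch with made-up voxel values:

```python
# The standard approach the abstract contrasts against: proportional
# area of voxels below a fixed attenuation threshold (often -950 HU on
# full-inspiration CT). Voxel values here are made up.
THRESHOLD_HU = -950

def percent_low_attenuation(lung_voxels_hu):
    below = sum(1 for v in lung_voxels_hu if v < THRESHOLD_HU)
    return 100.0 * below / len(lung_voxels_hu)

voxels = [-980, -940, -960, -800, -970, -900, -990, -850]
laa = percent_low_attenuation(voxels)  # 4 of 8 voxels below -950 HU
```

A fixed threshold makes this estimate sensitive to imaging protocol and inspiration level, which is exactly the variability the parametric model above is designed to absorb.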
Ramírez, Juan Carlos; Cura, Carolina Inés; Moreira, Otacilio da Cruz; Lages-Silva, Eliane; Juiz, Natalia; Velázquez, Elsa; Ramírez, Juan David; Alberti, Anahí; Pavia, Paula; Flores-Chávez, María Delmans; Muñoz-Calderón, Arturo; Pérez-Morales, Deyanira; Santalla, José; Guedes, Paulo Marcos da Matta; Peneau, Julie; Marcet, Paula; Padilla, Carlos; Cruz-Robles, David; Valencia, Edward; Crisante, Gladys Elena; Greif, Gonzalo; Zulantay, Inés; Costales, Jaime Alfredo; Alvarez-Martínez, Miriam; Martínez, Norma Edith; Villarroel, Rodrigo; Villarroel, Sandro; Sánchez, Zunilda; Bisio, Margarita; Parrado, Rudy; Galvão, Lúcia Maria da Cunha; da Câmara, Antonia Cláudia Jácome; Espinoza, Bertha; de Noya, Belkisyole Alarcón; Puerta, Concepción; Riarte, Adelina; Diosque, Patricio; Sosa-Estani, Sergio; Guhl, Felipe; Ribeiro, Isabela; Aznar, Christine; Britto, Constança; Yadón, Zaida Estela; Schijman, Alejandro G.
2015-01-01
An international study was performed by 26 experienced PCR laboratories from 14 countries to assess the performance of duplex quantitative real-time PCR (qPCR) strategies on the basis of TaqMan probes for detection and quantification of parasitic loads in peripheral blood samples from Chagas disease patients. Two methods were studied: Satellite DNA (SatDNA) qPCR and kinetoplastid DNA (kDNA) qPCR. Both methods included an internal amplification control. Reportable range, analytical sensitivity, limits of detection and quantification, and precision were estimated according to international guidelines. In addition, inclusivity and exclusivity were estimated with DNA from stocks representing the different Trypanosoma cruzi discrete typing units and Trypanosoma rangeli and Leishmania spp. Both methods were challenged against 156 blood samples provided by the participant laboratories, including samples from acute and chronic patients with varied clinical findings, infected by oral route or vectorial transmission. kDNA qPCR showed better analytical sensitivity than SatDNA qPCR with limits of detection of 0.23 and 0.70 parasite equivalents/mL, respectively. Analyses of clinical samples revealed a high concordance in terms of sensitivity and parasitic loads determined by both SatDNA and kDNA qPCRs. This effort is a major step toward international validation of qPCR methods for the quantification of T. cruzi DNA in human blood samples, aiming to provide an accurate surrogate biomarker for diagnosis and treatment monitoring for patients with Chagas disease.
Enault, Jérôme; Robert, Samuel; Schlosser, Olivier; de Thé, Catherine; Loret, Jean-François
2015-11-01
This study collated 254,441 analytical results from drinking water quality monitoring in order to compare levels of exposure of the French adult population from drinking water with that from total diet for 37 pesticides, 11 mineral elements, 11 polycyclic aromatic hydrocarbons (PAH), 6 non-dioxin-like polychlorobiphenyls (NDL PCB), 5 polybromodiphenyl ethers (BDE), and 2 perfluorinated compounds. It also compares levels of exposure from drinking water with that from inhalation of indoor air for 9 volatile organic compounds (VOC) and 3 phthalates. The vast majority of the water analysis results showed values below the limits of quantification, and this comparison was primarily made on the basis of a highly pessimistic scenario consisting of treating the data below the limits of quantification as being equal to the limits of quantification. With this conservative scenario, it can be seen that tap water makes a minor but potentially non-negligible contribution for a few micropollutants, by comparison with diet and air. It also shows that exposure through drinking water remains below the toxicity reference values for these substances. Apart from a few extreme values reflecting exceptional local situations, the concentrations measured for the minority of positive samples (below the 95th percentile value) suggest a very low risk for human health. Lower limits of quantification would however be of use in better estimating the safety margin with regard to the toxicity reference values, in particular for BDE, PAH and NDL PCB. Copyright © 2015 Elsevier GmbH. All rights reserved.
Mottier, P; Parisod, V; Turesky, R J
2000-04-01
A method is described for the analysis of the 16 polycyclic aromatic hydrocarbons (PAHs) prioritized by the USA EPA in meat sausages grilled under common barbecue practices. Quantification was done by GC-MS using perdeuterated internal standards (IS). Validation was done by spiking the matrix at the 0.5 and 1.0 microg/kg levels. Average recoveries ranged from 60 to 134% (median 84%) at the 0.5 microg/kg level and from 69 to 121% (median 96%) at the 1.0 microg/kg level. The medians of the limits of detection and quantification were 0.06 and 0.20 microg/kg, respectively, for a 4-g test portion. The carcinogenic PAHs were below the quantification limit in all products except one lamb sausage. Comparison of estimates when either 1, 5, or 16 perdeuterated PAHs were used as IS showed that the most accurate determination of PAHs required that each compound be quantified against its corresponding perdeuterated analogue.
Shivali, Garg; Praful, Lahorkar; Vijay, Gadgil
2012-01-01
Fourier transform infrared (FT-IR) spectroscopy is a technique widely used for detection and quantification of various chemical moieties. This paper describes the use of FT-IR spectroscopy for the quantification of total lactones present in Inula racemosa and Andrographis paniculata, and the validation of the method against a known spectrophotometric method. Dried and powdered I. racemosa roots and A. paniculata plant material were extracted with ethanol and dried to remove ethanol completely. The ethanol extract was analysed in a KBr pellet by FT-IR spectroscopy. By FT-IR spectroscopy, the amount of total lactones was found to be 2.12 ± 0.47% (n = 3) in I. racemosa and 8.65 ± 0.51% (n = 3) in A. paniculata. The method gave results comparable with the known spectrophotometric method used for quantification of such lactones: 8.42 ± 0.36% (n = 3) in A. paniculata. Limits of detection and quantification were 1 µg and 10 µg for isoalantolactone, and 1.5 µg and 15 µg for andrographolide, respectively. Recoveries were over 98%, with good intra- and inter-day repeatability (RSD ≤ 2%). The FT-IR spectroscopy method proved linear, accurate, precise and specific, with low limits of detection and quantification, for estimation of total lactones, and is less tedious than the UV spectrophotometric method for the compounds tested. This validated FT-IR spectroscopy method is readily applicable for the quality control of I. racemosa and A. paniculata. Copyright © 2011 John Wiley & Sons, Ltd.
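The abstract reports limits of detection and quantification without stating how they were derived; one common convention (ICH-style) takes them from the calibration line as 3.3·s/slope and 10·s/slope, where s is the residual standard deviation. A sketch with invented calibration points, offered only as an illustration of that convention, not as the paper's actual procedure:

```python
# ICH-style LOD/LOQ from a least-squares calibration line. Calibration
# points below are invented for illustration.
amounts = [2.0, 4.0, 6.0, 8.0, 10.0]      # analyte amount (ug)
signal  = [0.21, 0.39, 0.62, 0.80, 1.01]  # instrument response

n = len(amounts)
mx, my = sum(amounts) / n, sum(signal) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(amounts, signal))
         / sum((x - mx) ** 2 for x in amounts))
intercept = my - slope * mx
resid = [y - (intercept + slope * x) for x, y in zip(amounts, signal)]
s = (sum(r * r for r in resid) / (n - 2)) ** 0.5  # residual std deviation

lod = 3.3 * s / slope   # limit of detection, in amount units
loq = 10.0 * s / slope  # limit of quantification (always ~3x the LOD)
```

Note the fixed 10/3.3 ratio this convention imposes between LOQ and LOD, which is consistent with the roughly 10:1 figures quoted in the abstract.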
Application of an energy balance method for estimating evapotranspiration in cropping systems
USDA-ARS?s Scientific Manuscript database
Accurate quantification of evapotranspiration (ET, consumptive water use) from planting through harvest is critical for managing the limited water resources for crop irrigation. Our objective was to develop and apply an improved land-crop surface residual energy balance (EB) method for quantifying E...
Integrative analysis with ChIP-seq advances the limits of transcript quantification from RNA-seq
Liu, Peng; Sanalkumar, Rajendran; Bresnick, Emery H.; Keleş, Sündüz; Dewey, Colin N.
2016-01-01
RNA-seq is currently the technology of choice for global measurement of transcript abundances in cells. Despite its successes, isoform-level quantification remains difficult because short RNA-seq reads are often compatible with multiple alternatively spliced isoforms. Existing methods rely heavily on uniquely mapping reads, which are not available for numerous isoforms that lack regions of unique sequence. To improve quantification accuracy in such difficult cases, we developed a novel computational method, prior-enhanced RSEM (pRSEM), which uses a complementary data type in addition to RNA-seq data. We found that ChIP-seq data of RNA polymerase II and histone modifications were particularly informative in this approach. In qRT-PCR validations, pRSEM was shown to be superior to competing methods in estimating relative isoform abundances within or across conditions. Data-driven simulations suggested that pRSEM has a greatly decreased false-positive rate at the expense of a small increase in false-negative rate. In aggregate, our study demonstrates that pRSEM transforms existing capacity to precisely estimate transcript abundances, especially at the isoform level.
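The core estimation problem — reads compatible with several isoforms, abundances recovered by EM, with external data entering as a prior — can be illustrated in a few lines. This toy sketch is not the pRSEM implementation; isoforms, reads, and pseudo-counts are invented:

```python
# Toy isoform-abundance EM with prior pseudo-counts standing in for
# external evidence (as ChIP-seq does in pRSEM). Not pRSEM itself.

isoforms = ["A", "B"]
# Each read lists the isoforms its alignment is compatible with.
reads = [{"A"}, {"A"}, {"A", "B"}, {"A", "B"}, {"B"}]
prior = {"A": 1.0, "B": 1.0}  # symmetric pseudo-counts (uninformative)

theta = {i: 1.0 / len(isoforms) for i in isoforms}  # initial abundances
for _ in range(100):
    counts = dict(prior)          # M-step numerators start at the prior
    for compat in reads:          # E-step: split each read by theta
        z = sum(theta[i] for i in compat)
        for i in compat:
            counts[i] += theta[i] / z
    total = sum(counts.values())
    theta = {i: counts[i] / total for i in isoforms}

# Unique reads (2 for A, 1 for B) anchor the split of the two ambiguous
# reads; theta converges to A: 0.6, B: 0.4 under this symmetric prior.
```

Raising the prior pseudo-count of one isoform, as informative ChIP-seq evidence would, shifts the ambiguous reads toward it; that is the mechanism by which the complementary data type improves quantification.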
Sampaio, Francisco; Ladeiras-Lopes, Ricardo; Almeida, João; Fonseca, Paulo; Fontes-Carvalho, Ricardo; Ribeiro, José; Gama, Vasco
2017-07-01
Management of patients with mitral stenosis (MS) depends heavily on the accurate quantification of mitral valve area (MVA) using echocardiography. All currently used two-dimensional (2D) methods have limitations. Estimation of MVA using the proximal isovelocity surface area (PISA) method with real-time three-dimensional (3D) echocardiography may circumvent those limitations. We aimed to evaluate the accuracy of 3D direct measurement of PISA in the estimation of MVA. Twenty-seven consecutive patients (median age of 63 years; 77.8% females) with rheumatic MS were prospectively studied. Transthoracic and transesophageal echocardiography with 2D and 3D acquisitions were performed on the same day. The reference method for MVA quantification was valve planimetry after 3D-volume multiplanar reconstruction. Semi-automated software was used to calculate the 3D flow convergence volume. Compared to MVA estimation using 3D planimetry, 3D PISA showed the best correlation (rho=0.78, P<.0001), followed by pressure half-time (PHT: rho=0.66, P<.001), continuity equation (CE: rho=0.61, P=.003), and 2D PISA (rho=0.26, P=.203). Bland-Altman analysis revealed good agreement for MVA estimation with 3D PISA (mean difference −0.03 cm²; limits of agreement (LOA) −0.40 to 0.35), in contrast to wider LOA for the 2D methods: CE (mean difference 0.02 cm², LOA −0.56 to 0.60); PHT (mean difference 0.31 cm², LOA −0.32 to 0.95); 2D PISA (mean difference −0.03 cm², LOA −0.92 to 0.86). MVA estimation using 3D PISA was feasible and more accurate than 2D methods. Its introduction in daily clinical practice seems possible and may overcome technical limitations of 2D methods. © 2017, Wiley Periodicals, Inc.
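The Bland-Altman figures quoted in the abstract are the mean paired difference (bias) and limits of agreement defined as bias ± 1.96·SD of the differences. A sketch with hypothetical paired MVA values, not the study's patients:

```python
# Bland-Altman agreement: bias = mean paired difference, limits of
# agreement = bias +/- 1.96 * SD of the differences. Paired MVA values
# (cm^2) below are hypothetical.
pisa_3d   = [1.10, 1.45, 0.95, 1.30, 1.60]  # method under test
planim_3d = [1.15, 1.40, 1.05, 1.25, 1.70]  # reference (3D planimetry)

diffs = [a - b for a, b in zip(pisa_3d, planim_3d)]
n = len(diffs)
bias = sum(diffs) / n
sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
# Narrow (loa_low, loa_high) around a small bias indicates good agreement.
```

Narrow limits of agreement around a near-zero bias are what distinguishes 3D PISA from the wider-LOA 2D methods in the abstract.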
Lee, Seung Soo; Lee, Youngjoo; Kim, Namkug; Kim, Seong Who; Byun, Jae Ho; Park, Seong Ho; Lee, Moon-Gyu; Ha, Hyun Kwon
2011-06-01
To compare the accuracy of four chemical shift magnetic resonance imaging (CS-MRI) analysis methods and MR spectroscopy (MRS) with and without T2-correction in fat quantification in the presence of excess iron. CS-MRI with six opposed- and in-phase acquisitions and MRS with five-echo acquisitions (TEs of 20, 30, 40, 50, 60 msec) were performed at 1.5 T on phantoms containing various fat fractions (FFs), on phantoms containing various iron concentrations, and in 18 patients with chronic liver disease. For CS-MRI, FFs were estimated with the dual-echo method, with two T2*-correction methods (triple- and multiecho), and with multiinterference methods that corrected for both T2* and spectral interference effects. For MRS, FF was estimated without T2-correction (single-echo MRS) and with T2-correction (multiecho MRS). In the phantoms, the T2*- or T2-correction methods for CS-MRI and MRS provided unbiased estimations of FFs (mean bias, -1.1% to 0.5%) regardless of iron concentration, whereas the dual-echo method (-5.5% to -8.4%) and single-echo MRS (12.1% to 37.3%) resulted in large biases in FFs. In patients, the FFs estimated with the triple-echo (R = 0.98), multiecho (R = 0.99), and multiinterference (R = 0.99) methods correlated more strongly with multiecho MRS FFs than did those from the dual-echo method (R = 0.86; P ≤ 0.011). The FFs estimated with the multiinterference method showed the closest agreement with multiecho MRS FFs (95% limits of agreement, -0.2 ± 1.1). T2*- or T2-correction methods are effective in correcting the confounding effects of iron, enabling accurate fat quantification throughout a wide range of iron concentrations. Spectral modeling of fat may further improve the accuracy of CS-MRI in fat quantification. Copyright © 2011 Wiley-Liss, Inc.
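The dual-echo estimate whose iron-related bias the study demonstrates is the standard two-point formula: the in-phase signal is water+fat and the opposed-phase signal is water−fat, so FF = (S_IP − S_OP)/(2·S_IP). A sketch with synthetic signals and no T2* decay:

```python
# Uncorrected two-point ("dual-echo") fat-fraction estimate. In-phase
# signal is water+fat, opposed-phase is water-fat. Signals are synthetic.
def dual_echo_fat_fraction(s_in_phase, s_opposed_phase):
    return (s_in_phase - s_opposed_phase) / (2.0 * s_in_phase)

# Water 80, fat 20 (arbitrary units), no T2* decay: FF = 20/100 = 0.20.
ff = dual_echo_fat_fraction(100.0, 60.0)
# Iron shortens T2*, attenuating the later echo and biasing FF; the
# triple-/multiecho methods estimate that decay and remove it.
```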
An Adaptive Nonlinear Aircraft Maneuvering Envelope Estimation Approach for Online Applications
NASA Technical Reports Server (NTRS)
Schuet, Stefan R.; Lombaerts, Thomas Jan; Acosta, Diana; Wheeler, Kevin; Kaneshige, John
2014-01-01
A nonlinear aircraft model is presented and used to develop an overall unified robust and adaptive approach to passive trim and maneuverability envelope estimation with uncertainty quantification. The concept of time scale separation makes this method suitable for the online characterization of altered safe maneuvering limitations after impairment. The results can be used to provide pilot feedback and/or be combined with flight planning, trajectory generation, and guidance algorithms to help maintain safe aircraft operations in both nominal and off-nominal scenarios.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.
An Uncertainty Quantification Framework for Remote Sensing Retrievals
NASA Astrophysics Data System (ADS)
Braverman, A. J.; Hobbs, J.
2017-12-01
Remote sensing data sets produced by NASA and other space agencies are the result of complex algorithms that infer geophysical state from observed radiances using retrieval algorithms. The processing must keep up with the downlinked data flow, and this necessitates computational compromises that affect the accuracies of retrieved estimates. The algorithms are also limited by imperfect knowledge of physics and of ancillary inputs that are required. All of this contributes to uncertainties that are generally not rigorously quantified by stepping outside the assumptions that underlie the retrieval methodology. In this talk we discuss a practical framework for uncertainty quantification that can be applied to a variety of remote sensing retrieval algorithms. Ours is a statistical approach that uses Monte Carlo simulation to approximate the sampling distribution of the retrieved estimates. We will discuss the strengths and weaknesses of this approach, and provide a case-study example from the Orbiting Carbon Observatory 2 mission.
Compositional Solution Space Quantification for Probabilistic Software Analysis
NASA Technical Reports Server (NTRS)
Borges, Mateus; Pasareanu, Corina S.; Filieri, Antonio; d'Amorim, Marcelo; Visser, Willem
2014-01-01
Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions to reach the target event and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while only approximate solutions can be provided in general through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement on previous approaches both in results accuracy and analysis time.
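A minimal sketch of the focusing idea on an invented constraint: plain Monte Carlo over the whole bounded domain versus sampling restricted to an interval box assumed to come from constraint propagation (the disk constraint, bounds, and sample sizes are all illustrative, not from the paper).

```python
import random

random.seed(42)
LO, HI = -10.0, 10.0  # bounded 2-D input domain [-10, 10] x [-10, 10]

def satisfies(x, y):
    # Stand-in for a path condition reached by symbolic execution.
    return x * x + y * y <= 4.0

def mc_fraction(xlo, xhi, ylo, yhi, n):
    """Estimate the full-domain fraction satisfying the constraint by
    sampling uniformly inside the given box and rescaling by its area."""
    hits = sum(
        satisfies(random.uniform(xlo, xhi), random.uniform(ylo, yhi))
        for _ in range(n)
    )
    box_area = (xhi - xlo) * (yhi - ylo)
    return hits / n * box_area / ((HI - LO) ** 2)

naive = mc_fraction(LO, HI, LO, HI, 20000)
# Interval propagation would shrink the search to [-2, 2] x [-2, 2], which
# encloses every solution; the same sample budget now yields a tighter estimate.
focused = mc_fraction(-2.0, 2.0, -2.0, 2.0, 20000)
# True fraction: pi * 2**2 / 400, about 0.0314
```

The focused estimator concentrates all samples where solutions can actually lie, which is the source of the accuracy gain the abstract reports.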
Li, Peng; Jia, Junwei; Bai, Lan; Pan, Aihu; Tang, Xueming
2013-07-01
Genetically modified carnation (Dianthus caryophyllus L.) Moonshade has been approved for planting and commercialization in several countries since 2004. Developing methods for analyzing Moonshade is necessary for implementing genetically modified organism labeling regulations. In this study, the 5'-transgene integration sequence was isolated using thermal asymmetric interlaced (TAIL)-PCR. Based upon the 5'-transgene integration sequence, conventional and TaqMan real-time PCR assays were established. The relative limit of detection of the conventional PCR assay was 0.05% for Moonshade using 100 ng of total carnation genomic DNA, corresponding to approximately 79 copies of the carnation haploid genome, and the limits of detection and quantification of the TaqMan real-time PCR assay were estimated to be 51 and 254 copies of haploid carnation genomic DNA, respectively. These results are useful for identifying and quantifying Moonshade and its derivatives.
NASA Astrophysics Data System (ADS)
Dittes, Beatrice; Špačková, Olga; Ebrahimian, Negin; Kaiser, Maria; Rieger, Wolfgang; Disse, Markus; Straub, Daniel
2017-04-01
Flood risk estimates are subject to significant uncertainties, e.g. due to limited records of historic flood events, uncertainty in flood modeling, uncertain impact of climate change or uncertainty in the exposure and loss estimates. In traditional design of flood protection systems, these uncertainties are typically accounted for only implicitly, based on engineering judgment. In the AdaptRisk project, we develop a fully quantitative framework for planning of flood protection systems under current and future uncertainties using quantitative pre-posterior Bayesian decision analysis. In this contribution, we focus on the quantification of the uncertainties and study their relative influence on the flood risk estimate and on the planning of flood protection systems. The following uncertainty components are included using a Bayesian approach: 1) inherent and statistical (i.e. limited record length) uncertainty; 2) climate uncertainty that can be learned from an ensemble of GCM-RCM models; 3) estimates of climate uncertainty components not covered in 2), such as bias correction, incomplete ensemble, local specifics not captured by the GCM-RCM models; 4) uncertainty in the inundation modelling; 5) uncertainty in damage estimation. We also investigate how these uncertainties may be reduced in the future when new evidence - such as new climate models, observed extreme events, and socio-economic data - becomes available. Finally, we look into how this new evidence influences the risk assessment and the effectiveness of flood protection systems. We demonstrate our methodology for a pre-alpine catchment in southern Germany: the Mangfall catchment in Bavaria that includes the city of Rosenheim, which suffered significant losses during the 2013 flood event.
Chan, Tao
2012-01-01
CT has become an established method for calculating body composition, but it requires data from the whole body, which are not typically obtained in routine PET/CT examinations. A computerized scheme that evaluates whole-body lean body mass (LBM) based on CT data from limited-whole-body coverage was developed. The LBM so obtained was compared with results from conventional predictive equations. LBM can be obtained automatically from limited-whole-body CT data by 3 means: quantification of body composition from CT images in the limited-whole-body scan, based on thresholding of CT attenuation; determination of the range of coverage based on a characteristic trend of changing composition across different levels and pattern recognition of specific features at strategic positions; and estimation of the LBM of the whole body on the basis of a predetermined relationship between proportion of fat mass and extent of coverage. This scheme was validated using 18 whole-body PET/CT examinations truncated at different lengths to emulate limited-whole-body data. LBM was also calculated using predictive equations that had been reported for use in SUV normalization. LBM derived from limited-whole-body data using the proposed method correlated strongly with LBM derived from whole-body CT data, with correlation coefficients ranging from 0.991 (shorter coverage) to 0.998 (longer coverage) and SEMs of LBM ranging from 0.14 to 0.33 kg. These were more accurate than results from different predictive equations, which ranged in correlation coefficient from 0.635 to 0.970 and in SEM from 0.64 to 2.40 kg. LBM of the whole body could be automatically estimated from CT data of limited-whole-body coverage typically acquired in PET/CT examinations. This estimation allows more accurate and consistent quantification of metabolic activity of tumors based on LBM-normalized standardized uptake value.
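The attenuation-thresholding step of such a scheme can be sketched as follows; the Hounsfield windows, voxel volume, and tissue densities are generic illustrative values, not those of the paper, and the synthetic array stands in for a CT sub-volume.

```python
import numpy as np

# Classify voxels of a (synthetic) CT sub-volume into fat and lean
# compartments by CT attenuation, then convert voxel counts to masses.
rng = np.random.default_rng(3)
hu = rng.normal(20.0, 60.0, size=(40, 64, 64))  # stand-in CT volume, in HU

voxel_ml = 0.002                       # hypothetical voxel volume in mL
fat_mask = (hu >= -190) & (hu <= -30)  # typical adipose attenuation window
lean_mask = (hu > -30) & (hu <= 150)   # typical soft-tissue window

fat_kg = fat_mask.sum() * voxel_ml * 0.92 / 1000.0    # fat density ~0.92 g/mL
lean_kg = lean_mask.sum() * voxel_ml * 1.04 / 1000.0  # lean density ~1.04 g/mL
```

The paper's remaining steps (recognizing the extent of coverage and extrapolating the fat-mass proportion to the whole body) build on per-slice compositions like these.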
Automated quantification of myocardial perfusion SPECT using simplified normal limits.
Slomka, Piotr J; Nishina, Hidetaka; Berman, Daniel S; Akincioglu, Cigdem; Abidov, Aiden; Friedman, John D; Hayes, Sean W; Germano, Guido
2005-01-01
To simplify development of normal limits for myocardial perfusion SPECT (MPS), we implemented a quantification scheme in which normal limits are derived without visual scoring of abnormal scans or optimization of regional thresholds. Normal limits were derived from same-day Tl-201 rest/Tc-99m-sestamibi stress scans of male (n = 40) and female (n = 40) low-likelihood patients. Defect extent, total perfusion deficit (TPD), and regional perfusion extents were derived by comparison to normal limits in polar-map coordinates. MPS scans from 256 consecutive patients without known coronary artery disease, who underwent coronary angiography, were analyzed. The new method of quantification (TPD) was compared with our previously developed quantification system and visual scoring. The receiver operating characteristic area under the curve for detection of 50% or greater stenoses by TPD (0.88 ± 0.02) was higher than by visual scoring (0.83 ± 0.03) (P = .039) or standard quantification (0.82 ± 0.03) (P = .004). For detection of 70% or greater stenoses, it was higher for TPD (0.89 ± 0.02) than for standard quantification (0.85 ± 0.02) (P = .014). Sensitivity and specificity were 93% and 79%, respectively, for TPD; 81% and 85%, respectively, for visual scoring; and 80% and 73%, respectively, for standard quantification. The use of stress mode-specific normal limits did not improve performance. Simplified quantification achieves performance better than or equivalent to visual scoring or quantification based on per-segment visual optimization of abnormality thresholds.
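A heavily simplified, hypothetical TPD-style computation: compare a patient polar map against a normal mean/SD database and accumulate the severity of pixels falling below a 3-SD normal limit. The maps, the limit, and the severity weighting are invented; the actual method uses a validated normal database and calibrated thresholds.

```python
import numpy as np

rng = np.random.default_rng(7)
normal_mean = np.full((36, 24), 100.0)  # normal polar-map counts (invented)
normal_sd = np.full((36, 24), 8.0)

patient = normal_mean + rng.normal(0.0, 8.0, (36, 24))
patient[:6, :6] -= 40.0                 # implant a synthetic perfusion defect

limit = normal_mean - 3.0 * normal_sd   # per-pixel normal limit
below = patient < limit                 # defect extent: which pixels fail
severity = np.where(below, (limit - patient) / normal_mean, 0.0)
tpd_percent = 100.0 * severity.mean()   # combined extent-and-severity score
```

A score of this shape combines defect extent and severity into one number, which is the appeal of TPD over separate extent maps.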
Meeting Report: Tissue-based Image Analysis.
Saravanan, Chandra; Schumacher, Vanessa; Brown, Danielle; Dunstan, Robert; Galarneau, Jean-Rene; Odin, Marielle; Mishra, Sasmita
2017-10-01
Quantitative image analysis (IA) is a rapidly evolving area of digital pathology. Although not a new concept, the quantification of histological features on photomicrographs used to be cumbersome, resource-intensive, and limited to specialists and specialized laboratories. Recent technological advances like highly efficient automated whole slide digitizer (scanner) systems, innovative IA platforms, and the emergence of pathologist-friendly image annotation and analysis systems mean that quantification of features on histological digital images will become increasingly prominent in pathologists' daily professional lives. The added value of quantitative IA in pathology includes confirmation of equivocal findings noted by a pathologist, increasing the sensitivity of feature detection, quantification of signal intensity, and improving efficiency. There is no denying that quantitative IA is part of the future of pathology; however, there are also several potential pitfalls when trying to estimate volumetric features from limited 2-dimensional sections. This continuing education session on quantitative IA offered a broad overview of the field; a hands-on toxicologic pathologist experience with IA principles, tools, and workflows; a discussion on how to apply basic stereology principles in order to minimize bias in IA; and finally, a reflection on the future of IA in the toxicologic pathology field.
Jäpelt, Rie Bak; Jakobsen, Jette
2016-02-01
The objective of this study was to develop a rapid, sensitive, and specific analytical method to study vitamin K1 in fruits and vegetables. Accelerated solvent extraction and solid phase extraction were used for sample preparation. Quantification was done by liquid chromatography tandem mass spectrometry with atmospheric pressure chemical ionization in selected reaction monitoring mode with deuterium-labeled vitamin K1 as an internal standard. The precision was estimated as the pooled estimate of three replicates performed on three different days for spinach, peas, apples, banana, and beetroot. The repeatability was 5.2% and the internal reproducibility was 6.2%. Recovery was in the range 90-120%. No significant difference was observed between the results obtained by the present method and by a method using the same principle as the CEN standard, i.e., liquid-liquid extraction and post-column zinc reduction with fluorescence detection. The limit of quantification was estimated at 0.05 μg/100 g fresh weight. Copyright © 2015 Elsevier Ltd. All rights reserved.
Uncertainty quantification applied to the radiological characterization of radioactive waste.
Zaffora, B; Magistris, M; Saporta, G; Chevalier, J-P
2017-09-01
This paper describes the process adopted at the European Organization for Nuclear Research (CERN) to quantify uncertainties affecting the characterization of very-low-level radioactive waste. Radioactive waste is a by-product of the operation of high-energy particle accelerators. Radioactive waste must be characterized to ensure its safe disposal in final repositories. Characterizing radioactive waste means establishing the list of radionuclides together with their activities. The estimated activity levels are compared to the limits given by the national authority of the waste disposal. The quantification of the uncertainty affecting the concentration of the radionuclides is therefore essential to estimate the acceptability of the waste in the final repository but also to control the sorting, volume reduction and packaging phases of the characterization process. The characterization method consists of estimating the activity of produced radionuclides either by experimental methods or statistical approaches. The uncertainties are estimated using classical statistical methods and uncertainty propagation. A mixed multivariate random vector is built to generate random input parameters for the activity calculations. The random vector is a robust tool to account for the unknown radiological history of legacy waste. This analytical technique is also particularly useful to generate random chemical compositions of materials when the trace element concentrations are not available or cannot be measured. The methodology was validated using a waste population of legacy copper activated at CERN. The methodology introduced here represents a first approach for the uncertainty quantification (UQ) of the characterization process of waste produced at particle accelerators. Copyright © 2017 Elsevier Ltd. All rights reserved.
Srivastava, Nishi; Srivastava, Amit; Srivastava, Sharad; Rawat, Ajay Kumar Singh; Khan, Abdul Rahman
2016-03-01
A rapid, sensitive, selective and robust quantitative densitometric high-performance thin-layer chromatographic method was developed and validated for the separation and quantification of syringic acid (SYA) and kaempferol (KML) in the hydrolyzed extracts of Bergenia ciliata and Bergenia stracheyi. The separation was performed on silica gel 60 F254 high-performance thin-layer chromatography plates using toluene:ethyl acetate:formic acid (5:4:1, v/v/v) as the mobile phase. The quantification of SYA and KML was carried out in densitometric reflection/absorption mode at 290 nm. Dense spots of SYA and KML appeared on the developed plate at retention factor values of 0.61 ± 0.02 and 0.70 ± 0.01, respectively. Precise and accurate quantification was performed using linear regression analysis by plotting peak area vs concentration over 100-600 ng/band (correlation coefficient r = 0.997, regression coefficient R² = 0.996) for SYA and 100-600 ng/band (r = 0.995, R² = 0.991) for KML. The developed method was validated in terms of accuracy, recovery and inter- and intraday precision as per International Conference on Harmonisation guidelines. The limits of detection and quantification were determined as 91.63 and 277.67 ng for SYA and 142.26 and 431.09 ng for KML, respectively. The statistical data analysis showed that the method is reproducible and selective for the estimation of SYA and KML in extracts of B. ciliata and B. stracheyi. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
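For reference, the ICH calibration-curve route to detection and quantification limits (LOD = 3.3σ/S, LOQ = 10σ/S, with S the slope and σ the residual standard deviation of the regression) can be sketched as follows; the peak-area data are invented for illustration, not taken from the paper.

```python
import numpy as np

# ICH-style LOD/LOQ from a calibration line fitted to invented data.
conc = np.array([100, 200, 300, 400, 500, 600], dtype=float)  # ng/band
area = np.array([1510, 2980, 4490, 6020, 7480, 9010], dtype=float)

slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
sigma = resid.std(ddof=2)     # residual SD, 2 fitted parameters

lod = 3.3 * sigma / slope     # limit of detection, ng/band
loq = 10.0 * sigma / slope    # limit of quantification, ng/band
```

Note that LOQ/LOD is fixed at 10/3.3 ≈ 3.03 by construction, which is also the ratio seen between the paper's reported LOQ and LOD values.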
NASA Astrophysics Data System (ADS)
Ivanova, V.; Surleva, A.; Koleva, B.
2018-06-01
An ion chromatographic method for the determination of fluoride, chloride, nitrate and sulphate in untreated and treated drinking waters is described. An automated Metrohm 850 IC Professional system equipped with a conductivity detector and a Metrosep A Supp 7-250 (250 × 4 mm) column was used. The method was validated for simultaneous determination of all studied analytes, and the results showed that it meets the requirements of the current water legislation. The main analytical characteristics were estimated for each of the studied analytes: limits of detection, limits of quantification, working and linear ranges, repeatability and intermediate precision, and recovery. The trueness of the method was assessed by analysis of a certified reference material for soft drinking water, and a recovery test was performed on spiked drinking water samples. Measurement uncertainty was also estimated. The method was applied to the analysis of drinking waters before and after chlorination.
NASA Astrophysics Data System (ADS)
Buongiorno, J.; Lloyd, K. G.; Shumaker, A.; Schippers, A.; Webster, G.; Weightman, A.; Turner, S.
2015-12-01
Nearly 75% of the Earth's surface is covered by marine sediment that is home to an estimated 2.9 × 10²⁹ microbial cells. A substantial impediment to understanding the abundance and distribution of cells within marine sediment is the lack of a consistent and reliable method for their taxon-specific quantification. Catalyzed reporter deposition fluorescent in situ hybridization (CARD-FISH) provides taxon-specific enumeration, but this process requires passing a large enzyme through cell membranes, decreasing its precision relative to general cell counts using a small DNA stain. In 2015, Yamaguchi et al. developed FISH with hybridization chain reaction (FISH-HCR) as an in situ whole-cell detection method for environmental microorganisms. FISH-HCR amplifies the fluorescent signal, as does CARD-FISH, but it allows for milder cell permeabilization methods that might prevent yield loss. To compare FISH-HCR to CARD-FISH, we examined bacterial and archaeal cell counts within two sediment cores, Lille Belt (~78 meters deep) and Landsort Deep (90 meters deep), which were retrieved from the Baltic Sea Basin during IODP Expedition 347. Preliminary analysis shows that CARD-FISH counts are below the quantification limit at most depths in both cores. By contrast, quantification of cells was possible with FISH-HCR at all examined depths. When quantification with CARD-FISH was above the limit of detection, counts with FISH-HCR were up to 11-fold higher for Bacteria and 3-fold higher for Archaea from the same sediment sample. Further, FISH-HCR counts closely follow the trends of onboard counts, indicating that FISH-HCR may better reflect the cellular abundance within marine sediment than other quantification methods, including qPCR. Using FISH-HCR, we found that archaeal cell counts were on average greater than bacterial cell counts, but within the same order of magnitude.
The Effects of Statistical Multiplicity of Infection on Virus Quantification and Infectivity Assays.
Mistry, Bhaven A; D'Orsogna, Maria R; Chou, Tom
2018-06-19
Many biological assays are employed in virology to quantify parameters of interest. Two such classes of assays, virus quantification assays (VQAs) and infectivity assays (IAs), aim to estimate the number of viruses present in a solution and the ability of a viral strain to successfully infect a host cell, respectively. VQAs operate at extremely dilute concentrations, and results can be subject to stochastic variability in virus-cell interactions. At the other extreme, high viral-particle concentrations are used in IAs, resulting in large numbers of viruses infecting each cell, enough for measurable change in total transcription activity. Furthermore, host cells can be infected at any concentration regime by multiple particles, resulting in a statistical multiplicity of infection and yielding potentially significant variability in the assay signal and parameter estimates. We develop probabilistic models for statistical multiplicity of infection at low and high viral-particle-concentration limits and apply them to the plaque (VQA), endpoint dilution (VQA), and luciferase reporter (IA) assays. A web-based tool implementing our models and analysis is also developed and presented. We test our proposed new methods for inferring experimental parameters from data using numerical simulations and show improvement on existing procedures in all limits. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
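The Poisson picture behind statistical multiplicity of infection can be sketched in a few lines: if virions attach to cells independently, the count per cell is Poisson with mean m (the MOI), the uninfected fraction is exp(-m), and inverting that relation recovers m from an infected-cell count. This is an illustrative simplification; the paper's models go well beyond it.

```python
import numpy as np

rng = np.random.default_rng(1)

moi = 0.7                                # true mean virions per cell (invented)
counts = rng.poisson(moi, size=100_000)  # virions landing on each cell
p_inf = float(np.mean(counts >= 1))      # observed infected fraction
moi_hat = -np.log(1.0 - p_inf)           # invert p = 1 - exp(-m)
```

With the true infected fraction near 1 - exp(-0.7) ≈ 0.50, the inversion recovers the MOI to within sampling error, which is the logic underlying endpoint-dilution-style estimates.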
NASA Astrophysics Data System (ADS)
Fu, Yi; Yu, Guoqiang; Levine, Douglas A.; Wang, Niya; Shih, Ie-Ming; Zhang, Zhen; Clarke, Robert; Wang, Yue
2015-09-01
Most published copy number datasets on solid tumors were obtained from specimens comprised of mixed cell populations, for which the varying tumor-stroma proportions are unknown or unreported. The inability to correct for signal mixing represents a major limitation on the use of these datasets for subsequent analyses, such as discerning deletion types or detecting driver aberrations. We describe the BACOM2.0 method with enhanced accuracy and functionality to normalize copy number signals, detect deletion types, estimate tumor purity, quantify true copy numbers, and calculate average-ploidy value. While BACOM has been validated and used with promising results, subsequent BACOM analysis of the TCGA ovarian cancer dataset found that the estimated average tumor purity was lower than expected. In this report, we first show that this lowered estimate of tumor purity is the combined result of imprecise signal normalization and parameter estimation. Then, we describe effective allele-specific absolute normalization and quantification methods that can enhance BACOM applications in many biological contexts while in the presence of various confounders. Finally, we discuss the advantages of BACOM in relation to alternative approaches. Here we detail this revised computational approach, BACOM2.0, and validate its performance in real and simulated datasets.
Estimation of Tegaserod Maleate by Differential Pulse Polarography
Rajput, S. J.; Raj, H. A.
2009-01-01
A highly sensitive differential pulse polarographic method has been developed for the estimation of tegaserod maleate after treating it with hydrogen peroxide solution. The oxidation of tegaserod maleate is a reversible process, as the oxidized product could be reduced at a hanging mercury drop electrode in a quantitative manner using the differential pulse polarography mode. The limit of quantification was 0.1 ng/ml. The voltammetric peak was obtained at -1.05 V in the presence of 0.1 M potassium chloride as the supporting electrolyte. The technique could be used successfully to analyze tegaserod maleate in its tablet formulation. PMID:20177456
Quantifying construction and demolition waste: An analytical review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Zezhou; Yu, Ann T.W., E-mail: bsannyu@polyu.edu.hk; Shen, Liyin
2014-09-15
Highlights: • Prevailing C&D waste quantification methodologies are identified and compared. • One specific methodology cannot fulfill all waste quantification scenarios. • A relevance tree for appropriate quantification methodology selection is proposed. • More attention should be paid to civil and infrastructural works. • Classified information is suggested for making an effective waste management plan. - Abstract: Quantifying construction and demolition (C&D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In the literature, various methods have been employed to quantify C&D waste generation at both regional and project levels. However, an integrated review that systematically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C&D waste quantification methodologies are identified, including the site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested.
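Of the six categories, the waste generation rate method reduces to multiplying floor area by empirical per-area rates; a sketch with made-up placeholder rates, not values from any study in the review:

```python
# Hypothetical per-material generation rates, kg of waste per m^2 built.
RATES_KG_PER_M2 = {
    "concrete": 32.0,
    "brick": 18.5,
    "timber": 4.2,
    "metal": 1.8,
}

def estimate_cd_waste(floor_area_m2, rates=RATES_KG_PER_M2):
    """Return (total_kg, per-material breakdown) for one project."""
    breakdown = {m: r * floor_area_m2 for m, r in rates.items()}
    return sum(breakdown.values()), breakdown

total_kg, by_material = estimate_cd_waste(1200.0)
```

The per-material breakdown is the "classified information" the highlights argue is needed for an effective waste management plan.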
Migheli, Francesca; Stoccoro, Andrea; Coppedè, Fabio; Wan Omar, Wan Adnan; Failli, Alessandra; Consolini, Rita; Seccia, Massimo; Spisni, Roberto; Miccoli, Paolo; Mathers, John C.; Migliore, Lucia
2013-01-01
There is increasing interest in the development of cost-effective techniques for the quantification of DNA methylation biomarkers. We analyzed 90 samples of surgically resected colorectal cancer tissues for APC and CDKN2A promoter methylation using methylation-sensitive high-resolution melting (MS-HRM) and pyrosequencing. MS-HRM is a less expensive technique than pyrosequencing but is usually more limited because it gives a range of methylation estimates rather than a single value. Here, we developed a method for deriving single estimates, rather than a range, of methylation using MS-HRM and compared the values obtained in this way with those obtained using the gold-standard quantitative method of pyrosequencing. We derived an interpolation curve using standards of known methylated/unmethylated ratio (0%, 12.5%, 25%, 50%, 75%, and 100% methylation) to obtain the best estimate of the extent of methylation for each of our samples. We observed similar profiles of methylation and a high correlation coefficient between the two techniques. Overall, our new approach allows MS-HRM to be used as a quantitative assay which provides results comparable with those obtained by pyrosequencing. PMID:23326336
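The interpolation-curve step can be sketched with NumPy; the normalized melting signals assigned to the standards below are invented for illustration.

```python
import numpy as np

# Standards of known % methylation and their (hypothetical) normalized
# MS-HRM signals; an unknown's signal is interpolated onto this curve to
# yield a single methylation estimate instead of a range.
std_methylation = np.array([0.0, 12.5, 25.0, 50.0, 75.0, 100.0])  # %
std_signal = np.array([0.02, 0.11, 0.23, 0.48, 0.74, 1.00])

def estimate_methylation(signal):
    # np.interp requires monotonically increasing x-values (the signal here).
    return float(np.interp(signal, std_signal, std_methylation))

single_estimate = estimate_methylation(0.17)  # falls between 12.5% and 25%
```

Piecewise-linear interpolation between the six standards is the simplest curve consistent with the description; the published calibration may use a different functional form.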
Censoring: a new approach for detection limits in total-reflection X-ray fluorescence
NASA Astrophysics Data System (ADS)
Pajek, M.; Kubala-Kukuś, A.; Braziewicz, J.
2004-08-01
It is shown that the detection limits in total-reflection X-ray fluorescence (TXRF), which restrict quantification of very low concentrations of trace elements in samples, can be accounted for using the statistical concept of censoring. We demonstrate that incomplete TXRF measurements containing so-called "nondetects", i.e. non-measured concentrations falling below the detection limits and represented by the estimated detection limit values, can be viewed as left random-censored data, which can be further analyzed using the Kaplan-Meier (KM) method to correct for nondetects. Within this approach, which uses the Kaplan-Meier product-limit estimator to obtain the cumulative distribution function corrected for nondetects, the mean and median of the detection-limit-censored concentrations can be estimated in a non-parametric way. Monte Carlo simulations show that the Kaplan-Meier approach yields highly accurate estimates of the mean and median, within a few percent of the simulated, uncensored data. This means that the uncertainties of the KM-estimated mean and median are in fact limited only by the number of studied samples and not by the correction procedure for nondetects itself. On the other hand, in cases where the concentration of a given element is not measured in all samples, simple approaches to estimating a mean concentration from the data yield erroneous, systematically biased results. The random left-censoring approach was applied to analyze TXRF detection-limit-censored concentration measurements of trace elements in biomedical samples. We emphasize that the Kaplan-Meier approach allows one to estimate mean concentrations that lie substantially below the mean level of the detection limits. Consequently, this approach offers a new way to lower the effective detection limits of the TXRF method, which is of prime interest for the investigation of metallic impurities on silicon wafers.
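The flipping trick at the heart of this kind of analysis can be sketched as follows: subtract every value from a constant larger than all of them, so left-censored nondetects become right-censored observations, run the standard Kaplan-Meier product-limit estimator, and flip the restricted mean back. The data are invented; the sketch also assumes the smallest observation is a detect, so the survival curve reaches zero.

```python
import numpy as np

def km_mean_left_censored(values, detected):
    """Nondetect-corrected mean via the flipped Kaplan-Meier estimator.

    detected[i] is False when values[i] is a detection limit (a nondetect).
    """
    values = np.asarray(values, float)
    detected = np.asarray(detected, bool)
    flip = values.max() + 1.0
    t = flip - values                 # left-censoring becomes right-censoring
    order = np.argsort(t)
    t, d = t[order], detected[order]
    n = len(t)
    surv, mean_t, prev = 1.0, 0.0, 0.0
    for i in range(n):
        if d[i]:                      # event: a genuinely detected concentration
            mean_t += surv * (t[i] - prev)  # accumulate area under S(t)
            surv *= 1.0 - 1.0 / (n - i)     # product-limit step-down
            prev = t[i]
    return flip - mean_t              # flip the restricted mean back

# Detected concentrations mixed with nondetects reported at varying limits.
values = [8.0, 6.0, 3.5, 9.5, 6.0, 2.0, 5.0, 2.5]
detected = [True, False, True, True, False, True, True, False]

km_est = km_mean_left_censored(values, detected)
naive = sum(values) / len(values)  # substituting each detection limit as-is
# km_est < naive: KM redistributes nondetect mass below the reported limits.
```

With no censoring the estimator reduces to the arithmetic mean, which is a convenient sanity check on any implementation.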
Lapidus, Nathanael; Chevret, Sylvie; Resche-Rigon, Matthieu
2014-12-30
Agreement between two assays is usually based on the concordance correlation coefficient (CCC), estimated from the means, standard deviations, and correlation coefficient of these assays. However, such data will often suffer from left-censoring because of lower limits of detection of these assays. To handle such data, we propose to extend a multiple imputation approach by chained equations (MICE) developed in a close setting of one left-censored assay. The performance of this two-step approach is compared with that of a previously published maximum likelihood estimation through a simulation study. Results show close estimates of the CCC by both methods, although the coverage is improved by our MICE proposal. An application to cytomegalovirus quantification data is provided. Copyright © 2014 John Wiley & Sons, Ltd.
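For reference, the CCC is a closed-form function of exactly the summary statistics named above; a minimal sketch on invented, uncensored paired assay values.

```python
import math

# Concordance correlation coefficient from the means (m), SDs (s), and
# Pearson correlation r of two assays measuring the same specimens:
#   CCC = 2*r*s1*s2 / (s1**2 + s2**2 + (m1 - m2)**2)
x = [2.1, 3.4, 4.0, 5.2, 6.3]  # assay 1 (illustrative values)
y = [2.0, 3.6, 4.1, 5.0, 6.8]  # assay 2

n = len(x)
m1, m2 = sum(x) / n, sum(y) / n
s1 = math.sqrt(sum((v - m1) ** 2 for v in x) / n)
s2 = math.sqrt(sum((v - m2) ** 2 for v in y) / n)
r = sum((a - m1) * (b - m2) for a, b in zip(x, y)) / (n * s1 * s2)

ccc = 2.0 * r * s1 * s2 / (s1 ** 2 + s2 ** 2 + (m1 - m2) ** 2)
# CCC <= |r|: agreement is penalized for location and scale shifts.
```

The censoring problem the paper addresses arises because these moments cannot be computed directly when values fall below the detection limits; the MICE step fills in the censored values before a formula like this is applied.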
MRMPlus: an open source quality control and assessment tool for SRM/MRM assay development.
Aiyetan, Paul; Thomas, Stefani N; Zhang, Zhen; Zhang, Hui
2015-12-12
Selected and multiple reaction monitoring involves monitoring a multiplexed assay of proteotypic peptides and associated transitions in mass spectrometry runs. To establish peptides and associated transitions as stable, quantifiable, and reproducible representatives of proteins of interest, experimental and analytical validation is required. However, inadequate and disparate analytical tools and validation methods predispose assay performance measures to errors and inconsistencies. Implemented as a freely available, open-source tool in the platform-independent Java programming language, MRMPlus computes analytical measures as recently recommended by the Clinical Proteomics Tumor Analysis Consortium Assay Development Working Group for "Tier 2" assays - that is, non-clinical assays sufficient to measure changes due to both biological and experimental perturbations. Computed measures include limit of detection, lower limit of quantification, linearity, carry-over, partial validation of specificity, and upper limit of quantification. MRMPlus streamlines the assay development analytical workflow and therefore minimizes error predisposition. MRMPlus may also be used for performance estimation of targeted assays not described by the Assay Development Working Group. MRMPlus' source code and compiled binaries can be freely downloaded from https://bitbucket.org/paiyetan/mrmplusgui and https://bitbucket.org/paiyetan/mrmplusgui/downloads, respectively.
Modeling qRT-PCR dynamics with application to cancer biomarker quantification.
Chervoneva, Inna; Freydin, Boris; Hyslop, Terry; Waldman, Scott A
2017-01-01
Quantitative reverse transcription polymerase chain reaction (qRT-PCR) is widely used for molecular diagnostics and evaluating prognosis in cancer. The utility of mRNA expression biomarkers relies heavily on the accuracy and precision of quantification, which is still challenging for low abundance transcripts. The critical step for quantification is accurate estimation of efficiency needed for computing a relative qRT-PCR expression. We propose a new approach to estimating qRT-PCR efficiency based on modeling dynamics of polymerase chain reaction amplification. In contrast, only models for fluorescence intensity as a function of polymerase chain reaction cycle have been used so far for quantification. The dynamics of qRT-PCR efficiency is modeled using an ordinary differential equation model, and the fitted ordinary differential equation model is used to obtain effective polymerase chain reaction efficiency estimates needed for efficiency-adjusted quantification. The proposed new qRT-PCR efficiency estimates were used to quantify GUCY2C (Guanylate Cyclase 2C) mRNA expression in the blood of colorectal cancer patients. Time to recurrence and GUCY2C expression ratios were analyzed in a joint model for survival and longitudinal outcomes. The joint model with GUCY2C quantified using the proposed polymerase chain reaction efficiency estimates provided clinically meaningful results for association between time to recurrence and longitudinal trends in GUCY2C expression.
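However the efficiency is estimated (here it would come from the fitted ODE model), it typically enters quantification through an efficiency-adjusted ratio of the Pfaffl form; a sketch with invented Ct values and efficiencies.

```python
def pfaffl_ratio(e_target, dct_target, e_ref, dct_ref):
    """Efficiency-adjusted expression ratio.

    e_* is the amplification base (2.0 = perfect doubling per cycle);
    dct_* = Ct(control) - Ct(treated) for the target and reference genes.
    """
    return (e_target ** dct_target) / (e_ref ** dct_ref)

# Hypothetical run: the target gene amplifies at 1.95x per cycle, the
# reference gene at 1.90x per cycle.
ratio = pfaffl_ratio(1.95, 24.0 - 21.0, 1.90, 22.5 - 22.0)
# A naive 2**(-ddCt) calculation would instead give 2**2.5, about 5.66.
```

The gap between the efficiency-adjusted and naive ratios is exactly why accurate efficiency estimation matters for low-abundance transcripts.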
Quantification of differential gene expression by multiplexed targeted resequencing of cDNA
Arts, Peer; van der Raadt, Jori; van Gestel, Sebastianus H.C.; Steehouwer, Marloes; Shendure, Jay; Hoischen, Alexander; Albers, Cornelis A.
2017-01-01
Whole-transcriptome or RNA sequencing (RNA-Seq) is a powerful and versatile tool for functional analysis of different types of RNA molecules, but sample reagent and sequencing cost can be prohibitive for hypothesis-driven studies where the aim is to quantify differential expression of a limited number of genes. Here we present an approach for quantification of differential mRNA expression by targeted resequencing of complementary DNA using single-molecule molecular inversion probes (cDNA-smMIPs) that enable highly multiplexed resequencing of cDNA target regions of ∼100 nucleotides and counting of individual molecules. We show that accurate estimates of differential expression can be obtained from molecule counts for hundreds of smMIPs per reaction and that smMIPs are also suitable for quantification of relative gene expression and allele-specific expression. Compared with low-coverage RNA-Seq and a hybridization-based targeted RNA-Seq method, cDNA-smMIPs are a cost-effective high-throughput tool for hypothesis-driven expression analysis in large numbers of genes (10 to 500) and samples (hundreds to thousands). PMID:28474677
Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification
Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...
Quorum Sensing Inhibitors for Staphylococcus aureus from Italian Medicinal Plants
Quave, Cassandra L.; Plano, Lisa R.W.; Bennett, Bradley C.
2010-01-01
Morbidity and mortality estimates due to methicillin-resistant Staphylococcus aureus (MRSA) infections continue to rise. Therapeutic options are limited by antibiotic resistance. Anti-pathogenic compounds, which inhibit quorum sensing (QS) pathways, may be a useful alternative to antibiotics. Staphylococcal QS is encoded by the agr locus and is responsible for the production of δ-hemolysin. Quantification of δ-hemolysin found in culture supernatants permits the analysis of agr activity at the translational, rather than transcriptional, level. We employed RP-HPLC techniques to investigate the anti-QS activity of 168 extracts from 104 Italian plants through quantification of δ-hemolysin. Extracts from three medicinal plants (Ballota nigra, Castanea sativa, and Sambucus ebulus) exhibited a dose-dependent response in the production of δ-hemolysin, indicating strong anti-QS activity in a pathogenic MRSA isolate. PMID:20645243
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michaelsen, Kelly; Krishnaswamy, Venkat; Pogue, Brian W.
2012-07-15
Purpose: Design optimization and phantom validation of an integrated digital breast tomosynthesis (DBT) and near-infrared spectral tomography (NIRST) system targeting improvement in sensitivity and specificity of breast cancer detection is presented. Factors affecting instrumentation design include minimization of cost, complexity, and examination time while maintaining high-fidelity NIRST measurements with sufficient information to recover accurate optical property maps. Methods: Reconstructed DBT slices from eight patients with abnormal mammograms provided anatomical information for the NIRST simulations. A limited frequency domain (FD) and extensive continuous wave (CW) NIRST system was modeled. The FD components provided tissue scattering estimates used in the reconstruction of the CW data. Scattering estimates were perturbed to study the effects on hemoglobin recovery. Breast-mimicking agar phantoms with inclusions were imaged using the combined DBT/NIRST system for comparison with simulation results. Results: Patient simulations derived from DBT images show successful reconstruction of both normal and malignant lesions in the breast. They also demonstrate the importance of accurately quantifying tissue scattering. Specifically, 20% errors in optical scattering resulted in 22.6% or 35.1% error in quantification of total hemoglobin concentration, depending on whether scattering was over- or underestimated, respectively. Limited frequency-domain optical signal sampling provided two-region scattering estimates (for fat and fibroglandular tissues) that reduced the error in hemoglobin concentration in the tumor region by 31% relative to using a single estimate of optical scattering throughout the breast volume of interest. Acquiring frequency-domain data with six wavelengths instead of three did not significantly improve the hemoglobin concentration estimates.
Simulation results were confirmed through experiments in two-region breast-mimicking gelatin phantoms. Conclusions: Accurate characterization of scattering is necessary for quantification of hemoglobin. Based on this study, a system design is described to optimally combine breast tomosynthesis with NIRST.
Eigenspace perturbations for structural uncertainty estimation of turbulence closure models
NASA Astrophysics Data System (ADS)
Jofre, Lluis; Mishra, Aashwin; Iaccarino, Gianluca
2017-11-01
With the present state of computational resources, purely numerical resolution of the turbulent flows encountered in engineering applications is not viable. Consequently, investigations into turbulence rely on various degrees of modeling. Archetypal among these variable-resolution approaches are two-equation RANS closures and subgrid-scale models in LES. However, owing to the simplifications introduced during model formulation, the fidelity of all such models is limited, and explicit quantification of the predictive uncertainty is therefore essential. In such a scenario, the ideal uncertainty estimation procedure must be agnostic to modeling resolution, methodology, and the nature or level of the model filter. The procedure should give reliable prediction intervals for different quantities of interest, over varied flows and flow conditions, and at diametric levels of modeling resolution. In this talk, we present and substantiate the eigenspace perturbation framework as an uncertainty estimation paradigm that meets these criteria. Commencing from a broad overview, we outline the details of the framework at different modeling resolutions. Then, using benchmark flows along with engineering problems, the efficacy of the procedure is established. This research was partially supported by NNSA under the Predictive Science Academic Alliance Program (PSAAP) II, and by DARPA under the Enabling Quantification of Uncertainty in Physical Systems (EQUiPS) project (technical monitor: Dr Fariba Fahroo).
A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data
He, Jingjing; Ran, Yunmeng; Liu, Bin; Yang, Jinsong; Guan, Xuefei
2017-01-01
This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of constructing a baseline quantification model from finite element simulation data and updating it with limited Lamb wave data from the target structure. The baseline model correlates two proposed damage-sensitive features, the normalized amplitude and the phase change, with crack length through a response surface model. The two features are extracted from the first received S0-mode wave package, and the baseline model parameters are estimated from finite element simulation data. To account for uncertainties in numerical modeling, geometry, material, and manufacturing between the baseline model and the target structure, a Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method are demonstrated under different loading and damage conditions. PMID:28902148
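The fit-then-update workflow above can be sketched with a conjugate normal update of a single baseline-model parameter. The one-parameter model and all numbers below are illustrative assumptions, not the paper's response surface model:

```python
# Sketch: Bayesian update of one baseline-model parameter (e.g., a response-
# surface coefficient calibrated on simulations) with a few measurements from
# the target structure. Conjugate normal-normal update; values illustrative.

def bayes_update(prior_mean, prior_var, data, noise_var):
    """Posterior mean/variance of a normal mean with known noise variance."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + sum(data) / noise_var)
    return post_mean, post_var

# Simulation-calibrated prior vs. three target-structure measurements:
post_mean, post_var = bayes_update(prior_mean=1.0, prior_var=0.04,
                                   data=[1.18, 1.22, 1.20], noise_var=0.01)
print(post_mean, post_var)
```

The posterior mean lands between the simulation-based prior and the measured values, weighted by their variances, which is the essence of correcting a simulation baseline with sparse experimental data.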
Lehahn, Yoav; Koren, Ilan; Schatz, Daniella; Frada, Miguel; Sheyn, Uri; Boss, Emmanuel; Efrati, Shai; Rudich, Yinon; Trainic, Miri; Sharoni, Shlomit; Laber, Christian; DiTullio, Giacomo R; Coolen, Marco J L; Martins, Ana Maria; Van Mooy, Benjamin A S; Bidle, Kay D; Vardi, Assaf
2014-09-08
Phytoplankton blooms are ephemeral events of exceptionally high primary productivity that regulate the flux of carbon across marine food webs [1-3]. Quantification of bloom turnover [4] is limited by the fundamental difficulty of decoupling physical and biological processes as observed by ocean color satellite data. This limitation hinders the quantification of bloom demise and its regulation by biological processes [5, 6], which has important consequences for the efficiency of the biological pump of carbon to the deep ocean [7-9]. Here, we address this challenge and quantify algal bloom turnover using a combination of satellite and in situ data, which allows identification of a relatively stable oceanic patch that is subject to little mixing with its surroundings. Using a newly developed multisatellite Lagrangian diagnostic, we decouple the contributions of physical and biological processes, allowing quantification of a complete life cycle of a mesoscale (∼10-100 km) bloom of coccolithophores in the North Atlantic, from exponential growth to rapid demise. We estimate the amount of organic carbon produced during the bloom to be on the order of 24,000 tons, of which two-thirds were turned over within 1 week. Complementary in situ measurements of the same patch revealed high levels of specific viruses infecting coccolithophore cells, pointing to viral infection as a possible mortality agent. Application of the newly developed satellite-based approaches opens the way for large-scale quantification of the impact of diverse environmental stresses on the fate of phytoplankton blooms and derived carbon in the ocean. Copyright © 2014 Elsevier Ltd. All rights reserved.
Surface smoothness: cartilage biomarkers for knee OA beyond the radiologist
NASA Astrophysics Data System (ADS)
Tummala, Sudhakar; Dam, Erik B.
2010-03-01
Fully automatic imaging biomarkers may allow quantification of patho-physiological processes that a radiologist would not be able to assess reliably. This can introduce new insight but is problematic to validate due to lack of meaningful ground truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method used on tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations, and in addition provide a diagnostic marker superior to the evaluated semi-manual markers.
NASA Astrophysics Data System (ADS)
Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen
2017-06-01
This paper presents a Bayesian approach using the Metropolis-Hastings Markov chain Monte Carlo algorithm and applies it to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower credible interval than the MLE confidence interval and thus a more precise estimate by using the related information from regional gage stations. The Bayesian MCMC method may therefore be more favorable for uncertainty analysis and risk management.
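A minimal sketch of the kind of comparison described above: a Metropolis-Hastings sampler for a mean flow rate against the MLE. The synthetic data, the known noise scale, and the flat prior are all assumptions; the paper's hydrological model is far richer:

```python
import math
import random
import statistics

# Sketch: Metropolis-Hastings sampling of a mean flow rate vs. the MLE.
# Toy normal data with known spread and a flat prior; all values illustrative.
random.seed(0)
data = [random.gauss(120.0, 15.0) for _ in range(200)]  # synthetic daily flows
sigma = 15.0                                            # assumed known spread

def log_lik(mu):
    return -sum((x - mu) ** 2 for x in data) / (2.0 * sigma ** 2)

mu, samples = 100.0, []
for i in range(5000):
    prop = mu + random.gauss(0.0, 2.0)                  # random-walk proposal
    if random.random() < math.exp(min(0.0, log_lik(prop) - log_lik(mu))):
        mu = prop
    if i >= 1000:                                       # discard burn-in
        samples.append(mu)

mle = statistics.fmean(data)        # MLE of a normal mean = the sample mean
post = statistics.fmean(samples)
s = sorted(samples)
lo, hi = s[100], s[-101]            # approximate 95% credible bounds
print(round(mle, 1), round(post, 1), round(lo, 1), round(hi, 1))
```

With a flat prior the posterior centers on the MLE; the point of the paper's richer model is that regional information tightens the credible interval beyond what the MLE confidence interval offers.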
NASA Astrophysics Data System (ADS)
Prudhomme, G.; Berthe, L.; Bénier, J.; Bozier, O.; Mercier, P.
2017-01-01
Photonic Doppler Velocimetry (PDV) is a versatile, plug-and-play diagnostic used in dynamic physics experiments to measure velocities. When signals are analyzed using a short-time Fourier transform, multiple velocities can be distinguished: for example, the velocities of a moving particle cloud appear on spectrograms. In order to estimate the back-scattering fluxes of the target, we propose an original approach, "PDV radiometric analysis", resulting in an expression of time-velocity spectrograms coded in power units. Experiments involving micron-sized particles raise the issue of the detection limit; the particle-size limit is very difficult to evaluate. From the quantification of noise sources, we derive an estimate of the spectrogram noise, leading to a detectivity limit that may be compared to the fraction of the incoming power back-scattered by the particle and collected by the probe; this fraction increases with particle size. Finally, results from laser-shock accelerated particles using two different PDV systems are compared, showing the improvement of detectivity with the Effective Number of Bits (ENOB) of the digitizer.
Cai, Yicun; He, Yuping; Lv, Rong; Chen, Hongchao; Wang, Qiang; Pan, Liangwen
2017-01-01
Meat products often consist of meat from multiple animal species, and adulteration or mislabeling of food products can negatively affect consumers. Therefore, a cost-effective and reliable method for identification and quantification of animal species in meat products is required. In this study, we developed a duplex droplet digital PCR (dddPCR) detection and quantification system to simultaneously identify and quantify the source of meat in samples containing a mixture of beef (Bos taurus) and pork (Sus scrofa) in a single digital PCR reaction tube. Mixed meat samples of known composition were used to test the accuracy and applicability of this method. The limit of detection (LOD) and the limit of quantification (LOQ) of the system were also determined. We conclude that our dddPCR detection and quantification system is suitable for quality control and routine analyses of meat products.
Honda, Akira; Yamashita, Kouwa; Ikegami, Tadashi; Hara, Takashi; Miyazaki, Teruo; Hirayama, Takeshi; Numazawa, Mitsuteru; Matsuzaki, Yasushi
2009-01-01
We describe a new sensitive and specific method for the quantification of serum malonate (malonic acid, MA), which could be a new biomarker for de novo lipogenesis (fatty acid synthesis). The method is based on a stable isotope-dilution technique using LC-MS/MS. MA from 50 μl of serum was derivatized into di-(1-methyl-3-piperidinyl)malonate (DMP-MA) and quantified by LC-MS/MS in positive electrospray ionization mode. The detection limit for DMP-MA was approximately 4.8 fmol (500 fg) (signal-to-noise ratio = 10), more than 100 times more sensitive than quantification of MA by LC-MS/MS in negative electrospray ionization mode. The relative standard deviations between sample preparations and between measurements were 4.4% and 3.2%, respectively, by one-way ANOVA. Recovery experiments were performed on 50 μl aliquots of normal human serum spiked with 9.6 pmol (1 ng) to 28.8 pmol (3 ng) of MA and were validated by orthogonal regression analysis. The estimated amount within the 95% confidence limit was 14.1 ± 1.1 pmol, in complete agreement with the observed mean X̄₀ = 15.0 ± 0.6 pmol, with a mean recovery of 96.0%. This method provides reliable and reproducible results for the quantification of MA in human serum. PMID:19403942
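The isotope-dilution arithmetic behind such a method can be sketched as follows; the peak areas, spike amount, and unit response factor below are illustrative, not values from the study:

```python
# Sketch of stable-isotope-dilution quantification: the analyte amount is
# inferred from the analyte/internal-standard peak-area ratio and the known
# amount of labeled standard spiked into the sample. Values illustrative.

def isotope_dilution_amount(area_analyte, area_standard, spiked_pmol,
                            response_factor=1.0):
    """Analyte amount (pmol) from a peak-area ratio vs. a labeled standard."""
    return (area_analyte / area_standard) * spiked_pmol / response_factor

# Hypothetical serum aliquot spiked with 20 pmol of labeled malonate:
amount = isotope_dilution_amount(area_analyte=7.5e4, area_standard=1.0e5,
                                 spiked_pmol=20.0)
print(amount)   # → 15.0 (pmol of analyte in the aliquot)
```

Because the labeled standard co-elutes and ionizes like the analyte, matrix effects largely cancel in the ratio, which is what makes the technique robust.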
NASA Astrophysics Data System (ADS)
Pegion, K.; DelSole, T. M.; Becker, E.; Cicerone, T.
2016-12-01
Predictability represents the upper limit of prediction skill if we had an infinite member ensemble and a perfect model. It is an intrinsic limit of the climate system associated with the chaotic nature of the atmosphere. Producing a forecast system that can make predictions very near to this limit is the ultimate goal of forecast system development. Estimates of predictability together with calculations of current prediction skill are often used to define the gaps in our prediction capabilities on subseasonal to seasonal timescales and to inform the scientific issues that must be addressed to build the next forecast system. Quantification of the predictability is also important for providing a scientific basis for relaying to stakeholders what kind of climate information can be provided to inform decision-making and what kind of information is not possible given the intrinsic predictability of the climate system. One challenge with predictability estimates is that different prediction systems can give different estimates of the upper limit of skill. How do we know which estimate of predictability is most representative of the true predictability of the climate system? Previous studies have used the spread-error relationship and the autocorrelation to evaluate the fidelity of the signal and noise estimates. Using a multi-model ensemble prediction system, we can quantify whether these metrics accurately indicate an individual model's ability to properly estimate the signal, noise, and predictability. We use this information to identify the best estimates of predictability for 2-meter temperature, precipitation, and sea surface temperature from the North American Multi-model Ensemble and compare with current skill to indicate the regions with potential for improving skill.
Inverse models: A necessary next step in ground-water modeling
Poeter, E.P.; Hill, M.C.
1997-01-01
Inverse models using, for example, nonlinear least-squares regression, provide capabilities that help modelers take full advantage of the insight available from ground-water models. However, lack of information about the requirements and benefits of inverse models is an obstacle to their widespread use. This paper presents a simple ground-water flow problem to illustrate the requirements and benefits of the nonlinear least-squares regression method of inverse modeling and discusses how these attributes apply to field problems. The benefits of inverse modeling include: (1) expedited determination of best-fit parameter values; (2) quantification of (a) the quality of calibration, (b) data shortcomings and needs, and (c) confidence limits on parameter estimates and predictions; and (3) identification of issues that are easily overlooked during nonautomated calibration.
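A minimal sketch of nonlinear least-squares inversion with approximate confidence limits on the parameters. The exponential toy model below stands in for a ground-water model; it is not from the paper:

```python
import numpy as np

# Sketch: calibrate y = a * exp(-b * t) by Gauss-Newton and report
# approximate confidence limits from the Jacobian. Model and data are
# illustrative stand-ins for a ground-water flow model.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 30)
y = 10.0 * np.exp(-0.7 * t) + rng.normal(0.0, 0.05, t.size)

# Initialize from a log-linear fit, then refine with Gauss-Newton.
slope, intercept = np.polyfit(t, np.log(y), 1)
a, b = float(np.exp(intercept)), float(-slope)
for _ in range(20):
    f = a * np.exp(-b * t)
    J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])
    step, *_ = np.linalg.lstsq(J, y - f, rcond=None)
    a, b = a + step[0], b + step[1]

f = a * np.exp(-b * t)
J = np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])
resid = y - f
sigma2 = resid @ resid / (t.size - 2)        # residual variance
cov = sigma2 * np.linalg.inv(J.T @ J)        # approximate parameter covariance
se = np.sqrt(np.diag(cov))
print(round(a, 2), round(b, 2))              # best-fit parameters
print(np.round(a + 1.96 * se[0] * np.array([-1, 1]), 2))  # ~95% limits on a
```

The parameter covariance from the Jacobian is exactly the kind of by-product the abstract lists under benefit (2c): confidence limits fall out of the regression rather than requiring separate analysis.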
Leveraging transcript quantification for fast computation of alternative splicing profiles.
Alamancos, Gael P; Pagès, Amadís; Trincado, Juan L; Bellora, Nicolás; Eyras, Eduardo
2015-09-01
Alternative splicing plays an essential role in many cellular processes and bears major relevance in the understanding of multiple diseases, including cancer. High-throughput RNA sequencing allows genome-wide analyses of splicing across multiple conditions. However, the increasing number of available data sets represents a major challenge in terms of computation time and storage requirements. We describe SUPPA, a computational tool to calculate relative inclusion values of alternative splicing events, exploiting fast transcript quantification. SUPPA accuracy is comparable and sometimes superior to standard methods using simulated as well as real RNA-sequencing data compared with experimentally validated events. We assess the variability in terms of the choice of annotation and provide evidence that using complete transcripts rather than more transcripts per gene provides better estimates. Moreover, SUPPA coupled with de novo transcript reconstruction methods does not achieve accuracies as high as using quantification of known transcripts, but remains comparable to existing methods. Finally, we show that SUPPA is more than 1000 times faster than standard methods. Coupled with fast transcript quantification, SUPPA provides inclusion values at a much higher speed than existing methods without compromising accuracy, thereby facilitating the systematic splicing analysis of large data sets with limited computational resources. The software is implemented in Python 2.7 and is available under the MIT license at https://bitbucket.org/regulatorygenomicsupf/suppa. © 2015 Alamancos et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.
Two-dimensional grid-free compressive beamforming.
Yang, Yang; Chu, Zhigang; Xu, Zhongming; Ping, Guoli
2017-08-01
Compressive beamforming realizes the direction-of-arrival (DOA) estimation and strength quantification of acoustic sources by solving an underdetermined system of equations relating microphone pressures to a source distribution via compressive sensing. The conventional method assumes DOAs of sources to lie on a grid. Its performance degrades due to basis mismatch when the assumption is not satisfied. To overcome this limitation for the measurement with plane microphone arrays, a two-dimensional grid-free compressive beamforming is developed. First, a continuum based atomic norm minimization is defined to denoise the measured pressure and thus obtain the pressure from sources. Next, a positive semidefinite programming is formulated to approximate the atomic norm minimization. Subsequently, a reasonably fast algorithm based on alternating direction method of multipliers is presented to solve the positive semidefinite programming. Finally, the matrix enhancement and matrix pencil method is introduced to process the obtained pressure and reconstruct the source distribution. Both simulations and experiments demonstrate that under certain conditions, the grid-free compressive beamforming can provide high-resolution and low-contamination imaging, allowing accurate and fast estimation of two-dimensional DOAs and quantification of source strengths, even with non-uniform arrays and noisy measurements.
Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE
NASA Astrophysics Data System (ADS)
Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.
2015-12-01
Groundwater change is difficult to monitor over large scales. One of the most successful approaches is in the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable vs. time-constant errors, across-scale vs. across-model errors, and error spectral content (across scales and across models).
More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for more fair use in management applications and for better integration of GRACE-based measurements with observations from other sources.
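The storage-component bookkeeping behind such groundwater estimates, with an ensemble spread as a simple uncertainty measure, can be sketched as follows. All anomaly values are illustrative, in cm of equivalent water height:

```python
import statistics

# Sketch: groundwater anomaly = GRACE total water storage anomaly minus
# modeled soil moisture, snow, and surface-water anomalies. An ensemble of
# land-surface models gives a spread-based uncertainty. Values illustrative.
tws_anomaly = -4.2                     # GRACE TWS anomaly (cm EWH)
snow, surface_water = 0.3, -0.5        # modeled snow and surface-water terms

soil_moisture_ensemble = [-1.8, -2.4, -2.1, -1.5]   # four hypothetical LSMs
gw = [tws_anomaly - sm - snow - surface_water for sm in soil_moisture_ensemble]

gw_mean = statistics.fmean(gw)
gw_spread = statistics.pstdev(gw)      # ensemble spread as uncertainty proxy
print(round(gw_mean, 2), round(gw_spread, 2))
```

The ensemble spread captures only the across-model component of the error budget; the abstract's point is that a complete quantification must also fold in GRACE processing and time-variable errors.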
Quantifying construction and demolition waste: an analytical review.
Wu, Zezhou; Yu, Ann T W; Shen, Liyin; Liu, Guiwen
2014-09-01
Quantifying construction and demolition (C&D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In literature, various methods have been employed to quantify the C&D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C&D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested. Copyright © 2014 Elsevier Ltd. All rights reserved.
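Among the methods catalogued above, the waste generation rate method reduces to simple per-unit arithmetic; a sketch with hypothetical rates (not values from the review):

```python
# Sketch of the waste generation rate method: C&D waste = activity size ×
# empirical per-unit generation rate. Rates here are hypothetical.
generation_rate = {          # kg of waste per m^2 of gross floor area
    "new_construction": 40.0,
    "demolition": 1300.0,
}

def estimate_waste_kg(activity, floor_area_m2):
    return generation_rate[activity] * floor_area_m2

total = (estimate_waste_kg("new_construction", 5000.0)
         + estimate_waste_kg("demolition", 800.0))
print(total / 1000.0)   # → 1240.0 (tonnes)
```

The method's accuracy hinges entirely on how representative the empirical rates are, which is why the review compares it against site-visit and variables-modelling alternatives.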
NASA Astrophysics Data System (ADS)
Liguori, Sara; O'Loughlin, Fiachra; Souvignet, Maxime; Coxon, Gemma; Freer, Jim; Woods, Ross
2014-05-01
This research presents a newly developed observed sub-daily gridded precipitation product for England and Wales. Importantly, our analysis specifically allows quantification of rainfall errors from the grid to the catchment scale, useful for hydrological model simulation and the evaluation of prediction uncertainties. Our methodology involves the disaggregation of the current one-kilometre daily gridded precipitation records available for the United Kingdom [1]. The hourly product is created using information from (1) 2000 tipping-bucket rain gauges and (2) the United Kingdom Met Office weather radar network. These two independent datasets provide rainfall estimates at temporal resolutions much finer than the current daily gridded rainfall product, thus allowing the disaggregation of the daily rainfall records to an hourly timestep. Our analysis is conducted for the period 2004 to 2008, limited by the current availability of the datasets. We analyse the uncertainty components affecting the accuracy of this product, specifically exploring how these uncertainties vary spatially, temporally, and with climatic regime. Preliminary results indicate scope for improvement of hydrological model performance through the use of this new hourly gridded rainfall product. Such a product will improve our ability to diagnose and identify structural errors in hydrological modelling by including the quantification of input errors. References: [1] Keller V, Young AR, Morris D, Davies H (2006) Continuous Estimation of River Flows. Technical Report: Estimation of Precipitation Inputs. Environment Agency.
Hamidi, Dachriyanus; Aulia, Hilyatul; Susanti, Meri
2017-01-01
Garcinia cowa is a medicinal plant widely grown in Southeast Asia and other tropical countries. Various parts of this plant have been used in traditional folk medicine: the bark, latex, and root as antipyretic agents, and the fruit and leaves as an expectorant, for indigestion, and for improvement of blood circulation. This study aims to determine the concentration of rubraxanthone in the ethyl acetate extract of the stem bark of G. cowa by high-performance thin-layer chromatography (HPTLC). The HPTLC method was performed on precoated silica gel G 60 F254 plates using a mobile phase of chloroform:ethyl acetate:methanol:formic acid (86:6:3:5). A volume of 5 μL of standard and sample solutions was applied to the chromatographic plates. The plates were developed in the saturated mode of a twin-trough chamber at room temperature. The method was validated for linearity, accuracy, precision, limit of detection (LOD), limit of quantification (LOQ), and specificity. The spots were observed at ultraviolet 243 nm. The linearity of rubraxanthone was obtained between 52.5 and 157.5 ppm/spot. The LOD and LOQ were found to be 4.03 and 13.42 ppm/spot, respectively. The proposed method showed good linearity, precision, accuracy, and high sensitivity, and may therefore be applied for the quantification of rubraxanthone in the ethyl acetate extract of the stem bark of G. cowa. The HPTLC method provides rapid qualitative and quantitative estimation of rubraxanthone as a marker compound in G. cowa extract used for commercial products; rubraxanthone in ethyl acetate extracts of G. cowa was successfully quantified using this method. Abbreviations used: TLC: thin-layer chromatography; HPTLC: high-performance thin-layer chromatography; LOD: limit of detection; LOQ: limit of quantification; ICH: International Conference on Harmonization.
den Braver, Michiel W; Vermeulen, Nico P E; Commandeur, Jan N M
2017-03-01
Modification of cellular macromolecules by reactive drug metabolites is considered to play an important role in the initiation of tissue injury by many drugs. Detection and identification of reactive intermediates is often performed by analyzing the conjugates formed after trapping by glutathione (GSH). Although the sensitivity of modern mass spectrometric methods is extremely high, absolute quantification of GSH conjugates depends critically on the availability of authentic reference standards. Although ¹H NMR is currently the method of choice for quantification of biosynthetically formed metabolites, its intrinsically low sensitivity can be a limiting factor for GSH conjugates, which are generally formed at low levels. In the present study, a simple but sensitive and generic method for absolute quantification of GSH conjugates is presented. The method is based on quantitative alkaline hydrolysis of GSH conjugates and subsequent quantification of glutamic acid and glycine by HPLC after precolumn derivatization with o-phthaldialdehyde/N-acetylcysteine (OPA/NAC). Because of the lower stability of the glycine OPA/NAC derivative, quantification of the glutamic acid OPA/NAC derivative appeared most suitable for quantifying GSH conjugates. The novel method was used to quantify the GSH conjugates of diclofenac, clozapine, and acetaminophen; quantification was consistent with ¹H NMR but with a more than 100-fold lower detection limit. Copyright © 2017. Published by Elsevier B.V.
Persistence behavior of metamifop and its metabolite in rice ecosystem.
Barik, Suhrid Ranjan; Ganguly, Pritam; Patra, Sandip; Dutta, Swaraj Kumar; Goon, Arnab; Bhattacharyya, Anjan
2018-02-01
A field experiment was conducted to determine the persistence of metamifop in transplanted rice crop over two seasons. Metamifop 10% EC was applied at two doses, 100 g a.i. ha⁻¹ and 200 g a.i. ha⁻¹, at the 2-3 leaf stage of Echinochloa crus-galli. The residues of metamifop along with its major metabolite, N-(2-fluorophenyl)-2-hydroxy-N-methylpropionamide (HFMPA), were estimated in rice plant, field water and soil using liquid chromatography-mass spectrometry. The limit of detection and limit of quantification of the method for both compounds were set at 0.003 μg g⁻¹ and 0.010 μg g⁻¹, respectively. Metamifop was less persistent in field water and rice plant than in soil samples. Presence of HFMPA was recorded in rice plant and soil. Both compounds were below the limit of quantification in harvest samples of straw, grains, husk and soil. A safe waiting period of 52 d was suggested for harvesting of rice when metamifop was applied at 100 g a.i. ha⁻¹ (the recommended dose). Copyright © 2017 Elsevier Ltd. All rights reserved.
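Persistence studies of this kind commonly fit first-order dissipation kinetics to the residue data; the half-life and a waiting period (time for fitted residues to decay to the LOQ) then follow directly. A sketch under that standard model, with hypothetical residue values (not the study's data):

```python
import numpy as np

# Hypothetical dissipation data: days after application vs. residue (μg/g)
days = np.array([0.0, 3.0, 7.0, 15.0, 30.0, 45.0])
residue = np.array([1.20, 0.85, 0.52, 0.21, 0.045, 0.012])

# First-order model: ln C(t) = ln C0 - k t, fitted by least squares
slope, ln_c0 = np.polyfit(days, np.log(residue), 1)
k = -slope                       # dissipation rate constant (1/day)
half_life = np.log(2) / k        # days

# Waiting period: time until the fitted residue falls to the LOQ (0.010 μg/g)
loq = 0.010
waiting_period = (ln_c0 - np.log(loq)) / k
```

Regulatory waiting periods are usually rounded up from such a fit and adjusted with safety considerations, so the fitted number is a starting point, not the recommendation itself.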
The Challenges of Credible Thermal Protection System Reliability Quantification
NASA Technical Reports Server (NTRS)
Green, Lawrence L.
2013-01-01
The paper discusses several of the challenges associated with developing a credible reliability estimate for a human-rated crew capsule thermal protection system. The process of developing such a credible estimate is subject to the quantification, modeling and propagation of numerous uncertainties within a probabilistic analysis. The development of specific investment recommendations, to improve the reliability prediction, among various potential testing and programmatic options is then accomplished through Bayesian analysis.
NASA Astrophysics Data System (ADS)
Tajcmanova, L.; Moulas, E.; Vrijmoed, J.; Podladchikov, Y.
2016-12-01
Estimation of pressure-temperature (P-T) conditions from petrographic observations in metamorphic rocks has become common practice in petrology over the last 50 years. These data often serve as a key input in geodynamic reconstructions and thus directly influence our understanding of lithospheric processes. Such an approach might have led the metamorphic geology field to a certain level of quiescence. In the classical view of metamorphic quantification approaches, fast viscous relaxation (and therefore constant pressure across the rock microstructure) is assumed, with chemical diffusion being the limiting factor in equilibration. Recently, we have focused on the other possible scenario - fast chemical diffusion and slow viscous relaxation - which brings an alternative interpretation of chemical zoning found in high-grade rocks. The aim has been to provide insight into the role of mechanically maintained pressure variations on multi-component chemical zoning in minerals. Furthermore, we used the pressure information from the mechanically controlled microstructure for rheological constraints. We show an unconventional way of relating direct microstructural observations in rocks to the nonlinearity of rheology at time scales unattainable by laboratory measurements. Our analysis documents that mechanically controlled microstructures that have been preserved over geological times can be used to deduce flow-law parameters and in turn estimate stress levels of minerals in their natural environment. The development of the new quantification approaches has opened new horizons in understanding phase transformations in the Earth's lithosphere. Furthermore, the new data generated can serve as food for thought for the next generation of fully coupled numerical codes that involve reacting materials while respecting conservation of mass, momentum and energy.
Normal Databases for the Relative Quantification of Myocardial Perfusion
Rubeaux, Mathieu; Xu, Yuan; Germano, Guido; Berman, Daniel S.; Slomka, Piotr J.
2016-01-01
Purpose of review Myocardial perfusion imaging (MPI) with SPECT is performed clinically worldwide to detect and monitor coronary artery disease (CAD). MPI allows an objective quantification of myocardial perfusion at stress and rest. This established technique relies on normal databases to compare patient scans against reference normal limits. In this review, we aim to introduce the process of MPI quantification with normal databases and describe the associated perfusion quantitative measures that are used. Recent findings New equipment and new software reconstruction algorithms have been introduced which require the development of new normal limits. The appearance and regional count variations of normal MPI scans may differ between these new scanners and standard Anger cameras. Therefore, these new systems may require the determination of new normal limits to achieve optimal accuracy in relative myocardial perfusion quantification. Accurate diagnostic and prognostic results rivaling those obtained by expert readers can be obtained by this widely used technique. Summary Throughout this review, we emphasize the importance of the different normal databases and the need for specific databases relative to distinct imaging procedures. Use of appropriate normal limits allows optimal quantification of MPI by taking into account subtle image differences due to the hardware and software used, and the population studied. PMID:28138354
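At its core, database-driven relative quantification compares each patient segment (or voxel) against the normal database's mean and variability, flagging regions beyond a threshold. A toy sketch of that comparison on a 17-segment polar map; the numbers and the 2.5 SD cutoff are illustrative assumptions, not values from this review:

```python
import numpy as np

# Hypothetical normal database: mean and SD of normalized counts per segment
normal_mean = np.full(17, 75.0)
normal_sd = np.full(17, 8.0)

# Hypothetical patient stress scan with reduced counts in three segments
patient = normal_mean.copy()
patient[:3] = 40.0

# Per-segment deviation from the normal database
z = (patient - normal_mean) / normal_sd
abnormal = z < -2.5                    # beyond the normal limit
extent_pct = 100.0 * abnormal.mean()   # fraction of myocardium flagged abnormal
```

Clinical packages add count normalization, gender/camera-specific databases, and severity scores on top of this per-segment comparison, but the z-score against normal limits is the common backbone.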
Ahamad, Javed; Amin, Saima; Mir, Showkat R
2015-08-01
Gymnemic acid and charantin are well-established antidiabetic phytosterols found in Gymnema sylvestre and Momordica charantia, respectively. The fact that these plants are often used together in antidiabetic poly-herbal formulations lured us to develop an HPTLC densitometric method for the simultaneous quantification of their bioactive compounds. Indirect estimation of gymnemic acid as gymnemagenin and charantin as β-sitosterol after hydrolysis has been proposed. Aluminum-backed silica gel 60 F254 plates (20 × 10 cm) were used as stationary phase and toluene-ethyl acetate-methanol-formic acid (60:20:15:5, v/v) as mobile phase. Developed chromatogram was scanned at 550 nm after derivatization with modified vanillin-sulfuric acid reagent. Regression analysis of the calibration data showed an excellent linear relationship between peak area versus concentration of the analytes. Linearity was found to be in the range of 500-2,500 and 100-500 ng/band for gymnemagenin and β-sitosterol, respectively. The suitability of the developed HPTLC method for simultaneous estimation of analytes was established by validating it as per the ICH guidelines. The limits of detection and quantification for gymnemagenin were found to be ≈60 and ≈190 ng/band, and those for β-sitosterol ≈30 and ≈90 ng/band, respectively. The developed method was found to be linear (r² = 0.9987 and 0.9943), precise (relative standard deviation <1.5 and <2% for intra- and interday precision) and accurate (mean recovery ranged between 98.43-101.44 and 98.68-100.20%) for gymnemagenin and β-sitosterol, respectively. The proposed method was also found specific and robust for quantification of both the analytes and was successfully applied to herbal drugs and in-house herbal formulation without any interference. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Lowering the quantification limit of the Qubit™ RNA HS assay using RNA spike-in.
Li, Xin; Ben-Dov, Iddo Z; Mauro, Maurizio; Williams, Zev
2015-05-06
RNA quantification is often a prerequisite for most RNA analyses such as RNA sequencing. However, the relatively low sensitivity and large sample consumption of traditional RNA quantification methods such as UV spectrophotometry, and even the much more sensitive fluorescence-based RNA quantification assays such as the Qubit™ RNA HS Assay, are often inadequate for measuring minute levels of RNA isolated from limited cell and tissue samples and biofluids. Thus, there is a pressing need for a more sensitive method to reliably and robustly detect trace levels of RNA without interference from DNA. To improve the quantification limit of the Qubit™ RNA HS Assay, we spiked in a known quantity of RNA to achieve the minimum reading required by the assay. Samples containing trace amounts of RNA were then added to the spike-in and measured as a reading increase over the RNA spike-in baseline. We determined the accuracy and precision of reading increases between 1 and 20 pg/μL as well as RNA-specificity in this range, and compared them to those of RiboGreen®, another sensitive fluorescence-based RNA quantification assay. We then applied the Qubit™ Assay with RNA spike-in to quantify plasma RNA samples. RNA spike-in improved the quantification limit of the Qubit™ RNA HS Assay 5-fold, from 25 pg/μL down to 5 pg/μL, while maintaining high specificity to RNA. This enabled quantification of RNA with original concentration as low as 55.6 pg/μL compared to 250 pg/μL for the standard assay, and decreased sample consumption from 5 to 1 ng. Plasma RNA samples that were not measurable by the Qubit™ RNA HS Assay were measurable by our modified method. The Qubit™ RNA HS Assay with RNA spike-in is able to quantify RNA with high specificity at 5-fold lower concentration and uses 5-fold less sample quantity than the standard Qubit™ Assay.
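The arithmetic behind the spike-in approach is simple: the sample's contribution is the reading increase over the spike-in baseline, scaled back by the dilution of the sample aliquot into the assay tube. A sketch of that back-calculation; the 200 μL assay volume and the example volumes are illustrative assumptions, not protocol values from the paper:

```python
def sample_conc_pg_per_ul(reading_with_sample, spikein_baseline,
                          sample_vol_ul, assay_vol_ul=200.0):
    """Estimate the original sample RNA concentration (pg/μL) from the
    reading increase over the RNA spike-in baseline, correcting for the
    dilution of the sample aliquot into the assay tube."""
    increase_in_tube = reading_with_sample - spikein_baseline
    return increase_in_tube * assay_vol_ul / sample_vol_ul

# e.g. a 5 pg/μL in-tube increase over baseline from an 18 μL sample aliquot
conc = sample_conc_pg_per_ul(105.0, 100.0, 18.0)
```

Note that the achievable original-concentration limit scales inversely with the aliquot volume, which is why larger sample inputs push the limit down.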
Tracking unaccounted water use in data sparse arid environment
NASA Astrophysics Data System (ADS)
Hafeez, M. M.; Edraki, M.; Ullah, M. K.; Chemin, Y.; Sixsmith, J.; Faux, R.
2009-12-01
Hydrological knowledge of irrigated farms within the inundation plains of the Murray-Darling Basin (MDB) is very limited, and the quality and reliability of the observation network have been declining rapidly over the past decade. This paper focuses on Land Surface Diversions (LSD), which encompass all forms of surface water diversion except the direct extraction of water from rivers, watercourses and lakes by farmers for the purposes of irrigation and stock and domestic supply. Their accurate measurement is very challenging, due to the practical difficulties associated with separating the different components of LSD and estimating them accurately for a large catchment. The inadequacy of current methods of measuring and monitoring LSD poses severe limitations on existing and proposed policies for managing such diversions. It is commonly believed that LSD comprise 20-30% of total diversions from river valleys in the MDB, but scientific estimates of LSD do not exist, because they were considered unimportant prior to the onset of the recent drought in Australia. There is a need to develop hydrological water balance models, coupling hydrological variables derived from on-ground hydrological measurements with remote sensing techniques, to accurately model LSD. Typically, the hydrological water balance components for farm/catchment-scale models include irrigation inflow, outflow, rainfall, runoff, evapotranspiration, soil moisture change and deep percolation. Actual evapotranspiration (ETa) is the largest and single most important component of the hydrological water balance model. Accurate quantification of all components of the hydrological water balance model at farm/catchment scale is of prime importance for estimating the volume of LSD. A hydrological water balance model was developed to calculate LSD at 6 selected pilot farms.
The catchment hydrological water balance model is being developed using selected parameters derived from the farm-scale hydrological water balance model. LSD results obtained through the modelling process have been compared with LSD estimates derived from ground-observed data at the 6 pilot farms. The differences between the values are between 3 and 5 percent of the water inputs, which is within the confidence limit expected from such analysis. Similarly, the LSD values at the catchment scale have been estimated with great confidence. The hydrological water balance models at farm and catchment scale provide reliable quantification of LSD. Improved LSD estimates can guide water management decisions at farm to catchment scale and could be instrumental in enhancing the integrity of the water allocation process, making it fairer and more equitable across stakeholders.
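The farm-scale budget described above can be closed for the unmeasured term: treating LSD as the inflow needed to balance the budget yields it as a residual of the measured and modelled components. A schematic sketch; the sign convention and the example numbers are assumptions for illustration (all terms in the same volumetric units):

```python
def land_surface_diversion(irrigation_inflow, rainfall, outflow,
                           evapotranspiration, soil_moisture_change,
                           deep_percolation):
    """Residual of the farm water balance:
    inflow + rainfall + LSD = outflow + ETa + ΔS + deep percolation,
    so LSD is whatever unaccounted inflow closes the budget."""
    return (outflow + evapotranspiration + soil_moisture_change
            + deep_percolation) - (irrigation_inflow + rainfall)

# Illustrative seasonal totals for one farm (e.g. in ML)
lsd = land_surface_diversion(irrigation_inflow=100.0, rainfall=20.0,
                             outflow=10.0, evapotranspiration=90.0,
                             soil_moisture_change=5.0, deep_percolation=20.0)
```

Because LSD is a residual, its uncertainty accumulates from every other term, which is why the 3-5% closure against ground observations quoted above matters.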
Gwinn, Maureen R; Craig, Jeneva; Axelrad, Daniel A; Cook, Rich; Dockins, Chris; Fann, Neal; Fegley, Robert; Guinnup, David E; Helfand, Gloria; Hubbell, Bryan; Mazur, Sarah L; Palma, Ted; Smith, Roy L; Vandenberg, John; Sonawane, Babasaheb
2011-01-01
Quantifying the benefits of reducing hazardous air pollutants (HAPs, or air toxics) has been limited by gaps in toxicological data, uncertainties in extrapolating results from high-dose animal experiments to estimate human effects at lower doses, limited ambient and personal exposure monitoring data, and insufficient economic research to support valuation of the health impacts often associated with exposure to individual air toxics. To address some of these issues, the U.S. Environmental Protection Agency held the Workshop on Estimating the Benefits of Reducing Hazardous Air Pollutants (HAPs) in Washington, DC, from 30 April to 1 May 2009. Experts from multiple disciplines discussed how best to move forward on air toxics benefits assessment, with a focus on developing near-term capability to conduct quantitative benefits assessment. Proposed methodologies involved analysis of data-rich pollutants and application of this analysis to other pollutants, using dose-response modeling of animal data for estimating benefits to humans, determining dose-equivalence relationships for different chemicals with similar health effects, and analysis similar to that used for criteria pollutants. Limitations and uncertainties in economic valuation of benefits assessment for HAPs were discussed as well. These discussions highlighted the complexities in estimating the benefits of reducing air toxics, and participants agreed that alternative methods for benefits assessment of HAPs are needed. Recommendations included clearly defining the key priorities of the Clean Air Act air toxics program to identify the most effective approaches for HAPs benefits analysis, focusing on susceptible and vulnerable populations, and improving dose-response estimation for quantification of benefits.
Maublanc, Julie; Dulaurent, Sylvain; Morichon, Julien; Lachâtre, Gérard; Gaulier, Jean-michel
2015-03-01
Although hair sampling is non-invasive, hair samples are generally collected in limited amounts for an obvious esthetic reason. In order to reduce the required quantity of sample, a multianalyte method allowing simultaneous identification and quantification of 35 psychoactive drugs was developed. After incubation of 50 mg of hair in a phosphate buffer (pH 5) overnight at room temperature, the substances of interest were extracted by a simple liquid-liquid extraction step with a dichloromethane/ether mixture (70:30, v/v). After evaporation under a gentle stream of nitrogen and reconstitution in formate buffer (2 mM, pH 3)/acetonitrile (90:10, v/v), 20 μL were injected into the LC-MS/MS system for a chromatographic run of 29 min using an Atlantis T3 column (150 × 2.1 mm, 3 μm) (Waters Corp, Milford, USA) and a gradient mixture of 2 mM, pH 3.0 ammonium formate and 2 mM, pH 3.0 ammonium formate/acetonitrile. Data acquisition was performed in scheduled MRM mode. Intra- and inter-day precisions, estimated using the coefficient of variation and relative bias, were lower than 20% at all concentration levels, except for two compounds. The limits of detection and quantification ranged from 0.5 to 10 pg/mg. After complete validation, this method has been successfully used in several forensic cases, three of which are reported.
Radio-frequency energy quantification in magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Alon, Leeor
Mapping of radio frequency (RF) energy deposition has been challenging for 50+ years, especially when scanning patients in the magnetic resonance imaging (MRI) environment. As a result, electromagnetic simulation software is often used for estimating the specific absorption rate (SAR), the rate of RF energy deposition in tissue. This thesis presents challenges associated with aligning information provided by electromagnetic simulations and MRI experiments. Because of the limitations of simulations, experimental methods for the quantification of SAR were established. A system for quantification of the total RF energy deposition was developed for parallel transmit MRI (a system that uses multiple antennas to excite and image the body). The system is capable of monitoring and predicting channel-by-channel RF energy deposition and whole-body SAR, and of tracking potential hardware failures that occur in the transmit chain and may cause the deposition of excessive energy into patients. Similarly, we demonstrated that local RF power deposition can be mapped and predicted for parallel transmit systems based on a series of MRI temperature mapping acquisitions. As part of this work, we developed tools for optimal reconstruction of temperature maps from MRI acquisitions. The tools developed for temperature mapping paved the way for utilizing MRI as a diagnostic tool for evaluating the safety of RF/microwave-emitting devices. Quantification of the RF energy was demonstrated for both MRI-compatible and non-MRI-compatible devices (such as cell phones), with the advantages of being noninvasive and of providing millimeter resolution and high accuracy.
Soriano, Brian D; Tam, Lei-Ting T; Lu, Hsieng S; Valladares, Violeta G
2012-01-01
Recombinant proteins expressed in Escherichia coli are often produced as unfolded, inactive forms accumulated in inclusion bodies. Redox-coupled thiols are typically employed in the refolding process in order to catalyze the formation of correct disulfide bonds at maximal folding efficiency. These thiols and the recombinant proteins can form mixed disulfide bonds to generate thiol-protein adducts. In this work, we apply a fluorescence-based assay for the quantification of cysteine and cysteamine adducts as observed in E. coli-derived proteins. The thiols are released by reduction of the adducted protein, collected and labeled with a fluorescent reagent, 6-aminoquinolyl-N-hydroxysuccinimidyl carbamate. The derivatized thiols are separated by reversed-phase HPLC and can be accurately quantified after method optimization. The estimated thiol content represents the total amount of adducted forms present in the analyzed samples. The limit of quantification (LOQ) was established; specifically, the lowest amount of quantifiable cysteine adduction is 30 picograms and the lowest amount of quantifiable cysteamine adduction is 60 picograms. The assay is useful for quantification of adducts in final purified products as well as in-process samples from various purification steps. The assay indicates that the purification process accomplishes a decrease in cysteine adduction from 0.19 nmol adduct/nmol protein to 0.03 nmol adduct/nmol protein as well as a decrease in cysteamine adduction from 0.24 nmol adduct/nmol protein to 0.14 nmol adduct/nmol protein. Copyright © 2011. Published by Elsevier B.V.
Sánchez-García, L; Bolea, E; Laborda, F; Cubel, C; Ferrer, P; Gianolio, D; da Silva, I; Castillo, J R
2016-03-18
Given the lack of studies on characterization and quantification of cerium oxide nanoparticles (CeO2 NPs), whose consumption and release are increasing greatly, this work proposes a method for their sizing and quantification by Flow Field-Flow Fractionation (FFFF) coupled to Inductively Coupled Plasma-Mass Spectrometry (ICP-MS). Two modalities of FFFF (Asymmetric Flow and Hollow Fiber Flow Field-Flow Fractionation, AF4 and HF5, respectively) are compared, and their advantages and limitations discussed. Experimental conditions (carrier composition, pH, ionic strength, crossflow and carrier flow rates) are studied in detail in terms of NP separation, recovery, and repeatability. Size characterization of CeO2 NPs was addressed by different approaches. In the absence of feasible size standards of CeO2 NPs, suspensions of Ag, Au, and SiO2 NPs of known size were investigated. Ag and Au NPs failed to show behavior comparable to that of the CeO2 NPs, whereas the use of SiO2 NPs provided size estimations in agreement with those predicted by theory. The latter approach was thus used for characterizing the size of CeO2 NPs in a commercial suspension. Results were in adequate concordance with those achieved by transmission electron microscopy, X-ray diffraction and dynamic light scattering. The quantification of CeO2 NPs in the commercial suspension by AF4-ICP-MS required the use of a CeO2 NP standard, since the use of ionic cerium resulted in low recoveries (99 ± 9% vs. 73 ± 7%, respectively). A limit of detection of 0.9 μg L⁻¹ CeO2, corresponding to a number concentration of 1.8 × 10¹² L⁻¹ for NPs of 5 nm, was achieved for an injection volume of 100 μL. Copyright © 2016 Elsevier B.V. All rights reserved.
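The correspondence between the mass-based and number-based detection limits follows from assuming monodisperse spheres of bulk density. A quick back-of-the-envelope check; the 7.13 g/cm³ bulk CeO2 density is an assumption (the effective density of suspended nanoparticles can differ):

```python
import math

def number_conc_per_litre(mass_conc_ug_per_l, diameter_nm, density_g_cm3=7.13):
    """Particle number concentration (per litre) for monodisperse
    spheres of the given diameter and material density."""
    r_cm = diameter_nm * 1e-7 / 2.0                               # nm -> cm
    particle_mass_g = density_g_cm3 * (4.0 / 3.0) * math.pi * r_cm ** 3
    return mass_conc_ug_per_l * 1e-6 / particle_mass_g            # μg -> g

# 0.9 μg/L of 5 nm CeO2 NPs: on the order of the reported 1.8e12 L^-1
n = number_conc_per_litre(0.9, 5.0)
```

Because particle mass scales with diameter cubed, the same mass detection limit corresponds to far fewer particles as size grows, which is why number-based limits must always quote a size.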
NASA Technical Reports Server (NTRS)
Duggin, M. J. (Principal Investigator); Piwinski, D.
1982-01-01
The use of NOAA AVHRR data to map and monitor vegetation types and conditions in near real-time can be enhanced by using a portion of each GAC image that is larger than the central 25% now considered. Enlargement of the cloud free image data set can permit development of a series of algorithms for correcting imagery for ground reflectance and for atmospheric scattering anisotropy within certain accuracy limits. Empirical correction algorithms used to normalize digital radiance or VIN data must contain factors for growth stage and for instrument spectral response. While it is not possible to correct for random fluctuations in target radiance, it is possible to estimate the necessary radiance difference between targets in order to provide target discrimination and quantification within predetermined limits of accuracy. A major difficulty lies in the lack of documentation of preprocessing algorithms used on AVHRR digital data.
Prasad, Thatipamula R; Joseph, Siji; Kole, Prashant; Kumar, Anoop; Subramanian, Murali; Rajagopalan, Sudha; Kr, Prabhakar
2017-11-01
The objective of the current work was to develop a 'green chemistry'-compliant, selective and sensitive supercritical fluid chromatography-tandem mass spectrometry method for simultaneous estimation of risperidone (RIS) and its chiral metabolites in rat plasma. Methodology & results: An Agilent 1260 Infinity analytical supercritical fluid chromatography system resolved RIS and its chiral metabolites within a runtime of 6 min using a gradient chromatography method. Using a simple protein precipitation sample preparation followed by mass spectrometric detection, a sensitivity of 0.92 nM (lower limit of quantification) was achieved. With linearity over four log units (0.91-7500 nM), the method was found to be selective, accurate, precise and robust. The method was validated and was successfully applied for simultaneous estimation of RIS and its 9-hydroxyrisperidone metabolites (R and S individually) after intravenous and per oral administration to rats.
Issues connected with indirect cost quantification: a focus on the transportation system
NASA Astrophysics Data System (ADS)
Křivánková, Zuzana; Bíl, Michal; Kubeček, Jan; Vodák, Rostislav
2017-04-01
Transportation and communication networks in general are vital parts of modern society. The economy relies heavily on transportation system performance, and many people commute to work regularly. Stockpiles in many companies are being reduced as just-in-time production relies on the transportation network to supply resources on time. Natural hazards have the potential to disturb transportation systems. Earthquakes, flooding or landslides are examples of high-energy processes capable of causing direct losses (i.e. physical damage to the infrastructure). We have focused on quantification of the indirect costs of natural hazards, which are not easy to estimate. Indirect losses can also emerge as a result of low-energy meteorological hazards which only seldom cause direct losses, e.g. glaze or snowfall. Whereas evidence of repair work and general direct costs usually exists or can be estimated, indirect costs are much more difficult to identify, particularly when they are not covered by insurance agencies. Designating alternative routes (detours) is the most frequent response to blocked road links. Indirect costs can then be related to increased fuel consumption and additional operating costs. Detours usually result in prolonged travel times, so indirect cost quantification also has to cover the value of time; the costs of delay are, however, a nonlinear function of travel time. The existence of an alternative transportation pattern may also result in an increased number of traffic crashes. This topic has not been studied in depth, but an increase in traffic crashes has been reported when people suddenly changed their traffic modes, e.g. when air traffic was not possible. The lost user benefit from trips that were cancelled or suppressed is also difficult to quantify.
Several approaches, based on post-event questionnaire surveys, have been applied to communities and companies affected by transportation accessibility cut-offs. No widely accepted methodology is available, however. In this presentation we discuss current approaches and their limitations related to indirect cost estimation, which can be applied to the estimation of natural hazard impacts.
Confidence estimation for quantitative photoacoustic imaging
NASA Astrophysics Data System (ADS)
Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena
2018-02-01
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
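The benefit of confidence-aware quantification can be illustrated with synthetic data: when per-voxel error scales with the estimated confidence interval width, averaging only the low-uncertainty voxels tightens the ROI estimate relative to averaging everything. A toy sketch (all numbers synthetic; the quartile cutoff is an arbitrary illustrative choice, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 0.8          # synthetic ground-truth parameter in the ROI
n_voxels = 1000

# Synthetic per-voxel confidence interval widths, with estimation error
# whose magnitude scales with the width (the idealized assumption here)
ci_width = rng.uniform(0.01, 0.20, n_voxels)
estimates = true_value + rng.normal(0.0, 1.0, n_voxels) * ci_width

# Average only over the quartile of voxels with the narrowest intervals
trusted = ci_width < np.percentile(ci_width, 25)
roi_mean_all = estimates.mean()
roi_mean_trusted = estimates[trusted].mean()
```

In real PA data the confidence estimate itself is imperfect, so the gain depends on how well predicted uncertainty tracks actual error; the sketch only shows the mechanism.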
Theoretical limitations of quantification for noncompetitive sandwich immunoassays.
Woolley, Christine F; Hayes, Mark A; Mahanti, Prasun; Douglass Gilman, S; Taylor, Tom
2015-11-01
Immunoassays exploit the highly selective interaction between antibodies and antigens to provide a vital method for biomolecule detection at low concentrations. Developers and practitioners of immunoassays have long known that non-specific binding often restricts immunoassay limits of quantification (LOQs). Aside from non-specific binding, most efforts by analytical chemists to reduce the LOQ for these techniques have focused on improving the signal amplification methods and minimizing the limitations of the detection system. However, with detection technology now capable of sensing single fluorescent molecules, this approach is unlikely to lead to dramatic improvements in the future. Here, fundamental interactions based on the law of mass action are analytically connected to signal generation, replacing the four- and five-parameter fittings commercially used to approximate sigmoidal immunoassay curves and allowing quantitative consideration of non-specific binding and statistical limitations in order to understand the ultimate detection capabilities of immunoassays. The restrictions imposed on limits of quantification by instrumental noise, non-specific binding, and counting statistics are discussed based on equilibrium relations for a sandwich immunoassay. Understanding the maximal capabilities of immunoassays for each of these regimes can greatly assist in the development and evaluation of immunoassay platforms. While many studies suggest that single molecule detection is possible through immunoassay techniques, here it is demonstrated that the fundamental limit of quantification (precision of 10% or better) for an immunoassay is approximately 131 molecules; this limit is based on fundamental and unavoidable statistical limitations.
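The order of magnitude of that counting-statistics limit can be reproduced with a pure-Poisson argument: if the only noise were shot noise, the relative precision of a mean count N would be 1/√N, so 10% precision needs N ≥ 100 molecules. The paper's fuller treatment, which includes additional statistical terms, arrives at ≈131; the sketch below shows only the simplified Poisson bound:

```python
import math

def min_counts_for_cv(cv_target=0.10):
    """Smallest mean count N whose Poisson shot-noise coefficient of
    variation, 1/sqrt(N), meets the target relative precision."""
    return math.ceil(1.0 / cv_target ** 2)

n = min_counts_for_cv(0.10)   # pure-Poisson bound for 10% precision
```

The gap between 100 and 131 is the price of the non-Poisson terms (e.g. binomial capture statistics) in the full analysis, which this sketch deliberately omits.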
The principles of quantification applied to in vivo proton MR spectroscopy.
Helms, Gunther
2008-08-01
Following the identification of metabolite signals in the in vivo MR spectrum, quantification is the procedure to estimate numerical values of their concentrations. The two essential steps are discussed in detail: analysis by fitting a model of prior knowledge, that is, the decomposition of the spectrum into the signals of singular metabolites; then, normalization of these signals to yield concentration estimates. Special attention is given to using the in vivo water signal as internal reference.
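In its simplest form, the normalization step with internal water referencing reduces to a signal ratio scaled by the number of contributing protons and the tissue water content. A deliberately simplified sketch (relaxation and partial-volume corrections, which a real quantification must include, are omitted; the numeric values are illustrative assumptions):

```python
def metabolite_conc_mM(s_met, s_water, n_protons_met,
                       pure_water_mM=55500.0, water_content=0.8):
    """Simplest internal-water-referenced concentration estimate:
    metabolite/water signal ratio, corrected for protons per molecule
    (water contributes 2) and tissue water content.
    T1/T2 relaxation corrections are deliberately omitted."""
    visible_water_mM = pure_water_mM * water_content
    return (s_met / s_water) * (2.0 / n_protons_met) * visible_water_mM

# e.g. a metabolite singlet arising from a 3-proton methyl group
c = metabolite_conc_mM(s_met=1.0, s_water=5000.0, n_protons_met=3)
```

Because every correction factor multiplies this ratio, errors in assumed water content or relaxation times propagate directly into the concentration estimate, which is why the water reference must be characterized carefully.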
Varrone, Andrea; Gulyás, Balázs; Takano, Akihiro; Stabin, Michael G; Jonsson, Cathrine; Halldin, Christer
2012-02-01
[(18)F]FE-PE2I is a promising dopamine transporter (DAT) radioligand. In nonhuman primates, we examined the accuracy of simplified quantification methods and the estimates of radiation dose of [(18)F]FE-PE2I. In the quantification study, binding potential (BP(ND)) values previously reported in three rhesus monkeys using kinetic and graphical analyses of [(18)F]FE-PE2I were used for comparison. BP(ND) using the cerebellum as reference region was obtained with four reference tissue methods applied to the [(18)F]FE-PE2I data that were compared with the kinetic and graphical analyses. In the whole-body study, estimates of adsorbed radiation were obtained in two cynomolgus monkeys. All reference tissue methods provided BP(ND) values within 5% of the values obtained with the kinetic and graphical analyses. The shortest imaging time for stable BP(ND) estimation was 54 min. The average effective dose of [(18)F]FE-PE2I was 0.021 mSv/MBq, similar to 2-deoxy-2-[(18)F]fluoro-d-glucose. The results in nonhuman primates suggest that [(18)F]FE-PE2I is suitable for accurate and stable DAT quantification, and its radiation dose estimates would allow for a maximal administered radioactivity of 476 MBq in human subjects. Copyright © 2012 Elsevier Inc. All rights reserved.
Quantification of HTLV-1 Clonality and TCR Diversity
Laydon, Daniel J.; Melamed, Anat; Sim, Aaron; Gillet, Nicolas A.; Sim, Kathleen; Darko, Sam; Kroll, J. Simon; Douek, Daniel C.; Price, David A.; Bangham, Charles R. M.; Asquith, Becca
2014-01-01
Estimation of immunological and microbiological diversity is vital to our understanding of infection and the immune response. For instance, what is the diversity of the T cell repertoire? These questions are partially addressed by high-throughput sequencing techniques that enable identification of immunological and microbiological “species” in a sample. Estimators of the number of unseen species are needed to estimate population diversity from sample diversity. Here we test five widely used non-parametric estimators, and develop and validate a novel method, DivE, to estimate species richness and distribution. We used three independent datasets: (i) viral populations from subjects infected with human T-lymphotropic virus type 1; (ii) T cell antigen receptor clonotype repertoires; and (iii) microbial data from infant faecal samples. When applied to datasets with rarefaction curves that did not plateau, existing estimators systematically increased with sample size. In contrast, DivE consistently and accurately estimated diversity for all datasets. We identify conditions that limit the application of DivE. We also show that DivE can be used to accurately estimate the underlying population frequency distribution. We have developed a novel method that is significantly more accurate than commonly used biodiversity estimators in microbiological and immunological populations. PMID:24945836
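Abundance-based non-parametric richness estimators of the kind evaluated here include the classical Chao1 lower bound, which extrapolates unseen species from the counts of singletons and doubletons. A minimal sketch with toy clone abundances (not the paper's data; whether Chao1 was among the five tested estimators is not stated in this abstract):

```python
import numpy as np

def chao1(abundances):
    """Chao1 lower-bound estimate of species richness from sample
    abundances (one count per observed species/clonotype)."""
    a = np.asarray(abundances)
    s_obs = int((a > 0).sum())
    f1 = int((a == 1).sum())   # species seen exactly once
    f2 = int((a == 2).sum())   # species seen exactly twice
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0   # bias-corrected variant
    return s_obs + f1 * f1 / (2.0 * f2)

# Toy clone-size sample: many singletons imply substantial unseen diversity
est = chao1([10, 5, 3, 2, 2, 1, 1, 1, 1])
```

The systematic growth with sample size reported above for such estimators on non-plateauing rarefaction curves is exactly the failure mode the authors' DivE method targets.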
Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles
Cortés, Camilo; Unzueta, Luis; de los Reyes-Guzmán, Ana; Ruiz, Oscar E.; Flórez, Julián
2016-01-01
In Robot-Assisted Rehabilitation (RAR) the accurate estimation of the patient limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address these limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. Then, the GH joint angles are estimated by combining the estimated marker poses and exoskeleton Forward Kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch of the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method's accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, the method's accuracy is adequate for RAR. PMID:27403044
Islam, Johirul; Zaman, Kamaruz; Chakrabarti, Srijita; Sharma Bora, Nilutpal; Mandal, Santa; Pratim Pathak, Manash; Srinivas Raju, Pakalapati; Chattopadhyay, Pronobesh
2017-07-01
A simple, accurate and sensitive reversed-phase high-performance liquid chromatographic (RP-HPLC) method has been developed for the estimation of ethyl 2-aminobenzoate (EAB) in a matrix-type monolithic polymeric device and validated as per the International Conference on Harmonization guidelines. The analysis was performed isocratically on a ZORBAX Eclipse Plus C18 analytical column (250 × 4.4 mm, 5 μm) with a diode array detector (DAD), using acetonitrile and water (75:25 v/v) as the mobile phase at a constant flow-rate of 1.0 mL/min. Determination of EAB was not subject to interference from the excipients. Inter- and intra-day relative standard deviations were not higher than 2%. Mean recovery was between 98.7 and 101.3%. The calibration curve was linear in the concentration range of 0.5-10 µg/mL. Limits of detection and quantification were 0.19 and 0.60 µg/mL, respectively. Thus, the present report puts forward a novel method for the estimation of EAB, an emerging insect repellent, using the RP-HPLC technique. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
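Detection and quantification limits like those quoted above are commonly derived via the ICH Q2(R1) convention LOD = 3.3σ/S and LOQ = 10σ/S, where S is the calibration slope and σ the residual standard deviation of the regression. A sketch with hypothetical calibration points (not the paper's data; the authors may have used a different ICH option):

```python
import numpy as np

def lod_loq(conc, response):
    """ICH Q2(R1)-style limits from a linear calibration:
    LOD = 3.3*sigma/S and LOQ = 10*sigma/S, with S the slope and
    sigma the residual standard deviation of the fit."""
    conc = np.asarray(conc, float)
    resp = np.asarray(response, float)
    slope, intercept = np.polyfit(conc, resp, 1)
    resid = resp - (slope * conc + intercept)
    sigma = resid.std(ddof=2)  # two fitted parameters
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# hypothetical calibration (µg/mL vs. peak area)
lod, loq = lod_loq([0.5, 1, 2, 5, 10], [51, 99, 202, 498, 1001])
print(round(lod, 3), round(loq, 3))
```

Note that with this convention LOQ/LOD is always 10/3.3 ≈ 3.0, close to the 0.60/0.19 ratio reported in the abstract.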
Aniseikonia quantification: error rate of rule of thumb estimation.
Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P
1999-01-01
To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: a parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: for Group 1 (24 cases), 5 (or 21%) were equal (i.e., 1% or less difference), 16 (or 67%) were greater (more than 1% difference), and 3 (or 13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases), 45 (or 73%) were equal (1% or less), 10 (or 16%) were greater, and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: in Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation are disturbingly large. This problem is greatly magnified by the time, effort and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.
NASA Astrophysics Data System (ADS)
Ramanjaneyulu, P. S.; Sayi, Y. S.; Ramakumar, K. L.
2008-08-01
Quantification of boron in diverse materials of relevance in nuclear technology is essential in view of its high thermal neutron absorption cross section. A simple and sensitive method has been developed for the determination of boron in uranium-aluminum-silicon alloy, based on leaching of boron with 6 M HCl and H2O2, its selective separation by solvent extraction with 2-ethylhexane-1,3-diol, and quantification by spectrophotometry using curcumin. The method has been evaluated by the standard addition method and validated by inductively coupled plasma-atomic emission spectroscopy. The relative standard deviation and absolute detection limit of the method are 3.0% (at the 1σ level) and 12 ng, respectively. All possible sources of uncertainty in the methodology have been individually assessed, following the International Organization for Standardization guidelines. The combined uncertainty is calculated employing uncertainty propagation formulae. The expanded uncertainty in the measurement at the 95% confidence level (coverage factor 2) is 8.840%.
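The ISO/GUM procedure referenced above combines independent uncertainty components in quadrature and multiplies the combined standard uncertainty by a coverage factor (here k = 2 for ~95% confidence). A sketch with a hypothetical relative uncertainty budget (the paper's individual components are not given in the abstract):

```python
import math

def combined_relative_uncertainty(rel_components):
    """GUM-style root-sum-of-squares combination of independent
    relative standard uncertainties (multiplicative model)."""
    return math.sqrt(sum(u * u for u in rel_components))

# hypothetical budget (fractions): weighing, volumetry,
# calibration, repeatability
u_c = combined_relative_uncertainty([0.010, 0.015, 0.030, 0.025])
U = 2 * u_c  # expanded uncertainty, coverage factor k = 2 (~95%)
print(f"u_c = {u_c:.4f}, U = {U:.4f}")
```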
TAPAS: tools to assist the targeted protein quantification of human alternative splice variants.
Yang, Jae-Seong; Sabidó, Eduard; Serrano, Luis; Kiel, Christina
2014-10-15
In proteomes of higher eukaryotes, many alternative splice variants can only be detected by their shared peptides. This makes it highly challenging to use peptide-centric mass spectrometry to distinguish and to quantify protein isoforms resulting from alternative splicing events. We have developed two complementary algorithms based on linear mathematical models to efficiently compute a minimal set of shared and unique peptides needed to quantify a set of isoforms and splice variants. Further, we developed a statistical method to estimate the splice variant abundances based on stable isotope labeled peptide quantities. The algorithms and databases are integrated in a web-based tool, and we have experimentally tested the limits of our quantification method using spiked proteins and cell extracts. The TAPAS server is available at URL http://davinci.crg.es/tapas/. luis.serrano@crg.eu or christina.kiel@crg.eu Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
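A linear system of the kind the abstract describes, where each measured peptide quantity is the sum of the abundances of the isoforms containing it, can be illustrated with a non-negative least-squares solve. The design matrix and measurements below are invented, and TAPAS's actual algorithms and statistics differ in detail:

```python
import numpy as np
from scipy.optimize import nnls

# Rows: peptides, columns: isoforms. A[i, j] = 1 if peptide i occurs
# in isoform j (hypothetical shared/unique peptide pattern).
A = np.array([[1, 1, 0],   # shared by isoforms 1 and 2
              [1, 0, 0],   # unique to isoform 1
              [0, 1, 1],   # shared by isoforms 2 and 3
              [0, 0, 1]],  # unique to isoform 3
             dtype=float)
b = np.array([30.0, 10.0, 45.0, 25.0])  # measured peptide amounts

# Non-negative least squares: isoform abundances cannot be negative.
abundances, residual = nnls(A, b)
print(abundances)  # recovers [10, 20, 25] for this exact system
```

In this toy system the unique peptides pin down isoforms 1 and 3, and the shared peptides then determine isoform 2, which mirrors why the minimal peptide set computed by the tool must mix unique and shared peptides.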
Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images
Frey, Eric C.; Humm, John L.; Ljungberg, Michael
2012-01-01
The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429
MPQ-cytometry: a magnetism-based method for quantification of nanoparticle-cell interactions
NASA Astrophysics Data System (ADS)
Shipunova, V. O.; Nikitin, M. P.; Nikitin, P. I.; Deyev, S. M.
2016-06-01
Precise quantification of interactions between nanoparticles and living cells is among the imperative tasks for research in nanobiotechnology, nanotoxicology and biomedicine. To meet the challenge, a rapid method called MPQ-cytometry is developed, which measures the integral non-linear response produced by magnetically labeled nanoparticles in a cell sample with an original magnetic particle quantification (MPQ) technique. MPQ-cytometry provides a sensitivity limit of 0.33 ng of nanoparticles and is devoid of the background signal present in many label-based assays. Each measurement takes only a few seconds, and no complicated sample preparation or data processing is required. The capabilities of the method have been demonstrated by quantification of interactions of iron oxide nanoparticles with eukaryotic cells. The total amount of targeted nanoparticles that specifically recognized the HER2/neu oncomarker on the human cancer cell surface was successfully measured, the specificity of interaction permitting the detection of HER2/neu-positive cells in a cell mixture. Moreover, it has been shown that MPQ-cytometry analysis of HER2/neu-specific iron oxide nanoparticle interaction with six cell lines of different tissue origins quantitatively reflects the HER2/neu status of the cells. High correlation of MPQ-cytometry data with those obtained by three other methods commonly used in molecular and cell biology supports consideration of this method as a prospective alternative for both quantifying cell-bound nanoparticles and estimating the expression level of cell surface antigens. The proposed method does not require expensive sophisticated equipment or highly skilled personnel and can be easily applied for rapid diagnostics, especially under field conditions. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr03507h
Direct and Absolute Quantification of over 1800 Yeast Proteins via Selected Reaction Monitoring*
Lawless, Craig; Holman, Stephen W.; Brownridge, Philip; Lanthaler, Karin; Harman, Victoria M.; Watkins, Rachel; Hammond, Dean E.; Miller, Rebecca L.; Sims, Paul F. G.; Grant, Christopher M.; Eyers, Claire E.; Beynon, Robert J.
2016-01-01
Defining intracellular protein concentration is critical in molecular systems biology. Although strategies for determining relative protein changes are available, defining robust absolute values in copies per cell has proven significantly more challenging. Here we present a reference data set quantifying over 1800 Saccharomyces cerevisiae proteins by direct means using protein-specific stable-isotope labeled internal standards and selected reaction monitoring (SRM) mass spectrometry, far exceeding any previous study. This was achieved by careful design of over 100 QconCAT recombinant proteins as standards, defining 1167 proteins in terms of copies per cell and upper limits on a further 668, with robust CVs routinely less than 20%. The selected reaction monitoring-derived proteome is compared with existing quantitative data sets, highlighting the disparities between methodologies. Coupled with a quantification of the transcriptome by RNA-seq taken from the same cells, these data support revised estimates of several fundamental molecular parameters: a total protein count of ∼100 million molecules-per-cell, a median of ∼1000 proteins-per-transcript, and a linear model of protein translation explaining 70% of the variance in translation rate. This work contributes a “gold-standard” reference yeast proteome (including 532 values based on high quality, dual peptide quantification) that can be widely used in systems models and for other comparative studies. PMID:26750110
Veronese, Mattia; Rizzo, Gaia; Bertoldo, Alessandra; Turkheimer, Federico E
2016-01-01
In Positron Emission Tomography (PET), spectral analysis (SA) allows the quantification of dynamic data by relating the radioactivity measured by the scanner in time to the underlying physiological processes of the system under investigation. Among the different approaches for the quantification of PET data, SA is based on the linear solution of the Laplace transform inversion, in which the measured arterial and tissue time-activity curves of a radiotracer are used to calculate the input response function of the tissue. In recent years SA has been used with a large number of PET tracers in brain and nonbrain applications, demonstrating that it is a very flexible and robust method for PET data analysis. Unlike the most common PET quantification approaches, which adopt standard nonlinear estimation of compartmental models or some linear simplifications, SA can be applied without defining any specific model configuration and has demonstrated very good sensitivity to the underlying kinetics. This characteristic makes it useful as an investigative tool, especially for the analysis of novel PET tracers. The purpose of this work is to offer an overview of SA, to discuss advantages and limitations of the methodology, and to report on its applications in the PET field.
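In practice, the linear inversion that SA performs is commonly implemented as a non-negative fit of the tissue curve against a grid of decaying exponentials convolved with the arterial input function. The sketch below uses toy curves and an invented rate grid, purely to show the structure of the computation:

```python
import numpy as np
from scipy.optimize import nnls

# Time grid (min), toy arterial input, and a grid of decay rates.
t = np.linspace(0, 60, 121)
dt = t[1] - t[0]
cp = t * np.exp(-t / 4.0)          # toy arterial input function
betas = np.logspace(-3, 0, 30)     # candidate decay rates (1/min)

# Basis: input convolved with exp(-beta*t), discretized convolution.
basis = np.stack([np.convolve(cp, np.exp(-b * t))[:t.size] * dt
                  for b in betas], axis=1)

# Simulate a tissue curve from two "true" spectral components,
# then recover the sparse spectrum of amplitudes with NNLS.
ct = 0.05 * basis[:, 5] + 0.02 * basis[:, 20]
alphas, _ = nnls(basis, ct)
print(alphas[alphas > 1e-6])
```

The non-negativity constraint is what keeps the recovered spectrum sparse without a fixed compartmental configuration, which is the model-free property the abstract highlights.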
NASA Astrophysics Data System (ADS)
Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.
2017-04-01
Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. 
Results are presented and discussed in terms of their reliability and resolution.
NASA Astrophysics Data System (ADS)
Zhang, Weiying; Lou, Inchio; Ung, Wai Kin; Kong, Yijun; Mok, Kai Meng
2014-06-01
Freshwater algal blooms have become a growing concern world-wide. They are caused by high levels of cyanobacteria, predominantly Microcystis spp. and Cylindrospermopsis raciborskii, which can produce microcystin and cylindrospermopsin, respectively. Long-term exposure to these cyanotoxins may affect public health, so reliable detection, quantification, and enumeration of these harmful algal species have become a priority in water quality management. Traditional manual enumeration of algal bloom cells relies primarily on microscopic identification, which is limited by inaccuracy and time consumption. With the development of molecular techniques and an increasing number of microbial sequences available in the GenBank database, molecular methods can provide more rapid, reliable, and accurate detection and quantification. In this study, multiplex polymerase chain reaction (PCR) and real-time quantitative PCR (qPCR) techniques were developed and applied for monitoring the cyanobacteria Microcystis spp. and C. raciborskii in the Macau Storage Reservoir (MSR). The results showed that the techniques successfully identified and quantified the species in pure and mixed cultures, and showed potential for application to water samples from the MSR. When the target species were above 1 million cells/L, similar cell numbers were estimated by microscopic enumeration and qPCR. Further quantification in water samples indicated that the ratio of cell numbers estimated by microscopy to those estimated by qPCR was 0.4-12.9 for cyanobacteria and 0.2-3.9 for C. raciborskii. However, Microcystis spp. was not observed by manual enumeration, while it was detected at low levels by qPCR, suggesting that qPCR is more sensitive and accurate. Thus the molecular approaches provide an additional reliable monitoring option to traditional microscopic enumeration for ecosystem monitoring programs.
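qPCR-based enumeration of the kind used here converts a measured quantification cycle (Ct) to copy number through a log-linear standard curve. The slope and intercept below are hypothetical illustrative values (a slope of about -3.32 corresponds to ~100% amplification efficiency), not parameters from this study:

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Estimate target copy number from a qPCR Ct value using a
    standard curve Ct = slope*log10(copies) + intercept.
    Slope and intercept are hypothetical; in practice they are
    fitted from a serial dilution of a known standard."""
    return 10 ** ((ct - intercept) / slope)

print(round(copies_from_ct(31.36)))  # -> 100 copies with these params
```

Copy numbers are then converted to cells/L via the target's per-cell gene copy number and the volume of sample processed, which is one source of the microscopy/qPCR ratios reported above.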
Voltammetry as a Tool for Characterization of CdTe Quantum Dots
Sobrova, Pavlina; Ryvolova, Marketa; Hubalek, Jaromir; Adam, Vojtech; Kizek, Rene
2013-01-01
Electrochemical detection of quantum dots (QDs) has already been used in numerous applications. However, QDs have not been well characterized by voltammetry with respect to their identification and quantification. Therefore, the main aim was to characterize CdTe QDs using cyclic and differential pulse voltammetry. The obtained peaks were identified and the detection limit (3 S/N) was estimated to be as low as 100 fg/mL. Based on these convincing results, a new method to study the stability of the dots and to quantify them was suggested. This approach was then further utilized for testing QD stability. PMID:23807507
NASA Technical Reports Server (NTRS)
Hetherington, N. W.; Rosenblatt, L. S.; Higgins, E. A.; Winget, C. M.
1973-01-01
A mathematical model previously presented by Rosenblatt et al. (1973) for estimating the rates of resynchronization of individual biorhythms following transmeridian flights or photoperiod shifts is extended to the estimation of the rates at which two biorhythms resynchronize with respect to each other. Such quantification of the rate of restoration of the initial phase relationship of the two biorhythms is pointed out as a valuable tool in the study of internal desynchronosis.
Liu, Ruolin; Dickerson, Julie
2017-11-01
We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but share the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing with large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracies. In the evaluation on a real data set, the transcript expression estimated by Strawberry had the highest correlation with NanoString probe counts, an independent experimental measure of transcript expression. Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.
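The EM-based assignment of ambiguous reads to transcripts can be sketched with a toy latent-class EM. This is an illustration of the general technique only, not Strawberry's actual model (which additionally works on splicing-graph nodes and corrects for sequencing bias):

```python
import numpy as np

def em_abundance(compat, n_iter=200):
    """Toy EM for transcript abundance. compat[r, t] = 1 if read r is
    compatible with transcript t. Returns transcript fractions."""
    n_reads, n_tx = compat.shape
    theta = np.full(n_tx, 1.0 / n_tx)
    for _ in range(n_iter):
        # E-step: fractional assignment of each read to transcripts
        w = compat * theta
        w /= w.sum(axis=1, keepdims=True)
        # M-step: re-estimate fractions from expected read counts
        theta = w.sum(axis=0) / n_reads
    return theta

# 3 reads unique to tx0, 1 unique to tx1, 4 ambiguous between both
compat = np.array([[1, 0]] * 3 + [[0, 1]] * 1 + [[1, 1]] * 4, float)
print(em_abundance(compat))  # converges to [0.75, 0.25]
```

The unique reads anchor the estimates, and the ambiguous reads are split in proportion to the current abundance estimates; iterating this to a fixed point is what "simultaneously estimates" means in the abstract.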
Migration studies of nickel and chromium from ceramic and glass tableware into food simulants.
Szynal, Tomasz; Rebeniak, Małgorzata; Mania, Monika
In addition to the release of lead and cadmium from ceramic and glass vessels (acceptable limits being set by EU Directive 84/500/EEC), other harmful metals such as nickel and chromium can migrate. Permissible migration limits for these latter metals, however, have not yet been set in EU legislation. Both the toxic properties of nickel and chromium and the measures taken by the European Commission Working Group on Food Contact Materials to verify permissible migration limits for lead, cadmium and other metals from ceramics have acted as drivers for studies on nickel and chromium release from ceramic and glass tableware. The aim was to investigate the migration of nickel and chromium into food simulants from ceramic and glassware available on the Polish market and intended for contact with food. Potential consumer exposure can thereby be estimated from the release of these elements into food. The tableware consisted of ceramic and glass vessels generally available on the domestic market, with inner surfaces that were mainly coloured and with rim decorations. Migration of nickel and chromium from the ceramics was studied in 4% acetic acid (24 ± 0.5 h at 22 ± 2°C), whilst that from glassware was studied in 4% acetic acid (24 ± 0.5 h at 22 ± 2°C) and 0.5% citric acid (2 ± 0.1 h at 70 ± 2°C). The concentrations of metals that had migrated into the test solutions were measured by flame atomic absorption spectrometry (FAAS). This analytical procedure had been previously validated for measuring nickel and chromium released into food simulants from ceramic and glass tableware, with working ranges, detection limits, quantification limits, repeatability, accuracy, mean recovery and uncertainty established.
Migration of nickel and chromium was measured in 172 ceramic and 52 glass vessel samples, with all results below the limit of quantification (LOQ = 0.02 mg/L), except for one instance where a nickel concentration of 0.04 mg/L was found. The validated method for measuring chromium achieved the following parameters: 0.02 to 0.80 mg/L working range, 0.01 mg/L detection limit, 0.02 mg/L limit of quantification, 6% repeatability, 2.8% accuracy, 102% average recovery and 11% uncertainty. For the nickel method the corresponding parameters were: 0.02 to 0.80 mg/L working range, 0.01 mg/L detection limit, 0.02 mg/L limit of quantification, 5% repeatability, 6.5% accuracy, 101% average recovery and 12% uncertainty. The tested ceramics and glassware did not pose a threat to human health with regard to migration of nickel and chromium, and thus any potential exposure to these metals released from these products into food will be small. However, due to the toxicity of these metals, testing for the migration of nickel and chromium is still warranted for articles coming into contact with food, including metalware. Keywords: ceramic tableware, ceramics, glassware, food contact articles, nickel, chromium leaching, migration.
Singh, C L; Singh, A; Kumar, S; Kumar, M; Sharma, P K; Majumdar, D K
2015-01-01
In the present study a simple, accurate, precise, economical and specific UV-spectrophotometric method for the estimation of besifloxacin in bulk and in different pharmaceutical formulations has been developed. The drug shows an absorption maximum (λmax) at 289 nm in distilled water, simulated tears and phosphate-buffered saline. The linearity range of the developed methods was 3-30 μg/mL of drug, with correlation coefficients (r(2)) of 0.9992, 0.9989 and 0.9984 in distilled water, simulated tears and phosphate-buffered saline, respectively. Reproducibility, expressed as %RSD of repeated measurements, was found to be less than 2%. The limit of detection in the different media was found to be 0.62, 0.72 and 0.88 μg/mL, respectively; the limit of quantification was 1.88, 2.10 and 2.60 μg/mL, respectively. The proposed method was validated statistically according to International Conference on Harmonization guidelines with respect to specificity, linearity, range, accuracy, precision and robustness, and was found to be accurate and highly specific for the estimation of besifloxacin in different pharmaceutical formulations.
Multiplex Droplet Digital PCR Protocols for Quantification of GM Maize Events.
Dobnik, David; Spilsberg, Bjørn; Bogožalec Košir, Alexandra; Štebih, Dejan; Morisset, Dany; Holst-Jensen, Arne; Žel, Jana
2018-01-01
The standard-curve based simplex quantitative polymerase chain reaction (qPCR) has been the gold standard for DNA target quantification for more than a decade. The large and growing number of individual analyses needed to test for genetically modified organisms (GMOs) is reducing the cost-effectiveness of qPCR. Droplet digital PCR (ddPCR) enables absolute quantification without standard curves, avoids the amplification efficiency bias observed with qPCR, allows more accurate estimations at low target copy numbers and, in combination with multiplexing, significantly improves cost efficiency. Here we describe two protocols for multiplex quantification of GM maize events: (1) nondiscriminating, with multiplex quantification of targets as a group (12 GM maize lines) and (2) discriminating, with multiplex quantification of individual targets (events). The first enables the quantification of twelve European Union authorized GM maize events as a group with only two assays, but does not permit determination of the individual events present. The second protocol enables the quantification of four individual targets (three GM events and one endogene) in a single reaction. Both protocols can be modified for quantification of any other DNA target.
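The "absolute quantification without standard curves" that ddPCR provides rests on Poisson statistics of target partitioning into droplets: the mean copies per droplet is λ = -ln(1 - p), where p is the fraction of positive droplets. A sketch (the 0.85 nL droplet volume is a typical instrument value, assumed here, not taken from this protocol):

```python
import math

def ddpcr_copies_per_ul(positive, total, droplet_vol_nl=0.85):
    """Absolute ddPCR quantification via the Poisson correction
    lambda = -ln(1 - p), with p the positive-droplet fraction.
    droplet_vol_nl is an assumed typical droplet volume."""
    p = positive / total
    lam = -math.log(1.0 - p)               # mean copies per droplet
    return lam / (droplet_vol_nl * 1e-3)   # copies per microlitre

print(round(ddpcr_copies_per_ul(4000, 15000), 1))  # -> 364.9
```

The correction matters because a positive droplet may contain more than one target copy; simply counting positives would underestimate the concentration, increasingly so at high loads.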
Quijano, Leyre; Yusà, Vicent; Font, Guillermina; McAllister, Claudia; Torres, Concepción; Pardo, Olga
2017-02-01
This study was carried out to determine current levels of nitrate in vegetables marketed in the Region of Valencia (Spain) and to estimate the toxicological risk associated with their intake. A total of 533 samples of seven vegetable species were studied. Nitrate levels were derived from the Valencia Region monitoring programme carried out from 2009 to 2013, and food consumption levels were taken from the first Valencia Food Consumption Survey, conducted in 2010. The exposure was estimated using a probabilistic approach and two scenarios were assumed for left-censored data: the lower-bound scenario, in which unquantified results (below the limit of quantification) were set to zero, and the upper-bound scenario, in which unquantified results were set to the limit of quantification value. The exposure of Valencia consumers to nitrate through the consumption of vegetable products appears to be relatively low. In the adult population (16-95 years) the P99.9 was 3.13 mg kg-1 body weight day-1 and 3.15 mg kg-1 body weight day-1 in the lower-bound and upper-bound scenarios, respectively. For young people (6-15 years) the P99.9 of the exposure was 4.20 mg kg-1 body weight day-1 and 4.40 mg kg-1 body weight day-1 in the lower-bound and upper-bound scenarios, respectively. The risk characterisation indicates that, under the upper-bound scenario, 0.79% of adults and 1.39% of young people can exceed the Acceptable Daily Intake of nitrate; this percentage is likely to correspond to extreme consumers of vegetables (such as vegetarians). Overall, the estimated exposures to nitrate from vegetables are unlikely to result in appreciable health risks. Copyright © 2016 Elsevier Ltd. All rights reserved.
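The two censoring scenarios described above can be sketched directly: non-detects are substituted by zero (lower bound) or by the LOQ (upper bound) before exposure percentiles are computed. All data below are invented for illustration:

```python
import numpy as np

def exposure_bounds(values, loq):
    """Lower/upper-bound treatment of left-censored residue data:
    non-detects (NaN) are set to 0 (lower bound) or to the LOQ
    (upper bound) before computing the P99.9 of exposure."""
    v = np.asarray(values, float)
    lower = np.where(np.isnan(v), 0.0, v)
    upper = np.where(np.isnan(v), loq, v)
    return np.percentile(lower, 99.9), np.percentile(upper, 99.9)

# hypothetical nitrate concentrations (mg/kg); NaN = below LOQ
data = [120.0, float("nan"), 80.0, float("nan"), 300.0, 50.0]
print(exposure_bounds(data, loq=10.0))
```

The gap between the two bounds (3.13 vs. 3.15 mg/kg bw/day for adults in the abstract) indicates how little the censored samples contribute at the upper tail of the exposure distribution.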
Uebelacker, Michael; Lachenmeier, Dirk W.
2011-01-01
Acetaldehyde (ethanal) is a genotoxic carcinogen, which may occur naturally or as an added flavour in foods. We have developed an efficient method to analyze the compound in a wide variety of food matrices. The analysis is conducted using headspace (HS) gas chromatography (GC) with flame ionization detection. Using a robot autosampler, the samples are digested in full automation with simulated gastric fluid (1 h at 37°C) under shaking, which frees acetaldehyde loosely bound to matrix compounds. Afterwards, an aliquot of the HS is injected into the GC system. Standard addition was applied for quantification to compensate for matrix effects. The precision of the method was sufficient (<3% coefficient of variation). The limit of detection was 0.01 mg/L and the limit of quantification was 0.04 mg/L. 140 authentic samples were analyzed. The acetaldehyde content in apples was 0.97 ± 0.80 mg/kg; orange juice contained 3.86 ± 2.88 mg/kg. The highest concentration was determined in a yoghurt (17 mg/kg). A first exposure estimate resulted in a daily acetaldehyde intake of less than 0.1 mg/kg bodyweight from food, which is considerably lower than the exposures from alcohol consumption or tobacco smoking. PMID:21747735
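Standard-addition quantification, as used here to compensate for matrix effects, regresses the signal on the amount of analyte spiked into the sample and extrapolates the line to its x-intercept. The spike levels and peak areas below are invented:

```python
import numpy as np

def standard_addition(added, signal):
    """Quantification by standard addition: fit signal vs. spiked
    amount; the native concentration is |x-intercept| = intercept/slope
    (valid when the response is linear through the spiked range)."""
    slope, intercept = np.polyfit(np.asarray(added, float),
                                  np.asarray(signal, float), 1)
    return intercept / slope

# hypothetical acetaldehyde spikes (mg/L) and GC peak areas
print(standard_addition([0, 0.5, 1.0, 2.0], [40, 90, 140, 240]))  # -> 0.4
```

Because the calibration is built inside the sample matrix itself, any proportional matrix effect scales spiked and native analyte alike and cancels out of the intercept.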
Li, P; Jia, J W; Jiang, L X; Zhu, H; Bai, L; Wang, J B; Tang, X M; Pan, A H
2012-04-27
To ensure the implementation of genetically modified organism (GMO)-labeling regulations, an event-specific detection method was developed based on the junction sequence of an exogenous integrant in the transgenic carnation variety Moonlite. The 5'-transgene integration sequence was isolated by thermal asymmetric interlaced PCR. Based upon the 5'-transgene integration sequence, event-specific primers and a TaqMan probe were designed to amplify fragments spanning the exogenous DNA and carnation genomic DNA. Qualitative and quantitative PCR assays were developed employing the designed primers and probe. The detection limit of the qualitative PCR assay was 0.05% for Moonlite in 100 ng total carnation genomic DNA, corresponding to about 79 copies of the carnation haploid genome; the limits of detection and quantification of the quantitative PCR assay were estimated to be 38 and 190 copies of haploid carnation genomic DNA, respectively. Carnation samples with different contents of genetically modified components were quantified, and the bias between the observed and true values of three samples was lower than the acceptance criterion (<25%) of the GMO detection method. These results indicated that these event-specific methods would be useful for the identification and quantification of the GMO carnation Moonlite.
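The copy numbers quoted above follow from dividing the target DNA mass by the mass of one haploid (1C) genome. The sketch uses the ~0.63 pg carnation 1C value implied by the abstract's own figures (an inference, not a cited constant):

```python
def haploid_copies(dna_ng, genome_pg=0.63, fraction=1.0):
    """Approximate haploid genome copies in a DNA aliquot:
    copies = (total mass in pg * fraction that is target) / 1C mass.
    genome_pg = 0.63 is an assumed carnation 1C value, back-calculated
    from the 0.05% of 100 ng ~ 79 copies figure in the abstract."""
    return dna_ng * 1000.0 * fraction / genome_pg

# 0.05% GM content in 100 ng of total carnation genomic DNA
print(round(haploid_copies(100.0, fraction=0.0005)))  # -> 79
```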
NASA Astrophysics Data System (ADS)
Pisso, Ignacio; Patra, Prabir; Breivik, Knut
2015-04-01
Lagrangian transport models based on time series of Eulerian fields provide a computationally affordable way of achieving very high resolution for limited areas and time periods. This makes them especially suitable for the analysis of point-wise measurements of atmospheric tracers. We present an application illustrated with examples of greenhouse gases from anthropogenic emissions in urban areas and biogenic emissions in Japan, and of pollutants in the Arctic. We assess the algorithmic complexity of the numerical implementation as well as the use of non-procedural techniques such as Object-Oriented programming. We discuss aspects related to the quantification of uncertainty from prior information in the presence of model error and a limited number of observations. The case of non-linear constraints is explored using direct numerical optimisation methods.
NASA Astrophysics Data System (ADS)
Swinburne, Thomas D.; Perez, Danny
2018-05-01
A massively parallel method to build large transition rate matrices from temperature-accelerated molecular dynamics trajectories is presented. Bayesian Markov model analysis is used to estimate the expected residence time in the known state space, providing crucial uncertainty quantification for higher-scale simulation schemes such as kinetic Monte Carlo or cluster dynamics. The estimators are additionally used to optimize where exploration is performed and the degree of temperature acceleration on the fly, giving an autonomous, optimal procedure to explore the state space of complex systems. The method is tested against exactly solvable models and used to explore the dynamics of C15 interstitial defects in iron. Our uncertainty quantification scheme allows for accurate modeling of the evolution of these defects over timescales of several seconds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef
Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
Direct qPCR quantification using the Quantifiler(®) Trio DNA quantification kit.
Liu, Jason Yingjie
2014-11-01
The effectiveness of a direct quantification assay is essential to the adoption of the combined direct quantification/direct STR workflow. In this paper, the feasibility of using the Quantifiler(®) Trio DNA quantification kit for the direct quantification of forensic casework samples was investigated. Both low-level touch DNA samples and blood samples were collected on PE swabs and quantified directly. The increased sensitivity of the Quantifiler(®) Trio kit enabled the detection of less than 10 pg of DNA in unprocessed touch samples and also minimized the stochastic effect experienced by different targets in the same sample. The DNA quantity information obtained from a direct quantification assay using the Quantifiler(®) Trio kit can also be used to accurately estimate the optimal input DNA quantity for a direct STR amplification reaction. The correlation between the direct quantification results (Quantifiler(®) Trio kit) and the direct STR results (GlobalFiler™ PCR amplification kit(*)) for low-level touch DNA samples indicates that direct quantification using the Quantifiler(®) Trio DNA quantification kit is more reliable than the Quantifiler(®) Duo DNA quantification kit for predicting the STR results of unprocessed touch DNA samples containing less than 10 pg of DNA. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita
As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation to existing computationally-driven protein quantification methods is that most ignore protein variation, such as alternate splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that are outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study from mouse plasma samples and demonstrate that BP-Quant achieves similar accuracy as the current state-of-the-art methods at proteoform identification with significantly better specificity. BP-Quant is available as MATLAB® and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.
NASA Astrophysics Data System (ADS)
Malagò, Anna; Efstathiou, Dionissios; Bouraoui, Fayçal; Nikolaidis, Nikolaos P.; Franchini, Marco; Bidoglio, Giovanni; Kritsotakis, Marinos
2016-09-01
Crete Island (Greece) is a karst dominated region that faces limited water supply and increased seasonal demand, especially during summer for agricultural and touristic uses. In addition, due to the mountainous terrain, interbasin water transfer is very limited. The resulting water imbalance requires a correct quantification of available water resources in view of developing appropriate management plans to face the problem of water shortage. The aim of this work is the development of a methodology using the SWAT model and a karst-flow model (KSWAT, Karst SWAT model) for the quantification of a spatially and temporally explicit hydrologic water balance of karst-dominated geomorphology in order to assess the sustainability of the actual water use. The application was conducted in the Island of Crete using both hard (long time series of streamflow and spring monitoring stations) and soft data (i.e. literature information of individual processes). The KSWAT model estimated the water balance under normal hydrological condition as follows: 6400 Mm3/y of precipitation, of which 40% (2500 Mm3/y) was lost through evapotranspiration, 5% was surface runoff and 55% percolated into the soil contributing to lateral flow (2%), and recharging the shallow (9%) and deep aquifer (44%). The water yield was estimated as 22% of precipitation, of which about half was the contribution from spring discharges (9% of precipitation). The application of the KSWAT model increased our knowledge about water resources availability and distribution in Crete under different hydrologic conditions. The model was able to capture the hydrology of the karst areas allowing a better management and planning of water resources under scarcity.
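The KSWAT water-balance partition reported above can be checked for closure, since the listed components are fractions of annual precipitation. A small sketch using the abstract's own figures:

```python
# KSWAT water balance components for Crete as fractions of precipitation
precip_mm3 = 6400  # Mm3/y under normal hydrological conditions
fractions = {
    "evapotranspiration": 0.40,
    "surface_runoff": 0.05,
    "lateral_flow": 0.02,
    "shallow_aquifer_recharge": 0.09,
    "deep_aquifer_recharge": 0.44,
}
volumes = {k: round(precip_mm3 * f) for k, f in fractions.items()}
assert abs(sum(fractions.values()) - 1.0) < 1e-9  # the balance closes at 100%
print(volumes["evapotranspiration"])  # → 2560 Mm3/y (abstract rounds to 2500)
```

The exact 40% share is 2560 Mm3/y; the abstract's 2500 Mm3/y reflects rounding of the reported percentages.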
Shivanandan, Arun; Unnikrishnan, Jayakrishnan; Radenovic, Aleksandra
2015-01-01
Single Molecule Localization Microscopy techniques like PhotoActivated Localization Microscopy, with their sub-diffraction limit spatial resolution, have been popularly used to characterize the spatial organization of membrane proteins, by means of quantitative cluster analysis. However, such quantitative studies remain challenged by the techniques' inherent sources of errors, such as a limited detection efficiency of less than 60%, due to incomplete photo-conversion, and a limited localization precision in the range of 10–30 nm, varying across the detected molecules, mainly depending on the number of photons collected from each. We provide analytical methods to estimate the effect of these errors in cluster analysis and to correct for them. These methods, based on Ripley's L(r) – r or the Pair Correlation Function popularly used by the community, can facilitate potentially breakthrough results in quantitative biology by providing a more accurate and precise quantification of protein spatial organization. PMID:25794150
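The Ripley's L(r) – r statistic referenced above can be sketched directly from pairwise distances. The version below is a naive estimator without the edge corrections or error corrections the paper develops; the point pattern is simulated, not real localization data:

```python
import numpy as np

def ripley_L_minus_r(points, r, area):
    """Naive Ripley's L(r) - r for a 2D point pattern (no edge correction):
    K(r) = area * mean number of neighbors within r / (n - 1),
    L(r) = sqrt(K / pi); L(r) - r > 0 suggests clustering at scale r."""
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = (d < r).sum() - n          # subtract the n self-pairs
    K = area * neighbors / (n * (n - 1))
    return np.sqrt(K / np.pi) - r

# complete spatial randomness in a 1000 x 1000 nm field of view:
# L(r) - r should be near 0 (slightly negative without edge correction)
rng = np.random.default_rng(0)
pts = rng.uniform(0, 1000, size=(1000, 2))
print(ripley_L_minus_r(pts, 50.0, 1000.0**2))
```

A tightly clustered pattern drives L(r) – r strongly positive, which is the qualitative signature the quantitative cluster analyses above build on.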
Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?
Ershadi, Saba; Shayanfar, Ali
2018-03-22
The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium sensitized analysis methods were calculated by different methods, and the results were compared with sensitivity parameters [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of LOD and LOQ values calculated by various methods and LLOQ shows a considerable difference. The significant difference of the calculated LOD and LOQ with various methods and LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
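One common calculation compared in studies like the one above is the ICH-style estimate from the blank standard deviation and the calibration-curve slope. A minimal sketch (the blank SD and slope values are hypothetical):

```python
def lod_loq(sd_blank, slope):
    """ICH-style limits from the standard deviation of blank responses
    and the calibration-curve slope: LOD = 3.3*sd/slope, LOQ = 10*sd/slope."""
    return 3.3 * sd_blank / slope, 10 * sd_blank / slope

# hypothetical: blank SD = 0.02 response units, slope = 0.5 units per (ng/mL)
lod, loq = lod_loq(0.02, 0.5)
print(round(lod, 3), round(loq, 3))  # → 0.132 0.4  (ng/mL)
```

As the abstract notes, other definitions (e.g. an LLOQ established from accuracy and precision at the lowest calibration level) can give considerably different values, which is exactly the discrepancy the study quantifies.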
Whole farm quantification of GHG emissions within smallholder farms in developing countries
NASA Astrophysics Data System (ADS)
Seebauer, Matthias
2014-03-01
The IPCC has compiled the best available scientific methods into published guidelines for estimating greenhouse gas emissions and emission removals from the land-use sector. In order to evaluate whether existing GHG quantification tools can comprehensively quantify GHG emissions and removals under smallholder conditions, farm-scale quantification was tested with farm data from Western Kenya. After conducting a cluster analysis to identify different farm typologies, GHG quantification was performed using the VCS SALM methodology complemented with IPCC livestock emission factors and the Cool Farm Tool. The emission profiles of four farm clusters representing the baseline conditions in the year 2009 are compared with 2011, when farmers had adopted sustainable land management practices (SALM). The results demonstrate the variation in both the magnitude of the estimated GHG emissions per ha between different smallholder farm typologies and the emissions estimated by applying two different accounting tools. The farm-scale quantification further shows that the adoption of SALM has a significant impact on emission reductions and removals: the mitigation benefits range between 4 and 6.5 tCO2 ha-1 yr-1, differing significantly depending on the typologies of the crop-livestock systems, their agricultural practices, and the adoption rates of improved practices. However, the inherent uncertainty related to the emission factors applied by accounting tools has substantial implications for reported agricultural emissions. With regard to uncertainty related to activity data, the assessment confirms the high variability within different farm types as well as between different parameters surveyed to comprehensively quantify GHG emissions within smallholder farms.
An open tool for input function estimation and quantification of dynamic PET FDG brain scans.
Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro
2016-08-01
Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. 
The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
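The graphical analysis mentioned above for dynamic FDG quantification typically refers to Patlak analysis, which fits a line to transformed tissue and plasma curves to estimate the influx constant Ki. A sketch on synthetic data (the constant plasma input and parameter values are illustrative, not the tool's implementation):

```python
import numpy as np

def patlak_ki(t, cp, ct, t_star=20.0):
    """Patlak graphical analysis: after the equilibration time t_star,
    Ct(t)/Cp(t) plotted against (integral_0^t Cp du)/Cp(t) is linear,
    with slope Ki, the FDG influx constant."""
    # cumulative trapezoidal integral of the plasma input function
    icp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
    mask = t >= t_star
    x = icp[mask] / cp[mask]
    y = ct[mask] / cp[mask]
    ki, _ = np.polyfit(x, y, 1)
    return ki

# synthetic check: idealized constant input, Ct = Ki*integral(Cp) + V0*Cp
t = np.linspace(0.0, 60.0, 121)        # minutes
cp = np.full_like(t, 10.0)             # kBq/mL plasma activity
ct = 0.05 * 10.0 * t + 0.2 * cp        # true Ki = 0.05 /min, V0 = 0.2
print(round(patlak_ki(t, cp, ct), 4))  # → 0.05
```

The quality of the recovered Ki depends entirely on the input function Cp, which is why the tool's focus on image-derived and population-based input functions matters for the final quantification map.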
Naveen, P.; Lingaraju, H. B.; Prasad, K. Shyam
2017-01-01
Mangiferin, a polyphenolic xanthone glycoside from Mangifera indica, is used as traditional medicine for the treatment of numerous diseases. The present study aimed to develop and validate a reversed-phase high-performance liquid chromatography (RP-HPLC) method for the quantification of mangiferin from the bark extract of M. indica. RP-HPLC analysis was performed by isocratic elution with a low-pressure gradient using 0.1% formic acid:acetonitrile (87:13) as the mobile phase with a flow rate of 1.5 ml/min. The separation was done at 26°C using a Kinetex XB-C18 column as the stationary phase with a detection wavelength of 256 nm. The proposed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification, and robustness by the International Conference on Harmonisation guidelines. In linearity, the excellent correlation coefficient of more than 0.999 indicated good fitting of the curve and good linearity. The intra- and inter-day precision showed <1% relative standard deviation of peak area, indicating high reliability and reproducibility of the method. The recovery values at three different spiked levels (50%, 100%, and 150%) were found to be 100.47, 100.89, and 100.99, respectively, and the low standard deviation (<1%) shows high accuracy of the method. In robustness, the results remained unaffected by small variations in the analytical parameters. Liquid chromatography–mass spectrometry analysis confirmed the presence of mangiferin with an m/z value of 421. The developed HPLC assay is simple, rapid, and reliable for the determination of mangiferin from M. indica. SUMMARY The present study was intended to develop and validate an RP-HPLC method for the quantification of mangiferin from the bark extract of M. indica. The developed method was validated for linearity, precision, accuracy, limit of detection, limit of quantification and robustness by the International Conference on Harmonisation guidelines. This study proved that the developed HPLC assay is simple, rapid and reliable for the quantification of mangiferin from M. indica. Abbreviations Used: M. indica: Mangifera indica, RP-HPLC: Reversed-phase high-performance liquid chromatography, M/Z: Mass to charge ratio, ICH: International Conference on Harmonisation, % RSD: Percentage of relative standard deviation, ppm: Parts per million, LOD: Limit of detection, LOQ: Limit of quantification. PMID:28539748
Use of COD, TOC, and Fluorescence Spectroscopy to Estimate BOD in Wastewater.
Christian, Evelyn; Batista, Jacimaria R; Gerrity, Daniel
2017-02-01
Common to all National Pollutant Discharge Elimination System (NPDES) permits in the United States is a limit on biochemical oxygen demand (BOD). Chemical oxygen demand (COD), total organic carbon (TOC), and fluorescence spectroscopy are also capable of quantifying organic content, although the mechanisms of quantification and the organic fractions targeted differ for each test. This study explores correlations between BOD5 and these alternate test procedures using facility influent, primary effluent, and facility effluent samples from a full-scale water resource recovery facility. Relative reductions of the water quality parameters proved to be strong indicators of their suitability as surrogates for BOD5. Suitable correlations were generally limited to the combined datasets for the three sampling locations or the facility effluent alone. COD exhibited relatively strong linear correlations with BOD5 when considering the three sample points (r = 0.985) and the facility effluent alone (r = 0.914), while TOC exhibited a suitable linear correlation with BOD5 in the facility effluent (r = 0.902). Exponential regressions proved to be useful for estimating BOD5 based on TOC or fluorescence (r > 0.95).
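The linear COD-to-BOD5 correlations described above reduce to fitting a least-squares line and reporting the Pearson r. A minimal sketch (the paired COD/BOD5 values below are hypothetical, not the facility's data):

```python
import numpy as np

def pearson_r_and_fit(x, y):
    """Least-squares line y = a*x + b and the Pearson correlation r,
    as used to relate a surrogate such as COD (x) to BOD5 (y)."""
    a, b = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]
    return a, b, r

# hypothetical paired COD / BOD5 measurements (mg/L)
cod = np.array([300.0, 220.0, 150.0, 80.0, 30.0])
bod = np.array([160.0, 118.0, 77.0, 42.0, 15.0])
a, b, r = pearson_r_and_fit(cod, bod)
print(round(r, 3))  # r close to 1 for these near-linear data
```

Once such a regression is established for a given sample point, a COD or TOC measurement, which takes hours rather than the five days of a BOD5 test, can serve as the operational surrogate.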
Sahoo, Madhusmita; Syal, Pratima; Hable, Asawaree A; Raut, Rahul P; Choudhari, Vishnu P; Kuchekar, Bhanudas S
2011-07-01
To develop a simple, precise, rapid and accurate HPTLC method for the simultaneous estimation of Lornoxicam (LOR) and Thiocolchicoside (THIO) in bulk and pharmaceutical dosage forms. The separation of the active compounds from the pharmaceutical dosage form was carried out using methanol:chloroform:water (9.6:0.2:0.2 v/v/v) as the mobile phase, and no immiscibility issues were found. The densitometric scanning was carried out at 377 nm. The method was validated for linearity, accuracy, precision, LOD (Limit of Detection), LOQ (Limit of Quantification), robustness and specificity. The Rf values (±SD) were found to be 0.84 ± 0.05 for LOR and 0.58 ± 0.05 for THIO. Linearity was obtained in the range of 60-360 ng/band for LOR and 30-180 ng/band for THIO, with correlation coefficients r² = 0.998 and 0.999, respectively. The percentage recovery for both analytes was in the range of 98.7-101.2%. The proposed method was optimized and validated as per the ICH guidelines.
Advances in targeted proteomics and applications to biomedical research
Shi, Tujin; Song, Ehwang; Nie, Song; Rodland, Karin D.; Liu, Tao; Qian, Wei-Jun; Smith, Richard D.
2016-01-01
Targeted proteomics has emerged as a powerful protein quantification tool in systems biology and biomedical research, and increasingly for clinical applications. The most widely used targeted proteomics approach, selected reaction monitoring (SRM), also known as multiple reaction monitoring (MRM), can be used for quantification of cellular signaling networks and preclinical verification of candidate protein biomarkers. As an extension to our previous review on advances in SRM sensitivity, herein we review recent advances in method and technology for further enhancing SRM sensitivity (from 2012 to present), and highlight its broad biomedical applications in human bodily fluids, tissue and cell lines. Furthermore, we also review two recently introduced targeted proteomics approaches, parallel reaction monitoring (PRM) and data-independent acquisition (DIA) with targeted data extraction on fast-scanning high-resolution accurate-mass (HR/AM) instruments. Such HR/AM targeted quantification with monitoring of all target product ions effectively addresses SRM limitations in specificity and multiplexing, whereas, compared to SRM, PRM and DIA are still in their infancy with a limited number of applications. Thus, for HR/AM targeted quantification we focus our discussion on method development, data processing and analysis, and its advantages and limitations in targeted proteomics. Finally, general perspectives on the potential of achieving both high sensitivity and high sample throughput for large-scale quantification of hundreds of target proteins are discussed. PMID:27302376
NASA Astrophysics Data System (ADS)
Santabarbara, Ignacio; Haas, Edwin; Kraus, David; Herrera, Saul; Klatt, Steffen; Kiese, Ralf
2014-05-01
When using biogeochemical models to estimate greenhouse gas emissions at site to regional/national levels, the assessment and quantification of the uncertainties of simulation results are of significant importance. The uncertainties in simulation results of process-based ecosystem models may result from uncertainties of the process parameters that describe the processes of the model, model structure inadequacy, as well as uncertainties in the observations. Data for development and testing of the uncertainty analysis were crop yield observations and measurements of soil fluxes of nitrous oxide (N2O) and carbon dioxide (CO2) from 8 arable sites across Europe. Using the process-based biogeochemical model LandscapeDNDC for simulating crop yields, N2O and CO2 emissions, our aim is to assess the simulation uncertainty by setting up a Bayesian framework based on the Metropolis-Hastings algorithm. Using the Gelman convergence criteria and parallel computing techniques, we enable multiple Markov chains to run independently in parallel, creating random walks that estimate the joint model parameter distribution. From this distribution we limit the parameter space, obtain probabilities of parameter values, and uncover the complex dependencies among them. With this parameter distribution, which determines soil-atmosphere C and N exchange, we are able to obtain the parameter-induced uncertainty of simulation results and compare it with the measurement data.
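The Metropolis-Hastings random walk at the heart of the framework above can be sketched in a few lines. This is a generic one-dimensional illustration on a toy posterior, not the LandscapeDNDC parameter setup:

```python
import numpy as np

def metropolis_hastings(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step) and
    accept with probability min(1, exp(log_post(x') - log_post(x)))."""
    rng = np.random.default_rng(seed)
    chain, x, lp = [], x0, log_post(x0)
    for _ in range(n_steps):
        xp = x + rng.normal(0.0, step)
        lpp = log_post(xp)
        if np.log(rng.uniform()) < lpp - lp:  # accept or keep current state
            x, lp = xp, lpp
        chain.append(x)
    return np.array(chain)

# toy posterior: a standard normal; after burn-in the chain mean is near 0
chain = metropolis_hastings(lambda x: -0.5 * x * x, x0=3.0, n_steps=20000)
print(round(chain[5000:].mean(), 2))  # close to 0
```

In the multi-chain setting of the abstract, several such walks start from dispersed points and the Gelman-Rubin statistic compares within-chain and between-chain variance to decide convergence.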
Chalova, Vesela I.; Froelich, Clifford A.; Ricke, Steven C.
2010-01-01
Methionine is an essential amino acid for animals and is typically considered one of the first limiting amino acids in animal feed formulations. Methionine deficiency or excess in animal diets can lead to sub-optimal animal performance and increased environmental pollution, which necessitates its accurate quantification and proper dosage in animal rations. Animal bioassays are the current industry standard to quantify methionine bioavailability. However, animal-based assays are not only time consuming and expensive but are also coming under increasing scrutiny from governmental regulations. In addition, a variety of artifacts can compromise the reliability and time efficiency of these assays. Microbiological assays, which are based on a microbial response to external supplementation of a particular nutrient such as methionine, appear to be attractive potential alternatives to the already established standards. They are rapid and inexpensive in vitro assays characterized by relatively accurate and consistent estimation of digestible methionine in feeds and feed ingredients. The current review discusses the potential to develop Escherichia coli-based microbial biosensors for methionine bioavailability quantification. Methionine biosynthesis and regulation pathways are overviewed in relation to the genetic manipulation required for the generation of a respective methionine auxotroph that could be practical for a routine bioassay. A prospective utilization of an Escherichia coli methionine biosensor would allow for inexpensive and rapid methionine quantification and ultimately enable timely assessment of nutritional profiles of feedstuffs. PMID:22319312
Domènech, Albert; Cortés-Francisco, Nuria; Palacios, Oscar; Franco, José M; Riobó, Pilar; Llerena, José J; Vichi, Stefania; Caixach, Josep
2014-02-07
A multitoxin method has been developed for quantification and confirmation of lipophilic marine biotoxins in mussels by liquid chromatography coupled to high resolution mass spectrometry (HRMS), using an Orbitrap-Exactive HCD mass spectrometer. Okadaic acid (OA), yessotoxin, azaspiracid-1, gymnodimine, 13-desmethyl spirolide C, pectenotoxin-2 and brevetoxin B were analyzed as representative compounds of each lipophilic toxin group. HRMS identification and confirmation criteria were established. Fragment and isotope ions and ion ratios were studied and evaluated for confirmation purposes. In-depth characterization of the full scan and fragmentation spectra of the main toxins was carried out. Accuracy (trueness and precision), linearity, calibration curve check, limit of quantification (LOQ) and specificity were the parameters established for the method validation. The validation was performed at 0.5 times the current European Union permitted levels. The method performed very well for the parameters investigated. The trueness, expressed as recovery, ranged from 80% to 94%; the precision, expressed as intralaboratory reproducibility, ranged from 5% to 22%; and the LOQs ranged from 0.9 to 4.8 pg on column. The uncertainty of the method was also estimated for OA, using a certified reference material. A top-down approach was used, considering two main contributions: those arising from the trueness studies and those from the precision determination. An overall expanded uncertainty of 38% was obtained. Copyright © 2014 Elsevier B.V. All rights reserved.
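The top-down uncertainty estimate described above combines the trueness and precision contributions in quadrature and applies a coverage factor. A sketch of that arithmetic (the 12%/15% component values are hypothetical, chosen only to illustrate how an expanded uncertainty near the reported 38% can arise):

```python
import math

def expanded_uncertainty(u_trueness, u_precision, k=2.0):
    """Top-down combined uncertainty: relative standard uncertainties from
    trueness and precision combined in quadrature, then scaled by a
    coverage factor k (k = 2 for roughly 95% confidence)."""
    u_c = math.sqrt(u_trueness**2 + u_precision**2)
    return k * u_c

# hypothetical relative standard uncertainties expressed as fractions
print(round(expanded_uncertainty(0.12, 0.15) * 100))  # → 38 (% expanded)
```

Quadrature addition is appropriate because the two contributions are treated as independent; neglecting that independence and adding them linearly would overstate the uncertainty.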
NASA Technical Reports Server (NTRS)
Korram, S.
1977-01-01
The design of general remote sensing-aided methodologies was studied to provide estimates of several important inputs to water yield forecast models. These input parameters are snow area extent, snow water content, and evapotranspiration. The study area is the Feather River Watershed (780,000 hectares) in Northern California. The general approach involved a stepwise sequence of identification of the required information, sample design, measurement/estimation, and evaluation of results. All the relevant and available information types needed in the estimation process were defined. These include Landsat, meteorological satellite, and aircraft imagery, topographic and geologic data, ground truth data, and climatic data from ground stations. A cost-effective multistage sampling approach was employed in quantifying all the required parameters. Physical and statistical models for both snow quantification and evapotranspiration estimation were developed. These models use information obtained from aerial and ground data through an appropriate statistical sampling design.
NASA Astrophysics Data System (ADS)
Campos, M. S. G.; Sarkis, J. E. S.
2018-03-01
This study presents a new analytical methodology for the determination of 11 compounds present in ethanol samples by gas chromatography coupled to mass spectrometry (GC-MS), using a medium-polarity chromatography column composed of 6% cyanopropyl-phenyl and 94% dimethyl polysiloxane. The validation parameters were determined according to NBR ISO 17025:2005. The recovery rates of the studied compounds ranged from 100.4% to 114.7%. The limits of quantification are between 2.4 mg kg(-1) and 5.8 mg kg(-1). The measurement uncertainty was estimated at approximately 8%.
Vadas, P A; Good, L W; Moore, P A; Widman, N
2009-01-01
Nonpoint-source pollution of fresh waters by P is a concern because it contributes to accelerated eutrophication. Given the state of the science concerning agricultural P transport, a simple tool to quantify annual, field-scale P loss is a realistic goal. We developed new methods to predict annual dissolved P loss in runoff from surface-applied manures and fertilizers and validated the methods with data from 21 published field studies. We incorporated these manure and fertilizer P runoff loss methods into an annual, field-scale P loss quantification tool that estimates dissolved and particulate P loss in runoff from soil, manure, fertilizer, and eroded sediment. We validated the P loss tool using independent data from 28 studies that monitored P loss in runoff from a variety of agricultural land uses for at least 1 yr. Results demonstrated (i) that our new methods to estimate P loss from surface manure and fertilizer are an improvement over methods used in existing Indexes, and (ii) that it was possible to reliably quantify annual dissolved, sediment, and total P loss in runoff using relatively simple methods and readily available inputs. Thus, a P loss quantification tool that does not require greater degrees of complexity or input data than existing P Indexes could accurately predict P loss across a variety of management and fertilization practices, soil types, climates, and geographic locations. However, estimates of runoff and erosion are still needed that are accurate to a level appropriate for the intended use of the quantification tool.
Browne, Richard W; Whitcomb, Brian W
2010-07-01
Problems in the analysis of laboratory data commonly arise in epidemiologic studies in which biomarkers subject to lower detection thresholds are used. Various thresholds exist, including the limit of detection (LOD), limit of quantification (LOQ), and limit of blank (LOB). Choosing appropriate strategies for dealing with data affected by such limits relies on a proper understanding of the nature of the detection limit and its determination. In this paper, we demonstrate experimental and statistical procedures generally used for estimating different detection limits according to standard procedures, in the context of the analysis of fat-soluble vitamins and micronutrients in human serum. Fat-soluble vitamins and micronutrients were analyzed by high-performance liquid chromatography with diode array detection. A simulated serum matrix blank was repeatedly analyzed for determination of the LOB, both parametrically, using the observed blank distribution, and nonparametrically, using ranks. The LOD was determined by combining information on the LOB with data from repeated analysis of standard reference materials (SRMs) diluted to low levels, from the LOB to 2-3 times the LOB. The LOQ was determined experimentally by plotting the observed relative standard deviation (RSD) of SRM replicates against concentration; the LOQ is the concentration at which the RSD reaches 20%. Experimental approaches and example statistical procedures are given for the determination of LOB, LOD, and LOQ, and these quantities are reported for each measured analyte. For many analyses, there is considerable information available below the LOQ. Epidemiologic studies must account for the nature of these detection limits and how they were estimated in order to treat affected data appropriately.
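The detection-limit hierarchy described above can be sketched numerically. The following minimal example (synthetic numbers, not the paper's serum data) estimates the LOB both parametrically and by ranks, derives an LOD from the spread of low-level replicates, and reads the LOQ off a precision profile at 20% RSD:

```python
import numpy as np

def lob_parametric(blanks, z=1.645):
    # LOB = mean(blank) + 1.645*SD(blank): 95th percentile under normality
    return float(np.mean(blanks) + z * np.std(blanks, ddof=1))

def lob_nonparametric(blanks, q=0.95):
    # rank-based 95th percentile of the observed blank distribution
    return float(np.quantile(blanks, q))

def lod_from_lob(lob, low_level_sd, z=1.645):
    # LOD = LOB + 1.645*SD of replicates of a low-concentration sample
    return lob + z * low_level_sd

def loq_at_20pct_rsd(concentrations, rsds, target=20.0):
    # interpolate the precision profile to the concentration at 20% RSD
    order = np.argsort(rsds)
    return float(np.interp(target, np.asarray(rsds)[order],
                           np.asarray(concentrations)[order]))
```

For example, `loq_at_20pct_rsd([10, 5, 2, 1], [8, 15, 25, 40])` interpolates between the 15% and 25% RSD calibration points to place the LOQ.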
Hingmire, Sandip; Oulkar, Dasharath P; Utture, Sagar C; Ahammed Shabeer, T P; Banerjee, Kaushik
2015-06-01
A liquid chromatography tandem mass spectrometry (LC-MS/MS) based method is reported for simultaneous analysis of fipronil (plus its metabolites) and difenoconazole residues in okra. The sample preparation method, involving extraction with ethyl acetate, provided 80-107% recoveries for both pesticides, with precision (RSD) within 4-17% estimated at the limits of quantification (LOQ; fipronil = 1 ng g(-1), difenoconazole = 5 ng g(-1)) and at higher fortification levels. In the field, both pesticides dissipated with a half-life of 2.5 days. The estimated pre-harvest intervals (PHI) at single and double field-application doses were 15 and 19.5 days for fipronil and 4 and 6.5 days for difenoconazole, respectively. Decontamination of incurred residues by washing and different cooking treatments was quite efficient in reducing the residue load of both chemicals. Okra samples harvested after the estimated PHIs were found safe for human consumption. Copyright © 2014 Elsevier Ltd. All rights reserved.
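Dissipation half-lives and pre-harvest intervals of the kind reported above are usually derived from a first-order decay model. A small sketch under that assumption (the residue level and MRL below are hypothetical, not the okra data):

```python
import math

def first_order_k(half_life_days):
    # C(t) = C0 * exp(-k*t)  =>  k = ln(2) / t_half
    return math.log(2.0) / half_life_days

def preharvest_interval(c0, mrl, half_life_days):
    """Days needed for residues to decay from c0 down to the maximum
    residue limit (MRL), assuming first-order dissipation."""
    return math.log(c0 / mrl) / first_order_k(half_life_days)

# Hypothetical: 2.0 mg/kg initial residue, 0.05 mg/kg MRL, 2.5 d half-life
print(round(preharvest_interval(2.0, 0.05, 2.5), 1))
```

Equivalently, the PHI is the number of half-lives needed to reach the MRL, times the half-life.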
Spainhour, John Christian G; Janech, Michael G; Schwacke, John H; Velez, Juan Carlos Q; Ramakrishnan, Viswanathan
2014-01-01
Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry coupled with stable isotope standards (SIS) has been used to quantify native peptides. This MALDI-TOF quantification approach has difficulty with samples containing peptides whose ion currents produce overlapping spectra. In these overlapping spectra the currents sum together, which modifies the peak heights and makes normal SIS estimation problematic. An approach using Gaussian mixtures based on known physical constants to model the isotopic cluster of a known compound is proposed here. The characteristics of this approach are examined for single and overlapping compounds. The approach is compared with two commonly used SIS quantification methods for single compounds, namely the peak intensity method and the Riemann sum area under the curve (AUC) method. To study the characteristics of the Gaussian mixture method, Angiotensin II, Angiotensin-2-10, and Angiotensin-1-9 and their associated SIS peptides were used. The findings suggest that the Gaussian mixture method has characteristics similar to the two comparison methods for estimating the quantity of isolated isotopic clusters of single compounds. All three methods were tested using MALDI-TOF mass spectra collected for peptides of the renin-angiotensin system. The Gaussian mixture method accurately estimated the native-to-labeled ratio of several isolated angiotensin peptides (5.2% error in ratio estimation), with estimation errors similar to those calculated using the peak intensity and Riemann sum AUC methods (5.9% and 7.7%, respectively). For overlapping angiotensin peptides (where the other two methods are not applicable), the estimation error of the Gaussian mixture method was 6.8%, which is within the acceptable range.
In summary, for single compounds the Gaussian mixture method is equivalent or marginally superior to the existing methods of peptide quantification, and it is capable of quantifying overlapping (convolved) peptides within an acceptable margin of error.
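The core idea, modelling each isotopic cluster as a mixture of Gaussians at known isotope spacings and fitting the cluster amplitudes jointly, can be sketched as follows (peak positions, widths, and isotope abundances below are illustrative assumptions, not the paper's values):

```python
import numpy as np

ISOTOPE_SPACING = 1.00335  # average mass difference between isotope peaks, Da

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def cluster_basis(x, mono_mz, abundances, sigma=0.02):
    """Isotopic cluster modelled as a sum of Gaussians with fixed
    relative abundances at the known isotope positions."""
    return sum(a * gaussian(x, mono_mz + i * ISOTOPE_SPACING, sigma)
               for i, a in enumerate(abundances))

def native_to_labeled_ratio(x, spectrum, mono_native, mono_labeled, abundances):
    """Jointly fit the amplitudes of two (possibly overlapping) clusters
    by linear least squares and return native/labeled."""
    A = np.column_stack([cluster_basis(x, mono_native, abundances),
                         cluster_basis(x, mono_labeled, abundances)])
    amps, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
    return amps[0] / amps[1]
```

Because the isotope positions and relative abundances are fixed by physical constants, the amplitudes remain identifiable by the joint fit even when the native cluster's higher isotope peaks interleave with the labeled cluster's monoisotopic peak.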
Yoshinaga, Kazuaki; Obi, Junji; Nagai, Toshiharu; Iioka, Hiroyuki; Yoshida, Akihiko; Beppu, Fumiaki; Gotoh, Naohiro
2017-03-01
In the present study, the resolution parameters and correction factors (CFs) of triacylglycerol (TAG) standards were estimated by gas chromatography with flame ionization detection (GC-FID) to achieve precise quantification of the TAG composition of edible fats and oils. Forty-seven TAG standards comprising capric acid, lauric acid, myristic acid, pentadecanoic acid, palmitic acid, palmitoleic acid, stearic acid, oleic acid, linoleic acid, and/or linolenic acid were analyzed, and the CFs of these TAGs were obtained against tripentadecanoyl glycerol as the internal standard. The capillary column was an Ultra ALLOY+-65 (30 m × 0.25 mm i.d., 0.10 μm film thickness), and the column temperature was programmed to rise from 250°C to 360°C at 4°C/min and then hold for 25 min. The limit of detection (LOD) and limit of quantification (LOQ) values of the TAG standards were below 0.10 mg and 0.32 mg per 100 mg of fat and oil, respectively, except for LnLnLn, whose LOD and LOQ were 0.55 mg and 1.84 mg per 100 mg, respectively. The CFs of the TAG standards decreased with increasing total acyl carbon number and degree of unsaturation of the TAG molecules. Also, there were no remarkable differences in the CFs between TAG positional isomers such as 1-palmitoyl-2-oleoyl-3-stearoyl-rac-glycerol, 1-stearoyl-2-palmitoyl-3-oleoyl-rac-glycerol, and 1-palmitoyl-2-stearoyl-3-oleoyl-rac-glycerol, which cannot be separated by GC-FID. Furthermore, this method was able to predict the CFs of heterogeneous (AAB- and ABC-type) TAGs from the CFs of homogeneous (AAA-, BBB-, and CCC-type) TAGs. In addition, the TAG compositions of cocoa butter, palm oil, and canola oil were determined using the CFs, and the results were in good agreement with those reported in the literature. Therefore, the GC-FID method using CFs can be successfully used for the quantification of TAG molecular species in natural fats and oils.
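Internal-standard quantification with correction factors, and the prediction of a mixed-acyl TAG's CF from homogeneous TAG CFs, can be sketched as follows. The simple averaging rule and all numeric values here are illustrative assumptions, not the paper's calibration:

```python
def quantify_tag(peak_area, is_area, is_amount_mg, cf):
    """Internal-standard quantification with a correction factor:
    amount = CF * (analyte peak area / IS peak area) * IS amount."""
    return cf * (peak_area / is_area) * is_amount_mg

def predict_cf_mixed(cf_aaa, cf_bbb, cf_ccc):
    # assumption: approximate an ABC-type TAG's CF by averaging the CFs
    # of the three homogeneous TAGs of its constituent fatty acids
    return (cf_aaa + cf_bbb + cf_ccc) / 3.0
```

With a hypothetical CF of 1.1 and a 10 mg internal-standard spike, a TAG peak twice the IS area would be quantified as 22 mg.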
Soneja, Sutyajeet I; Tielsch, James M; Khatry, Subarna K; Curriero, Frank C; Breysse, Patrick N
2016-03-01
Black carbon (BC) is a major contributor to hydrological cycle change and glacial retreat within the Indo-Gangetic Plain (IGP) and surrounding region. However, significant variability exists among estimates of regional BC concentration. Existing inventories within the IGP suffer from limited representation of rural sources, reliance on idealized point-source estimates (e.g., utilization of emission factors or fuel-use estimates for cooking along with demographic information), and difficulty in distinguishing sources. Inventory development follows two approaches, termed top-down and bottom-up, which rely on various sources including transport models, emission factors, and remote sensing applications. Large discrepancies exist in BC source attribution throughout the IGP depending on the approach utilized. Cooking with biomass fuels, a major contributor to BC production, shows great variability in source apportionment. Recognized areas of cookstove and biomass-fuel research that would improve emission inventory estimates include emission factors, particulate matter speciation, and better quantification of regional/economic sectors. However, limited attention has been given to understanding small-scale spatial variation of ambient BC between cooking and non-cooking periods in low-resource environments. Understanding the indoor-to-outdoor relationship of BC emissions due to cooking at a local level is a top priority for improving emission inventories, as many health and climate applications rely upon accurate emission inventories.
Linscheid, Michael W
2018-03-30
To understand biological processes, not only reliable identification but also quantification of their constituents plays a pivotal role. This is especially true for the proteome: protein quantification must follow protein identification, since sometimes minute changes in abundance tell the real tale. To obtain quantitative data, many sophisticated strategies using electrospray and MALDI mass spectrometry (MS) have been developed in recent years. All of them have advantages and limitations. Several years ago, we started to work on strategies that are in principle capable of overcoming some of these limits. The fundamental idea is to use elemental signals as a measure of quantities. We began by replacing the radioactive 32P with the "cold" natural 31P to quantify modified nucleotides and phosphorylated peptides and proteins, and later used tagging strategies for quantification of proteins more generally. To do this, we introduced inductively coupled plasma mass spectrometry (ICP-MS) into bioanalytical workflows, allowing not only reliable and sensitive detection but also quantification based on isotope-dilution absolute measurements using poly-isotopic elements. The detection capability of ICP-MS becomes particularly attractive with heavy metals. The covalently bound protein tags developed in our group are based on the well-known DOTA chelate complex (1,4,7,10-tetraazacyclododecane-N,N',N″,N‴-tetraacetic acid) carrying lanthanoide ions as the metal core. In this review, I outline the development of this mutual assistance between molecular and elemental mass spectrometry and discuss the scope and limitations, particularly for peptide and protein quantification. The lanthanoide tags provide low detection limits and offer multiplexing capabilities due to the number of very similar lanthanoides and their isotopes. With isotope dilution comes previously unattainable accuracy.
Separation techniques such as electrophoresis and HPLC were used with only slightly adapted workflows already in use for quantification in bioanalysis. Imaging mass spectrometry (MSI) with MALDI and laser ablation ICP-MS has complemented the range of applications in recent years. © 2018 Wiley Periodicals, Inc.
A Spanish model for quantification and management of construction waste.
Solís-Guzmán, Jaime; Marrero, Madelyn; Montes-Delgado, Maria Victoria; Ramírez-de-Arellano, Antonio
2009-09-01
Currently, construction and demolition waste (C&D waste) is a worldwide issue that concerns not only governments but also the building actors involved in construction activity. In Spain, a new national decree has been regulating the production and management of C&D waste since February 2008. The present work describes the waste management model that has inspired this decree: the Alcores model implemented with good results in Los Alcores Community (Seville, Spain). A detailed model is also provided to estimate the volume of waste that is expected to be generated on the building site. The quantification of C&D waste volume, from the project stage, is essential for the building actors to properly plan and control its disposal. This quantification model has been developed by studying 100 dwelling projects, especially their bill of quantities, and defining three coefficients to estimate the demolished volume (CT), the wreckage volume (CR) and the packaging volume (CE). Finally, two case studies are included to illustrate the usefulness of the model to estimate C&D waste volume in both new construction and demolition projects.
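The three coefficients of the model translate directly into a simple estimator of on-site waste volumes. A sketch with placeholder coefficient values (the real CT, CR and CE are calibrated from the 100 studied dwelling projects, and are not reproduced here):

```python
def estimate_cd_waste(built_area_m2, ct, cr, ce):
    """Estimate C&D waste volumes (m3) from constructed floor area,
    using per-m2 coefficients for the demolished volume (CT), the
    wreckage volume (CR) and the packaging volume (CE). The coefficient
    values passed in below are placeholders, not the calibrated figures."""
    demolished = ct * built_area_m2
    wreckage = cr * built_area_m2
    packaging = ce * built_area_m2
    return {"demolished": demolished, "wreckage": wreckage,
            "packaging": packaging,
            "total": demolished + wreckage + packaging}

waste = estimate_cd_waste(1200.0, ct=0.01, cr=0.08, ce=0.03)
```

Estimating these volumes from the bill of quantities at the project stage is what lets builders plan disposal before work begins.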
Smith, Kirsty F; de Salas, Miguel; Adamson, Janet; Rhodes, Lesley L
2014-03-07
The identification of toxin-producing dinoflagellates for monitoring programmes and bio-compound discovery requires considerable taxonomic expertise. It can also be difficult to morphologically differentiate toxic and non-toxic species or strains. Various molecular methods have been used for dinoflagellate identification and detection, and this study describes the development of eight real-time polymerase chain reaction (PCR) assays targeting the large subunit ribosomal RNA (LSU rRNA) gene of species from the genera Gymnodinium, Karenia, Karlodinium, and Takayama. Assays proved to be highly specific and sensitive, and the assay for G. catenatum was further developed for quantification in response to a bloom in Manukau Harbour, New Zealand. The assay estimated cell densities from environmental samples as low as 0.07 cells per PCR reaction, which equated to three cells per litre. This assay not only enabled conclusive species identification but also detected the presence of cells below the limit of detection for light microscopy. This study demonstrates the usefulness of real-time PCR as a sensitive and rapid molecular technique for the detection and quantification of micro-algae from environmental samples.
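Quantification by real-time PCR typically rests on a log-linear standard curve relating quantification cycle (Cq) to template amount, then scaling from the reaction aliquot up to the sampled water volume. A sketch with made-up curve parameters and volumes (not the Manukau Harbour calibration):

```python
import numpy as np

def fit_standard_curve(cells_per_rxn, cq):
    """Fit Cq = slope*log10(N) + intercept; amplification efficiency
    follows as E = 10**(-1/slope) - 1 (E = 1 means perfect doubling)."""
    slope, intercept = np.polyfit(np.log10(cells_per_rxn), cq, 1)
    return slope, intercept, 10.0 ** (-1.0 / slope) - 1.0

def cells_from_cq(cq, slope, intercept):
    # invert the standard curve for an unknown sample
    return 10.0 ** ((cq - intercept) / slope)

def cells_per_litre(cells_per_rxn, template_ul, extract_ul, sample_l):
    # scale the per-reaction estimate up to the whole filtered sample
    return cells_per_rxn * (extract_ul / template_ul) / sample_l
```

With an assumed 2.5 µL template aliquot from a 100 µL DNA extract of a 1 L water sample, 0.07 cells per reaction corresponds to 2.8 cells per litre, the same order of magnitude as the three cells per litre reported above; the actual volumes used in the study may differ.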
Land grabbing: a preliminary quantification of economic impacts on rural livelihoods.
Davis, Kyle F; D'Odorico, Paolo; Rulli, Maria Cristina
2014-01-01
Global demands on agricultural land are increasing due to population growth, dietary changes and the use of biofuels. By tightening the availability of land, these demands reduce humans' ability to cope with the uncertainties of global climate change, with consequences for food security. In light of the 2008 food crisis, to secure reliable future access to sufficient agricultural land, many nations and corporations have begun purchasing large tracts of land in the global South, a phenomenon deemed "land grabbing" by popular media. Because land investors frequently export crops without providing adequate employment, this represents an effective income loss for local communities. We study 28 countries targeted by large-scale land acquisitions (comprising 87% of reported cases and 27 million hectares (ha)) and estimate the effects of such investments on local communities' incomes. We find that this phenomenon can potentially affect the incomes of ~12 million people globally, with implications for food security, poverty levels and urbanization. While our study incorporates a number of assumptions and limitations, it provides a much-needed initial quantification of the economic impacts of large-scale land acquisitions on rural livelihoods.
Methodology for quantification of waste generated in Spanish railway construction works.
de Guzmán Báez, Ana; Villoria Sáez, Paola; del Río Merino, Mercedes; García Navarro, Justo
2012-05-01
In recent years, the European Union (EU) has focused on the reduction of construction and demolition (C&D) waste. In 2006, Spain generated roughly 47 million tons of C&D waste, of which only 13.6% was recycled. This situation has led to the drawing up of many regulations on C&D waste, forcing EU countries to adopt new measures for waste prevention and recycling. Among these measures is the obligation to quantify, in advance, the C&D waste expected to be generated during a construction project. However, limited data are available for civil engineering projects. Therefore, the aim of this research study is to improve C&D waste management in railway projects by developing a model for C&D waste quantification. For this purpose, we develop two equations which estimate in advance the amount, both in weight and volume, of the C&D waste likely to be generated in railway construction projects, broken down by category of C&D waste for the entire project. Copyright © 2012 Elsevier Ltd. All rights reserved.
Selective Data Acquisition in NMR. The Quantification of Anti-phase Scalar Couplings
NASA Astrophysics Data System (ADS)
Hodgkinson, P.; Holmes, K. J.; Hore, P. J.
Almost all time-domain NMR experiments employ "linear sampling," in which the NMR response is digitized at equally spaced times with uniform signal averaging. Here, the possibilities of nonlinear sampling are explored using anti-phase doublets in the indirectly detected dimensions of multidimensional COSY-type experiments as an example. The Cramér-Rao lower bounds are used to evaluate and optimize experiments in which the sampling points, or the extent of signal averaging at each point, or both, are varied. The optimal nonlinear sampling for the estimation of the coupling constant J, by model fitting, turns out to involve just a few key time points, for example, at the first node (t = 1/J) of the sin(πJt) modulation. Such sparse sampling patterns can be used to derive more practical strategies, in which the sampling or the signal averaging is distributed around the most significant time points. The improvements in the quantification of NMR parameters can be quite substantial, especially when, as is often the case for indirectly detected dimensions, the total number of samples is limited by the time available.
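The Cramér-Rao comparison at the heart of this approach can be reproduced in miniature. The sketch below uses a hypothetical decaying anti-phase modulation y(t) = A·sin(πJt)·exp(−t/T2), a simplification of the real lineshape model, and compares uniform sampling against concentrating all signal averaging at the node t = 1/J; the parameter values are illustrative only, and which point is optimal depends on J, T2 and the full model:

```python
import numpy as np

def crlb_var_J(times, J, A=1.0, T2=0.1, sigma=0.05):
    """Single-parameter Cramér-Rao lower bound on var(J_hat) for
    y(t) = A*sin(pi*J*t)*exp(-t/T2) with i.i.d. Gaussian noise (std sigma):
    CRLB = sigma^2 / sum_i (dy/dJ)(t_i)^2."""
    t = np.asarray(times, dtype=float)
    dy_dJ = A * np.pi * t * np.cos(np.pi * J * t) * np.exp(-t / T2)
    return sigma ** 2 / np.sum(dy_dJ ** 2)

J = 10.0  # Hz; first node of sin(pi*J*t) falls at t = 1/J = 0.1 s
uniform = np.linspace(0.0, 0.2, 64)   # conventional linear sampling
focused = np.full(64, 1.0 / J)        # all averaging at the node
print(crlb_var_J(uniform, J) > crlb_var_J(focused, J))
```

With these parameters the envelope t·exp(−t/T2) peaks at t = T2 = 0.1 s, where |cos(πJt)| is also maximal, so the focused scheme achieves a strictly lower bound than uniform sampling.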
How to Make Data a Blessing to Parametric Uncertainty Quantification and Reduction?
NASA Astrophysics Data System (ADS)
Ye, M.; Shi, X.; Curtis, G. P.; Kohler, M.; Wu, J.
2013-12-01
From a Bayesian point of view, the probabilities of model parameters and predictions are conditioned on the data used for parameter inference and prediction analysis. It is critical to use appropriate data for quantifying parametric uncertainty and its propagation to model predictions. However, data are always limited and imperfect. When a dataset cannot properly constrain the model parameters, it may lead to inaccurate uncertainty quantification. While in this case data appear to be a curse to uncertainty quantification, a comprehensive modeling analysis may help to understand the cause and characteristics of parametric uncertainty and thus turn data into a blessing. In this study, we illustrate the impacts of data on uncertainty quantification and reduction using the example of a surface complexation model (SCM) developed to simulate uranyl (U(VI)) adsorption. The model includes two adsorption sites, referred to as strong and weak sites. The amount of uranium adsorbed on these sites determines both the mean arrival time and the long tail of the breakthrough curves. There is one reaction on the weak site but two reactions on the strong site. The unknown parameters include the fractions of the total surface site density belonging to the two sites and the surface complex formation constants of the three reactions. A total of seven experiments were conducted under different geochemical conditions to estimate these parameters. The experiments with low initial U(VI) concentration result in a large amount of parametric uncertainty. A modeling analysis shows that this is because these experiments cannot distinguish the relative adsorption affinities of the strong and weak sites. Experiments with high initial U(VI) concentration are therefore needed, because in those experiments the strong site is nearly saturated and the weak site can be determined.
The experiments with high initial U(VI) concentration are a blessing to uncertainty quantification, and the experiments with low initial concentration help modelers turn a curse into a blessing. The impacts of data on uncertainty quantification and reduction are quantified using probability density functions of the model parameters obtained from Markov chain Monte Carlo simulation with the DREAM algorithm. This study provides insights into model calibration, uncertainty quantification, experiment design, and data collection in groundwater reactive transport modeling and other environmental modeling.
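The central point, that better-constraining data shrink the posterior obtained by MCMC, can be illustrated with a toy sampler. DREAM itself is an adaptive multi-chain algorithm; the sketch below uses plain random-walk Metropolis on a one-parameter Gaussian model, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta, data, sigma=1.0):
    # flat prior; independent Gaussian measurement errors of std sigma
    return -0.5 * np.sum((data - theta) ** 2) / sigma ** 2

def metropolis(data, n_iter=20000, step=0.5, theta0=0.0):
    """Random-walk Metropolis; returns post-burn-in samples of theta."""
    theta, lp = theta0, log_posterior(theta0, data)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal()
        lp_prop = log_posterior(proposal, data)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = proposal, lp_prop
        chain[i] = theta
    return chain[n_iter // 2:]

# a well-constraining dataset (100 observations) vs. a weak one (3)
rich = metropolis(rng.normal(2.0, 1.0, size=100))
poor = metropolis(rng.normal(2.0, 1.0, size=3))
```

The posterior from 100 observations is markedly narrower (standard deviation near σ/√n) than the posterior from 3, mirroring how the high-concentration experiments constrain the SCM parameters.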
Venkateswarlu, Kambham; Rangareddy, Ardhgeri; Narasimhaiah, Kanaka; Sharma, Hemraj; Bandi, Naga Mallikarjuna Raja
2017-01-01
The main objective of the present study was to develop an RP-HPLC method for the estimation of Armodafinil in pharmaceutical dosage forms and to characterize its base hydrolytic products. The separation was carried out on a C18 column using a mobile phase consisting of a mixture of water and methanol (45:55% v/v). Eluents were detected at 220 nm at a flow rate of 1 mL/min. Stress studies were performed under milder conditions followed by stronger conditions so as to obtain sufficient degradation of around 20%. A total of five degradation products were detected and separated from the analyte. The linearity of the proposed method was investigated in the range of 20-120 µg/mL for Armodafinil. The detection limit and quantification limit were found to be 0.01183 µg/mL and 0.035 µg/mL, respectively. The precision (%RSD) was found to be less than 2%, and the recovery was between 98% and 102%. Armodafinil was found to be most sensitive to base hydrolysis, yielding its carboxylic acid as the degradant. The developed method is a stability-indicating assay suitable for quantifying Armodafinil in the presence of possible degradants. The drug was sensitive to acid, base, and photolytic stress and resistant to thermal and oxidative stress.
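Detection and quantification limits like those reported are commonly derived from calibration-curve statistics per ICH Q2(R1): LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation and S the slope. A sketch with synthetic calibration data, not the Armodafinil measurements:

```python
import numpy as np

def ich_lod_loq(conc, response):
    """LOD = 3.3*sd(residuals)/slope, LOQ = 10*sd(residuals)/slope,
    from an ordinary least-squares calibration line."""
    conc = np.asarray(conc, dtype=float)
    slope, intercept = np.polyfit(conc, response, 1)
    residuals = np.asarray(response, dtype=float) - (slope * conc + intercept)
    s = np.std(residuals, ddof=2)  # two fitted parameters
    return 3.3 * s / slope, 10.0 * s / slope

# synthetic calibration: roughly 5 response units per ug/mL plus scatter
lod, loq = ich_lod_loq([20, 40, 60, 80, 100, 120],
                       [101.0, 198.0, 305.0, 399.0, 502.0, 598.0])
```

By construction the LOQ/LOD ratio under this rule is fixed at 10/3.3, regardless of the calibration data.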
Uncertainty Quantification in Alchemical Free Energy Methods.
Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V
2018-06-12
Alchemical free energy methods have gained much importance recently from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods implementing different accelerated sampling techniques and free energy estimators are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation (an ensemble of independent MD simulations), which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of the molecular dynamics simulations performed.
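The ensemble-based error estimate advocated above amounts to treating each independent replica's free energy as one draw and quoting the spread across replicas; a bootstrap standard error is one common choice. A minimal sketch with hypothetical ΔG values:

```python
import numpy as np

def ensemble_free_energy(dg_replicas, n_boot=5000, seed=1):
    """Mean free energy over independent replica simulations, with a
    bootstrap standard error across replicas as the uncertainty."""
    rng = np.random.default_rng(seed)
    dg = np.asarray(dg_replicas, dtype=float)
    boot_means = rng.choice(dg, size=(n_boot, dg.size), replace=True).mean(axis=1)
    return dg.mean(), boot_means.std(ddof=1)

# hypothetical binding free energies (kcal/mol) from five replicas
dg_mean, dg_err = ensemble_free_energy([-7.1, -6.8, -7.4, -6.9, -7.2])
```

Because each replica starts from independent initial conditions, the spread across replicas captures sampling variability that a single long trajectory cannot reveal, which is the paper's central argument.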
Collewet, Guylaine; Moussaoui, Saïd; Deligny, Cécile; Lucas, Tiphaine; Idier, Jérôme
2018-06-01
Multi-tissue partial volume estimation in MRI images is investigated from a viewpoint related to spectral unmixing as used in hyperspectral imaging. The contribution of this paper is twofold. First, it proposes a theoretical analysis of the statistical optimality conditions of the proportion estimation problem, which, in the context of multi-contrast MRI data acquisition, allows the imaging sequence parameters to be set appropriately. Second, it proposes an efficient proportion quantification algorithm based on the minimisation of a penalised least-squares criterion incorporating a regularity constraint on the spatial distribution of the proportions. The resulting developments are discussed using empirical simulations. The practical usefulness of the spectral unmixing approach for partial volume quantification in MRI is illustrated through an application to food analysis, on the proving of a Danish pastry. Copyright © 2018 Elsevier Inc. All rights reserved.
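A proportion estimator of the kind described, penalised least squares with a spatial smoothness term, can be sketched for voxels arranged along one axis. Minimising ||Y − SP||² + λ||PDᵀ||², with S the per-contrast tissue signatures, P the proportions, and D a first-difference operator, yields a linear system solvable in closed form. Everything below is an illustrative reduction, not the paper's implementation:

```python
import numpy as np

def unmix_proportions(Y, S, lam=1.0):
    """Solve argmin_P ||Y - S P||_F^2 + lam*||P D^T||_F^2, where D takes
    first differences between neighbouring voxels (1-D voxel layout).
    Y: (n_contrasts, n_voxels) data, S: (n_contrasts, n_tissues)."""
    n_tissues, n_vox = S.shape[1], Y.shape[1]
    D = np.diff(np.eye(n_vox), axis=0)            # (n_vox-1, n_vox)
    StS, DtD = S.T @ S, D.T @ D
    # normal equations StS P + lam P DtD = S^T Y, vectorised via Kronecker
    M = np.kron(np.eye(n_vox), StS) + lam * np.kron(DtD, np.eye(n_tissues))
    p = np.linalg.solve(M, (S.T @ Y).flatten(order="F"))
    return p.reshape(n_tissues, n_vox, order="F")
```

With λ near zero this reduces to per-voxel least squares; increasing λ trades fidelity for spatially smoother proportion maps, which is the role of the regularity constraint in the paper.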
A fault tree model to assess probability of contaminant discharge from shipwrecks.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I
2014-11-15
Shipwrecks on the sea floor around the world may contain hazardous substances that can harm the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, so there is poor support for decision-making on the prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular for estimating the annual probability of hazardous substance discharge. The probability of discharge is assessed using fault tree analysis, which facilitates quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment with transparent uncertainty and sensitivity analyses. The model supports quantification of the risk, quantification of the uncertainties in the risk calculation, and identification of the parameters that should be investigated further to obtain a more reliable risk calculation. Copyright © 2014 Elsevier Ltd. All rights reserved.
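Fault tree quantification combines basic-event probabilities through AND/OR gates. Assuming independent events, a minimal sketch (the event names and probabilities below are hypothetical, not taken from the shipwreck model):

```python
def and_gate(probabilities):
    """All input events must occur: product of probabilities."""
    p = 1.0
    for q in probabilities:
        p *= q
    return p

def or_gate(probabilities):
    """At least one input event occurs: complement of none occurring."""
    p_none = 1.0
    for q in probabilities:
        p_none *= (1.0 - q)
    return 1.0 - p_none

# hypothetical top event: annual discharge requires a hull breach
# (corrosion OR collision) AND a tank that still holds hazardous substance
p_top = and_gate([or_gate([0.05, 0.01]), 0.6])
```

Propagating probability distributions, rather than point values, through the same gate structure is what yields the transparent uncertainty and sensitivity analyses described above.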
De, Amit Kumar; Chowdhury, Partha Pratim; Chattapadhyay, Shyamaprasad
2016-01-01
The current study presents the simultaneous quantification of dexpanthenol and resorcinol in a marketed hair care formulation. Dexpanthenol is often present as an active ingredient in personal care products for its beautifying, invigorating, restorative, and smoothing properties. Resorcinol, on the other hand, is mainly prescribed for the treatment of seborrheic dermatitis of the scalp. The toxic side effects of resorcinol limit its use in dermatological preparations. Therefore, an accurate technique for the simultaneous estimation of these two components can help formulation industries analyse their product quality accurately. In the current study, a high performance liquid chromatographic technique has been developed using a C18 column and a mobile phase consisting of phosphate buffer (pH 2.8) with gradient elution. The mobile phase flow rate was 0.6 mL per minute, and the detection wavelengths were 210 nm for dexpanthenol and 280 nm for resorcinol. The linearity study was carried out using five solutions with concentrations ranging between 10.34 and 82.69 µg·mL(-1) (r² = 0.999) for resorcinol and between 10.44 and 83.50 µg·mL(-1) (r² = 0.998) for dexpanthenol. The method has been validated as per ICH Q2(R1) guidelines. The single-step sample preparation and the accuracy and precision (intraday and interday) results make the method suitable for the simultaneous quantification of dexpanthenol and resorcinol in any personal care product or dermatological preparation containing these two ingredients.
Chalcraft, Kenneth R; Lee, Richard; Mills, Casandra; Britz-McKibbin, Philip
2009-04-01
A major obstacle in metabolomics remains the identification and quantification of a large fraction of unknown metabolites in complex biological samples when purified standards are unavailable. Herein we introduce a multivariate strategy for de novo quantification of cationic/zwitterionic metabolites using capillary electrophoresis-electrospray ionization-mass spectrometry (CE-ESI-MS) based on fundamental molecular, thermodynamic, and electrokinetic properties of an ion. Multivariate calibration was used to derive a quantitative relationship between the measured relative response factor (RRF) of polar metabolites with respect to four physicochemical properties associated with ion evaporation in ESI-MS, namely, molecular volume (MV), octanol-water distribution coefficient (log D), absolute mobility (mu(o)), and effective charge (z(eff)). Our studies revealed that a limited set of intrinsic solute properties can be used to predict the RRF of various classes of metabolites (e.g., amino acids, amines, peptides, acylcarnitines, nucleosides, etc.) with reasonable accuracy and robustness provided that an appropriate training set is validated and ion responses are normalized to an internal standard(s). The applicability of the multivariate model to quantify micromolar levels of metabolites spiked in red blood cell (RBC) lysates was also examined by CE-ESI-MS without significant matrix effects caused by involatile salts and/or major co-ion interferences. This work demonstrates the feasibility for virtual quantification of low-abundance metabolites and their isomers in real-world samples using physicochemical properties estimated by computer modeling, while providing deeper insight into the wide disparity of solute responses in ESI-MS. New strategies for predicting ionization efficiency in silico allow for rapid and semiquantitative analysis of newly discovered biomarkers and/or drug metabolites in metabolomics research when chemical standards do not exist.
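As a rough illustration of this multivariate calibration idea, the sketch below fits log(RRF) to the four physicochemical descriptors by ordinary least squares and then converts a peak-area ratio into a de novo concentration estimate. All numeric values and the log-linear model form are hypothetical, not taken from the study.

```python
import numpy as np

# Hypothetical training set: rows = calibrant metabolites; columns =
# molecular volume (MV), octanol-water distribution (log D), absolute
# mobility (mu_o), and effective charge (z_eff). Values are illustrative.
X = np.array([
    [89.0, -2.7, 3.1e-8, 1.0],
    [117.0, -1.5, 2.8e-8, 1.0],
    [146.0, -3.0, 2.4e-8, 1.0],
    [161.0, -0.9, 2.2e-8, 1.0],
    [204.0, -1.1, 2.0e-8, 1.0],
])
rrf = np.array([0.42, 0.61, 0.38, 0.95, 0.88])  # measured RRF vs. internal standard

# Fit log(RRF) = b0 + b1*MV + b2*logD + b3*mu_o + b4*z_eff by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, np.log(rrf), rcond=None)

def predict_rrf(mv, logd, mu_o, z_eff):
    """Predict the relative response factor of an unknown metabolite."""
    return float(np.exp(coef @ np.array([1.0, mv, logd, mu_o, z_eff])))

# De novo concentration estimate: peak-area ratio divided by predicted RRF.
area_ratio = 0.50                      # analyte area / internal-standard area
conc = area_ratio / predict_rrf(130.0, -2.0, 2.6e-8, 1.0)
```

In practice the model form and descriptor set would be chosen and validated against an appropriate training set, as the abstract emphasizes.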
Go, Young-Mi; Walker, Douglas I; Liang, Yongliang; Uppal, Karan; Soltow, Quinlyn A; Tran, ViLinh; Strobel, Frederick; Quyyumi, Arshed A; Ziegler, Thomas R; Pennell, Kurt D; Miller, Gary W; Jones, Dean P
2015-12-01
The exposome is the cumulative measure of environmental influences and associated biological responses throughout the lifespan, including exposures from the environment, diet, behavior, and endogenous processes. A major challenge for exposome research lies in the development of robust and affordable analytic procedures to measure the broad range of exposures and associated biologic impacts occurring over a lifetime. Biomonitoring is an established approach to evaluate internal body burden of environmental exposures, but use of biomonitoring for exposome research is often limited by the high costs associated with quantification of individual chemicals. High-resolution metabolomics (HRM) uses ultra-high resolution mass spectrometry with minimal sample preparation to support high-throughput relative quantification of thousands of environmental, dietary, and microbial chemicals. HRM also measures metabolites in most endogenous metabolic pathways, thereby providing simultaneous measurement of biologic responses to environmental exposures. The present research examined quantification strategies to enhance the usefulness of HRM data for cumulative exposome research. The results provide a simple reference standardization protocol in which individual chemical concentrations in unknown samples are estimated by comparison to a concurrently analyzed, pooled reference sample with known chemical concentrations. The approach was tested using blinded analyses of amino acids in human samples and was found to be comparable to independent laboratory results based on surrogate standardization or internal standardization. Quantification was reproducible over a 13-month period and extrapolated to thousands of chemicals. The results show that the reference standardization protocol provides an effective strategy that will enhance data collection for cumulative exposome research. In principle, the approach can be extended to other types of mass spectrometry and other analytical methods.
© The Author 2015. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
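The core of the reference standardization protocol reduces to a single-point ratio calculation against the pooled reference; a minimal sketch, with the chemical, intensities, and reference concentration all invented for illustration:

```python
# Reference standardization (single-point): the concentration of a chemical
# in an unknown sample is estimated from its signal intensity relative to a
# concurrently analyzed pooled reference with a known concentration.
# Values below are illustrative, not from the study.

def reference_standardize(intensity_sample, intensity_ref, conc_ref):
    """Estimate the sample concentration from the pooled-reference ratio."""
    return conc_ref * (intensity_sample / intensity_ref)

# Hypothetical pooled reference: methionine at 25.0 uM, measured intensity 1.8e6.
conc = reference_standardize(intensity_sample=2.7e6,
                             intensity_ref=1.8e6,
                             conc_ref=25.0)
# conc == 37.5 (uM)
```

Because the reference is run in the same batch, instrument response drift largely cancels out of the ratio, which is what makes the extrapolation to thousands of chemicals practical.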
Li, J.S.; Chen, M.; Li, Z.C.
2012-01-01
A sensitive and reliable method of liquid chromatography–electrospray ionization/tandem mass spectrometry (LC-ESI/MS/ MS) was developed and validated for determining 1,3-dimethylamylamine (1,3-DMAA) and 1,4-dimethylamylamine (1,4-DMAA) in geranium plants (Pelargonium graveolens). The sample was extracted with 0.5 M HCl and purified by liquid-liquid partition with hexane. The parameters for reverse-phase (C18) LC and positive ESI/MS/MS were optimized. The matrix effect, specificity, linearity, precision, accuracy and reproducibility of the method were determined and evaluated. The method was linear over a range of 0.10–10.00 ng/mL examined, with R2 of 0.99 for both 1,3-DMAA and 1,4-DMAA. The recoveries from spiked concentrations between 5.00–40.00 ng/g were 85.1%–104.9% for 1,3-DMAA, with relative standard deviation (RSD) of 2.9%–11.0%, and 82.9%–101.8% for 1,4-DMAA, with RSD of 3.2%–11.7%. The instrument detection limit was 1–2 pg for both DMAAs. The quantification limit was estimated to be 1–2 ng/g for the plant sample. This method was successfully applied to the quantitative determination of 1,3- and 1,4-DMAA in both geranium plant and geranium oil. PMID:22915838
NASA Astrophysics Data System (ADS)
Marchamalo, Miguel; Bejarano, María-Dolores; García de Jalón, Diego; Martínez Marín, Rubén
2007-10-01
This study presents the application of LIDAR data to the evaluation and quantification of fluvial habitat in river systems, coupling remote sensing techniques with hydrological modeling and ecohydraulics. Fish habitat studies depend on the quality and continuity of the input topographic data, and conventional studies are limited by the time and budget feasible for field survey. This limitation results in differences between the level of river management and the level of models. In order to facilitate upscaling from modeling to management units, meso-scale methods were developed (Maddock & Bird, 1996; Parasiewicz, 2001). LIDAR data of the regulated River Cinca (Ebro Basin, Spain) were acquired in the low-flow season, maximizing the recorded instream area. DTM meshes obtained from LIDAR were used as the input for hydraulic simulation over a range of flows using GUAD2D software. Velocity and depth outputs were combined with gradient data to produce maps reflecting the availability of each mesohabitat unit type for each modeled flow. Fish habitat was then estimated and quantified according to the preferences of the main target species, such as brown trout (Salmo trutta). LIDAR data combined with hydraulic modeling allowed the analysis of fluvial habitat over long fluvial segments that would be time-consuming to cover with traditional surveys. LIDAR habitat assessment at the mesoscale avoids the problems of time efficiency and upscaling and is a recommended approach for large river basin management.
Assessment of cardiac fibrosis: a morphometric method comparison for collagen quantification.
Schipke, Julia; Brandenberger, Christina; Rajces, Alexandra; Manninger, Martin; Alogna, Alessio; Post, Heiner; Mühlfeld, Christian
2017-04-01
Fibrotic remodeling of the heart is a frequent condition linked to various diseases and cardiac dysfunction. Collagen quantification is an important objective in cardiac fibrosis research; however, a variety of different histological methods are currently used that may differ in accuracy. Here, frequently applied collagen quantification techniques were compared. A porcine model of early stage heart failure with preserved ejection fraction was used as an example. Semiautomated threshold analyses were imprecise, mainly due to inclusion of noncollagen structures or failure to detect certain collagen deposits. In contrast, collagen assessment by automated image analysis and light microscopy (LM)-stereology was more sensitive. Depending on the quantification method, the amount of estimated collagen varied and influenced intergroup comparisons. PicroSirius Red, Masson's trichrome, and Azan staining protocols yielded similar results, whereas the measured collagen area increased with increasing section thickness. Whereas none of the LM-based methods showed significant differences between the groups, electron microscopy (EM)-stereology revealed a significant collagen increase between cardiomyocytes in the experimental group, but not at other localizations. In conclusion, in contrast to the staining protocol, section thickness and the quantification method being used directly influence the estimated collagen content and thus, possibly, intergroup comparisons. EM in combination with stereology is a precise and sensitive method for collagen quantification if certain prerequisites are considered. For subtle fibrotic alterations, consideration of collagen localization may be necessary. Among LM methods, LM-stereology and automated image analysis are appropriate to quantify fibrotic changes, the latter depending on careful control of algorithm and comparable section staining. 
NEW & NOTEWORTHY Direct comparison of frequently applied histological fibrosis assessment techniques revealed a distinct relation of measured collagen and utilized quantification method as well as section thickness. Besides electron microscopy-stereology, which was precise and sensitive, light microscopy-stereology and automated image analysis proved to be appropriate for collagen quantification. Moreover, consideration of collagen localization might be important in revealing minor fibrotic changes. Copyright © 2017 the American Physiological Society.
Zhu, Haitao; Nie, Binbin; Liu, Hua; Guo, Hua; Demachi, Kazuyuki; Sekino, Masaki; Shan, Baoci
2016-05-01
Phase map cross-correlation detection and quantification can highlight signal at superparamagnetic iron oxide nanoparticles and distinguish them from other hypointensities. The method quantifies susceptibility change by performing least-squares analysis between a theoretically generated magnetic field template and an experimentally scanned phase image. Because characteristic phase recognition requires the removal of phase wrap and phase background, the additional steps of phase unwrapping and filtering may increase the chance of computational error and enlarge the inconsistency among algorithms. To solve this problem, a phase gradient cross-correlation and quantification method was developed that recognizes the characteristic phase gradient pattern instead of the phase image, because the phase gradient operation inherently includes unwrapping and filtering functions. However, few studies have mentioned the detectable limit of currently used phase gradient calculation algorithms. This limit may lead to an underestimation of large magnetic susceptibility changes caused by highly concentrated iron accumulation. In this study, mathematical derivation gives the maximum detectable phase gradient calculated by the differential chain algorithm in both the spatial and Fourier domains. To break through this limit, a modified quantification method is proposed that uses unwrapped forward differentiation for phase gradient generation. The method enlarges the detectable range of phase gradient measurement and avoids the underestimation of magnetic susceptibility. Simulation and phantom experiments were used to quantitatively compare the different methods. The in vivo application performed MRI scanning of nude mice implanted with iron-labeled human cancer cells. Results validate the limit of the detectable phase gradient and the consequent susceptibility underestimation.
Results also demonstrate the advantage of unwrapped forward differentiation compared with differential chain algorithms for susceptibility quantification at highly concentrated iron accumulation. Copyright © 2015 Elsevier Inc. All rights reserved.
Piñeiro, Zulema; Cantos-Villar, Emma; Palma, Miguel; Puertas, Belen
2011-11-09
A validated HPLC method with fluorescence detection for the simultaneous quantification of hydroxytyrosol and tyrosol in red wines is described. Detection conditions for both compounds were optimized (excitation at 279 and 278 nm and emission at 631 and 598 nm for hydroxytyrosol and tyrosol, respectively). The validation of the analytical method was based on selectivity, linearity, robustness, detection and quantification limits, repeatability, and recovery. The detection and quantification limits in red wines were set at 0.023 and 0.076 mg L(-1) for hydroxytyrosol and at 0.007 and 0.024 mg L(-1) for tyrosol, respectively. Precision values, both within-day and between-day (n = 5), remained below 3% for both compounds. In addition, a fractional factorial experimental design was developed to analyze the influence of six different conditions on the analysis. The final optimized HPLC-fluorescence method allowed the analysis of 30 nonpretreated Spanish red wines to evaluate their hydroxytyrosol and tyrosol contents.
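Detection and quantification limits like those above are commonly derived from calibration statistics via the ICH Q2(R1) formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation and S the calibration slope. The sketch below illustrates that calculation with invented calibration data; the paper's actual procedure may differ.

```python
import numpy as np

# ICH-style detection/quantification limits from a calibration line:
# LOD = 3.3*sigma/S, LOQ = 10*sigma/S. Calibration data are illustrative.
conc = np.array([0.05, 0.10, 0.25, 0.50, 1.00])     # mg/L
signal = np.array([1.1, 2.0, 5.2, 10.1, 20.3])      # fluorescence units

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))  # residual SD

lod = 3.3 * sigma / slope   # limit of detection, mg/L
loq = 10.0 * sigma / slope  # limit of quantification, mg/L
```

By construction LOQ/LOD = 10/3.3, which is why validated methods report the two limits in a fixed ratio when this approach is used.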
Jacchia, Sara; Nardini, Elena; Savini, Christian; Petrillo, Mauro; Angers-Loustau, Alexandre; Shim, Jung-Hyun; Trijatmiko, Kurniawan; Kreysa, Joachim; Mazzara, Marco
2015-02-18
In this study, we developed, optimized, and in-house validated a real-time PCR method for the event-specific detection and quantification of Golden Rice 2, a genetically modified rice with provitamin A in the grain. We optimized and evaluated the performance of the taxon (targeting rice Phospholipase D α2 gene)- and event (targeting the 3' insert-to-plant DNA junction)-specific assays that compose the method as independent modules, using haploid genome equivalents as unit of measurement. We verified the specificity of the two real-time PCR assays and determined their dynamic range, limit of quantification, limit of detection, and robustness. We also confirmed that the taxon-specific DNA sequence is present in single copy in the rice genome and verified its stability of amplification across 132 rice varieties. A relative quantification experiment evidenced the correct performance of the two assays when used in combination.
Zanderigo, Francesca; D'Agostino, Alexandra E; Joshi, Nandita; Schain, Martin; Kumar, Dileep; Parsey, Ramin V; DeLorenzo, Christine; Mann, J John
2018-02-08
Inhibition of the isoform A of monoamine oxidase (MAO-A), a mitochondrial enzyme catalyzing deamination of monoamine neurotransmitters, is useful in the treatment of depression and anxiety disorders. [11C]harmine, a MAO-A PET radioligand, has been used to study mood disorders and antidepressant treatment. However, [11C]harmine binding test-retest characteristics have to date only been partially investigated. Furthermore, since MAO-A is ubiquitously expressed, no reference region is available, thus requiring arterial blood sampling during PET scanning. Here, we investigate the test-retest properties of [11C]harmine binding measurements; assess the effects of using a minimally invasive input function estimation on binding quantification and repeatability; and explore binding potential estimation using a reference region-free approach. Quantification of [11C]harmine distribution volume (VT) via kinetic models and graphical analyses was compared based on absolute test-retest percent difference (TRPD), intraclass correlation coefficient (ICC), and identifiability. The optimal procedure was also used with a simultaneously estimated input function in place of the measured curve. Lastly, an approach for binding potential quantification in the absence of a reference region was evaluated. [11C]harmine VT estimates quantified using arterial blood and kinetic modeling showed average absolute TRPD values of 7.7 to 15.6%, and ICC values between 0.56 and 0.86, across brain regions. Using simultaneous estimation (SIME) of the input function resulted in VT estimates close to those obtained using the arterial input function (r = 0.951, slope = 1.073, intercept = -1.037), with numerically but not statistically higher test-retest differences (range 16.6 to 22.0%), but with overall poor ICC values, between 0.30 and 0.57. Prospective studies using [11C]harmine are possible given its test-retest repeatability when binding is quantified using arterial blood.
Results with SIME of the input function show potential for simplifying data acquisition by replacing arterial catheterization with one arterial blood sample at 20 min post-injection. Estimation of [11C]harmine binding potentials remains a challenge that warrants further investigation.
Shaikh, K A; Patil, S D; Devkhile, A B
2008-12-15
A simple, precise, and accurate reversed-phase liquid chromatographic method has been developed for the simultaneous estimation of ambroxol hydrochloride and azithromycin in tablet formulations. Chromatographic separation was achieved on an Xterra RP18 (250 mm x 4.6 mm, 5 μm) analytical column. A mixture of acetonitrile and 30 mM dipotassium phosphate (50:50, v/v; pH 9.0) was used as the mobile phase, at a flow rate of 1.7 mL/min with detection at 215 nm. The retention times of ambroxol and azithromycin were 5.0 and 11.5 min, respectively. The proposed method was validated for specificity, linearity, accuracy, precision, limit of detection, limit of quantitation, and robustness. The linear dynamic ranges were 30-180 μg/mL for ambroxol hydrochloride and 250-1500 μg/mL for azithromycin. The percentage recoveries for ambroxol hydrochloride and azithromycin were 99.40% and 99.90%, respectively. The limits of detection and quantification were 0.8 and 2.3 μg/mL for azithromycin and 0.004 and 0.01 μg/mL for ambroxol hydrochloride, respectively. The developed method can be used for routine quality control analysis of the titled drugs in combination in tablet formulation.
Deegan, Emily G; Stothers, Lynn; Kavanagh, Alex; Macnab, Andrew J
2018-01-01
There remains no gold standard for quantification of voluntary pelvic floor muscle (PFM) strength, despite international guidelines that recommend PFM assessment in females with urinary incontinence (UI). Methods currently reported for quantification of skeletal muscle strength across disciplines are systematically reviewed, and their relevance for clinical and academic use related to the pelvic floor is described. A systematic review via Medline, PubMed, CINAHL, and the Cochrane database used key terms for pelvic floor anatomy and function cross-referenced with skeletal muscle strength quantification from 1946 to 2016. Full-text peer-reviewed articles in English having female subjects with incontinence were identified. Each study was analyzed for use of controls, type of methodology as direct or indirect measures, and benefits and limitations of the technique. A total of 1586 articles were identified, of which 50 met the inclusion criteria. Nine methodologies for determining PFM strength were described: digital palpation, perineometry, dynamometry, EMG, vaginal cones, ultrasonography, magnetic resonance imaging, the urine stream interruption test, and the Colpexin pull test. Thirty-two percent lacked a control group. Technical refinements in both direct and indirect instrumentation for PFM strength measurement are improving sensitivity. However, the most common methods of quantification remain digital palpation and perineometry, techniques that pose limitations and yield subjective or indirect measures of muscular strength. Dynamometry has potential as an accurate and sensitive tool but is limited by an inability to assess PFM strength during dynamic movements. © 2017 Wiley Periodicals, Inc.
Quantifying Hydrostatic Pressure in Plant Cells by Using Indentation with an Atomic Force Microscope
Beauzamy, Léna; Derr, Julien; Boudaoud, Arezki
2015-01-01
Plant cell growth depends on a delicate balance between an inner drive—the hydrostatic pressure known as turgor—and an outer restraint—the polymeric wall that surrounds a cell. The classical technique to measure turgor in a single cell, the pressure probe, is intrusive and cannot be applied to small cells. In order to overcome these limitations, we developed a method that combines quantification of topography, nanoindentation force measurements, and an interpretation using a published mechanical model for the pointlike loading of thin elastic shells. We used atomic force microscopy to estimate the elastic properties of the cell wall and turgor pressure from a single force-depth curve. We applied this method to onion epidermal peels and quantified the response to changes in osmolality of the bathing solution. Overall our approach is accessible and enables a straightforward estimation of the hydrostatic pressure inside a walled cell. PMID:25992723
Automated lobar quantification of emphysema in patients with severe COPD.
Revel, Marie-Pierre; Faivre, Jean-Baptiste; Remy-Jardin, Martine; Deken, Valérie; Duhamel, Alain; Marquette, Charles-Hugo; Tacelli, Nunzia; Bakai, Anne-Marie; Remy, Jacques
2008-12-01
Automated lobar quantification of emphysema has not yet been evaluated. Unenhanced 64-slice MDCT was performed in 47 patients evaluated before bronchoscopic lung-volume reduction. CT images reconstructed with a standard (B20) and a high-frequency (B50) kernel were analyzed using a dedicated prototype software (MevisPULMO) allowing lobar quantification of emphysema extent. Lobar quantification was obtained following (a) a fully automatic delineation of the lobar limits by the software and (b) a semiautomatic delineation with manual correction of the lobar limits when necessary, and was compared with the visual scoring of emphysema severity per lobe. No statistically significant difference existed between automated and semiautomated lobar quantification (p > 0.05 in the five lobes), with differences ranging from 0.4 to 3.9%. The agreement between the two methods (intraclass correlation coefficient, ICC) was excellent for the left upper lobe (ICC = 0.94), left lower lobe (ICC = 0.98), and right lower lobe (ICC = 0.80). The agreement was good for the right upper lobe (ICC = 0.68) and moderate for the middle lobe (ICC = 0.53). Bland-Altman plots confirmed these results. Good agreement was observed between software-assessed and visually assessed lobar predominance of emphysema (kappa 0.78; 95% CI 0.64-0.92). Automated and semiautomated lobar quantifications of emphysema are concordant and show good agreement with visual scoring.
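The Bland-Altman analysis used in such method comparisons can be sketched as follows; the paired lobar emphysema scores are invented for illustration and are not the study's data.

```python
import numpy as np

# Bland-Altman agreement between automated and semiautomated emphysema
# extent (percent per lobe). Paired values are illustrative only.
auto = np.array([12.4, 30.1, 45.2, 8.7, 22.3, 51.0])
semi = np.array([13.0, 29.5, 44.0, 9.9, 23.1, 50.2])

diff = auto - semi
bias = diff.mean()                      # mean difference between methods
sd = diff.std(ddof=1)                   # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
```

Agreement is judged by whether the limits of agreement are narrow enough to be clinically acceptable, complementing the ICC values quoted in the abstract.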
Recent application of quantification II in Japanese medical research.
Suzuki, T; Kudo, A
1979-01-01
Hayashi's Quantification II is a method of multivariate discrimination analysis that manipulates attribute data as predictor variables. It is very useful in the medical research field for estimation, diagnosis, prognosis, evaluation of epidemiological factors, and other problems based on a multiplicity of attribute data. In Japan, this method is so well known that most computer program packages include the Hayashi Quantification, but the method seems to be as yet unfamiliar to researchers outside Japan. In view of this situation, we introduce 19 selected articles on recent applications of Quantification II in Japanese medical research. In reviewing these papers, special mention is made of how well the findings provided by the method satisfied the researchers. At the same time, some recommendations are made about terminology and program packages. A brief discussion of the background of the quantification methods is also given, with special reference to the Behaviormetric Society of Japan. PMID:540587
Limits of detection and decision. Part 3
NASA Astrophysics Data System (ADS)
Voigtman, E.
2008-02-01
It has been shown that the MARLAP (Multi-Agency Radiological Laboratory Analytical Protocols) procedure for estimating the Currie detection limit, which is based on 'critical values of the non-centrality parameter of the non-central t distribution', is intrinsically biased, even if no calibration curve or regression is used. This completed the refutation of the method begun in Part 2. With the field cleared of obstructions, the true theory underlying Currie's limits of decision, detection, and quantification, as they apply in a simple linear chemical measurement system (CMS) having heteroscedastic, Gaussian measurement noise and using weighted least squares (WLS) processing, was then derived. Extensive Monte Carlo simulations were performed, on 900 million independent calibration curves, for linear, "hockey stick", and quadratic noise precision models (NPMs). With errorless NPM parameters, all the simulation results were found to be in excellent agreement with the derived theoretical expressions. Even with as much as 30% noise on all of the relevant NPM parameters, the worst absolute error in the rates of false positives and false negatives was only 0.3%.
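For the simple homoscedastic Gaussian case (not the heteroscedastic WLS derivation treated in the paper), Currie's limits reduce to closed forms that a short Monte Carlo check can verify; the blank noise level below is arbitrary.

```python
import numpy as np

# Currie limits, homoscedastic Gaussian blank noise with SD sigma0:
# decision level L_C = z_(1-a)*sigma0; detection limit L_D = 2*L_C when
# alpha = beta; quantification limit L_Q = 10*sigma0 (10% RSD criterion).
sigma0 = 0.8
z = 1.6449  # standard normal quantile for alpha = 0.05

L_C = z * sigma0            # decision ("critical") level
L_D = 2.0 * z * sigma0      # detection limit (alpha = beta = 0.05)
L_Q = 10.0 * sigma0         # quantification limit

# Monte Carlo check: blank measurements should exceed L_C ~5% of the time.
rng = np.random.default_rng(0)
blanks = rng.normal(0.0, sigma0, 1_000_000)
false_positive_rate = (blanks > L_C).mean()
```

Simulations like the paper's 900 million calibration curves follow the same pattern, with the extra machinery needed to propagate heteroscedastic noise through the WLS fit.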
Quantification of Fluorine Content in AFFF Concentrates
2017-09-29
For quantitative integrations, a 100 ppm spectral window (FIDRes 0.215 Hz) was scanned using the following acquisition parameters: acquisition time ... Naval Research Laboratory, Washington, DC 20375-5320. NRL/MR/6120--17-9752.
Quantitative CT: technique dependence of volume estimation on pulmonary nodules
NASA Astrophysics Data System (ADS)
Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan
2012-03-01
Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.
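Accuracy (bias) and precision (variance) of volume estimation, as compared across protocols above, can be computed against the known synthetic-nodule volume; the repeated estimates below are invented for illustration.

```python
import numpy as np

# Percent bias (accuracy) and percent SD (precision) of repeated nodule
# volume estimates relative to a known synthetic-nodule volume.
true_volume = 500.0                       # mm^3, known phantom value
estimates = np.array([492.0, 510.0, 488.0, 505.0, 498.0])  # illustrative

bias_pct = 100.0 * (estimates.mean() - true_volume) / true_volume
precision_pct = 100.0 * estimates.std(ddof=1) / true_volume
```

Repeating this per protocol (slice thickness, kVp, pitch, kernel) and comparing the resulting bias/precision pairs is the essence of the technique-dependence analysis the abstract describes; the study itself used a generalized estimating equation framework for the formal comparison.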
Quantitative proteome analysis using isobaric peptide termini labeling (IPTL).
Arntzen, Magnus O; Koehler, Christian J; Treumann, Achim; Thiede, Bernd
2011-01-01
The quantitative comparison of proteome level changes across biological samples has become an essential feature in proteomics that remains challenging. We have recently introduced isobaric peptide termini labeling (IPTL), a novel strategy for isobaric quantification based on the derivatization of peptide termini with complementary isotopically labeled reagents. Unlike non-isobaric quantification methods, sample complexity at the MS level is not increased, providing improved sensitivity and protein coverage. The distinguishing feature of IPTL when comparing it to more established isobaric labeling methods (iTRAQ and TMT) is the presence of quantification signatures in all sequence-determining ions in MS/MS spectra, not only in the low mass reporter ion region. This makes IPTL a quantification method that is accessible to mass spectrometers with limited capabilities in the low mass range. Also, the presence of several quantification points in each MS/MS spectrum increases the robustness of the quantification procedure.
Reiman, Mario; Laan, Maris; Rull, Kristiina; Sõber, Siim
2017-08-01
RNA degradation is a ubiquitous process that occurs in living and dead cells, as well as during handling and storage of extracted RNA. Reduced RNA quality caused by degradation is an established source of uncertainty for all RNA-based gene expression quantification techniques. RNA sequencing is an increasingly preferred method for transcriptome analyses, and dependence of its results on input RNA integrity is of significant practical importance. This study aimed to characterize the effects of varying input RNA integrity [estimated as RNA integrity number (RIN)] on transcript level estimates and delineate the characteristic differences between transcripts that differ in degradation rate. The study used ribodepleted total RNA sequencing data from a real-life clinically collected set ( n = 32) of human solid tissue (placenta) samples. RIN-dependent alterations in gene expression profiles were quantified by using DESeq2 software. Our results indicate that small differences in RNA integrity affect gene expression quantification by introducing a moderate and pervasive bias in expression level estimates that significantly affected 8.1% of studied genes. The rapidly degrading transcript pool was enriched in pseudogenes, short noncoding RNAs, and transcripts with extended 3' untranslated regions. Typical slowly degrading transcripts (median length, 2389 nt) represented protein coding genes with 4-10 exons and high guanine-cytosine content.-Reiman, M., Laan, M., Rull, K., Sõber, S. Effects of RNA integrity on transcript quantification by total RNA sequencing of clinically collected human placental samples. © FASEB.
Sobanska, Anna W; Pyzowski, Jaroslaw
2012-01-01
Ethylhexyl triazone (ET) was separated from other sunscreens such as avobenzone, octocrylene, octyl methoxycinnamate, and diethylamino hydroxybenzoyl hexyl benzoate and from parabens by normal-phase HPTLC on silica gel 60 as stationary phase. Two mobile phases were particularly effective: (A) cyclohexane-diethyl ether 1:1 (v/v) and (B) cyclohexane-diethyl ether-acetone 15:1:2 (v/v/v) since, apart from ET analysis, they facilitated separation and quantification of other sunscreens present in the formulations. Densitometric scanning was performed at 300 nm. Calibration curves for ET were nonlinear (second-degree polynomials), with R > 0.998. For both mobile phases, limits of detection (LOD) were 0.03 μg spot⁻¹ and limits of quantification (LOQ) 0.1 μg spot⁻¹. Both methods were validated.
Improvement of Reliability of Diffusion Tensor Metrics in Thigh Skeletal Muscles.
Keller, Sarah; Chhabra, Avneesh; Ahmed, Shaheen; Kim, Anne C; Chia, Jonathan M; Yamamura, Jin; Wang, Zhiyue J
2018-05-01
Quantitative diffusion tensor imaging (DTI) of skeletal muscles is challenging due to the bias in DTI metrics, such as fractional anisotropy (FA) and mean diffusivity (MD), related to insufficient signal-to-noise ratio (SNR). This study compares the bias of DTI metrics in skeletal muscles via pixel-based and region-of-interest (ROI)-based analysis. DTI of the thigh muscles was conducted on a 3.0-T system in N = 11 volunteers using a fat-suppressed single-shot spin-echo echo planar imaging (SS SE-EPI) sequence with eight repetitions (number of signal averages (NSA) = 4 or 8 for each repeat). The SNR was calculated for different NSAs and estimated for the composite images combining all data (effective NSA = 48) as standard reference. The bias of MD and FA derived by pixel-based and ROI-based quantification were compared at different NSAs. An "intra-ROI diffusion direction dispersion angle (IRDDDA)" was calculated to assess the uniformity of diffusion within the ROI. Using our standard reference image with NSA = 48, the ROI-based and pixel-based measurements agreed for FA and MD. Larger disagreements were observed for the pixel-based quantification at NSA = 4. MD was less sensitive than FA to the noise level. The IRDDDA decreased with higher NSA. At NSA = 4, ROI-based FA showed a lower average bias (0.9% vs. 37.4%) and narrower 95% limits of agreement compared to the pixel-based method. The ROI-based estimation of FA is less prone to bias than the pixel-based estimations when SNR is low. The IRDDDA can be applied as a quantitative quality measure to assess reliability of ROI-based DTI metrics. Copyright © 2018 Elsevier B.V. All rights reserved.
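The FA and MD compared in the abstract above are standard scalar functions of the diffusion tensor's three eigenvalues. A minimal sketch of those textbook formulas follows; the eigenvalues used in the example are illustrative and not taken from the study.

```python
import math

def dti_metrics(l1, l2, l3):
    """Mean diffusivity (MD) and fractional anisotropy (FA) from the three
    eigenvalues of a diffusion tensor (standard definitions)."""
    md = (l1 + l2 + l3) / 3.0
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = math.sqrt(1.5 * num / den) if den > 0 else 0.0
    return md, fa

# Isotropic diffusion gives FA = 0; a strongly anisotropic tensor gives FA near 1.
md_iso, fa_iso = dti_metrics(1.0, 1.0, 1.0)      # FA = 0.0
md_fib, fa_fib = dti_metrics(1.7, 0.2, 0.2)      # FA close to 1
```

Because FA is a normalized ratio, it is more sensitive to noise in the eigenvalue estimates than MD, which is consistent with the abstract's observation that MD is the less noise-sensitive metric.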
Hashim, Suzana; Beh, Hooi Kheng; Hamil, Mohamad Shahrul Ridzuan; Ismail, Zhari; Majid, Amin Malik Shah Abdul
2016-01-01
Orthosiphon stamineus is a medicinal herb widely grown in Southeast Asia and other tropical countries. It has been used traditionally as a diuretic and to treat abdominal pain, kidney and bladder inflammation, gout, and hypertension. This study aims to develop and validate a high-performance thin layer chromatography (HPTLC) method for quantification of rosmarinic acid (RA), 3'-hydroxy-5,6,7,4'-tetramethoxyflavone (TMF), sinensitin (SIN) and eupatorin (EUP) found in ethanol, 50% ethanol and water extracts of O. stamineus leaves. The HPTLC method was conducted using an HPTLC system with a developed mobile phase of toluene:ethyl acetate:formic acid (3:7:0.1) performed on precoated silica gel 60 F254 TLC plates. The method was validated for linearity, accuracy, precision, limit of detection (LOD), limit of quantification (LOQ), and specificity. The detection of spots was observed at ultraviolet 254 nm and 366 nm. Linearity for RA, TMF, SIN, and EUP was obtained between 10 and 100 ng/spot with high correlation coefficient values (R2) of more than 0.986. The LOD was found to be 122.47 ± 3.95 (RA), 43.38 ± 0.79 (SIN), 17.26 ± 1.16 (TMF), and 46.80 ± 1.33 ng/spot (EUP), respectively, whereas the LOQ was found to be 376.44 ± 6.70 (RA), 131.45 ± 2.39 (SIN), 52.30 ± 2.01 (TMF), and 141.82 ± 1.58 ng/spot (EUP), respectively. The proposed method showed good linearity, precision, accuracy, and high sensitivity. Hence, it may be applied in routine quantification of RA, SIN, TMF, and EUP found in ethanol, 50% ethanol and water extracts of O. stamineus leaves, and provides rapid estimation of the marker compounds for routine quality control analysis. The established HPTLC method is rapid for qualitative and quantitative fingerprinting of O. stamineus extracts used for commercial products. Four identified markers (RA, SIN, EUP and TMF) found in three different types of O. stamineus extracts, specifically ethanol, 50% ethanol and water extracts, were successfully quantified using the HPTLC method. Abbreviations used: HPTLC: high-performance thin layer chromatography; RA: rosmarinic acid; TMF: 3'-hydroxy-5,6,7,4'-tetramethoxyflavone; SIN: sinensitin; EUP: eupatorin; E: ethanol; EW: 50% ethanol; W: water; BK: Batu Kurau; KB: Kepala Batas; S: Sik; CJ: Changkat Jering; SB: Sungai Buloh.
Bassereau, Maud; Chaintreau, Alain; Duperrex, Stéphanie; Joulain, Daniel; Leijs, Hans; Loesing, Gerd; Owen, Neil; Sherlock, Alan; Schippa, Christine; Thorel, Pierre-Jean; Vey, Matthias
2007-01-10
The performances of the GC-MS determination of suspected allergens in fragrance concentrates have been investigated. The limit of quantification was experimentally determined (10 mg/L), and the variability was investigated for three different data treatment strategies: (1) two columns and three quantification ions; (2) two columns and one quantification ion; and (3) one column and three quantification ions. The first strategy best minimizes the risk of determination bias due to coelutions. This risk was evaluated by calculating the probability of coeluting a suspected allergen with perfume constituents exhibiting ions in common. For hydroxycitronellal, when using a two-column strategy, this may statistically occur more than once every 36 analyses for one ion or once every 144 analyses for three ions in common.
The quantification of solute concentrations in laboratory aquifer models has been largely limited to the use of sampling ports, from which samples are collected for external analysis. One of the drawbacks to this method is that the act of sampling may disturb plume dynamics and ...
Current position of high-resolution MS for drug quantification in clinical & forensic toxicology.
Meyer, Markus R; Helfer, Andreas G; Maurer, Hans H
2014-08-01
This paper reviews high-resolution MS approaches published from January 2011 until March 2014 for the quantification of drugs (of abuse) and/or their metabolites in biosamples using LC-MS with time-of-flight or Orbitrap™ mass analyzers. Corresponding approaches are discussed including sample preparation and mass spectral settings. The advantages and limitations of high-resolution MS for drug quantification, as well as the demand for a certain resolution or a specific mass accuracy are also explored.
Brautigam, Chad A; Zhao, Huaying; Vargas, Carolyn; Keller, Sandro; Schuck, Peter
2016-05-01
Isothermal titration calorimetry (ITC) is a powerful and widely used method to measure the energetics of macromolecular interactions by recording a thermogram of differential heating power during a titration. However, traditional ITC analysis is limited by stochastic thermogram noise and by the limited information content of a single titration experiment. Here we present a protocol for bias-free thermogram integration based on automated shape analysis of the injection peaks, followed by combination of isotherms from different calorimetric titration experiments into a global analysis, statistical analysis of binding parameters and graphical presentation of the results. This is performed using the integrated public-domain software packages NITPIC, SEDPHAT and GUSSI. The recently developed low-noise thermogram integration approach and global analysis allow for more precise parameter estimates and more reliable quantification of multisite and multicomponent cooperative and competitive interactions. Titration experiments typically take 1-2.5 h each, and global analysis usually takes 10-20 min.
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular, system failures, caused by various sources of uncertainties. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation that the underlying computer models are extremely expensive, and in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inferences of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method is demonstrated by both academic and practical examples.
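The core idea, replacing an expensive computer model with a cheap Gaussian process surrogate and then estimating the failure probability by Monte Carlo on the surrogate, can be sketched as follows. This is a toy one-dimensional illustration of plain GP regression plus Monte Carlo, not the paper's Bayesian experimental-design scheme; the limit-state function, kernel length-scale, and sample counts are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):                      # stand-in for an expensive limit-state function
    return x - 0.8             # "failure" when g(x) < 0, i.e. x < 0.8

# Fit a tiny zero-mean GP surrogate (RBF kernel) on a few "expensive" runs.
Xtr = np.linspace(0.0, 1.0, 8)
ytr = g(Xtr)

def k(a, b, ell=0.2):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

K = k(Xtr, Xtr) + 1e-8 * np.eye(len(Xtr))   # jitter for numerical stability
alpha = np.linalg.solve(K, ytr)

def surrogate(x):
    """GP posterior mean at the query points x."""
    return k(np.atleast_1d(x), Xtr) @ alpha

# Monte Carlo failure probability using the cheap surrogate instead of g.
xs = rng.uniform(0.0, 1.0, 20000)
p_fail = float(np.mean(surrogate(xs) < 0.0))   # true value is 0.8 for x ~ U(0,1)
```

The paper's contribution is in choosing the training points Xtr adaptively near the failure boundary (the limit state) rather than on a fixed grid as above, which matters when each evaluation of g is a costly simulation.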
Everatt, Kristoffer T.; Andresen, Leah; Somers, Michael J.
2014-01-01
The African lion (Panthera leo) has suffered drastic population and range declines over the last few decades and is listed by the IUCN as vulnerable to extinction. Conservation management requires reliable population estimates; however, these data are lacking for many of the continent's remaining populations. It is possible to estimate lion abundance using a trophic scaling approach. However, such inferences assume that a predator population is subject only to bottom-up regulation, and are thus likely to produce biased estimates in systems experiencing top-down anthropogenic pressures. Here we provide baseline data on the status of lions in a developing National Park in Mozambique that is impacted by humans and livestock. We compare a direct density estimate with an estimate derived from trophic scaling. We then use replicated detection/non-detection surveys to estimate the proportion of area occupied by lions, and hierarchical ranking of covariates to provide inferences on the relative contribution of prey resources and anthropogenic factors influencing lion occurrence. The direct density estimate was less than 1/3 of the estimate derived from prey resources (0.99 lions/100 km² vs. 3.05 lions/100 km²). The proportion of area occupied by lions was Ψ = 0.439 (SE = 0.121), or approximately 44% of a 2,400 km² sample of potential habitat. Although lions were strongly predicted by a greater probability of encountering prey resources, the greatest contributing factor to lion occurrence was a strong negative association with settlements. Finally, our empirical abundance estimate is approximately 1/3 of a published abundance estimate derived from opinion surveys. Altogether, our results describe a lion population held below resource-based carrying capacity by anthropogenic factors and highlight the limitations of trophic scaling and opinion surveys for estimating predator populations exposed to anthropogenic pressures.
Our study provides the first empirical quantification of a population that future change can be measured against. PMID:24914934
NASA Astrophysics Data System (ADS)
Feng, Jinchao; Lansford, Joshua; Mironenko, Alexander; Pourkargar, Davood Babaei; Vlachos, Dionisios G.; Katsoulakis, Markos A.
2018-03-01
We propose non-parametric methods for both local and global sensitivity analysis of chemical reaction models with correlated parameter dependencies. The developed mathematical and statistical tools are applied to a benchmark Langmuir competitive adsorption model on a close packed platinum surface, whose parameters, estimated from quantum-scale computations, are correlated and are limited in size (small data). The proposed mathematical methodology employs gradient-based methods to compute sensitivity indices. We observe that ranking influential parameters depends critically on whether or not correlations between parameters are taken into account. The impact of uncertainty in the correlation and the necessity of the proposed non-parametric perspective are demonstrated.
Determination of Vitamin E in Cereal Products and Biscuits by GC-FID.
Pasias, Ioannis N; Kiriakou, Ioannis K; Papakonstantinou, Lila; Proestos, Charalampos
2018-01-01
A rapid, precise and accurate method for the determination of vitamin E (α-tocopherol) in cereal products and biscuits has been developed. The uncertainty was calculated for the first time, and the methods were performed for different cereal products and biscuits, characterized as "superfoods". The limits of detection and quantification were calculated. The accuracy and precision were estimated using the certified reference material FAPAS T10112QC, and the determined values were in good accordance with the certified values. The health claims according to the daily reference values for vitamin E were calculated, and the results proved that the majority of the samples examined showed a percentage daily value higher than 15%.
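The limits of detection and quantification mentioned here (and in several of the other validation abstracts) are commonly computed with the ICH 3.3σ/10σ convention from a calibration slope and the standard deviation of blank or low-level responses. A minimal sketch follows; the slope and blank SD are made-up numbers, since the abstract does not report them.

```python
def lod_loq(slope, sd_blank):
    """ICH-style limits: LOD = 3.3*sigma/S, LOQ = 10*sigma/S, where sigma is
    the SD of blank (or low-level) responses and S is the calibration slope."""
    lod = 3.3 * sd_blank / slope
    loq = 10.0 * sd_blank / slope
    return lod, loq

# Hypothetical example: slope 0.05 response units per (mg/kg), blank SD 0.002.
lod, loq = lod_loq(0.05, 0.002)   # LOD = 0.132 mg/kg, LOQ = 0.4 mg/kg
```

Any concentration below the LOQ can be detected (if above the LOD) but not reported as a reliable quantitative value, which is why both limits are stated in method validations like the one above.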
Katoh, Chietsugu; Yoshinaga, Keiichiro; Klein, Ran; Kasai, Katsuhiko; Tomiyama, Yuuki; Manabe, Osamu; Naya, Masanao; Sakakibara, Mamoru; Tsutsui, Hiroyuki; deKemp, Robert A; Tamaki, Nagara
2012-08-01
Myocardial blood flow (MBF) estimation with (82)Rubidium ((82)Rb) positron emission tomography (PET) is technically difficult because of the high spillover between regions of interest, especially due to the long positron range. We sought to develop a new algorithm to reduce the spillover in image-derived blood activity curves, using non-uniform weighted least-squares fitting. Fourteen volunteers underwent imaging with both 3-dimensional (3D) (82)Rb and (15)O-water PET at rest and during pharmacological stress. Whole left ventricular (LV) (82)Rb MBF was estimated using a one-compartment model, including a myocardium-to-blood spillover correction to estimate the corresponding blood input function Ca(t)(whole). Regional K1 values were calculated using this uniform global input function, which simplifies equations and enables robust estimation of MBF. To assess the robustness of the modified algorithm, inter-operator repeatability of 3D (82)Rb MBF was compared with a previously established method. Whole LV correlation of (82)Rb MBF with (15)O-water MBF was better (P < .01) with the modified spillover correction method (r = 0.92 vs r = 0.60). The modified method also yielded significantly improved inter-operator repeatability of regional MBF quantification (r = 0.89) versus the established method (r = 0.82) (P < .01). A uniform global input function can suppress LV spillover into the image-derived blood input function, resulting in improved precision for MBF quantification with 3D (82)Rb PET.
Srinubabu, Gedela; Ratnam, Bandaru Veera Venkata; Rao, Allam Appa; Rao, Medicherla Narasimha
2008-01-01
A rapid tandem mass spectrometric (MS-MS) method for the quantification of oxcarbazepine (OXB) in human plasma using imipramine as an internal standard (IS) has been developed and validated. Chromatographic separation was achieved isocratically on a C18 reversed-phase column within 3.0 min, using a mobile phase of acetonitrile-10 mM ammonium formate (90:10 v/v) at a flow rate of 0.3 ml/min. Quantitation was achieved using multiple reaction monitoring (MRM) scans at the MRM transitions m/z 253>208 and m/z 281>86 for OXB and the IS, respectively. Calibration curves were linear over the concentration range of 0.2-16 μg/ml (r>0.999) with a limit of quantification of 0.2 μg/ml. Analytical recoveries of OXB from spiked human plasma were in the range of 74.9 to 76.3%. A Plackett-Burman design was applied for screening of chromatographic and mass spectrometric factors; a factorial design was applied for optimization of the factors essential to the robustness study. A linear model was postulated and a 2³ full factorial design was employed to estimate the model coefficients for intermediate precision. More specifically, experimental design helps the researcher verify whether changes in factor values produce a statistically significant variation of the observed response. The strategy is most effective if statistical design is used in most or all stages of the screening and optimization process for future method validation of pharmacokinetic and bioequivalence studies.
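A two-level, three-factor (2³) full factorial design of the kind used above enumerates all eight combinations of the coded factor levels and estimates each factor's main effect as the difference between mean responses at the high and low levels. A minimal sketch with a toy response (the actual factors and responses in the study are not reported in the abstract):

```python
from itertools import product

# 2^3 full factorial design: every combination of three factors at two coded levels.
design = list(product([-1, 1], repeat=3))   # 8 runs

def main_effects(design, response):
    """Main effect of each factor: mean response at level +1 minus mean at -1."""
    n_factors = len(design[0])
    effects = []
    for j in range(n_factors):
        hi = [y for row, y in zip(design, response) if row[j] == 1]
        lo = [y for row, y in zip(design, response) if row[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# Toy response: only the first factor shifts the result; the others are inert.
y = [10 + 2 * a for a, b, c in design]
effects = main_effects(design, y)           # -> [4.0, 0.0, 0.0]
```

Because the design is balanced, the main-effect estimates are uncorrelated, which is what makes screening designs like Plackett-Burman and full factorials efficient for robustness studies.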
Quantification Bias Caused by Plasmid DNA Conformation in Quantitative Real-Time PCR Assay
Lin, Chih-Hui; Chen, Yu-Chieh; Pan, Tzu-Ming
2011-01-01
Quantitative real-time PCR (qPCR) is the gold standard for the quantification of specific nucleic acid sequences. However, a serious concern has been revealed in a recent report: supercoiled plasmid standards cause significant over-estimation in qPCR quantification. In this study, we investigated the effect of plasmid DNA conformation on the quantification of DNA and the efficiency of qPCR. Our results suggest that plasmid DNA conformation has significant impact on the accuracy of absolute quantification by qPCR. DNA standard curves shifted significantly among plasmid standards with different DNA conformations. Moreover, the choice of DNA measurement method and plasmid DNA conformation may also contribute to the measurement error of DNA standard curves. Due to the multiple effects of plasmid DNA conformation on the accuracy of qPCR, efforts should be made to assure the highest consistency of plasmid standards for qPCR. Thus, we suggest that the conformation, preparation, quantification, purification, handling, and storage of standard plasmid DNA should be described and defined in the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) to assure the reproducibility and accuracy of qPCR absolute quantification. PMID:22194997
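The standard curves discussed above are Cq versus log10(template copies) regressions, from which amplification efficiency follows as E = 10^(-1/slope) - 1 (the standard relation; an ideal assay has a slope near -3.32 and E near 100%). A conformation-dependent shift in the standard curve therefore propagates directly into the absolute copy-number estimate. A minimal sketch with an idealized dilution series (the values are illustrative, not from the study):

```python
def qpcr_efficiency(log10_copies, cq):
    """Least-squares slope of Cq vs log10(copies), and the amplification
    efficiency E = 10^(-1/slope) - 1 derived from it."""
    n = len(cq)
    mx = sum(log10_copies) / n
    my = sum(cq) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(log10_copies, cq))
             / sum((x - mx) ** 2 for x in log10_copies))
    return slope, 10 ** (-1.0 / slope) - 1.0

# Ideal 10-fold dilution series: Cq rises ~3.3 cycles per decade (E ~ 100%).
logs = [3, 4, 5, 6, 7]
cqs = [33.3, 30.0, 26.7, 23.4, 20.1]
slope, eff = qpcr_efficiency(logs, cqs)
```

If supercoiled and linearized plasmid standards yield parallel but offset curves, the slope (and thus E) can look acceptable while the intercept, and every absolute quantity read off the curve, is biased, which is the failure mode the abstract warns about.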
Direct quantification of long-term rock nitrogen inputs to temperate forest ecosystems.
Morford, Scott L; Houlton, Benjamin Z; Dahlgren, Randy A
2016-01-01
Sedimentary and metasedimentary rocks contain large reservoirs of fixed nitrogen (N), but questions remain over the importance of rock N weathering inputs in terrestrial ecosystems. Here we provide direct evidence for rock N weathering (i.e., loss of N from rock) in three temperate forest sites residing on a N-rich parent material (820-1050 mg N kg⁻¹; mica schist) in the Klamath Mountains (northern California and southern Oregon), USA. Our method combines a mass balance model of element addition/depletion with a procedure for quantifying fixed N in rock minerals, enabling quantification of rock N inputs to bioavailable reservoirs in soil and regolith. Across all sites, approximately 37% to 48% of the initial bedrock N content has undergone long-term weathering in the soil. Combined with regional denudation estimates (sum of physical + chemical erosion), these weathering fractions translate to 1.6-10.7 kg·ha⁻¹·yr⁻¹ of rock N input to these forest ecosystems. These N input fluxes are substantial in light of estimates for atmospheric sources in these sites (4.5-7.0 kg·ha⁻¹·yr⁻¹). In addition, N depletion from rock minerals was greater than that of sodium, suggesting active biologically mediated weathering of growth-limiting nutrients compared to nonessential elements. These results point to regional tectonics, biologically mediated weathering effects, and rock N chemistry in shaping the magnitude of rock N inputs to the forest ecosystems examined.
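Element addition/depletion mass balances of this kind are commonly expressed with an open-system mass-transfer coefficient, tau, which normalizes the mobile element's concentration to an immobile index element (e.g. Ti or Zr) in parent rock and weathered material. The sketch below uses that common formulation with made-up concentrations; the study's exact model may differ in detail.

```python
def tau(c_j_w, c_j_p, c_i_w, c_i_p):
    """Open-system mass-transfer coefficient for mobile element j relative to
    immobile index element i:
        tau = (C_j,w * C_i,p) / (C_j,p * C_i,w) - 1
    where w = weathered material and p = parent rock.
    tau = -0.4 means 40% of the element has been lost to weathering."""
    return (c_j_w * c_i_p) / (c_j_p * c_i_w) - 1.0

# Hypothetical numbers: rock N 900 mg/kg in parent, 540 mg/kg in regolith;
# immobile Ti 0.5 wt% in both (i.e. no residual enrichment correction needed).
t_n = tau(540.0, 900.0, 0.5, 0.5)   # -> -0.4, i.e. 40% of rock N lost
```

Multiplying such a depletion fraction by the parent-rock N concentration and a denudation rate yields an N input flux in kg·ha⁻¹·yr⁻¹, which is the form of the estimates reported above.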
Eriksson, Charlotta; Bodin, Theo; Selander, Jenny
2017-11-01
Objectives: National quantifications of the health burden related to traffic noise are still rare. In this study, we use the disability-adjusted life-year (DALY) measure to assess the burden of disease from road traffic and railway noise in Sweden. Methods: The number of DALY was assessed for annoyance, sleep disturbance, hypertension, myocardial infarction (MI) and stroke using a method previously implemented by the World Health Organization (WHO). Population exposure to noise was obtained from the Swedish Environmental Protection Agency and the Swedish Transport Administration. Data on disease occurrence were gathered from registers held by the National Board of Health and Welfare and Statistics Sweden. Disability weights (DW) and durations were based on WHO definitions. Finally, we used research-based exposure-response functions or relative risks to estimate disease attributable to noise in each exposure category. Results: The number of DALY attributed to traffic noise in Sweden was estimated to be 41 033 years; 36 711 (90%) related to road traffic and 4322 (10%) related to railway traffic. The most important contributor to the disease burden was sleep disturbance, accounting for 22 218 DALY (54%), followed by annoyance, 12 090 DALY (30%), and cardiovascular diseases, 6725 DALY (16%). Conclusions: Road traffic and railway noise contribute significantly to the burden of disease in Sweden each year. The total number of DALY should, however, be interpreted with caution due to limitations in data quality.
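For morbidity outcomes like annoyance, the WHO-style environmental-burden calculation used above reduces, per exposure band, to (people exposed) × (fraction affected, from an exposure-response function) × (disability weight) × (duration in years). A minimal sketch with entirely hypothetical exposure numbers and fractions (not the Swedish data):

```python
def daly_from_exposure(exposed_by_band, affected_fraction, dw, duration=1.0):
    """WHO-style morbidity DALY: sum over exposure bands of
    exposed * affected_fraction * disability_weight * duration."""
    return sum(n * affected_fraction[band] * dw * duration
               for band, n in exposed_by_band.items())

# Hypothetical Lden exposure bands (dB) with %HA (highly annoyed) fractions
# from an assumed exposure-response function, and an assumed DW of 0.02.
exposed = {"55-59": 1_000_000, "60-64": 400_000, "65-69": 100_000}
pct_ha = {"55-59": 0.07, "60-64": 0.12, "65-69": 0.18}
daly_annoyance = daly_from_exposure(exposed, pct_ha, dw=0.02)   # -> 2720.0
```

Mortality outcomes (MI, stroke) additionally require years of life lost via attributable fractions and life tables, which is why the cardiovascular term in the study is computed from register-based disease occurrence rather than exposure counts alone.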
Pedersen, S N; Lindholst, C
1999-12-09
Extraction methods were developed for quantification of the xenoestrogens 4-tert.-octylphenol (tOP) and bisphenol A (BPA) in water and in liver and muscle tissue from the rainbow trout (Oncorhynchus mykiss). The extraction of tOP and BPA from tissue samples was carried out using microwave-assisted solvent extraction (MASE) followed by solid-phase extraction (SPE). Water samples were extracted using only SPE. For the quantification of tOP and BPA, liquid chromatography mass spectrometry (LC-MS) equipped with an atmospheric pressure chemical ionisation interface (APCI) was applied. The combined methods for tissue extraction allow the use of small sample amounts of liver or muscle (typically 1 g), low volumes of solvent (20 ml), and short extraction times (25 min). Limits of quantification of tOP in tissue samples were found to be approximately 10 ng/g in muscle and 50 ng/g in liver (both based on 1 g of fresh tissue). The corresponding values for BPA were approximately 50 ng/g in both muscle and liver tissue. In water, the limit of quantification for tOP and BPA was approximately 0.1 microg/l (based on 100 ml sample size).
Quantification of Liver Fat in the Presence of Iron Overload
Horng, Debra E.; Hernando, Diego; Reeder, Scott B.
2017-01-01
Purpose: To evaluate the accuracy of R2* models (1/T2* = R2*) for chemical shift-encoded magnetic resonance imaging (CSE-MRI)-based proton density fat-fraction (PDFF) quantification in patients with fatty liver and iron overload, using MR spectroscopy (MRS) as the reference standard. Materials and Methods: Two Monte Carlo simulations were implemented to compare the root-mean-squared-error (RMSE) performance of single-R2* and dual-R2* correction in a theoretical liver environment with high iron. Fatty liver was defined as hepatic PDFF >5.6% based on MRS; only subjects with fatty liver were considered for analyses involving fat. From a group of 40 patients with known/suspected iron overload, nine patients were identified at 1.5T, and 13 at 3.0T with fatty liver. MRS linewidth measurements were used to estimate R2* values for water and fat peaks. PDFF was measured from CSE-MRI data using single-R2* and dual-R2* correction with magnitude and complex fitting. Results: Spectroscopy-based R2* analysis demonstrated that the R2* of water and fat remain close in value, both increasing as iron overload increases: linear regression between R2*W and R2*F resulted in slope = 0.95 [0.79–1.12] (95% limits of agreement) at 1.5T and slope = 0.76 [0.49–1.03] at 3.0T. MRI-PDFF using dual-R2* correction had severe artifacts. MRI-PDFF using single-R2* correction had good agreement with MRS-PDFF: Bland–Altman analysis resulted in −0.7% (bias) ± 2.9% (95% limits of agreement) for magnitude-fit and −1.3% ± 4.3% for complex-fit at 1.5T, and −1.5% ± 8.4% for magnitude-fit and −2.2% ± 9.6% for complex-fit at 3.0T. Conclusion: Single-R2* modeling enables accurate PDFF quantification, even in patients with iron overload. PMID:27405703
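The Bland–Altman agreement statistics reported above (a bias and 95% limits of agreement) are computed from the pairwise differences between the two methods: bias is the mean difference and the limits are bias ± 1.96 × SD of the differences. A minimal sketch with hypothetical paired PDFF measurements (not the study's data):

```python
import statistics

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between paired measurements:
    bias +/- 1.96 * SD of the pairwise differences (Bland-Altman)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired MRI-PDFF vs MRS-PDFF values (%), for illustration only.
mri = [4.1, 8.0, 12.5, 20.3, 30.1]
mrs = [4.8, 8.9, 13.0, 21.6, 31.9]
bias, (lo, hi) = bland_altman(mri, mrs)
```

A small bias with narrow limits (as reported for single-R2* magnitude fitting at 1.5T) indicates the two methods can be used interchangeably within that tolerance; widening limits at 3.0T signal growing per-subject disagreement even if the bias stays small.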
Bhusal, Prabhat; Sharma, Manisha; Harrison, Jeff; Procter, Georgina; Andrews, Gavin; Jones, David S; Hill, Andrew G; Svirskis, Darren
2017-09-01
An efficient and cost-effective quantification procedure for lidocaine by HPLC has been developed to estimate lidocaine from an EVA matrix, plasma, peritoneal fluid and intra-articular fluid (IAF). This method guarantees the resolution of lidocaine from the degradation products obtained from alkaline and oxidative stress. Chromatographic separation of lidocaine was achieved with a retention time of 7 min using a C18 column with a mobile phase comprising acetonitrile and potassium dihydrogen phosphate buffer (pH 5.5; 0.02 M) in the ratio of 26:74 at a flow rate of 1 mL min⁻¹ with detection at 230 nm. Instability of lidocaine was observed in oxidizing (0.02% H2O2) and alkaline (0.1 M NaOH) environments. The calibration curve was found to be linear within the concentration range of 0.40-50.0 μg/mL. Intra-day and inter-day accuracy ranged between 95.9% and 99.1%, with precision (% RSD) below 6.70%. The limit of quantification and limit of detection were 0.40 μg/mL and 0.025 μg/mL, respectively. The simple extraction method described enabled the quantification of lidocaine from an EVA matrix using dichloromethane as a solvent. The assay and content uniformity of lidocaine within an EVA matrix were 103 ± 3.60% and 100 ± 2.60%, respectively. The ability of this method to quantify lidocaine release from EVA films was also demonstrated. Extraction of lidocaine from plasma, peritoneal fluid and IAF followed by HPLC analysis confirmed the utility of this method for ex vivo and in vivo studies, where the calibration plot was found to be linear from 1.60 to 50.0 μg/mL. © Crown copyright 2017.
Sahoo, Madhusmita; Syal, Pratima; Hable, Asawaree A.; Raut, Rahul P.; Choudhari, Vishnu P.; Kuchekar, Bhanudas S.
2011-01-01
Aim: To develop a simple, precise, rapid and accurate HPTLC method for the simultaneous estimation of Lornoxicam (LOR) and Thiocolchicoside (THIO) in bulk and pharmaceutical dosage forms. Materials and Methods: The separation of the active compounds from pharmaceutical dosage form was carried out using methanol:chloroform:water (9.6:0.2:0.2 v/v/v) as the mobile phase and no immiscibility issues were found. The densitometric scanning was carried out at 377 nm. The method was validated for linearity, accuracy, precision, LOD (Limit of Detection), LOQ (Limit of Quantification), robustness and specificity. Results: The Rf values (±SD) were found to be 0.84 ± 0.05 for LOR and 0.58 ± 0.05 for THIO. Linearity was obtained in the range of 60–360 ng/band for LOR and 30–180 ng/band for THIO with correlation coefficients r2 = 0.998 and 0.999, respectively. The percentage recovery for both the analytes was in the range of 98.7–101.2 %. Conclusion: The proposed method was optimized and validated as per the ICH guidelines. PMID:23781452
Liquid Chromatography-Mass Spectrometry-based Quantitative Proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Fang; Liu, Tao; Qian, Weijun
2011-07-22
Liquid chromatography-mass spectrometry (LC-MS)-based quantitative proteomics has become increasingly applied for a broad range of biological applications due to growing capabilities for broad proteome coverage and good accuracy in quantification. Herein, we review the current LC-MS-based quantification methods with respect to their advantages and limitations, and highlight their potential applications.
Kim, Jaai; Lim, Juntaek; Lee, Changsoo
2013-12-01
Quantitative real-time PCR (qPCR) has been widely used in recent environmental microbial ecology studies as a tool for detecting and quantifying microorganisms of interest, which aids in better understandings of the complexity of wastewater microbial communities. Although qPCR can be used to provide more specific and accurate quantification than other molecular techniques, it does have limitations that must be considered when applying it in practice. This article reviews the principle of qPCR quantification and its applications to microbial ecology studies in various wastewater treatment environments. Here we also address several limitations of qPCR-based approaches that can affect the validity of quantification data: template nucleic acid quality, nucleic acid extraction efficiency, specificity of group-specific primers and probes, amplification of nonviable DNA, gene copy number variation, and limited number of sequences in the database. Even with such limitations, qPCR is reportedly among the best methods for quantitatively investigating environmental microbial communities. The application of qPCR is and will continue to be increasingly common in studies of wastewater treatment systems. To obtain reliable analyses, however, the limitations that have often been overlooked must be carefully considered when interpreting the results. Copyright © 2013 Elsevier Inc. All rights reserved.
Sobanska, Anna W.; Pyzowski, Jaroslaw
2012-01-01
Ethylhexyl triazone (ET) was separated from other sunscreens such as avobenzone, octocrylene, octyl methoxycinnamate, and diethylamino hydroxybenzoyl hexyl benzoate and from parabens by normal-phase HPTLC on silica gel 60 as stationary phase. Two mobile phases were particularly effective: (A) cyclohexane-diethyl ether 1:1 (v/v) and (B) cyclohexane-diethyl ether-acetone 15:1:2 (v/v/v), since apart from ET analysis they facilitated separation and quantification of other sunscreens present in the formulations. Densitometric scanning was performed at 300 nm. Calibration curves for ET were nonlinear (second-degree polynomials), with R > 0.998. For both mobile phases, the limit of detection (LOD) was 0.03 μg spot⁻¹ and the limit of quantification (LOQ) 0.1 μg spot⁻¹. Both methods were validated. PMID:22629203
Kootenai River Fisheries Investigations : Rainbow Trout Recruitment : Period Covered: 1997.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Downs, Chris
1999-02-02
The objective of this study was to determine if juvenile production is limiting the population of rainbow trout Oncorhynchus mykiss in the Idaho reach of the Kootenai River. We used snorkeling and electrofishing techniques to estimate juvenile rainbow trout abundance in, and outmigration from, the Deep, Boulder, and Myrtle creek drainages in Idaho. The total population estimates for the three drainages in 1997 were 30,023; 763; and 235, respectively. A rotary-screw trap was utilized to capture juvenile outmigrants for quantification of age at outmigration and total outmigration from the Deep Creek drainage to the Kootenai River. The total outmigrant estimate for 1997 from the Deep Creek drainage was 38,206 juvenile rainbow trout. Age determination based largely on scales suggests that most juvenile rainbow trout outmigration from the Deep Creek drainage occurs at age-1, during the spring runoff period. Forty-three adult rainbow trout captured in the Deep Creek drainage were tagged with $10.00 reward T-bar anchor tags in 1997. Three of these fish were harvested, all in Kootenay Lake, British Columbia. This suggests the possibility of an adfluvial component in the spawning population of the Deep Creek drainage.
Price, B; Gomez, A; Mathys, L; Gardi, O; Schellenberger, A; Ginzler, C; Thürig, E
2017-03-01
Trees outside forest (TOF) can perform a variety of social, economic and ecological functions, including carbon sequestration. However, detailed quantification of tree biomass is usually limited to forest areas. Taking advantage of structural information available from stereo aerial imagery and airborne laser scanning (ALS), this research models tree biomass using national forest inventory data and linear least-squares regression, and applies the model both inside and outside of forest to create a nationwide model for tree biomass (above ground and below ground). Validation of the tree biomass model against TOF data within settlement areas shows relatively low model performance (R² of 0.44) but still a considerable improvement on current biomass estimates used for greenhouse gas inventory and carbon accounting. We demonstrate an efficient and easily implementable approach to modelling tree biomass across a large, heterogeneous nationwide area. The model offers significant opportunity for improved estimates in land-use combination categories (CC) where tree biomass has either not been included or only roughly estimated until now. The ALS biomass model also offers the advantage of greater spatial resolution and greater within-CC spatial variability compared to the current nationwide estimates.
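The nationwide model described above is, at its core, a linear least-squares regression of biomass on structural predictors. A minimal sketch in Python, assuming hypothetical ALS-derived predictors (mean canopy height and canopy cover) and synthetic data rather than the study's actual inventory variables:

```python
import numpy as np

# Hypothetical ALS-derived predictors and synthetic "true" biomass response.
# Coefficients and data are illustrative, not taken from the study.
rng = np.random.default_rng(0)
height = rng.uniform(5, 30, 50)    # mean canopy height (m)
cover = rng.uniform(10, 90, 50)    # canopy cover (%)
biomass = 4.0 * height + 0.5 * cover + rng.normal(0, 5, 50)  # Mg/ha, synthetic

# Linear least-squares fit: biomass ~ b0 + b1*height + b2*cover
X = np.column_stack([np.ones_like(height), height, cover])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)

# Goodness of fit (coefficient of determination)
pred = X @ coef
r2 = 1 - np.sum((biomass - pred) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
```

The fitted model can then be applied to ALS metrics computed anywhere on the landscape, inside or outside forest, which is the essence of the nationwide extrapolation.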
Frey, H Christopher; Zhao, Yuchao
2004-11-15
Probabilistic emission inventories were developed for urban air toxic emissions of benzene, formaldehyde, chromium, and arsenic for the example of Houston. Variability and uncertainty in emission factors were quantified for 71-97% of total emissions, depending upon the pollutant and data availability. Parametric distributions for interunit variability were fit using maximum likelihood estimation (MLE), and uncertainty in mean emission factors was estimated using parametric bootstrap simulation. For data sets containing one or more nondetected values, empirical bootstrap simulation was used to randomly sample detection limits for nondetected values and observations for sample values, and parametric distributions for variability were fit using MLE estimators for censored data. The goodness-of-fit for censored data was evaluated by comparing cumulative distributions of bootstrap confidence intervals against the empirical data. The emission inventory 95% uncertainty ranges span from as little as -25% to +42% for chromium to as much as -75% to +224% for arsenic with correlated surrogates. Uncertainty was dominated by only a few source categories. Recommendations are made for future improvements to the analysis.
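The parametric bootstrap step described above can be illustrated in simplified form. This sketch estimates the uncertainty of a mean emission factor assuming lognormal inter-unit variability and synthetic, fully detected data; the paper's censored-data MLE refinement is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic emission-factor sample with lognormal inter-unit variability
# (distribution and sample size are illustrative).
data = rng.lognormal(mean=0.0, sigma=1.0, size=40)

# Lognormal MLE has a closed form: mean and std of the log data.
mu_hat, sigma_hat = np.log(data).mean(), np.log(data).std(ddof=0)

# Parametric bootstrap: resample from the fitted distribution, refit,
# and record the implied mean of each refitted distribution.
boot_means = []
for _ in range(2000):
    sample = rng.lognormal(mu_hat, sigma_hat, size=len(data))
    m, s = np.log(sample).mean(), np.log(sample).std(ddof=0)
    boot_means.append(np.exp(m + s**2 / 2))  # mean of a lognormal(m, s)

# 95% confidence interval on the mean emission factor
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

The spread of `boot_means` is what propagates into the probabilistic inventory as uncertainty in the mean emission factor.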
Mujawar, Sumaiyya; Utture, Sagar C; Fonseca, Eddie; Matarrita, Jessie; Banerjee, Kaushik
2014-05-01
A sensitive and rugged residue analysis method was validated for the estimation of dithiocarbamate fungicides in a variety of fruit and vegetable matrices. The sample preparation method involved reaction of dithiocarbamates with tin(II) chloride in aqueous HCl. The CS2 produced was absorbed into an isooctane layer and estimated by GC-MS selected ion monitoring. The limit of quantification (LOQ) was ≤40 μg kg⁻¹ for grape, green chilli, tomato, potato, brinjal, pineapple and chayote, and the recoveries were within 75-104% (RSD < 15% at LOQ). The method could be satisfactorily applied for analysis of real-world samples. Dissipation of mancozeb, the most-used dithiocarbamate fungicide, in the field followed first + first order kinetics, with pre-harvest intervals of 2 and 4 days in brinjal, 7 and 10 days in grapes and 0 days in chilli at single and double doses of agricultural application. Cooking practices were effective for removal of mancozeb residues from vegetables. Copyright © 2013 Elsevier Ltd. All rights reserved.
Daniele, Gaëlle; Lafay, Florent; Pelosi, Céline; Fritsch, Clémentine; Vulliet, Emmanuelle
2018-06-04
Agricultural intensification, and in particular the use of pesticides, leads over the years to a loss of biodiversity and a decline of ecosystem services in cultivated zones and agricultural landscapes. Among the animal communities involved in the functioning of agro-ecosystems, earthworms are ubiquitous and recognized as indicators of land uses and cultural practices. However, little data is available on the levels of pesticides in such organisms in natura, which would allow estimating their actual exposure and the potentially resulting impacts. Thus, the objective of this study was to develop a sensitive analytical methodology to detect and quantify 27 currently used pesticides in earthworms (Allolobophora chlorotica). A modified QuEChERS extraction was implemented on individual earthworms. This step was followed by liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS). The whole analytical method was validated on spiked earthworm blank samples with regard to linearity (from 1 to 100 times the method limit of quantification, r² > 0.95), intra-day precision (relative standard deviation (RSD) < 15%), inter-day precision (RSD < 20%), recoveries (mainly in the range 70-110%), and limits of detection and quantification (below 5 ng/g for most of the pesticides). The developed method was successfully applied to determine the concentrations of pesticides in nine individuals collected in natura. Up to five of the selected pesticides were detected in a single individual.
Bayesian flood forecasting methods: A review
NASA Astrophysics Data System (ADS)
Han, Shasha; Coulibaly, Paulin
2017-08-01
Over the past few decades, floods have been among the most common and widely distributed natural disasters in the world. If floods could be accurately forecasted in advance, their negative impacts could be greatly minimized. It is widely recognized that quantification and reduction of the uncertainty associated with hydrologic forecasts is of great importance for flood estimation and rational decision making. The Bayesian forecasting system (BFS) offers an ideal theoretical framework for uncertainty quantification that can be developed for probabilistic flood forecasting via any deterministic hydrologic model. It provides a suitable theoretical structure, empirically validated models and reasonable analytic-numerical computation methods, and can be developed into various Bayesian forecasting approaches. This paper presents a comprehensive review of Bayesian forecasting approaches applied to flood forecasting from 1999 to the present. The review starts with an overview of the fundamentals of BFS and recent advances in BFS, followed by BFS applications in river stage forecasting and real-time flood forecasting; it then moves to a critical analysis evaluating the advantages and limitations of Bayesian forecasting methods and other predictive uncertainty assessment approaches in flood forecasting, and finally discusses future research directions in Bayesian flood forecasting. Results show that the Bayesian flood forecasting approach is an effective and advanced way to perform flood estimation: it considers all sources of uncertainty and produces a predictive distribution of the river stage, river discharge or runoff, thus giving more accurate and reliable flood forecasts. Some emerging Bayesian forecasting methods (e.g. the ensemble Bayesian forecasting system and Bayesian multi-model combination) were shown to overcome the limitations of a single model or fixed model weights and to effectively reduce predictive uncertainty.
In recent years, various Bayesian flood forecasting approaches have been developed and widely applied, but there is still room for improvement. Future research in the context of Bayesian flood forecasting should focus on the assimilation of newly available sources of information and on improving predictive performance assessment methods.
Contractor, Kaiyumars B; Kenny, Laura M; Coombes, Charles R; Turkheimer, Federico E; Aboagye, Eric O; Rosso, Lula
2012-03-24
Quantification of kinetic parameters of positron emission tomography (PET) imaging agents normally requires collecting arterial blood samples, which is inconvenient for patients and difficult to implement in routine clinical practice. The aim of this study was to investigate whether a population-based input function (POP-IF) reliant on only a few individual discrete samples allows accurate estimates of tumour proliferation using [18F]fluorothymidine (FLT). Thirty-six historical FLT-PET datasets with concurrent arterial sampling were available for this study. A population average of baseline-scan blood data was constructed using leave-one-out cross-validation for each scan and used in conjunction with individual blood samples. Three limited sampling protocols were investigated, including, respectively, only seven (POP-IF7), five (POP-IF5) and three (POP-IF3) discrete samples of the historical dataset. Additionally, using the three-point protocol, we derived POP-IF3M, the only input function not corrected for the fraction of radiolabelled metabolites present in blood. The kinetic parameter for net FLT retention at steady state, Ki, was derived using the modified Patlak plot and compared with the original full arterial set for validation. Small percentage differences in the area under the curve between all the POP-IFs and the full arterial sampling IF were found over 60 min (4.2%-5.7%), while there were, as expected, larger differences in peak position and peak height. A high correlation between Ki values calculated using the original arterial input function and all the population-derived IFs was observed (R² = 0.85-0.98). The population-based input showed good intra-subject reproducibility of Ki values (R² = 0.81-0.94) and good correlation (R² = 0.60-0.85) with Ki-67. Input functions generated using these simplified protocols over a scan duration of 60 min estimate net PET-FLT retention with reasonable accuracy.
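The Patlak analysis used above reduces to a linear regression once the tissue and plasma curves are transformed. A minimal sketch with a synthetic, noise-free example; the input function, Ki and V0 values are illustrative, not taken from the study:

```python
import numpy as np

t = np.linspace(0.1, 60, 120)            # minutes
cp = np.exp(-0.05 * t)                   # hypothetical plasma input function
ki_true, v0 = 0.05, 0.3                  # illustrative net uptake rate and offset

# Trapezoidal integral of the plasma curve
int_cp = np.concatenate([[0], np.cumsum((cp[1:] + cp[:-1]) / 2 * np.diff(t))])
ct = ki_true * int_cp + v0 * cp          # tissue curve exactly under the Patlak model

# Patlak plot: y = Ct/Cp vs x = integral(Cp)/Cp; slope = Ki, intercept = V0
x = int_cp / cp
y = ct / cp
A = np.column_stack([x, np.ones_like(x)])
ki_est, v0_est = np.linalg.lstsq(A, y, rcond=None)[0]
```

With real data, `cp` would come from the arterial (or population-based) input function and `ct` from the PET time-activity curve, and only the late, linear portion of the plot would be fitted.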
Uncertainty Quantification for Robust Control of Wind Turbines using Sliding Mode Observer
NASA Astrophysics Data System (ADS)
Schulte, Horst
2016-09-01
A new quantification method of uncertain models for robust wind turbine control using sliding-mode techniques is presented, with the objective of improving active load mitigation. This approach is based on the so-called equivalent output injection signal, which corresponds to the average behavior of the discontinuous switching term establishing and maintaining a motion on a so-called sliding surface. The injection signal is directly evaluated to obtain estimates of the uncertainty bounds of external disturbances and parameter uncertainties. The applicability of the proposed method is illustrated by the quantification of a four-degree-of-freedom model of the NREL 5MW reference turbine containing uncertainties.
Shah, Umang; Patel, Shraddha; Raval, Manan
2018-01-01
High performance liquid chromatography is an integral analytical tool in assessing drug product stability. HPLC methods should be able to separate, detect, and quantify the various drug-related degradants that can form on storage or manufacturing, plus detect any drug-related impurities that may be introduced during synthesis. A simple, economic, selective, precise, and stability-indicating HPLC method has been developed and validated for analysis of Rifampicin (RIFA) and Piperine (PIPE) in bulk drug and in the formulation. Reversed-phase chromatography was performed on a C18 column with buffer (potassium dihydrogen orthophosphate, pH 6.5) and acetonitrile (30:70, % v/v) as mobile phase at a flow rate of 1 mL min⁻¹. The detection was performed at 341 nm and sharp peaks were obtained for RIFA and PIPE at retention times of 3.3 ± 0.01 min and 5.9 ± 0.01 min, respectively. The detection limits were found to be 2.385 ng/ml and 0.107 ng/ml, and the quantification limits 7.228 ng/ml and 0.325 ng/ml, for RIFA and PIPE, respectively. The method was validated for accuracy, precision, reproducibility, specificity, robustness, and detection and quantification limits, in accordance with ICH guidelines. A stress study was performed on RIFA and PIPE and it was found that both degraded sufficiently under all applied chemical and physical conditions. Thus, the developed RP-HPLC method was found to be suitable for the determination of both drugs in bulk as well as in stability samples of capsules containing various excipients. Copyright © Bentham Science Publishers.
Li, Chunmei; Jin, Fen; Yu, Zhiyong; Qi, Yamei; Shi, Xiaomei; Wang, Miao; Shao, Hua; Jin, Maojun; Wang, Jing; Yang, Mingqi
2012-07-11
A rapid method for analyzing trace levels of chlormequat (CQ) in meat samples by hydrophilic interaction liquid chromatography (HILIC)-electrospray tandem mass spectrometry was developed. The samples were extracted with acetonitrile, followed by a rapid cleanup through a dispersive solid-phase extraction (DSPE) technique with octadecyl (C18) DSPE sorbents. The chromatographic separation was achieved within 6 min using a HILIC column with 10 mM ammonium acetate and 0.1% (v/v) formic acid in water/acetonitrile (40:60, v/v) as the mobile phase. Quantification was performed using a matrix-matched calibration curve, which was linear in the range of 0.05-100 μg/L. The limit of detection (LOD) was estimated at 0.03 μg/kg for CQ on the basis of a signal-to-noise ratio of 3 (S/N = 3). The limit of quantification (LOQ) was 0.1 μg/kg on the basis of the lowest spiked concentration with suitable precision and accuracy. The average recovery of CQ in spiked meat samples was 86.4-94.7% at 2, 20, and 200 μg/kg. Finally, this method was applied to determine CQ in livestock and poultry meats purchased from markets in Beijing in 2011. CQ was detected in all 12 samples, at concentrations of 0.4-636.0 μg/kg. Concentrations in a chicken sample (636.0 μg/kg) and a goat meat sample (486.0 μg/kg) were found to be 15.9 and 2.43 times the corresponding Codex maximum residue limits, respectively.
1H NMR quantification in very dilute toxin solutions: application to anatoxin-a analysis.
Dagnino, Denise; Schripsema, Jan
2005-08-01
A complete procedure is described for the extraction, detection and quantification of anatoxin-a in biological samples. Anatoxin-a is extracted from biomass by a routine acid-base extraction. The extract is analysed by GC-MS, without the need for derivatization, with a detection limit of 0.5 ng. A method was developed for the accurate quantification of anatoxin-a in the standard solution to be used for the calibration of the GC analysis. 1H NMR allowed the accurate quantification of microgram quantities of anatoxin-a. The accurate quantification of compounds in standard solutions is rarely discussed, but for compounds like anatoxin-a (toxins with prices in the range of a million dollars a gram), of which generally only milligram quantities or less are available, this factor in the quantitative analysis is certainly not trivial. The method that was developed can easily be adapted for the accurate quantification of other toxins in very dilute solutions.
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M
2011-07-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers from limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses.
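Once PSM scores have been converted to posterior probabilities, the false discovery rate at a given acceptance threshold can be estimated as the average posterior error probability among accepted PSMs. A simplified sketch of that final step (not MSblender's full score-combination model):

```python
import numpy as np

def estimated_fdr(probs, threshold):
    """Estimated FDR among PSMs accepted at a posterior-probability threshold:
    the average of (1 - p), i.e. the posterior error, over accepted PSMs."""
    accepted = probs[probs >= threshold]
    if accepted.size == 0:
        return 0.0
    return float(np.mean(1.0 - accepted))

# Illustrative posterior probabilities for six PSMs (not real data)
probs = np.array([0.99, 0.95, 0.90, 0.80, 0.60, 0.30])
fdr_at_090 = estimated_fdr(probs, 0.90)
```

Sweeping the threshold and picking the lowest one whose estimated FDR stays under a target (e.g. 1%) is how such probabilities translate into a controlled identification list.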
Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I.; Marcotte, Edward M.
2011-01-01
Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers from limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for all possible PSMs and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. Increased identifications increment spectral counts for all detected proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that enhanced quantification contributes to improve sensitivity in differential expression analyses. PMID:21488652
Development of an analytical method for the determination of anthracyclines in hospital effluents.
Mahnik, Susanne N; Rizovski, Blanka; Fuerhacker, Maria; Mader, Robert M
2006-11-01
Little is known about the fate of cytostatics after their elimination from humans into the environment. Being often very toxic compounds, their quantification in hospital effluents may be necessary to individualise the putative magnitude of pollution problems. We therefore developed a method for the determination of the very important group of anthracyclines (doxorubicin, epirubicin, and daunorubicin) in hospital effluents. Waste water samples were enriched by solid phase extraction (concentration factor 100), analysed by reversed-phase high performance liquid chromatography (RP-HPLC), and monitored by fluorescence detection. This method is reproducible and accurate within a range of 0.1-5 µg l⁻¹ for all compounds (limits of quantification: 0.26-0.29 µg l⁻¹; recoveries > 80%). The applicability of the method was proven by chemical analysis of hospital sewage samples (range: 0.1-1.4 µg l⁻¹ epirubicin and 0.1-0.5 µg l⁻¹ doxorubicin). Obtained over a time period of one month, the results were in line with those calculated by an input-output model. These investigations show that the examined cytostatics are easily detectable and that the presented method is suitable to estimate the dimension of pharmaceutical contamination originating from hospital effluents.
NASA Astrophysics Data System (ADS)
Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.
2017-02-01
CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model-based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration time curves (CTC). The second is a fast, accurate deconvolution method, which we call Analytical Fourier Filtering (AFF). The third is another fast, accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution-based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
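The SVD-based deconvolution that oSVD refines can be sketched as follows: build a convolution matrix from the arterial input function, then invert it while suppressing small singular values. A minimal noise-free Python illustration; the curves and the truncation threshold are illustrative, and this is plain truncated SVD, not the paper's AFF/ASSF methods:

```python
import numpy as np

dt = 1.0
n = 40
t = np.arange(1, n + 1) * dt
aif = t * np.exp(-t / 3.0)               # hypothetical arterial input function
irf_true = np.exp(-t / 8.0)              # hypothetical flow-scaled residue function

# Lower-triangular (causal) convolution matrix built from the AIF
A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                   for i in range(n)])
ctc = A @ irf_true                        # noise-free tissue concentration curve

# Truncated SVD deconvolution: discard singular values below a relative cutoff
U, s, Vt = np.linalg.svd(A)
keep = s > 0.1 * s[0]                     # illustrative 10% cutoff
s_inv = np.zeros_like(s)
s_inv[keep] = 1.0 / s[keep]
irf_est = Vt.T @ (s_inv * (U.T @ ctc))

cbf_est = irf_est.max()                   # blood flow ~ peak of the residue function
```

With noisy clinical data the cutoff (or oSVD's oscillation index) trades noise amplification against underestimation of flow, which is exactly the regularization the faster AFF/ASSF schemes reformulate.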
Krupčík, Ján; Májek, Pavel; Gorovenko, Roman; Blaško, Jaroslav; Kubinec, Robert; Sandra, Pat
2015-05-29
Methods based on the blank signal, as proposed by the IUPAC procedure, and on the signal-to-noise ratio (S/N), as listed in the ISO 11843-1 norm, for determination of the limit of detection (LOD) and the limit of quantitation (LOQ) in one-dimensional capillary gas chromatography (1D-GC) and comprehensive two-dimensional capillary gas chromatography (GC×GC) are described in detail and compared for both techniques. Flame ionization detection was applied, and the variables were the data acquisition frequency and, for GC×GC, also the modulation time. It was found that the LOD and LOQ estimated according to IUPAC can be successfully used for the 1D-GC-FID method. Moreover, LOD and LOQ decrease with decreasing data acquisition frequency (DAF). For GC×GC-FID, estimation of the LOD by the IUPAC procedure gave poor reproducibility of results, while for the LOQ reproducibility was acceptable (within ±10% rel.). The LOD and LOQ determined by the S/N concept for both the 1D-GC-FID and GC×GC-FID methods are ca. three times higher than the values estimated from the standard deviation of the blank. Since the distribution pattern of modulated peaks for any analyte separated by GC×GC is random and cannot be predicted, LOQ and LOD may vary within 30% for a 3 s modulation time. Concerning sensitivity, GC×GC-FID at 50 Hz shows a ca. 5-fold enhancement of sensitivity in the modulated signal output compared with 1D-GC-FID at 2 Hz. Copyright © 2015 Elsevier B.V. All rights reserved.
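The two conventions compared above differ only in which noise estimate they scale: the standard deviation of the blank (IUPAC) or the baseline noise at the peak (S/N). A minimal numeric sketch, with all instrument figures illustrative:

```python
# Illustrative calibration and noise figures (not from the study)
slope = 120.0                 # peak area per (ng/µL), blank-based calibration
blank_sd = 1.5                # standard deviation of the blank signal (area units)
noise_pp = 6.0                # peak-to-peak baseline noise (height units)
peak_height_per_conc = 40.0   # peak height per (ng/µL)

# Blank-based (IUPAC-style): LOD = 3*s_blank/slope, LOQ = 10*s_blank/slope
lod_blank = 3 * blank_sd / slope
loq_blank = 10 * blank_sd / slope

# S/N-based: concentration giving S/N = 3 (LOD) and S/N = 10 (LOQ),
# taking noise as half the peak-to-peak excursion (one common convention)
noise = noise_pp / 2
lod_sn = 3 * noise / peak_height_per_conc
loq_sn = 10 * noise / peak_height_per_conc
```

With numbers like these the S/N-based limits come out higher than the blank-based ones, the same ordering the study reports (ca. three times higher); the actual ratio depends entirely on the instrument's noise characteristics.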
Tzonev, Svilen
2018-01-01
Current commercially available digital PCR (dPCR) systems and assays are capable of detecting individual target molecules with considerable reliability. As tests are developed and validated for use on clinical samples, the need to understand and develop robust statistical analysis routines increases. This chapter covers the fundamental processes and limitations of single-molecule detection and reporting. We cover the basics of target quantification and sources of imprecision. We describe the basic test concepts: sensitivity, specificity, limit of blank, limit of detection, and limit of quantification in the context of dPCR. We provide basic guidelines on how to determine these, how to choose and interpret the operating point, and what factors may influence overall test performance in practice.
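Target quantification in dPCR rests on Poisson statistics: the mean number of copies per partition follows from the fraction of partitions that come up negative. A minimal sketch, with partition count and volume chosen for illustration:

```python
import math

def dpcr_concentration(n_total, n_negative, partition_volume_ul):
    """Copies per µL from partition counts via Poisson single-molecule
    statistics: lambda = -ln(fraction negative) copies per partition."""
    lam = -math.log(n_negative / n_total)
    return lam / partition_volume_ul

# Illustrative run: 20,000 partitions of 0.85 nL each, 14,000 negative
conc = dpcr_concentration(20000, 14000, 0.85e-3)
```

The same Poisson model underlies the imprecision discussed above: near-all-negative or near-all-positive plates push the estimate toward its limits of blank and quantification, where the log term becomes unstable.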
NASA Astrophysics Data System (ADS)
Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.
2017-11-01
This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an alternative MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
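For a simple conjugate case, the Itô-SDE-based MCMC approach described above can be sketched directly: the overdamped Langevin SDE that is ergodic for the posterior is discretized with the implicit Euler method, which for a Gaussian target yields a linear update that can be solved exactly. This is a toy illustration, not the paper's metal-forming application:

```python
import numpy as np

# Toy target: a Gaussian "posterior" N(m, s2), for which the Langevin SDE
#   dX = -grad U(X) dt + sqrt(2) dW,  U(x) = (x - m)^2 / (2 s2)
# is ergodic. Step size and run length are illustrative.
m, s2 = 2.0, 0.5
h = 0.05
rng = np.random.default_rng(42)

x = 0.0
samples = []
for k in range(40000):
    xi = rng.standard_normal()
    # Implicit Euler: x' = x - h*(x' - m)/s2 + sqrt(2h)*xi.
    # The implicit equation is linear in x', so it is solved in closed form.
    x = (x + h * m / s2 + np.sqrt(2 * h) * xi) / (1 + h / s2)
    if k >= 5000:                 # discard burn-in
        samples.append(x)
samples = np.asarray(samples)
```

For nonlinear targets the implicit step requires a nonlinear solve per iteration, which is where the choice of the method's free parameters, discussed in the paper, becomes important; note also that any fixed-step Euler discretization samples the posterior only up to a step-size-dependent bias.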
Jin, Chan; Guan, Jibin; Zhang, Dong; Li, Bing; Liu, Hongzhuo; He, Zhonggui
2017-10-01
We present a technique to rapidly determine taxanes in blood samples by supercritical fluid chromatography coupled with mass spectrometry. The aim of this study was to develop a supercritical fluid chromatography with mass spectrometry method for the analysis of paclitaxel, cabazitaxel, and docetaxel in whole-blood samples of rats. Liquid-dry matrix spot extraction was selected as the sample preparation procedure. Supercritical fluid chromatography separation of paclitaxel, cabazitaxel, docetaxel, and glyburide (internal standard) was accomplished within 3 min by using a gradient mobile phase consisting of methanol as the compensation solvent and carbon dioxide at a flow rate of 1.0 mL/min. The method was validated regarding specificity, the lower limit of quantification, repeatability and reproducibility of quantification, extraction recovery, and matrix effects. The lower limit of quantification was found to be 10 ng/mL, since it exhibited acceptable precision and accuracy at the corresponding level. All interday accuracies and precisions were within the accepted criteria of ±15% of the nominal value and within ±20% at the lower limit of quantification, implying that the method is reliable and reproducible. In conclusion, this method is a promising tool to support and improve preclinical or clinical pharmacokinetic studies of the taxane anticancer drugs. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
New High Throughput Methods to Estimate Chemical ...
EPA has made many recent advances in high throughput bioactivity testing. However, concurrent advances in rapid, quantitative prediction of human and ecological exposures have been lacking, despite the clear importance of both measures for a risk-based approach to prioritizing and screening chemicals. A recent report by the National Research Council of the National Academies, Exposure Science in the 21st Century: A Vision and a Strategy (NRC 2012) laid out a number of applications in chemical evaluation of both toxicity and risk in critical need of quantitative exposure predictions, including screening and prioritization of chemicals for targeted toxicity testing, focused exposure assessments or monitoring studies, and quantification of population vulnerability. Despite these significant needs, for the majority of chemicals (e.g. non-pesticide environmental compounds) there are no or limited estimates of exposure. For example, exposure estimates exist for only 7% of the ToxCast Phase II chemical list. In addition, the data required for generating exposure estimates for large numbers of chemicals is severely lacking (Egeghy et al. 2012). This SAP reviewed the use of EPA's ExpoCast model to rapidly estimate potential chemical exposures for prioritization and screening purposes. The focus was on bounded chemical exposure values for people and the environment for the Endocrine Disruptor Screening Program (EDSP) Universe of Chemicals. In addition to exposure, the SAP
Russell, Matthew B.; D'Amato, Anthony W.; Schulz, Bethany K.; Woodall, Christopher W.; Domke, Grant M.; Bradford, John B.
2014-01-01
The contribution of understorey vegetation (UVEG) to forest ecosystem biomass and carbon (C) across diverse forest types has, to date, eluded quantification at regional and national scales. Efforts to quantify UVEG C have been limited to field-intensive studies or broad-scale modelling approaches lacking field measurements. Although large-scale inventories of UVEG C are not common, species- and community-level inventories of vegetation structure are available and may prove useful in quantifying UVEG C stocks. This analysis developed a general framework for estimating UVEG C stocks by employing per cent cover estimates of UVEG from a region-wide forest inventory coupled with an estimate of maximum UVEG C across the US Lake States (i.e. Michigan, Minnesota and Wisconsin). Estimates of UVEG C stocks from this approach reasonably align with expected C stocks in the study region, ranging from 0.86 ± 0.06 Mg ha⁻¹ in red pine-dominated to 1.59 ± 0.06 Mg ha⁻¹ in aspen/birch-dominated forest types. Although the data employed here were originally collected to assess broad-scale forest structure and diversity, this study proposes a framework for using UVEG inventories as a foundation for estimating C stocks in an often overlooked, yet important, ecosystem C pool.
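The framework above scales per cent cover against a maximum UVEG C value. The arithmetic is simple; a sketch with an assumed, purely illustrative maximum:

```python
def uveg_carbon(cover_percent, max_c_mg_ha):
    """Understorey C stock (Mg/ha) as per cent cover scaled against a
    maximum observed UVEG C value, per the study's framework.
    The maximum value passed in here is illustrative, not the study's."""
    return cover_percent / 100.0 * max_c_mg_ha

# A plot with 45% understorey cover, assuming a maximum of 2.5 Mg C/ha
c_stock = uveg_carbon(45.0, 2.5)
```

In practice the maximum would be estimated per region or forest type from field data, and plot-level estimates would then be aggregated across the inventory grid.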
Uncertainty Quantification and Statistical Convergence Guidelines for PIV Data
NASA Astrophysics Data System (ADS)
Stegmeir, Matthew; Kassen, Dan
2016-11-01
As Particle Image Velocimetry (PIV) has continued to mature, it has developed into a robust and flexible velocimetry technique used by expert and non-expert users alike. While historical estimates of PIV accuracy have typically relied heavily on "rules of thumb" and analysis of idealized synthetic images, increased emphasis has recently been placed on better quantifying real-world PIV measurement uncertainty. Multiple techniques have been developed to provide per-vector instantaneous uncertainty estimates for PIV measurements. Real-world experimental conditions often complicate the collection of "optimal" data, and the effect of these conditions is important to consider when planning an experimental campaign. The current work utilizes the results of PIV uncertainty quantification techniques to develop a framework for PIV users to employ estimated PIV confidence intervals in computing reliable data convergence criteria for optimal sampling of flow statistics. Results are compared using experimental and synthetic data, and recommended guidelines and procedures leveraging estimated PIV confidence intervals for efficient sampling toward converged statistics are provided.
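One way to turn per-vector uncertainty estimates into a sampling criterion is to require that the confidence half-width of a mean statistic fall below a tolerance, with measurement uncertainty added in quadrature to the flow fluctuation level. A sketch of that idea under assumptions not taken from the paper (independent samples, Gaussian statistics):

```python
import math

def samples_for_convergence(flow_std, meas_uncertainty, tolerance, z=1.96):
    """Number of statistically independent samples needed so the 95%
    confidence half-width of the mean velocity falls below `tolerance`.
    PIV measurement uncertainty is combined in quadrature with the true
    flow fluctuation level (an illustrative combination rule)."""
    sigma_total = math.sqrt(flow_std**2 + meas_uncertainty**2)
    return math.ceil((z * sigma_total / tolerance) ** 2)

# Illustrative numbers: 0.5 m/s turbulence level, 0.1 m/s PIV uncertainty,
# 0.02 m/s target half-width on the mean
n = samples_for_convergence(flow_std=0.5, meas_uncertainty=0.1, tolerance=0.02)
```

Correlated snapshots would need an effective sample size correction, and higher-order statistics converge more slowly, both of which a full framework must account for.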
Kapke, G E; Watson, G; Sheffler, S; Hunt, D; Frederick, C
1997-01-01
Several assays for quantification of DNA have been developed and are currently used in research and clinical laboratories. However, comparison of assay results has been difficult owing to the use of different standards and units of measurement as well as differences between assays in dynamic range and quantification limits. Although a few studies have compared results generated by different assays, there has been no consensus on conversion factors, and thorough analysis has been precluded by small sample sizes and the limited dynamic ranges studied. In this study, we have compared the Chiron branched DNA (bDNA) and Abbott liquid hybridization assays for quantification of hepatitis B virus (HBV) DNA in clinical specimens and have derived conversion factors to facilitate comparison of assay results. Additivity and variance stabilizing (AVAS) regression, a form of non-linear regression analysis, was performed on assay results for specimens from HBV clinical trials. Our results show that there is a strong linear relationship (R² = 0.96) between log Chiron and log Abbott assay results. Conversion factors derived from regression analyses were found to be non-constant and ranged from 6-40. Analysis of paired assay results below and above each assay's limit of quantification (LOQ) indicated that a significantly (P < 0.01) larger proportion of observations were below the Abbott assay LOQ but above the Chiron assay LOQ, indicating that the Chiron assay is significantly more sensitive than the Abbott assay. Testing of replicate specimens showed that the Chiron assay consistently yielded lower percent coefficients of variation (%CVs) than the Abbott assay, indicating that the Chiron assay provides superior precision.
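The log-log regression underlying such conversion factors can be illustrated with synthetic paired results (the data and the assumed slope and intercept here are purely hypothetical, not the trial data): because the fitted slope differs from 1, the derived conversion factor is non-constant and varies with viral load, as the abstract reports.

```python
import numpy as np

# Illustrative sketch only: synthetic paired HBV DNA results (not trial data).
rng = np.random.default_rng(1)
abbott = 10 ** rng.uniform(1, 6, size=200)            # hypothetical assay values
chiron = 10 ** (0.9 * np.log10(abbott) + 1.5          # assumed log-linear relation
                + rng.normal(0, 0.05, size=200))

# Linear fit in log-log space, in the spirit of the AVAS-style analysis above
slope, intercept = np.polyfit(np.log10(abbott), np.log10(chiron), 1)

# Because slope != 1, the conversion factor varies with the measured level
def conversion_factor(abbott_value):
    predicted_chiron = 10 ** (intercept + slope * np.log10(abbott_value))
    return predicted_chiron / abbott_value

print(round(slope, 2), round(conversion_factor(1e2), 1), round(conversion_factor(1e5), 1))
```

A slope below 1 makes the conversion factor shrink as the load increases, which is one way a 6-40 range of factors can arise from a single strong log-log fit.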
An estimation framework for building information modeling (BIM)-based demolition waste by type.
Kim, Young-Chan; Hong, Won-Hwa; Park, Jae-Woo; Cha, Gi-Wook
2017-12-01
Most existing studies on demolition waste (DW) quantification lack an official standard for estimating the amount and type of DW. Therefore, there are limitations in the existing literature for estimating DW with a consistent classification system. Building information modeling (BIM) is a technology that can generate and manage all the information required during the life cycle of a building, from design to demolition. Nevertheless, there has been a lack of research regarding its application to the demolition stage of a building. For an effective waste management plan, the estimation of the type and volume of DW should begin from the building design stage. However, the lack of tools hinders an early estimation. This study proposes a BIM-based framework that estimates DW in the early design stages, to achieve effective and streamlined planning, processing, and management. Specifically, the input of construction materials in the Korean construction classification system and those in the BIM library were matched. Based on this matching integration, the estimates of DW by type were calculated by applying the weight/unit volume factors and the rates of DW volume change. To verify the framework, its operation was demonstrated by means of an actual BIM model and by comparing its results with those available in the literature. This study is expected to contribute not only to the estimation of DW at the building level, but also to the automated estimation of DW at the district level.
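The core estimation step of such a framework, multiplying BIM material take-off volumes by weight/unit-volume factors and volume-change (bulking) rates, can be sketched as follows; all factor values here are assumed for illustration, not taken from the study or any standard.

```python
# Minimal sketch of the estimation step, with hypothetical factors:
# DW mass = in-place volume x unit weight; loose DW volume = volume x bulking rate.
UNIT_WEIGHT = {"concrete": 2.4, "brick": 1.9, "wood": 0.6}    # t/m3 (assumed values)
VOLUME_CHANGE = {"concrete": 1.4, "brick": 1.3, "wood": 1.2}  # bulking rates (assumed)

def estimate_dw(bim_quantities):
    """bim_quantities: {material: in-place volume in m3} taken off a BIM model."""
    out = {}
    for material, volume in bim_quantities.items():
        mass = volume * UNIT_WEIGHT[material]             # tonnes of waste
        loose_volume = volume * VOLUME_CHANGE[material]   # m3 after demolition
        out[material] = {"mass_t": round(mass, 2), "volume_m3": round(loose_volume, 2)}
    return out

print(estimate_dw({"concrete": 120.0, "brick": 35.0, "wood": 10.0}))
```

In an actual pipeline the material keys would come from the matched BIM library / Korean construction classification entries rather than a hand-written dictionary.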
Collender, Philip A.; Kirby, Amy E.; Addiss, David G.; Freeman, Matthew C.; Remais, Justin V.
2015-01-01
Limiting the environmental transmission of soil-transmitted helminths (STH), which infect 1.5 billion people worldwide, will require sensitive, reliable, and cost effective methods to detect and quantify STH in the environment. We review the state of the art of STH quantification in soil, biosolids, water, produce, and vegetation with respect to four major methodological issues: environmental sampling; recovery of STH from environmental matrices; quantification of recovered STH; and viability assessment of STH ova. We conclude that methods for sampling and recovering STH require substantial advances to provide reliable measurements for STH control. Recent innovations in the use of automated image identification and developments in molecular genetic assays offer considerable promise for improving quantification and viability assessment. PMID:26440788
Microwave-assisted extraction of green coffee oil and quantification of diterpenes by HPLC.
Tsukui, A; Santos Júnior, H M; Oigman, S S; de Souza, R O M A; Bizzo, H R; Rezende, C M
2014-12-01
The microwave-assisted extraction (MAE) of oil from 13 different green coffee beans (Coffea arabica L.) was compared with Soxhlet extraction. A full factorial design applied to the MAE time and temperature parameters allowed the development of a fast and mild methodology (10 min at 45°C) compared with a 4 h Soxhlet extraction. The quantification of the cafestol and kahweol diterpenes present in the coffee oil was monitored by HPLC/UV and showed satisfactory linearity (R²=0.9979), precision (CV 3.7%), recovery (<93%), limit of detection (0.0130 mg/mL), and limit of quantification (0.0406 mg/mL). The space-time yield calculated on the diterpene content for sample AT1 (Arabica green coffee) was six times that of the traditional Soxhlet method. Copyright © 2014 Elsevier Ltd. All rights reserved.
Jeong, Hyun Cheol; Hong, Hee-Do; Kim, Young-Chan; Rhee, Young Kyoung; Choi, Sang Yoon; Kim, Kyung-Tack; Kim, Sung Soo; Lee, Young-Chul; Cho, Chang-Won
2015-01-01
Background: Maltol, a phenolic compound, is produced by the browning reaction during the high-temperature treatment of ginseng. Thus, maltol can be used as a marker for the quality control of various ginseng products manufactured by high-temperature treatment, including red ginseng. For the quantification of maltol in Korean ginseng products, an effective high-performance liquid chromatography-diode array detector (HPLC-DAD) method was developed. Materials and Methods: The HPLC-DAD method for maltol quantification coupled with a liquid-liquid extraction (LLE) method was developed and validated in terms of linearity, precision, and accuracy. HPLC separation was performed on a C18 column. Results: The LLE methods and HPLC running conditions for maltol quantification were optimized. The calibration curve of maltol exhibited good linearity (R² = 1.00). The limit of detection value of maltol was 0.26 μg/mL, and the limit of quantification value was 0.79 μg/mL. The relative standard deviations (RSDs) of the data of the intra- and inter-day experiments were <1.27% and 0.61%, respectively. The results of the recovery test were 101.35-101.75% with an RSD value of 0.21-1.65%. The developed method was applied successfully to quantify the maltol in three ginseng products manufactured by different methods. Conclusion: The results of validation demonstrated that the proposed HPLC-DAD method was useful for the quantification of maltol in various ginseng products. PMID:26246746
Paul B. Alaback; Duncan C. Lutes
1997-01-01
Methods for the quantification of coarse woody debris volume and the description of spatial patterning were studied in the Tenderfoot Creek Experimental Forest, Montana. The line transect method was found to be an accurate, unbiased estimator of down debris volume (>10 cm diameter) on 1/4 hectare fixed-area plots, when perpendicular lines were used. The Fischer...
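For context, the line transect (line-intersect) estimator referred to above is commonly written in Van Wagner's form, with volume in m³/ha when intersected piece diameters d are in cm and transect length L is in m. This is the standard formula, sketched here rather than taken from the study itself.

```python
import math

def cwd_volume_per_ha(diameters_cm, transect_length_m):
    """Van Wagner line-intersect estimator: V (m3/ha) = pi^2 * sum(d^2) / (8 L),
    with piece diameters d in cm at the transect crossing and length L in m."""
    return math.pi ** 2 * sum(d * d for d in diameters_cm) / (8 * transect_length_m)

# e.g. five pieces >10 cm in diameter crossed on a 100 m transect
print(round(cwd_volume_per_ha([12, 15, 22, 30, 45], 100.0), 1))
```

The unit bookkeeping works out because d²(cm²) carries a factor 10⁻⁴ to m² that exactly cancels the 10⁴ m² per hectare.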
Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems
NASA Technical Reports Server (NTRS)
He, Yuning; Davies, Misty Dawn
2014-01-01
The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.
Quantification of confocal images of biofilms grown on irregular surfaces
Ross, Stacy Sommerfeld; Tu, Mai Han; Falsetta, Megan L.; Ketterer, Margaret R.; Kiedrowski, Megan R.; Horswill, Alexander R.; Apicella, Michael A.; Reinhardt, Joseph M.; Fiegel, Jennifer
2014-01-01
Bacterial biofilms grow on many types of surfaces, including flat surfaces such as glass and metal and irregular surfaces such as rocks, biological tissues and polymers. While laser scanning confocal microscopy can provide high-resolution images of biofilms grown on any surface, quantification of biofilm-associated bacteria is currently limited to bacteria grown on flat surfaces. This can limit researchers studying irregular surfaces to qualitative analysis or quantification of only the total bacteria in an image. In this work, we introduce a new algorithm called modified connected volume filtration (MCVF) to quantify bacteria grown on top of an irregular surface that is fluorescently labeled or reflective. Using the MCVF algorithm, two new quantification parameters are introduced. The modified substratum coverage parameter enables quantification of the connected-biofilm bacteria on top of the surface and on the imaging substratum. The utility of MCVF and the modified substratum coverage parameter were shown with Pseudomonas aeruginosa and Staphylococcus aureus biofilms grown on human airway epithelial cells. A second parameter, the percent association, provides quantified data on the colocalization of the bacteria with a labeled component, including bacteria within a labeled tissue. The utility of quantifying the bacteria associated with the cell cytoplasm was demonstrated with Neisseria gonorrhoeae biofilms grown on cervical epithelial cells. This algorithm provides more flexibility and quantitative ability to researchers studying biofilms grown on a variety of irregular substrata. PMID:24632515
Salmona, Maud; Fourati, Slim; Feghoul, Linda; Scieux, Catherine; Thiriez, Aline; Simon, François; Resche-Rigon, Matthieu; LeGoff, Jérôme
2016-08-01
Accurate quantification of Epstein-Barr virus (EBV) load in blood is essential for the management of post-transplant lymphoproliferative disorders. The automation of DNA extraction and amplification may improve accuracy and reproducibility. We evaluated the EBV PCR Kit V1 with fully automated DNA extraction and amplification on the m2000 system (Abbott assay). Conversion factor between copies and international units (IU), lower limit of quantification, imprecision and linearity were determined in a whole blood (WB) matrix. Results from 339 clinical WB specimens were compared with a home-brew real-time PCR assay used in our laboratory (in-house assay). The conversion factor between copies and IU was 3.22 copies/IU. The lower limit of quantification (LLQ) was 1000 copies/mL. Intra- and inter-assay coefficients of variation were 3.1% and 7.9% respectively for samples with EBV load higher than the LLQ. The comparison between Abbott assay and in-house assay showed a good concordance (kappa = 0.77). Loads were higher with the Abbott assay (mean difference = 0.62 log10 copies/mL). The EBV PCR Kit V1 assay on the m2000 system provides a reliable and easy-to-use method for quantification of EBV DNA in WB. Copyright © 2016 Elsevier Inc. All rights reserved.
Single cell genomic quantification by non-fluorescence nonlinear microscopy
NASA Astrophysics Data System (ADS)
Kota, Divya; Liu, Jing
2017-02-01
Human epidermal growth factor receptor 2 (Her2) is a gene which plays a major role in breast cancer development. The quantification of Her2 expression in single cells is limited by several drawbacks in existing fluorescence-based single molecule techniques, such as low signal-to-noise ratio (SNR), strong autofluorescence and background signals from biological components. For rigorous genomic quantification, a robust method of orthogonal detection is highly desirable, and we demonstrated it by two non-fluorescent imaging techniques: transient absorption microscopy (TAM) and second harmonic generation (SHG). In TAM, gold nanoparticles (AuNPs) are chosen as orthogonal probes for detection of single molecules, which gives background-free quantification of single mRNA transcripts. In SHG, emission from barium titanium oxide (BTO) nanoprobes was demonstrated, which allows stable signal beyond the autofluorescence window. Her2 mRNA was specifically labeled with nanoprobes conjugated with antibodies or oligonucleotides and quantified at single copy sensitivity in cancer cells and tissues. Furthermore, a non-fluorescent super-resolution concept, named second harmonic super-resolution microscopy (SHaSM), was proposed to quantify individual Her2 transcripts in cancer cells beyond the diffraction limit. These non-fluorescent imaging modalities will provide new dimensions in biomarker quantification at single molecule sensitivity in turbid biological samples, offering a strong cross-platform strategy for clinical monitoring at single cell resolution.
Vu, Dai Long; Ranglová, Karolína; Hájek, Jan; Hrouzek, Pavel
2018-05-01
Quantification of selenated amino-acids currently relies on methods employing inductively coupled plasma mass spectrometry (ICP-MS). Although very accurate, these methods do not allow the simultaneous determination of standard amino-acids, hampering the comparison of the content of selenated versus non-selenated species such as methionine (Met) and selenomethionine (SeMet). This paper reports two approaches for the simultaneous quantification of Met and SeMet. In the first approach, standard enzymatic hydrolysis employing Protease XIV was applied for the preparation of samples. The second approach utilized methanesulfonic acid (MA) for the hydrolysis of samples, either in a reflux system or in a microwave oven, followed by derivatization with diethyl ethoxymethylenemalonate. The prepared samples were then analyzed by multiple reaction monitoring high performance liquid chromatography tandem mass spectrometry (MRM-HPLC-MS/MS). Both approaches provided platforms for the accurate determination of selenium/sulfur substitution rate in Met. Moreover the second approach also provided accurate simultaneous quantification of Met and SeMet with a low limit of detection, low limit of quantification and wide linearity range, comparable to the commonly used gas chromatography mass spectrometry (GC-MS) method or ICP-MS. The novel method was validated using certified reference material in conjunction with the GC-MS reference method. Copyright © 2018. Published by Elsevier B.V.
The Qiagen Investigator® Quantiplex HYres as an alternative kit for DNA quantification.
Frégeau, Chantal J; Laurin, Nancy
2015-05-01
The Investigator® Quantiplex HYres kit was evaluated as a potential replacement for dual DNA quantification of casework samples. This kit was determined to be highly sensitive, with a limit of quantification and limit of detection of 0.0049 ng/μL and 0.0003 ng/μL, respectively, for both human and male DNA, using full or half reaction volumes. It was also accurate in assessing the amount of male DNA present in 96 mock and actual casework male:female mixtures (various ratios) processed in this exercise. The close correlation between the male/human DNA ratios expressed in percentages derived from the Investigator® Quantiplex HYres quantification results and the male DNA proportion calculated in mixed AmpFlSTR® Profiler® Plus or AmpFlSTR® Identifiler® Plus profiles, using the Amelogenin Y peak and STR loci, allowed guidelines to be developed to facilitate decisions regarding when to submit samples to Y-STR rather than autosomal STR profiling. The internal control (IC) target was shown to be more sensitive to inhibitors than the human and male DNA targets included in the Investigator® Quantiplex HYres kit, serving as a good quality assessor of DNA extracts. The new kit met our criteria of enhanced sensitivity, accuracy, consistency, reliability and robustness for casework DNA quantification. Crown Copyright © 2015. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Reeves, Anthony P.; Yankelevitz, David F.; Henschke, Claudia I.; Barr, R. Graham
2009-02-01
Emphysema is a disease of the lungs that destroys the alveolar air sacs and induces long-term respiratory dysfunction. CT scans allow for the imaging of the anatomical basis of emphysema and quantification of the underlying disease state. Several measures have been introduced for the quantification of emphysema directly from CT data; most, however, are based on the analysis of density information provided by the CT scans, which varies by scanner and can be hard to standardize across sites and time. Given that one of the anatomical variations associated with the progression of emphysema is the flattening of the diaphragm due to the loss of elasticity in the lung parenchyma, curvature analysis of the diaphragm would provide information about emphysema from CT. Therefore, we propose a new, non-density based measure of the curvature of the diaphragm that would allow for further quantification methods in a robust manner. To evaluate the new method, 24 whole-lung scans were analyzed using the ratios of the lung height and diaphragm width to diaphragm height as curvature estimates, with the emphysema index as comparison. Pearson correlation coefficients showed a strong trend of several of the proposed diaphragm curvature measures to have higher correlations, of up to r=0.57, with DLCO% and VA than did the emphysema index. Furthermore, we found the emphysema index to have only a 0.27 correlation to the proposed measures, indicating that the proposed measures evaluate different aspects of the disease.
Limited-memory adaptive snapshot selection for proper orthogonal decomposition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill
2015-04-02
Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
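A greatly simplified sketch of error-controlled snapshot selection follows: a snapshot is retained only when its projection error onto the current POD basis exceeds a tolerance. The single-pass incremental SVD the paper uses to bound memory is omitted; the basis is instead enriched with normalized residual directions, so this illustrates only the error-control idea, not the authors' algorithm.

```python
import numpy as np

def adaptive_snapshots(states, tol):
    """Keep a snapshot only when its relative projection error onto the
    current basis exceeds tol (simplified sketch of the error-control idea)."""
    basis = None
    kept = 0
    for x in states:
        if basis is None:
            basis = (x / np.linalg.norm(x)).reshape(-1, 1)
            kept += 1
            continue
        residual = x - basis @ (basis.T @ x)   # projection error onto span(basis)
        if np.linalg.norm(residual) > tol * np.linalg.norm(x):
            # enrich the basis with the normalized residual direction
            basis = np.hstack([basis, (residual / np.linalg.norm(residual)).reshape(-1, 1)])
            kept += 1
    return basis, kept

# Slowly varying trajectory: most states are well represented by few modes,
# so only a handful of the 200 snapshots need to be retained.
t = np.linspace(0, 1, 200)
states = [np.array([np.sin(2 * np.pi * ti), np.cos(2 * np.pi * ti), ti]) for ti in t]
basis, kept = adaptive_snapshots(states, tol=1e-2)
print(basis.shape[1], kept, "of", len(states))
```

The error test plays the role of the adaptive time-stepper's error estimator: snapshots that the current basis already reconstructs to within tolerance cost no additional storage.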
Composable security proof for continuous-variable quantum key distribution with coherent states.
Leverrier, Anthony
2015-02-20
We give the first composable security proof for continuous-variable quantum key distribution with coherent states against collective attacks. Crucially, in the limit of large blocks the secret key rate converges to the usual value computed from the Holevo bound. Combining our proof with either the de Finetti theorem or the postselection technique then shows the security of the protocol against general attacks, thereby confirming the long-standing conjecture that Gaussian attacks are optimal asymptotically in the composable security framework. We expect that our parameter estimation procedure, which does not rely on any assumption about the quantum state being measured, will find applications elsewhere, for instance, for the reliable quantification of continuous-variable entanglement in finite-size settings.
Archetypal Analysis for Sparse Representation-Based Hyperspectral Sub-Pixel Quantification
NASA Astrophysics Data System (ADS)
Drees, L.; Roscher, R.
2017-05-01
This paper focuses on the quantification of land cover fractions in an urban area of Berlin, Germany, using simulated hyperspectral EnMAP data with a spatial resolution of 30 m × 30 m. For this, sparse representation is applied, where each pixel with unknown surface characteristics is expressed by a weighted linear combination of elementary spectra with known land cover class. The elementary spectra are determined from image reference data using simplex volume maximization, which is a fast heuristic technique for archetypal analysis. In the experiments, the estimation of class fractions based on the archetypal spectral library is compared to the estimation obtained by a manually designed spectral library by means of reconstruction error, mean absolute error of the fraction estimates, sum of fractions and the number of elementary spectra used. We show that a collection of archetypes can be an adequate and efficient alternative to the spectral library with respect to the criteria mentioned.
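The per-pixel fraction estimation described above amounts to a non-negative sparse regression against the elementary-spectra library. A toy sketch follows; the library, the pixel, and the soft sum-to-one weighting are illustrative assumptions, not the EnMAP setup.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, library, rho=10.0):
    """Non-negative unmixing of one pixel against an elementary-spectra
    library (columns of `library`). The sum-to-one constraint is imposed
    softly by appending a weighted row of ones, a common trick."""
    bands, n = library.shape
    A = np.vstack([library, rho * np.ones((1, n))])
    b = np.append(pixel, rho)
    weights, _ = nnls(A, b)
    return weights

# Toy 4-band library with three archetypes (purely illustrative spectra)
E = np.array([[0.1, 0.8, 0.3],
              [0.2, 0.7, 0.4],
              [0.9, 0.2, 0.5],
              [0.8, 0.1, 0.6]])
true_w = np.array([0.6, 0.3, 0.1])
pixel = E @ true_w          # synthesize a mixed pixel with known fractions
w = unmix(pixel, E)
print(np.round(w, 2), round(float(w.sum()), 2))
```

In the archetypal setting the columns of the library would be the simplex-volume-maximization archetypes, and class fractions would be obtained by summing the weights of library members sharing a land cover label.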
García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan
2009-02-01
An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.
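The effect described above can be reproduced directly: compute ApEn on an RR series at fine resolution and again after re-quantizing it at a coarse step. The ApEn implementation below is the standard one; the series, resolution step, and tolerance are illustrative, and the paper's SRN figure of merit is not reproduced here.

```python
import numpy as np

def apen(x, m=2, r_frac=0.2):
    """Standard approximate entropy with tolerance r = r_frac * SD(x).
    Self-matches are included, so the counts (and logs) are always finite."""
    x = np.asarray(x, dtype=float)
    r = r_frac * np.std(x)
    def phi(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
        c = np.mean(d <= r, axis=1)
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1)

rng = np.random.default_rng(2)
rr = 800 + 20 * rng.standard_normal(300)   # RR intervals in ms, low variability
rr_coarse = np.round(rr / 8) * 8           # same series re-quantized at 8 ms steps
print(round(apen(rr), 3), round(apen(rr_coarse), 3))
```

The two values differ even though the underlying signal is the same, which is exactly the resolution-induced bias the abstract analyzes; the size and sign of the discrepancy depend on how the tolerance r compares with the quantization step.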
Villoria Sáez, Paola; del Río Merino, Mercedes; Porras-Amores, César
2012-02-01
The management planning of construction and demolition (C&D) waste uses a single indicator, which does not provide enough detailed information. Therefore, more innovative and precise indicators should be determined and implemented. The aim of this research work is to improve existing C&D waste quantification tools in the construction of new residential buildings in Spain. For this purpose, several housing projects were studied to determine an estimation of C&D waste generated during their construction process. This paper determines the values of three indicators to estimate the generation of C&D waste in new residential buildings in Spain, itemizing types of waste and construction stages. The inclusion of two more accurate indicators, in addition to the global one commonly in use, provides a significant improvement in C&D waste quantification tools and management planning.
Fusion of spectral and electrochemical sensor data for estimating soil macronutrients
USDA-ARS's Scientific Manuscript database
Rapid and efficient quantification of plant-available soil phosphorus (P) and potassium (K) is needed to support variable-rate fertilization strategies. Two methods that have been used for estimating these soil macronutrients are diffuse reflectance spectroscopy in visible and near-infrared (VNIR) w...
Zacharis, Constantinos K; Vastardi, Elli
2018-02-20
In the research presented we report the development of a simple and robust liquid chromatographic method for the quantification of two genotoxic alkyl sulphonate impurities (namely methyl p-toluenesulfonate and isopropyl p-toluenesulfonate) in Aprepitant API substances using the Analytical Quality by Design (AQbD) approach. Following the steps of the AQbD protocol, the selected critical method attributes (CMAs) were the separation criteria between the critical peak pairs, the analysis time and the peak efficiencies of the analytes. The critical method parameters (CMPs) included the flow rate, the gradient slope and the acetonitrile content in the first step of the gradient elution program. Multivariate experimental designs, namely Plackett-Burman and Box-Behnken designs, were conducted sequentially for factor screening and optimization of the method parameters. The optimal separation conditions were estimated using the desirability function. The method was fully validated in the range of 10-200% of the target concentration limit of the analytes using the "total error" approach. Accuracy profiles - a graphical decision-making tool - were constructed using the results of the validation procedures. The β-expectation tolerance intervals did not exceed the acceptance criteria of ±10%, meaning that 95% of future results will be included in the defined bias limits. The relative bias ranged between -1.3% and 3.8% for both analytes, while the RSD values for repeatability and intermediate precision were less than 1.9% in all cases. The achieved limit of detection (LOD) and limit of quantification (LOQ) were adequate for the specific purpose and found to be 0.02% (corresponding to 48 μg g⁻¹ in sample) for both methyl and isopropyl p-toluenesulfonate. As proof of concept, the validated method was successfully applied in the analysis of several Aprepitant batches, indicating that this methodology could be used for routine quality control analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Khorram, S.; Smith, H. G.
1979-01-01
A remote sensing-aided procedure was applied to the watershed-wide estimation of water loss to the atmosphere (evapotranspiration, ET). The approach involved a spatially referenced databank based on both remotely sensed and ground-acquired information. Physical models for both estimation of ET and quantification of input parameters are specified, and results of the investigation are outlined.
ARM Best Estimate Data (ARMBE) Products for Climate Science for a Sustainable Energy Future (CSSEF)
Riihimaki, Laura; Gaustad, Krista; McFarlane, Sally
2014-06-12
This data set was created for the Climate Science for a Sustainable Energy Future (CSSEF) model testbed project and is an extension of the hourly average ARMBE dataset to other extended facility sites and to include uncertainty estimates. Uncertainty estimates were needed in order to use uncertainty quantification (UQ) techniques with the data.
Takabatake, Reona; Akiyama, Hiroshi; Sakata, Kozue; Onishi, Mari; Koiwa, Tomohiro; Futo, Satoshi; Minegishi, Yasutaka; Teshima, Reiko; Mano, Junichi; Furui, Satoshi; Kitta, Kazumi
2011-01-01
A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) soybean event, A2704-12. During the plant transformation, DNA fragments derived from the pUC19 plasmid were integrated in A2704-12, and the region was found to be A2704-12 specific. The pUC19-derived DNA sequences were used as primers for the specific detection of A2704-12. We first tried to construct a standard plasmid for A2704-12 quantification using pUC19. However, non-specific signals appeared in both qualitative and quantitative PCR analyses using the specific primers with pUC19 as a template, and we then constructed a plasmid using pBR322. The conversion factor (Cf), which is required to calculate the amount of the genetically modified organism (GMO), was experimentally determined with two real-time PCR instruments, the Applied Biosystems 7900HT and the Applied Biosystems 7500. The determined Cf values were both 0.98. The quantitative method was evaluated by means of blind tests in multi-laboratory trials using the two real-time PCR instruments. The limit of quantitation for the method was estimated to be 0.1%. The trueness and precision were evaluated as the bias and reproducibility of the relative standard deviation (RSDR), and the determined bias and RSDR values for the method were each less than 20%. These results suggest that the developed method would be suitable for practical analyses for the detection and quantification of A2704-12.
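Given the event-specific and endogenous-reference copy numbers from real-time PCR, the GMO amount is commonly computed as the copy-number ratio divided by the conversion factor Cf; a minimal sketch (the copy numbers below are hypothetical):

```python
def gmo_percent(event_copies, endogenous_copies, cf=0.98):
    """GMO amount (%) from real-time PCR copy numbers: the event/endogenous
    copy-number ratio divided by the experimentally determined conversion
    factor Cf (0.98 for A2704-12 on both instruments above)."""
    return (event_copies / endogenous_copies) / cf * 100.0

# e.g. 49 event-specific copies against 10,000 soybean reference copies
print(round(gmo_percent(49, 10000), 2))
```

Cf corrects for the fact that one genome does not yield exactly one detectable copy of each target, which is why it must be redetermined per event and per instrument.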
Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi
2016-01-01
A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) maize, 3272. We first attempted to obtain genome DNA from this maize using a DNeasy Plant Maxi kit and a DNeasy Plant Mini kit, which have been widely utilized in our previous studies, but DNA extraction yields from 3272 were markedly lower than those from non-GM maize seeds. However, lowering of DNA extraction yields was not observed with GM quicker or Genomic-tip 20/G. We chose GM quicker for evaluation of the quantitative method. We prepared a standard plasmid for 3272 quantification. The conversion factor (Cf), which is required to calculate the amount of a genetically modified organism (GMO), was experimentally determined for two real-time PCR instruments, the Applied Biosystems 7900HT (the ABI 7900) and the Applied Biosystems 7500 (the ABI7500). The determined Cf values were 0.60 and 0.59 for the ABI 7900 and the ABI 7500, respectively. To evaluate the developed method, a blind test was conducted as part of an interlaboratory study. The trueness and precision were evaluated as the bias and reproducibility of the relative standard deviation (RSDr). The determined values were similar to those in our previous validation studies. The limit of quantitation for the method was estimated to be 0.5% or less, and we concluded that the developed method would be suitable and practical for detection and quantification of 3272.
Courellis, Hristos; Mullen, Tim; Poizner, Howard; Cauwenberghs, Gert; Iversen, John R.
2017-01-01
Quantification of dynamic causal interactions among brain regions constitutes an important component of conducting research and developing applications in experimental and translational neuroscience. Furthermore, cortical networks with dynamic causal connectivity in brain-computer interface (BCI) applications offer a more comprehensive view of brain states implicated in behavior than do individual brain regions. However, models of cortical network dynamics are difficult to generalize across subjects because current electroencephalography (EEG) signal analysis techniques are limited in their ability to reliably localize sources across subjects. We propose an algorithmic and computational framework for identifying cortical networks across subjects in which dynamic causal connectivity is modeled among user-selected cortical regions of interest (ROIs). We demonstrate the strength of the proposed framework using a "reach/saccade to spatial target" cognitive task performed by 10 right-handed individuals. Modeling of causal cortical interactions was accomplished through measurement of cortical activity using EEG, application of independent component clustering to identify cortical ROIs as network nodes, estimation of cortical current density using cortically constrained low resolution electromagnetic brain tomography (cLORETA), multivariate autoregressive (MVAR) modeling of representative cortical activity signals from each ROI, and quantification of the dynamic causal interaction among the identified ROIs using the Short-time direct Directed Transfer function (SdDTF). The resulting cortical network and the computed causal dynamics among its nodes exhibited physiologically plausible behavior, consistent with past results reported in the literature. This physiological plausibility of the results strengthens the framework's applicability in reliably capturing complex brain functionality, which is required by applications such as diagnostics and BCI. PMID:28566997
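The directed-influence quantification at the heart of this pipeline can be illustrated with the plain directed transfer function (DTF), a building block of the SdDTF used above (which additionally applies short-time windowing and "direct"-flow normalization, omitted here). The two-channel MVAR coefficients below are illustrative, not fitted from EEG.

```python
import numpy as np

def dtf(A_coeffs, f, fs):
    """Directed transfer function at frequency f (Hz) for an MVAR model
    x[t] = sum_k A_k x[t-k] + e[t] sampled at fs Hz. Returns D with
    D[i, j] = normalized influence of channel j on channel i."""
    n = A_coeffs[0].shape[0]
    Af = np.eye(n, dtype=complex)
    for k, Ak in enumerate(A_coeffs, start=1):
        Af -= Ak * np.exp(-2j * np.pi * f * k / fs)
    H = np.linalg.inv(Af)                      # transfer matrix
    return np.abs(H) / np.sqrt(np.sum(np.abs(H) ** 2, axis=1, keepdims=True))

# Two-channel order-1 example in which channel 0 drives channel 1
A1 = np.array([[0.5, 0.0],
               [0.4, 0.5]])
D = dtf([A1], f=10.0, fs=100.0)
print(np.round(D, 2))
```

Because the off-diagonal coefficient only couples channel 0 into channel 1, the DTF shows influence in that direction and none in the reverse, which is the asymmetry that makes the measure "directed".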
Almeida, M P; Rezende, C P; Souza, L F; Brito, R B
2012-01-01
The use of aminoglycoside antibiotics in food animals is approved in Brazil. Accordingly, Brazilian food safety legislation sets maximum levels for these drugs in tissues from these animals in an effort to guarantee that food safety is not compromised. To monitor the levels of these drugs in tissues from food animals, a quantitative, confirmatory method was validated for the detection of residues of 10 aminoglycoside antibiotics in poultry, swine, equine and bovine kidney, with solid-phase extraction and detection and quantification by LC-MS/MS. The procedure is an adaptation of the US Department of Agriculture, Food Safety and Inspection Service (USDA-FSIS) qualitative method, with the inclusion of additional clean-up and quantification at lower levels, which proved more efficient. Extraction was performed using a phosphate buffer containing trifluoroacetic acid followed by neutralization, purification on a cationic exchange SPE cartridge with elution with methanol/acetic acid, evaporation, and dilution in ion-pair solvent. The method was validated according to the criteria and requirements of European Commission Decision 2002/657/EC, showing selectivity with no matrix interference. Linearity was established for all analytes using the weighted least squares method. CCα and CCβ varied between 1036 and 12,293 µg kg⁻¹, and between 1073 and 14,588 µg kg⁻¹, respectively. The limits of quantification varied between 27 and 688 µg kg⁻¹. The recoveries for all analytes in poultry kidney, fortified in the range of 500-1500 µg kg⁻¹, were higher than 90%, and the relative standard deviations were lower than 15%, except for spectinomycin (21.8%). Uncertainty was estimated using a simplified methodology combining 'bottom-up' and 'top-down' strategies. The results showed that this method is effective for the quantification and confirmation of aminoglycoside residues and could be used by the Brazilian programme of residue control.
Dutta, Sibasish; Saikia, Gunjan Prasad; Sarma, Dhruva Jyoti; Gupta, Kuldeep; Das, Priyanka; Nath, Pabitra
2017-05-01
In this paper the utilization of a smartphone as a detection platform for colorimetric quantification of biological macromolecules has been demonstrated. Using the V-channel of the HSV color space, the quantification of BSA protein, catalase enzyme and carbohydrate (using D-glucose) has been successfully investigated. A custom-designed Android application has been developed for estimating the total concentration of biological macromolecules. The results have been compared with those of a standard spectrophotometer, which is generally used for colorimetric quantification in laboratory settings by measuring absorbance at a specific wavelength. The results obtained with the designed sensor are similar to the spectrophotometer data. The designed sensor is low cost and robust, and we envision that it could promote diverse fields of bio-analytical investigation. Schematic illustration of the smartphone sensing mechanism for colorimetric analysis of biomolecular samples. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Rosli, A. U. M.; Lall, U.; Josset, L.; Rising, J. A.; Russo, T. A.; Eisenhart, T.
2017-12-01
Analyzing the trends in water use and supply across the United States is fundamental to efforts to ensure water sustainability. As part of this, estimating the costs of producing or obtaining water (water extraction) and their correlation with water use is an important aspect of understanding the underlying trends. This study estimates groundwater costs by interpolating the depth to the water level across the US in each county. We use ordinary and universal kriging, accounting for the differences between aquifers. Kriging generates a best linear unbiased estimate at each location and has been widely used to map groundwater surfaces (Alley, 1993). The spatial covariates included in the universal kriging were land-surface elevation as well as aquifer information. The average water table is computed for each county using block kriging to obtain a national map of groundwater cost, which we compare with survey estimates of depth to the water table performed by the USDA. Groundwater extraction costs were then assumed to be proportional to water table depth. Beyond estimating the water cost, the approach can provide an indication of groundwater stress by exploring the historical evolution of depth to the water table using time series information between 1960 and 2015. Despite data limitations, we hope to enable a more compelling and meaningful national-level analysis through the quantification of cost and stress for more economically efficient water management.
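The interpolation step described above can be sketched in miniature. The spherical variogram model and its sill/range parameters below are illustrative assumptions, not values fitted in the study:

```python
import numpy as np

def ordinary_kriging(xy, z, x0, sill=1.0, rng=50.0):
    """Ordinary kriging estimate of z at location x0 from observations
    (xy, z), using a spherical variogram with assumed sill and range.
    Solves the standard kriging system with a Lagrange multiplier so the
    weights sum to one (best linear unbiased estimate)."""
    def gamma(h):
        h = np.asarray(h, dtype=float)
        return np.where(h < rng, sill * (1.5 * h / rng - 0.5 * (h / rng) ** 3), sill)

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)          # variogram values between observation pairs
    A[n, n] = 0.0                 # Lagrange-multiplier corner of the system
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - np.asarray(x0, dtype=float), axis=1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ np.asarray(z, dtype=float))

# Hypothetical depth-to-water observations at four well locations
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
depths = np.array([30.0, 40.0, 50.0, 60.0])
```

With no nugget effect, kriging is an exact interpolator: the estimate at an observation point reproduces its value, and by symmetry the estimate at the center of this square is the mean of the four depths.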
Discriminability limits in spatio-temporal stereo block matching.
Jain, Ankit K; Nguyen, Truong Q
2014-05-01
Disparity estimation is a fundamental task in stereo imaging and is a well-studied problem. Recently, methods have been adapted to the video domain where motion is used as a matching criterion to help disambiguate spatially similar candidates. In this paper, we analyze the validity of the underlying assumptions of spatio-temporal disparity estimation, and determine the extent to which motion aids the matching process. By analyzing the error signal for spatio-temporal block matching under the sum of squared differences criterion and treating motion as a stochastic process, we determine the probability of a false match as a function of image features, motion distribution, image noise, and number of frames in the spatio-temporal patch. This performance quantification provides insight into when spatio-temporal matching is most beneficial in terms of the scene and motion, and can be used as a guide to select parameters for stereo matching algorithms. We validate our results through simulation and experiments on stereo video.
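The cost computation analyzed in the paper, sum of squared differences accumulated over a spatio-temporal patch, can be sketched as follows. This is a minimal 1-D illustration with hypothetical data, not the authors' implementation:

```python
def ssd_disparity(left, right, x, patch=3, max_d=5):
    """Pick the disparity d minimizing the sum of squared differences (SSD)
    over a spatio-temporal patch: the cost sums squared intensity
    differences across every frame in the window, so temporal context
    helps disambiguate spatially similar candidates."""
    half = patch // 2
    best_d, best_cost = 0, float("inf")
    for d in range(max_d + 1):
        cost = 0.0
        for lf, rf in zip(left, right):          # iterate over frames
            for dx in range(-half, half + 1):    # iterate over the patch
                cost += (lf[x + dx] - rf[x + dx - d]) ** 2
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Two frames of hypothetical 1-D image rows; the right view is the left
# view shifted by a true disparity of 2 pixels.
left = [[(i * i + 3 * t) % 17 for i in range(12)] for t in range(2)]
right = [[row[j + 2] for j in range(10)] for row in left]
```

At the true disparity the accumulated SSD is zero in every frame, while adding frames raises the cost of every false candidate; the probability that a false candidate nonetheless wins under noise is what the paper quantifies.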
Beauzamy, Léna; Derr, Julien; Boudaoud, Arezki
2015-05-19
Plant cell growth depends on a delicate balance between an inner drive-the hydrostatic pressure known as turgor-and an outer restraint-the polymeric wall that surrounds a cell. The classical technique to measure turgor in a single cell, the pressure probe, is intrusive and cannot be applied to small cells. In order to overcome these limitations, we developed a method that combines quantification of topography, nanoindentation force measurements, and an interpretation using a published mechanical model for the pointlike loading of thin elastic shells. We used atomic force microscopy to estimate the elastic properties of the cell wall and turgor pressure from a single force-depth curve. We applied this method to onion epidermal peels and quantified the response to changes in osmolality of the bathing solution. Overall our approach is accessible and enables a straightforward estimation of the hydrostatic pressure inside a walled cell. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Standardization of HPTLC method for the estimation of oxytocin in edibles.
Rani, Roopa; Medhe, Sharad; Raj, Kumar Rohit; Srivastava, Manmohan
2013-12-01
Adulteration of foodstuffs has been regarded as a major social evil and is a persistent problem in society. In this study, a rapid, reliable and cost-effective high-performance thin-layer chromatography (HPTLC) method has been established for the estimation of oxytocin (an adulterant) in vegetables, fruits and milk samples. Oxytocin is one of the adulterants most frequently added to vegetables and fruits to increase the growth rate, and is also used to enhance milk production from lactating animals. The standardization of the method was based on optimization of the mobile phase, stationary phase and saturation time. The mobile phase used was MeOH:ammonia (pH 6.8), the optimized stationary phase was silica gel, and the saturation time was 5 min. The method was validated by testing its linearity, accuracy, precision, repeatability and limits of detection and quantification. Thus, the proposed method is simple, rapid and specific, and was successfully employed for qualitative and quantitative monitoring of oxytocin content in edible products.
Hydroxytyrosol disposition in humans.
Miro-Casas, Elisabet; Covas, Maria-Isabel; Farre, Magi; Fito, Montserrat; Ortuño, Jordi; Weinbrenner, Tanja; Roset, Pere; de la Torre, Rafael
2003-06-01
Animal and in vitro studies suggest that phenolic compounds in virgin olive oil are effective antioxidants. In animal and in vitro studies, hydroxytyrosol and its metabolites have been shown to be strong antioxidants. One of the prerequisites to assess their in vivo physiologic significance is to determine their presence in human plasma. We developed an analytical method for both hydroxytyrosol and 3-O-methyl-hydroxytyrosol in plasma. The administered dose of phenolic compounds was estimated from methanolic extracts of virgin olive oil after subjecting them to different hydrolytic treatments. Plasma and urine samples were collected from 0 to 12 h before and after 25 mL of virgin olive oil intake, a dose close to that used as daily intake in Mediterranean countries. Samples were analyzed by capillary gas chromatography-mass spectrometry before and after being subjected to acidic and enzymatic hydrolytic treatments. Calibration curves were linear (r >0.99). Analytical recoveries were 42-60%. Limits of quantification were <1.5 mg/L. Plasma hydroxytyrosol and 3-O-methyl-hydroxytyrosol increased as a response to virgin olive oil administration, reaching maximum concentrations at 32 and 53 min, respectively (P <0.001 for quadratic trend). The estimated hydroxytyrosol elimination half-life was 2.43 h. Free forms of these phenolic compounds were not detected in plasma samples. The proposed analytical method permits quantification of hydroxytyrosol and 3-O-methyl-hydroxytyrosol in plasma after real-life doses of virgin olive oil. From our results, approximately 98% of hydroxytyrosol appears to be present in plasma and urine in conjugated forms, mainly glucuronoconjugates, suggesting extensive first-pass intestinal/hepatic metabolism of the ingested hydroxytyrosol.
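The reported elimination half-life can be obtained from a log-linear fit of the terminal plasma concentrations. The sketch below shows the generic noncompartmental calculation, not the authors' exact procedure:

```python
import math

def elimination_half_life(times, concs):
    """Estimate elimination half-life from the terminal log-linear phase:
    least-squares fit of ln(C) = a - k*t, then t_half = ln(2) / k."""
    n = len(times)
    y = [math.log(c) for c in concs]
    tbar = sum(times) / n
    ybar = sum(y) / n
    slope = sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y)) \
        / sum((t - tbar) ** 2 for t in times)
    k = -slope                      # elimination rate constant (1/h)
    return math.log(2) / k
```

Applied to concentrations that decay exactly exponentially with a 2.43 h half-life (the value estimated in the study), the fit recovers 2.43 h.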
Bouby, M; Geckeis, H; Geyer, F W
2008-12-01
A straightforward quantification method is presented for the application of asymmetric flow field-flow fractionation (AsFlFFF) combined with inductively coupled plasma mass spectrometry (ICPMS) to the characterization of colloid-borne metal ions and nanoparticles. Reproducibility of the size calibration and recovery of elements are examined. Channel flow fluctuations are observed, notably after initiation of the fractionation procedure. Their impact on quantification is accounted for by using ¹⁰³Rh as an internal reference. Intensity ratios measured for various elements and Rh are calculated for each data point. These ratios turned out to be independent of the metal concentration and of the total sample solution flow introduced into the nebulizer within a range of 0.4-1.2 mL min⁻¹. The method is applied to study the interaction of Eu, U(VI) and Th with a mixture of humic acid and clay colloids and to the characterization of synthetic nanoparticles, namely CdSe/ZnS-MAA (mercaptoacetic acid) core/shell-coated quantum dots (QDs). Information is given not only on inorganic element composition but also on the effective hydrodynamic size under relevant conditions. Detection limits (DLs) are estimated for Ca, Al, Fe, the lanthanide Ce and the natural actinides Th and U in colloid-containing groundwater. For a standard crossflow nebulizer, the estimated values are 7 × 10³, 20, 3 × 10², 0.1, 0.1 and 7 × 10⁻² µg L⁻¹, respectively. DLs for Zn and Cd in QD characterization are 28 and 11 µg L⁻¹, respectively.
Utilization of remote sensing techniques for the quantification of fire behavior in two pine stands
Eric V. Mueller; Nicholas Skowronski; Kenneth Clark; Michael Gallagher; Robert Kremens; Jan C. Thomas; Mohamad El Houssami; Alexander Filkov; Rory M. Hadden; William Mell; Albert Simeoni
2017-01-01
Quantification of field-scale fire behavior is necessary to improve the current scientific understanding of wildland fires and to develop and test relevant, physics-based models. In particular, detailed descriptions of individual fires are required, for which the available literature is limited. In this work, two such field-scale experiments, carried out in pine stands...
Nahar, Limon Khatun; Cordero, Rosa Elena; Nutt, David; Lingford-Hughes, Anne; Turton, Samuel; Durant, Claire; Wilson, Sue; Paterson, Sue
2016-01-01
Abstract A highly sensitive and fully validated method was developed for the quantification of baclofen in human plasma. After adjusting the pH of the plasma samples using a phosphate buffer solution (pH 4), baclofen was purified using mixed mode (C8/cation exchange) solid-phase extraction (SPE) cartridges. Endogenous water-soluble compounds and lipids were removed from the cartridges before the samples were eluted and concentrated. The samples were analyzed using triple-quadrupole liquid chromatography–tandem mass spectrometry (LC–MS-MS) with triggered dynamic multiple reaction monitoring mode for simultaneous quantification and confirmation. The assay was linear from 25 to 1,000 ng/mL (r2 > 0.999; n = 6). Intraday (n = 6) and interday (n = 15) imprecisions (% relative standard deviation) were <5%, and the average recovery was 30%. The limit of detection of the method was 5 ng/mL, and the limit of quantification was 25 ng/mL. Plasma samples from healthy male volunteers (n = 9, median age: 22) given two single oral doses of baclofen (10 and 60 mg) on nonconsecutive days were analyzed to demonstrate method applicability. PMID:26538544
Singh, Gurmit; Koerner, Terence; Gelinas, Jean-Marc; Abbott, Michael; Brady, Beth; Huet, Anne-Catherine; Charlier, Caroline; Delahaut, Philippe; Godefroy, Samuel Benrejeb
2011-01-01
Malachite green (MG), a member of the N-methylated triphenylmethane class of dyes, has long been used to control fungal and protozoan infections in fish. MG is easily absorbed by fish during waterborne exposure and is rapidly metabolized into leucomalachite green (LMG), which is known for its long residence time in edible fish tissue. This paper describes the development of an enzyme-linked immunosorbent assay (ELISA) for the detection and quantification of LMG in fish tissue. This development includes a simple and versatile method for the conversion of LMG to monodesmethyl-LMG, which is then conjugated to bovine serum albumin (BSA) to produce an immunogenic material. Rabbit polyclonal antibodies are generated against this immunogen, purified and used to develop a direct competitive enzyme-linked immunosorbent assay (ELISA) for the screening and quantification of LMG in fish tissue. The assay performed well, with a limit of detection (LOD) and limit of quantification (LOQ) of 0.1 and 0.3 ng g−1 of fish tissue, respectively. The average extraction efficiency from a matrix of tilapia fillets was approximately 73% and the day-to-day reproducibility for these extractions in the assay was between 5 and 10%. PMID:21623496
Evaluation of a mass-balance approach to determine consumptive water use in northeastern Illinois
Mills, Patrick C.; Duncker, James J.; Over, Thomas M.; Domanski, Marian; Engel, Frank
2014-01-01
Under ideal conditions, accurate quantification of consumptive use at the sewershed scale by the described mass-balance approach might be possible. Under most prevailing conditions, quantification likely would be more costly and time consuming than that of the present study, given the freely contributed technical support of the host community and relatively appropriate conditions of the study area. Essentials to quantification of consumptive use are a fully cooperative community, storm and sanitary sewers that are separate, and newer sewer infrastructure and (or) a robust program for limiting infiltration, exfiltration, and inflow.
Quantifying circular RNA expression from RNA-seq data using model-based framework.
Li, Musheng; Xie, Xueying; Zhou, Jing; Sheng, Mengying; Yin, Xiaofeng; Ko, Eun-A; Zhou, Tong; Gu, Wanjun
2017-07-15
Circular RNAs (circRNAs) are a class of non-coding RNAs that are widely expressed in various cell lines and tissues of many organisms. Although the exact function of many circRNAs is largely unknown, cell type- and tissue-specific circRNA expression implicates crucial functions in many biological processes. Hence, accurate quantification of circRNA expression from high-throughput RNA-seq data is becoming important. Although many model-based methods have been developed to quantify linear RNA expression from RNA-seq data, these methods are not applicable to circRNA quantification. Here, we propose a novel strategy that transforms circular transcripts into pseudo-linear transcripts and estimates the expression values of both circular and linear transcripts using an existing model-based algorithm, Sailfish. The new strategy can accurately estimate transcript expression of both linear and circular transcripts from RNA-seq data. Several factors, such as gene length, amount of expression and the ratio of circular to linear transcripts, had impacts on the quantification performance for circular transcripts. In comparison to count-based tools, the new computational framework had superior performance in estimating the amount of circRNA expression from both simulated and real ribosomal-RNA-depleted (rRNA-depleted) RNA-seq datasets. On the other hand, the consideration of circular transcripts in expression quantification from rRNA-depleted RNA-seq data substantially increased the accuracy of linear transcript expression. Our proposed strategy is implemented in a program named Sailfish-cir. Sailfish-cir is freely available at https://github.com/zerodel/Sailfish-cir . tongz@medicine.nevada.edu or wanjun.gu@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
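The transformation at the heart of this strategy can be illustrated simply: a circular transcript becomes quantifiable by a linear-RNA tool once the back-splice junction is made contiguous. The sketch below shows the general idea; the exact construction used by Sailfish-cir may differ:

```python
def to_pseudo_linear(circ_seq, read_len):
    """Turn a circular transcript into a pseudo-linear transcript by
    appending the first (read_len - 1) bases to the 3' end, so that any
    read spanning the back-splice junction aligns contiguously."""
    k = min(read_len - 1, len(circ_seq))
    return circ_seq + circ_seq[:k]
```

For example, with reads of length 4, the circular sequence `ACGTAC` becomes `ACGTACACG`, and a junction-spanning read such as `CACG` now maps to a contiguous stretch of the pseudo-linear reference.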
NASA Astrophysics Data System (ADS)
Geessink, Oscar G. F.; Baidoshvili, Alexi; Freling, Gerard; Klaase, Joost M.; Slump, Cornelis H.; van der Heijden, Ferdinand
2015-03-01
Visual estimation of tumor and stroma proportions in microscopy images yields a strong, Tumor-(lymph)Node-Metastasis (TNM) classification-independent predictor of patient survival in colorectal cancer. Therefore, it is also a potent (contra)indicator for adjuvant chemotherapy. However, quantification of tumor and stroma through visual estimation is highly subject to intra- and inter-observer variability. The aim of this study is to develop and clinically validate a method for objective quantification of tumor and stroma in standard hematoxylin and eosin (H&E) stained microscopy slides of rectal carcinomas. A tissue segmentation algorithm, based on supervised machine learning and pixel classification, was developed, trained and validated using histological slides prepared from surgically excised rectal carcinomas in patients who had not received neoadjuvant chemotherapy and/or radiotherapy. Whole-slide scanning was performed at 20× magnification. A total of 40 images (4 million pixels each) were extracted from 20 whole-slide images at sites showing various relative proportions of tumor and stroma. Experienced pathologists provided detailed annotations for every extracted image. The performance of the algorithm was evaluated using cross-validation, testing on 1 image at a time while using the other 39 images for training. The total classification error of the algorithm was 9.4% (SD = 3.2%). Compared to visual estimation by pathologists, the algorithm was 7.3 times (P = 0.033) more accurate in quantifying tissues, while also showing 60% less variability. Automatic tissue quantification was shown to be both reliable and practicable. We ultimately intend to facilitate refined prognostic stratification of (colo)rectal cancer patients and enable better personalized treatment.
Uncertainties in estimates of the risks of late effects from space radiation
NASA Astrophysics Data System (ADS)
Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Saganti, P. B.; Dicello, J. F.
2004-01-01
Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions cause estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Using the linear-additivity model for radiation risks, we use Monte Carlo sampling from subjective uncertainty distributions in each factor to obtain an estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios, including a deep space outpost and Mars missions of duration 360, 660, and 1000 days. The major results are the quantification of the uncertainties in current risk estimates, the identification of factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction from the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time, and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative metrics, e.g., the number of days in space without exceeding a given risk level within well-defined confidence limits.
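The propagation scheme described, Monte-Carlo sampling of subjective uncertainty distributions for each factor in a multiplicative risk model, can be sketched as follows. The point risk, the per-factor spreads, and the choice of lognormal multipliers are all illustrative assumptions, not values from the paper:

```python
import random

def risk_percentiles(point_risk, factor_sds, n=20000, seed=1):
    """Monte-Carlo propagation of multiplicative uncertainties: the risk
    projection is a product of factors, and each factor's uncertainty is
    represented by a multiplier drawn from a subjective distribution
    (here lognormal with median 1). Returns the 5th and 95th percentiles
    of the sampled risk distribution."""
    rng = random.Random(seed)
    risks = []
    for _ in range(n):
        r = point_risk
        for sd in factor_sds:               # one uncertain factor at a time
            r *= rng.lognormvariate(0.0, sd)
        risks.append(r)
    risks.sort()
    return risks[int(0.05 * n)], risks[int(0.95 * n)]

# Hypothetical point estimate and per-factor uncertainties (illustrative only)
lo, hi = risk_percentiles(0.03, [0.3, 0.4, 0.5])
```

The width of the resulting interval relative to the point estimate is the kind of quantity that, in the paper's analysis, masks shielding-optimization gains for GCR.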
Occurrence of ivermectin in bovine milk from the Brazilian retail market.
Lobato, V; Rath, S; Reyes, F G R
2006-07-01
High-performance liquid chromatography (HPLC) with fluorescence detection was used for the quantification of ivermectin residues in bovine milk intended for human consumption. After liquid-liquid extraction of ivermectin and purification of the extract, the compound was derivatized with 1-methylimidazole in N,N-dimethylformamide to form a fluorescent derivative, which was separated by HPLC using a reversed-phase C18 column with a methanol:water (96:4, v/v) mobile phase at a flow rate of 0.7 ml min⁻¹. The excitation and emission wavelengths of the fluorescence detector were set at 360 and 470 nm, respectively. The linearity of the method was in the range 10-100 ng ivermectin ml⁻¹. Based on a sample of 5.0 ml, the limit of detection and the limit of quantification for ivermectin in milk were 0.6 and 2 ng ml⁻¹, respectively. The recovery rate varied from 76.4 to 87.2%, with an average of 77.9 +/- 3.2%, at four fortification levels. The inter-day precision of the method was 13% (n = 5). Of 168 samples analysed, 17.8% contained ivermectin above the limit of quantification. Nevertheless, none of the samples contained ivermectin above the maximum residue limit (10 ng ml⁻¹) established by the Brazilian Ministry of Agriculture.
Rapid quantification and sex determination of forensic evidence materials.
Andréasson, Hanna; Allen, Marie
2003-11-01
DNA quantification of forensic evidence is very valuable for an optimal use of the available biological material. Moreover, sex determination is of great importance as additional information in criminal investigations as well as in identification of missing persons, no suspect cases, and ancient DNA studies. While routine forensic DNA analysis based on short tandem repeat markers includes a marker for sex determination, analysis of samples containing scarce amounts of DNA is often based on mitochondrial DNA, and sex determination is not performed. In order to allow quantification and simultaneous sex determination on minute amounts of DNA, an assay based on real-time PCR analysis of a marker within the human amelogenin gene has been developed. The sex determination is based on melting curve analysis, while an externally standardized kinetic analysis allows quantification of the nuclear DNA copy number in the sample. This real-time DNA quantification assay has proven to be highly sensitive, enabling quantification of single DNA copies. Although certain limitations were apparent, the system is a rapid, cost-effective, and flexible assay for analysis of forensic casework samples.
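The externally standardized kinetic quantification mentioned above rests on the linear relationship between the threshold cycle (Ct) and the log of the input copy number. The calibration constants below are illustrative, not the assay's fitted values:

```python
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Standard-curve quantification for real-time PCR: Ct is linear in
    log10 of the input copy number, Ct = intercept + slope * log10(N),
    so N = 10 ** ((ct - intercept) / slope). A slope near -3.32 per
    decade corresponds to ~100% PCR efficiency."""
    return 10 ** ((ct - intercept) / slope)
```

With these illustrative constants, a Ct of 38 corresponds to a single input copy, and each decrease of 3.32 cycles corresponds to a tenfold increase in template, which is how single-copy sensitivity is read off a calibration line.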
Quantitative Proteomics via High Resolution MS Quantification: Capabilities and Limitations
Higgs, Richard E.; Butler, Jon P.; Han, Bomie; Knierman, Michael D.
2013-01-01
Recent improvements in the mass accuracy and resolution of mass spectrometers have led to renewed interest in label-free quantification using data from the primary mass spectrum (MS1) acquired from data-dependent proteomics experiments. The capacity for higher specificity quantification of peptides from samples enriched for proteins of biological interest offers distinct advantages for hypothesis generating experiments relative to immunoassay detection methods or prespecified peptide ions measured by multiple reaction monitoring (MRM) approaches. Here we describe an evaluation of different methods to post-process peptide level quantification information to support protein level inference. We characterize the methods by examining their ability to recover a known dilution of a standard protein in background matrices of varying complexity. Additionally, the MS1 quantification results are compared to a standard, targeted, MRM approach on the same samples under equivalent instrument conditions. We show the existence of multiple peptides with MS1 quantification sensitivity similar to the best MRM peptides for each of the background matrices studied. Based on these results we provide recommendations on preferred approaches to leveraging quantitative measurements of multiple peptides to improve protein level inference. PMID:23710359
Event-specific real-time detection and quantification of genetically modified Roundup Ready soybean.
Huang, Chia-Chia; Pan, Tzu-Ming
2005-05-18
The event-specific real-time detection and quantification of Roundup Ready soybean (RRS) using an ABI PRISM 7700 sequence detection system with light upon extension (LUX) primer was developed in this study. The event-specific primers were designed, targeting the junction of the RRS 5' integration site and the endogenous gene lectin1. Then, a standard reference plasmid was constructed that carried both of the targeted sequences for quantitative analysis. The detection limit of the LUX real-time PCR system was 0.05 ng of 100% RRS genomic DNA, which was equal to 20.5 copies. The range of quantification was from 0.1 to 100%. The sensitivity and range of quantification successfully met the requirement of the labeling rules in the European Union and Taiwan.
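Quantification against the endogenous gene described above is, at its core, a ratio of copy numbers. A minimal sketch of that arithmetic, simplified to ignore zygosity and genome-size corrections:

```python
def gmo_percent(event_copies, lectin_copies):
    """Relative GMO quantification with an event-specific target and the
    endogenous lectin1 reference: GMO % = event copies / reference
    copies * 100 (haploid-genome-equivalent basis; simplified sketch)."""
    return 100.0 * event_copies / lectin_copies
```

For instance, 20.5 event copies against 20,500 lectin1 copies corresponds to 0.1% RRS, the lower end of the quantification range reported above.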
Collender, Philip A; Kirby, Amy E; Addiss, David G; Freeman, Matthew C; Remais, Justin V
2015-12-01
Limiting the environmental transmission of soil-transmitted helminths (STHs), which infect 1.5 billion people worldwide, will require sensitive, reliable, and cost-effective methods to detect and quantify STHs in the environment. We review the state-of-the-art of STH quantification in soil, biosolids, water, produce, and vegetation with regard to four major methodological issues: environmental sampling; recovery of STHs from environmental matrices; quantification of recovered STHs; and viability assessment of STH ova. We conclude that methods for sampling and recovering STHs require substantial advances to provide reliable measurements for STH control. Recent innovations in the use of automated image identification and developments in molecular genetic assays offer considerable promise for improving quantification and viability assessment. Copyright © 2015 Elsevier Ltd. All rights reserved.
Pi, Liqun; Li, Xiang; Cao, Yiwei; Wang, Canhua; Pan, Liangwen; Yang, Litao
2015-04-01
Reference materials are important for accurate analysis of genetically modified organism (GMO) content in food/feed, and the development of novel reference plasmids is a new trend in research on GMO reference materials. Herein, we constructed a novel multi-targeting plasmid, pSOY, which contains seven event-specific sequences of five GM soybeans (MON89788-5', A2704-12-3', A5547-127-3', DP356043-5', DP305423-3', A2704-12-5', and A5547-127-5') and the sequence of the soybean endogenous reference gene Lectin. We evaluated the specificity, limits of detection and quantification, and applicability of pSOY in both qualitative and quantitative PCR analyses. The limit of detection (LOD) was as low as 20 copies in qualitative PCR, and the limit of quantification (LOQ) in quantitative PCR was 10 copies. In quantitative real-time PCR analysis, the PCR efficiencies of all event-specific and Lectin assays were higher than 90%, and the squared regression coefficients (R²) were more than 0.999. The quantification bias varied from 0.21% to 19.29%, and the relative standard deviations were from 1.08% to 9.84% in simulated sample analysis. All the results demonstrated that the developed multi-targeting plasmid, pSOY, is a credible substitute for matrix-based reference materials and can be used as a reliable reference calibrator in the identification and quantification of multiple GM soybean events.
Quantification of hookworm ova from wastewater matrices using quantitative PCR.
Gyawali, Pradip; Ahmed, Warish; Sidhu, Jatinder P; Jagals, Paul; Toze, Simon
2017-07-01
A quantitative PCR (qPCR) assay was used to quantify Ancylostoma caninum ova in wastewater and sludge samples. We estimated the average gene copy number for a single ovum using a mixed population of ova. The average gene copy number derived from the mixed population was used to estimate numbers of hookworm ova in A. caninum-seeded and unseeded wastewater and sludge samples. The newly developed qPCR assay estimated an average of 3.7 × 10³ gene copies per ovum, which was then validated by seeding known numbers of hookworm ova into treated wastewater. The qPCR estimated an average of (1.1±0.1), (8.6±2.9) and (67.3±10.4) ova for treated wastewater that was seeded with (1±0), (10±2) and (100±21) ova, respectively. The further application of the qPCR assay for the quantification of A. caninum ova was determined by seeding known numbers of ova into the wastewater matrices. The qPCR results indicated that 50%, 90% and 67% of treated wastewater (1 L), raw wastewater (1 L) and sludge (~4 g) samples had variable numbers of A. caninum gene copies. After conversion of the qPCR-estimated gene copy numbers to ova, treated wastewater, raw wastewater, and sludge samples had an average of 0.02, 1.24 and 67 ova, respectively. The results of this study indicated that qPCR can be used for the quantification of hookworm ova from wastewater and sludge samples; however, caution is advised in interpreting qPCR-generated data for health risk assessment. Copyright © 2017. Published by Elsevier B.V.
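The gene-copy-to-ovum conversion used throughout the abstract is a single division by the reported mixed-population average:

```python
AVG_COPIES_PER_OVUM = 3.7e3  # average gene copies per ovum reported in the study

def ova_from_gene_copies(gene_copies, copies_per_ovum=AVG_COPIES_PER_OVUM):
    """Convert a qPCR gene-copy estimate into an estimated ovum count,
    using the mixed-population average of ~3.7 x 10^3 copies per ovum."""
    return gene_copies / copies_per_ovum
```

For example, 3.7 × 10⁴ estimated gene copies corresponds to roughly 10 ova, matching the scale of the seeding validation above.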
Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna
2017-07-20
The main applications of 188 Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone-metastases. In order to optimize 188 Re therapies, the accurate determination of radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188 Re images. In this study, we performed a series of phantom experiments aiming to investigate the accuracy of activity quantification for 188 Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in Air, non-radioactive water (Cold-water) and water with activity (Hot-water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter and resolution recovery was used). For high activities, the dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to this object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background were compared to their true values. Finally, Monte-Carlo simulations of a commercial [Formula: see text]-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors <10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold. 
However, the accuracy of activity quantification for objects segmented with 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte-Carlo simulations confirmed that TEW-scatter correction applied to 188 Re, although practical, yields only approximate estimates of the true scatter.
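The quantification-accuracy metric used in this record (reconstructed-to-true activity ratio under threshold segmentation) can be sketched in a few lines; the 1-D "object" profile below is a toy illustration of partial-volume spill, not the phantom data:

```python
import numpy as np

def recovered_activity_ratio(image, true_activity, threshold_frac):
    """Segment voxels above threshold_frac * max and sum their counts."""
    mask = image >= threshold_frac * image.max()
    return image[mask].sum() / true_activity

# Toy 1-D profile with partial-volume-like blurring at the edges.
obj = np.array([0.1, 0.5, 1.0, 1.0, 1.0, 0.5, 0.1])
true_activity = obj.sum()  # pretend the whole profile is the object

loose = recovered_activity_ratio(obj, true_activity, 0.01)  # ~1% threshold
tight = recovered_activity_ratio(obj, true_activity, 0.40)  # 40% threshold
# A loose threshold captures spilled-out counts; a tight one loses them,
# mirroring the accuracy drop reported for 40%-threshold segmentation.
assert loose > tight
```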
Comparisons of Wilks’ and Monte Carlo Methods in Response to the 10CFR50.46(c) Proposed Rulemaking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Hongbin; Szilard, Ronaldo; Zou, Ling
The Nuclear Regulatory Commission (NRC) is proposing a new rulemaking on emergency core cooling system/loss-of-coolant accident (LOCA) performance analysis. In the proposed rulemaking, designated as 10CFR50.46(c), the US NRC put forward an equivalent cladding oxidation criterion as a function of cladding pre-transient hydrogen content. The proposed rulemaking imposes more restrictive and burnup-dependent cladding embrittlement criteria; consequently, nearly all the fuel rods in a reactor core need to be analyzed under LOCA conditions to demonstrate compliance with the safety limits. New analysis methods are required to provide a thorough characterization of the reactor core in order to identify the locations of the limiting rods as well as to quantify the safety margins under LOCA conditions. With the new analysis method presented in this work, the limiting transient case and the limiting rods can be easily identified to quantify the safety margins in response to the proposed new rulemaking. In this work, the best-estimate plus uncertainty (BEPU) analysis capability for large break LOCA with the new cladding embrittlement criteria using the RELAP5-3D code is established and demonstrated with a reduced set of uncertainty parameters. Both the direct Monte Carlo method and the Wilks’ nonparametric statistical method can be used to perform uncertainty quantification. Wilks’ method has become the de-facto industry standard to perform uncertainty quantification in BEPU LOCA analyses. Despite its widespread adoption by the industry, the use of small sample sizes to infer a statement of compliance with the existing 10CFR50.46 rule has been a major cause of unrealized operational margin in today’s BEPU methods. Moreover, the debate on the proper interpretation of the Wilks’ theorem in the context of safety analyses is not fully resolved yet, even more than two decades after its introduction in the frame of safety analyses in the nuclear industry.
This represents both a regulatory and application risk in rolling out new methods. With the 10CFR50.46(c) proposed rulemaking, the deficiencies of the Wilks’ approach are further exacerbated. The direct Monte Carlo approach offers a robust alternative to perform uncertainty quantification within the context of BEPU analyses. In this work, the Monte Carlo method is compared with the Wilks’ method in response to the NRC 10CFR50.46(c) proposed rulemaking.
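The sample-size arithmetic behind the first-order, one-sided Wilks’ criterion discussed in this record can be sketched as follows (a generic illustration of the standard tolerance-limit bound, not the RELAP5-3D analysis code):

```python
import math

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n satisfying the first-order one-sided Wilks criterion:
    P(max of n runs bounds the `coverage` quantile) = 1 - coverage**n >= confidence."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

# The familiar "59 runs" behind 95/95 BEPU compliance statements:
print(wilks_sample_size())  # 59
```

Raising either the coverage or the confidence drives the required run count up quickly, which is why small-sample Wilks statements trade operational margin for cheap computation.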
Lao, Yexing; Yang, Cuiping; Zou, Wei; Gan, Manquan; Chen, Ping; Su, Weiwei
2012-05-01
The cryptand Kryptofix 2.2.2 is used extensively as a phase-transfer reagent in the preparation of [18F]fluoride-labelled radiopharmaceuticals. However, it has considerable acute toxicity. The aim of this study was to develop and validate a method for rapid (within 1 min), specific and sensitive quantification of Kryptofix 2.2.2 at trace levels. Chromatographic separations were carried out by rapid-resolution liquid chromatography (Agilent ZORBAX SB-C18 rapid-resolution column, 2.1 × 30 mm, 3.5 μm). Tandem mass spectra were acquired using a triple quadrupole mass spectrometer equipped with an electrospray ionization interface. Quantitative mass spectrometric analysis was conducted in positive ion mode and multiple reaction monitoring mode for the m/z 377.3 → 114.1 transition for Kryptofix 2.2.2. The external standard method was used for quantification. The method met the precision and efficiency requirements for PET radiopharmaceuticals, providing satisfactory results for specificity, matrix effect, stability, linearity (0.5-100 ng/ml, r² = 0.9975), precision (coefficient of variation < 5%), accuracy (relative error < ±3%), sensitivity (lower limit of quantification = 0.5 ng) and detection time (<1 min). Fluorodeoxyglucose (n=6) was analysed, and the Kryptofix 2.2.2 content was found to be well below the maximum permissible levels approved by the US Food and Drug Administration. The developed method has a short analysis time (<1 min) and high sensitivity (lower limit of quantification = 0.5 ng/ml) and can be successfully applied to rapid quantification of Kryptofix 2.2.2 at trace levels in fluorodeoxyglucose. This method could also be applied to other [18F]fluorine-labelled radiopharmaceuticals that use Kryptofix 2.2.2 as a phase-transfer reagent.
de Kinkelder, R; van der Veen, R L P; Verbaak, F D; Faber, D J; van Leeuwen, T G; Berendschot, T T J M
2011-01-01
Purpose Accurate assessment of the amount of macular pigment (MPOD) is necessary to investigate the role of carotenoids and their assumed protective functions. High repeatability and reliability are important to monitor patients in studies investigating the influence of diet and supplements on MPOD. We evaluated the Macuscope (Macuvision Europe Ltd., Lapworth, Solihull, UK), a recently introduced device for measuring MPOD using the technique of heterochromatic flicker photometry (HFP). We determined agreement with another HFP device (QuantifEye; MPS 9000 series: Tinsley Precision Instruments Ltd., Croydon, Essex, UK) and a fundus reflectance method. Methods The right eyes of 23 healthy subjects (mean age 33.9±15.1 years) were measured. We determined agreement with QuantifEye and correlation with a fundus reflectance method. Repeatability of QuantifEye was assessed in 20 other healthy subjects (mean age 32.1±7.3 years). Repeatability was also compared with measurements by a fundus reflectance method in 10 subjects. Results We found low agreement between test and retest measurements with Macuscope. The average difference and the limits of agreement were −0.041±0.32. We found high agreement between test and retest measurements of QuantifEye (−0.02±0.18) and the fundus reflectance method (−0.04±0.18). MPOD data obtained by Macuscope and QuantifEye showed poor agreement: −0.017±0.44. For Macuscope and the fundus reflectance method, the correlation coefficient was r=0.05 (P=0.83). A significant correlation of r=0.87 (P<0.001) was found between QuantifEye and the fundus reflectance method. Conclusions Because repeatability of Macuscope measurements was low (ie, wide limits of agreement) and MPOD values correlated poorly with the fundus reflectance method, and agreed poorly with QuantifEye, the tested Macuscope protocol seems less suitable for studying MPOD. PMID:21057522
Boka, Vasiliki-Ioanna; Argyropoulou, Aikaterini; Gikas, Evangelos; Angelis, Apostolis; Aligiannis, Nektarios; Skaltsounis, Alexios-Leandros
2015-11-01
A high-performance thin-layer chromatographic methodology was developed and validated for the isolation and quantitative determination of oleuropein in two extracts of Olea europaea leaves. OLE_A was a crude acetone extract, while OLE_AA was its defatted residue. Initially, high-performance thin-layer chromatography was employed for the purification process of oleuropein with fast centrifugal partition chromatography, replacing high-performance liquid chromatography in the stage of the determination of the distribution coefficient and the retention volume. A densitometric method was developed for the determination of the distribution coefficients, K_C = C_S/C_M. The total concentrations of the target compound in the stationary phase (C_S) and in the mobile phase (C_M) were calculated from the areas measured in the high-performance thin-layer chromatogram. The estimated K_C was also used for the calculation of the retention volume, V_R, with a chromatographic retention equation. The obtained data were successfully applied for the purification of oleuropein and the experimental results confirmed the theoretical predictions, indicating that high-performance thin-layer chromatography could be an important counterpart in the phytochemical study of natural products. The isolated oleuropein (purity > 95%) was subsequently used for the estimation of its content in each extract with a simple, sensitive and accurate high-performance thin-layer chromatography method. The best-fit calibration curve from 1.0 µg/track to 6.0 µg/track of oleuropein was polynomial and the quantification was achieved by UV detection at λ 240 nm. The method was validated, giving rise to an efficient and high-throughput procedure, with the relative standard deviation (%) of repeatability and intermediate precision not exceeding 4.9% and accuracy between 92% and 98% (recovery rates). Moreover, the method was validated for robustness, limit of quantitation, and limit of detection.
The amount of oleuropein for OLE_A, OLE_AA, and an aqueous extract of olive leaves was estimated to be 35.5% ± 2.7, 51.5% ± 1.4, and 12.5% ± 0.12, respectively. Statistical analysis proved that the method is repeatable and selective, and can be effectively applied for the estimation of oleuropein in olive leaves' extracts, and could potentially replace high-performance liquid chromatography methodologies developed so far. Thus, the phytochemical investigation of oleuropein could be based on high-performance thin-layer chromatography coupled with separation processes, such as fast centrifugal partition chromatography, showing efficacy and credibility. Georg Thieme Verlag KG Stuttgart · New York.
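The two quantities the densitometric step feeds into can be sketched numerically. The symbols follow the abstract (K_C = C_S/C_M); the simple retention relation V_R = V_M + K_C·V_S is the standard counter-current/centrifugal-partition form and is an assumption here, as are all numeric values:

```python
def distribution_coefficient(c_stationary, c_mobile):
    """K_C = C_S / C_M, e.g. from densitometric peak areas."""
    return c_stationary / c_mobile

def retention_volume(v_mobile, v_stationary, k_c):
    """Standard CPC retention relation V_R = V_M + K_C * V_S (assumed form)."""
    return v_mobile + k_c * v_stationary

k_c = distribution_coefficient(3.0, 2.0)   # hypothetical peak areas
vr = retention_volume(25.0, 75.0, k_c)     # hypothetical phase volumes, mL
print(k_c, vr)  # 1.5 137.5
```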
[Comparison of the acrylamide level in microwaved popcorn with that of ordinarily heated one].
Sun, Shiyu; Xia, Yongmei; Liu, Xuefeng; Hu, Xueyi
2007-03-01
To establish a method of examining acrylamide in cooked popcorn. Solid-phase extraction/gas chromatography (SPE/GC) was established with N,N-dimethylacrylamide as internal standard. The detection limit and the quantification limit were estimated at 3 μg/L and 10 μg/L, respectively, and the linear correlation coefficient was 0.9969. Seven commercial popcorn samples with different flavors were collected and tested. The RSD of the acrylamide level of microwaved caramel sweet popcorn was 1.95% (n = 6). When the commercial caramel sweet and cream salted popcorns were microwaved (A and D) or conventionally heated (A' and D'), the acrylamide levels reached [Am]A = 1017 μg/kg, [Am]D = 146.5 μg/kg, [Am]A' = 2206 μg/kg and [Am]D' = 970.1 μg/kg, respectively. The microwaved popcorns tested are generally safer, because their acrylamide levels, except for the sample with high simple-sugar content, are clearly lower than those of the conventionally heated ones.
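Detection and quantification limits like the 3 μg/L and 10 μg/L figures above are commonly estimated from a calibration line as LOD = 3.3·σ/slope and LOQ = 10·σ/slope (the ICH-style convention; whether this paper used exactly this recipe is an assumption, and the calibration data below are hypothetical):

```python
import numpy as np

# Hypothetical calibration: concentration (μg/L) vs detector response.
conc = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
resp = np.array([0.021, 0.050, 0.102, 0.199, 0.405])

slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
sigma = residuals.std(ddof=2)  # residual SD of the straight-line fit

lod = 3.3 * sigma / slope   # limit of detection
loq = 10.0 * sigma / slope  # limit of quantification
print(f"LOD ≈ {lod:.2f} μg/L, LOQ ≈ {loq:.2f} μg/L")
```

By construction LOQ/LOD = 10/3.3 ≈ 3, consistent with the roughly 3:1 ratio reported in this and the following record.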
Delahaut, P; Jacquemin, P; Colemonts, Y; Dubois, M; De Graeve, J; Deluyker, H
1997-08-29
A study was conducted to test a multiresidue analytical procedure for detecting and quantifying several corticosteroids on which the European Union imposes maximum residue limits (MRLs). Primary extracts from different matrices (liver, milk, urine, faeces) were first purified on C18 cartridges. A new immunoaffinity clean-up step was included. The immunoaffinity gel was used to purify several corticosteroids simultaneously with enrichment of the corresponding fractions. The extracts were treated with an aqueous solution of pyridinium chlorochromate to fully oxidise all corticosteroids and to facilitate their extraction with dichloromethane. After evaporation, the final extract was reconstituted with toluene before injection into the GC-MS apparatus. The analysis was performed in the CI-negative ionisation mode using ammonia as the reactant gas. The estimated detection and quantification limits were, respectively, 0.25 and 0.5 ppb or lower. Overall, the method is reproducible to within 20%. Recovery is between 50 and 80% according to the corticosteroid.
Azemard, Sabine; Vassileva, Emilia
2015-06-01
In this paper, we present a simple, fast and cost-effective method for determination of methyl mercury (MeHg) in marine samples. All important parameters influencing the sample preparation process were investigated and optimized. Full validation of the method was performed in accordance to the ISO-17025 (ISO/IEC, 2005) and Eurachem guidelines. Blanks, selectivity, working range (0.09-3.0ng), recovery (92-108%), intermediate precision (1.7-4.5%), traceability, limit of detection (0.009ng), limit of quantification (0.045ng) and expanded uncertainty (15%, k=2) were assessed. Estimation of the uncertainty contribution of each parameter and the demonstration of traceability of measurement results was provided as well. Furthermore, the selectivity of the method was studied by analyzing the same sample extracts by advanced mercury analyzer (AMA) and gas chromatography-atomic fluorescence spectrometry (GC-AFS). Additional validation of the proposed procedure was effectuated by participation in the IAEA-461 worldwide inter-laboratory comparison exercises. Copyright © 2014 Elsevier Ltd. All rights reserved.
Suganthi, A; John, Sofiya; Ravi, T K
2008-01-01
A simple, precise, sensitive, rapid and reproducible HPTLC method for the simultaneous estimation of rabeprazole and itopride hydrochloride in tablets was developed and validated. This method involves separation of the components by TLC on a precoated silica gel G60F254 plate with a solvent system of n-butanol, toluene and ammonia (8.5:0.5:1 v/v/v); detection was carried out densitometrically using a UV detector at 288 nm in absorbance mode. This system was found to give compact spots for rabeprazole (Rf value of 0.23±0.02) and for itopride hydrochloride (Rf value of 0.75±0.02). Linearity was found to be in the range of 40-200 ng/spot and 300-1500 ng/spot for rabeprazole and itopride hydrochloride, respectively. The limit of detection and limit of quantification for rabeprazole were 10 and 20 ng/spot and for itopride hydrochloride were 50 and 100 ng/spot, respectively. The method was found to be beneficial for the routine analysis of the combined dosage form.
Sancheti, J. S.; Shaikh, M. F.; Khatwani, P. F.; Kulkarni, Savita R.; Sathaye, Sadhana
2013-01-01
A new robust, simple and economic high performance thin layer chromatographic method was developed for simultaneous estimation of L-glutamic acid and γ-amino butyric acid in brain homogenate. The high performance thin layer chromatographic separation of these amino acid was achieved using n-butanol:glacial acetic acid:water (22:3:5 v/v/v) as mobile phase and ninhydrin as a derivatising agent. Quantitation of the method was achieved by densitometric method at 550 nm over the concentration range of 10-100 ng/spot. This method showed good separation of amino acids in the brain homogenate with Rf value of L-glutamic acid and γ-amino butyric acid as 21.67±0.58 and 33.67±0.58, respectively. The limit of detection and limit of quantification for L-glutamic acid was found to be 10 and 20 ng and for γ-amino butyric acid it was 4 and 10 ng, respectively. The method was also validated in terms of accuracy, precision and repeatability. The developed method was found to be precise and accurate with good reproducibility and shows promising applicability for studying pathological status of disease and therapeutic significance of drug treatment. PMID:24591747
Giuliani, N; Saugy, M; Augsburger, M; Varlet, V
2015-11-01
A headspace-gas chromatography-tandem mass spectrometry (HS-GC-MS/MS) method for the trace measurement of perfluorocarbon compounds (PFCs) in blood was developed. Because of the oxygen-carrying capability of PFCs, their misuse in doping and sports is suspected. This study was therefore extended to validate the method for F-tert-butylcyclohexane (Oxycyte(®)), perfluoro(methyldecalin) (PFMD) and perfluorodecalin (PFD). The limit of detection was established at 1.2 µg/mL blood for F-tert-butylcyclohexane, 4.9 µg/mL blood for PFMD and 9.6 µg/mL blood for PFD. The limit of quantification was assumed to be 12 µg/mL blood (F-tert-butylcyclohexane), 48 µg/mL blood (PFMD) and 96 µg/mL blood (PFD). The HS-GC-MS/MS technique allows detection at levels 1000 to 10,000 times lower than the dose estimated to be required to produce a biological effect for the investigated PFCs. Thus, this technique could be used to identify PFC misuse several hours, perhaps days, after the injection or the sporting event. Clinical trials with these compounds are still required to evaluate the validation parameters against the calculated estimations. Copyright © 2015 Elsevier B.V. All rights reserved.
Placing an upper limit on cryptic marine sulphur cycling.
Johnston, D T; Gill, B C; Masterson, A; Beirne, E; Casciotti, K L; Knapp, A N; Berelson, W
2014-09-25
A quantitative understanding of sources and sinks of fixed nitrogen in low-oxygen waters is required to explain the role of oxygen-minimum zones (OMZs) in controlling the fixed nitrogen inventory of the global ocean. Apparent imbalances in geochemical nitrogen budgets have spurred numerous studies to measure the contributions of heterotrophic and autotrophic N2-producing metabolisms (denitrification and anaerobic ammonia oxidation, respectively). Recently, 'cryptic' sulphur cycling was proposed as a partial solution to the fundamental biogeochemical problem of closing marine fixed-nitrogen budgets in intensely oxygen-deficient regions. The degree to which the cryptic sulphur cycle can fuel a loss of fixed nitrogen in the modern ocean requires the quantification of sulphur recycling in OMZ settings. Here we provide a new constraint for OMZ sulphate reduction based on isotopic profiles of oxygen ((18)O/(16)O) and sulphur ((33)S/(32)S, (34)S/(32)S) in seawater sulphate through oxygenated open-ocean and OMZ-bearing water columns. When coupled with observations and models of sulphate isotope dynamics and data-constrained model estimates of OMZ water-mass residence time, we find that previous estimates for sulphur-driven remineralization and loss of fixed nitrogen from the oceans are near the upper limit for what is possible given in situ sulphate isotope data.
Thappali, Satheeshmanikandan R. S.; Varanasi, Kanthikiran; Veeraraghavan, Sridhar; Arla, Rambabu; Chennupati, Sandhya; Rajamanickam, Madheswaran; Vakkalanka, Swaroop; Khagga, Mukkanti
2012-01-01
A new method for the simultaneous determination of celecoxib, erlotinib, and its active metabolite desmethyl-erlotinib (OSI-420) in rat plasma, by liquid chromatography/tandem mass spectrometry with positive/negative ion-switching electrospray ionization, was developed and validated. Protein precipitation with methanol was selected as the sample preparation method. The analytes were separated on a reverse-phase C18 column (50 mm × 4.6 mm i.d., 3 μm) using methanol:2 mM ammonium acetate buffer (pH 4.0) as the mobile phase at a flow rate of 0.8 mL/min. Sitagliptin and efavirenz were used as the internal standards for quantification. The determination was carried out on a Thermo Finnigan Quantum Ultra triple-quadrupole mass spectrometer, operated in selected reaction monitoring (SRM) mode using the following transitions monitored simultaneously: m/z 394.5→278.1 for erlotinib and m/z 380.3→278.1 for desmethyl-erlotinib (OSI-420) in positive mode, and m/z 380.1→316.3 for celecoxib in negative mode. The limits of quantification (LOQs) were 1.5 ng/mL for celecoxib, erlotinib, and OSI-420. Within- and between-day accuracy and precision of the validated method were within the acceptable limits of <15% at all concentrations. The quantitation method was successfully applied for the simultaneous estimation of celecoxib, erlotinib, and desmethyl-erlotinib in a pharmacokinetic study in Wistar rats. PMID:23008811
Pyschik, Marcelina; Klein-Hitpaß, Marcel; Girod, Sabrina; Winter, Martin; Nowak, Sascha
2017-02-01
In this study, an optimized method using capillary electrophoresis (CE) with a direct contactless conductivity detector (C4D) is presented for a new application field: the quantification of fluoride in commonly used lithium ion battery (LIB) electrolytes based on LiPF6 in organic carbonate solvents, and in ionic liquids (ILs) after contact with Li metal. Method development covered the choice of a suitable buffer and of the CE conditions for the quantification of fluoride. The fluoride concentrations measured in different LIB electrolyte samples were compared to results from an ion-selective electrode (ISE). The relative standard deviations (RSDs) and recovery rates for fluoride showed very high accuracy for both methods, and the fluoride concentrations in the LIB electrolytes were in very good agreement between them. In addition, the limit of detection (LOD) and limit of quantification (LOQ) were determined for the CE method. The CE method was also applied to the quantification of fluoride in ILs: in the fresh IL sample the fluoride concentration was below the LOD, whereas in a sample of the IL mixed with Li metal the fluoride concentration could be quantified. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sigrist, Mirna; Hilbe, Nandi; Brusa, Lucila; Campagnoli, Darío; Beldoménico, Horacio
2016-11-01
An optimized flow injection hydride generation atomic absorption spectroscopy (FI-HGAAS) method was used to determine total arsenic in selected food samples (beef, chicken, fish, milk, cheese, egg, rice, rice-based products, wheat flour, corn flour, oats, breakfast cereals, legumes and potatoes) and to estimate their contributions to inorganic arsenic dietary intake. The limit of detection (LOD) and limit of quantification (LOQ) values obtained were 6 μg kg⁻¹ and 18 μg kg⁻¹, respectively. The mean recovery range obtained for all food at a fortification level of 200 μg kg⁻¹ was 85-110%. Accuracy was evaluated using dogfish liver certified reference material (DOLT-3 NRC) for trace metals. The highest total arsenic concentrations (in μg kg⁻¹) were found in fish (152-439), rice (87-316) and rice-based products (52-201). The contribution to inorganic arsenic (i-As) intake was calculated from the mean i-As content of each food (calculated by applying conversion factors to total arsenic data) and the mean consumption per day. The primary contributors to inorganic arsenic intake were wheat flour, including its proportion in wheat flour-based products (breads, pasta and cookies), followed by rice; these two foods account for close to 53% and 17% of the intake, respectively. The i-As dietary intake, estimated as 10.7 μg day⁻¹, was significantly lower than that from drinking water in vast regions of Argentina. Copyright © 2016 Elsevier Ltd. All rights reserved.
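The intake arithmetic described in this record reduces to: mean inorganic-As content (total As times a food-specific conversion factor) times mean daily consumption, summed over foods. A minimal sketch with hypothetical placeholder values (not the paper's data):

```python
# food: (mean total As, μg/kg; i-As conversion factor; mean consumption, kg/day)
foods = {
    "wheat flour": (30.0, 0.9, 0.20),   # hypothetical values
    "rice":        (200.0, 0.7, 0.013), # hypothetical values
}

# i-As intake = sum over foods of total_As * conversion_factor * consumption
intake = sum(tot * factor * cons for tot, factor, cons in foods.values())
print(f"estimated i-As intake ≈ {intake:.1f} μg/day")
```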
Sankar, A S Kamatchi; Vetrichelvan, Thangarasu; Venkappaya, Devashya
2011-09-01
In the present work, three different spectrophotometric methods for simultaneous estimation of ramipril, aspirin and atorvastatin calcium in raw materials and in formulations are described. Overlapped data was quantitatively resolved by using chemometric methods, viz. inverse least squares (ILS), principal component regression (PCR) and partial least squares (PLS). Calibrations were constructed using the absorption data matrix corresponding to the concentration data matrix. The linearity range was found to be 1-5, 10-50 and 2-10 μg mL⁻¹ for ramipril, aspirin and atorvastatin calcium, respectively. The absorbance matrix was obtained by measuring the zero-order absorbance in the wavelength range between 210 and 320 nm. A training set design of the concentration data corresponding to the ramipril, aspirin and atorvastatin calcium mixtures was organized statistically to maximize the information content from the spectra and to minimize the error of multivariate calibrations. By applying the respective algorithms for PLS 1, PCR and ILS to the measured spectra of the calibration set, a suitable model was obtained. This model was selected on the basis of RMSECV and RMSEP values. The same was applied to the prediction set and capsule formulation. Mean recoveries of the commercial formulation set together with the figures of merit (calibration sensitivity, selectivity, limit of detection, limit of quantification and analytical sensitivity) were estimated. Validity of the proposed approaches was successfully assessed for analyses of drugs in the various prepared physical mixtures and formulations.
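Of the three chemometric methods named, inverse least squares (ILS) is the simplest to sketch: concentrations are regressed directly on absorbances, C ≈ A·B, with B found by least squares, and RMSEP computed on a held-out set. The spectra below are synthetic (Beer's law plus noise), not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(42)
K = rng.uniform(0.1, 1.0, size=(3, 6))         # "molar spectra": 3 analytes x 6 wavelengths

# Calibration (training) set: A = C·K + noise
C_train = rng.uniform(1.0, 5.0, size=(20, 3))  # concentrations
A_train = C_train @ K + rng.normal(0.0, 1e-3, size=(20, 6))

# ILS calibration: solve C ≈ A·B by least squares
B, *_ = np.linalg.lstsq(A_train, C_train, rcond=None)

# Prediction set and root-mean-square error of prediction (RMSEP)
C_test = rng.uniform(1.0, 5.0, size=(8, 3))
A_test = C_test @ K + rng.normal(0.0, 1e-3, size=(8, 6))
rmsep = np.sqrt(np.mean((A_test @ B - C_test) ** 2))
print(f"RMSEP = {rmsep:.4f}")
```

PCR and PLS differ from ILS mainly in projecting the absorbance matrix onto a few latent components before the regression, which stabilizes the fit when wavelengths are highly collinear.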
Floren, C; Wiedemann, I; Brenig, B; Schütz, E; Beck, J
2015-04-15
Species fraud and product mislabelling in processed food, albeit not being a direct health issue, often results in consumer distrust. Therefore methods for quantification of undeclared species are needed. Targeting mitochondrial DNA, e.g. CYTB gene, for species quantification is unsuitable, due to a fivefold inter-tissue variation in mtDNA content per cell resulting in either an under- (-70%) or overestimation (+160%) of species DNA contents. Here, we describe a reliable two-step droplet digital PCR (ddPCR) assay targeting the nuclear F2 gene for precise quantification of cattle, horse, and pig in processed meat products. The ddPCR assay is advantageous over qPCR showing a limit of quantification (LOQ) and detection (LOD) in different meat products of 0.01% and 0.001%, respectively. The specificity was verified in 14 different species. Hence, determining F2 in food by ddPCR can be recommended for quality assurance and control in production systems. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
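Droplet digital PCR quantification as described above rests on Poisson statistics: from the fraction of negative droplets, the mean copies per droplet is λ = −ln(n_neg/n_total), and concentration follows by dividing by the droplet volume. This is the standard ddPCR relation; the droplet volume below is a typical value used for illustration, not a figure from this paper:

```python
import math

def ddpcr_copies_per_ul(n_negative, n_total, droplet_volume_nl=0.85):
    """Poisson-corrected ddPCR concentration in copies per microlitre.
    droplet_volume_nl is an assumed typical droplet volume."""
    lam = -math.log(n_negative / n_total)      # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)   # nL -> μL

print(round(ddpcr_copies_per_ul(12000, 15000), 1))  # 262.5
```

Species fractions in a meat product then follow as the ratio of target-species concentration to the total single-copy nuclear-gene concentration, which is why a single-copy target like F2 avoids the tissue-dependent copy-number bias of mtDNA.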
Ortega, Nàdia; Macià, Alba; Romero, Maria-Paz; Trullols, Esther; Morello, Jose-Ramón; Anglès, Neus; Motilva, Maria-Jose
2009-08-26
An improved chromatographic method was developed using ultra-performance liquid chromatography-tandem mass spectrometry to identify and quantify phenolic compounds and alkaloids, theobromine and caffeine, in carob flour samples. The developed method has been validated in terms of speed, sensitivity, selectivity, peak efficiency, linearity, reproducibility, limits of detection, and limits of quantification. The chromatographic method allows the identification and quantification of 20 phenolic compounds, that is, phenolic acids, flavonoids, and their aglycone and glucoside forms, together with the determination of the alkaloids, caffeine and theobromine, at low concentration levels all in a short analysis time of less than 20 min.
Shin, Kyung-Hwa; Lee, Hyun-Ji; Chang, Chulhun L; Kim, Hyung-Hoi
2018-04-01
Hepatitis B virus (HBV) DNA levels are used to predict the response to therapy, determine therapy initiation, monitor resistance to therapy, and establish treatment success. To verify the performance of the cobas HBV test using the cobas 4800 system for HBV DNA quantification and to compare the HBV DNA quantification ability between the cobas HBV test and COBAS AmpliPrep/COBAS TaqMan HBV version 2.0 (CAP/CTM v2.0). The precision, linearity, and limit of detection of the cobas HBV test were evaluated using the 4th World Health Organization International Standard material and plasma samples. Clinical samples that yielded quantitative results using the CAP/CTM v2.0 and cobas HBV tests were subjected to correlational analysis. Three hundred forty-nine samples were subjected to correlational analysis, among which 114 samples showed results above the lower limit of quantification. Comparable results were obtained ([cobas HBV test] = 1.038 × [CAP/CTM v2.0] − 0.173, r = 0.914) in the 114 samples that yielded values above the lower limit of quantification. The results for 86.8% of the samples obtained using the cobas HBV test were within 0.5 log10 IU/mL of the CAP/CTM v2.0 results. The total precision values against the low and high positive controls were 1.4% (mean level: 2.25 log10 IU/mL) and 3.2% (mean level: 6.23 log10 IU/mL), respectively. The cobas HBV test demonstrated linearity (1.15-6.75 log10 IU/mL, y = 0.95x + 0.17, r² = 0.994). The cobas HBV test showed good correlation with CAP/CTM v2.0, and had good precision and an acceptable limit of detection. The cobas HBV test using the cobas 4800 is a reliable method for quantifying HBV DNA levels in the clinical setting. Copyright © 2018. Published by Elsevier B.V.
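The agreement metric quoted above (the fraction of paired viral-load results within 0.5 log10 IU/mL of each other) is straightforward to compute; the paired values below are toy examples, not the study data:

```python
import math

def fraction_within(a, b, tol_log10=0.5):
    """Fraction of paired results whose log10 difference is within tol_log10."""
    diffs = [abs(math.log10(x) - math.log10(y)) for x, y in zip(a, b)]
    return sum(d <= tol_log10 for d in diffs) / len(diffs)

# Hypothetical paired viral loads (IU/mL) from two assays:
assay_1 = [1.2e3, 4.5e4, 8.0e5, 2.0e2]
assay_2 = [1.0e3, 5.0e4, 4.0e6, 2.2e2]
print(fraction_within(assay_1, assay_2))  # 0.75: one pair differs by ~0.7 log10
```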
Rashed-Ul Islam, S M; Jahan, Munira; Tabassum, Shahina
2015-01-01
Virological monitoring is the best predictor for the management of chronic hepatitis B virus (HBV) infections. Consequently, it is important to use the most efficient, rapid and cost-effective testing systems for HBV DNA quantification. The present study compared the performance characteristics of a one-step HBV polymerase chain reaction (PCR) vs the two-step HBV PCR method for quantification of HBV DNA from clinical samples. A total of 100 samples, consisting of 85 randomly selected samples from patients with chronic hepatitis B (CHB) and 15 samples from apparently healthy individuals, were enrolled in this study. Of the 85 CHB clinical samples tested, HBV DNA was detected in 81% of samples by the one-step PCR method, with a median HBV DNA viral load (VL) of 7.50 × 10³ IU/ml. In contrast, 72% of samples were detected by the two-step PCR system, with a median HBV DNA of 3.71 × 10³ IU/ml. The one-step method showed strong linear correlation with the two-step PCR method (r = 0.89; p < 0.0001). Both methods showed good agreement in a Bland-Altman plot, with a mean difference of 0.61 log10 IU/ml and limits of agreement of -1.82 to 3.03 log10 IU/ml. The intra-assay and interassay coefficients of variation (CV%) of plasma samples (4-7 log10 IU/ml) for the one-step PCR method ranged between 0.33 to 0.59 and 0.28 to 0.48, respectively, demonstrating a high level of concordance between the two methods. Moreover, elimination of the DNA extraction step in the one-step PCR kit allowed time-efficient and significant labor and cost savings for the quantification of HBV DNA in a resource-limited setting. Rashed-Ul Islam SM, Jahan M, Tabassum S. Evaluation of a Rapid One-step Real-time PCR Method as a High-throughput Screening for Quantification of Hepatitis B Virus DNA in a Resource-limited Setting. Euroasian J Hepato-Gastroenterol 2015;5(1):11-15.
NASA Astrophysics Data System (ADS)
Cudalbu, C.; Mlynárik, V.; Xin, L.; Gruetter, Rolf
2009-10-01
Reliable quantification of the macromolecule signals in short echo-time 1H MRS spectra is particularly important at high magnetic fields for accurate quantification of metabolite concentrations (the neurochemical profile), due to the effectively increased spectral resolution of the macromolecule components. The purpose of the present study was to assess two approaches to quantification which take the contribution of macromolecules into account in the quantification step. 1H spectra were acquired on a 14.1 T/26 cm horizontal scanner on five rats using the ultra-short echo-time SPECIAL (spin echo full intensity acquired localization) spectroscopy sequence. Metabolite concentrations were estimated using LCModel, combined with a simulated basis set of metabolites using published spectral parameters and either the spectrum of macromolecules measured in vivo, using an inversion recovery technique, or a baseline simulated by the built-in spline function. The fitted spline function resulted in a smooth approximation of the in vivo macromolecules but, in accordance with previous studies using Subtract-QUEST, could not completely reproduce all features of the in vivo spectrum of macromolecules at 14.1 T. As a consequence, the measured macromolecular 'baseline' led to a more accurate and reliable quantification at higher field strengths.
Tuot, Delphine S; McCulloch, Charles E; Velasquez, Alexandra; Schillinger, Dean; Hsu, Chi-Yuan; Handley, Margaret; Powe, Neil R
2018-04-23
Many individuals with chronic kidney disease (CKD) do not receive guideline-concordant care. We examined the impact of a team-based primary care CKD registry on clinical measures and processes of care among patients with CKD cared for in a public safety-net health care delivery system. Pragmatic trial of a CKD registry versus a usual-care registry for 1 year. Primary care providers (PCPs) and their patients with CKD in a safety-net primary care setting in San Francisco. The CKD registry identified at point of care all patients with CKD, those with blood pressure (BP)>140/90mmHg, those without angiotensin-converting enzyme (ACE) inhibitor/angiotensin receptor blocker (ARB) prescription, and those without albuminuria quantification in the past year. It also provided quarterly feedback pertinent to these metrics to promote "outreach" to patients with CKD. The usual-care registry provided point-of-care cancer screening and immunization data. Changes in systolic BP at 12 months (primary outcome), proportion of patients with BP control, prescription of ACE inhibitors/ARBs, quantification of albuminuria, severity of albuminuria, and estimated glomerular filtration rate. The patient population (n=746) had a mean age of 56.7±12.1 (standard deviation) years, was 53% women, and was diverse (8% non-Hispanic white, 35.7% black, 24.5% Hispanic, and 24.4% Asian). Randomization to the CKD registry (30 PCPs, 285 patients) versus the usual-care registry (49 PCPs, 461 patients) was associated with 2-fold greater odds of ACE inhibitor/ARB prescription (adjusted OR, 2.25; 95% CI, 1.45-3.49) and albuminuria quantification (adjusted OR, 2.44; 95% CI, 1.38-4.29) during the 1-year study period. Randomization to the CKD registry was not associated with changes in systolic BP, proportion of patients with uncontrolled BP, or degree of albuminuria or estimated glomerular filtration rate. 
Potential misclassification of CKD; missing baseline medication data; limited to study of a public safety-net health care system. A team-based safety-net primary care CKD registry did not improve BP parameters, but led to greater albuminuria quantification and more ACE inhibitor/ARB prescriptions after 1 year. Adoption of team-based CKD registries may represent an important step in translating evidence into practice for CKD management. Copyright © 2018 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
43 CFR 11.73 - Quantification phase-resource recoverability analysis.
Code of Federal Regulations, 2014 CFR
2014-10-01
... analysis. (a) Requirement. The time needed for the injured resources to recover to the state that the... been acquired to baseline levels shall be estimated. The time estimated for recovery or any lesser period of time as determined in the Assessment Plan must be used as the recovery period for purposes of...
USDA-ARS?s Scientific Manuscript database
Quantification of regional greenhouse gas (GHG) fluxes is essential for establishing mitigation strategies and evaluating their effectiveness. Here, we used multiple top-down approaches and multiple trace gas observations at a tall tower to estimate GHG regional fluxes and evaluate the GHG fluxes de...
CONTRIBUTIONS OF CURRENT YEAR PHOTOSYNTHATE TO FINE ROOTS ESTIMATED USING A 13C-DEPLETED CO2 SOURCE
The quantification of root turnover is necessary for a complete understanding of plant carbon (C) budgets, especially in terms of impacts of global climate change. To improve estimates of root turnover, we present a method to distinguish current- from prior-year allocation of ca...
Forest Stand Canopy Structure Attribute Estimation from High Resolution Digital Airborne Imagery
Demetrios Gatziolis
2006-01-01
A study of forest stand canopy variable assessment using digital, airborne, multispectral imagery is presented. Variable estimation involves stem density, canopy closure, and mean crown diameter, and it is based on quantification of spatial autocorrelation among pixel digital numbers (DN) using variogram analysis and an alternative, non-parametric approach known as...
ERIC Educational Resources Information Center
Kim, Ho Sung
2013-01-01
A quantitative method for estimating an expected uncertainty (reliability and validity) in assessment results arising from the relativity between four variables, viz examiner's expertise, examinee's expertise achieved, assessment task difficulty and examinee's performance, was developed for the complex assessment applicable to final…
De Backer, A; van den Bos, K H W; Van den Broek, W; Sijbers, J; Van Aert, S
2016-12-01
An efficient model-based estimation algorithm is introduced to quantify the atomic column positions and intensities from atomic resolution (scanning) transmission electron microscopy ((S)TEM) images. This algorithm uses the least squares estimator on image segments containing individual columns, fully accounting for overlap between neighbouring columns and enabling the analysis of a large field of view. For this algorithm, the accuracy and precision with which the atomic column positions and scattering cross-sections can be estimated from annular dark field (ADF) STEM images have been investigated. The highest attainable precision is reached even for low-dose images. Furthermore, the advantages of the model-based approach taking into account overlap between neighbouring columns are highlighted. This is done for the estimation of the distance between two neighbouring columns as a function of their separation, and for the estimation of the scattering cross-section, which is compared to the integrated intensity from a Voronoi cell. To provide end-users with this well-established quantification method, a user-friendly program, StatSTEM, has been developed, which is freely available under a GNU public license. Copyright © 2016 Elsevier B.V. All rights reserved.
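The column-by-column least-squares idea can be illustrated with a minimal sketch: a single atomic column modelled as a 2D Gaussian peak whose position, width and height are recovered by nonlinear least squares, with the scattering cross-section taken as the integrated model intensity. This is a simplified stand-in, not the StatSTEM implementation; the image and parameter values are synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic ADF-STEM-like image segment: one atomic column as a 2D Gaussian
yy, xx = np.mgrid[0:21, 0:21]
true = (10.3, 9.7, 2.0, 500.0)  # x0, y0, width, height (hypothetical units)
img = true[3] * np.exp(-(((xx - true[0])**2 + (yy - true[1])**2) / (2 * true[2]**2)))

def residuals(p):
    """Difference between the parametric Gaussian model and the observed segment."""
    x0, y0, s, h = p
    model = h * np.exp(-(((xx - x0)**2 + (yy - y0)**2) / (2 * s**2)))
    return (model - img).ravel()

fit = least_squares(residuals, x0=[10.0, 10.0, 1.5, 400.0])
x0, y0, s, h = fit.x
# Integrated intensity of the fitted peak, a proxy for the scattering cross-section
cross_section = 2 * np.pi * s**2 * h
```

In a real image the fit would run per segment with overlapping neighbouring columns included in the model, which is the feature the abstract highlights.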
Liu, Junyan; Liu, Yang; Gao, Mingxia; Zhang, Xiangmin
2012-08-01
A facile proteomic quantification method, fluorescent labeling absolute quantification (FLAQ), was developed. Instead of using MS for quantification, FLAQ is a chromatography-based quantification combined with MS for identification. Multidimensional liquid chromatography (MDLC) with high-accuracy laser-induced fluorescence (LIF) detection and a tandem MS system were employed for FLAQ. Several requirements should be met for fluorescent labeling in MS identification: labeling completeness, minimal side-reactions, simple MS spectra, and no extra tandem MS fragmentation for structure elucidation. A fluorescent dye, 5-iodoacetamidofluorescein, was chosen to label proteins on all cysteine residues. The dye was compatible with the trypsin digestion process and MALDI MS identification. Quantitative labeling was achieved by optimizing the reaction conditions. A synthesized peptide and model proteins, BSA (35 cysteines) and OVA (five cysteines), were used to verify the completeness of labeling. Proteins were separated by MDLC and quantified based on fluorescence intensities, followed by MS identification. High accuracy (RSD < 1.58%) and a wide linear quantification range (1-10⁵) were achieved by LIF detection. The limit of quantitation for the model protein was as low as 0.34 amol. Some proteins of the human liver proteome were quantified as a demonstration of FLAQ. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
[Detection of recombinant-DNA in foods from stacked genetically modified plants].
Sorokina, E Iu; Chernyshova, O N
2012-01-01
A quantitative real-time multiplex polymerase chain reaction method was applied to the detection and quantification of MON863 and MON810 in the stacked genetically modified maize MON810 × MON863. The limit of detection was approximately 0.1%. The accuracy of quantification, measured as bias from the accepted value, and the relative repeatability standard deviation, which measures intra-laboratory variability, were within 25% at each GM level. Method verification demonstrated that the MON863 and MON810 methods can be equally applied to quantification of the respective events in the stacked MON810 × MON863.
McEwen, I; Mulloy, B; Hellwig, E; Kozerski, L; Beyer, T; Holzgrabe, U; Wanko, R; Spieser, J-M; Rodomonte, A
2008-12-01
Oversulphated Chondroitin Sulphate (OSCS) and Dermatan Sulphate (DS) in unfractionated heparins can be identified by nuclear magnetic resonance spectrometry (NMR). The limit of detection (LoD) of OSCS is 0.1% relative to the heparin content. This LoD is obtained at a signal-to-noise ratio (S/N) of 2000:1 of the heparin methyl signal. Quantification is best obtained by comparing peak heights of the OSCS and heparin methyl signals. Reproducibility of less than 10% relative standard deviation (RSD) has been obtained. The accuracy of quantification was good.
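The peak-height comparison described above reduces to a simple ratio. The sketch below assumes a linear response and hypothetical peak heights, and omits the calibration against reference standards that a real NMR assay would require:

```python
def oscs_percent(peak_height_oscs, peak_height_heparin):
    """Estimate OSCS content relative to heparin from NMR methyl peak heights.

    Simplified ratio under an assumed linear response; real quantification
    would be calibrated against spiked reference standards.
    """
    return 100.0 * peak_height_oscs / peak_height_heparin

# Hypothetical peak heights near the reported 0.1% detection limit region
level = oscs_percent(0.5, 100.0)
```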
Hao, Jie; Astle, William; De Iorio, Maria; Ebbels, Timothy M D
2012-08-01
Nuclear Magnetic Resonance (NMR) spectra are widely used in metabolomics to obtain metabolite profiles in complex biological mixtures. Common methods used to assign and estimate concentrations of metabolites involve either an expert manual peak fitting or extra pre-processing steps, such as peak alignment and binning. Peak fitting is very time consuming and is subject to human error. Conversely, alignment and binning can introduce artefacts and limit immediate biological interpretation of models. We present the Bayesian automated metabolite analyser for NMR spectra (BATMAN), an R package that deconvolutes peaks from one-dimensional NMR spectra, automatically assigns them to specific metabolites from a target list and obtains concentration estimates. The Bayesian model incorporates information on characteristic peak patterns of metabolites and is able to account for shifts in the position of peaks commonly seen in NMR spectra of biological samples. It applies a Markov chain Monte Carlo algorithm to sample from a joint posterior distribution of the model parameters and obtains concentration estimates with reduced error compared with conventional numerical integration and comparable to manual deconvolution by experienced spectroscopists. http://www1.imperial.ac.uk/medicine/people/t.ebbels/ t.ebbels@imperial.ac.uk.
Progress and limitations on quantifying nutrient and carbon loading to coastal waters
NASA Astrophysics Data System (ADS)
Stets, E.; Oelsner, G. P.; Stackpoole, S. M.
2017-12-01
Riverine export of nutrients and carbon to estuarine and coastal waters are important determinants of coastal ecosystem health and provide necessary insight into global biogeochemical cycles. Quantification of coastal solute loads typically relies upon modeling based on observations of concentration and discharge from selected rivers draining to the coast. Most large-scale river export models require unidirectional flow and thus are referenced to monitoring locations at the head of tide, which can be located far inland. As a result, the contributions of the coastal plain, tidal wetlands, and concentrated coastal development are often poorly represented in regional and continental-scale estimates of solute delivery to coastal waters. However, site-specific studies have found that these areas are disproportionately active in terms of nutrient and carbon export. Modeling efforts to upscale fluxes from these areas, while not common, also suggest an outsized importance to coastal flux estimates. This presentation will focus on illustrating how the problem of under-representation of near-shore environments impacts large-scale coastal flux estimates in the context of recent regional and continental-scale assessments. Alternate approaches to capturing the influence of the near-coastal terrestrial inputs including recent data aggregation efforts and modeling approaches will be discussed.
Ihssane, B; Bouchafra, H; El Karbane, M; Azougagh, M; Saffaj, T
2016-05-01
We propose in this work an efficient way to evaluate the measurement uncertainty at the end of the development step of an analytical method, since this assessment provides an indication of the performance of the optimization process. The uncertainty is estimated through a robustness test applying a Plackett-Burman design, investigating six parameters influencing the simultaneous chromatographic assay of five water-soluble vitamins. The estimated effects of the variation of each parameter are translated into a standard uncertainty value at each concentration level. The values obtained for the relative uncertainty do not exceed the acceptance limit of 5%, showing that the procedure development was done well. In addition, a statistical comparison of the standard uncertainties after the development stage with those of the validation step indicates that the estimated uncertainties are equivalent. The results clearly show the performance and capacity of the chromatographic method to simultaneously assay the five vitamins, and its suitability for routine application. Copyright © 2015 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
NASA Astrophysics Data System (ADS)
Dekker, Iris N.; Houweling, Sander; Aben, Ilse; Röckmann, Thomas; Krol, Maarten; Martínez-Alonso, Sara; Deeter, Merritt N.; Worden, Helen M.
2017-12-01
The growth of mega-cities leads to air quality problems directly affecting the citizens. Satellite measurements are becoming of higher quality and quantity, which leads to more accurate satellite retrievals of enhanced air pollutant concentrations over large cities. In this paper, we compare and discuss both an existing and a new method for estimating urban-scale trends in CO emissions using multi-year retrievals from the MOPITT satellite instrument. The first method is based mainly on satellite data, and has the advantage of fewer assumptions, but also comes with uncertainties and limitations, as shown in this paper. To improve the reliability of urban-to-regional scale emission trend estimation, we simulate MOPITT retrievals using the Weather Research and Forecasting model with chemistry core (WRF-Chem). The difference between model and retrieval is used to optimize CO emissions in WRF-Chem, focusing on the city of Madrid, Spain. This method has the advantage over the existing method that it allows both a trend analysis of CO concentrations and a quantification of CO emissions. Our analysis confirms that MOPITT is capable of detecting CO enhancements over Madrid, although significant differences remain between the yearly averaged model output and satellite measurements (R2 = 0.75) over the city. After optimization, we find Madrid CO emissions to be lower by 48% for 2002 and by 17% for 2006 compared with the EdgarV4.2 emission inventory. The MOPITT-derived emission adjustments lead to better agreement with the European emission inventory TNO-MACC-III for both years. This suggests that the downward trend in CO emissions over Madrid is overestimated in EdgarV4.2 and more realistically represented in TNO-MACC-III. However, our satellite- and model-based emission estimates have large uncertainties, around 20% for 2002 and 50% for 2006.
Nanoparticle size detection limits by single particle ICP-MS for 40 elements.
Lee, Sungyun; Bi, Xiangyu; Reed, Robert B; Ranville, James F; Herckes, Pierre; Westerhoff, Paul
2014-09-02
The quantification and characterization of natural, engineered, and incidental nano- to micro-size particles are beneficial to assessing a nanomaterial's performance in manufacturing, their fate and transport in the environment, and their potential risk to human health. Single particle inductively coupled plasma mass spectrometry (spICP-MS) can sensitively quantify the amount and size distribution of metallic nanoparticles suspended in aqueous matrices. To accurately obtain the nanoparticle size distribution, it is critical to have knowledge of the size detection limit (denoted as Dmin) using spICP-MS for a wide range of elements (other than a few available assessed ones) that have been or will be synthesized into engineered nanoparticles. Herein is described a method to estimate the size detection limit using spICP-MS and then apply it to nanoparticles composed of 40 different elements. The calculated Dmin values correspond well for a few of the elements with their detectable sizes that are available in the literature. Assuming each nanoparticle sample is composed of one element, Dmin values vary substantially among the 40 elements: Ta, U, Ir, Rh, Th, Ce, and Hf showed the lowest Dmin values, ≤10 nm; Bi, W, In, Pb, Pt, Ag, Au, Tl, Pd, Y, Ru, Cd, and Sb had Dmin in the range of 11-20 nm; Dmin values of Co, Sr, Sn, Zr, Ba, Te, Mo, Ni, V, Cu, Cr, Mg, Zn, Fe, Al, Li, and Ti were located at 21-80 nm; and Se, Ca, and Si showed high Dmin values, greater than 200 nm. A range of parameters that influence the Dmin, such as instrument sensitivity, nanoparticle density, and background noise, is demonstrated. It is observed that, when the background noise is low, the instrument sensitivity and nanoparticle density dominate the Dmin significantly. Approaches for reducing the Dmin, e.g., collision cell technology (CCT) and analyte isotope selection, are also discussed. 
To validate the Dmin estimation approach, size distributions for three engineered nanoparticle samples were obtained using spICP-MS. The use of this methodology confirms that the observed minimum detectable sizes are consistent with the calculated Dmin values. Overall, this work identifies the elements and nanoparticles to which current spICP-MS approaches can be applied, in order to enable quantification of very small nanoparticles at low concentrations in aqueous media.
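The Dmin concept can be sketched as a mass-to-diameter conversion: the smallest analyte mass distinguishable from the background is converted to the equivalent spherical particle diameter. This is a simplified illustration of the idea, not the paper's full calculation (which also folds in instrument sensitivity and background noise); the detectable mass below is hypothetical:

```python
import math

def d_min_nm(mass_detection_limit_g, density_g_cm3, mass_fraction=1.0):
    """Equivalent spherical diameter (nm) of the smallest detectable particle.

    mass_detection_limit_g: smallest analyte mass (g) distinguishable from
    background; mass_fraction: analyte element's mass fraction in the particle.
    """
    volume_cm3 = mass_detection_limit_g / (mass_fraction * density_g_cm3)
    d_cm = (6.0 * volume_cm3 / math.pi) ** (1.0 / 3.0)  # sphere: V = pi*d^3/6
    return d_cm * 1e7  # cm -> nm

# Hypothetical: 1e-17 g detectable mass of gold (density 19.3 g/cm3)
d = d_min_nm(1e-17, 19.3)
```

The cube-root dependence explains why Dmin is relatively insensitive to modest changes in the mass detection limit, and why dense, sensitive elements (Au, Ta, U) reach the smallest sizes.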
Srivastava, Pooja; Tiwari, Neerja; Yadav, Akhilesh K; Kumar, Vijendra; Shanker, Karuna; Verma, Ram K; Gupta, Madan M; Gupta, Anil K; Khanuja, Suman P S
2008-01-01
This paper describes a sensitive, selective, specific, robust, and validated densitometric high-performance thin-layer chromatographic (HPTLC) method for the simultaneous determination of 3 key withanolides, namely, withaferin-A, 12-deoxywithastramonolide, and withanolide-A, in Ashwagandha (Withania somnifera) plant samples. The separation was performed on aluminum-backed silica gel 60F254 HPTLC plates using dichloromethane-methanol-acetone-diethyl ether (15 + 1 + 1 + 1, v/v/v/v) as the mobile phase. The withanolides were quantified by densitometry in the reflection/absorption mode at 230 nm. Precise and accurate quantification could be performed in the linear working concentration range of 66-330 ng/band with good correlation (r2 = 0.997, 0.999, and 0.996, respectively). The method was validated for recovery, precision, accuracy, robustness, limit of detection, limit of quantitation, and specificity according to International Conference on Harmonization guidelines. Specificity of quantification was confirmed using retention factor (Rf) values, UV-Vis spectral correlation, and electrospray ionization mass spectra of marker compounds in sample tracks.
Louveau, B; Fernandez, C; Zahr, N; Sauvageon-Martre, H; Maslanka, P; Faure, P; Mourah, S; Goldwirt, L
2016-12-01
A precise and accurate high-performance liquid chromatography (HPLC) method for the quantification of rifampicin in human plasma was developed and validated, using ultraviolet detection after an automated solid-phase extraction. The method was validated with respect to selectivity, extraction recovery, linearity, intra- and inter-day precision, accuracy, lower limit of quantification and stability. Chromatographic separation was performed on a Chromolith RP 8 column using a mixture of 0.05 M acetate buffer pH 5.7-acetonitrile (35:65, v/v) as mobile phase. The compounds were detected at a wavelength of 335 nm, with a lower limit of quantification of 0.05 mg/L in human plasma. Retention times for rifampicin and for 6,7-dimethyl-2,3-di(2-pyridyl)quinoxaline, used as internal standard, were 3.77 and 4.81 min, respectively. This robust and accurate method was successfully applied routinely for therapeutic drug monitoring in patients treated with rifampicin. Copyright © 2016 John Wiley & Sons, Ltd.
Song, Yingshi; Yan, Huiyu; Xu, Jingbo; Ma, Hongxi
2017-09-01
A rapid and sensitive liquid chromatography tandem mass spectrometry detection using selected reaction monitoring in positive ionization mode was developed and validated for the quantification of nodakenin in rat plasma and brain. Pareruptorin A was used as internal standard. A single step liquid-liquid extraction was used for plasma and brain sample preparation. The method was validated with respect to selectivity, precision, accuracy, linearity, limit of quantification, recovery, matrix effect and stability. Lower limit of quantification of nodakenin was 2.0 ng/mL in plasma and brain tissue homogenates. Linear calibration curves were obtained over concentration ranges of 2.0-1000 ng/mL in plasma and brain tissue homogenates for nodakenin. Intra-day and inter-day precisions (relative standard deviation, RSD) were <15% in both biological media. This assay was successfully applied to plasma and brain pharmacokinetic studies of nodakenin in rats after intravenous administration. Copyright © 2017 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gracy Elias; Earl D. Mattson; Jessica E. Little
A quantitative analytical method to determine butyramide (BA) and acetamide (AA) concentrations at low ppb levels in geothermal waters has been developed. The analytes are concentrated in a preparation step by evaporation and analyzed using HPLC-UV. Chromatographic separation is achieved isocratically with an RP C-18 column using a 30 mM phosphate buffer solution with 5 mM heptane sulfonic acid and methanol (98:2 ratio) as the mobile phase. Absorbance is measured at 200 nm. The limits of detection (LOD) for BA and AA were 2.0 µg L⁻¹ and 2.5 µg L⁻¹, respectively. The limits of quantification (LOQ) for BA and AA were 5.7 µg L⁻¹ and 7.7 µg L⁻¹, respectively, at the detection wavelength of 200 nm. Attaining these levels of quantification better allows these amides to be used as thermally reactive tracers in low-temperature hydrogeothermal systems.
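The abstract reports LOD and LOQ values without stating their derivation; a common ICH-style convention estimates them from the blank/background noise σ and the calibration slope S as LOD = 3.3σ/S and LOQ = 10σ/S. The sketch below uses that assumed convention with hypothetical numbers, not the study's actual calibration data:

```python
def lod_loq(sigma_blank, slope):
    """ICH-style detection and quantification limits from a calibration curve.

    LOD = 3.3*sigma/S, LOQ = 10*sigma/S. This convention is an assumption;
    the study's exact derivation is not given in the abstract.
    """
    return 3.3 * sigma_blank / slope, 10.0 * sigma_blank / slope

# Hypothetical blank noise (signal units) and unit calibration slope, in ug/L
lod, loq = lod_loq(sigma_blank=0.6, slope=1.0)
```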
Detection and quantification of benzodiazepines in hair by ToF-SIMS: preliminary results
NASA Astrophysics Data System (ADS)
Audinot, J.-N.; Yegles, M.; Labarthe, A.; Ruch, D.; Wennig, R.; Migeon, H.-N.
2003-01-01
Successful results have been obtained in the detection and quantification of buprenorphine in urine and hemolysed blood by time-of-flight secondary ion mass spectrometry (ToF-SIMS). The present work focuses on four molecules of the benzodiazepine family: nordiazepam, aminoflunitrazepam, diazepam and oxazepam. These drugs remain difficult to analyse in routine clinical and forensic toxicology because of their thermal instability and low therapeutic range (0.5-5 ng/ml). Internal standards are prepared by means of deuterated molecules. The benzodiazepines and their deuterated forms (nordiazepam-D5, aminoflunitrazepam-D3, diazepam-D5 and oxazepam-D5) were added, in known concentration, to urine. These molecules were then extracted with several methods (pH, solvent, etc.) and, after adsorption on a noble metal, analysed by ToF-SIMS. For the different molecules, the paper compares the different preparation procedures and discusses the optimisation of the SIMS conditions, the limits of detection and the limits of quantification.
Hesse, Almut
2016-01-01
Amino acid analysis is considered to be the gold standard for quantitative peptide and protein analysis. Here, we would like to propose a simple HPLC/UV method based on a reversed-phase separation of the aromatic amino acids tyrosine (Tyr), phenylalanine (Phe), and optionally tryptophan (Trp) without any derivatization. The hydrolysis of the proteins and peptides was performed by an accelerated microwave technique, which needs only 30 minutes. Two internal standard compounds, homotyrosine (HTyr) and 4-fluorophenylalanine (FPhe) were used for calibration. The limit of detection (LOD) was estimated to be 0.05 µM (~10 µg/L) for tyrosine and phenylalanine at 215 nm. The LOD for a protein determination was calculated to be below 16 mg/L (~300 ng BSA absolute). Aromatic amino acid analysis (AAAA) offers excellent accuracy and a precision of about 5% relative standard deviation, including the hydrolysis step. The method was validated with certified reference materials (CRM) of amino acids and of a pure protein (bovine serum albumin, BSA). AAAA can be used for the quantification of aromatic amino acids, isolated peptides or proteins, complex peptide or protein samples, such as serum or milk powder, and peptides or proteins immobilized on solid supports. PMID:27559481
Statistical models for the analysis and design of digital polymerase chain (dPCR) experiments
Dorazio, Robert; Hunter, Margaret
2015-01-01
Statistical methods for the analysis and design of experiments using digital PCR (dPCR) have received only limited attention and have been misused in many instances. To address this issue and to provide a more general approach to the analysis of dPCR data, we describe a class of statistical models for the analysis and design of experiments that require quantification of nucleic acids. These models are mathematically equivalent to generalized linear models of binomial responses that include a complementary, log–log link function and an offset that is dependent on the dPCR partition volume. These models are both versatile and easy to fit using conventional statistical software. Covariates can be used to specify different sources of variation in nucleic acid concentration, and a model’s parameters can be used to quantify the effects of these covariates. For purposes of illustration, we analyzed dPCR data from different types of experiments, including serial dilution, evaluation of copy number variation, and quantification of gene expression. We also showed how these models can be used to help design dPCR experiments, as in selection of sample sizes needed to achieve desired levels of precision in estimates of nucleic acid concentration or to detect differences in concentration among treatments with prescribed levels of statistical power.
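The core single-sample relation behind these binomial models with a complementary log-log link is the Poisson estimate of concentration from the fraction of positive partitions, λ = -ln(1 - k/n)/v. A minimal sketch follows; the partition counts and partition volume are hypothetical:

```python
import math

def dpcr_concentration(n_total, n_positive, partition_volume):
    """Mean copies per unit volume from the fraction of positive dPCR partitions.

    Uses the Poisson relation lambda = -ln(1 - k/n) / v, i.e. the inverse of
    the complementary log-log link with a log(v) offset described above.
    """
    p_pos = n_positive / n_total
    return -math.log(1.0 - p_pos) / partition_volume

# Hypothetical run: 20000 partitions of 0.85 nL (= 0.85e-3 uL), 5000 positive
conc = dpcr_concentration(20000, 5000, 0.85e-3)  # copies per microliter
```

Framing this as a binomial GLM, as the paper does, additionally yields standard errors and lets covariates (dilution, treatment, replicate) enter the linear predictor.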
Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow
NASA Astrophysics Data System (ADS)
Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca
2017-11-01
The performance characterization of complex engineering systems often relies on accurate, but computationally intensive numerical simulations. It is also well recognized that in order to obtain a reliable numerical prediction the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite the great improvement in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle-laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
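The control-variate idea behind these multi-fidelity estimators can be sketched as follows: a small high-fidelity (HF) sample is corrected using a cheap, correlated low-fidelity (LF) model evaluated on many more samples. The models and sample sizes below are toy stand-ins, not the PSAAP II solver:

```python
import numpy as np

rng = np.random.default_rng(0)

def hf(x):  # expensive high-fidelity model (hypothetical stand-in)
    return np.sin(x) + 0.1 * x**2

def lf(x):  # cheap low-fidelity model, correlated with hf
    return np.sin(x)

# Few HF evaluations, many LF evaluations
x_hf = rng.uniform(0.0, 1.0, 100)
x_lf = rng.uniform(0.0, 1.0, 10000)

y_hf, y_lf_paired = hf(x_hf), lf(x_hf)
# Control-variate coefficient from the paired HF/LF sample
alpha = np.cov(y_hf, y_lf_paired)[0, 1] / np.var(y_lf_paired, ddof=1)
# Multi-fidelity estimator: HF mean corrected by the LF mean discrepancy
mf_estimate = y_hf.mean() - alpha * (y_lf_paired.mean() - lf(x_lf).mean())
```

The correction removes most of the sampling error of the small HF mean whenever the LF model is strongly correlated with the HF one, which is exactly the leverage the abstract describes.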
Nakazawa, Hiroyuki; Iwasaki, Yusuke; Ito, Rie
2014-01-01
Our modern society has created a large number of chemicals that are used for the production of everyday commodities including toys, food packaging, cosmetic products, and building materials. We enjoy a comfortable and convenient lifestyle with access to these items. In addition, in specialized areas, such as experimental science and various medical fields, laboratory equipment and devices that are manufactured using a wide range of chemical substances are also extensively employed. The association between human exposure to trace hazardous chemicals and an increased incidence of endocrine disease has been recognized. However, the evaluation of human exposure to such endocrine disrupting chemicals is therefore imperative, and the determination of exposure levels requires the analysis of human biological materials, such as blood and urine. To obtain as much information as possible from limited sample sizes, highly sensitive and reliable analytical methods are also required for exposure assessments. The present review focuses on effective analytical methods for the quantification of bisphenol A (BPA), alkylphenols (APs), phthalate esters (PEs), and perfluoronated chemicals (PFCs), which are chemicals used in the production of everyday commodities. Using data obtained from liquid chromatography/mass spectrometry (LC/MS) and LC/MS/MS analyses, assessments of the risks to humans were also presented based on the estimated levels of exposure to PFCs.
[Determinants of equity in financing medicines in Argentina: an empirical study].
Dondo, Mariana; Monsalvo, Mauricio; Garibaldi, Lucas A
2016-01-01
Medicines are an important part of household health spending. A progressive system for financing drugs is thus essential for an equitable health system. Some authors have proposed that the determinants of equity in drug financing are socioeconomic, demographic, and associated with public interventions, but little progress has been made in the empirical evaluation and quantification of their relative importance. The current study estimated quantile regressions at the provincial level in Argentina and found that old age (> 65 years), unemployment, the existence of a public pharmaceutical laboratory, treatment transfers, and a health system orientated to primary care were important predictors of progressive payment schemes. Low income, weak institutions, and insufficient infrastructure and services were associated with the most regressive social responses to health needs, thereby aggravating living conditions and limiting development opportunities.
Traits Without Borders: Integrating Functional Diversity Across Scales.
Carmona, Carlos P; de Bello, Francesco; Mason, Norman W H; Lepš, Jan
2016-05-01
Owing to the conceptual complexity of functional diversity (FD), a multitude of different methods are available for measuring it, with most being operational at only a small range of spatial scales. This causes uncertainty in ecological interpretations and limits the potential to generalize findings across studies or compare patterns across scales. We solve this problem by providing a unified framework expanding on and integrating existing approaches. The framework, based on trait probability density (TPD), is the first to fully implement the Hutchinsonian concept of the niche as a probabilistic hypervolume in estimating FD. This novel approach could revolutionize FD-based research by allowing quantification of the various FD components from organismal to macroecological scales, and allowing seamless transitions between scales. Copyright © 2016 Elsevier Ltd. All rights reserved.
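As an illustrative sketch of the TPD idea (not the authors' implementation), a kernel density over a single hypothetical trait axis can serve as the probability density, with functional richness read off as the size of the highest-density region holding 99% of the probability mass:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical community: trait values for individuals of three species;
# the TPD is the kernel density estimate over the trait axis.
traits = np.concatenate([rng.normal(m, 0.3, 200) for m in (1.0, 2.0, 2.4)])
tpd = gaussian_kde(traits)

# Evaluate on a grid and keep the highest-density cells holding 99% of mass
x = np.linspace(traits.min() - 1, traits.max() + 1, 2000)
dx = x[1] - x[0]
dens = tpd(x)
order = np.argsort(dens)[::-1]
mass = np.cumsum(dens[order]) * dx
inside = order[: np.searchsorted(mass, 0.99) + 1]

# Functional richness: size of the 99% highest-density trait region
f_richness = inside.size * dx
```

In more than one trait dimension the same recipe applies on a multivariate grid, which is what makes the probabilistic hypervolume view scale across organizational levels.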
Jeudy, Jeremy; Salvador, Arnaud; Simon, Romain; Jaffuel, Aurore; Fonbonne, Catherine; Léonard, Jean-François; Gautier, Jean-Charles; Pasquier, Olivier; Lemoine, Jerome
2014-02-01
Targeted mass spectrometry in the so-called multiple reaction monitoring mode (MRM) is certainly a promising way for the precise, accurate, and multiplexed measurement of proteins and their genetic or posttranslationally modified isoforms. MRM carried out on a low-resolution triple quadrupole instrument faces a lack of specificity when addressing the quantification of weakly concentrated proteins. In this case, extensive sample fractionation or immunoenrichment alleviates signal contamination by interferences, but in turn decreases assay performance and throughput. Recently, MRM(3) was introduced as an alternative to MRM to improve the limit of quantification of weakly concentrated protein biomarkers. In the present work, we compare MRM and MRM(3) modes for the detection of biomarkers in plasma and urine. Calibration curves drawn with MRM and MRM(3) showed a similar range of linearity (R(2) > 0.99 for both methods) with protein concentrations above 1 μg/mL in plasma and a few nanogram per milliliter in urine. In contrast, optimized MRM(3) methods improve the limits of quantification by a factor of 2 to 4 depending on the targeted peptide. This gain arises from the additional MS(3) fragmentation step, which significantly removes or decreases interfering signals within the targeted transition channels.
Nahar, Limon Khatun; Cordero, Rosa Elena; Nutt, David; Lingford-Hughes, Anne; Turton, Samuel; Durant, Claire; Wilson, Sue; Paterson, Sue
2016-03-01
A highly sensitive and fully validated method was developed for the quantification of baclofen in human plasma. After adjusting the pH of the plasma samples using a phosphate buffer solution (pH 4), baclofen was purified using mixed mode (C8/cation exchange) solid-phase extraction (SPE) cartridges. Endogenous water-soluble compounds and lipids were removed from the cartridges before the samples were eluted and concentrated. The samples were analyzed using triple-quadrupole liquid chromatography-tandem mass spectrometry (LC-MS-MS) with triggered dynamic multiple reaction monitoring mode for simultaneous quantification and confirmation. The assay was linear from 25 to 1,000 ng/mL (r(2) > 0.999; n = 6). Intraday (n = 6) and interday (n = 15) imprecisions (% relative standard deviation) were <5%, and the average recovery was 30%. The limit of detection of the method was 5 ng/mL, and the limit of quantification was 25 ng/mL. Plasma samples from healthy male volunteers (n = 9, median age: 22) given two single oral doses of baclofen (10 and 60 mg) on nonconsecutive days were analyzed to demonstrate method applicability. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Arashida, Naoko; Nishimoto, Rumi; Harada, Masashi; Shimbo, Kazutaka; Yamada, Naoyuki
2017-02-15
Amino acids and their related metabolites play important roles in various physiological processes and have consequently become biomarkers for diseases. However, accurate quantification methods have only been established for major compounds, such as amino acids and a limited number of target metabolites. We previously reported a highly sensitive high-throughput method for the simultaneous quantification of amines using 3-aminopyridyl-N-succinimidyl carbamate as a derivatization reagent combined with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Herein, we report the successful development of a practical and accurate LC-MS/MS method to analyze low concentrations of 40 physiological amines in 19 min. Thirty-five of these amines showed good linearity, limits of quantification, accuracy, precision, and recovery characteristics in plasma, with scheduled selected reaction monitoring acquisitions. Plasma samples from 10 healthy volunteers were evaluated using our newly developed method. The results revealed that 27 amines were detected in one of the samples, and that 24 of these compounds could be quantified. Notably, this new method successfully quantified metabolites with high accuracy across three orders of magnitude, with lowest and highest averaged concentrations of 31.7 nM (for spermine) and 18.3 μM (for α-aminobutyric acid), respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
Quintelas, Cristina; Mesquita, Daniela P; Lopes, João A; Ferreira, Eugénio C; Sousa, Clara
2015-08-15
Accurate detection and quantification of microbiological contamination remains an issue, mainly due to the lack of rapid and precise analytical techniques. Standard methods are expensive and time-consuming, and are associated with high economic losses and public health threats. In the context of the pharmaceutical industry, the development of fast analytical techniques able to overcome these limitations is crucial, and spectroscopic techniques might constitute a reliable alternative. In this work we proved the ability of Fourier transform near infrared spectroscopy (FT-NIRS) to detect and quantify bacteria (Bacillus subtilis, Escherichia coli, Pseudomonas fluorescens, Salmonella enterica, Staphylococcus epidermidis) from 10 to 10(8) CFUs/mL in sterile saline solutions (NaCl 0.9%). Partial least squares discriminant analysis (PLSDA) models showed that FT-NIRS was able to discriminate between sterile and contaminated solutions for all bacteria as well as to identify the contaminant bacteria. Partial least squares (PLS) models allowed bacterial quantification with limits of detection ranging from 5.1 to 9 CFU/mL for E. coli and B. subtilis, respectively. This methodology was successfully validated in three pharmaceutical preparations (contact lens solution, cough syrup and topical anti-inflammatory solution), proving that this technique possesses high potential to be routinely used for the detection and quantification of bacterial contamination. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Gallardo Estrella, L.; van Ginneken, B.; van Rikxoort, E. M.
2013-03-01
Chronic Obstructive Pulmonary Disease (COPD) is a lung disease characterized by progressive air flow limitation caused by emphysema and chronic bronchitis. Emphysema is quantified from chest computed tomography (CT) scans as the percentage of attenuation values below a fixed threshold. The emphysema quantification varies substantially between scans reconstructed with different kernels, limiting the ability to compare emphysema quantifications obtained from scans with different reconstruction parameters. In this paper we propose a method to normalize scans reconstructed with different kernels to have the same characteristics as scans reconstructed with a reference kernel and investigate whether this normalization reduces the variability in emphysema quantification. The proposed normalization splits a CT scan into different frequency bands based on hierarchical unsharp masking. Normalization is performed by changing the energy in each frequency band to the average energy in that band in the reference kernel. A database of 15 subjects with COPD was constructed for this study. All subjects were scanned at total lung capacity and the scans were reconstructed with four different reconstruction kernels. The normalization was applied to all scans. Emphysema quantification was performed before and after normalization. It is shown that the emphysema score varies substantially before normalization but the variation diminishes after normalization.
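A minimal sketch of this kind of kernel normalization, assuming a difference-of-Gaussians band split and band energy measured as the standard deviation of coefficients (the specific sigmas and the demo images are invented for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_bands(img, sigmas=(1, 2, 4, 8)):
    """Hierarchical unsharp masking: successive Gaussian blurs define
    frequency bands as differences between adjacent smoothing levels."""
    levels = [img] + [gaussian_filter(img, s) for s in sigmas]
    bands = [levels[i] - levels[i + 1] for i in range(len(sigmas))]
    return bands, levels[-1]  # band-pass images + low-pass residual

def normalize_kernel(img, ref_energies, sigmas=(1, 2, 4, 8)):
    """Rescale the energy (std of coefficients) of each band to the
    average energy measured on scans from the reference kernel."""
    bands, low = split_bands(img, sigmas)
    out = low.copy()
    for band, e_ref in zip(bands, ref_energies):
        out += band * (e_ref / (band.std() + 1e-12))
    return out

# Hypothetical demo: make a "sharp kernel" scan match a smoother reference
rng = np.random.default_rng(0)
ref = gaussian_filter(rng.normal(-800, 120, (128, 128)), 1.5)
sharp = ref + rng.normal(0, 30, ref.shape)          # extra high-freq content
ref_E = [b.std() for b in split_bands(ref)[0]]
fixed = normalize_kernel(sharp, ref_E)
```

Because the band decomposition sums back to the original image, rescaling each band and re-summing changes only the spectral energy distribution, not the gross anatomy.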
Li, Xiang; Wang, Xiuxiu; Yang, Jielin; Liu, Yueming; He, Yuping; Pan, Liangwen
2014-05-16
To date, over 150 genetically modified (GM) crops are widely cultivated. To comply with regulations developed for genetically modified organisms (GMOs), including labeling policies, many detection methods for GMO identification and quantification have been developed. To detect the entrance and exit of unauthorized GM crop events in China, we developed a novel quadruplex real-time PCR method for simultaneous detection and quantification of GM cotton events GHB119 and T304-40 in cotton-derived products (based on the 5'-flanking sequence) and the insect-resistance gene Cry2Ae. The limit of detection was 10 copies for GHB119 and Cry2Ae and 25 copies for T304-40. The limit of quantification was 25 copies for GHB119 and Cry2Ae and 50 copies for T304-40. Moreover, low bias and acceptable standard deviation and relative standard deviation values were obtained in quantification analysis of six blind samples containing different GHB119 and T304-40 ingredients. The developed quadruplex quantitative method could be used for quantitative detection of the two GM cotton events (GHB119 and T304-40) and the Cry2Ae gene ingredient in cotton-derived products.
Jiang, Lingxi; Yang, Litao; Rao, Jun; Guo, Jinchao; Wang, Shu; Liu, Jia; Lee, Seonghun; Zhang, Dabing
2010-02-01
To implement genetically modified organism (GMO) labeling regulations, an event-specific analysis method based on the junction sequence between the exogenous integration and host genomic DNA has become the preferential approach for GMO identification and quantification. In this study, specific primers and TaqMan probes based on the revealed 5'-end junction sequence of GM cotton MON15985 were designed, and qualitative and quantitative polymerase chain reaction (PCR) assays were established employing the designed primers and probes. In the qualitative PCR assay, the limit of detection (LOD) was 0.5 g kg(-1) in 100 ng total cotton genomic DNA, corresponding to about 17 copies of haploid cotton genomic DNA, and the LOD and limit of quantification (LOQ) for the quantitative PCR assay were 10 and 17 copies of haploid cotton genomic DNA, respectively. Furthermore, the developed quantitative PCR assays were validated in-house by five different researchers. Also, five practical samples with known GM contents were quantified using the developed PCR assay in the in-house validation, and the bias between the true and quantified values ranged from 2.06% to 12.59%. This study shows that the developed qualitative and quantitative PCR methods are applicable for the identification and quantification of GM cotton MON15985 and its derivatives.
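Quantification against a qPCR standard curve of the kind used here can be sketched as follows; the dilution series, Cq values, and sample Cqs are hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical dilution series of plasmid standards: Cq vs log10(copies)
log_copies = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
cq = np.array([34.1, 30.8, 27.4, 24.1, 20.7])

slope, intercept = np.polyfit(log_copies, cq, 1)
efficiency = 10 ** (-1 / slope) - 1      # 1.0 would mean 100% PCR efficiency

def copies_from_cq(c):
    """Invert the standard curve Cq = intercept + slope * log10(copies)."""
    return 10 ** ((c - intercept) / slope)

# GM content (%) as the ratio of event-specific to endogenous-reference copies
gm_percent = 100 * copies_from_cq(28.5) / copies_from_cq(21.9)
```

The copy-number LOD/LOQ figures reported in the abstract correspond to the smallest inputs for which this inversion remains reliably detectable and quantifiable.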
Quantification of liver fat in the presence of iron overload.
Horng, Debra E; Hernando, Diego; Reeder, Scott B
2017-02-01
To evaluate the accuracy of R2* models (R2* = 1/T2*) for chemical shift-encoded magnetic resonance imaging (CSE-MRI)-based proton density fat-fraction (PDFF) quantification in patients with fatty liver and iron overload, using MR spectroscopy (MRS) as the reference standard. Two Monte Carlo simulations were implemented to compare the root-mean-squared-error (RMSE) performance of single-R2* and dual-R2* correction in a theoretical liver environment with high iron. Fatty liver was defined as hepatic PDFF >5.6% based on MRS; only subjects with fatty liver were considered for analyses involving fat. From a group of 40 patients with known/suspected iron overload, nine patients were identified at 1.5T, and 13 at 3.0T with fatty liver. MRS linewidth measurements were used to estimate R2* values for water and fat peaks. PDFF was measured from CSE-MRI data using single-R2* and dual-R2* correction with magnitude and complex fitting. Spectroscopy-based R2* analysis demonstrated that the R2* of water and fat remain close in value, both increasing as iron overload increases: linear regression between R2*W and R2*F resulted in slope = 0.95 [0.79-1.12] (95% limits of agreement) at 1.5T and slope = 0.76 [0.49-1.03] at 3.0T. MRI-PDFF using dual-R2* correction had severe artifacts. MRI-PDFF using single-R2* correction had good agreement with MRS-PDFF: Bland-Altman analysis resulted in -0.7% (bias) ± 2.9% (95% limits of agreement) for magnitude-fit and -1.3% ± 4.3% for complex-fit at 1.5T, and -1.5% ± 8.4% for magnitude-fit and -2.2% ± 9.6% for complex-fit at 3.0T. Single-R2* modeling enables accurate PDFF quantification, even in patients with iron overload. J. Magn. Reson. Imaging 2017;45:428-439. © 2016 International Society for Magnetic Resonance in Medicine.
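The Bland-Altman summary reported above (bias ± 1.96 SD of the paired differences) is straightforward to compute; the PDFF pairs below are invented for illustration:

```python
import numpy as np

def bland_altman(method, reference):
    """Bias and 95% limits of agreement between two measurement methods."""
    d = np.asarray(method) - np.asarray(reference)
    bias = d.mean()
    loa = 1.96 * d.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical PDFF pairs (%): MRI estimates vs MRS reference
mri = np.array([4.1, 7.9, 12.2, 20.5, 33.0, 9.4])
mrs = np.array([5.0, 8.5, 12.0, 21.8, 34.5, 10.1])
bias, lo, hi = bland_altman(mri, mrs)
```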
Zheng, Weijia; Park, Jin-A; Abd El-Aty, A M; Kim, Seong-Kwan; Cho, Sang-Hyun; Choi, Jeong-Min; Yi, Hee; Cho, Soo-Min; Ramadan, Amer; Jeong, Ji Hoon; Shim, Jae-Han; Shin, Ho-Chul
2018-01-01
Over the past few decades, honey products have been polluted by different contaminants, such as pesticides, which are widely applied in agriculture. In this work, a modified EN (European standard) QuEChERS (quick, easy, cheap, effective, rugged, and safe) extraction method was developed for the simultaneous quantification of pesticide residues, including cymiazole, fipronil, coumaphos, fluvalinate, amitraz, and its metabolite 2,4-dimethylaniline (2,4-DMA), in four types of honey (acacia, wild, chestnut, and manuka) and royal jelly. Samples were buffered with 0.2 M dibasic sodium phosphate (pH 9), and subsequently, acetonitrile was employed as the extraction solvent. A combination of primary secondary amine (PSA) and C18 sorbents was used for purification prior to liquid chromatography-electrospray ionization tandem mass spectrometry (LC-ESI+/MS-MS) analysis. The estimated linearity measured at six concentration levels presented good correlation coefficients (R2) ≥ 0.99. The recovery, calculated from three different spiking levels, was 62.06-108.79% in honey and 67.58-106.34% in royal jelly, with an RSD < 12% for all the tested compounds. The matrix effect was also evaluated, and most of the analytes presented signal enhancement. The limits of quantification (LOQ) ranged between 0.001 and 0.005 mg/kg in the various samples. These are considerably lower than the maximum residue limits (MRL) set by various regulatory authorities. A total of 43 market (domestic and imported) samples were assayed for method application. Among the tested samples, three samples tested positive (i.e. detected and quantified) only for cymiazole residues. The residues in the rest of the samples were detected but not quantified. We concluded that the protocol developed in this work is simple and versatile for the routine quantification of cymiazole, 2,4-DMA, fipronil, coumaphos, amitraz, and fluvalinate in various types of honey and royal jelly. Copyright © 2017 Elsevier B.V. All rights reserved.
Quantification of Campylobacter jejuni contamination on chicken carcasses in France.
Duqué, Benjamin; Daviaud, Samuel; Guillou, Sandrine; Haddad, Nabila; Membré, Jeanne-Marie
2018-04-01
Highly prevalent in poultry, Campylobacter is a foodborne pathogen which remains the primary cause of enteritis in humans. Several studies have determined the prevalence and contamination level of this pathogen throughout the food chain. However, this is generally done in a deterministic way, without considering the heterogeneity of the contamination level. The purpose of this study was to quantify, using probabilistic tools, the contamination level of Campylobacter spp. on chicken carcasses after the air-chilling step in several slaughterhouses in France. From a dataset (530 data points) containing censored data (concentration < 10 CFU/g), several factors were considered, including the month of sampling, the farming method (standard vs certified) and the sampling area (neck vs leg). All probabilistic analyses were performed in R using the fitdistrplus, mc2d and NADA packages. The uncertainty (i.e. error) generated by the presence of censored data was small (ca. 1 log10) in comparison to the variability (i.e. heterogeneity) of the contamination level (3 log10 or more), strengthening the probabilistic analysis and facilitating result interpretation. The sampling period and sampling area (neck/leg) had a significant effect on the Campylobacter contamination level. More precisely, two "seasons" were distinguished: one from January to May, another from June to December. During the June-to-December season, the mean Campylobacter concentration was estimated at 2.6 [2.4; 2.8] log10(CFU/g) and 1.8 [1.5; 2.0] log10(CFU/g) for neck and leg, respectively. The probability of having >1000 CFU/g (upper limit of the European microbial criterion) was estimated at 35.3% and 12.6% for neck and leg, respectively. In contrast, during the January-to-May season, the mean contamination level was estimated at 1.0 [0.6; 1.3] log10(CFU/g) and 0.6 [0.3; 0.9] log10(CFU/g) for neck and leg, respectively. The probability of having >1000 CFU/g was estimated at 13.5% and 2.0% for neck and leg, respectively.
Accurate quantification of the contamination level enables industry to better adapt processing and hygiene practices. These results will also help in refining exposure assessment models. Copyright © 2017 Elsevier Ltd. All rights reserved.
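A Python sketch of the same censored-data idea (the study itself used R with fitdistrplus, mc2d and NADA): fit a normal distribution on the log10 scale by maximum likelihood, letting samples below the LOD contribute through the CDF, then read off the probability of exceeding the 1000 CFU/g criterion. All numbers are simulated, not the study's data:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)
# Hypothetical samples, log10(CFU/g) ~ Normal(2.6, 1.0); LOD = 10 CFU/g
true_mu, true_sd, lod = 2.6, 1.0, 1.0   # lod on the log10 scale
x = rng.normal(true_mu, true_sd, 500)
observed = x[x >= lod]
n_cens = np.sum(x < lod)                # only "<LOD" recorded for these

def neg_loglik(theta):
    mu, sd = theta[0], abs(theta[1]) + 1e-9
    # Detected samples contribute the density; censored ones the CDF at LOD
    ll = stats.norm.logpdf(observed, mu, sd).sum()
    ll += n_cens * stats.norm.logcdf(lod, mu, sd)
    return -ll

res = optimize.minimize(neg_loglik, x0=[2.0, 0.5], method="Nelder-Mead")
mu_hat, sd_hat = res.x[0], abs(res.x[1])

# Probability of exceeding 1000 CFU/g (3 on the log10 scale)
p_exceed = 1 - stats.norm.cdf(3.0, mu_hat, sd_hat)
```

Treating non-detects this way, rather than substituting LOD/2 or zero, is what keeps the censoring uncertainty small relative to the biological variability.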
NASA Astrophysics Data System (ADS)
Goulden, T.; Hopkinson, C.
2013-12-01
The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessment of management decisions based from LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories, 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors of 5 cm, at a nadir scan orientation, to 8 cm at scan edges; for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. 
Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.
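A deliberately simplified first-order error-propagation sketch in the spirit of the sensor sub-system modeling described above; the error budget terms and all sigma values are assumptions for illustration, not the paper's model:

```python
import numpy as np

def vertical_error(altitude, scan_angle_deg, sigma_range=0.02,
                   sigma_angle_deg=0.005, sigma_gps=0.04):
    """First-order propagation of ranging, beam-direction, and GPS
    uncertainty into vertical error for a flat target (simplified)."""
    th = np.radians(scan_angle_deg)
    s_th = np.radians(sigma_angle_deg)
    slant = altitude / np.cos(th)               # slant range to the target
    dz_range = np.cos(th) * sigma_range         # range error projected to z
    dz_angle = slant * np.sin(th) * s_th        # pointing-error lever arm
    return np.sqrt(dz_range**2 + dz_angle**2 + sigma_gps**2)

# Hypothetical flight: 1200 m altitude, nadir vs 15° half scan angle
nadir = vertical_error(1200, 0.0)
edge = vertical_error(1200, 15.0)
```

Even this toy model reproduces the qualitative finding above: the pointing-error lever arm grows with scan angle and altitude, so errors rise from nadir toward the swath edge.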
She, Hoi Lam; Roest, Arno A W; Calkoen, Emmeline E; van den Boogaard, Pieter J; van der Geest, Rob J; Hazekamp, Mark G; de Roos, Albert; Westenberg, Jos J M
2017-01-01
To evaluate the inflow pattern and flow quantification in patients with a functional univentricular heart after the Fontan operation using 4D flow magnetic resonance imaging (MRI) with streamline visualization, compared with the conventional 2D flow approach. Seven patients with a functional univentricular heart after the Fontan operation and twenty-three healthy controls underwent 4D flow MRI. In two orthogonal two-chamber planes, streamline visualization was applied, and inflow angles with peak inflow velocity (PIV) were measured. Transatrioventricular flow quantification was assessed using conventional 2D multiplanar reformation (MPR) and 4D MPR tracking the annulus and perpendicular to the streamline inflow at PIV, and both were validated against net forward aortic flow. Inflow angles at PIV in the patient group demonstrated a wide variation of angles and directions compared with the control group (P < .01). The use of 4D flow MRI with streamline visualization in quantification of the transatrioventricular flow had smaller limits of agreement (2.2 ± 4.1 mL; 95% limits of agreement -5.9 to 10.3 mL) than the static-plane assessment from 2D flow MRI (-2.2 ± 18.5 mL; 95% limits of agreement -38.5 to 34.1 mL). A stronger correlation was present in 4D flow between the aortic and transatrioventricular flow (R2 in 4D flow: 0.893; in 2D flow: 0.786). Streamline visualization in 4D flow MRI confirmed variable atrioventricular inflow directions in patients with a functional univentricular heart after a previous Fontan procedure. 4D flow aided the generation of measurement planes according to the blood flow dynamics and proved more accurate than fixed-plane 2D flow measurements when quantifying flow. © 2016 Wiley Periodicals, Inc.
A strategy to facilitate cleanup at the Mare Island Naval Station
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, J.; Albert, D.
1995-12-31
A strategy based on an early realistic estimation of ecological risk was devised to facilitate cleanup of installation restoration units at the Mare Island Naval Station. The strategy uses the results of 100 years of soil-plant studies, which centered on maximizing the bioavailability of nutrients for crop growth. The screening strategy classifies sites according to whether they present (1) little or no ecological risk and require no further action, (2) an immediate and significant risk, or (3) an ecological risk that requires further quantification. The strategy assumes that the main focus of screening-level risk assessment is quantification of the potential for abiotic-to-biotic transfer (bioavailability) of contaminants, especially at lower trophic levels where exposure is likely to be at a maximum. Sediment screening criteria developed by the California Environmental Protection Agency are used as one regulatory endpoint for evaluating total chemical concentrations. A realistic estimation of risk is then determined by estimating the bioavailability of contaminants.
Dubey, J K; Patyal, S K; Sharma, Ajay
2018-03-19
In the present scenario of increasing awareness of and concern about pesticides, it is very important to ensure the quality of the data generated in pesticide residue analysis. To impart confidence in the products, terms like quality assurance and quality control are used as an integral part of quality management. In order to ensure better quality of results in pesticide residue analysis, validation of the analytical methods to be used is extremely important. Keeping in view the importance of method validation, the QuEChERS (quick, easy, cheap, effective, rugged, and safe) multiresidue method for extraction of 13 organochlorines and seven synthetic pyrethroids from fruits and vegetables, followed by GC-ECD quantification, was validated so that it could be used for the analysis of samples received in the laboratory. The method has been validated as per the Guidelines issued by SANCO (French words Sante for Health and Consommateurs for Consumers) in accordance with their document SANCO/XXXX/2013. The parameters analyzed, viz. linearity, specificity, repeatability, reproducibility, and ruggedness, were found to have acceptable values, with a percent RSD of less than 10%. The limit of quantification (LOQ) was established to be 0.01 mg kg(-1) for the organochlorines and 0.05 mg kg(-1) for the synthetic pyrethroids. The uncertainty of the measurement (MU) for all these compounds ranged between 1 and 10%. Matrix-matched calibration was used to compensate for the matrix effect on the quantification of the compounds. The overall recovery of the method ranged between 80 and 120%. These results demonstrate the applicability and acceptability of this method in the routine estimation of residues of these 20 pesticides in fruits and vegetables by the laboratory.
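Matrix-matched calibration of the kind described can be sketched as a linear fit to standards spiked into blank matrix extract, so that quantification and recovery calculations absorb the matrix effect; all concentrations and detector responses below are hypothetical:

```python
import numpy as np

# Hypothetical matrix-matched calibration: standards spiked into blank
# fruit extract so the fitted line absorbs the matrix effect.
conc = np.array([0.01, 0.05, 0.1, 0.25, 0.5])       # mg/kg
area = np.array([980, 5150, 10050, 25400, 50300])   # detector response

slope, intercept = np.polyfit(conc, area, 1)
r2 = np.corrcoef(conc, area)[0, 1] ** 2

def quantify(sample_area):
    """Invert the calibration line to a concentration in mg/kg."""
    return (sample_area - intercept) / slope

# Recovery (%) from a sample spiked at 0.1 mg/kg reading 9800 area units
recovery = 100 * quantify(9800) / 0.1
```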
Segmental analysis of amphetamines in hair using a sensitive UHPLC-MS/MS method.
Jakobsson, Gerd; Kronstrand, Robert
2014-06-01
A sensitive and robust ultra high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine and 3,4-methylenedioxy methamphetamine in hair samples. Segmented hair (10 mg) was incubated in 2M sodium hydroxide (80°C, 10 min) before liquid-liquid extraction with isooctane followed by centrifugation and evaporation of the organic phase to dryness. The residue was reconstituted in methanol:formate buffer pH 3 (20:80). The total run time was 4 min and after optimization of UHPLC-MS/MS-parameters validation included selectivity, matrix effects, recovery, process efficiency, calibration model and range, lower limit of quantification, precision and bias. The calibration curve ranged from 0.02 to 12.5 ng/mg, and the recovery was between 62 and 83%. During validation the bias was less than ±7% and the imprecision was less than 5% for all analytes. In routine analysis, fortified control samples demonstrated an imprecision <13% and control samples made from authentic hair demonstrated an imprecision <26%. The method was applied to samples from a controlled study of amphetamine intake as well as forensic hair samples previously analyzed with an ultra high performance liquid chromatography time of flight mass spectrometry (UHPLC-TOF-MS) screening method. The proposed method was suitable for quantification of these drugs in forensic cases including violent crimes, autopsy cases, drug testing and re-granting of driving licences. This study also demonstrated that if hair samples are divided into several short segments, the time point for intake of a small dose of amphetamine can be estimated, which might be useful when drug facilitated crimes are investigated. Copyright © 2014 John Wiley & Sons, Ltd.
Digital Quantification of Proteins and mRNA in Single Mammalian Cells.
Albayrak, Cem; Jordi, Christian A; Zechner, Christoph; Lin, Jing; Bichsel, Colette A; Khammash, Mustafa; Tay, Savaş
2016-03-17
Absolute quantification of macromolecules in single cells is critical for understanding and modeling biological systems that feature cellular heterogeneity. Here we show extremely sensitive and absolute quantification of both proteins and mRNA in single mammalian cells by a very practical workflow that combines proximity ligation assay (PLA) and digital PCR. This digital PLA method has femtomolar sensitivity, which enables the quantification of very small protein concentration changes over its entire 3-log dynamic range, a quality necessary for accounting for single-cell heterogeneity. We counted both endogenous (CD147) and exogenously expressed (GFP-p65) proteins from hundreds of single cells and determined the correlation between CD147 mRNA and the protein it encodes. Using our data, a stochastic two-state model of the central dogma was constructed and verified using joint mRNA/protein distributions, allowing us to estimate transcription burst sizes and extrinsic noise strength and calculate the transcription and translation rate constants in single mammalian cells. Copyright © 2016 Elsevier Inc. All rights reserved.
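The two-state (telegraph) model of the central dogma mentioned above can be simulated directly with the Gillespie algorithm; this sketch uses invented rate constants and tracks only mRNA, not protein:

```python
import numpy as np

rng = np.random.default_rng(3)

def telegraph_ssa(k_on=0.1, k_off=1.0, k_tx=20.0, g=1.0, t_end=2000.0):
    """Gillespie simulation of the two-state (telegraph) gene model.
    Mean burst size is k_tx/k_off; stationary mean mRNA is
    (k_tx/g) * k_on/(k_on + k_off)."""
    t, gene, m = 0.0, 0, 0
    total, t_prev = 0.0, 0.0
    while t < t_end:
        rates = np.array([k_on * (1 - gene), k_off * gene,
                          k_tx * gene, g * m])
        r_tot = rates.sum()
        t += rng.exponential(1 / r_tot)
        total += m * (t - t_prev)       # time-weighted mRNA average
        t_prev = t
        i = rng.choice(4, p=rates / r_tot)
        if i == 0:   gene = 1           # promoter switches ON
        elif i == 1: gene = 0           # promoter switches OFF
        elif i == 2: m += 1             # transcription (only while ON)
        else:        m -= 1             # degradation (rate g*m, so m >= 0)
    return total / t

mean_m = telegraph_ssa()
# Theory for these rates: 20/1 * 0.1/1.1 ≈ 1.82 mRNA on average,
# produced in bursts of ~k_tx/k_off = 20 molecules
```

Fitting such a model to joint mRNA/protein distributions is what lets burst size and extrinsic noise be estimated from the absolute single-cell counts.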
Breast density quantification with cone-beam CT: A post-mortem study
Johnson, Travis; Ding, Huanjun; Le, Huy Q.; Ducote, Justin L.; Molloi, Sabee
2014-01-01
Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The percent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification with high linear coefficients between the right and left breast of each pair. When comparing with the gold standard using %FGV from chemical analysis, Pearson’s r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate (SEE) was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the postmortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. In the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. PMID:24254317
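A compact numpy implementation of fuzzy c-means on 1-D voxel intensities illustrates the segmentation idea; the HU values and cluster setup are invented, not the authors' pipeline:

```python
import numpy as np

def fcm_1d(values, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy c-means on 1-D intensities: returns cluster centers and the
    membership matrix U (n_samples x c), rows summing to 1."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, float)
    u = rng.dirichlet(np.ones(c), size=x.size)      # random initial memberships
    for _ in range(iters):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)         # fuzzy-weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # Standard FCM update: u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
        u = 1.0 / (d ** (2 / (m - 1)) *
                   np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, u

# Hypothetical CT voxels: adipose ~ -100 HU, fibroglandular ~ 40 HU
rng = np.random.default_rng(4)
voxels = np.concatenate([rng.normal(-100, 15, 7000), rng.normal(40, 15, 3000)])
centers, u = fcm_1d(voxels)
dense = np.argmax(centers)                  # fibroglandular cluster
fgv_percent = 100 * np.mean(np.argmax(u, axis=1) == dense)
```

Unlike a hand-picked histogram threshold, the soft memberships adapt to each scan's intensity distribution, which is consistent with the lower inter-observer variation reported above.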
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
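Global sensitivity analysis of the kind described is commonly done with variance-based (Sobol) indices. A minimal Monte Carlo sketch of the first-order index using the pick-freeze (Saltelli) estimator on a toy linear model, not the scramjet simulation itself:

```python
import random

def first_order_sobol(f, dim, n=20000, seed=0):
    """Estimate first-order Sobol indices S_i = Var(E[f|x_i]) / Var(f)
    with the pick-freeze (Saltelli) estimator on U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # AB_i: rows of A with column i swapped in from B
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s = sum(yb * (yab - ya) for ya, yb, yab in zip(fA, fB, fABi)) / n
        indices.append(s / var)
    return indices

# Toy model: output is dominated by the second input (weight 2 vs. 1);
# analytic indices are 0.2 and 0.8
s1, s2 = first_order_sobol(lambda x: x[0] + 2.0 * x[1], dim=2)
```

Inputs with small indices are candidates for fixing at nominal values, which is how sensitivity analysis reduces the stochastic dimension.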
NASA Astrophysics Data System (ADS)
Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.
2018-03-01
The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
Joseph R. Milanovich; John C. Maerz; Amy D. Rosemond
2015-01-01
1. Because of their longevity and skeletal phosphorus demand, vertebrates can have distinct influences on the uptake, storage and recycling of nutrients in ecosystems. Quantification of body stoichiometry, combined with estimates of abundance or biomass, can provide insights into the effect of vertebrates on nutrient cycling. 2. We measured the nutrient content and...
Developing population models with data from marked individuals
Hae Yeong Ryu,; Kevin T. Shoemaker,; Eva Kneip,; Anna Pidgeon,; Patricia Heglund,; Brooke Bateman,; Thogmartin, Wayne E.; Reşit Akçakaya,
2016-01-01
Population viability analysis (PVA) is a powerful tool for biodiversity assessments, but its use has been limited because of the requirements for fully specified population models such as demographic structure, density-dependence, environmental stochasticity, and specification of uncertainties. Developing a fully specified population model from commonly available data sources – notably, mark–recapture studies – remains complicated due to lack of practical methods for estimating fecundity, true survival (as opposed to apparent survival), natural temporal variability in both survival and fecundity, density-dependence in the demographic parameters, and uncertainty in model parameters. We present a general method that estimates all the key parameters required to specify a stochastic, matrix-based population model, constructed using a long-term mark–recapture dataset. Unlike standard mark–recapture analyses, our approach provides estimates of true survival rates and fecundities, their respective natural temporal variabilities, and density-dependence functions, making it possible to construct a population model for long-term projection of population dynamics. Furthermore, our method includes a formal quantification of parameter uncertainty for global (multivariate) sensitivity analysis. We apply this approach to 9 bird species and demonstrate the feasibility of using data from the Monitoring Avian Productivity and Survivorship (MAPS) program. Bias-correction factors for raw estimates of survival and fecundity derived from mark–recapture data (apparent survival and juvenile:adult ratio, respectively) were non-negligible, and corrected parameters were generally more biologically reasonable than their uncorrected counterparts. Our method allows the development of fully specified stochastic population models using a single, widely available data source, substantially reducing the barriers that have until now limited the widespread application of PVA. 
This method is expected to greatly enhance our understanding of the processes underlying population dynamics and our ability to analyze viability and project trends for species of conservation concern.
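A stochastic, matrix-based projection of the kind this method parameterizes can be sketched as follows; the two-stage structure, survival rates, fecundity and noise level here are hypothetical illustrations, not estimates from the MAPS data:

```python
import random

def project(n0, survival, fecundity, years, cv=0.15, seed=1):
    """Project a two-stage (juvenile/adult) matrix model forward,
    drawing each year's vital rates with Gaussian noise (CV ~ cv)
    to mimic natural temporal variability."""
    rng = random.Random(seed)
    juv, ad = n0
    history = [juv + ad]
    for _ in range(years):
        s_j = min(1.0, max(0.0, rng.gauss(survival[0], cv * survival[0])))
        s_a = min(1.0, max(0.0, rng.gauss(survival[1], cv * survival[1])))
        f = max(0.0, rng.gauss(fecundity, cv * fecundity))
        # Leslie-style update: adults reproduce; both stages survive
        juv, ad = ad * f, juv * s_j + ad * s_a
        history.append(juv + ad)
    return history

traj = project(n0=(50.0, 100.0), survival=(0.3, 0.6), fecundity=1.2, years=20)
```

A PVA would repeat such projections many times, with parameters drawn from their uncertainty distributions, to estimate extinction risk.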
Uncertainties in estimates of the risks of late effects from space radiation
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Saganti, P. B.; Dicello, J. F.
2004-01-01
Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which cause estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Using the linear-additivity model for radiation risks, we use Monte-Carlo sampling from subjective uncertainty distributions in each factor to obtain an estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios, including a deep space outpost and Mars missions with durations of 360, 660, and 1000 days. The major results are the quantification of the uncertainties in current risk estimates, the identification of factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative metrics, e.g., the number of days in space without exceeding a given risk level within well-defined confidence limits. Published by Elsevier Ltd on behalf of COSPAR.
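The described approach treats the projected risk as a product of uncertain factors and samples each from a subjective distribution. A minimal sketch of that Monte-Carlo propagation; the point estimate and geometric standard deviations below are illustrative values, not the paper's:

```python
import math
import random

def risk_uncertainty(point_estimate, error_factors, n=50000, seed=0):
    """Multiply a point risk estimate by independently sampled
    lognormal uncertainty factors (each with median 1 and the given
    geometric standard deviation), returning the median and a
    95% subjective confidence interval on the projected risk."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        r = point_estimate
        for gsd in error_factors:
            r *= rng.lognormvariate(0.0, math.log(gsd))
        samples.append(r)
    samples.sort()
    return (samples[n // 2],            # median
            samples[int(0.025 * n)],    # 2.5th percentile
            samples[int(0.975 * n)])    # 97.5th percentile

# Illustrative: a 3% point risk with three multiplicative uncertainty factors
med, lo, hi = risk_uncertainty(point_estimate=0.03, error_factors=[1.5, 2.0, 1.3])
```

The width of the resulting interval shows why a modest shielding-driven reduction in the point estimate can be masked by the factor uncertainties.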
Uncertainties in Estimates of the Risks of Late Effects from Space Radiation
NASA Technical Reports Server (NTRS)
Cucinotta, F. A.; Schimmerling, W.; Wilson, J. W.; Peterson, L. E.; Saganti, P.; Dicelli, J. F.
2002-01-01
The health risks faced by astronauts from space radiation include cancer, cataracts, hereditary effects, and non-cancer morbidity and mortality risks related to the diseases of old age. Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which cause estimates of the risk of late effects to be highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Within the linear-additivity model, we use Monte-Carlo sampling from subjective uncertainty distributions in each factor to obtain a maximum likelihood estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios, including ISS, lunar station, deep space outpost, and Mars missions with durations of 360, 660, and 1000 days. The major results are the quantification of the uncertainties in current risk estimates, the identification of factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction using the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time, and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in terms of quantitative objectives, i.e., the number of days in space without exceeding a given risk level within well-defined confidence limits.
Bondu, Joseph Dian; Selvakumar, R; Fleming, Jude Joseph
2018-01-01
A variety of methods, including the Ion Selective Electrode (ISE), have been used for estimation of fluoride levels in drinking water, but these methods suffer from several drawbacks, and the newer method of ion chromatography (IC) has replaced many of them. The study aimed at (1) validating IC for estimation of fluoride levels in drinking water and (2) assessing drinking water fluoride levels of villages in and around Vellore district using IC. Forty-nine paired drinking water samples were measured using the ISE and IC methods (Metrohm). Water samples from 165 randomly selected villages in and around Vellore district were collected for fluoride estimation over 1 year. Standardization of the IC method showed good within-run precision, linearity and coefficient of variance, with correlation coefficient R² = 0.998. The limit of detection was 0.027 ppm and the limit of quantification was 0.083 ppm. Among the 165 villages, 46.1% recorded water fluoride levels >1.00 ppm, of which 19.4% had levels ranging from 1 to 1.5 ppm, 10.9% had levels of 1.5-2 ppm and about 12.7% had levels of 2.0-3.0 ppm. Three percent of villages had more than 3.0 ppm fluoride in the water tested. Most (44.42%) of these villages belonged to Jolarpet taluk, with moderate to high (0.86-3.56 ppm) water fluoride levels. The ion chromatography method has been validated and is therefore a reliable method for assessment of fluoride levels in drinking water. The residents of Jolarpet taluk (Vellore district) are found to be at high risk of developing dental and skeletal fluorosis.
Ogirala, Tejaswi; Eapen, Ashley; Salvante, Katrina G; Rapaport, Tomas; Nepomnaschy, Pablo A; Parameswaran, Ash M
2017-10-01
Biologists frequently collect and analyze biospecimens in naturalistic (i.e., field) conditions to ascertain information regarding the physiological status of their study participants. Generally, field-collected biospecimens need to be stored frozen in the field and then transported frozen to laboratory facilities where traditional biomarker assays, such as enzyme-linked immunosorbent assays (ELISAs), are conducted. As proper storage and transport of frozen specimens is often logistically difficult and expensive, particularly in nonurban field settings, methods that reduce the need for specimen storage and transport would benefit field-research dependent disciplines such as biology, ecology and epidemiology. One limiting factor to running assays in the field is the use of large and expensive equipment to visualize and quantify the assays, such as microplate readers. Here, we describe an implementation of colorimetric ELISA visualization and quantification using two novel and portable imaging instrumentation systems and data processing techniques for the determination of women's reproductive steroid hormone profiles. Using the light absorbance and transmittance properties of the chemical compounds that make up the hormone assay, we were able to estimate unknown hormone concentrations using a smartphone system and a webcam system. These estimates were comparable to those from a standard laboratory microplate reader (smartphone: accuracy = 82.20%, R² > 0.910; webcam: accuracy = 87.59%, R² > 0.942). This line of applied research, in the long run, is expected to provide necessary information for examining the extent to which reproductive function varies within and between populations and how it is influenced by psychosocial, energetic and environmental challenges.
Our validation of these novel, portable visualization and quantification systems allows for the eventual development of a compact and economical closed system which can be used to quantify biomarker concentrations in remote areas.
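Colorimetric ELISA readouts of this kind are conventionally converted to concentrations through a standard curve, often a four-parameter logistic (4PL) fit. A minimal sketch with made-up curve parameters; the study's actual calibration is not given here:

```python
def four_pl(conc, a, b, c, d):
    """4PL standard curve: optical density as a function of concentration.
    a = response at zero dose, d = response at infinite dose,
    c = inflection point (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

def inverse_four_pl(od, a, b, c, d):
    """Back-calculate an unknown's concentration from its optical density."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b)

# Made-up curve parameters for illustration
params = dict(a=0.05, b=1.2, c=150.0, d=2.0)
od = four_pl(100.0, **params)       # simulated reading for a 100-unit sample
est = inverse_four_pl(od, **params) # back-calculated concentration
```

Whether the optical density comes from a plate reader, a smartphone camera or a webcam, the same inverse-curve step converts intensity to concentration.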
NASA Astrophysics Data System (ADS)
Gutierrez-Jurado, H. A.; Guan, H.; Wang, J.; Wang, H.; Bras, R. L.; Simmons, C. T.
2015-12-01
Quantification of evapotranspiration (ET) and its partition over regions of heterogeneous topography and canopy poses a challenge using traditional approaches. In this study, we report the results of a novel field experiment design guided by the Maximum Entropy Production model of ET (MEP-ET), formulated for estimating evaporation and transpiration from homogeneous soil and canopy. A catchment with complex terrain and patchy vegetation in South Australia was instrumented to measure temperature, humidity and net radiation at soil and canopy surfaces. Performance of the MEP-ET model in quantifying transpiration and soil evaporation was evaluated during wet and dry conditions against independently and directly measured transpiration from sapflow and soil evaporation from the Bowen Ratio Energy Balance (BREB) method. MEP-ET transpiration shows remarkable agreement with that obtained through sapflow measurements during wet conditions, but consistently overestimates the flux during dry periods. However, an additional term introduced to the original MEP-ET model to account for stronger stomatal regulation during dry spells, based on differences between leaf and air vapor pressure deficits and temperatures, significantly improves the model performance. On the other hand, MEP-ET soil evaporation is in good agreement with that from BREB regardless of moisture conditions. The experimental design allows plot-scale quantification of evaporation and tree-scale quantification of transpiration. This study confirms for the first time that the MEP-ET model, originally developed for homogeneous open bare soil and closed canopy, can be used for modeling ET over heterogeneous land surfaces. Furthermore, we show that with the addition of an empirical function simulating the plants' ability to regulate transpiration, based on the same measurements of temperature and humidity, the method can produce reliable estimates of ET during both wet and dry conditions without compromising its parsimony.
Methodology for quantification of waste generated in Spanish railway construction works
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guzman Baez, Ana de; Villoria Saez, Paola; Rio Merino, Mercedes del
Highlights:
- Two equations for C and D waste estimation in railway construction works are developed.
- Mixed C and D waste is the most generated category during railway construction works.
- Tunnel construction is essential to quantify the waste generated during the works.
- There is a relationship between C and D waste generated and railway functional units.
- The methodology proposed can be used to obtain new constants for other areas.

Abstract: In recent years, the European Union (EU) has focused on the reduction of construction and demolition (C and D) waste. Specifically, in 2006, Spain generated roughly 47 million tons of C and D waste, of which only 13.6% was recycled. This situation has led to the drawing up of many regulations on C and D waste, forcing EU countries to include new measures for waste prevention and recycling. Among these measures is the mandatory obligation to quantify the C and D waste expected to be generated during a construction project. However, limited data are available on civil engineering projects. Therefore, the aim of this research study is to improve C and D waste management in railway projects by developing a model for C and D waste quantification. For this purpose, we develop two equations which estimate in advance the amount, both in weight and volume, of the C and D waste likely to be generated in railway construction projects, including the category of C and D waste generated for the entire project.
Zhang, Li; Wu, Yuhua; Wu, Gang; Cao, Yinglong; Lu, Changming
2014-10-01
Plasmid calibrators are increasingly applied for polymerase chain reaction (PCR) analysis of genetically modified organisms (GMOs). To evaluate the commutability between plasmid DNA (pDNA) and genomic DNA (gDNA) as calibrators, a plasmid molecule, pBSTopas, was constructed, harboring a Topas 19/2 event-specific sequence and a partial sequence of the rapeseed reference gene CruA. Assays of the pDNA showed similar limits of detection (five copies for Topas 19/2 and CruA) and quantification (40 copies for Topas 19/2 and 20 for CruA) as those for the gDNA. Comparisons of plasmid and genomic standard curves indicated that the slopes, intercepts, and PCR efficiencies for pBSTopas were significantly different from those of CRM Topas 19/2 gDNA for quantitative analysis of GMOs. Three correction methods were used to calibrate the quantitative analysis of control samples using pDNA as the calibrator: model a, or coefficient value a (Cva); model b, or coefficient value b (Cvb); and the novel model c, or coefficient formula (Cf). Cva and Cvb gave similar estimated values for the control samples, and the quantitative bias of the low-concentration sample exceeded the acceptable range of ±25% in two of the four repeats. Using Cfs to normalize the Ct values of test samples, the estimated values were very close to the reference values (bias -13.27 to 13.05%). In the validation with control samples, model c was more appropriate than Cva or Cvb. The application of Cf allowed pBSTopas to substitute for Topas 19/2 gDNA as a calibrator to accurately quantify GMOs.
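Quantification against plasmid or genomic standard curves of the kind compared here rests on the linear relation between Ct value and log copy number. A generic sketch of that step with illustrative curve parameters; the paper's specific correction models (Cva, Cvb, Cf) are not reproduced:

```python
def copies_from_ct(ct, slope, intercept):
    """Estimate copy number from a Ct value using a standard curve
    Ct = slope * log10(copies) + intercept."""
    return 10.0 ** ((ct - intercept) / slope)

def gmo_percent(ct_event, ct_ref, curve_event, curve_ref):
    """GMO content as the ratio of event-specific to reference-gene copies."""
    event = copies_from_ct(ct_event, *curve_event)
    ref = copies_from_ct(ct_ref, *curve_ref)
    return 100.0 * event / ref

# Illustrative curves: a slope near -3.32 corresponds to ~100% PCR efficiency
pct = gmo_percent(ct_event=30.0, ct_ref=25.0,
                  curve_event=(-3.32, 40.0), curve_ref=(-3.32, 38.3))
```

Because the result depends on both slope and intercept of each curve, any systematic difference between plasmid and genomic calibrators propagates directly into the estimated GMO content, which is what the correction models address.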
Siebenhaar, Markus; Küllmer, Kai; Fernandes, Nuno Miguel de Barros; Hüllen, Volker; Hopf, Carsten
2015-09-01
Desorption electrospray ionization (DESI) mass spectrometry is an emerging technology for direct therapeutic drug monitoring in dried blood spots (DBS). Current DBS methods require manual application of small molecules as internal standards for absolute drug quantification. With industrial standardization in mind, we superseded the manual addition of standard and built a three-layer setup for robust quantification of salicylic acid directly from DBS. We combined a dioctyl sodium sulfosuccinate weave facilitating sample spreading with a cellulose layer for addition of isotope-labeled salicylic acid as internal standard and a filter paper for analysis of the standard-containing sample by DESI-MS. Using this setup, we developed a quantification method for salicylic acid from whole blood with a validated linear curve range from 10 to 2000 mg/L, a relative standard deviation (RSD%) ≤14%, and determination coefficients of 0.997. The limit of detection (LOD) was 8 mg/L and the lower limit of quantification (LLOQ) was 10 mg/L. Recovery rates in method verification by LC-MS/MS were 97 to 101% for blinded samples. Most importantly, a study in healthy volunteers after administration of a single dose of Aspirin provides evidence to suggest that the three-layer setup may enable individual pharmacokinetic and endpoint testing following blood collection by finger pricking by patients at home. Taken together, our data suggests that DBS-based quantification of drugs by DESI-MS on pre-manufactured three-layer cartridges may be a promising approach for future near-patient therapeutic drug monitoring.
Powder X-ray diffraction method for the quantification of cocrystals in the crystallization mixture.
Padrela, Luis; de Azevedo, Edmundo Gomes; Velaga, Sitaram P
2012-08-01
The solid state purity of cocrystals critically affects their performance. Thus, it is important to accurately quantify the purity of cocrystals in the final crystallization product. The aim of this study was to develop a powder X-ray diffraction (PXRD) quantification method for investigating the purity of cocrystals. The method developed was employed to study the formation of indomethacin-saccharin (IND-SAC) cocrystals by mechanochemical methods. Pure IND-SAC cocrystals were geometrically mixed with a 1:1 w/w mixture of indomethacin/saccharin in various proportions. An accurately measured amount (550 mg) of the mixture was used for the PXRD measurements. The most intense, non-overlapping, characteristic diffraction peak of IND-SAC was used to construct the calibration curve in the range 0-100% (w/w). This calibration model was validated and used to monitor the formation of IND-SAC cocrystals by liquid-assisted grinding (LAG). The IND-SAC cocrystal calibration curve showed excellent linearity (R² = 0.9996) over the entire concentration range, displaying limit of detection (LOD) and limit of quantification (LOQ) values of 1.23% (w/w) and 3.74% (w/w), respectively. Validation results showed excellent correlations between actual and predicted concentrations of IND-SAC cocrystals (R² = 0.9981). The accuracy and reliability of the PXRD quantification method depend on the methods of sample preparation and handling. The crystallinity of the IND-SAC cocrystals was higher when larger amounts of methanol were used in the LAG method. The PXRD quantification method is suitable and reliable for verifying the purity of cocrystals in the final crystallization product.
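LOD and LOQ figures for calibration models of this kind are commonly derived from the residual standard deviation of the regression line (ICH-style LOD = 3.3·σ/S, LOQ = 10·σ/S). A minimal sketch with hypothetical peak-intensity data, not the study's measurements:

```python
import math

def calibration_lod_loq(conc, signal):
    """Fit signal = slope*conc + intercept by least squares and derive
    LOD = 3.3*sigma/slope and LOQ = 10*sigma/slope, where sigma is the
    residual standard deviation of the calibration line."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(conc, signal))
             / sum((x - mx) ** 2 for x in conc))
    intercept = my - slope * mx
    sigma = math.sqrt(sum((y - (slope * x + intercept)) ** 2
                          for x, y in zip(conc, signal)) / (n - 2))
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical calibration: % cocrystal (w/w) vs. integrated peak intensity
conc = [0.0, 20.0, 40.0, 60.0, 80.0, 100.0]
signal = [0.3, 20.5, 39.6, 60.8, 79.4, 100.1]
lod, loq = calibration_lod_loq(conc, signal)
```

By construction LOQ is about three times LOD, consistent with the 1.23%/3.74% pair reported above.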
Martins, Ayrton F; Frank, Carla da S; Altissimo, Joseline; de Oliveira, Júlia A; da Silva, Daiane S; Reichert, Jaqueline F; Souza, Darliana M
2017-08-24
Statins are amongst the most prescribed agents for treating hypercholesterolaemia and preventing vascular diseases. In this study, a rapid and effective liquid chromatography method, assisted by diode array detection, was designed and validated for the simultaneous quantification of atorvastatin (ATO) and simvastatin (SIM) in hospital effluent samples. The solid phase extraction (SPE) of the analytes was optimized with regard to sorbent material and pH, and the dispersive liquid-liquid microextraction (DLLME) in terms of pH, ionic strength, and type and volume of extractor/dispersor solvents. The performance of both extraction procedures was evaluated in terms of linearity, quantification limits, accuracy (recovery %), precision and matrix effects for each analyte. The methods proved to be linear in the concentration range considered; the quantification limits were 0.45 µg L⁻¹ for ATO and 0.75 µg L⁻¹ for SIM; the matrix effect was almost absent in both methods; the average recoveries remained between 81.5-90.0%; and the RSD values were <20%. The validated methods were applied to the quantification of the statins in real samples of hospital effluent; the concentrations ranged from 18.8 µg L⁻¹ to 35.3 µg L⁻¹ for ATO, and from 30.3 µg L⁻¹ to 38.5 µg L⁻¹ for SIM. Since the calculated risk quotient reached values as high as 192, the occurrence of ATO and SIM in hospital effluent poses a potentially serious risk to human health and the aquatic ecosystem.
Takemoto, Jody K; Remsberg, Connie M; Yáñez, Jaime A; Vega-Villa, Karina R; Davies, Neal M
2008-11-01
A stereospecific method for the analysis of sakuranetin was developed. Separation was accomplished using a Chiralpak AD-RH column with UV (ultraviolet) detection at 288 nm. The stereospecific linear calibration curves ranged from 0.5 to 100 microg/mL. The mean extraction efficiency was >98%. Precision of the assay was <12% (relative standard deviation, RSD%), and within 10% at the limit of quantitation (0.5 microg/mL). Bias of the assay was lower than 10%, and within 5% at the limit of quantitation. The assay was applied successfully to pharmacokinetic quantification in rats, and to stereospecific quantification in oranges, grapefruit juice, and matico (Piper aduncum L.).
Lautié, Emmanuelle; Rasse, Catherine; Rozet, Eric; Mourgues, Claire; Vanhelleputte, Jean-Paul; Quetin-Leclercq, Joëlle
2013-02-01
The aim of this study was to find if fast microwave-assisted extraction could be an alternative to the conventional Soxhlet extraction for the quantification of rotenone in yam bean seeds by SPE and HPLC-UV. For this purpose, an experimental design was used to determine the optimal conditions of the microwave extraction. Then the values of the quantification on three accessions from two different species of yam bean seeds were compared using the two different kinds of extraction. A microwave extraction of 11 min at 55°C using methanol/dichloromethane (50:50) allowed rotenone extraction either equivalently or more efficiently than the 8-h-Soxhlet extraction method and was less sensitive to moisture content. The selectivity, precision, trueness, accuracy, and limit of quantification of the method with microwave extraction were also demonstrated. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Laurie, Matthew T; Bertout, Jessica A; Taylor, Sean D; Burton, Joshua N; Shendure, Jay A; Bielas, Jason H
2013-08-01
Due to the high cost of failed runs and suboptimal data yields, quantification and determination of fragment size range are crucial steps in the library preparation process for massively parallel sequencing (or next-generation sequencing). Current library quality control methods commonly involve quantification using real-time quantitative PCR and size determination using gel or capillary electrophoresis. These methods are laborious and subject to a number of significant limitations that can make library calibration unreliable. Herein, we propose and test an alternative method for quality control of sequencing libraries using droplet digital PCR (ddPCR). By exploiting a correlation we have discovered between droplet fluorescence and amplicon size, we achieve the joint quantification and size determination of target DNA with a single ddPCR assay. We demonstrate the accuracy and precision of applying this method to the preparation of sequencing libraries.
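Absolute quantification by ddPCR rests on Poisson statistics over droplet counts: the mean number of copies per droplet is recovered from the fraction of negative droplets. A minimal sketch of that standard correction; the ~0.85 nL droplet volume is a typical value used here illustratively:

```python
import math

def ddpcr_concentration(positive, total, droplet_volume_nl=0.85):
    """Poisson-corrected target concentration (copies/uL) from droplet
    counts: mean copies per droplet lambda = -ln(fraction negative)."""
    negative = total - positive
    lam = -math.log(negative / total)        # copies per droplet
    return lam / (droplet_volume_nl * 1e-3)  # nL -> uL

# Illustrative run: 4000 positive droplets out of 15000 accepted
conc = ddpcr_concentration(positive=4000, total=15000)
```

The Poisson correction is what makes the quantification absolute: no standard curve is needed, only droplet counts and volume.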
Quantification of trace metals in water using complexation and filter concentration.
Dolgin, Bella; Bulatov, Valery; Japarov, Julia; Elish, Eyal; Edri, Elad; Schechter, Israel
2010-06-15
Various metals undergo complexation with organic reagents, resulting in colored products. In practice, their molar absorptivities allow for quantification in the ppm range. However, a proper pre-concentration of the colored complex on paper filter lowers the quantification limit to the low ppb range. In this study, several pre-concentration techniques have been examined and compared: filtering the already complexed mixture, complexation on filter, and dipping of dye-covered filter in solution. The best quantification has been based on the ratio of filter reflectance at a certain wavelength to that at zero metal concentration. The studied complex formations (Ni ions with TAN and Cd ions with PAN) involve production of nanoparticle suspensions, which are associated with complicated kinetics. The kinetics of the complexation of Ni ions with TAN has been investigated and optimum timing could be found. Kinetic optimization in regard to some interferences has also been suggested.
Testing of The Harp Guidelines On A Small Watershed In Finland
NASA Astrophysics Data System (ADS)
Granlund, K.; Rekolainen, S.
K. Granlund, S. Rekolainen, Finnish Environment Institute, Research Department, kirsti.granlund@vyh.fi. Watersheds have emerged as environmental units for assessing, controlling and reducing non-point-source pollution. Within the framework of international conventions such as OSPARCOM and HELCOM, and in the implementation of the EU Water Framework Directive, the criteria for model selection are of key importance. Harmonized Quantification and Reporting Procedures for Nutrients (HARP) aims at helping the implementation of OSPAR's (Convention for the Protection of the Marine Environment of the North-East Atlantic) strategy of controlling eutrophication and reducing nutrient inputs to marine ecosystems by 50%. The guidelines quantify nitrogen and phosphorus losses from both point and nonpoint sources and help assess the effectiveness of the pollution reduction strategy. The HARP guidelines related respectively to the "Quantification of Nitrogen and Phosphorus Losses from Diffuse Anthropogenic Sources and Natural Background Losses" and to the "Quantification and Reporting of the Retention of Nitrogen and Phosphorus in River Catchments" were tested on a small, well-instrumented agricultural watershed in Finland. The project was coordinated by the Environment Institute of the Joint Research Centre. Three types of methodologies for estimating nutrient losses to watercourses were evaluated during the project. Simple methods based on regression equations or loading functions provide a quick way of estimating nutrient losses. Through these methods the pollutant load can be related to parameters such as slope, soil type, land use, management practices, etc. Relevant nutrient loading functions for the study catchment were collected during the project. One mid-range model was applied to simulate the nitrogen cycle in a simplified manner in relation to climate, soil properties, land use and management practices.
Physically based models describe in detail the water and nutrient cycles within the watershed. The ICECREAM and SWAT models were applied on the study watershed. ICECREAM is a management model based on the CREAMS model for predicting field-scale runoff and erosion; its nitrogen and phosphorus submodels are based on the GLEAMS model. SWAT is a continuous-time, spatially distributed model that includes hydrological, sediment and chemical processes in river basins. The simple methods and the mid-range model for nitrogen proved fast and easy to apply, but due to limited information on crop-specific loading functions and nitrogen process rates (e.g. mineralisation in soil), only order-of-magnitude estimates of nutrient loads could be calculated. The ICECREAM model was used to estimate crop-specific nutrient losses from the agricultural area. The potential annual nutrient loads for the whole catchment were then calculated by including estimates of nutrient loads from other land-use classes (forested area and scattered settlement). Finally, calibration of the SWAT model was started to study in detail the effects of catchment characteristics on nutrient losses. The preliminary results of model testing are presented and the suitability of different methodologies for estimating nutrient losses in Finnish catchments is discussed.
Jiang, Tingting; Dai, Yongmei; Miao, Miao; Zhang, Yue; Song, Chenglin; Wang, Zhixu
2015-07-01
To evaluate the usefulness and efficiency of a novel dietary assessment method among urban pregnant women, sixty-one pregnant women were recruited from the ward and provided with a meal accurately weighed before cooking. The meal was photographed from three different angles before and after eating. The subjects were also interviewed by the investigators for a 24 h dietary recall. Food weighing, image quantification and the 24 h dietary recall were conducted by investigators from three different groups, and the results were kept isolated from one another. Food consumption was analyzed on the basis of classification and total summation, and nutrient intake from the meal was calculated for each subject. The data obtained from the dietary recall and the image quantification were compared with the actual values, and correlation and regression analyses were carried out between the weighing method and both image quantification and dietary recall. A total of twenty-three kinds of food, including rice, vegetables, fish, meats and soy bean curd, were included in the experimental meal. Compared with data from the 24 h dietary recall (r = 0.413, P < 0.05), food weights estimated by image quantification (r = 0.778, P < 0.05, n = 308) were more strongly correlated with the weighed data and showed a more concentrated linear distribution. The mean absolute difference between image quantification and the weighing method across all foods was 77.23 ± 56.02 (P < 0.05, n = 61), much smaller than the difference between the 24 h recall and the weighing method (172.77 ± 115.18). Values of almost all nutrients, including energy, protein, fat, carbohydrate, vitamin A, vitamin C, calcium, iron and zinc, calculated from the food weights obtained by image quantification were closer to those of the weighed data than were those from the 24 h dietary recall (P < 0.01).
The Bland-Altman analysis showed that the majority of the measurements of nutrient intake were scattered along the mean-difference line and close to the line of equality (difference = 0). The plots show fairly good agreement between estimated and actual food consumption, indicating that the differences (including the outliers) were random and did not exhibit any systematic bias, being consistent over different levels of mean food amount. In addition, the questionnaire showed that fifty-six of the pregnant women considered image quantification less time-consuming and burdensome than the 24 h recall, and fifty-eight of them would like to use image quantification to track their dietary status. The novel instant-photography (image quantification) method for dietary assessment is thus more effective than the conventional 24 h dietary recall, and it also yields food intake values close to the weighed data.
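The agreement analysis described above rests on the standard Bland-Altman quantities: the bias (mean difference between methods) and the 95% limits of agreement at bias ± 1.96 standard deviations. A minimal sketch with hypothetical food-weight data (the values below are illustrative, not from the study):

```python
import numpy as np

def bland_altman_stats(estimated, reference):
    """Bias (mean difference) and 95% limits of agreement between
    two measurement methods, as plotted in a Bland-Altman analysis."""
    diffs = np.asarray(estimated, float) - np.asarray(reference, float)
    bias = diffs.mean()          # the mean-difference line
    sd = diffs.std(ddof=1)       # spread of the per-sample differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical food weights (g): image-based estimates vs. weighed values
est = [102, 95, 130, 76, 58, 210, 88]
ref = [100, 90, 135, 80, 55, 200, 92]
bias, (lo, hi) = bland_altman_stats(est, ref)
```

Points falling inside `(lo, hi)` with no trend against the pairwise means correspond to the "random, no systematic bias" pattern the abstract reports.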
Frauen, M; Steinhart, H; Rapp, C; Hintze, U
2001-07-01
A simple, rapid and reproducible method for identification and quantification of iodopropynyl butylcarbamate (IPBC) in different cosmetic formulations is presented. The determination was carried out using a high-performance liquid chromatography (HPLC) procedure on a reversed phase column coupled to a single quadrupole mass spectrometer (MS) via an electrospray ionization (ESI) interface. Detection was performed in the positive selected ion-monitoring mode. In methanol/water extracts from different cosmetic formulations a detection limit between 50 and 100 ng/g could be achieved. A routine analytical procedure could be set up with good quantification reliability (relative standard deviation between 0.9 and 2.9%).
Russell, Jason D.; Scalf, Mark; Book, Adam J.; Ladror, Daniel T.; Vierstra, Richard D.; Smith, Lloyd M.; Coon, Joshua J.
2013-01-01
Quantification of gas-phase intact protein ions by mass spectrometry (MS) is impeded by highly variable ionization, ion transmission, and ion detection efficiencies. Therefore, quantification of proteins using MS-associated techniques is almost exclusively done after proteolysis, where peptides serve as proxies for estimating protein abundance. Advances in instrumentation, protein separations, and informatics have made large-scale sequencing of intact proteins using top-down proteomics accessible to the proteomics community; yet quantification of proteins using a top-down workflow has largely been unaddressed. Here we describe a label-free approach to determine the abundance of intact proteins separated by nanoflow liquid chromatography prior to MS analysis by using solution-phase measurements of ultraviolet light-induced intrinsic fluorescence (UV-IF). UV-IF is measured directly at the electrospray interface just prior to the capillary exit, where proteins containing at least one tryptophan residue are readily detected. UV-IF quantification was demonstrated using commercially available protein standards and provided more accurate and precise protein quantification than MS ion current. We evaluated the parallel use of UV-IF and top-down tandem MS for quantification and identification of protein subunits and associated proteins from an affinity-purified 26S proteasome sample from Arabidopsis thaliana. We identified 26 unique proteins and quantified 13 tryptophan-containing species. Our analyses discovered previously unidentified N-terminal processing of the β6 (PBF1) and β7 (PBG1) subunits; such processing of PBG1 may generate a heretofore unknown additional protease active site upon cleavage. In addition, our approach permitted the unambiguous identification and quantification of both isoforms of the proteasome-associated protein DSS1. PMID:23536786
Evaluation of Options for Interpreting Environmental ...
Secondary data from the BioResponse Operational Testing and Evaluation project were used to study six options for interpreting culture-based microbial count data sets that include left-censored data, i.e., measurements below established quantification and/or detection limits.
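The six options studied are not enumerated in this record, but the most common family of approaches for left-censored counts is substitution: replacing each non-detect with a fixed value such as 0, LOD/2, or the LOD itself. A sketch under that assumption, with invented plate counts:

```python
import numpy as np

# Hypothetical plate counts (CFU/mL); None marks non-detects, i.e.
# left-censored values below the limit of detection (LOD).
LOD = 10.0
raw = [25.0, None, 40.0, None, 15.0, 60.0, None, 30.0]

def substitute(data, fill):
    """Substitution option for censored data: replace each non-detect
    with a fixed value (0, LOD/2, or LOD are the typical choices)."""
    return np.array([x if x is not None else fill for x in data])

# The choice of fill value visibly shifts the summary statistics
means = {fill: substitute(raw, fill).mean() for fill in (0.0, LOD / 2, LOD)}
```

With three of eight values censored, the sample mean here ranges from 21.25 (fill = 0) to 25.0 (fill = LOD), which is exactly the kind of interpretation sensitivity such studies quantify.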
NASA Astrophysics Data System (ADS)
Martel, Dimitri; Tse Ve Koon, K.; Le Fur, Yann; Ratiney, Hélène
2015-11-01
Two-dimensional spectroscopy offers the possibility to unambiguously distinguish metabolites by spreading the multiplet structure of J-coupled spin systems into a second dimension. Quantification methods that perform parametric fitting of the 2D MRS signal have recently been proposed for J-resolved PRESS (JPRESS) but not explicitly for Localized Correlation Spectroscopy (LCOSY). Here, through a whole-metabolite quantification approach, the quantification performance of correlation spectroscopy is studied. The ability to quantify metabolite relaxation time constants is studied for three localized 2D MRS sequences (LCOSY, LCTCOSY and JPRESS) in vitro on preclinical MR systems. The issues encountered during implementation and the quantification strategies are discussed with the help of the Fisher matrix formalism. The described parameterized models enable computation of the lower bound on error variance, generally known as the Cramér-Rao bounds (CRBs) and a standard measure of precision, for the parameters estimated from these 2D MRS signal fits. LCOSY has a theoretical net signal loss of a factor of two per unit of acquisition time compared to JPRESS. A quick analysis might suggest that the relative CRBs of LCOSY compared to JPRESS (expressed as a percentage of the concentration values) should therefore be doubled, but we show that this is not necessarily true. Finally, the LCOSY quantification procedure was applied to data acquired in vivo on a mouse brain.
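For a parametric model fitted to data with i.i.d. Gaussian noise, the Cramér-Rao bounds come from the diagonal of the inverse Fisher matrix, F = JᵀJ/σ², where J is the Jacobian of the model with respect to its parameters. A minimal sketch using a toy mono-exponential signal (an illustrative stand-in, not the 2D MRS model of the paper):

```python
import numpy as np

def cramer_rao_bounds(jacobian, sigma):
    """Diagonal of the inverse Fisher matrix: lower bounds on the
    variance of any unbiased estimator, assuming i.i.d. Gaussian
    noise with standard deviation `sigma`."""
    fisher = jacobian.T @ jacobian / sigma**2
    return np.diag(np.linalg.inv(fisher))

# Toy signal s(t) = A * exp(-t / T2); parameters and grid are invented
A, T2, sigma = 10.0, 0.08, 0.5
t = np.linspace(0.0, 0.3, 64)
J = np.column_stack([
    np.exp(-t / T2),                   # ds/dA
    A * t / T2**2 * np.exp(-t / T2),   # ds/dT2
])
crb_A, crb_T2 = cramer_rao_bounds(J, sigma)
```

Because F scales as 1/σ², halving the noise (or, equivalently, doubling the net signal per unit time) quarters each CRB, which is why a factor-of-two signal loss naively suggests doubled relative CRBs.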
Ott, Stephan J; Musfeldt, Meike; Ullmann, Uwe; Hampe, Jochen; Schreiber, Stefan
2004-06-01
The composition of the human intestinal flora is important for the health status of the host. The global composition and the presence of specific pathogens are relevant to the effects of the flora. Therefore, accurate quantification of all major bacterial populations of the enteric flora is needed. A TaqMan real-time PCR-based method for the quantification of 20 dominant bacterial species and groups of the intestinal flora has been established on the basis of 16S ribosomal DNA taxonomy. A PCR with conserved primers was used for all reactions. In each real-time PCR, a universal probe for quantification of total bacteria and a specific probe for the species in question were included. PCR with conserved primers and the universal probe for total bacteria allowed relative and absolute quantification. Minor groove binder probes increased the sensitivity of the assays 10- to 100-fold. The method was evaluated by cross-reaction experiments and quantification of bacteria in complex clinical samples from healthy patients. A sensitivity of 10¹ to 10³ bacterial cells per sample was achieved. No significant cross-reaction was observed. The real-time PCR assays presented may facilitate understanding of the intestinal bacterial flora through a normalized global estimation of the major contributing species.
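Relative quantification with a universal probe and a species-specific probe can be illustrated with the ΔCt relation: assuming 100% amplification efficiency (the signal doubles every cycle), the species' share of total bacterial DNA is 2 raised to minus the difference in threshold cycles. This is a generic sketch of the principle, not the paper's exact calibration:

```python
def relative_abundance(ct_species, ct_total):
    """Fraction of total 16S rDNA attributed to one species from the
    threshold cycles of the specific and universal probes, assuming
    perfect doubling per PCR cycle (100% efficiency)."""
    return 2.0 ** -(ct_species - ct_total)

# A species probe crossing threshold 4 cycles after the universal
# probe corresponds to 1/16 of the total bacterial DNA
fraction = relative_abundance(24.0, 20.0)   # 0.0625
```

Real assays correct for measured amplification efficiencies, so the base 2.0 would be replaced by the calibrated efficiency per target.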
STATISTICAL ANALYSIS OF TANK 18F FLOOR SAMPLE RESULTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, S.
2010-09-02
Representative sampling has been completed for characterization of the residual material on the floor of Tank 18F as per the statistical sampling plan developed by Shine [1]. Samples from eight locations were obtained from the tank floor, and two of the samples were archived as a contingency. Six samples, referred to in this report as the current scrape samples, were submitted to and analyzed by SRNL [2]. This report contains the statistical analysis of the floor sample analytical results to determine if further data are needed to reduce uncertainty. Included are comparisons with the prior Mantis sample results [3] to determine if they can be pooled with the current scrape samples to estimate the upper 95% confidence limits (UCL95%) for concentration. Statistical analysis revealed that the Mantis and current scrape sample results are not compatible; therefore, the Mantis sample results were not used to support the quantification of analytes in the residual material. Significant spatial variability among the current sample results was not found: constituent concentrations were similar between the North and South hemispheres as well as between the inner and outer regions of the tank floor, and the results from all six current scrape samples fall within their 3-sigma limits. In view of the results from numerous statistical tests, the data from all six current scrape samples were pooled, providing an adequate sample size for quantification of the residual material on the floor of Tank 18F. The uncertainty is quantified in this report by an upper 95% confidence limit (UCL95%) on each analyte concentration, calculated as a function of the number of samples, the average, and the standard deviation of the analytical results. The UCL95% was based entirely on the six current scrape sample results (each averaged across three analytical determinations).
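The report describes the UCL95% as a function of the sample count, average, and standard deviation; the standard one-sided Student-t upper confidence limit on a mean has exactly that form. A sketch with invented concentrations (the t value for 5 degrees of freedom is from standard tables; the report's exact formula may differ in detail):

```python
import math
from statistics import mean, stdev

# Student-t 95th percentile for n-1 degrees of freedom; for the six
# scrape samples here, df = 5 gives t = 2.015 (standard tables)
T95 = {5: 2.015}

def ucl95(results):
    """One-sided upper 95% confidence limit on the mean concentration:
    average + t * s / sqrt(n)."""
    n = len(results)
    return mean(results) + T95[n - 1] * stdev(results) / math.sqrt(n)

# Hypothetical analyte concentrations from six scrape samples (mg/kg)
conc = [1.2, 1.5, 0.9, 1.1, 1.4, 1.3]
```

For these illustrative values, `ucl95(conc)` is roughly 1.41 mg/kg against a sample mean of about 1.23, showing how the limit inflates with scatter and shrinks with sample size.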
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yueqi; Lava, Pascal; Reu, Phillip
2015-12-23
This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
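The dependence on image noise and summed intensity gradient can be sketched with a commonly cited first-order approximation for subset-based DIC, std(u) ≈ √2·σ_noise / √(Σ gₓ²). This is a generic literature result used for illustration, not necessarily the exact expression derived in the paper (which also includes the subpixel displacement and the interpolation scheme):

```python
import numpy as np

def dic_random_error(subset_grad_x, noise_std):
    """Approximate standard deviation of the u-displacement random
    error for one subset, via the first-order result
        std(u) ~= sqrt(2) * sigma_noise / sqrt(sum of squared
                  x-direction intensity gradients in the subset).
    Error grows with image noise and shrinks with subset contrast."""
    g2 = np.sum(np.asarray(subset_grad_x, dtype=float) ** 2)
    return np.sqrt(2.0) * noise_std / np.sqrt(g2)

# Hypothetical 21 x 21 px subset with uniform gradient of 5 gray/px
# and camera noise of 2 gray levels
err = dic_random_error(np.full((21, 21), 5.0), 2.0)
```

With these invented numbers the predicted random error is a few hundredths of a pixel, the typical order of magnitude for well-textured speckle patterns.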
Wang, Hanghang; Muehlbauer, Michael J.; O’Neal, Sara K.; Newgard, Christopher B.; Hauser, Elizabeth R.; Shah, Svati H.
2017-01-01
The field of metabolomics as applied to human disease and health is rapidly expanding. In recent efforts of metabolomics research, greater emphasis has been placed on quality control and method validation. In this study, we report an experience with quality control and a practical application of method validation. Specifically, we sought to identify and modify steps in gas chromatography-mass spectrometry (GC-MS)-based, non-targeted metabolomic profiling of human plasma that could influence metabolite identification and quantification. Our experimental design included two studies: (1) a limiting-dilution study, which investigated the effects of dilution on analyte identification and quantification; and (2) a concentration-specific study, which compared the optimal plasma extract volume established in the first study with the volume used in the current institutional protocol. We confirmed that contaminants, concentration, repeatability and intermediate precision are major factors influencing metabolite identification and quantification. In addition, we established methods for improved metabolite identification and quantification, which were summarized to provide recommendations for experimental design of GC-MS-based non-targeted profiling of human plasma. PMID:28841195
Colour thresholding and objective quantification in bioimaging
NASA Technical Reports Server (NTRS)
Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.
1992-01-01
Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black-and-white densitometry (256 intensity levels), separating subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry and for objective quantification of subtle colour differences between experimental and control samples.
WE-H-207A-06: Hypoxia Quantification in Static PET Images: The Signal in the Noise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, H; Yeung, I; Milosevic, M
2016-06-15
Purpose: Quantification of hypoxia from PET images is of considerable clinical interest. In the absence of dynamic PET imaging, the hypoxic fraction (HF) of a tumor has to be estimated from voxel values of activity concentration of a radioactive hypoxia tracer. This work is part of an effort to standardize quantification of the tumor hypoxic fraction from PET images. Methods: A simple hypoxia imaging model of the tumor was developed. The distribution of tracer activity was described as the sum of two probability distributions, one for the normoxic (and necrotic) voxels and one for the hypoxic voxels. The widths of the distributions arise from variability in transport, tumor tissue inhomogeneity, tracer binding kinetics, and PET image noise. Quantification of HF was performed for various levels of variability using two methodologies: (a) classification thresholds between normoxic and hypoxic voxels based on a non-hypoxic surrogate (muscle), and (b) estimation of the (posterior) probability distributions by maximum-likelihood optimization, which does not require a surrogate. Data from the hypoxia imaging model and from 27 cervical cancer patients enrolled in a FAZA PET study were analyzed. Results: In the model, where the true value of HF is known, thresholds usually underestimate the value at large variability. For the patients, a significant uncertainty in the HF values (an average intra-patient range of 17%) was caused by spatial non-uniformity of image noise, which is a hallmark of all PET images. Maximum likelihood estimation (MLE) is able to optimize directly for the weights of both distributions, but may suffer from poor optimization convergence. For some patients, MLE-based HF values showed significant differences from threshold-based HF values. Conclusion: HF values depend critically on the magnitude of the different sources of tracer uptake variability; a measure of confidence should also be reported.
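The maximum-likelihood alternative to thresholds fits a two-component mixture to the voxel uptake values and reads the hypoxic fraction off as the weight of the high-uptake component. A minimal EM sketch under the assumption of two Gaussian components (not the authors' implementation; distributions and data are synthetic):

```python
import numpy as np

def hypoxic_fraction_em(uptake, n_iter=300):
    """Fit a two-Gaussian mixture to voxel uptake values by EM and
    return the mixture weight of the higher-mean (hypoxic) component."""
    x = np.asarray(uptake, dtype=float)
    # initialise the two components at the 10th/90th percentiles
    mu = np.array([np.percentile(x, 10), np.percentile(x, 90)])
    sd = np.array([x.std(), x.std()]) + 1e-9
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each voxel
        pdf = (w / (sd * np.sqrt(2.0 * np.pi)) *
               np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means and standard deviations
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return w[np.argmax(mu)]   # weight of the high-uptake component

# Synthetic tumour: 80% normoxic voxels, 20% hypoxic voxels
rng = np.random.default_rng(0)
voxels = np.concatenate([rng.normal(1.0, 0.1, 800),
                         rng.normal(2.0, 0.2, 200)])
hf = hypoxic_fraction_em(voxels)
```

On well-separated synthetic components EM recovers the true weight; the convergence problems noted in the abstract arise when the two distributions overlap heavily, which is exactly the high-variability regime where thresholds also fail.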
Taylor, Jonathan Christopher; Fenner, John Wesley
2017-11-29
Semi-quantification methods are well established in the clinic for assisted reporting of (I-123) ioflupane images; arguably, though, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms, so a direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. The machine learning algorithms were based on support vector machine classifiers with three different sets of features: (1) voxel intensities; (2) principal components of image voxel intensities; (3) striatal binding ratios from the putamen and caudate. The semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (1) the minimum of age-matched controls; (2) the mean minus 1/1.5/2 standard deviations of age-matched controls; (3) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); (4) selection of the optimum operating point on the receiver operating characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92 for local data and between 0.95 and 0.97 for PPMI data.
Classification performance was lower for the local database than the research database for both semi-quantitative and machine learning algorithms. However, for both databases, the machine learning methods generated equal or higher mean accuracies (with lower variance) than any of the semi-quantification approaches. The gain in performance from using machine learning algorithms as compared to semi-quantification was relatively small and may be insufficient, when considered in isolation, to offer significant advantages in the clinical context.
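One of the four normal-limit definitions above, the control mean minus k standard deviations, reduces to a one-line threshold rule. A sketch with hypothetical SBR values (the control data and cut-off parameter k = 2 are illustrative, not from the study):

```python
import numpy as np

def normal_limit(control_sbrs, k=2.0):
    """Semi-quantification cut-off: mean minus k standard deviations
    of age-matched control striatal binding ratios (one of the four
    normal-limit options compared in the study)."""
    c = np.asarray(control_sbrs, dtype=float)
    return c.mean() - k * c.std(ddof=1)

def classify(sbr, limit):
    """An SBR below the normal limit flags the scan as abnormal
    (Parkinsonian); at or above it, the scan is reported normal."""
    return "abnormal" if sbr < limit else "normal"

controls = [2.1, 2.4, 2.0, 2.3, 2.2, 2.5]   # hypothetical putamen SBRs
limit = normal_limit(controls)
```

The simplicity of this rule is both its clinical appeal and its limitation: it uses a single feature per region, whereas the SVM-based methods in the study pool many features, which is where their accuracy gain comes from.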
Mapping canopy gap fraction and leaf area index at continent-scale from satellite lidar
NASA Astrophysics Data System (ADS)
Mahoney, C.; Hopkinson, C.; Held, A. A.
2015-12-01
Information on canopy cover is essential for understanding spatial and temporal variability in vegetation biomass, local meteorological processes and hydrological transfers within vegetated environments. Gap fraction (GF), an index of canopy cover, is often derived over large areas (hundreds of km²) via airborne laser scanning (ALS), and such estimates are reasonably well understood. However, obtaining country-wide estimates is challenging due to the lack of spatially distributed point cloud data. The Geoscience Laser Altimeter System (GLAS) removes this spatial limitation, but its large-footprint nature and continuous waveform measurements make deriving GF challenging. ALS data from three Australian sites are used as a basis to scale up GF estimates to GLAS footprint data through a physically based Weibull function. Spaceborne estimates of GF are then combined with supplementary predictor variables in the predictive Random Forest algorithm to yield country-wide estimates at a 250 m spatial resolution, accompanied by uncertainties at the pixel level. Preliminary estimates of effective leaf area index (eLAI) are also presented by converting GF via the Beer-Lambert law with an extinction coefficient of 0.5, deemed acceptable at such spatial scales. Such wide-scale quantification of GF and eLAI is key to assessing and modifying current forest management strategies across Australia. This work also assists Australia's Terrestrial Ecosystem Research Network (TERN), a key asset to policy makers with regard to the management of the national ecosystem, in fulfilling its government-issued mandates.
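The GF-to-eLAI conversion is a direct inversion of the Beer-Lambert law, GF = exp(-k · eLAI), with the stated extinction coefficient k = 0.5:

```python
import math

def elai_from_gap_fraction(gf, k=0.5):
    """Effective leaf area index from canopy gap fraction by
    inverting the Beer-Lambert law GF = exp(-k * eLAI);
    k = 0.5 is the extinction coefficient used in the abstract."""
    return -math.log(gf) / k

elai = elai_from_gap_fraction(0.5)   # half-open canopy -> eLAI ~ 1.39
```

Note the sensitivity at the dense end: as GF approaches zero the logarithm diverges, so small GF retrieval errors in closed canopies translate into large eLAI errors.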
Estimation of lactic acid bacterial cell number by DNA quantification.
Ishii, Masaki; Matsumoto, Yasuhiko; Sekimizu, Kazuhisa
2018-01-01
Lactic acid bacteria are provided by fermented foods, beverages, medicines, and supplements. Because the beneficial effects of medicines and supplements containing functional lactic acid bacteria are related to the bacterial cell number, it is important to establish a simple method for estimating the total number of lactic acid bacterial cells in such products for quality control. Almost all of the lactic acid bacteria in the products are dead, however, making it difficult to estimate the total cell number using a standard colony-counting method. Here we estimated the total lactic acid bacterial cell number in samples containing dead bacteria by quantifying the DNA. The number of viable Enterococcus faecalis 0831-07 cells decreased to less than 1 × 10⁻⁸ after 15 min of heat treatment at 80°C, whereas the amount of DNA extracted from the heat-treated cells was 78% of that from non-heated cells. The number of viable Lactobacillus paraplantarum 11-1 cells decreased to 1 × 10⁻⁴ after 4 days of culture, yet the amount of extracted DNA was maintained at 97%. These results suggest that the cell number of lactic acid bacteria killed by heat treatment or long-term culture can be estimated by DNA quantification.
Zabik, John M.; Seiber, James N.
1993-01-01
Atmospheric transport of organophosphate pesticides from California's Central Valley to the Sierra Nevada mountains was assessed by collecting air- and wet-deposition samples during December, January, February, and March, 1990 to 1991. Large-scale spraying of these pesticides occurs during December and January to control insect infestations in valley orchards. Sampling sites were placed at 114- (base of the foothills), 533-, and 1920-m elevations. Samples acquired at these sites contained chlorpyrifos [phosphorothioic acid, O,O-diethyl O-(3,5,6-trichloro-2-pyridinyl) ester], parathion [phosphorothioic acid, O,O-diethyl O-(4-nitrophenyl) ester], diazinon {phosphorothioic acid, O,O-diethyl O-[6-methyl-2-(1-methylethyl)-4-pyrimidinyl] ester}, diazinon-oxon {phosphoric acid, O,O-diethyl O-[6-methyl-2-(1-methylethyl)-4-pyrimidinyl] ester}, and paraoxon [phosphoric acid, O,O-diethyl O-(4-nitrophenyl) ester] in both air and wet-deposition samples. Air concentrations of chlorpyrifos, diazinon and parathion ranged from 13 to 13 000 pg/m3 at the base of the foothills. At 533 m, air concentrations ranged from below the limit of quantification (1.4 pg/m3) to 83 pg/m3, and at 1920 m concentrations were below the limit of quantification. Concentrations in wet deposition varied with distance and elevation from the Central Valley. Rainwater concentrations at the base of the foothills ranged from 16 to 7600 pg/mL. At 533 m, rain and snow water concentrations ranged from below the limit of quantification (1.3 pg/mL) to 140 pg/mL, and at 1920 m concentrations ranged from below the limit of quantification to 48 pg/mL. These findings indicate that atmospheric transport of pesticides applied in the valley to the Sierra Nevada mountains is occurring, but the levels decrease as distance and elevation increase from the valley floor.
Frew, John A.; Grue, Christian E.
2012-01-01
The neonicotinoid insecticide imidacloprid (IMI) has been proposed as an alternative to carbaryl for controlling indigenous burrowing shrimp on commercial oyster beds in Willapa Bay and Grays Harbor, Washington. A focus of concern over the use of this insecticide in an aquatic environment is the potential for adverse effects from exposure in non-target species residing in the Bay, such as juvenile Chinook (Oncorhynchus tshawytscha) and cutthroat trout (O. clarki). Federal registration and State permitting approval for the use of IMI will require confirmation that the compound does not adversely impact these salmonids following field applications. This will necessitate an environmental monitoring program for evaluating exposure in salmonids following the treatment of beds. Quantification of IMI residues in tissue can be used to determine salmonid exposure to the insecticide. Refinement of an existing protocol using liquid chromatography-mass spectrometry (LC-MS) detection would provide the low limits of quantification, given the relatively small tissue sample sizes, necessary for determining exposure in individual fish. Such an approach would not be viable for the environmental monitoring effort in Willapa Bay and Grays Harbor, however, due to the high costs associated with running multiple analyses. A new sample preparation protocol was therefore developed for use with a commercially available enzyme-linked immunosorbent assay (ELISA) for the quantification of IMI, providing a low-cost alternative to LC-MS for environmental monitoring in Willapa Bay and Grays Harbor. Extraction of the analyte from the salmonid brain tissue was achieved by Dounce homogenization in 4.0 mL of 20.0 mM Triton X-100, followed by a 6 h incubation at 50-55 °C. Centrifugal ultrafiltration and reversed-phase solid phase extraction were used for sample cleanup.
The limit of quantification for an average 77.0 mg whole-brain sample was calculated at 18.2 μg kg⁻¹ (ppb) with an average recovery of 79%. This relatively low limit of quantification allows for the analysis of individual fish. In controlled laboratory studies, a curvilinear relationship was found between the measured IMI residue concentrations in brain tissue and the exposure concentrations in seawater. Additionally, a range of IMI brain residue concentrations was associated with an overt effect, illustrating the utility of the IMI tissue residue quantification approach for linking exposure with defined effects.
NASA Astrophysics Data System (ADS)
Wang, Audrey; Price, David T.
2007-03-01
A simple integrated algorithm was developed to relate global climatology to the distributions of tree plant functional types (PFTs). Multivariate cluster analysis was performed to analyze the statistical homogeneity of the climate space occupied by individual tree PFTs. Forested regions identified from the satellite-based GLC2000 classification were separated into tropical, temperate, and boreal sub-PFTs for use in the Canadian Terrestrial Ecosystem Model (CTEM). Global data sets of monthly minimum temperature, growing degree days, an index of climatic moisture, and estimated PFT cover fractions were then used as variables in the cluster analysis. The statistical results for individual PFT clusters were consistent with other global-scale classifications of dominant vegetation. Improving on earlier quantifications of the climatic limitations on PFT distributions, the results also demonstrated overlap of PFT cluster boundaries that reflected vegetation transitions, for example between tropical and temperate biomes. The resulting global database should provide a better basis for simulating the interaction of climate change and terrestrial ecosystem dynamics using global vegetation models.
Li, Mingzhu; Senda, Masako; Komatsu, Tsutomu; Suga, Haruhisa; Kageyama, Koji
2010-10-20
Pythium intermedium is known to play an important role in the carbon cycling of cool-temperate forest soils. In this study, a fast, precise and effective real-time PCR technique for estimating the population densities of P. intermedium in soils was developed using species-specific primers. Specificity was confirmed with both conventional PCR and real-time PCR. The detection limit (sensitivity) was determined and amplification standard curves were generated using SYBR Green II fluorescent dye. A rapid and accurate assay for quantification of P. intermedium in Takayama forest soils of Japan was developed by combining a new DNA extraction method with the species-specific primers in real-time PCR. The distribution of P. intermedium in forest soil was then investigated with both the soil plating method and the developed real-time PCR technique. This new technique will be a useful, practical tool for studying the role of Pythium species in forest and agricultural ecosystems. Copyright © 2009 Elsevier GmbH. All rights reserved.
Vieira, Elsa; Soares, M Elisa; Kozior, Marta; Krejpcio, Zbigniew; Ferreira, Isabel M P L V O; Bastos, M Lourdes
2014-09-17
A survey of the presence of total and hexavalent chromium in lager beers was conducted to understand the variability between different styles of lager beer packaged in glass or cans and to estimate daily intake of total Cr and hexavalent chromium from beer. Graphite-furnace atomic absorption spectroscopy using validated methodologies was applied. Selective extraction of hexavalent chromium was performed using a Chromabond NH2/500 mg column and elution with nitric acid. The detection limits were 0.26 and 0.68 μg L(-1) for total Cr and Cr(VI), respectively. The mean content of total Cr ranged between 1.13 μg L(-1) in canned pale lager and 4.32 μg L(-1) in low-alcohol beers, whereas the mean content of Cr(VI) was <2.51 μg L(-1). Considering an intake of 500 mL of beer, beer consumption can contribute approximately 2.28-8.64 and 1.6-6.17% of the recommended daily intake of chromium for women and men, respectively.
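The contribution percentages quoted above follow from simple arithmetic on the reported concentrations and a 500 mL serving. The sketch below reproduces them under the assumption that the recommended daily chromium intakes used were 25 μg for women and 35 μg for men; these reference values are inferred from the reported ranges, not stated in the abstract, and the low end differs slightly from the reported 2.28% due to rounding of the mean concentration.

```python
# Daily Cr intake from beer as a percentage of the recommended daily intake.
# Reference intakes (25/35 ug/day) are assumed values consistent with the
# reported ranges of 2.28-8.64% (women) and 1.6-6.17% (men).
def intake_percent(conc_ug_per_l, volume_l, rdi_ug):
    return 100.0 * conc_ug_per_l * volume_l / rdi_ug

low, high = 1.13, 4.32  # mean total Cr, ug/L (canned pale lager vs low-alcohol beer)
for rdi, group in ((25.0, "women"), (35.0, "men")):
    print(group,
          round(intake_percent(low, 0.5, rdi), 2),
          round(intake_percent(high, 0.5, rdi), 2))
```

The high-end values (8.64% for women, 6.17% for men) match the abstract exactly, supporting the assumed reference intakes.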
Agersnap, Sune; Larsen, William Brenner; Knudsen, Steen Wilhelm; Strand, David; Thomsen, Philip Francis; Hesselsøe, Martin; Mortensen, Peter Bondgaard; Vrålstad, Trude; Møller, Peter Rask
2017-01-01
For several hundred years freshwater crayfish (Crustacea-Decapoda-Astacidea) have played an important ecological, cultural and culinary role in Scandinavia. However, many native populations of noble crayfish Astacus astacus have faced major declines during the last century, largely resulting from human-assisted expansion of the non-indigenous signal crayfish Pacifastacus leniusculus, which carries and transmits the crayfish plague pathogen. In Denmark, the non-indigenous narrow-clawed crayfish Astacus leptodactylus has also expanded due to anthropogenic activities. Knowledge about crayfish distribution and early detection of non-indigenous and invasive species are crucial elements in successful conservation of indigenous crayfish. The use of environmental DNA (eDNA) extracted from water samples is a promising new tool for early and non-invasive detection of species in aquatic environments. In the present study, we have developed and tested quantitative PCR (qPCR) assays for species-specific detection and quantification of the three above-mentioned crayfish species on the basis of mitochondrial cytochrome oxidase 1 (mtDNA-CO1), including separate assays for two clades of A. leptodactylus. The limit of detection (LOD) was experimentally established as 5 copies/PCR with two different approaches, and the limit of quantification (LOQ) was determined to be 5 and 10 copies/PCR, respectively, depending on the chosen approach. The assays detected crayfish in natural freshwater ecosystems with known populations of all three species, and show promising potential for future monitoring of A. astacus, P. leniusculus and A. leptodactylus. However, the assays need further validation with data 1) comparing traditional and eDNA-based estimates of abundance, and 2) representing a broader geographical range for the involved crayfish species.
NASA Astrophysics Data System (ADS)
Nagaraja, Padmarajaiah; Avinash, Krishnegowda; Shivakumar, Anantharaman; Krishna, Honnur
Glomerular filtration rate (GFR), the marker of chronic kidney disease, can be analyzed via the concentration of cystatin C or creatinine and its clearance in human urine and serum samples. The determination of cystatin C alone as an indicator of GFR does not provide high accuracy and is more expensive; thus measurement of creatinine has an important role in estimating GFR. We have attempted to quantify creatinine based on its pseudoenzyme activity in the presence of copper. Creatinine in the presence of copper oxidizes paraphenylenediamine dihydrochloride (PPDD), which couples with dimethylaminobenzoic acid (DMAB) to give a green chromogenic product with maximum absorbance at 710 nm. Kinetic parameters for this reaction were evaluated. Analytical curves of creatinine by the fixed-time and rate methods were linear at 8.8-530 μmol L-1 and 0.221-2.65 mmol L-1, respectively. Recovery of creatinine varied from 97.8 to 107.8%. The limit of detection and limit of quantification were 2.55 and 8.52 μmol L-1, respectively, whereas Sandell's sensitivity and molar absorption coefficient values were 0.0407 μg cm-2 and 0.1427 × 10^4 L mol-1 cm-1, respectively. Precision studies showed that within-day imprecision was 0.745-1.26% and day-to-day imprecision was 1.55-3.65%. The proposed method was applied to human urine and serum samples, and the results were validated against a modified Jaffe procedure. Wide linearity ranges, good recovery, low interference from excipients and applicability to serum and urine samples give this method considerable advantage.
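As an illustration of how the reported molar absorption coefficient translates a measured absorbance into a creatinine concentration, the Beer-Lambert sketch below assumes a 1 cm optical path and an illustrative absorbance value; neither the path length nor the absorbance is taken from the abstract.

```python
# Beer-Lambert sketch: absorbance at 710 nm -> creatinine concentration,
# using the reported molar absorption coefficient (0.1427 x 10^4 L mol^-1 cm^-1).
# A = epsilon * c * l, so c = A / (epsilon * l).
EPSILON = 0.1427e4  # L mol^-1 cm^-1, from the abstract

def creatinine_umol_per_l(absorbance, path_cm=1.0):
    return absorbance / (EPSILON * path_cm) * 1e6  # mol/L -> umol/L

c = creatinine_umol_per_l(0.15)  # 0.15 is an illustrative absorbance
print(round(c, 1))  # → 105.1, within the 8.8-530 umol/L fixed-time linear range
```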
Using airborne laser altimetry to determine fuel models for estimating fire behavior
Carl A. Seielstad; Lloyd P. Queen
2003-01-01
Airborne laser altimetry provides an unprecedented view of the forest floor in timber fuel types and is a promising new tool for fuels assessments. It can be used to resolve two fuel models under closed canopies and may be effective for estimating coarse woody debris loads. A simple metric - obstacle density - provides the necessary quantification of fuel bed roughness...
The report gives results of a first attempt to estimate global and country-specific methane (CH4) emissions from sewers and on-site wastewater treatment systems, including latrines and septic sewage tanks. It follows a report that includes CH4 and nitrous oxide (N2O) estimates fro...
Updated Magmatic Flux Rate Estimates for the Hawaii Plume
NASA Astrophysics Data System (ADS)
Wessel, P.
2013-12-01
Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the 1st order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.
Franquesa, Marcella; Hoogduijn, Martin J.; Ripoll, Elia; Luk, Franka; Salih, Mahdi; Betjes, Michiel G. H.; Torras, Juan; Baan, Carla C.; Grinyó, Josep M.; Merino, Ana Maria
2014-01-01
The research field on extracellular vesicles (EV) has rapidly expanded in recent years due to the therapeutic potential of EV. Adipose tissue human mesenchymal stem cells (ASC) may be a suitable source for therapeutic EV. A major limitation in the field is the lack of standardization of the challenging techniques to isolate and characterize EV. The aim of our study was to incorporate new controls for the detection and quantification of EV derived from ASC and to analyze the applicability and limitations of the available techniques. ASC were cultured in medium supplemented with 5% vesicle-free fetal bovine serum. The EV were isolated from conditioned medium by differential centrifugation with size filtration (0.2 μm). As a control, non-conditioned culture medium was used (control medium). To detect EV, electron microscopy, conventional flow cytometry, and western blot were used. The EV were quantified by total protein quantification, ExoELISA immunoassay, and Nanosight. Cytokines and growth factors in the EV samples were measured with a multiplex bead array kit. The EV were detected by electron microscopy. Total protein measurement was not useful to quantify EV, as the control medium showed similar protein content to the EV samples. The ExoELISA kits had technical troubles, and it was not possible to quantify the concentration of exosomes in the samples. The use of Nanosight enabled quantification and size determination of the EV. It is, however, not possible to distinguish protein aggregates from EV with this method. The technologies for quantification and characterization of the EV need to be improved. In addition, we detected protein contaminants in the EV samples, which make it difficult to determine the real effect of EV in experimental models. It will be crucial in the future to design and optimize novel methods for purification and characterization of EV. PMID:25374572
Kolo, Matthew Tikpangi; Khandaker, Mayeen Uddin; Amin, Yusoff Mohd; Abdullah, Wan Hasiah Binti
2016-01-01
Following the increasing demand of coal for power generation, activity concentrations of primordial radionuclides were determined in Nigerian coal using the gamma spectrometric technique with the aim of evaluating the radiological implications of coal utilization and exploitation in the country. Mean activity concentrations of 226Ra, 232Th, and 40K were 8.18±0.3, 6.97±0.3, and 27.38±0.8 Bq kg-1, respectively. These values were compared with those of similar studies reported in literature. The mean estimated radium equivalent activity was 20.26 Bq kg-1 with corresponding average external hazard index of 0.05. Internal hazard index and representative gamma index recorded mean values of 0.08 and 0.14, respectively. These values were lower than their respective precautionary limits set by UNSCEAR. Average excess lifetime cancer risk was calculated to be 0.04×10-3, which was insignificant compared with 0.05 prescribed by ICRP for low level radiation. Pearson correlation matrix showed significant positive relationship between 226Ra and 232Th, and with other estimated hazard parameters. Cumulative mean occupational dose received by coal workers via the three exposure routes was 7.69 ×10-3 mSv y-1, with inhalation pathway accounting for about 98%. All radiological hazard indices evaluated showed values within limits of safety. There is, therefore, no likelihood of any immediate radiological health hazards to coal workers, final users, and the environment from the exploitation and utilization of Maiganga coal.
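The hazard indices quoted above are standard functions of the three activity concentrations. The sketch below uses the widely cited UNSCEAR/European Commission expressions; the exact coefficient sets used by the authors are an assumption, chosen because they reproduce the reported values (20.26 Bq kg-1, 0.05, 0.08, and 0.14).

```python
# Standard radiological hazard indices from activity concentrations (Bq/kg).
# Coefficients are the commonly used UNSCEAR/EC values (assumed, not stated
# in the abstract, but consistent with the reported results).
def hazard_indices(a_ra, a_th, a_k):
    ra_eq = a_ra + 1.43 * a_th + 0.077 * a_k        # radium equivalent activity
    h_ext = a_ra / 370 + a_th / 259 + a_k / 4810    # external hazard index
    h_int = a_ra / 185 + a_th / 259 + a_k / 4810    # internal hazard index
    i_gamma = a_ra / 150 + a_th / 100 + a_k / 1500  # representative gamma index
    return ra_eq, h_ext, h_int, i_gamma

ra_eq, h_ext, h_int, i_gamma = hazard_indices(8.18, 6.97, 27.38)
print(round(ra_eq, 2), round(h_ext, 2), round(h_int, 2), round(i_gamma, 2))
# → 20.26 0.05 0.08 0.14
```

All four rounded values match the abstract, which supports the assumed coefficient sets.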
NASA Astrophysics Data System (ADS)
Das, Anindita; Cao, Wenrui; Zhang, Hongjie; Saren, Gaowa; Jiang, Mingyu; Yu, Xinke
2017-11-01
Oceanic stretches experiencing perpetual darkness and extreme limitation of utilizable organic matter often rely on chemosynthetic carbon (C) fixation. However, C-fixation is not limited to carbon-deplete environments alone but may also occur to varying degrees in carbon-replete locales, depending on the nature and concentration of utilizable carbon, electron donors and acceptors. Quantification of microbial C-fixation and of the relative contributions of the domains bacteria and archaea is therefore crucial. The present experiment estimates the differential rates of C-fixation by archaea and bacteria, along with the effects of different electron donors. Four Sino-Pacific marine sediments from the Bashi Strait (Western Pacific Warm Pool), the East China Sea, the South China Sea and the Okinawa Trough were examined. Total microbial C-uptake was estimated by doping with aqueous NaH14CO3. Total bacterial C-uptake was measured by blocking archaeal metabolism using the inhibitor GC7. The archaeal contribution was estimated by subtracting total bacterial from total microbial C-uptake. The effect of electron donor addition was analyzed by spiking with ammonium, sulfide, and reduced metals. The results suggested that C-fixation in marine sediments is not a function of archaea alone, in contrast to results from several recent publications; C-fixing bacteria are equally active. Often, despite strong C-fixing activity by one domain, the system does not become net C-fixing, owing to equal and opposite C-releasing activity of the other domain. Thus a C-releasing bacterial or archaeal community can become C-fixing with a change in the nature and concentration of electron donors.
Corwin, Michael T; Seibert, J Anthony; Fananapazir, Ghaneh; Lamba, Ramit; Boone, John M
2016-04-01
The purposes of this study were to correlate fetal z-axis location within the maternal abdomen on CT with gestational age and estimate fetal dose reduction of a study limited to the abdomen only, with its lower aspect at the top of the iliac crests, compared with full abdominopelvic CT in pregnant trauma patients. We performed a study of pregnant patients who underwent CT of the abdomen and pelvis for trauma at a single institution over a 10-year period. The inferior aspect of maternal liver, spleen, gallbladder, pancreas, adrenals, and kidneys was recorded as above or below the iliac crests. The distance from the iliac crest to the top of the fetus or gestational sac was determined. The CT images of the limited and full scanning studies were independently reviewed by two blinded radiologists to identify traumatic injuries. Fetal dose profiles, including both scatter and primary radiation, were computed analytically along the central axis of the patient to estimate fetal dose reduction. Linear regression analysis was performed between gestational age and distance of the fetus to the iliac crests. Thirty-five patients were included (mean age, 26.2 years). Gestational age ranged from 5 to 38 weeks, with 5, 19, and 11 gestations in the first, second, and third trimesters, respectively. All solid organs were above the iliac crests in all patients. In three of six patients, traumatic findings in the pelvis would have been missed with the limited study. There was high correlation between gestational age and distance of the fetus to the iliac crests (R(2) = 0.84). The mean gestational age at which the top of the fetus was at the iliac crest was 17.3 weeks. Using the limited scanning study, fetuses at 5, 20, and 40 weeks of gestation would receive an estimated 4.3%, 26.2%, and 59.9% of the dose, respectively, compared with the dose for the full scanning study. 
In pregnant patients in our series with a history of trauma, CT of the abdomen only was an effective technique to reduce fetal radiation exposure compared with full abdomen and pelvis CT.
Gallizzi, Michael A; Khazai, Ravand S; Gagnon, Christine M; Bruehl, Stephen; Harden, R Norman
2015-03-01
To correlate the amount and types of pain medications prescribed to CRPS patients, quantified using the Medication Quantification Scale, with patients' subjective pain levels. An international, multisite, retrospective review. University medical centers in the United States, Israel, Germany, and the Netherlands. A total of 89 subjects were enrolled from four countries: 27 from the United States, 20 from Germany, 18 from the Netherlands, and 24 from Israel. The main outcome measures were the Medication Quantification Scale III and a numerical analog pain scale. There was no statistically significant correlation between the Medication Quantification Scale and the visual analog scale at any site except for a moderate positive correlation at the German sites. The Medication Quantification Scale mean differences between the United States and Germany, the Netherlands, and Israel were 9.793 (P < 0.002), 10.389 (P < 0.001), and 4.984 (P = 0.303), respectively. There appears to be only a weak correlation between the amount of pain medication prescribed and patients' reported subjective pain intensity within this limited patient population. The Medication Quantification Scale is a viable tool for the analysis of pharmaceutical treatment of CRPS patients and would be useful in further prospective studies of pain medication prescription practices in the CRPS population worldwide. Wiley Periodicals, Inc.
Manibalan, Kesavan; Mani, Veerappan; Chang, Pu-Chieh; Huang, Chih-Hung; Huang, Sheng-Tung; Marchlewicz, Kasper; Neethirajan, Suresh
2017-10-15
Hydrogen sulfide (H2S) was discovered as the third gasotransmitter in biological systems, and recent years have seen growing interest in understanding its physiological and pathological functions. However, one major limiting factor is the lack of robust sensors to quantitatively track its production in real time. We describe a facile electrochemical assay based on a latent redox probe approach for highly specific and sensitive quantification of H2S in living cells. Two chemical probes, Azido Benzyl ferrocene carbamate (ABFC) and N-alkyl Azido Benzyl ferrocene carbamate (NABFC), incorporating an azide trigger group, were designed. H2S molecules specifically trigger the release of reporters from the probes, and the current response was monitored using a graphene oxide film-modified electrode as the transducer. The detection limits are 0.32 µM (ABFC) and 0.076 µM (NABFC), which are comparable to those of current sensitive methods. The probes were successful in the determination of H2S spiked into whole human blood, fetal bovine serum, and E. coli. Continuous monitoring and quantification of endogenous H2S production in E. coli were successfully accomplished. This work lays the first stepping stone toward real-time electrochemical quantification of endogenous H2S in living cells, and thus holds great promise for the analytical aspects of H2S. Copyright © 2017 Elsevier B.V. All rights reserved.
Study of boron detection limit using the in-air PIGE set-up at LAMFI-USP
NASA Astrophysics Data System (ADS)
Moro, M. V.; Silva, T. F.; Trindade, G. F.; Added, N.; Tabacniks, M. H.
2014-11-01
The quantification of small amounts of boron in materials is of extreme importance in different areas of materials science. Boron is an important contaminant and also a silicon dopant in the semiconductor industry. Boron is also extensively used in nuclear power plants, either for neutron shielding or for safety control, and boron is an essential nutrient for life, vegetable or animal. The production of silicon solar cells by refining metallurgical-grade silicon (MG-Si) requires the control and reduction of several silicon contaminants to very low concentration levels. Boron is one of the contaminants of solar-grade silicon (SG-Si) that must be controlled and quantified at sub-ppm levels. In metallurgical purification, boron quantification is usually performed by inductively coupled plasma mass spectrometry (ICP-MS), but the results need to be verified by an independent analytical method. In this work we present the results of the analysis of silicon samples by particle-induced gamma-ray emission (PIGE), aiming at the quantification of low concentrations of boron. PIGE analysis was carried out using the in-air external beam line of the Laboratory for Materials Analysis with Ion Beams (LAMFI-USP), employing the 10B(p,αγ)7Be nuclear reaction and measuring the 429 keV γ-ray. The in-air PIGE measurements at LAMFI have a quantification limit of the order of 10^16 at/cm^2.
Buck, Thomas; Hwang, Shawn M; Plicht, Björn; Mucci, Ronald A; Hunold, Peter; Erbel, Raimund; Levine, Robert A
2008-06-01
Cardiac ultrasound imaging systems are limited in the noninvasive quantification of valvular regurgitation due to indirect measurements and inaccurate hemodynamic assumptions. We recently demonstrated that the principle of integrating backscattered acoustic Doppler power times velocity can be used for flow quantification in valvular regurgitation directly at the vena contracta of a regurgitant flow jet. We now aimed to implement automated Doppler power flow analysis software on a standard cardiac ultrasound system utilizing novel matrix-array transducer technology, with a detailed description of the system requirements, components and software contributing to the system. This system, based on a 3.5 MHz matrix-array cardiac ultrasound scanner (Sonos 5500, Philips Medical Systems), was validated by means of comprehensive experimental signal-generator trials, in vitro flow phantom trials and in vivo testing in 48 patients with mitral regurgitation of different severity and etiology, using magnetic resonance imaging (MRI) for reference. All measurements displayed good correlation to the reference values, indicating successful implementation of automated Doppler power flow analysis on a matrix-array ultrasound imaging system. Systematic underestimation of effective regurgitant orifice areas >0.65 cm(2) and volumes >40 ml was found due to the currently limited Doppler beam width, which could be readily overcome by the use of new-generation 2D matrix-array technology. Automated flow quantification in valvular heart disease based on backscattered Doppler power can thus be fully implemented on board a routinely used matrix-array ultrasound imaging system. Such automated Doppler power flow analysis quantifies valvular regurgitant flow directly, noninvasively, and user-independently, overcoming the practical limitations of current techniques.
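The power-times-velocity principle named above can be sketched in a few lines: backscattered power in each velocity bin is taken as proportional to the cross-sectional area moving at that bin's velocity, so summing power times velocity and dividing by the power returned per unit area yields volumetric flow. The bin values and calibration constant below are illustrative assumptions, not data from the study.

```python
# Sketch of flow quantification by integrating Doppler power times velocity.
# powers: backscattered power per velocity bin (arbitrary units, proportional
#         to the cross-sectional area moving at that bin's velocity)
# velocities: bin velocities in cm/s
# power_per_unit_area: calibration constant (power units per cm^2)
# Returns volumetric flow in cm^3/s (= mL/s).
def flow_rate(powers, velocities, power_per_unit_area):
    return sum(p * v for p, v in zip(powers, velocities)) / power_per_unit_area

# Illustrative jet: 0.4 cm^2 of orifice area spread over three velocity bins.
print(flow_rate([1.0, 2.0, 1.0], [100.0, 200.0, 300.0], 10.0))  # → 80.0
```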
PCR technology for screening and quantification of genetically modified organisms (GMOs).
Holst-Jensen, Arne; Rønning, Sissel B; Løvseth, Astrid; Berdal, Knut G
2003-04-01
Although PCR technology has obvious limitations, the potentially high degree of sensitivity and specificity explains why it has been the first choice of most analytical laboratories interested in detection of genetically modified (GM) organisms (GMOs) and derived materials. Because the products that laboratories receive for analysis are often processed and refined, the quality and quantity of target analyte (e.g. protein or DNA) frequently challenges the sensitivity of any detection method. Among the currently available methods, PCR methods are generally accepted as the most sensitive and reliable methods for detection of GM-derived material in routine applications. The choice of target sequence motif is the single most important factor controlling the specificity of the PCR method. The target sequence is normally a part of the modified gene construct, for example a promoter, a terminator, a gene, or a junction between two of these elements. However, the elements may originate from wildtype organisms, they may be present in more than one GMO, and their copy number may also vary from one GMO to another. They may even be combined in a similar way in more than one GMO. Thus, the choice of method should fit the purpose. Recent developments include event-specific methods, particularly useful for identification and quantification of GM content. Thresholds for labelling are now in place in many countries including those in the European Union. The success of the labelling schemes is dependent upon the efficiency with which GM-derived material can be detected. We will present an overview of currently available PCR methods for screening and quantification of GM-derived DNA, and discuss their applicability and limitations. In addition, we will discuss some of the major challenges related to determination of the limits of detection (LOD) and quantification (LOQ), and to validation of methods.
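The event-specific quantification mentioned above typically expresses the copy number of the event-specific target relative to a taxon-specific reference gene, each read off a real-time PCR standard curve of the form Ct = slope × log10(copies) + intercept. The slope and intercept below are illustrative assumptions (a slope of -3.32 corresponds to 100% amplification efficiency), not parameters from the text.

```python
# Hedged sketch of event-specific GM quantification by real-time PCR.
# Standard curve: Ct = slope * log10(copies) + intercept (illustrative values).
def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    return 10 ** ((ct - intercept) / slope)

# GM content as the ratio of event-specific to taxon-specific copy numbers.
def gm_percent(ct_event, ct_reference):
    return 100.0 * copies_from_ct(ct_event) / copies_from_ct(ct_reference)

# A Ct gap of exactly one slope unit (3.32 cycles) is a tenfold copy difference:
print(round(gm_percent(30.0, 26.68), 1))  # → 10.0
```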
Study for identification of Beneficial uses of Space (BUS). Volume 3: Appendices
NASA Technical Reports Server (NTRS)
1975-01-01
The quantification of required specimen(s) from space processing experiments, the typical EMI measurements and estimates of a typical RF source, and the integration of commercial payloads into spacelab were considered.
Mandal, Kousik; Jyot, Gagan; Singh, Balwinder
2009-12-01
Residues of spinosad were estimated in cauliflower curds using high-performance liquid chromatography (HPLC) and confirmed by high-performance thin-layer chromatography (HPTLC). Following three applications of spinosad (Success 2.5 SC) at 15 and 30 g a.i. ha−1, the average initial deposits of spinosad were 0.57 and 1.34 mg kg−1, respectively. These residues dissipated below the limit of quantification (LOQ) of 0.02 mg kg−1 after 10 days at both dosages. The half-life values (T1/2) of spinosad were calculated to be 1.20 and 1.58 days at the recommended and double the recommended dosages, respectively. Thus, a waiting period of 6 days is suggested for the safe consumption of spinosad-treated cauliflower.
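Assuming the usual first-order (exponential) dissipation model behind the reported half-lives, an assumption not stated explicitly in the abstract, the time for the initial deposits to fall below the LOQ can be estimated as follows:

```python
import math

# First-order dissipation: C(t) = C0 * exp(-k*t), with k = ln(2) / T_half.
# Solving C(t) = LOQ for t gives the estimated days to fall below the LOQ.
def days_to_loq(c0_mg_per_kg, half_life_days, loq=0.02):
    k = math.log(2) / half_life_days
    return math.log(c0_mg_per_kg / loq) / k

print(round(days_to_loq(0.57, 1.20), 1))  # → 5.8 (recommended dosage)
print(round(days_to_loq(1.34, 1.58), 1))  # → 9.6 (double dosage)
```

Both estimates are consistent with residues falling below the LOQ by day 10, and the recommended-dosage estimate of about 5.8 days matches the suggested 6-day waiting period.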
NASA Astrophysics Data System (ADS)
Young, Duncan; Blankenship, Donald; Beem, Lucas; Cavitte, Marie; Quartini, Enrica; Lindzey, Laura; Jackson, Charles; Roberts, Jason; Ritz, Catherine; Siegert, Martin; Greenbaum, Jamin; Frederick, Bruce
2017-04-01
The roughness of subglacial interfaces (as measured by airborne radar echo sounding) at length scales between the profile line spacing and the footprint of the instrument is a key, but complex, signature of glacial and geomorphic processes, material lithology and integrated history at the bed of ice sheets. Subglacial roughness is also intertwined with assessments of ice-thickness uncertainty in radar echo sounding and with the utility of interpolation methodologies, and is a key aspect of subglacial access strategies. Here we present an assessment of subglacial roughness estimation in both West and East Antarctica, and compare it to exposed subglacial terrains. We will use recent high-resolution aerogeophysical surveys to examine what variations in roughness are a fingerprint of, assess the limits of ice-thickness uncertainty quantification, and compare strategies for roughness assessment and utilization.
Analysis of eight glycols in serum using LC-ESI-MS-MS.
Imbert, Laurent; Saussereau, Elodie; Lacroix, Christian
2014-01-01
A liquid chromatography coupled with electrospray tandem mass spectrometry method was developed for the analysis of ethylene glycol, diethylene glycol, triethylene glycol, 1,4-butanediol, 1,2-butanediol, 2,3-butanediol, 1,2-propanediol and 1,3-propanediol, in serum after a Schotten-Baumann derivatization by benzoyl chloride. Usual validation parameters were tested: linearity, repeatability and intermediate precision, limits of detection and quantification, carry over and ion suppression. Limits of detection were between 0.18 and 1.1 mg/L, and limits of quantification were between 0.4 and 2.3 mg/L. Separation of isomers was possible either chromatographically or by selecting specific multiple reaction monitoring transitions. This method could be a useful tool in case of suspected intoxication with antifreeze agents, solvents, dietary supplements or some medical drug compounds. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Da; Zhang, Qibin; Gao, Xiaoli
2014-04-30
We have developed a tool for automated, high-throughput analysis of LC-MS/MS data files, which greatly simplifies LC-MS based lipidomics analysis. Our results showed that LipidMiner is accurate and comprehensive in identification and quantification of lipid molecular species. In addition, the workflow implemented in LipidMiner is not limited to identification and quantification of lipids. If a suitable metabolite library is implemented in the library matching module, LipidMiner could be reconfigured as a tool for general metabolomics data analysis. It is of note that LipidMiner currently is limited to singly charged ions, although this is adequate for the purpose of lipidomics since lipids are rarely multiply charged,[14] even for the polyphosphoinositides. LipidMiner also only processes the file format generated by Thermo mass spectrometers, i.e. the .RAW format. In the future, we plan to accommodate file formats generated by mass spectrometers from other predominant instrument vendors to make this tool more universal.
Gunn, Josh; Kriger, Scott; Terrell, Andrea R
2010-01-01
The simultaneous determination and quantification of cocaine and its major metabolite, benzoylecgonine, in meconium using UPLC-MS/MS is described. Ultra-performance liquid chromatography (UPLC) is an emerging analytical technique which draws upon the principles of chromatography to run separations at higher flow rates for increased speed, while simultaneously achieving superior resolution and sensitivity. Extraction of cocaine and benzoylecgonine from the homogenized meconium matrix was achieved with a preliminary protein precipitation or protein 'crash' employing cold acetonitrile, followed by a mixed mode solid phase extraction (SPE). Following elution from the SPE cartridge, eluents were dried down under nitrogen, reconstituted in 200 microL of DI water:acetonitrile (ACN) (75:25), and injected onto the UPLC/MS/MS for analysis. The increased speed and separation efficiency afforded by UPLC, allowed for the separation and subsequent quantification of both analytes in less than 2 min. Analytes were quantified using multiple reaction monitoring (MRM) and six-point calibration curves constructed in negative blood. Limits of detection for both analytes were 3 ng/g and the lower limit of quantitation (LLOQ) was 30 ng/g.
Costa, Rosaria; Tedone, Laura; De Grazia, Selenia; Dugo, Paola; Mondello, Luigi
2013-04-03
Multiple headspace-solid phase microextraction (MHS-SPME) followed by gas chromatography/mass spectrometry (GC-MS) and flame ionization detection (GC-FID) was applied to the identification and quantification of volatiles released by the mushroom Agaricus bisporus, also known as champignon. MHS-SPME makes it possible to perform quantitative analysis of volatiles from solid matrices free of matrix interferences. Samples analyzed were fresh mushrooms (chopped and homogenized) and mushroom-containing food dressings. 1-Octen-3-ol, 3-octanol, 3-octanone, 1-octen-3-one and benzaldehyde were common constituents of the samples analyzed. Method performance was tested through the evaluation of limit of detection (LoD, range 0.033-0.078 ng), limit of quantification (LoQ, range 0.111-0.259 ng) and analyte recovery (92.3-108.5%). The results obtained showed quantitative differences among the samples, which can be attributed to critical factors discussed here, such as the degree of cell damage during sample preparation. Considerations on mushroom biochemistry and on the basic principles of MHS analysis are also presented. Copyright © 2013 Elsevier B.V. All rights reserved.
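The MHS quantification principle underlying the abstract above — extrapolating the geometric decay of peak areas over successive headspace extractions to the total, matrix-free analyte amount — can be sketched in Python. This is an illustrative sketch of the standard MHS relation (total area = A₁/(1−β), with β estimated from the first two extractions); the function names and the response-factor calibration step are assumptions, not the authors' code.

```python
def mhs_total_area(areas):
    """Estimate the total (exhaustive-extraction) peak area from the
    first two consecutive MHS extraction steps, assuming the areas
    decay geometrically: A_i = A_1 * beta**(i - 1)."""
    a1, a2 = areas[0], areas[1]
    beta = a2 / a1
    if not 0.0 < beta < 1.0:
        raise ValueError("successive extraction areas must decrease")
    # Sum of the geometric series A_1 * (1 + beta + beta**2 + ...)
    return a1 / (1.0 - beta)


def mhs_amount(areas, response_factor):
    """Convert the extrapolated total area to an analyte amount
    using an external calibration response factor (area per ng)."""
    return mhs_total_area(areas) / response_factor
```

For example, successive areas of 1000 and 500 give β = 0.5 and an extrapolated total area of 2000, which a response factor then converts to nanograms.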
Jain, Rajeev; Sinha, Ankita; Khan, Ab Lateef
2016-08-01
A novel polyaniline-graphene oxide nanocomposite (PANI/GO/GCE) sensor has been fabricated for quantification of the calcium channel blocker drug levamlodipine (LAMP). The fabricated sensor has been characterized by electrochemical impedance spectroscopy, square wave and cyclic voltammetry, Raman spectroscopy and Fourier transform infrared (FTIR) spectroscopy. The developed PANI/GO/GCE sensor has excellent analytical performance towards electrocatalytic oxidation compared to PANI/GCE, GO/GCE and bare GCE. Under optimized experimental conditions, the fabricated sensor exhibits a linear response for LAMP oxidation over a concentration range from 1.25 μg mL(-1) to 13.25 μg mL(-1) with a correlation coefficient of 0.9950 (r(2)), a detection limit of 1.07 ng mL(-1) and a quantification limit of 3.57 ng mL(-1). The sensor shows excellent performance for detecting LAMP with a reproducibility of 2.78% relative standard deviation (RSD). The proposed method has been successfully applied to LAMP determination in a pharmaceutical formulation with recoveries from 99.88% to 101.75%. Copyright © 2015 Elsevier B.V. All rights reserved.
Simple detection of residual enrofloxacin in meat products using microparticles and biochips.
Ha, Mi-Sun; Chung, Myung-Sub; Bae, Dong-Ho
2016-05-01
A simple and sensitive method for detecting enrofloxacin, a major veterinary fluoroquinolone, was developed. Monoclonal antibody specific for enrofloxacin was immobilised on a chip and fluorescent dye-labelled microparticles were covalently bound to the enrofloxacin molecules. Enrofloxacin in solution competes with the microparticle-immobilised enrofloxacin (enroMPs) to bind to the antibody on the chip. The presence of enrofloxacin was verified by detecting the fluorescence of enrofloxacin-bound microparticles. Under optimum conditions, a high dynamic range was achieved at enrofloxacin concentrations ranging from 1 to 1000 μg kg(-1). The limits of detection and quantification for standard solutions were 5 and 20 μg kg(-1) respectively, which are markedly lower than the maximum residue limit. Using simple extraction methods, recoveries from fortified beef, pork and chicken samples were 43.4-62.3%. This novel method also enabled approximate quantification of enrofloxacin concentration: the enroMP signal intensity decreased with increasing enrofloxacin concentration. Because of its sensitivity, specificity, simplicity and rapidity, the method described herein will facilitate the detection and approximate quantification of enrofloxacin residues in foods in a high-throughput manner.
An Uncertainty Quantification Framework for Prognostics and Condition-Based Monitoring
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar; Goebel, Kai
2014-01-01
This paper presents a computational framework for uncertainty quantification in prognostics in the context of condition-based monitoring of aerospace systems. The different sources of uncertainty and the various uncertainty quantification activities in condition-based prognostics are outlined in detail, and it is demonstrated that the Bayesian subjective approach is suitable for interpreting uncertainty in online monitoring. A state-space model-based framework for prognostics, that can rigorously account for the various sources of uncertainty, is presented. Prognostics consists of two important steps. First, the state of the system is estimated using Bayesian tracking, and then, the future states of the system are predicted until failure, thereby computing the remaining useful life of the system. The proposed framework is illustrated using the power system of a planetary rover test-bed, which is being developed and studied at NASA Ames Research Center.
Staack, Roland F; Jordan, Gregor; Heinrich, Julia
2012-02-01
For every drug development program, it must be discussed whether discrimination between free and total drug concentrations is required to accurately describe the drug's pharmacokinetic behavior. This perspective describes the application of mathematical simulation approaches to guide this initial decision based on available knowledge about target biology, binding kinetics and expected drug concentrations. We provide generic calculations that can be used to estimate the necessity of free drug quantification for different drug molecules. In addition, mathematical approaches are used to simulate various assay conditions in bioanalytical ligand-binding assays: it is demonstrated that, due to the noncovalent interaction between the binding partners and typical assay-related interferences in the equilibrium, correct quantification of the free drug concentration is highly challenging and requires careful design of the different assay procedure steps.
Quantifying differences in land use emission estimates implied by definition discrepancies
NASA Astrophysics Data System (ADS)
Stocker, B. D.; Joos, F.
2015-11-01
The quantification of CO2 emissions from anthropogenic land use and land use change (eLUC) is essential to understand the drivers of the atmospheric CO2 increase and to inform climate change mitigation policy. Reported values in synthesis reports are commonly derived from different approaches (observation-driven bookkeeping and process-modelling) but recent work has emphasized that inconsistencies between methods may imply substantial differences in eLUC estimates. However, a consistent quantification is lacking and no concise modelling protocol for the separation of primary and secondary components of eLUC has been established. Here, we review differences of eLUC quantification methods and apply an Earth System Model (ESM) of Intermediate Complexity to quantify them. We find that the magnitude of effects due to merely conceptual differences between ESM and offline vegetation model-based quantifications is ~ 20 % for today. Under a future business-as-usual scenario, differences tend to increase further due to slowing land conversion rates and an increasing impact of altered environmental conditions on land-atmosphere fluxes. We establish how coupled Earth System Models may be applied to separate secondary component fluxes of eLUC arising from the replacement of potential C sinks/sources and the land use feedback and show that secondary fluxes derived from offline vegetation models are conceptually and quantitatively not identical to either, nor their sum. Therefore, we argue that synthesis studies should resort to the "least common denominator" of different methods, following the bookkeeping approach where only primary land use emissions are quantified under the assumption of constant environmental boundary conditions.
Redruello, Begoña; Ladero, Victor; Cuesta, Isabel; Álvarez-Buylla, Jorge R; Martín, María Cruz; Fernández, María; Alvarez, Miguel A
2013-08-15
Derivatisation treatment with diethyl ethoxymethylenemalonate followed by ultra-HPLC allowed the simultaneous quantification of 22 amino acids, 7 biogenic amines and ammonium ions in cheese samples in under 10 min. This is the fastest elution time ever reported for such a resolution. The proposed method shows good linearity (R(2)>0.995) and sensitivity (detection limit 0.08-3.91 μM; quantification limit <13.02 μM). Intra- and inter-day repeatability ranged from 0.35% to 1.25% and from 0.85% to 5.2%, respectively. No significant effect of the cheese matrix was observed. Copyright © 2013 Elsevier Ltd. All rights reserved.
Heckman, Katherine M; Otemuyiwa, Bamidele; Chenevert, Thomas L; Malyarenko, Dariya; Derstine, Brian A; Wang, Stewart C; Davenport, Matthew S
2018-06-27
The purpose of the study is to determine whether a novel semi-automated DIXON-based fat quantification algorithm can reliably quantify visceral fat using a CT-based reference standard. This was an IRB-approved retrospective cohort study of 27 subjects who underwent abdominopelvic CT within 7 days of proton density fat fraction (PDFF) mapping on a 1.5T MRI. Cross-sectional visceral fat area per slice (cm²) was measured in blinded fashion in each modality at intervertebral disc levels from T12 to L4. CT estimates were obtained using a previously published semi-automated computational image processing system that sums pixels with attenuation -205 to -51 HU. MR estimates were obtained using two novel semi-automated DIXON-based fat quantification algorithms that measure visceral fat area by spatially regularizing non-uniform fat-only signal intensity or de-speckling PDFF 2D images and summing pixels with PDFF ≥ 50%. Pearson's correlations and Bland-Altman analyses were performed. Visceral fat area per slice ranged from 9.2 to 429.8 cm² for MR and from 1.6 to 405.5 cm² for CT. There was a strong correlation between CT and MR methods in measured visceral fat area across all studied vertebral body levels (r = 0.97; n = 101 observations); the lowest correlation (r = 0.93) was at T12. Bland-Altman analysis revealed a bias of 31.7 cm² (95% CI: -27.1 to 90.4 cm²), indicating modestly higher visceral fat assessed by MR. MR- and CT-based visceral fat quantification are highly correlated and have good cross-modality reliability, indicating that visceral fat quantification by either method can yield a stable and reliable biomarker.
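The Bland-Altman comparison used in the abstract above — the mean difference (bias) between paired measurements and its 95% limits of agreement — can be sketched as a minimal Python function. The data and names below are illustrative, not the study's.

```python
import statistics


def bland_altman(series_a, series_b):
    """Bland-Altman agreement analysis for two paired measurement
    series. Returns the bias (mean of the pairwise differences) and
    the 95% limits of agreement, bias ± 1.96 * SD of the differences."""
    diffs = [a - b for a, b in zip(series_a, series_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A positive bias, as in the study (31.7 cm² for MR minus CT), indicates the first series reads systematically higher than the second.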
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballhausen, Hendrik, E-mail: hendrik.ballhausen@med.uni-muenchen.de; Hieber, Sheila; Li, Minglun
2014-08-15
Purpose: To identify the relevant technical sources of error of a system based on three-dimensional ultrasound (3D US) for patient positioning in external beam radiotherapy. To quantify these sources of error in a controlled laboratory setting. To estimate the resulting end-to-end geometric precision of the intramodality protocol. Methods: Two identical free-hand 3D US systems at both the planning-CT and the treatment room were calibrated to the laboratory frame of reference. Every step of the calibration chain was repeated multiple times to estimate its contribution to overall systematic and random error. Optimal margins were computed given the identified and quantified systematic and random errors. Results: In descending order of magnitude, the identified and quantified sources of error were: alignment of calibration phantom to laser marks 0.78 mm, alignment of lasers in treatment vs planning room 0.51 mm, calibration and tracking of 3D US probe 0.49 mm, alignment of stereoscopic infrared camera to calibration phantom 0.03 mm. Under ideal laboratory conditions, these errors are expected to limit ultrasound-based positioning to an accuracy of 1.05 mm radially. Conclusions: The investigated 3D ultrasound system achieves an intramodal accuracy of about 1 mm radially in a controlled laboratory setting. The identified systematic and random errors require an optimal clinical tumor volume to planning target volume margin of about 3 mm. These inherent technical limitations do not prevent clinical use, including hypofractionation or stereotactic body radiation therapy.
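The quoted 1.05 mm radial accuracy is consistent with combining the four identified error sources in quadrature (root-sum-of-squares), which assumes the sources are independent. A minimal sketch reproducing that arithmetic:

```python
import math


def combine_in_quadrature(errors_mm):
    """Combine independent error sources by root-sum-of-squares."""
    return math.sqrt(sum(e * e for e in errors_mm))


# The four error sources identified in the laboratory study (mm):
sources = {
    "phantom-to-laser alignment": 0.78,
    "laser alignment, treatment vs planning room": 0.51,
    "3D US probe calibration and tracking": 0.49,
    "infrared camera to phantom": 0.03,
}
radial_error = combine_in_quadrature(sources.values())  # ≈ 1.05 mm
```

Note that the 0.03 mm camera term contributes almost nothing once squared, which is why the three alignment/calibration terms dominate the budget.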
Quantitative Method for Simultaneous Analysis of Acetaminophen and 6 Metabolites.
Lammers, Laureen A; Achterbergh, Roos; Pistorius, Marcel C M; Romijn, Johannes A; Mathôt, Ron A A
2017-04-01
Hepatotoxicity after ingestion of high-dose acetaminophen [N-acetyl-para-aminophenol (APAP)] is caused by the metabolites of the drug. To gain more insight into factors influencing susceptibility to APAP hepatotoxicity, quantification of APAP and metabolites is important. A few methods have been developed to simultaneously quantify APAP and its most important metabolites. However, these methods require a comprehensive sample preparation and long run times. The aim of this study was to develop and validate a simplified, but sensitive method for the simultaneous quantification of acetaminophen, the main metabolites acetaminophen glucuronide and acetaminophen sulfate, and 4 Cytochrome P450-mediated metabolites by using liquid chromatography with mass spectrometric (LC-MS) detection. The method was developed and validated for the human plasma, and it entailed a single method for sample preparation, enabling quick processing of the samples followed by an LC-MS method with a chromatographic run time of 9 minutes. The method was validated for selectivity, linearity, accuracy, imprecision, dilution integrity, recovery, process efficiency, ionization efficiency, and carryover effect. The method showed good selectivity without matrix interferences. For all analytes, the mean process efficiency was >86%, and the mean ionization efficiency was >94%. Furthermore, the accuracy was between 90.3% and 112% for all analytes, and the within- and between-run imprecision were <20% for the lower limit of quantification and <14.3% for the middle level and upper limit of quantification. The method presented here enables the simultaneous quantification of APAP and 6 of its metabolites. It is less time consuming than previously reported methods because it requires only a single and simple method for the sample preparation followed by an LC-MS method with a short run time. Therefore, this analytical method provides a useful method for both clinical and research purposes.
Wang, Tao; Liu, Tingting; Wang, Zejian; Tian, Xiwei; Yang, Yi; Guo, Meijin; Chu, Ju; Zhuang, Yingping
2016-05-01
Rapid, real-time lipid determination can provide valuable information on process regulation and optimization in algal lipid mass production. In this study, a rapid, accurate and precise quantification method for in vivo cellular lipids of Chlorella protothecoides using low field nuclear magnetic resonance (LF-NMR) was newly developed. LF-NMR was extremely sensitive to the algal lipids, with limits of detection (LOD) of 0.0026 g and 0.32 g/L in dry lipid samples and algal broth, respectively, and limits of quantification (LOQ) of 0.0093 g and 1.18 g/L. Moreover, the LF-NMR signal was specifically proportional to the cellular lipids of C. protothecoides; superior regression curves were thus obtained over a wide detection range, from 0.02 to 0.42 g for dry lipids and from 1.12 to 8.97 g/L of lipid concentration for in vivo lipid quantification, all with R(2) higher than 0.99, irrespective of variations in lipid content and fatty acid profile. The accuracy of this novel method was further verified by comparing the lipid quantification results to those obtained by GC-MS. The relative standard deviations (RSD) of the LF-NMR results were smaller than 2%, demonstrating the precision of the method. Finally, this method was successfully used for on-line lipid monitoring during algal lipid fermentation processes, enabling a better understanding of the lipid accumulation mechanism and dynamic bioprocess control. Copyright © 2016 Elsevier B.V. All rights reserved.
Clais, S; Boulet, G; Van Kerckhoven, M; Lanckacker, E; Delputte, P; Maes, L; Cos, P
2015-01-01
The viable plate count (VPC) is considered as the reference method for bacterial enumeration in periodontal microbiology but shows some important limitations for anaerobic bacteria. As anaerobes such as Porphyromonas gingivalis are difficult to culture, VPC becomes time-consuming and less sensitive. Hence, efficient normalization of experimental data to bacterial cell count requires alternative rapid and reliable quantification methods. This study compared the performance of VPC with that of turbidity measurement and real-time PCR (qPCR) in an experimental context using highly concentrated bacterial suspensions. Our TaqMan-based qPCR assay for P. gingivalis 16S rRNA proved to be sensitive and specific. Turbidity measurements offer a fast method to assess P. gingivalis growth, but suffer from high variability and a limited dynamic range. VPC was very time-consuming and less repeatable than qPCR. Our study concludes that qPCR provides the most rapid and precise approach for P. gingivalis quantification. Although our data were gathered in a specific research context, we believe that our conclusions on the inferior performance of VPC and turbidity measurements in comparison to qPCR can be extended to other research and clinical settings and even to other difficult-to-culture micro-organisms. Various clinical and research settings require fast and reliable quantification of bacterial suspensions. The viable plate count method (VPC) is generally seen as 'the gold standard' for bacterial enumeration. However, VPC-based quantification of anaerobes such as Porphyromonas gingivalis is time-consuming due to their stringent growth requirements and shows poor repeatability. Comparison of VPC, turbidity measurement and TaqMan-based qPCR demonstrated that qPCR possesses important advantages regarding speed, accuracy and repeatability. © 2014 The Society for Applied Microbiology.
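The abstract above does not give the calibration details of the TaqMan assay, but qPCR quantification is conventionally done via a standard curve of Cq against log10 copy number, with the amplification efficiency derived from the slope. A hedged Python sketch of that generic procedure (all names and numbers are illustrative, not from the study):

```python
def fit_standard_curve(log10_copies, cq):
    """Ordinary least-squares fit of Cq = slope * log10(copies) + intercept,
    the conventional qPCR standard curve."""
    n = len(cq)
    mx = sum(log10_copies) / n
    my = sum(cq) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, cq))
    sxx = sum((x - mx) ** 2 for x in log10_copies)
    slope = sxy / sxx
    return slope, my - slope * mx


def amplification_efficiency(slope):
    """E = 10**(-1/slope) - 1; a perfectly efficient reaction (template
    doubling each cycle) gives slope ≈ -3.32 and E ≈ 1.0 (100%)."""
    return 10.0 ** (-1.0 / slope) - 1.0


def copies_from_cq(cq, slope, intercept):
    """Invert the standard curve to quantify an unknown sample."""
    return 10.0 ** ((cq - intercept) / slope)
```

In practice the curve is fitted over serial dilutions of a quantified standard, and unknowns are interpolated only within that calibrated range.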
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can, in fact, be hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and imperfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not fully reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error, and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A fully automatic workflow is developed in an open-source code [1] that includes rigid-body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, and extrapolation and post-processing techniques.
The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, robust estimation of Representative Elementary Volume size for arbitrary physics.
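The telescoping-sum idea behind multilevel Monte Carlo — estimating E[P_L] as E[P_0] plus level-by-level corrections E[P_l − P_{l−1}], with many samples on cheap coarse levels and few on expensive fine levels — can be sketched generically. The sampler interface and level handling below are illustrative assumptions, not the cited open-source workflow:

```python
import random


def mlmc_estimate(sampler, levels, samples_per_level, seed=0):
    """Multilevel Monte Carlo estimator of E[P_L]:
        E[P_0] + sum_l E[P_l - P_{l-1}].
    Each correction term is sampled on a coarse/fine pair driven by the
    SAME random input, so the pair is strongly coupled and the correction
    has small variance (which is what allows few fine-level samples).
    `sampler(level, u)` returns the quantity of interest computed at that
    discretization level for random input u in [0, 1)."""
    rng = random.Random(seed)
    total = 0.0
    for level, n in zip(levels, samples_per_level):
        acc = 0.0
        for _ in range(n):
            u = rng.random()  # shared input for the coupled pair
            fine = sampler(level, u)
            coarse = sampler(level - 1, u) if level > levels[0] else 0.0
            acc += fine - coarse
        total += acc / n
    return total
```

With a deterministic toy sampler whose output converges geometrically in the level, the telescoping sum recovers the finest-level value exactly.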
Sandra, Koen; Mortier, Kjell; Jorge, Lucie; Perez, Luis C; Sandra, Pat; Priem, Sofie; Poelmans, Sofie; Bouche, Marie-Paule
2014-05-01
Nanobodies(®) are therapeutic proteins derived from the smallest functional fragments of heavy chain-only antibodies. The development and validation of an LC-MS/MS-based method for the quantification of an IgE binding Nanobody in cynomolgus monkey plasma is presented. Nanobody quantification was performed making use of a proteotypic tryptic peptide chromatographically enriched prior to LC-MS/MS analysis. The validated LLOQ at 36 ng/ml was measured with an intra- and inter-assay precision and accuracy <20%. The required sensitivity could be obtained based on the selectivity of 2D LC combined with MS/MS. No analyte specific tools for affinity purification were used. Plasma samples originating from a PK/PD study were analyzed and compared with the results obtained with a traditional ligand-binding assay. Excellent correlations between the two techniques were obtained, and similar PK parameters were estimated. A 2D LC-MS/MS method was successfully developed and validated for the quantification of a next generation biotherapeutic.
Tiryaki, Osman
2016-10-02
This study was undertaken to validate the "quick, easy, cheap, effective, rugged and safe" (QuEChERS) method using Golden Delicious and Starking Delicious apple matrices spiked at 0.1 maximum residue limit (MRL), 1.0 MRL and 10 MRL levels of four pesticides (chlorpyrifos, dimethoate, indoxacarb and imidacloprid). For the extraction and cleanup, the original QuEChERS method was followed; the samples were then subjected to liquid chromatography-triple quadrupole mass spectrometry (LC-MS/MS) for chromatographic analyses. According to the t test, the matrix effect was not significant for chlorpyrifos in either sample matrix, but it was significant for dimethoate, indoxacarb and imidacloprid in both sample matrices. Thus, matrix-matched calibration (MC) was used to compensate for the matrix effect, and quantifications were carried out using MC. The overall recovery of the method was 90.15% with a relative standard deviation of 13.27% (n = 330). Estimated method detection limits of the analytes were below the MRLs. Other method validation parameters, such as recovery, precision, accuracy and linearity, were found to be within the required ranges.
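Matrix-matched calibration, as used above, quantifies against standards prepared in blank matrix, and the matrix effect itself can be expressed as the percentage difference between the matrix-matched and solvent calibration slopes. A minimal Python sketch of that generic procedure (illustrative names and data, not the study's code):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx


def matrix_effect_pct(slope_matrix, slope_solvent):
    """Signal suppression (negative) or enhancement (positive) as the
    percentage difference between matrix-matched and solvent slopes."""
    return 100.0 * (slope_matrix - slope_solvent) / slope_solvent


def quantify(peak_area, slope, intercept):
    """Concentration of an unknown from the matrix-matched line."""
    return (peak_area - intercept) / slope
```

A matrix effect near zero means solvent calibration would suffice; a large suppression or enhancement, as found here for three of the four pesticides, is what forces matrix-matched quantification.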
The relationship between carbohydrate and the mealtime insulin dose in type 1 diabetes.
Bell, Kirstine J; King, Bruce R; Shafat, Amir; Smart, Carmel E
2015-01-01
A primary focus of the nutritional management of type 1 diabetes has been on matching prandial insulin therapy with carbohydrate amount consumed. Different methods exist to quantify carbohydrate including counting in one gram increments, 10g portions or 15g exchanges. Clinicians have assumed that counting in one gram increments is necessary to precisely dose insulin and optimize postprandial control. Carbohydrate estimations in portions or exchanges have been thought of as inadequate because they may result in less precise matching of insulin dose to carbohydrate amount. However, studies examining the impact of errors in carbohydrate quantification on postprandial glycemia challenge this commonly held view. In addition it has been found that a single mealtime bolus of insulin can cover a range of carbohydrate intake without deterioration in postprandial control. Furthermore, limitations exist in the accuracy of the nutrition information panel on a food label. This article reviews the relationship between carbohydrate quantity and insulin dose, highlighting limitations in the evidence for a linear association. These insights have significant implications for patient education and mealtime insulin dose calculations. Copyright © 2015 Elsevier Inc. All rights reserved.
Suganthi, A.; John, Sofiya; Ravi, T. K.
2008-01-01
A simple, precise, sensitive, rapid and reproducible HPTLC method for the simultaneous estimation of rabeprazole and itopride hydrochloride in tablets was developed and validated. This method involves separation of the components by TLC on a precoated silica gel G60F254 plate with a solvent system of n-butanol, toluene and ammonia (8.5:0.5:1 v/v/v); detection was carried out densitometrically using a UV detector at 288 nm in absorbance mode. This system was found to give compact spots for rabeprazole (Rf value of 0.23±0.02) and for itopride hydrochloride (Rf value of 0.75±0.02). Linearity was found to be in the range of 40-200 ng/spot and 300-1500 ng/spot for rabeprazole and itopride hydrochloride, respectively. The limit of detection and limit of quantification for rabeprazole were 10 and 20 ng/spot and for itopride hydrochloride were 50 and 100 ng/spot, respectively. The method was found to be suitable for the routine analysis of the combined dosage form. PMID:20046748
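The abstract reports LOD and LOQ directly; one common way such limits are estimated from a calibration experiment is the ICH-style formulas LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the standard deviation of the response (e.g. residual SD of the calibration line) and S the calibration slope. A minimal sketch — the formula choice is an assumption, since the paper may equally have used a signal-to-noise criterion:

```python
def lod_loq(sigma, slope):
    """ICH Q2(R1)-style detection and quantification limits from the
    response standard deviation (sigma) and the calibration slope:
        LOD = 3.3 * sigma / S,   LOQ = 10 * sigma / S."""
    return 3.3 * sigma / slope, 10.0 * sigma / slope
```

For example, a response SD of 6.0 area units and a slope of 2.0 area units per ng/spot give LOD ≈ 9.9 ng/spot and LOQ = 30 ng/spot; note that the 2× LOD-to-LOQ ratio reported above suggests a different (likely S/N-based) criterion was used there.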
Wong, Koon-Pong; Zhang, Xiaoli; Huang, Sung-Cheng
2013-01-01
Purpose Accurate determination of the plasma input function (IF) is essential for absolute quantification of physiological parameters in positron emission tomography (PET). However, it requires an invasive and tedious procedure of arterial blood sampling that is challenging in mice because of the limited blood volume. In this study, a hybrid modeling approach is proposed to estimate the plasma IF of 2-deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) in mice using accumulated radioactivity in urinary bladder together with a single late-time blood sample measurement. Methods Dynamic PET scans were performed on nine isoflurane-anesthetized male C57BL/6 mice after a bolus injection of [18F]FDG at the lateral caudal vein. During a 60- or 90-min scan, serial blood samples were taken from the femoral artery. Image data were reconstructed using filtered backprojection with CT-based attenuation correction. Total accumulated radioactivity in the urinary bladder was fitted to a renal compartmental model with the last blood sample and a 1-exponential function that described the [18F]FDG clearance in blood. Multiple late-time blood sample estimates were calculated by the blood [18F]FDG clearance equation. A sum of 4-exponentials was assumed for the plasma IF that served as a forcing function to all tissues. The estimated plasma IF was obtained by simultaneously fitting the [18F]FDG model to the time-activity curves (TACs) of liver and muscle and the forcing function to early (0–1 min) left-ventricle data (corrected for delay, dispersion, partial-volume effects and erythrocytes uptake) and the late-time blood estimates. Using only the blood sample acquired at the end of the study to estimate the IF and the use of liver TAC as an alternative IF were also investigated. Results The area under the plasma TACs calculated for all studies using the hybrid approach was not significantly different from that using all blood samples. 
[18F]FDG uptake constants in brain, myocardium, skeletal muscle and liver computed by the Patlak analysis using estimated and measured plasma TACs were in excellent agreement (slope ~ 1; R2 > 0.938). The IF estimated using only the last blood sample acquired at the end of the study and the use of liver TAC as plasma IF provided less reliable results. Conclusions The estimated plasma IFs obtained with the hybrid model agreed well with those derived from arterial blood sampling. Importantly, the proposed method obviates the need of arterial catheterization, making it possible to perform repeated dynamic [18F]FDG PET studies on the same animal. Liver TAC is unsuitable as an input function for absolute quantification of [18F]FDG PET data. PMID:23322346
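The Patlak graphical analysis used above to compute the uptake constants can be sketched generically: for an irreversibly trapped tracer such as [18F]FDG, the plot of C_T(t)/C_p(t) against (∫₀ᵗ C_p dτ)/C_p(t) becomes linear at late times, and its slope is the uptake constant Ki. An illustrative Python implementation using trapezoidal integration and ordinary least squares (not the authors' code; inputs are assumed to cover only the linear, steady-state portion of the scan):

```python
def patlak_ki(t, ct, cp):
    """Patlak plot slope (uptake constant Ki) from sampled time points
    t, tissue activity ct = C_T(t), and plasma activity cp = C_p(t).
    Builds x = integral(C_p)/C_p and y = C_T/C_p, then fits y = Ki*x + V
    by ordinary least squares and returns Ki."""
    x, y = [], []
    integral = 0.0
    for i in range(1, len(t)):
        # Trapezoidal accumulation of the plasma integral.
        integral += 0.5 * (cp[i] + cp[i - 1]) * (t[i] - t[i - 1])
        x.append(integral / cp[i])
        y.append(ct[i] / cp[i])
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx  # slope = Ki
```

The intercept of the same fit (not returned here) estimates the apparent distribution volume V; only the slope enters the uptake-constant comparison described in the study.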
This study focused on the quantification of leakage of sanitary and industrial sewage from sanitary sewer pipes on a national basis. The method for estimating exfiltration amounts utilized groundwater talbe information to identify areas of the country where the hydraulic gradient...
Overview of Brain Microdialysis
Chefer, Vladimir I.; Thompson, Alexis C.; Zapata, Agustin; Shippenberg, Toni S.
2010-01-01
The technique of microdialysis enables sampling and collecting of small-molecular-weight substances from the interstitial space. It is a widely used method in neuroscience and is one of the few techniques available that permits quantification of neurotransmitters, peptides, and hormones in the behaving animal. More recently, it has been used in tissue preparations for quantification of neurotransmitter release. This unit provides a brief review of the history of microdialysis and its general application in the neurosciences. The authors review the theoretical principles underlying the microdialysis process, methods available for estimating extracellular concentration from dialysis samples (i.e., relative recovery), the various factors that affect the estimate of in vivo relative recovery, and the importance of determining in vivo relative recovery to data interpretation. Several areas of special note, including impact of tissue trauma on the interpretation of microdialysis results, are discussed. Step-by-step instructions for the planning and execution of conventional and quantitative microdialysis experiments are provided. PMID:19340812
Cost-benefit analysis: HIV/AIDS prevention among migrants in Central America [Análisis de costo-beneficio: prevención del VIH/sida en migrantes en Centroamérica]
Alarid-Escudero, Fernando; Sosa-Rubí, Sandra G.; Fernández, Bertha; Galárraga, Omar
2014-01-01
Objective To quantify the costs and benefits of three HIV prevention interventions in migrants in Central America: voluntary counseling and testing, treatment of sexually transmitted infections, and condom distribution. Materials and methods The methods were: a) identification and quantification of costs; b) quantification of benefits, defined as the potential savings in antiretroviral treatment of HIV cases prevented; and c) estimation of the cost-benefit ratio. Results The model estimated that 9, 21 and 8 cases of HIV were prevented by voluntary counseling and testing, treatment for sexually transmitted infections and condom distribution per 10 000 migrants, respectively. In Panama, condom distribution and treatment for sexually transmitted infections had a return of US$131/USD and US$69.8/USD. Returns in El Salvador were US$2.0/USD and US$42.3/USD in voluntary counseling and testing and condom distribution, respectively. Conclusion The potential savings on prevention have a large variation between countries. Nevertheless, the cost-benefit estimates suggest that the HIV prevention programs in Central America can potentially result in monetary savings in the long run. PMID:23918053
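The cost-benefit logic of the study reduces to a few lines of arithmetic: benefits are the antiretroviral treatment costs averted by prevented cases, and the ratio expresses savings per dollar spent. A toy sketch with hypothetical cost figures (only the cases-prevented count comes from the abstract; the two cost values are invented):

```python
# Toy cost-benefit calculation mirroring the paper's logic.
cases_prevented_per_10k = 8          # condom distribution (from the abstract)
lifetime_art_cost_usd = 100_000.0    # assumed discounted ART cost averted per case
program_cost_usd = 6_000.0           # assumed program cost per 10,000 migrants

benefits = cases_prevented_per_10k * lifetime_art_cost_usd
cost_benefit_ratio = benefits / program_cost_usd   # US$ saved per US$ spent
```

The large between-country variation reported in the abstract corresponds to how strongly these inputs (local ART costs, program costs, cases prevented) differ across settings.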
The Use of Satellite Remote Sensing in Epidemiological Studies
Sorek-Hamer, Meytar; Just, Allan C.; Kloog, Itai
2016-01-01
Purpose of review Particulate matter (PM) air pollution is a ubiquitous exposure linked with multiple adverse health outcomes for children and across the life course. The recent development of satellite based remote sensing models for air pollution enables the quantification of these risks and addresses many limitations of previous air pollution research strategies. We review the recent literature on the applications of satellite remote sensing in air quality research, with a focus on their use in epidemiological studies. Recent findings Aerosol optical depth (AOD) is a focus of this review and a significant number of studies show that ground-level PM can be estimated from columnar AOD. Satellite measurements have been found to be an important source of data for PM model-based exposure estimates, and recently have been used in health studies to increase the spatial breadth and temporal resolution of these estimates. Summary It is suggested that satellite-based models improve our understanding of the spatial characteristics of air quality. Although the adoption of satellite-based measures of air quality in health studies is in its infancy, it is rapidly growing. Nevertheless, further investigation is still needed in order to have a better understanding of the AOD contribution to these prediction models in order to use them with higher accuracy in epidemiological studies. PMID:26859287
Controlled impact demonstration airframe bending bridges
NASA Technical Reports Server (NTRS)
Soltis, S. J.
1986-01-01
The calibration of the KRASH and DYCAST models for transport aircraft is discussed. The FAA uses computer analysis techniques to predict the response of the controlled impact demonstration (CID) aircraft during impact. The moment bridges can provide a direct correlation between the loads or moments that the models predict and what was experienced during the actual impact. Another goal is to examine structural failure mechanisms and correlate them with analytical predictions. The bending bridges did achieve their goals and objectives. The data traces provide some insight into airframe loads and structural response, and they demonstrate quite clearly what is happening to the airframe. A direct quantification of metal airframe loads was measured by the moment bridges. The measured moments can be correlated with the KRASH and DYCAST computer models. The bending bridge data support airframe failure mechanism analysis and provide residual airframe strength estimation. It did not appear that any of the bending bridges on the airframe exceeded limit loads. (The observed airframe fracture was due to the fuselage encounter with the tomahawk, which tore out the keel beam.) The airframe bridges can be used to estimate the impact conditions, and those estimates correlate with some of the other data measurements. Structural response, frequency, and structural damping are readily measured by the moment bridges.
Enzyme-Linked Immunofiltration Assay To Estimate Attachment of Thiobacilli to Pyrite
Dziurla, Marie-Antoinette; Achouak, Wafa; Lam, Bach-Tuyet; Heulin, Thierry; Berthelin, Jacques
1998-01-01
An enzyme-linked immunofiltration assay (ELIFA) has been developed in order to estimate directly and specifically the attachment of Thiobacillus ferrooxidans to sulfide minerals. This method derives from the enzyme-linked immunosorbent assay but is performed on filtration membranes which allow the retention of mineral particles for a subsequent immunoenzymatic reaction in microtiter plates. The polyclonal antiserum used in this study was raised against T. ferrooxidans DSM 583 and recognized cell surface antigens present on bacteria belonging to the genus Thiobacillus. This antiserum and the ELIFA allowed the direct quantification of attached bacteria with high sensitivity (10(4) bacteria were detected per well of the microtiter plate). The mean value of bacterial attachment has been estimated to be about 10(5) bacteria mg(-1) of pyrite at a particle size of 56 to 65 μm. The geometric coverage ratio of pyrite by T. ferrooxidans ranged from 0.25 to 2.25%. This suggests attachment of T. ferrooxidans to well-defined, limited sites on the pyrite surface with specific electrochemical or surface properties. ELIFA was shown to be compatible with the measurement of variable levels of adhesion. Therefore, this method may be used to establish adhesion isotherms of T. ferrooxidans on various sulfide minerals exhibiting different physicochemical properties in order to understand the mechanisms of bacterial interaction with mineral surfaces. PMID:9687454
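The reported geometric coverage ratio can be sanity-checked with simple geometry. A back-of-the-envelope sketch assuming a 1 × 2 µm projected cell footprint and monodisperse 60 µm spherical pyrite grains of density 5.0 g/cm³ (all geometric assumptions are ours, not the paper's):

```python
import math

# Divide the total projected area of attached cells by the geometric
# surface area of pyrite grains contained in 1 mg of powder.
n_cells_per_mg = 1e5                          # attachment estimate from the abstract
cell_area_m2 = 1e-6 * 2e-6                    # assumed projected area per cell, m^2
d = 60e-6                                     # assumed grain diameter, m
rho = 5.0e3                                   # pyrite density, kg/m^3
grain_mass_kg = rho * math.pi * d**3 / 6      # mass of one spherical grain
grains_per_mg = 1e-6 / grain_mass_kg
surface_per_mg_m2 = grains_per_mg * math.pi * d**2   # = 6e-6/(rho*d)

coverage_pct = 100 * n_cells_per_mg * cell_area_m2 / surface_per_mg_m2
```

With these assumptions the coverage comes out at about 1%, inside the 0.25-2.25% range reported.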
Committed warming inferred from observations and an energy balance model
NASA Astrophysics Data System (ADS)
Pincus, R.; Mauritsen, T.
2017-12-01
Due to the lifetime of CO2 and the thermal inertia of the ocean, the Earth's climate is not equilibrated with anthropogenic forcing. As a result, even if fossil fuel emissions were to suddenly cease, some level of committed warming is expected due to past emissions. Here, we provide an observation-based quantification of this committed warming using the instrumental record of global-mean warming, recently improved estimates of Earth's energy imbalance, and estimates of radiative forcing from the fifth IPCC assessment report. Compared to pre-industrial levels, we find a committed warming of 1.5 K [0.9-3.6, 5-95 percentile] at equilibrium, and of 1.3 K [0.9-2.3] within this century. However, when assuming that ocean carbon uptake cancels remnant greenhouse gas-induced warming on centennial timescales, committed warming is reduced to 1.1 K [0.7-1.8]. Conservatively, there is a 32% risk that committed warming already exceeds the 1.5 K target set in Paris, and this target will likely be crossed prior to 2053. Regular updates of these observationally constrained committed warming estimates, though simplistic, can provide transparent guidance as uncertainty regarding transient climate sensitivity inevitably narrows and understanding of the limitations of the framework advances.
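The committed-warming logic rests on a zero-dimensional energy balance: the observed warming dT, forcing F, and energy imbalance Q constrain the feedback parameter lambda = (F - Q)/dT, and equilibrium warming under constant forcing is T_eq = F/lambda. A minimal sketch with illustrative central values of the right order of magnitude (not the paper's exact inputs):

```python
# Zero-dimensional energy-balance estimate of committed warming.
F = 2.3        # assumed anthropogenic forcing since pre-industrial, W/m^2
Q = 0.7        # assumed Earth energy imbalance, W/m^2
dT = 1.0       # assumed observed global-mean warming, K

lam = (F - Q) / dT          # climate feedback parameter, W/m^2/K
T_committed = F / lam       # equilibrium warming for constant forcing, K
```

With these inputs the equilibrium committed warming is about 1.4 K, of the same order as the 1.5 K central estimate reported; the paper's uncertainty range comes from propagating the observational uncertainties of F, Q, and dT.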
NASA Astrophysics Data System (ADS)
Wang, Chun Wei; Manne, Upender; Reddy, Vishnu B.; Oelschlager, Denise K.; Katkoori, Venkat R.; Grizzle, William E.; Kapoor, Rakesh
2010-11-01
A combination tapered fiber-optic biosensor (CTFOB) dip probe for rapid and cost-effective quantification of proteins in serum samples has been developed. This device relies on diode laser excitation and a charge-coupled device spectrometer and is based on a sandwich immunoassay technique. As a proof of principle, the technique was applied to the quantitative estimation of the interleukin IL-6. The probes detected IL-6 at picomolar levels in serum samples obtained from a patient with lupus, an autoimmune disease, and a patient with lymphoma. The estimated concentration of IL-6 in the lupus sample was 5.9 +/- 0.6 pM, and in the lymphoma sample it was below the detection limit. These concentrations were verified by a procedure involving bead-based xMAP technology, which showed a similar trend in the concentrations. The specificity of the CTFOB dip probes was assessed by receiver operating characteristic analysis. This analysis suggests that the dip probes can detect 5-pM or higher concentrations of IL-6 in these samples with specificities of 100%. The results provide information for guiding further studies in the utilization of these probes to quantify other analytes in body fluids with high specificity and sensitivity.
Regional energy planning: Some suggestions to public administration
NASA Astrophysics Data System (ADS)
Sozzi, R.
A methodology is proposed to estimate the relevant data and to improve energy efficiency in regional energy planning. The quantification of the regional energy system is subdivided into three independent parameters that are separately estimated: energy demand, energy consumption, and transformation capacity. Definitions and estimating procedures are given. The optimization of regional planning includes the application, wherever possible, of technologies that centralize space-heating energy production or combine the production of electric energy with space-heating energy distribution.
Vidal, Rocío B Pellegrino; Ibañez, Gabriela A; Escandar, Graciela M
2016-10-01
The aim of this study was to develop a novel analytical method for the determination of bisphenol A, nonylphenol, octylphenol, diethyl phthalate, dibutyl phthalate and diethylhexyl phthalate, compounds known for their endocrine-disruptor properties, based on liquid chromatography with simultaneous diode array and fluorescence detection. Following the principles of green analytical chemistry, solvent consumption and chromatographic run time were minimized. To deal with the resulting incomplete resolution in the chromatograms, a second-order calibration was proposed. Second-order data (elution time-absorbance wavelength and elution time-fluorescence emission wavelength matrices) were obtained and processed by multivariate curve resolution-alternating least squares (MCR-ALS). Applying MCR-ALS allowed quantification of the analytes even in the presence of partially overlapping chromatographic and spectral bands among these compounds and the potential interferents. The results obtained from the analysis of beer, wine, soda, juice, water and distilled beverage samples were compared with gas chromatography-mass spectrometry (GC-MS). Limits of detection (LODs) in the range 0.04-0.38 ng mL(-1) were estimated in real samples after a very simple solid-phase extraction. All the samples were found to contain at least three of these endocrine disruptors, in concentrations as high as 334 ng mL(-1). Copyright © 2016 Elsevier B.V. All rights reserved.
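MCR-ALS itself is conceptually compact: the data matrix D (e.g. time × wavelength) is factored into non-negative concentration profiles C and spectra S so that D ≈ C Sᵀ, by alternating least squares. A minimal sketch on synthetic bilinear data, using simple clipping as a crude non-negativity constraint (real implementations use proper constrained solvers and the study's chromatographic data):

```python
import numpy as np

# Build an exactly bilinear, non-negative data matrix: two Gaussian
# elution profiles times two random non-negative spectra.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)[:, None]
C_true = np.hstack([np.exp(-((t - 0.3) / 0.08) ** 2),
                    np.exp(-((t - 0.6) / 0.08) ** 2)])
S_true = rng.random((40, 2))
D = C_true @ S_true.T

# Alternating least squares with non-negativity by clipping.
C = rng.random(C_true.shape)                  # random initial guess
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
    C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)

residual = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)   # relative lack of fit
```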
Koželj, Gordana; Perharič, Lucija; Stanovnik, Lovro; Prosen, Helena
2014-08-05
A liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for the determination of atropine and scopolamine in 100 μL human plasma was developed and validated. Sample pretreatment consisted of protein precipitation with acetonitrile followed by a concentration step. Analytes and levobupivacaine (internal standard) were separated on a Zorbax XDB-CN column (75 mm × 4.6 mm i.d., 3.5 μm) with gradient elution (purified water, acetonitrile, formic acid). The triple quadrupole MS was operated in ESI positive mode. Matrix effect was estimated for deproteinised plasma samples. Selected reaction monitoring (SRM) was used for quantification in the range of 0.10-50.00 ng/mL. Interday precision for both tropanes and intraday precision for atropine was <10%, intraday precision for scopolamine was <14% and <18% at lower limit of quantification (LLOQ). Mean interday and intraday accuracies for atropine were within ±7% and for scopolamine within ±11%. The method can be used for determination of therapeutic and toxic levels of both compounds and has been successfully applied to a study of pharmacodynamic and pharmacokinetic properties of tropanes, where plasma samples of volunteers were collected at fixed time intervals after ingestion of a buckwheat meal, spiked with five low doses of tropanes. Copyright © 2014 Elsevier B.V. All rights reserved.
Nojavan, Saeed; Bidarmanesh, Tina; Mohammadi, Ali; Yaripour, Saeid
2016-03-01
In the present study, for the first time, electromembrane extraction followed by high-performance liquid chromatography with ultraviolet detection was optimized and validated for quantification of four gonadotropin-releasing hormone agonist anticancer peptides (alarelin, leuprolide, buserelin and triptorelin) in biological and aqueous samples. The parameters influencing electromigration were investigated and optimized. The membrane consisted of 95% 1-octanol and 5% di-(2-ethylhexyl) phosphate immobilized in the pores of a hollow fiber. A 20 V electrical field was applied to make the analytes migrate from the sample solution at pH 7.0, through the supported liquid membrane, into an acidic acceptor solution at pH 1.0 located inside the lumen of the hollow fiber. Extraction recoveries in the range of 49-71% within a 15 min extraction time were obtained in different biological matrices, resulting in preconcentration factors in the range of 82-118 and satisfactory repeatability (7.1 < RSD% < 19.8). The method offers good linearity (2.0-1000 ng/mL) with regression coefficients higher than 0.998, and very low detection and quantitation limits of 0.2 and 0.6 ng/mL, respectively. Finally, it was applied to the determination and quantification of the peptides in human plasma and wastewater samples, yielding satisfactory results. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Qiu, Feng; Zhou, Shujun; Fu, Shujun; Kong, Weijun; Yang, Shihai; Yang, Meihua
2012-11-01
A sensitive and accurate LC-ESI-MS/MS method was developed and validated for the determination of 6'-hydroxy justicidin A (HJA), a potential antitumor active component isolated from Justicia procumbens, in rat plasma, using a simple liquid-liquid extraction (LLE) method for sample preparation. Chromatographic separation was achieved on an Agilent Zorbax-C(18) column (2.1 mm × 50 mm, 3.5 μm) using a step gradient program with a mobile phase of 0.1% formic acid aqueous solution and acetonitrile with 0.1% formic acid. HJA and the IS (buspirone) were detected using electrospray positive ionization mass spectrometry in the multiple reaction monitoring (MRM) mode. This method demonstrated good linearity and did not show any endogenous interference with the active compound and IS peaks. The lower limit of quantification (LLOQ) of HJA was 0.50 ng/mL in 50 μL rat plasma. The developed and validated method has been successfully applied to the quantification and pharmacokinetic study of HJA in rats after intravenous and oral administration of 0.25 mg/kg HJA. The oral bioavailability (F) of HJA was estimated to be 36.0±13.4% with an elimination half-life (t(1/2)) value of 1.04±0.20 h. Copyright © 2012 Elsevier B.V. All rights reserved.
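The reported F and t(1/2) values follow from standard pharmacokinetic arithmetic: absolute bioavailability is the ratio of dose-normalized AUCs (oral vs. intravenous), and half-life is ln 2 over the terminal elimination rate constant. A hypothetical sketch (the AUC values and rate constant are invented; only the equal 0.25 mg/kg doses match the study):

```python
import math

# Absolute oral bioavailability from dose-normalized AUCs.
dose_iv, dose_po = 0.25, 0.25          # mg/kg, equal doses as in the study
auc_iv, auc_po = 500.0, 180.0          # assumed ng*h/mL
F = (auc_po / dose_po) / (auc_iv / dose_iv)   # fraction absorbed, here 0.36

# Terminal half-life from an assumed elimination rate constant.
k_el = 0.667                            # assumed 1/h
t_half = math.log(2) / k_el             # hours
```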
Mattarozzi, Monica; Suman, Michele; Cascio, Claudia; Calestani, Davide; Weigel, Stefan; Undas, Anna; Peters, Ruud
2017-01-01
Estimating consumer exposure to nanomaterials (NMs) in food products and predicting their toxicological properties are necessary steps in the assessment of the risks of this technology. To this end, analytical methods have to be available to detect, characterize and quantify NMs in food and materials related to food, e.g. food packaging and biological samples following metabolization of food. The challenge for the analytical sciences is that the characterization of NMs requires chemical as well as physical information. This article offers a comprehensive analysis of methods available for the detection and characterization of NMs in food and related products. Special attention was paid to the crucial role of sample preparation methods since these have been partially neglected in the scientific literature so far. The currently available instrumental methods are grouped as fractionation, counting and ensemble methods, and their advantages and limitations are discussed. We conclude that much progress has been made over the last 5 years but that many challenges still exist. Future perspectives and priority research needs are pointed out. Graphical abstract: two possible analytical strategies for the sizing and quantification of nanoparticles: asymmetric flow field-flow fractionation with multiple detectors (allows the determination of true size and a mass-based particle size distribution), and single-particle inductively coupled plasma mass spectrometry (allows the determination of a spherical equivalent diameter of the particle and a number-based particle size distribution).
Ghugre, Nilesh R.; Wood, John C.
2010-01-01
Iron overload is a serious condition for patients with β-thalassemia, transfusion-dependent sickle cell anemia and inherited disorders of iron metabolism. MRI is becoming increasingly important in non-invasive quantification of tissue iron, overcoming the drawbacks of traditional techniques (liver biopsy). R2* (1/T2*) rises linearly with iron while R2 (1/T2) has a curvilinear relationship in human liver. Although recent work has demonstrated clinically valid estimates of human liver iron, the calibration varies with MRI sequence, field strength, iron chelation therapy and organ imaged, forcing recalibration in patients. To understand and correct these limitations, a thorough understanding of the underlying biophysics is of critical importance. Toward this end, a Monte Carlo based approach, using human liver as a 'model' tissue system, was employed to determine the contribution of particle size and distribution on MRI signal relaxation. Relaxivities were determined for hepatic iron concentrations (HIC) ranging from 0.5-40 mg iron/g dry tissue weight. Model predictions captured the linear and curvilinear relationship of R2* and R2 with HIC respectively and were within in vivo confidence bounds; contact or chemical exchange mechanisms were not necessary. A validated and optimized model will aid understanding and quantification of iron-mediated relaxivity in tissues where biopsy is not feasible (heart, spleen). PMID:21337413
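The core of such a Monte Carlo model is the accumulation of spin phase from local field offsets induced by iron deposits. A drastically simplified static-dephasing sketch (a Lorentzian field-offset distribution is assumed here for convenience; the actual model tracks diffusing protons around realistically sized and distributed iron particles):

```python
import numpy as np

# Static dephasing: each spin accumulates phase at its own field offset,
# and the net signal is the magnitude of the ensemble-average phasor.
rng = np.random.default_rng(1)
n_spins = 20000
dw = rng.standard_cauchy(n_spins) * 50.0        # assumed field offsets, rad/s
t = np.linspace(0, 0.02, 40)                    # s
signal = np.abs(np.mean(np.exp(1j * np.outer(t, dw)), axis=1))

# A Lorentzian offset distribution gives mono-exponential decay, so a
# linear fit of log(signal) vs t recovers R2* (~50 1/s here).
r2star = -np.polyfit(t, np.log(signal), 1)[0]
```

In the full model, the dependence of the effective offset distribution on particle size and spatial clustering is exactly what produces the linear R2*-HIC and curvilinear R2-HIC calibrations.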
Adenovirus and Norovirus Contaminants in Commercially Distributed Shellfish.
Rodriguez-Manzano, Jesus; Hundesa, Ayalkibet; Calgua, Byron; Carratala, Anna; Maluquer de Motes, Carlos; Rusiñol, Marta; Moresco, Vanessa; Ramos, Ana Paula; Martínez-Marca, Fernando; Calvo, Miquel; Monte Barardi, Celia Regina; Girones, Rosina; Bofill-Mas, Sílvia
2014-03-01
Shellfish complying with European Regulations based on quantification of fecal indicator bacteria (FIB) are introduced into markets; however, information on viruses, which are more stable than FIB, is not available in the literature. To assess the presence of noroviruses (NoV) GI and GII and human adenoviruses (HAdV), domestic and imported mussels and clams (n = 151) were analyzed during the winter seasons (2004-2008) in north-west Spanish markets through a routine surveillance system. All samples tested negative for NoV GI and 13% were positive for NoV GII. The role of HAdV as a viral indicator was evaluated in 20 NoV GII-negative and 10 NoV GII-positive samples, yielding an estimated sensitivity of 100% and specificity of 74% for HAdV as a predictor of the presence of NoV GII (cut-off 0.5). The levels of HAdV and NoV and the efficiency of decontamination in shellfish depuration plants (SDP) were evaluated by analyzing pre- and post-depuration mussels collected in May-June 2010 from three different SDP. There were no statistically significant differences in the prevalence and quantification of HAdV between pre- and post-depuration shellfish or between seawater entering and leaving the depuration systems. Moreover, infectious HAdV were detected in depurated mussels. These results confirm previous studies showing that current controls and depuration treatments limiting the number of FIB do not guarantee the absence of viruses in shellfish.
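Sensitivity and specificity here are simple ratios from a 2 × 2 confusion table: sensitivity is the fraction of NoV GII-positive samples in which HAdV is detected, specificity the fraction of negative samples in which it is not. A sketch with invented counts chosen to approximate the reported 100%/74% figures (the paper's exact table is not given in the abstract):

```python
# Hypothetical 2x2 table: rows are true NoV GII status, columns are the
# HAdV-based prediction at the chosen cut-off.
tp, fn = 10, 0        # NoV GII-positive samples: HAdV above / below cut-off
tn, fp = 15, 5        # NoV GII-negative samples (example counts only)

sensitivity = tp / (tp + fn)    # P(HAdV positive | NoV GII present)
specificity = tn / (tn + fp)    # P(HAdV negative | NoV GII absent)
```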
Pereira, Tomás Pellizzaro; do Amaral, Fernanda Plucani; Dall'Asta, Pamela; Brod, Fábio Cristiano Angonesi; Arisi, Ana Carolina Maisonnave
2014-07-01
The plant growth-promoting bacterium (PGPB) Herbaspirillum seropedicae SmR1 is an endophytic diazotroph found in several economically important crops. Considering that methods to monitor the plant-bacteria interaction are required, our objective was to develop a real-time PCR method for quantification of the PGPB H. seropedicae in the rhizosphere of maize seedlings. Primer pairs were designed, and their specificity was verified using DNA from 12 different bacterial species. Ten standard curves of the qPCR assay using HERBAS1 primers and tenfold serial dilutions of H. seropedicae SmR1 DNA were performed, and a PCR efficiency of 91% and a correlation coefficient of 0.99 were obtained. The H. seropedicae SmR1 limit of detection was 10(1) copies (corresponding to 60.3 fg of bacterial DNA). The qPCR assay using HERBAS1 was used to detect and quantify H. seropedicae strain SmR1 in inoculated maize roots, cultivated in vitro and in pots, harvested 1, 4, 7, and 10 days after inoculation. The estimated bacterial DNA copy number per gram of root was in the range 10(7)-10(9) for plants grown in vitro and around 10(6) for plants grown in pots. The primer pair HERBAS1 was able to quantify H. seropedicae SmR1, and this assay can be useful for monitoring the plant-bacteria interaction.
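The standard-curve arithmetic behind such qPCR assays is compact: fit Cq against log10(copy number) on a tenfold dilution series, derive amplification efficiency from the slope as E = 10^(-1/slope) - 1, and read unknowns off the curve. A sketch with an invented dilution series and Cq values (the study's ~91% efficiency came from its own curves):

```python
import numpy as np

# Tenfold dilution series with idealized Cq values (slope -3.4/decade).
copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6])
cq = np.array([33.4, 30.0, 26.6, 23.2, 19.8, 16.4])

slope, intercept = np.polyfit(np.log10(copies), cq, 1)
efficiency = 10 ** (-1 / slope) - 1                  # ~0.97 for slope -3.4

# Quantify an unknown sample from its measured Cq.
unknown_copies = 10 ** ((25.0 - intercept) / slope)  # hypothetical Cq = 25.0
```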
Petrova, Darinka Todorova; Cocisiu, Gabriela Ariadna; Eberle, Christoph; Rhode, Karl-Heinz; Brandhorst, Gunnar; Walson, Philip D; Oellerich, Michael
2013-09-01
The aim of this study was to develop a novel method for automated quantification of cell-free hemoglobin (fHb) based on the hemolysis index (HI; Roche Diagnostics). The novel fHb method based on the HI was correlated with fHb measured using the triple-wavelength methods of both Harboe [fHb, g/L = (0.915 * HI + 2.634)/100] and Fairbanks et al. [fHb, g/L = (0.917 * HI + 2.131)/100]. fHb concentrations were estimated from the HI using the Roche Modular automated platform in self-made and commercially available quality controls, as well as samples from a proficiency testing scheme (INSTAND). The fHb results obtained using the Roche automated HI were then compared to results obtained using the traditional spectrophotometric assays for one hundred plasma samples with varying degrees of hemolysis, lipemia and/or bilirubinemia. The novel method using automated HI quantification on the Roche Modular clinical chemistry platform correlated well with results using the classical methods in the 100 patient samples (Harboe: r = 0.9284; Fairbanks et al.: r = 0.9689), and recovery was good for self-made controls. However, commercially available quality controls showed poor recovery due to an unidentified matrix problem. The novel method produced reliable determination of fHb in samples without interferences. However, poor recovery using commercially available fHb quality control samples currently greatly limits its usefulness. © 2013.
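The two HI-to-fHb regressions are quoted directly in the abstract and can be applied as-is; only the example HI value below is arbitrary:

```python
# HI-to-fHb conversions as quoted in the abstract (HI is the unitless
# hemolysis index reported by the analyzer; fHb is in g/L).
def fhb_harboe(hi):
    """Cell-free hemoglobin from the Harboe-calibrated HI regression."""
    return (0.915 * hi + 2.634) / 100

def fhb_fairbanks(hi):
    """Cell-free hemoglobin from the Fairbanks-calibrated HI regression."""
    return (0.917 * hi + 2.131) / 100

hi = 100.0                                   # arbitrary example index
fhb_h, fhb_f = fhb_harboe(hi), fhb_fairbanks(hi)   # both ~0.94 g/L
```

The near-identical slopes explain why the two classical calibrations track each other so closely across the hemolysis range.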
NASA Astrophysics Data System (ADS)
Li, Minmin; Dai, Chao; Wang, Fengzhong; Kong, Zhiqiang; He, Yan; Huang, Ya Tao; Fan, Bei
2017-02-01
An effective analysis method was developed based on a chemometric tool for the simultaneous quantification of five different post-harvest pesticides (2,4-dichlorophenoxyacetic acid (2,4-D), carbendazim, thiabendazole, iprodione, and prochloraz) in fruits and vegetables. In the modified QuEChERS (quick, easy, cheap, effective, rugged and safe) method, the factors and responses for optimization of the extraction and cleanup analyses were compared using the Plackett-Burman (P-B) screening design. Furthermore, the significant factors (toluene percentage, hydrochloric acid (HCl) percentage, and graphitized carbon black (GCB) amount) were optimized using a central composite design (CCD) combined with Derringer’s desirability function (DF). The limits of quantification (LOQs) were estimated to be 1.0 μg/kg for 2,4-D, carbendazim, thiabendazole, and prochloraz, and 1.5 μg/kg for iprodione in food matrices. The mean recoveries were in the range of 70.4-113.9% with relative standard deviations (RSDs) of less than 16.9% at three spiking levels. The measurement uncertainty of the analytical method was determined using the bottom-up approach, which yielded an average value of 7.6%. Carbendazim was most frequently found in real samples analyzed using the developed method. Consequently, the analytical method can serve as an advantageous and rapid tool for determination of five preservative pesticides in fruits and vegetables.
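Derringer's desirability function, used here to combine the CCD responses, maps each response onto [0, 1] and aggregates them as a geometric mean; the factor settings maximizing the composite score win. A minimal sketch with invented responses and targets (not the study's actual CCD data):

```python
import numpy as np

def desirability_maximize(y, low, high, s=1.0):
    """One-sided 'larger is better' desirability: 0 below `low`,
    1 above `high`, power ramp with exponent s in between."""
    d = np.clip((y - low) / (high - low), 0, 1)
    return d ** s

# Hypothetical responses at one candidate factor setting.
recovery_d = desirability_maximize(95.0, 70.0, 110.0)   # % recovery
cleanup_d = desirability_maximize(0.8, 0.0, 1.0)        # normalized cleanup score

# Composite desirability: geometric mean of the individual scores.
D = float(np.sqrt(recovery_d * cleanup_d))
```

Because the geometric mean is zero whenever any single desirability is zero, settings that fail even one response are rejected outright, which is the point of the construction.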
Quantification of light screening by anthocyanins in leaves of Berberis thunbergii.
Nichelmann, Lars; Bilger, Wolfgang
2017-12-01
Up to 40% of incident light was screened in red Berberis leaves in vivo by anthocyanins, resulting in up to a 40% reduction of light-limited photosynthesis. The biological function of anthocyanins in leaves has been widely debated, but the hypothesis of a screening function is favored by most authors. For an evaluation of their function as photoprotective pigments, a quantification of their screening of the mesophyll is important. Here, chlorophyll fluorescence excitation of leaves of a red and a green variety of Berberis thunbergii was used to estimate the extent of screening by anthocyanins at 545 nm and over the whole photosynthetically active wavelength range. Growth at high light (430 µmol m(-2) s(-1)) resulted in 90% screening at 545 nm, corresponding to 40-50% screening over the whole wavelength range, depending on the light source. The concomitant reduction of photosynthetic quantum yield was of the same magnitude as the calculated reduction of light reaching the chloroplasts. The induction of anthocyanins in the red variety also enhanced the epoxidation state of the violaxanthin cycle under growth conditions, indicating that red leaves were suffering less from excessive irradiance. Pool sizes of violaxanthin cycle carotenoids indicated a shade acclimation of the light-harvesting complexes in red leaves. The observed reduction of internal light in anthocyanic leaves has by necessity a photoprotective effect.
NASA Astrophysics Data System (ADS)
Hermans, Thomas; Nguyen, Frédéric; Klepikova, Maria; Dassargues, Alain; Caers, Jef
2017-04-01
Hydrogeophysics is an interdisciplinary field of science aiming at a better understanding of subsurface hydrological processes. Although geophysical surveys have been used successfully to characterize the subsurface qualitatively, two important challenges remain for a better quantification of hydrological processes: (1) the inversion of geophysical data and (2) their integration into hydrological subsurface models. The classical inversion approach using regularization suffers from spatially and temporally varying resolution and yields geologically unrealistic solutions without uncertainty quantification, making their use for hydrogeological calibration less consistent. More advanced techniques such as coupled inversion allow for a direct use of geophysical data for conditioning groundwater and solute transport model calibration. However, the technique is difficult to apply in complex cases and remains computationally demanding for estimating uncertainty. In a recent study, we investigated a prediction-focused approach (PFA) to directly estimate subsurface physical properties from geophysical data, circumventing the need for classical inversions. In PFA, we seek a direct relationship between the data and the subsurface variables we want to predict (the forecast). This relationship is obtained through a prior set of subsurface models for which both data and forecast are computed. A direct relationship can often be derived through dimension reduction techniques. PFA offers a framework for both hydrogeophysical "inversion" and hydrogeophysical data integration. For hydrogeophysical "inversion", the considered forecast variable is the subsurface variable itself, such as the salinity. An ensemble of possible solutions is generated, allowing uncertainty quantification. For hydrogeophysical data integration, the forecast variable becomes the prediction we want to make with our subsurface models, such as the concentration of contaminant in a drinking water production well.
Geophysical and hydrological data are combined to derive a direct relationship between data and forecast. We illustrate the process for the design of an aquifer thermal energy storage (ATES) system. An ATES system can, in theory, recover in winter the heat stored in the aquifer during summer. In practice, the energy efficiency is often lower than expected due to spatial heterogeneity of hydraulic properties combined with an unfavorable hydrogeological gradient. A proper design of ATES systems should consider the prediction uncertainty related to those parameters. With a global sensitivity analysis, we identify parameters to which the heat storage prediction is sensitive and validate the use of a short-term heat tracing experiment monitored with geophysics to generate informative data. First, we illustrate how PFA can be used to successfully derive the distribution of temperature in the aquifer from ERT during the heat tracing experiment. Then, we successfully integrate the geophysical data to predict medium-term heat storage in the aquifer using PFA. The result is a full quantification of the posterior distribution of the prediction conditioned on the observed data, within a relatively limited time budget.
Pazesh, Samaneh; Lazorova, Lucia; Berggren, Jonas; Alderborn, Göran; Gråsjö, Johan
2016-09-10
The main purpose of the study was to evaluate various pre-processing and quantification approaches for Raman spectra to quantify low levels of amorphous content in milled lactose powder. To improve the quantification analysis, several spectral pre-processing methods were used to adjust for background effects. The effects of spectral noise on the variation of the determined amorphous content were also investigated theoretically by propagation-of-error analysis and compared to the experimentally obtained values. Additionally, the applicability of a calibration method with crystalline or amorphous domains to the estimation of amorphous content in milled lactose powder was discussed. Two straight-baseline pre-processing methods gave the best and almost equal performance. Among the subsequent quantification methods, PCA performed best, although classical least squares (CLS) analysis gave comparable results, while peak-parameter analysis proved inferior. The standard deviations of the experimentally determined percentage amorphous content were 0.94% and 0.25% for pure crystalline and pure amorphous samples, respectively, which were very close to the standard deviations from propagated spectral noise. The reasonable conformity between the milled-sample spectra and synthesized spectra indicated the representativeness of physical mixtures with crystalline or amorphous domains for estimating apparent amorphous content in milled lactose. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
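The CLS step can be illustrated with a toy example: model the mixture spectrum as a linear combination of pure crystalline and pure amorphous reference spectra and solve the normal equations for the component weights. All spectra here are invented five-channel vectors, not the study's data.

```python
# Hypothetical pure-component spectra over five wavenumber channels.
cryst = [1.0, 3.0, 5.0, 2.0, 0.5]
amorp = [2.0, 2.5, 1.0, 1.5, 2.0]

# Simulated noiseless mixture: 5% amorphous, 95% crystalline.
mix = [0.95 * c + 0.05 * a for c, a in zip(cryst, amorp)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# CLS: minimize ||mix - x1*cryst - x2*amorp||^2 via the 2x2 normal equations.
a11, a12, a22 = dot(cryst, cryst), dot(cryst, amorp), dot(amorp, amorp)
b1, b2 = dot(cryst, mix), dot(amorp, mix)
det = a11 * a22 - a12 * a12
x1 = (a22 * b1 - a12 * b2) / det
x2 = (a11 * b2 - a12 * b1) / det

print(f"amorphous fraction: {x2 / (x1 + x2):.3f}")  # -> 0.050
```

With noise-free synthetic data the weights are recovered exactly; with real spectra the residual reflects background and noise, which is why the pre-processing step matters.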
NASA Astrophysics Data System (ADS)
Laborda, Francisco; Medrano, Jesús; Castillo, Juan R.
2004-06-01
The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and on double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty in FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate and injection volume), was developed to estimate peak-area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized for inclusion in the model, and it was concluded that the instrument works as a concentration detector when used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials; notably, double-point calibration gave results of the same quality as multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.
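The two calibration strategies can be sketched side by side: a weighted least-squares fit with weights w = 1/u² from the per-point peak-area uncertainties, and a double-point calibration from the baseline and one standard. All concentrations, areas and uncertainties below are invented for illustration.

```python
# Hypothetical calibration: peak area vs concentration, with per-point
# peak-area uncertainties u used as statistical weights (w = 1/u^2).
conc = [0.0, 1.0, 5.0, 10.0, 50.0]          # standard concentrations, ng/mL
area = [0.2, 1.3, 5.1, 10.4, 50.6]          # measured peak areas
u    = [0.1, 0.1, 0.2, 0.3, 1.0]            # peak-area uncertainties

w = [1.0 / ui**2 for ui in u]
sw   = sum(w)
swx  = sum(wi * x for wi, x in zip(w, conc))
swy  = sum(wi * y for wi, y in zip(w, area))
swxx = sum(wi * x * x for wi, x in zip(w, conc))
swxy = sum(wi * x * y for wi, x, y in zip(w, conc, area))

# Weighted least-squares slope and intercept.
slope = (sw * swxy - swx * swy) / (sw * swxx - swx**2)
intercept = (swy - slope * swx) / sw

# Double-point alternative: baseline plus one standard.
slope_2pt = (area[4] - area[0]) / (conc[4] - conc[0])

print(round(slope, 3), round(slope_2pt, 3))
```

On well-behaved data the two slopes agree closely, which is the abstract's point: the double-point scheme can match multiple-point calibration while taking far less time.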
NASA Astrophysics Data System (ADS)
Ahn, Sung Hee; Hyeon, Taeghwan; Kim, Myung Soo; Moon, Jeong Hee
2017-09-01
In matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF), matrix-derived ions are routinely deflected away to avoid problems with ion detection. This, however, limits the use of a quantification method that utilizes the analyte-to-matrix ion abundance ratio. In this work, we show that it is possible to measure this ratio with a minor instrumental modification of a simple form of MALDI-TOF, namely detector gain switching.
William Salas; Steve Hagen
2013-01-01
This presentation will provide an overview of an approach for quantifying uncertainty in spatial estimates of carbon emission from land use change. We generate uncertainty bounds around our final emissions estimate using a randomized, Monte Carlo (MC)-style sampling technique. This approach allows us to combine uncertainty from different sources without making...
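The MC-style combination of uncertainty sources can be sketched as follows. This is a minimal example with three invented input distributions for a single emissions figure; the actual analysis samples many more sources over spatial data.

```python
import random

random.seed(1)

# Hypothetical MC-style combination of uncertainty sources for a
# land-use-change emissions estimate (all numbers illustrative).
def draw_emissions():
    area = random.gauss(1000.0, 50.0)        # ha of land-use change
    density = random.gauss(120.0, 15.0)      # t C per ha
    fraction = random.uniform(0.8, 1.0)      # fraction of carbon emitted
    return area * density * fraction

samples = sorted(draw_emissions() for _ in range(10000))
lo, mid, hi = (samples[int(q * len(samples))] for q in (0.025, 0.5, 0.975))
print(f"{mid:.0f} t C (95% bounds {lo:.0f}-{hi:.0f})")
```

Because each draw multiplies independent samples of every source, the percentile bounds combine the different uncertainties without assuming they add in quadrature.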
Confidence in outcome estimates from systematic reviews used in informed consent.
Fritz, Robert; Bauer, Janet G; Spackman, Sue S; Bains, Amanjyot K; Jetton-Rangel, Jeanette
2016-12-01
Evidence-based dentistry now guides informed consent, in which clinicians are obliged to provide patients with the most current best evidence, or best estimates of outcomes, for regimens, therapies, treatments, procedures, materials, and equipment or devices when developing personal oral health care treatment plans. Yet clinicians require that the estimates provided by systematic reviews be verified for validity and reliability, and contextualized as to performance competency, so that clinicians may have confidence in explaining outcomes to patients in clinical practice. The purpose of this paper was to describe types of informed estimates in which clinicians may have confidence in their capacity to assist patients in competent decision-making, one of the most important concepts of informed consent. Using systematic review methodology, researchers provide clinicians with valid best estimates of outcomes regarding a subject of interest from best evidence. Best evidence is verified through critical appraisals using acceptable sampling methodology, either by scoring instruments (Timmer analysis) or by checklist (GRADE), a Cochrane Collaboration standard that allows transparency in open reviews. These valid best estimates are then tested for reliability using large databases. Finally, valid and reliable best estimates are assessed for meaning using quantification of margins and uncertainties. Through manufacturer and researcher specifications, quantification of margins and uncertainties develops a performance-competency continuum by which valid, reliable best estimates may be contextualized for their performance competency: at a lowest-margin performance competency (structural failure), a high-margin performance competency (estimated true value of success), or clinically determined critical values (clinical failure).
Informed consent may be achieved when clinicians are confident of their ability to provide useful and accurate best estimates of outcomes regarding regimens, therapies, treatments, and equipment or devices to patients in their clinical practices and when developing personal, oral health care, treatment plans. Copyright © 2016 Elsevier Inc. All rights reserved.
A method to characterize the roughness of 2-D line features: recrystallization boundaries.
Sun, J; Zhang, Y B; Dahl, A B; Conradsen, K; Juul Jensen, D
2017-03-01
A method is presented, which allows quantification of the roughness of nonplanar boundaries of objects for which the neutral plane is not known. The method provides quantitative descriptions of both the local and global characteristics. How the method can be used to estimate the sizes of rough features and local curvatures is also presented. The potential of the method is illustrated by quantification of the roughness of two recrystallization boundaries in a pure Al specimen characterized by scanning electron microscopy. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Gallo-Oller, Gabriel; Ordoñez, Raquel; Dotor, Javier
2018-06-01
Since its first description, the Western blot has been widely used in molecular biology labs. It is a multistep method that allows the detection and/or quantification of proteins, from simple to complex protein mixtures. The quantification step is critical for obtaining accurate and reproducible results. Given the technical knowledge required for densitometry analysis, together with resource availability, standard office scanners are often used for image acquisition of developed Western blot films. Furthermore, the use of semi-quantitative software such as ImageJ (Java-based image-processing and analysis software) is clearly increasing in different scientific fields. In this work, we describe the use of an office scanner coupled with ImageJ, together with a new image background subtraction method, for accurate Western blot quantification. The proposed method represents an affordable, accurate and reproducible approach that can be used when resources are limited. Copyright © 2018 Elsevier B.V. All rights reserved.
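The idea of background-corrected densitometry can be sketched with a toy lane profile and a straight-baseline correction drawn between the flanking background regions. This is one simple background-correction scheme with invented pixel values, not the paper's specific method.

```python
# Hypothetical densitometry of one Western blot band: integrate the
# intensity profile after subtracting a straight baseline estimated from
# the flanking background regions on either side of the band.
profile = [10, 11, 10, 25, 60, 80, 55, 20, 11, 10, 9]  # inverted pixel sums

left_bg = sum(profile[:3]) / 3                  # mean background, left side
right_bg = sum(profile[-3:]) / 3                # mean background, right side
n = len(profile)
baseline = [left_bg + (right_bg - left_bg) * i / (n - 1) for i in range(n)]

# Band "volume": background-corrected area under the profile.
band_volume = sum(max(p - b, 0.0) for p, b in zip(profile, baseline))
print(round(band_volume, 1))
```

Ratios of such band volumes against a loading-control band are what densitometry tools like ImageJ ultimately report.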
NASA Astrophysics Data System (ADS)
Mallah, Muhammad Ali; Sherazi, Syed Tufail Hussain; Bhanger, Muhammad Iqbal; Mahesar, Sarfaraz Ahmed; Bajeer, Muhammad Ashraf
2015-04-01
A transmission FTIR spectroscopic method was developed for the direct, inexpensive and fast quantification of paracetamol content in solid pharmaceutical formulations. In this method, paracetamol content is analyzed directly, without solvent extraction. KBr pellets were formulated for the acquisition of FTIR spectra in transmission mode. Two chemometric models, simple Beer's law and partial least squares, were employed over the spectral region of 1800-1000 cm⁻¹ for quantification of paracetamol content, each with a regression coefficient (R²) of 0.999. The limits of detection and quantification using FTIR spectroscopy were 0.005 mg g⁻¹ and 0.018 mg g⁻¹, respectively. An interference study was also performed to check the effect of the excipients; there was no significant interference from the sample matrix. The results clearly demonstrate the sensitivity of the transmission FTIR spectroscopic method for pharmaceutical analysis. The method is green in the sense that it does not require large volumes of hazardous solvents or long run times and avoids prior sample preparation.
NASA Astrophysics Data System (ADS)
Rana, Sachin; Ertekin, Turgay; King, Gregory R.
2018-05-01
Reservoir history matching is frequently viewed as an optimization problem that minimizes the misfit between simulated and observed data. Many gradient- and evolutionary-strategy-based optimization algorithms have been proposed to solve this problem, typically requiring a large number of numerical simulations to find feasible solutions. Therefore, a new methodology referred to as GP-VARS is proposed in this study, which uses forward and inverse Gaussian process (GP) proxy models combined with a novel application of variogram analysis of response surfaces (VARS) based sensitivity analysis to efficiently solve high-dimensional history matching problems. An empirical Bayes approach is proposed to optimally train GP proxy models for any given data. History-matching solutions are found iteratively via Bayesian optimization (BO) on the forward GP models and via predictions of the inverse GP model. An uncertainty quantification method using MCMC sampling in conjunction with the GP model is also presented to obtain probabilistic estimates of reservoir properties and estimated ultimate recovery (EUR). An application of the proposed GP-VARS methodology to the PUNQ-S3 reservoir shows that GP-VARS finds history-match solutions with approximately four times fewer numerical simulations than the differential evolution (DE) algorithm. Furthermore, a comparison of uncertainty quantification results obtained by GP-VARS, EnKF and other previously published methods shows that the P50 estimate of oil EUR obtained by GP-VARS is in close agreement with the true value for the PUNQ-S3 reservoir.
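The GP-proxy-plus-acquisition loop at the heart of such methods can be sketched in one dimension: fit a GP to a few simulator evaluations of the misfit, then pick the next simulation where a lower-confidence bound is smallest. Everything here (kernel, data, acquisition rule) is a toy stand-in; GP-VARS itself uses richer proxies, VARS sensitivity analysis, and an inverse GP model as well.

```python
import math

# Radial basis function (squared-exponential) kernel.
def rbf(a, b, ls=1.0):
    return math.exp(-0.5 * ((a - b) / ls) ** 2)

# Small dense linear solve via Gaussian elimination with partial pivoting.
def solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Misfit already evaluated by the "simulator" at three parameter values.
X = [0.0, 2.0, 4.0]
y = [3.0, 0.5, 2.5]

K = [[rbf(a, b) + (1e-8 if a == b else 0.0) for b in X] for a in X]
alpha = solve(K, y)

def gp_mean_var(x):
    k = [rbf(x, xi) for xi in X]
    mean = sum(ki * ai for ki, ai in zip(k, alpha))
    v = solve(K, k)
    var = max(rbf(x, x) - sum(ki * vi for ki, vi in zip(k, v)), 0.0)
    return mean, var

# Acquisition: minimize the lower-confidence bound (mean - 2*std) on a grid;
# the minimizer is where the next expensive simulation would be run.
cands = [i * 0.1 for i in range(41)]
best = min(cands, key=lambda x: gp_mean_var(x)[0] - 2 * math.sqrt(gp_mean_var(x)[1]))
print(round(best, 1))
```

Each new simulator run is added to (X, y) and the GP is refit, which is why such proxies need far fewer simulations than population-based searches like DE.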
Disease quantification on PET/CT images without object delineation
NASA Astrophysics Data System (ADS)
Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Wu, Caiyun; Fitzpatrick, Danielle; Winchell, Nicole; Schuster, Stephen J.; Torigian, Drew A.
2017-03-01
The derivation of quantitative information from images to make quantitative radiology (QR) clinically practical continues to face a major image-analysis hurdle: image segmentation. This paper presents a novel approach to disease quantification (DQ) via positron emission tomography/computed tomography (PET/CT) images that explores how to decouple DQ methods from explicit dependence on object segmentation, using only object recognition results to quantify disease burden. The concept of an object-dependent disease map is introduced to express disease severity without performing explicit delineation and partial-volume correction of either objects or lesions. The parameters of the disease map are estimated from a set of training image data sets. The idea is illustrated on 20 lung lesions and 20 liver lesions derived from 18F-2-fluoro-2-deoxy-D-glucose (FDG)-PET/CT scans of patients with various types of cancers, and also on 20 NEMA PET/CT phantom data sets. Our preliminary results show that, on phantom data sets, "disease burden" can be estimated to within 2% of the known absolute true activity. Notwithstanding the difficulty of establishing true quantification on patient PET images, our results achieve 8% deviation from "true" estimates, with slightly larger deviations for small and diffuse lesions, where establishing ground truth becomes questionable, and smaller deviations for larger lesions, where ground-truth setup is more reliable. We are currently exploring extensions of the approach to include fully automated body-wide DQ, extensions to CT or magnetic resonance imaging (MRI) alone, to PET/CT performed with radiotracers other than FDG, and other functional forms of disease maps.
The applications of statistical quantification techniques in nanomechanics and nanoelectronics.
Mai, Wenjie; Deng, Xinwei
2010-10-08
Although nanoscience and nanotechnology have been developing for approximately two decades and have achieved numerous breakthroughs, experimental results from nanomaterials, which have higher noise levels and poorer repeatability than those from bulk materials, remain a practical issue and challenge many techniques for the quantification of nanomaterials. This work proposes a physical-statistical modeling approach and a global-fitting statistical method that use all the available discrete data or quasi-continuous curves to quantify a few targeted physical parameters, providing more accurate, efficient and reliable parameter estimates with reasonable physical explanations. In the resonance method for measuring the elastic modulus of ZnO nanowires (Zhou et al 2006 Solid State Commun. 139 222-6), our statistical technique gives E = 128.33 GPa instead of the original E = 108 GPa, and unveils a negative bias adjustment f0, attributed to systematic bias in measuring the length of the nanowires. In the electronic measurement of the resistivity of a Mo nanowire (Zach et al 2000 Science 290 2120-3), the proposed method automatically identified the importance of accounting for the Ohmic contact resistance in the model of Ohmic behavior in nanoelectronics experiments. The 95% confidence interval of the resistivity in the proposed one-step procedure is (3.57 ± 0.0274) × 10⁻⁵ Ω cm, which should be a more reliable and precise estimate. The statistical quantification technique should find wide application in obtaining better estimates in the presence of the systematic errors and bias effects that become more significant at the nanoscale.
Dupré, Mathieu; Gilquin, Benoit; Fenaille, François; Feraudet-Tarisse, Cécile; Dano, Julie; Ferro, Myriam; Simon, Stéphanie; Junot, Christophe; Brun, Virginie; Becher, François
2015-08-18
The development of rapid methods for the unambiguous identification and precise quantification of protein toxins in various matrices is essential for public health surveillance. Nowadays, analytical strategies classically rely on sensitive immunological assays, but mass spectrometry constitutes an attractive complementary approach thanks to direct measurement and protein characterization ability. We developed here an innovative multiplex immuno-LC-MS/MS method for the simultaneous and specific quantification of three potential biological warfare agents, ricin, staphylococcal enterotoxin B, and epsilon toxin, in complex human biofluids and food matrices. At least 7 peptides were targeted for each toxin (43 peptides in total) with a quadrupole-Orbitrap high-resolution instrument for exquisite detection specificity. Quantification was performed using stable isotope-labeled toxin standards spiked early into the sample. Lower limits of quantification were determined at or close to 1 ng·mL⁻¹. The whole process was successfully applied to the quantitative analysis of toxins in complex samples such as milk, human urine, and plasma. Finally, we report new data on toxin stability, with no evidence of toxin degradation in milk over a 48 h time frame, allowing relevant quantitative toxin analysis for samples collected in this time range.
Truta, Liliana; Castro, André L; Tarelho, Sónia; Costa, Pedro; Sales, M Goreti F; Teixeira, Helena M
2016-09-05
Depression is among the most prevalent psychiatric disorders in our society, leading to an increase in antidepressant drug consumption that needs to be accurately determined in whole-blood samples in forensic toxicology laboratories. For this purpose, this work presents a new gas chromatography tandem mass spectrometry (GC-MS/MS) method for the simultaneous and rapid determination of 14 common antidepressant compounds in whole blood: 13 antidepressants (amitriptyline, citalopram, clomipramine, dothiepin, fluoxetine, imipramine, mianserin, mirtazapine, nortriptyline, paroxetine, sertraline, trimipramine and venlafaxine) and 1 metabolite (N-desmethylclomipramine). Solid-phase extraction was used prior to chromatographic separation. Chromatographic and MS/MS parameters were selected to improve sensitivity, peak resolution and unequivocal identification of the eluted analytes. Detection was performed on a triple-quadrupole tandem MS in selected ion monitoring (SIM) mode in tandem, using electron impact ionization. Clomipramine-D3 and trimipramine-D3 were used as deuterated internal standards. The validation parameters included linearity, limits of detection, lower limit of quantification, selectivity/specificity, extraction efficiency, carry-over, precision and robustness, following internationally accepted guidelines. Limits of quantification and detection were lower than therapeutic and sub-therapeutic concentration ranges. Overall, the method offered good selectivity, robustness and quick response (<16 min) for typical concentration ranges, at both therapeutic and lethal levels. Copyright © 2016 Elsevier B.V. All rights reserved.
Detection and Quantification of Graphene-Family Nanomaterials in the Environment.
Goodwin, David G; Adeleye, Adeyemi S; Sung, Lipiin; Ho, Kay T; Burgess, Robert M; Petersen, Elijah J
2018-04-17
An increase in production of commercial products containing graphene-family nanomaterials (GFNs) has led to concern over their release into the environment. The fate and potential ecotoxicological effects of GFNs in the environment are currently unclear, partially due to the limited analytical methods for GFN measurements. In this review, the unique properties of GFNs that are useful for their detection and quantification are discussed. The capacity of several classes of techniques to identify and/or quantify GFNs in different environmental matrices (water, soil, sediment, and organisms), after environmental transformations, and after release from the polymer matrix of a product is evaluated. Extraction procedures, and strategies that combine methods for more accurate discrimination of GFNs from environmental interferences as well as from other carbonaceous nanomaterials, are recommended. Overall, a comprehensive review of the techniques available to detect and quantify GFNs is systematically presented to inform the state of the science, guide researchers in selecting the best technique for the system under investigation, and enable further development of GFN metrology in environmental matrices. Two case studies are described to provide practical examples of choosing which techniques to utilize for detection or quantification of GFNs in specific scenarios. Because the available quantitative techniques are somewhat limited, more research is required to distinguish GFNs from other carbonaceous materials and to improve the accuracy and detection limits of GFN measurements at more environmentally relevant concentrations.
Estimating the economic impacts of disruptions to intermodal freight systems traffic.
DOT National Transportation Integrated Search
2013-01-01
The identification and quantification of the economic and social impacts of disruptions is fundamental for sound transportation policy decisions. The impacts due to disruptions on goods movement are significant. Disruptions in their various forms cau...
Quantification of Emission Factor Uncertainty
Emissions factors are important for estimating and characterizing emissions from sources of air pollution. There is no quantitative indication of uncertainty for these emission factors, most factors do not have an adequate data set to compute uncertainty, and it is very difficult...
Costa, Sofia R; Kerry, Brian R; Bardgett, Richard D; Davies, Keith G
2006-12-01
The Pasteuria group of endospore-forming bacteria has been studied as a biocontrol agent of plant-parasitic nematodes. Techniques have been developed for its detection and quantification in soil samples, mainly focusing on observations of endospore attachment to nematodes. Characterization of Pasteuria populations has recently been performed with DNA-based techniques, which usually require the extraction of large numbers of spores. We describe a simple immunological method for the quantification and characterization of Pasteuria populations. Bayesian statistics were used to determine an extraction efficiency of 43% and a detection threshold of 210 endospores g⁻¹ sand. This provided a robust means of estimating numbers of endospores in small-volume samples from a natural system. Based on visual assessment of endospore fluorescence, a quantitative method was developed to characterize endospore populations, which were shown to vary according to their host.
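One simple Bayesian treatment of an extraction efficiency is a conjugate Beta-binomial update on a spike-recovery count. The counts and the uniform prior below are invented for illustration; the abstract does not specify the study's actual model.

```python
# Hypothetical spike-recovery experiment: spores recovered out of a known
# number added, with a uniform Beta(1, 1) prior on the efficiency.
recovered, spiked = 86, 200

alpha0, beta0 = 1, 1                  # uniform prior
alpha = alpha0 + recovered            # posterior Beta parameters
beta = beta0 + spiked - recovered

post_mean = alpha / (alpha + beta)    # posterior mean efficiency
print(round(post_mean, 3))            # -> 0.431
```

The posterior would then be used to correct observed counts, and its spread propagates directly into the uncertainty of the endospore density estimate.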
Liu, Rui; Zhang, Shixi; Wei, Chao; Xing, Zhi; Zhang, Sichun; Zhang, Xinrong
2016-05-17
The unambiguous quantification of biomolecules is of great significance in fundamental biological research as well as practical clinical diagnosis. Due to the lack of a detectable moiety, the direct and highly sensitive quantification of biomolecules is often a "mission impossible". Consequently, tagging strategies to introduce detectable moieties for labeling target biomolecules were invented, which have had a long and significant impact on studies of biomolecules in the past decades. For instance, immunoassays were developed with radioisotope tagging by Yalow and Berson in the late 1950s. The later languishment of this technology can be ascribed almost exclusively to the use of radioactive isotopes, which led to the development of nonradioactive tagging-based assays such as the enzyme-linked immunosorbent assay, fluorescent immunoassay, and chemiluminescent and electrochemiluminescent immunoassays. Despite great success, these strategies suffer from drawbacks such as limited spectral window capacity for multiplex detection and inability to provide absolute quantification of biomolecules. Looking back at this sequence of tagging strategies, an apparent question is why stable isotopes were not used from the start. A reasonable explanation is the lack, at that time, of reliable means for accurate and precise quantification of stable isotopes. The situation has changed greatly, since several atomic mass spectrometric techniques for metal stable isotopes have been developed. Among these, inductively coupled plasma mass spectrometry is an ideal technique for determining metal stable isotope-tagged biomolecules, given its high sensitivity, wide dynamic linear range, and, more importantly, its multiplex and absolute quantification ability. Since the first report published by our group, metal stable isotope tagging has become a revolutionary technique and achieved great success in biomolecule quantification.
An exciting research highlight in this area is the development and application of the mass cytometer, which fully exploits the multiplexing potential of metal stable isotope tagging. It enables the simultaneous detection of dozens of parameters in single cells, accurate immunophenotyping of cell populations, modeling of intracellular signaling networks, and clear discrimination of the function and connections of cell subsets. Metal stable isotope tagging has great potential in research on hematopoiesis, immunology, stem cells, cancer, and drug screening, and has opened a post-fluorescence era of cytometry. Herein, we review the development of biomolecule quantification using metal stable isotope tagging. In particular, the power of multiplex and absolute quantification is demonstrated. We address the advantages, applicable situations, and limitations of metal stable isotope tagging strategies and propose suggestions for future developments. The transfer from enzymatic or fluorescent tagging to metal stable isotope tagging may occur in many aspects of biological and clinical practice in the near future, just as the revolution from radioactive isotope tagging to fluorescent tagging happened in the past.
van Petegem, J W H Jan Hendrik; Wegman, Fred
2014-06-01
About 50% of all road traffic fatalities and 30% of all traffic injuries in the Netherlands occur on rural roads with a speed limit of 80 km/h, and about 50% of these crashes are run-off-road (ROR) crashes. To reduce the number of crashes on this road type, attention should be focused on improving the safety of its infrastructure. By developing a crash prediction model for ROR crashes on rural 80 km/h roads, this study aims to begin providing the new tools needed for a proactive road safety policy for road administrators in the Netherlands. The paper presents the basic framework of the model development, comprising a problem description, the data used, and the method for developing the model. The model is developed using generalized linear modeling in SAS with the negative binomial probability distribution. A stepwise approach, adding one variable at a time, forms the basis for striving for a parsimonious model and for its evaluation. The likelihood-ratio test and the Akaike information criterion are used to assess model fit, and parameter estimates are compared with literature findings to check for consistency. The results comprise two important outcomes: a crash prediction model (CPM) to estimate the relative safety of rural 80 km/h roads in a network, and a small set of estimated effects of traffic volume and road characteristics on ROR crash frequencies. The results may lead to adjustments of the road design guidelines in the Netherlands and to further research on the quantification of risk factors with crash prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.
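The negative binomial (NB2) log-likelihood underlying such a crash prediction model, with a log link on traffic volume and the AIC used for the stepwise comparison, can be sketched as follows. Counts, volumes and parameter values are invented for illustration and are not the paper's fit.

```python
import math

# NB2 log-likelihood for crash counts with mean mu_i = exp(b0 + b1*ln(AADT_i))
# and dispersion alpha (r = 1/alpha).
def nb2_loglik(counts, aadt, b0, b1, alpha):
    ll = 0.0
    r = 1.0 / alpha
    for y, v in zip(counts, aadt):
        mu = math.exp(b0 + b1 * math.log(v))
        p = r / (r + mu)
        ll += (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
               + r * math.log(p) + y * math.log(1 - p))
    return ll

# Toy road segments: observed ROR crash counts and traffic volumes.
counts = [0, 1, 0, 2, 3, 1]
aadt = [800, 1500, 1000, 4000, 6000, 2000]

ll = nb2_loglik(counts, aadt, b0=-8.0, b1=1.0, alpha=0.5)
aic = 2 * 3 - 2 * ll                 # 3 parameters: b0, b1, alpha
print(round(ll, 2), round(aic, 2))
```

In a stepwise procedure, each candidate variable is added to the linear predictor and kept only if the likelihood-ratio test and AIC favor the larger model.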
Quantification of pulmonary vessel diameter in low-dose CT images
NASA Astrophysics Data System (ADS)
Rudyanto, Rina D.; Ortiz de Solórzano, Carlos; Muñoz-Barrutia, Arrate
2015-03-01
Accurate quantification of vessel diameter in low-dose computed tomography (CT) images is important for studying pulmonary diseases, in particular for the diagnosis of vascular diseases and the characterization of morphological vascular remodeling in chronic obstructive pulmonary disease (COPD). In this study, we objectively compare several vessel diameter estimation methods using a physical phantom. Five solid tubes of differing diameters (from 0.898 to 3.980 mm) were embedded in foam, simulating vessels in the lungs. To measure the diameters, we first extracted the vessels using either of two approaches: vessel enhancement using multi-scale Hessian matrix computation, or explicit segmentation using an intensity threshold. We implemented six methods to quantify the diameter: three estimating diameter as a function of the scale used to calculate the Hessian matrix; two calculating an equivalent diameter from the cross-section area obtained by thresholding the intensity and the vesselness response, respectively; and one estimating the diameter of the object using the full width at half maximum (FWHM). We find that the accuracy of frequently used methods estimating vessel diameter from the multi-scale vesselness filter depends on the range and number of scales used. Moreover, these methods still yield a significant error margin in the challenging estimation of the smallest diameter (on the order of, or below, the size of the CT point spread function). The performance of the thresholding-based methods naturally depends on the value of the threshold. Finally, we observe that a simple adaptive thresholding approach can achieve robust and accurate estimation of the diameters of the smallest vessels.
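The FWHM estimator among the compared methods can be sketched on a 1-D intensity profile taken across a vessel. The profile values and pixel spacing below are made up, and this crude version uses nearest-pixel crossings without sub-pixel interpolation.

```python
# Minimal FWHM diameter estimate on a 1-D intensity profile across a vessel.
def fwhm(profile, spacing_mm):
    peak = max(profile)
    base = min(profile)
    half = base + (peak - base) / 2.0
    above = [i for i, v in enumerate(profile) if v >= half]
    # Width between the outermost samples at or above half maximum.
    return (above[-1] - above[0]) * spacing_mm

profile = [20, 22, 30, 80, 150, 160, 148, 75, 28, 21, 20]  # HU-like values
print(fwhm(profile, spacing_mm=0.5))
```

For vessels near the size of the CT point spread function, blurring widens the profile, which is why FWHM (like the other estimators) degrades on the smallest tubes.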
Russek, Natanya S; Jensen, Matthew B
2014-03-01
Ischemic stroke is a leading cause of death and disability, and current treatment options to limit tissue injury and improve recovery remain limited. Cerebral infarction is accompanied by intense brain-tissue inflammation involving many inflammatory cell types that may have both negative and positive effects on outcomes. Many potential neuroprotective and neurorestorative treatments may affect, and be affected by, this inflammatory cell infiltration, so accurate quantification of this tissue response is needed. We performed a systematic review of histological methods to quantify brain-tissue inflammatory cell infiltration after cerebral infarction. We found reports of multiple techniques for quantifying different inflammatory cell types, but no direct comparison studies, and conclude that more research is needed to optimize the assessment of this important stroke outcome.
Bullich, Santiago; Barthel, Henryk; Koglin, Norman; Becker, Georg A; De Santi, Susan; Jovalekic, Aleksandar; Stephens, Andrew W; Sabri, Osama
2017-11-24
Accurate amyloid PET quantification is necessary for monitoring amyloid-beta accumulation and response to therapy. Currently, most studies are analyzed using the static standardized uptake value ratio (SUVR) approach because of its simplicity. However, this approach may be influenced by changes in cerebral blood flow (CBF) or radiotracer clearance. Full tracer kinetic models require arterial blood sampling and dynamic image acquisition. The objectives of this work were: (1) to validate a non-invasive kinetic modeling approach for ¹⁸F-florbetaben PET using an acquisition protocol with the best compromise between quantification accuracy and simplicity, and (2) to assess the impact of CBF changes and radiotracer clearance on SUVRs and non-invasive kinetic modeling data in ¹⁸F-florbetaben PET. Methods: Data from twenty subjects (10 patients with probable Alzheimer's dementia, 10 healthy volunteers) were used to compare the binding potential (BPND) obtained from the full kinetic analysis to the SUVR and to non-invasive tracer kinetic methods (the simplified reference tissue model (SRTM) and the multilinear reference tissue model 2 (MRTM2)). Different approaches using shortened or interrupted acquisitions were compared to the results of the full acquisition (0-140 min). Simulations were carried out to assess the effect of CBF and radiotracer clearance changes on SUVRs and non-invasive kinetic modeling outputs. Results: A 0-30 and 120-140 min dual time-window acquisition protocol, using appropriate interpolation of the missing time points, provided the best compromise between patient comfort and quantification accuracy. Excellent agreement was found between BPND obtained using the full and dual time-window (2TW) acquisition protocols (BPND,2TW = 0.01 + 1.00·BPND,FULL, R² = 0.97 (MRTM2); BPND,2TW = 0.05 + 0.92·BPND,FULL, R² = 0.93 (SRTM)). Simulations showed a limited impact of CBF and radiotracer clearance changes on MRTM parameters and SUVRs.
Conclusion: This study demonstrates accurate non-invasive kinetic modeling of 18F-florbetaben PET data using a dual time-window acquisition protocol, thus providing a good compromise between quantification accuracy, scan duration and patient burden. The influence of CBF and radiotracer clearance changes on amyloid-beta load estimates was small. For most clinical research applications, the SUVR approach is appropriate. However, for longitudinal studies in which maximum quantification accuracy is desired, this non-invasive dual time-window acquisition protocol and kinetic analysis is recommended. Copyright © 2017 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
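The agreement quoted above is an ordinary least-squares line between the two BPND estimates. A minimal sketch of such a regression and its R², using hypothetical BPND values rather than the study's data:

```python
import numpy as np

# Hypothetical BPND values from the full 0-140 min acquisition
# (illustrative only; not the study's data).
bp_full = np.array([0.05, 0.20, 0.45, 0.70, 0.95, 1.20])
# Simulated dual-time-window estimates scattered around the identity line.
noise = np.array([0.02, -0.01, 0.03, -0.02, 0.01, -0.03])
bp_2tw = 0.01 + 1.00 * bp_full + noise

# Ordinary least-squares fit: bp_2tw = slope * bp_full + intercept
slope, intercept = np.polyfit(bp_full, bp_2tw, 1)

# Coefficient of determination R^2
resid = bp_2tw - (slope * bp_full + intercept)
r2 = 1.0 - resid.var() / bp_2tw.var()
```

A slope near 1 and intercept near 0 with high R², as reported for the MRTM2 analysis, indicate the shortened protocol reproduces the full-acquisition estimates.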
Nitric Oxide Analyzer Quantification of Plant S-Nitrosothiols.
Hussain, Adil; Yun, Byung-Wook; Loake, Gary J
2018-01-01
Nitric oxide (NO) is a small diatomic molecule that regulates multiple physiological processes in animals, plants, and microorganisms. In animals, it is involved in vasodilation and neurotransmission and is present in exhaled breath. In plants, it regulates both plant immune function and numerous developmental programs. The high reactivity and short half-life of NO and the cross-reactivity of its various derivatives make its quantification difficult. Different methods based on colorimetric, fluorometric, and chemiluminescent detection of NO and its derivatives are available, but all of them have significant limitations. Here we describe a method for the chemiluminescence-based quantification of NO in plants using ozone-chemiluminescence technology. This method provides a sensitive, robust, and flexible approach for determining the levels of NO and its signaling products, protein S-nitrosothiols.
Estimation of Uncertainties in Stage-Discharge Curve for an Experimental Himalayan Watershed
NASA Astrophysics Data System (ADS)
Kumar, V.; Sen, S.
2016-12-01
Various water resource projects developed on rivers originating in the Himalayan region, the "Water Tower of Asia", play an important role in downstream development. Flow measurements at the desired river site are critical for river engineers and hydrologists in water resources planning and management, flood forecasting, reservoir operation and flood inundation studies. However, accurate discharge assessment of these mountainous rivers is costly, tedious and frequently dangerous to operators during flood events. Currently, in India, discharge estimation relies on the stage-discharge relationship known as the rating curve. This relationship is affected by a high degree of uncertainty. Estimating the uncertainty of a rating curve remains a relevant challenge because it is not easy to parameterize. The main sources of rating curve uncertainty are errors arising from inaccurate discharge measurement, variation in hydraulic conditions and depth measurement. In this study our objective is to obtain the rating curve parameters that best fit the limited record of observations and to estimate the uncertainties at different depths obtained from the rating curve. The rating curve parameters of the standard power law are estimated for three different streams of the Aglar watershed, located in the Lesser Himalayas, by a maximum-likelihood estimator. Quantification of uncertainties in the developed rating curves is obtained from the estimated variances and covariances of the rating curve parameters. Results showed that the uncertainties varied with catchment behavior, with errors ranging between 0.006 and 1.831 m³/s. Discharge uncertainty in the Aglar watershed streams depends significantly on the extent of extrapolation outside the range of observed water levels. Extrapolation analysis confirmed that extrapolating more than 15% beyond the maximum and 5% below the minimum observed discharges is not recommended for these mountainous gauging sites.
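The approach described above rests on fitting the standard power-law rating curve Q = a(h − h0)^b and reading parameter uncertainty off the fitted variance-covariance matrix. A minimal sketch with hypothetical stage-discharge data, using least-squares covariance from `curve_fit` as a stand-in for the maximum-likelihood machinery:

```python
import numpy as np
from scipy.optimize import curve_fit

# Standard power-law rating curve: Q = a * (h - h0)**b
# Hypothetical stage h (m) and discharge Q (m^3/s) pairs, illustrative only.
h = np.array([0.30, 0.45, 0.60, 0.80, 1.00, 1.25])
q = np.array([0.08, 0.25, 0.55, 1.10, 1.90, 3.20])

def rating(h, a, h0, b):
    return a * (h - h0) ** b

# Least-squares fit; pcov approximates the parameter variance-covariance
# matrix from which discharge uncertainty bands can be propagated.
popt, pcov = curve_fit(rating, h, q, p0=[3.0, 0.2, 1.7],
                       bounds=([0.1, 0.0, 0.5], [10.0, 0.29, 3.0]))
a, h0, b = popt
param_sd = np.sqrt(np.diag(pcov))  # standard errors of a, h0, b
```

Predictions far outside the observed stage range inherit rapidly growing variance from the parameter covariance, which is the formal basis for the extrapolation caution above.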
NASA Astrophysics Data System (ADS)
Dube, Timothy; Mutanga, Onisimo
2015-03-01
Aboveground biomass estimation is critical in understanding forest contribution to regional carbon cycles. Despite the successful application of high spatial and spectral resolution sensors in aboveground biomass (AGB) estimation, there are challenges related to high acquisition costs, small area coverage, multicollinearity and limited availability. These challenges hamper successful regional-scale AGB quantification. The aim of this study was to assess the utility of the newly launched medium-resolution multispectral Landsat 8 Operational Land Imager (OLI) dataset, with its large swath width, in quantifying AGB in a forest plantation. We applied different sets of spectral analysis (test I: spectral bands; test II: spectral vegetation indices; and test III: spectral bands + spectral vegetation indices) in testing the utility of Landsat 8 OLI using two non-parametric algorithms: stochastic gradient boosting and random forest ensembles. The results of the study show that the Landsat 8 OLI dataset provides better AGB estimates for Eucalyptus dunii, Eucalyptus grandis and Pinus taeda, especially when using the extracted spectral information together with the derived spectral vegetation indices. We also noted that incorporating the optimal subset of the most important selected Landsat 8 OLI bands improved AGB accuracies. We compared Landsat 8 OLI AGB estimates with Landsat 7 ETM+ estimates, and the latter yielded lower estimation accuracies. Overall, this study demonstrates the potential and strength of applying the relatively affordable and readily available Landsat 8 OLI dataset, with its large swath width (185 km), in precisely estimating AGB. This strength of the Landsat OLI dataset is crucial especially in sub-Saharan Africa, where high-resolution remote sensing data availability remains a challenge.
True versus Apparent Malaria Infection Prevalence: The Contribution of a Bayesian Approach
Claes, Filip; Van Hong, Nguyen; Torres, Kathy; Mao, Sokny; Van den Eede, Peter; Thi Thinh, Ta; Gamboa, Dioni; Sochantha, Tho; Thang, Ngo Duc; Coosemans, Marc; Büscher, Philippe; D'Alessandro, Umberto; Berkvens, Dirk; Erhart, Annette
2011-01-01
Aims To present a new approach for estimating the “true prevalence” of malaria and apply it to datasets from Peru, Vietnam, and Cambodia. Methods Bayesian models were developed for estimating both the malaria prevalence using different diagnostic tests (microscopy, PCR & ELISA), without the need of a gold standard, and the tests' characteristics. Several sources of information, i.e. data, expert opinions and other sources of knowledge, can be integrated into the model. This approach, which results in an optimal and harmonized estimate of malaria infection prevalence with no conflict between the different sources of information, was tested on data from Peru, Vietnam and Cambodia. Results Malaria sero-prevalence was relatively low in all sites, with ELISA showing the highest estimates. The sensitivities of microscopy and ELISA were statistically lower in Vietnam than in the other sites. Similarly, the specificities of microscopy, ELISA and PCR were significantly lower in Vietnam than in the other sites. In Vietnam and Peru, microscopy was closer to the “true” estimate than the other two tests, while, as expected, ELISA, with its lower specificity, usually overestimated the prevalence. Conclusions Bayesian methods are useful for analyzing prevalence results when no gold standard diagnostic test is available. Though some results are expected, e.g. PCR being more sensitive than microscopy, a standardized and context-independent quantification of the diagnostic tests' characteristics (sensitivity and specificity) and the underlying malaria prevalence may be useful for comparing different sites. Indeed, the use of a single diagnostic technique could strongly bias the prevalence estimation. This limitation can be circumvented by using a Bayesian framework taking into account the imperfect characteristics of the currently available diagnostic tests. As discussed in the paper, this approach may further support global malaria burden estimation initiatives. PMID:21364745
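The core idea, recovering true prevalence from apparent test positives given imperfect sensitivity and specificity, can be sketched with a single-test grid posterior. This is a drastic simplification of the paper's multi-test model, and all numbers below are hypothetical:

```python
import numpy as np
from scipy.stats import binom

# Hypothetical survey: k test-positives out of n with one imperfect test.
n, k = 500, 60
se, sp = 0.95, 0.90   # assumed sensitivity and specificity

# P(test positive | true prevalence pi) = se*pi + (1 - sp)*(1 - pi)
pi = np.linspace(0.0, 1.0, 1001)
p_pos = se * pi + (1.0 - sp) * (1.0 - pi)

# Uniform prior on pi; posterior proportional to the binomial likelihood.
post = binom.pmf(k, n, p_pos)
post /= post.sum()
pi_mean = float(np.sum(pi * post))  # posterior mean of the true prevalence
```

With a specificity below 1, part of the apparent 12% positivity is false positives, so the posterior mean for the true prevalence falls below the apparent prevalence, mirroring the ELISA overestimation noted above.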
Jiřík, Miroslav; Bartoš, Martin; Tomášek, Petr; Malečková, Anna; Kural, Tomáš; Horáková, Jana; Lukáš, David; Suchý, Tomáš; Kochová, Petra; Hubálek Kalbáčová, Marie; Králíčková, Milena; Tonar, Zbyněk
2018-06-01
Quantification of the structure and composition of biomaterials using micro-CT requires image segmentation due to the low contrast and overlapping radiopacity of biological materials. The amount of bias introduced by segmentation procedures is generally unknown. We aim to develop software that generates three-dimensional models of fibrous and porous structures with known volumes, surfaces, lengths, and object counts in fibrous materials, and to provide a software tool that calibrates quantitative micro-CT assessments. Virtual image stacks were generated using the newly developed software TeIGen, enabling the simulation of micro-CT scans of unconnected tubes, connected tubes, and porosities. A realistic noise generator was incorporated. Forty image stacks were evaluated using micro-CT, and the error between the known true data and the estimates was quantified. Starting with geometric primitives, the error of the numerical estimation of surfaces and volumes was eliminated, thereby enabling the quantification of volumes and surfaces of colliding objects. Analysis of the sensitivity of thresholding to the parameters of the generated test image sets revealed the effects of decreasing resolution and increasing noise on the accuracy of micro-CT quantification. The error increased with decreasing resolution once the voxel size exceeded 1/10 of the typical object size, which indicates the smallest details that can still be reliably quantified. Open-source software for calibrating quantitative micro-CT assessments by producing and saving virtually generated image data sets with known morphometric data was made freely available to researchers involved in the morphometry of three-dimensional fibrillar and porous structures in micro-CT scans. © 2018 Wiley Periodicals, Inc.
Misra, Ankita; Shukla, Pushpendra Kumar; Kumar, Bhanu; Chand, Jai; Kushwaha, Poonam; Khalid, Md.; Singh Rawat, Ajay Kumar; Srivastava, Sharad
2017-01-01
Background: Gloriosa superba L. (Colchicaceae) is used as adjuvant therapy in gout for its potential antimitotic activity due to its high content of colchicine(s) alkaloids. Objective: This study aimed to develop an easy, cheap, precise, and accurate validated high-performance thin-layer chromatographic (HPTLC) method for simultaneous quantification of the bioactive alkaloids colchicine and gloriosine in G. superba L. and to identify its elite chemotype(s) from the Sikkim Himalayas (India). Methods: The HPTLC method was developed using a mobile phase of chloroform:acetone:diethylamine (5:4:1) at a λmax of 350 nm. Results: Five germplasms were collected from the targeted region, and on morpho-anatomical inspection no significant variation was observed among them. Quantification data reveal that the content of colchicine (Rf: 0.72) and gloriosine (Rf: 0.61) ranges from 0.035% to 0.150% and from 0.006% to 0.032% (dry wt. basis), respectively. Linearity of the method was obtained in the concentration range of 100–400 ng/spot of marker(s), exhibiting regression coefficients of 0.9987 (colchicine) and 0.9983 (gloriosine) with optimum recoveries of 97.79% ± 3.86% and 100.023% ± 0.01%, respectively. Limits of detection and quantification were 6.245 and 18.926 ng (colchicine) and 8.024 and 24.316 ng (gloriosine), respectively. Two germplasms, namely NBG-27 and NBG-26, were found to be elite chemotypes for both markers. Conclusion: The developed method is validated in terms of accuracy, recovery, and precision as per the ICH guidelines (2005) and can be adopted for the simultaneous quantification of colchicine and gloriosine in phytopharmaceuticals. In addition, this study is relevant for exploring chemotypic variability in metabolite content for commercial and medicinal purposes. PMID:29142436
Sun, Bing; Shen, Feng; McCalla, Stephanie E; Kreutz, Jason E; Karymov, Mikhail A; Ismagilov, Rustem F
2013-02-05
Here we used a SlipChip microfluidic device to evaluate the performance of digital reverse transcription-loop-mediated isothermal amplification (dRT-LAMP) for quantification of HIV viral RNA. Tests are needed for monitoring HIV viral load to control the emergence of drug resistance and to diagnose acute HIV infections. In resource-limited settings, in vitro measurement of HIV viral load in a simple format is especially needed, and single-molecule counting using a digital format could provide a potential solution. We showed here that when one-step dRT-LAMP is used for quantification of HIV RNA, the digital count is lower than expected and is limited by the yield of desired cDNA. We were able to overcome the limitations by developing a microfluidic protocol to manipulate many single molecules in parallel through a two-step digital process. In the first step we compartmentalize the individual RNA molecules (based on Poisson statistics) and perform reverse transcription on each RNA molecule independently to produce DNA. In the second step, we perform the LAMP amplification on all individual DNA molecules in parallel. Using this new protocol, we increased the absolute efficiency (the ratio between the concentration calculated from the actual count and the expected concentration) of dRT-LAMP 10-fold, from ∼2% to ∼23%, by (i) using a more efficient reverse transcriptase, (ii) introducing RNase H to break up the DNA:RNA hybrid, and (iii) adding only the BIP primer during the RT step. We also used this two-step method to quantify HIV RNA purified from four patient samples and found that in some cases, the quantification results were highly sensitive to the sequence of the patient's HIV RNA. We learned the following three lessons from this work: (i) digital amplification technologies, including dLAMP and dPCR, may give adequate dilution curves and yet have low efficiency, thereby providing quantification values that underestimate the true concentration. 
Careful validation is essential before a method is considered to provide absolute quantification; (ii) the sensitivity of dLAMP to the sequence of the target nucleic acid necessitates additional validation with patient samples carrying the full spectrum of mutations; (iii) for multistep digital amplification chemistries, such as a combination of reverse transcription with amplification, microfluidic devices may be used to decouple these steps from one another and to perform them under different, individually optimized conditions for improved efficiency.
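The digital count rests on Poisson statistics: if a fraction p of compartments is positive, the mean molecules per compartment is λ = −ln(1 − p). A minimal sketch of this estimate and of the "absolute efficiency" ratio defined above, with hypothetical well counts and concentrations:

```python
import numpy as np

# Digital quantification: compartments are scored positive/negative, and
# Poisson loading gives the mean molecules per compartment as
#   lam = -ln(1 - p),  where p = fraction of positive compartments.
def digital_concentration(n_positive, n_total, volume_ul):
    p = n_positive / n_total
    lam = -np.log(1.0 - p)        # molecules per compartment
    return lam / volume_ul        # molecules per microliter

# Hypothetical run: 230 of 1280 wells positive, 5 nL (0.005 uL) wells.
measured = digital_concentration(230, 1280, 0.005)

# "Absolute efficiency" as defined above: measured over expected concentration.
expected = 170.0                  # molecules/uL loaded (hypothetical)
efficiency = measured / expected
```

An efficiency well below 1, as in the ~2% one-step result above, means the dilution curve can still look linear while the absolute count badly underestimates the true concentration.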
Uncertainty quantification of measured quantities for a HCCI engine: composition or temperatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petitpas, Guillaume; Whitesides, Russell
UQHCCI_1 computes the measurement uncertainties of an HCCI engine test bench using the pressure trace and the estimated uncertainties of the measured quantities as inputs, then propagates them through Bayesian inference and a mixing model.
NASA Astrophysics Data System (ADS)
Zhu, Wei; Lin, Che-Jen; Wang, Xun; Sommar, Jonas; Fu, Xuewu; Feng, Xinbin
2016-04-01
Reliable quantification of air-surface fluxes of elemental Hg vapor (Hg⁰) is crucial for understanding mercury (Hg) global biogeochemical cycles. There have been extensive measurements and modeling efforts devoted to estimating the exchange fluxes between the atmosphere and various surfaces (e.g., soil, canopies, water, snow, etc.) in the past three decades. However, large uncertainties remain due to the complexity of Hg⁰ bidirectional exchange, limitations of flux quantification techniques and challenges in model parameterization. In this study, we provide a critical review of the state of the science in the atmosphere-surface exchange of Hg⁰. Specifically, the advancement of flux quantification techniques, the mechanisms driving air-surface Hg exchange and modeling efforts are presented. Due to the semi-volatile nature of Hg⁰ and the redox transformation of Hg in environmental media, Hg deposition and evasion are influenced by multiple environmental variables including seasonality, vegetative coverage and its life cycle, temperature, light, moisture, atmospheric turbulence and the presence of reactants (e.g., O₃, radicals, etc.). However, the effects of these processes on flux have not been fundamentally and quantitatively determined, which limits the accuracy of flux modeling. We compile an up-to-date global observational flux database and discuss the implications of the flux data for the global Hg budget. Mean Hg⁰ fluxes obtained by micrometeorological measurements do not appear to be significantly greater than the fluxes measured by dynamic flux chamber methods over unpolluted surfaces (p = 0.16, one-tailed, Mann-Whitney U test). The spatiotemporal coverage of existing Hg⁰ flux measurements is highly heterogeneous, with large data gaps in multiple continents (Africa, South Asia, the Middle East, South America and Australia). The magnitude of the evasion flux is strongly enhanced by human activities, particularly at contaminated sites. Hg⁰ flux observations in East Asia are comparatively larger in magnitude than in the rest of the world, suggesting substantial re-emission of previously deposited mercury from anthropogenic sources. The Hg⁰ exchange over pristine surfaces (e.g., background soil and water) and vegetation needs better constraints for global analyses of the atmospheric Hg budget. The existing knowledge gaps and the associated research needs for future measurements and modeling efforts for the air-surface exchange of Hg⁰ are discussed.
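The method comparison quoted above (p = 0.16, one-tailed Mann-Whitney U) can be reproduced in form with scipy; the flux values below are illustrative, not the review's compiled database:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Illustrative mean elemental-Hg fluxes (ng m^-2 h^-1) per site for each
# measurement method; hypothetical values only.
micromet = np.array([1.1, 0.8, 2.3, 1.5, 0.6, 1.9, 1.2])
chamber = np.array([0.9, 1.4, 0.7, 1.1, 1.6, 0.8, 1.0])

# One-tailed test: are micrometeorological fluxes stochastically greater?
stat, p = mannwhitneyu(micromet, chamber, alternative='greater')
# A p-value above 0.05 would, as in the review, not support a significant
# difference between the two measurement approaches.
```

The rank-based test makes no normality assumption, which suits flux datasets that are heterogeneous across sites and methods.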
Tey, Wei Keat; Kuang, Ye Chow; Ooi, Melanie Po-Leen; Khoo, Joon Joon
2018-03-01
Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses. This study proposes an automated quantification system for measuring the amount of interstitial fibrosis in renal biopsy images as a consistent basis of comparison among pathologists. The system extracts and segments the renal tissue structures based on colour information and structural assumptions of the tissue structures. The regions in the biopsy representing the interstitial fibrosis are deduced through the elimination of non-interstitial fibrosis structures from the biopsy area and quantified as a percentage of the total area of the biopsy sample. A ground truth image dataset has been manually prepared by consulting an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists have demonstrated a good correlation in quantification result between the automated system and the pathologists' visual evaluation. Experiments investigating the variability in pathologists also proved the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification.
The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural features extraction from the images. In particular, the renal glomerulus identification is based on a multiscale textural feature analysis and a support vector machine. The regions in the biopsy representing interstitial fibrosis are deduced through the elimination of non-interstitial fibrosis structures from the biopsy area. The experiments conducted evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect introduced by the automated quantification system on the pathologists' diagnosis. A 40-image ground truth dataset has been manually prepared by consulting an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists have demonstrated an average error of 9 percentage points in quantification result between the automated system and the pathologists' visual evaluation. Experiments investigating the variability in pathologists involving samples from 70 kidney patients also proved the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification. The accuracy of the proposed quantification system has been validated with the ground truth dataset and compared against the pathologists' quantification results. It has been shown that the correlation between different pathologists' estimation of interstitial fibrosis area has significantly improved, demonstrating the effectiveness of the quantification system as a diagnostic aide. Copyright © 2017 Elsevier B.V. All rights reserved.
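The area-fraction score at the heart of such a system, fibrosis area divided by total tissue area, can be sketched from a labelled segmentation mask; the label convention below is hypothetical:

```python
import numpy as np

# Area-fraction score: fibrosis pixels over total tissue pixels.
# Hypothetical label convention: 0 = background, 1 = other tissue,
# 2 = interstitial fibrosis.
def fibrosis_percentage(labels):
    tissue = np.count_nonzero(labels > 0)
    fibrosis = np.count_nonzero(labels == 2)
    return 100.0 * fibrosis / tissue

mask = np.zeros((100, 100), dtype=np.uint8)
mask[10:90, 10:90] = 1   # 80 x 80 tissue region
mask[20:40, 20:60] = 2   # 20 x 40 fibrotic patch inside the tissue
score = fibrosis_percentage(mask)  # 800 / 6400 pixels -> 12.5
```

Because the score is a deterministic function of the segmentation, any two readers using the same mask get the same number, which is the source of the improved inter-pathologist correlation reported above.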
Quantifying selective alignment of ensemble nitrogen-vacancy centers in (111) diamond
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tahara, Kosuke; Ozawa, Hayato; Iwasaki, Takayuki
2015-11-09
Selective alignment of nitrogen-vacancy (NV) centers in diamond is an important technique for its applications. Quantification of the alignment ratio is necessary to design optimized diamond samples. However, this is not a straightforward problem for a dense ensemble of NV centers. We estimate the alignment ratio of ensemble NV centers along the [111] direction in (111) diamond by optically detected magnetic resonance measurements. Diamond films deposited by N₂-doped chemical vapor deposition have NV center densities over 1 × 10¹⁵ cm⁻³ and alignment ratios over 75%. Although the spin coherence time (T₂) is limited to a few μs by electron spins of nitrogen impurities, the combination of selective alignment and high density can be a possible way to optimize NV-containing diamond samples for sensing applications.
Quantification of uncertainty for fluid flow in heterogeneous petroleum reservoirs
NASA Astrophysics Data System (ADS)
Zhang, Dongxiao
Detailed description of the heterogeneity of oil/gas reservoirs is needed to make performance predictions of oil/gas recovery. However, only limited measurements at a few locations are usually available. This combination of significant spatial heterogeneity with incomplete information about it leads to uncertainty about the values of reservoir properties and thus, to uncertainty in estimates of production potential. The theory of stochastic processes provides a natural method for evaluating these uncertainties. In this study, we present a stochastic analysis of transient, single phase flow in heterogeneous reservoirs. We derive general equations governing the statistical moments of flow quantities by perturbation expansions. These moments can be used to construct confidence intervals for the flow quantities (e.g., pressure and flow rate). The moment equations are deterministic and can be solved numerically with existing solvers. The proposed moment equation approach has certain advantages over the commonly used Monte Carlo approach.
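A Monte Carlo benchmark illustrates what the moment equations target: the mean and variance of a flow quantity under uncertain reservoir properties. The toy 1-D Darcy relation below is an illustrative stand-in for the paper's transient single-phase flow equations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncertain log-permeability Y = ln(k) propagated through a 1-D Darcy flux
# q = k * dh / L. (Hypothetical stand-in for the transient flow problem.)
mean_Y, sd_Y = -2.0, 0.5   # hypothetical log-permeability statistics
dh, L = 1.0, 10.0          # head drop (m) over domain length (m)

Y = rng.normal(mean_Y, sd_Y, 100_000)
q = np.exp(Y) * dh / L

# First two statistical moments of the flow quantity -- what a
# moment-equation method computes directly, without sampling.
q_mean, q_var = q.mean(), q.var()

# Closed-form lognormal moment for comparison:
q_mean_exact = np.exp(mean_Y + sd_Y**2 / 2.0) * dh / L
```

The sampled mean converges to the closed-form value only slowly (error ~ 1/√N), which is the cost the deterministic moment-equation approach avoids.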
Byrne, Patrick; Mostafaei, Farshad; Liu, Yingzi; Blake, Scott P; Koltick, David; Nie, Linda H
2016-05-01
The feasibility and methodology of using a compact DD generator-based neutron activation analysis system to measure aluminum in hand bone have been investigated. Monte Carlo simulations were used to simulate the moderator, reflector, and shielding assembly and to estimate the radiation dose. A high-purity germanium (HPGe) detector was used to detect the Al gamma-ray signals. The minimum detectable limit (MDL) was found to be 11.13 μg g⁻¹ dry bone (ppm). An additional HPGe detector would improve the MDL by a factor of 1.4, to 7.9 ppm. The equivalent dose delivered to the irradiated hand was calculated by Monte Carlo to be 11.9 mSv. In vivo bone aluminum measurement with the DD generator was found to be feasible in the general population with an acceptable dose to the subject.
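The quoted factor-of-1.4 improvement from a second detector follows from counting statistics: detection efficiency doubles, and a Currie-style minimum detectable limit scales roughly as 1/√(efficiency), i.e. by √2 ≈ 1.41. A one-line sketch of that scaling:

```python
import math

# Counting statistics: with n identical detectors both the signal sensitivity
# and the background counts scale with n, so a Currie-style minimum detectable
# limit scales approximately as 1/sqrt(n).
def scaled_mdl(mdl_single, n_detectors):
    return mdl_single / math.sqrt(n_detectors)

two_detector_mdl = scaled_mdl(11.13, 2)  # 11.13 ppm -> about 7.9 ppm
```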
Uncertainty quantification and validation of combined hydrological and macroeconomic analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hernandez, Jacquelynne; Parks, Mancel Jordan; Jennings, Barbara Joan
2010-09-01
Changes in climate can lead to instabilities in physical and economic systems, particularly in regions with marginal resources. Global climate models indicate increasing global mean temperatures over the decades to come, and uncertainty in the local to national impacts means perceived risks will drive planning decisions. Agent-based models provide one of the few ways to evaluate the potential changes in behavior in coupled social-physical systems and to quantify and compare risks. The current generation of climate impact analyses provides estimates of the economic cost of climate change for a limited set of climate scenarios that account for a small subset of the dynamics and uncertainties. To better understand the risk to national security, the next generation of risk assessment models must represent global stresses, population vulnerability to those stresses, and the uncertainty in population responses and outcomes that could have a significant impact on U.S. national security.
Liu, Can-Can; Xu, Yun-Hui; Yuan, Shuai; Xu, Yu; Hua, Mo-Li
2018-04-01
A new enzyme-linked immunosorbent assay (ELISA) method for the quantitative determination of monoester-type aconitine alkaloids was developed. The antibodies raised against the immunogen of benzoylmesaconine (BM) selectively bound benzoylaconitine-type alkaloids with an ester bond (14-benzoyl-), especially benzoylhypaconine (BH, 140.02% cross-reactivity). The effective working range for BH was 1 ng/ml to 5 μg/ml; the limits of detection and quantification were 0.35 and 0.97 ng/ml, respectively. The CV values for intra-day and inter-day assays and the recovery ratios were within acceptable ranges. The results of stability experiments were also satisfactory. This validated method was employed for a pharmacokinetic study of BH in rats, and the oral bioavailability was estimated to be 16.3%.
Whiteaker, Jeffrey R.; Zhao, Lei; Yan, Ping; Ivey, Richard G.; Voytovich, Uliana J.; Moore, Heather D.; Lin, Chenwei; Paulovich, Amanda G.
2015-01-01
In most cell signaling experiments, analytes are measured one Western blot lane at a time in a semiquantitative and often poorly specific manner, limiting our understanding of network biology and hindering the translation of novel therapeutics and diagnostics. We show the feasibility of using multiplex immuno-MRM for phospho-pharmacodynamic measurements, establishing the potential for rapid and precise quantification of cell signaling networks. A 69-plex immuno-MRM assay targeting the DNA damage response network was developed and characterized by response curves and determinations of intra- and inter-assay repeatability. The linear range was ≥3 orders of magnitude, the median limit of quantification was 2.0 fmol/mg, the median intra-assay variability was 10% CV, and the median inter-assay variability was 16% CV. The assay was applied in proof-of-concept studies to immortalized and primary human cells and surgically excised cancer tissues to quantify exposure–response relationships and the effects of a genomic variant (ATM kinase mutation) or a pharmacologic (kinase) inhibitor. The study shows the utility of multiplex immuno-MRM for simultaneous quantification of phosphorylated and nonmodified peptides, demonstrating the feasibility of developing targeted assay panels for cell signaling networks. PMID:25987412
Watanabe, Louis Patrick
2017-01-01
Obesity is a disease that has reached epidemic proportions in the United States and has prompted international legislation in an attempt to curtail its prevalence. Despite the fact that one of the most prescribed treatment options for obesity is exercise, the genetic mechanisms underlying exercise response in individuals are still largely unknown. The fruit fly Drosophila melanogaster is a promising new model for studying exercise genetics. Currently, the lack of an accurate method to quantify the amount of exercise performed by the animals is limiting the utility of the Drosophila model for exercise genetics research. To address this limitation, we developed the Rotational Exercise Quantification System (REQS), a novel apparatus that is able to simultaneously induce exercise in flies while recording their activity levels. Thus, the REQS provides a method to standardize Drosophila exercise and ensure that all animals irrespective of genotype and sex experience the same level of exercise. Here, we provide a basic characterization of the REQS, validate its measurements using video-tracking technology, illustrate its potential use by presenting a comparison of two different exercise regimes, and demonstrate that it can be used to detect genotype-dependent variation in activity levels. PMID:29016615
Maruenda, Helena; Vico, Maria del Lujan; Householder, J Ethan; Janovec, John P; Cañari, Cristhian; Naka, Angelica; Gonzalez, Ana E
2013-05-01
This study provides the first chemical investigation of wild-harvested fruits of Vanilla pompona ssp. grandiflora (Lindl.) Soto-Arenas developed in their natural habitat in the Peruvian Amazon. Flowers were hand-pollinated and the resulting fruits were analysed at different developmental stages using an HPLC-DAD method validated for the quantification of glucovanillin and seven other compounds. The method showed satisfactory linearity (r(2)>0.9969), precision (coefficient of variation <2%), recoveries (70-100%), limit of detection (0.008-0.212 μg/ml), and limit of quantification (0.027-0.707 μg/ml). The evaluation of crude and enzyme-hydrolyzed Soxhlet-extracted samples confirmed the leading role of glucosides in fruit development. LC-ESI-MS studies corroborated the identities of four glucosides and seven aglycones, among them vanillin (5.7/100 g), 4-hydroxybenzyl alcohol (3.6/100 g), and anisyl alcohol (7.1/100 g) were found in high concentrations. The attractive flavor/aroma profile exhibited by wild V. pompona fruits supports studies focused on the development of this species as a specialty crop. Copyright © 2012 Elsevier Ltd. All rights reserved.
Sarais, Giorgia; Caboni, Pierluigi; Sarritzu, Erika; Russo, Mariateresa; Cabras, Paolo
2008-05-14
Neem-based insecticides containing azadirachtin and related azadirachtoids are widely used in agriculture. Here, we report an analytical method for the rapid and accurate quantification of the insecticides azadirachtin A and B and other azadirachtoids such as salannin, nimbin, and their deacetylated analogues on tomatoes and peaches. Azadirachtoids were extracted from fruits and vegetables with acetonitrile. Using a high-performance liquid chromatography/electrospray ionization tandem mass spectrometer, azadirachtoids were selectively detected by monitoring the multiple reaction transitions of sodium adduct precursor ions. For azadirachtin A, calibration was linear over a working range of 1-1000 microg/L with r > 0.996. The limit of detection and limit of quantification for azadirachtin A were 0.4 and 0.8 microg/kg, respectively. The presence of interfering compounds in the peach and tomato extracts was evaluated and found to be minimal. Because of the linear behavior, it was concluded that the multiple reaction transitions of sodium adduct ions can be used for analytical purposes, that is, for the identification and quantification of azadirachtin A and B and related azadirachtoids in fruit and vegetable extracts at trace levels.
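Several entries in this collection report limits of detection and quantification from linear calibration data. One common way to derive them (the ICH-style 3.3σ/S and 10σ/S convention; the abstracts do not state which procedure their authors actually used, and the calibration series below is hypothetical) can be sketched as:

```python
import numpy as np

def lod_loq(conc, signal):
    """Estimate LOD/LOQ from a linear calibration series using the
    common ICH-style formulas LOD = 3.3*sigma/S and LOQ = 10*sigma/S,
    where S is the slope and sigma the residual standard deviation."""
    conc = np.asarray(conc, dtype=float)
    signal = np.asarray(signal, dtype=float)
    slope, intercept = np.polyfit(conc, signal, 1)
    residuals = signal - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)  # two fitted parameters
    return 3.3 * sigma / slope, 10 * sigma / slope

# Hypothetical calibration series (microg/L vs. detector response)
conc = [1, 10, 50, 100, 500, 1000]
signal = [12, 105, 515, 1040, 5070, 10120]
lod, loq = lod_loq(conc, signal)
```

By construction LOQ is always 10/3.3 ≈ 3× the LOD, which matches the roughly 2-3× spacing between the LOD and LOQ values reported throughout these abstracts.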
Hua, Marti Z; Feng, Shaolong; Wang, Shuo; Lu, Xiaonan
2018-08-30
We report the development of a molecularly imprinted polymers-surface-enhanced Raman spectroscopy (MIPs-SERS) method for rapid detection and quantification of a herbicide residue 2,4-dichlorophenoxyacetic acid (2,4-D) in milk. MIPs were synthesized via bulk polymerization and utilized as solid phase extraction sorbent to selectively extract and enrich 2,4-D from milk. Silver nanoparticles were synthesized to facilitate the collection of SERS spectra of the extracts. Based on the characteristic band intensity of 2,4-D (391 cm -1 ), the limit of detection was 0.006 ppm and the limit of quantification was 0.008 ppm. A simple logarithmic working range (0.01-1 ppm) was established, satisfying the sensitivity requirement referring to the maximum residue level of 2,4-D in milk in both Europe and North America. The overall test of 2,4-D for each milk sample required only 20 min including sample preparation. This MIPs-SERS method has potential for practical applications in detecting 2,4-D in agri-foods. Copyright © 2018 Elsevier Ltd. All rights reserved.
Guatemala-Morales, Guadalupe María; Beltrán-Medina, Elisa Alejandra; Murillo-Tovar, Mario Alfonso; Ruiz-Palomino, Priscilla; Corona-González, Rosa Isela; Arriola-Guevara, Enrique
2016-04-15
Polycyclic aromatic hydrocarbons (PAHs) are of significant interest due to their genotoxicity in humans. Quantification of PAHs in coffee is complex because some coffee compounds interfere with the chromatographic analysis, hindering reliable determination of the PAHs. Analytical conditions for the ultrasound extraction, purification and quantification of 16 PAHs in roasted coffee were studied. The best extraction efficiency for benzo[a]pyrene (68%) from ground-roasted coffee was achieved with a solvent ratio of Hex:MC (9:1 v/v) and three extraction periods of 20 min, followed by alkaline saponification and purification of the extracts. The detection limits were 0.85-39.32 ng mL(-1), and the quantification limits ranged from 2.84 to 131.05 ng mL(-1), obtained for fluoranthene and chrysene, respectively. The extraction was effective for most of the analytes, with recoveries of 39.8% for dibenzo[ah]anthracene and 69.0% for benzo[b]fluoranthene. For coffee roasted in a spouted bed reactor, the sum of the 16 PAHs ranged from 3.5 to 16.4 μg kg(-1). Copyright © 2015 Elsevier Ltd. All rights reserved.
RNA-Skim: a rapid method for RNA-Seq quantification at transcript level
Zhang, Zhaojun; Wang, Wei
2014-01-01
Motivation: RNA-Seq technique has been demonstrated as a revolutionary means for exploring transcriptome because it provides deep coverage and base pair-level resolution. RNA-Seq quantification is proven to be an efficient alternative to Microarray technique in gene expression study, and it is a critical component in RNA-Seq differential expression analysis. Most existing RNA-Seq quantification tools require the alignments of fragments to either a genome or a transcriptome, entailing a time-consuming and intricate alignment step. To improve the performance of RNA-Seq quantification, an alignment-free method, Sailfish, has been recently proposed to quantify transcript abundances using all k-mers in the transcriptome, demonstrating the feasibility of designing an efficient alignment-free method for transcriptome quantification. Even though Sailfish is substantially faster than alternative alignment-dependent methods such as Cufflinks, using all k-mers in the transcriptome quantification impedes the scalability of the method. Results: We propose a novel RNA-Seq quantification method, RNA-Skim, which partitions the transcriptome into disjoint transcript clusters based on sequence similarity, and introduces the notion of sig-mers, which are a special type of k-mers uniquely associated with each cluster. We demonstrate that the sig-mer counts within a cluster are sufficient for estimating transcript abundances with accuracy comparable with any state-of-the-art method. This enables RNA-Skim to perform transcript quantification on each cluster independently, reducing a complex optimization problem into smaller optimization tasks that can be run in parallel. As a result, RNA-Skim uses <4% of the k-mers and <10% of the CPU time required by Sailfish. 
It is able to finish transcriptome quantification in <10 min per sample by using just a single thread on a commodity computer, which represents a >100× speedup over the state-of-the-art alignment-based methods, while delivering comparable or higher accuracy. Availability and implementation: The software is available at http://www.csbio.unc.edu/rs. Contact: weiwang@cs.ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24931995
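The sig-mer idea described in the RNA-Skim abstract can be illustrated with a toy sketch: within each transcript cluster, keep only the k-mers that appear in no other cluster, then count only those when scanning reads. The sequences, reads, and cluster names below are hypothetical, and the real method additionally solves an abundance optimization per cluster rather than simple hit counting.

```python
def kmers(seq, k):
    """All k-length substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def sig_mers(clusters, k):
    """For each transcript cluster, keep only the k-mers found in no
    other cluster -- the 'sig-mers' of that cluster."""
    kmer_sets = {name: set().union(*(kmers(t, k) for t in txs))
                 for name, txs in clusters.items()}
    sigs = {}
    for name, s in kmer_sets.items():
        others = set().union(*(v for n, v in kmer_sets.items() if n != name))
        sigs[name] = s - others
    return sigs

# Toy clusters of hypothetical transcript sequences
clusters = {"c1": ["ACGTACGTGG"], "c2": ["TTTTGCGCAA"]}
sigs = sig_mers(clusters, k=4)

# Abundance estimation then only needs sig-mer hits per cluster
reads = ["ACGTACG", "TTTTGCG"]
counts = {name: sum(any(km in read for km in s) for read in reads)
          for name, s in sigs.items()}
```

Because sig-mer sets are disjoint by construction, each cluster can be quantified independently, which is what enables the parallel, reduced-workload optimization the abstract describes.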
Nasso, Sara; Goetze, Sandra; Martens, Lennart
2015-09-04
Selected reaction monitoring (SRM) MS is a highly selective and sensitive technique to quantify protein abundances in complex biological samples. To enhance the pace of large SRM studies, a validated, robust method to fully automate absolute quantification and to substitute for interactive evaluation would be valuable. To address this demand, we present Ariadne, a Matlab software package. To quantify monitored targets, Ariadne exploits metadata imported from the transition lists, and targets can be filtered according to mProphet output. Signal processing and statistical learning approaches are combined to compute peptide quantifications. To robustly estimate absolute abundances, the external calibration curve method is applied, ensuring linearity over the measured dynamic range. Ariadne was benchmarked against mProphet and Skyline by comparing its quantification performance on three different dilution series, featuring either noisy/smooth traces without background or smooth traces with complex background. Results, evaluated as efficiency, linearity, accuracy, and precision of quantification, showed that Ariadne's performance is independent of data smoothness and complex background presence, that Ariadne outperforms mProphet on the noisier data set, and that it improved Skyline's accuracy and precision 2-fold for the lowest-abundance dilution with complex background. Remarkably, Ariadne could statistically distinguish all the different abundances from each other, discriminating dilutions as low as 0.1 and 0.2 fmol. These results suggest that Ariadne offers reliable and automated analysis of large-scale SRM differential expression studies.
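The external calibration curve method mentioned above can be sketched generically: fit a line to a dilution series of known amounts, then invert it for the unknown. The dilution series and peak areas below are hypothetical, and Ariadne's actual signal processing and statistical learning steps are far more involved.

```python
import numpy as np

def fit_calibration(known_amounts, peak_areas):
    """Fit a linear external calibration curve: area = a*amount + b."""
    a, b = np.polyfit(known_amounts, peak_areas, 1)
    return a, b

def quantify(area, a, b):
    """Invert the calibration curve to obtain an absolute amount."""
    return (area - b) / a

# Hypothetical dilution series of a reference peptide
known = np.array([0.1, 0.5, 1.0, 5.0, 10.0])      # fmol on column
areas = np.array([210, 1010, 2020, 9950, 20100])  # integrated peak areas
a, b = fit_calibration(known, areas)
amount = quantify(4000, a, b)  # peak area measured in an unknown sample
```

Checking linearity of the fit over the full dilution range (as the abstract emphasizes) is what justifies inverting the curve across the measured dynamic range.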
New approach for the quantification of processed animal proteins in feed using light microscopy.
Veys, P; Baeten, V
2010-07-01
A revision of European Union's total feed ban on animal proteins in feed will need robust quantification methods, especially for control analyses, if tolerance levels are to be introduced, as for fishmeal in ruminant feed. In 2006, a study conducted by the Community Reference Laboratory for Animal Proteins in feedstuffs (CRL-AP) demonstrated the deficiency of the official quantification method based on light microscopy. The study concluded that the method had to be revised. This paper puts forward an improved quantification method based on three elements: (1) the preparation of permanent slides with an optical adhesive preserving all morphological markers of bones necessary for accurate identification and precision counting; (2) the use of a counting grid eyepiece reticle; and (3) new definitions for correction factors for the estimated portions of animal particles in the sediment. This revised quantification method was tested on feeds adulterated at different levels with bovine meat and bone meal (MBM) and fishmeal, and it proved to be effortless to apply. The results obtained were very close to the expected values of contamination levels for both types of adulteration (MBM or fishmeal). Calculated values were not only replicable, but also reproducible. The advantages of the new approach, including the benefits of the optical adhesive used for permanent slide mounting and the experimental conditions that need to be met to implement the new method correctly, are discussed.
Iwasaki, Motoki; Mukai, Tomomi; Takachi, Ribeka; Ishihara, Junko; Totsuka, Yukari; Tsugane, Shoichiro
2014-08-01
Clarification of the putative etiologic role of heterocyclic aromatic amines (HAAs) in the development of cancer requires a validated assessment tool for dietary HAAs. This study primarily aimed to evaluate the validity of a food frequency questionnaire (FFQ) in estimating HAA intake, using 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP) level in human hair as the reference method. We first updated analytical methods for PhIP using liquid chromatography-electrospray ionization/tandem mass spectrometry (LC-ESI/MS/MS) and measured 44 fur samples from nine rats from a feeding study as partial verification of the quantitative performance of LC-ESI/MS/MS. We next measured PhIP levels in human hair samples from a validation study of the FFQ (n = 65). HAA intake from the FFQ was estimated using information on intake of six fish items and seven meat items and data on the HAA content of each food item. Correlation coefficients between PhIP level in human hair and HAA intake from the FFQ were calculated. The animal feeding study of PhIP found a significant dose-response relationship between dosage and PhIP in rat fur. The mean level was 53.8 pg/g hair among subjects with values over the limit of detection (LOD) (n = 57). We found significant positive correlations between PhIP in human hair and HAA intake from the FFQ, with Spearman rank correlation coefficients of 0.35 for all subjects, 0.21 for subjects with values over the LOD, and 0.34 for subjects with values over the limit of quantification. Findings from the validation study suggest that the FFQ is reasonably valid for the assessment of HAA intake.
Uncertainties in Projecting Risks of Late Effects from Space Radiation
NASA Astrophysics Data System (ADS)
Cucinotta, F.; Schimmerling, W.; Peterson, L.; Wilson, J.; Saganti, P.; Dicello, J.
The health risks faced by astronauts from space radiation include cancer, cataracts, hereditary effects, CNS risks, and non-cancer morbidity and mortality risks related to diseases of old age. Methods used to project risks in low-Earth orbit are of questionable merit for exploration missions because of the limited radiobiology data and knowledge of galactic cosmic ray (GCR) heavy ions, which leave estimates of the risk of late effects highly uncertain. Risk projections involve a product of many biological and physical factors, each of which has a differential range of uncertainty due to lack of data and knowledge. Within the linear-additivity model, we use Monte-Carlo sampling from subjective uncertainty distributions in each factor to obtain a maximum likelihood estimate of the overall uncertainty in risk projections. The resulting methodology is applied to several human space exploration mission scenarios, including ISS, a lunar station, a deep space outpost, and Mars missions of 360, 660, and 1000 days' duration. The major results are the quantification of the uncertainties in current risk estimates, the identification of the primary factors that dominate risk projection uncertainties, and the development of a method to quantify candidate approaches to reduce uncertainties or mitigate risks. The large uncertainties in GCR risk projections lead to probability distributions of risk that mask any potential risk reduction from the "optimization" of shielding materials or configurations. In contrast, the design of shielding optimization approaches for solar particle events and trapped protons can be made at this time, and promising technologies can be shown to have merit using our approach. The methods used also make it possible to express risk management objectives in quantitative terms, i.e., the number of days in space without exceeding a given risk level within well-defined confidence limits.
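The Monte-Carlo scheme described above, sampling each factor of a multiplicative risk model from its own uncertainty distribution to obtain the overall risk distribution and confidence limits, can be sketched as follows. The point risk and the lognormal "uncertainty quotients" (median 1, with assumed geometric standard deviations) are hypothetical stand-ins for the study's subjective distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampled_risk(point_risk, geometric_sds, n=100_000):
    """Propagate multiplicative uncertainties through a risk projection.
    Each factor is a lognormal uncertainty quotient with median 1,
    standing in for dose, quality factor, dose-rate factor, etc."""
    total = np.full(n, point_risk)
    for gsd in geometric_sds:
        total *= rng.lognormal(mean=0.0, sigma=np.log(gsd), size=n)
    return total

# Hypothetical point estimate and per-factor geometric SDs
risks = sampled_risk(point_risk=0.03, geometric_sds=[1.5, 2.0, 3.0])
lower, upper = np.percentile(risks, [2.5, 97.5])  # 95% interval
median = np.median(risks)
```

The width of the resulting interval relative to the point estimate illustrates the abstract's conclusion: when the factor uncertainties are large, the risk distribution is broad enough to mask modest risk reductions from shielding optimization.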
Multi-material decomposition of spectral CT images
NASA Astrophysics Data System (ADS)
Mendonça, Paulo R. S.; Bhotika, Rahul; Maddah, Mahnaz; Thomsen, Brian; Dutta, Sandeep; Licato, Paul E.; Joshi, Mukta C.
2010-04-01
Spectral Computed Tomography (Spectral CT), and in particular fast kVp switching dual-energy computed tomography, is an imaging modality that extends the capabilities of conventional computed tomography (CT). Spectral CT enables the estimation of the full linear attenuation curve of the imaged subject at each voxel in the CT volume, instead of a scalar image in Hounsfield units. Because the space of linear attenuation curves in the energy ranges of medical applications can be accurately described through a two-dimensional manifold, this decomposition procedure would be, in principle, limited to two materials. This paper describes an algorithm that overcomes this limitation, allowing for the estimation of N-tuples of material-decomposed images. The algorithm works by assuming that the mixing of substances and tissue types in the human body has the physicochemical properties of an ideal solution, which yields a model for the density of the imaged material mix. Under this model the mass attenuation curve of each voxel in the image can be estimated, immediately resulting in a material-decomposed image triplet. Decomposition into an arbitrary number of pre-selected materials can be achieved by automatically selecting adequate triplets from an application-specific material library. The decomposition is expressed in terms of the volume fractions of each constituent material in the mix; this provides for a straightforward, physically meaningful interpretation of the data. One important application of this technique is in the digital removal of contrast agent from a dual-energy exam, producing a virtual nonenhanced image, as well as in the quantification of the concentration of contrast observed in a targeted region, thus providing an accurate measure of tissue perfusion.
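In a simplified form that ignores the paper's ideal-solution density model, a three-material decomposition reduces to a 3×3 linear solve per voxel: two attenuation measurements plus the constraint that volume fractions sum to one. The basis materials and attenuation coefficients below are hypothetical illustration values, not data from the paper.

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of three basis
# materials at the low/high kVp spectra: columns = water, iodine, fat
A = np.array([
    [0.20, 0.55, 0.18],   # attenuation at the low energy
    [0.18, 0.30, 0.16],   # attenuation at the high energy
    [1.00, 1.00, 1.00],   # volume fractions must sum to 1
])

def decompose(mu_low, mu_high):
    """Solve for the volume-fraction triplet of one voxel."""
    return np.linalg.solve(A, np.array([mu_low, mu_high, 1.0]))

# A voxel that is exactly 70% water / 30% fat by volume:
mu_low = 0.7 * 0.20 + 0.3 * 0.18
mu_high = 0.7 * 0.18 + 0.3 * 0.16
fractions = decompose(mu_low, mu_high)
```

Expressing the result as volume fractions gives the straightforward physical interpretation the abstract highlights, e.g. subtracting the iodine fraction yields a virtual non-enhanced image.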
Herrero, P; Bäuerlein, P S; Emke, E; Pocurull, E; de Voogt, P
2014-08-22
In this short communication we report on the technical implementation of coupling an asymmetric flow field-flow fractionation (AF4) instrument to a high-resolution mass spectrometer (Orbitrap) using an atmospheric photoionisation interface. This allows, for the first time, online identification of different fullerenes in aqueous samples after their aggregates have been fractionated in the FFF channel. Quality parameters such as limits of detection (LODs), limits of quantification (LOQs) and linear range were evaluated; LODs and LOQs were in the range of hundreds of ng/L, and the detector response was linear over the range tested (up to ∼20 μg/L). The low detection and quantification limits make this technique useful for future environmental or ecotoxicology studies in which low concentration levels are expected for fullerenes and common on-line detectors such as UV or MALS do not have enough sensitivity and selectivity. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Magnetic Susceptibility as a B0 Field Strength Independent MRI Biomarker of Liver Iron Overload
Hernando, Diego; Cook, Rachel J.; Diamond, Carol; Reeder, Scott B.
2013-01-01
Purpose MR-based quantification of liver magnetic susceptibility may enable field strength-independent measurement of liver iron concentration (LIC). However, susceptibility quantification is challenging, due to non-local effects of susceptibility on the B0 field. The purpose of this work is to demonstrate feasibility of susceptibility-based LIC quantification using a fat-referenced approach. Methods Phantoms consisting of vials with increasing iron concentrations immersed between oil/water layers, and twenty-seven subjects (9 controls/18 subjects with liver iron overload) were scanned. Ferriscan (1.5T) provided R2-based reference LIC. Multi-echo 3D-SPGR (1.5T/3T) enabled fat-water, B0- and R2*-mapping. Phantom iron concentration (mg Fe/l) was estimated from B0 differences (ΔB0) between vials and neighboring oil. Liver susceptibility and LIC (mg Fe/g dry tissue) was estimated from ΔB0 between the lateral right lobe of the liver and adjacent subcutaneous adipose tissue (SAT). Results Estimated phantom iron concentrations had good correlation with true iron concentrations (1.5T:slope=0.86, intercept=0.72, r2=0.98; 3T:slope=0.85, intercept=1.73, r2=0.98). In liver, ΔB0 correlated strongly with R2* (1.5T:r2=0.86; 3T:r2=0.93) and B0-LIC had good agreement with Ferriscan-LIC (slopes/intercepts nearly 1.0/0.0, 1.5T:r2=0.67, slope=0.93±0.13, p≈0.50, intercept=1.93±0.78, p≈0.02; 3T:r2=0.84, slope=1.01±0.09, p≈0.90, intercept=0.23±0.52, p≈0.68). Discussion Fat-referenced, susceptibility-based LIC estimation is feasible at both field strengths. This approach may enable improved susceptibility mapping in the abdomen. PMID:23801540
Tuerk, Andreas; Wiktorin, Gregor; Güler, Serhat
2017-05-01
Accuracy of transcript quantification with RNA-Seq is negatively affected by positional fragment bias. This article introduces Mix2 (rd. "mixquare"), a transcript quantification method which uses a mixture of probability distributions to model and thereby neutralize the effects of positional fragment bias. The parameters of Mix2 are trained by Expectation Maximization resulting in simultaneous transcript abundance and bias estimates. We compare Mix2 to Cufflinks, RSEM, eXpress and PennSeq; state-of-the-art quantification methods implementing some form of bias correction. On four synthetic biases we show that the accuracy of Mix2 overall exceeds the accuracy of the other methods and that its bias estimates converge to the correct solution. We further evaluate Mix2 on real RNA-Seq data from the Microarray and Sequencing Quality Control (MAQC, SEQC) Consortia. On MAQC data, Mix2 achieves improved correlation to qPCR measurements with a relative increase in R2 between 4% and 50%. Mix2 also yields repeatable concentration estimates across technical replicates with a relative increase in R2 between 8% and 47% and reduced standard deviation across the full concentration range. We further observe more accurate detection of differential expression with a relative increase in true positives between 74% and 378% for 5% false positives. In addition, Mix2 reveals 5 dominant biases in MAQC data deviating from the common assumption of a uniform fragment distribution. On SEQC data, Mix2 yields higher consistency between measured and predicted concentration ratios. A relative error of 20% or less is obtained for 51% of transcripts by Mix2, 40% of transcripts by Cufflinks and RSEM and 30% by eXpress. Titration order consistency is correct for 47% of transcripts for Mix2, 41% for Cufflinks and RSEM and 34% for eXpress. We, further, observe improved repeatability across laboratory sites with a relative increase in R2 between 8% and 44% and reduced standard deviation.
Food waste quantification in primary production - The Nordic countries as a case study.
Hartikainen, Hanna; Mogensen, Lisbeth; Svanes, Erik; Franke, Ulrika
2018-01-01
Our understanding of food waste in the food supply chain has increased, but very few studies have been published on food waste in primary production. The overall aims of this study were to quantify the total amount of food waste in primary production in Finland, Sweden, Norway and Denmark, and to create a framework for how to define and quantify food waste in primary production. The quantification of food waste was based on case studies conducted in the present study and estimates published in scientific literature. The chosen scope of the study was to quantify the amount of edible food (excluding inedible parts like peels and bones) produced for human consumption that did not end up as food. As a result, the quantification was different from the existing guidelines. One of the main differences is that food that ends up as animal feed is included in the present study, whereas this is not the case for the recently launched food waste definition of the FUSIONS project. To distinguish the 'food waste' definition of the present study from the existing definitions and to avoid confusion with established usage of the term, a new term 'side flow' (SF) was introduced as a synonym for food waste in primary production. A rough estimate of the total amount of food waste in primary production in Finland, Sweden, Norway and Denmark was made using SF and 'FUSIONS Food Waste' (FFW) definitions. The SFs in primary production in the four Nordic countries were an estimated 800,000 tonnes per year with an additional 100,000 tonnes per year from the rearing phase of animals. The 900,000 tonnes per year of SF corresponds to 3.7% of the total production of 24,000,000 tonnes per year of edible primary products. When using the FFW definition proposed by the FUSIONS project, the FFW amount was estimated at 330,000 tonnes per year, or 1% of the total production. Copyright © 2017 Elsevier Ltd. All rights reserved.
High-throughput quantification of hydroxyproline for determination of collagen.
Hofman, Kathleen; Hall, Bronwyn; Cleaver, Helen; Marshall, Susan
2011-10-15
An accurate and high-throughput assay for collagen is essential for collagen research and development of collagen products. Hydroxyproline is routinely assayed to provide a measurement for collagen quantification. The time required for sample preparation using acid hydrolysis and neutralization prior to assay is what limits the current method for determining hydroxyproline. This work describes the conditions of alkali hydrolysis that, when combined with the colorimetric assay defined by Woessner, provide a high-throughput, accurate method for the measurement of hydroxyproline. Copyright © 2011 Elsevier Inc. All rights reserved.
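Collagen is back-calculated from measured hydroxyproline with a tissue-dependent conversion factor, since mammalian collagens contain roughly 12-14% hydroxyproline by mass. The factor used below is one commonly cited value chosen for illustration only; it is not taken from this paper, and the appropriate value depends on the tissue and species.

```python
def collagen_from_hyp(hyp_mg, factor=7.46):
    """Convert a hydroxyproline measurement to a collagen estimate.
    The default factor (7.46, i.e. ~13.4% hydroxyproline content) is
    an illustrative, tissue-dependent assumption, not a universal
    constant; values of roughly 7-8 appear in the literature."""
    return hyp_mg * factor

# Hypothetical assay result: 1.3 mg hydroxyproline in the hydrolysate
collagen = collagen_from_hyp(1.3)
```

This is why an accurate, high-throughput hydroxyproline assay translates directly into an accurate collagen measurement, as the abstract argues.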
Targeted methods for quantitative analysis of protein glycosylation
Goldman, Radoslav; Sanda, Miloslav
2018-01-01
Quantification of proteins by LC-MS/MS-MRM has become a standard method with broad projected clinical applicability. MRM quantification of protein modifications is, however, far less utilized, especially in the case of glycoproteins. This review summarizes current methods for quantitative analysis of protein glycosylation with a focus on MRM methods. We describe advantages of this quantitative approach, analytical parameters that need to be optimized to achieve reliable measurements, and point out the limitations. Differences between major classes of N- and O-glycopeptides are described and class-specific glycopeptide assays are demonstrated. PMID:25522218
Ecosystem evapotranspiration: Challenges in measurements, estimates, and modeling
USDA-ARS?s Scientific Manuscript database
Evapotranspiration (ET) processes at the leaf-to-landscape scales in multiple land uses have important controls and feedbacks for the local, regional and global climate and water resource systems. Innovative methods, tools, and technologies for improved understanding and quantification of ET and cro...
Hahn, Andreas; Nics, Lukas; Baldinger, Pia; Ungersböck, Johanna; Dolliner, Peter; Frey, Richard; Birkfellner, Wolfgang; Mitterhauser, Markus; Wadsak, Wolfgang; Karanikas, Georgios; Kasper, Siegfried; Lanzenberger, Rupert
2012-08-01
Image-derived input functions (IDIFs) represent a promising technique for a simpler and less invasive quantification of PET studies as compared to arterial cannulation. However, a number of limitations complicate the routine use of IDIFs in clinical research protocols, and the full substitution of manual arterial samples by venous ones has hardly been evaluated. This study aims for a direct validation of IDIFs and venous data for the quantification of serotonin-1A receptor binding (5-HT(1A)) with [carbonyl-(11)C]WAY-100635 before and after hormone treatment. Fifteen PET measurements with arterial and venous blood sampling were obtained from 10 healthy women, 8 scans before and 7 after eight weeks of hormone replacement therapy. Image-derived input functions were derived automatically from cerebral blood vessels, corrected for partial volume effects and combined with venous manual samples from 10 min onward (IDIF+VIF). Corrections for plasma/whole-blood ratio and metabolites were done separately with arterial and venous samples. 5-HT(1A) receptor quantification was achieved with arterial input functions (AIF) and IDIF+VIF using a two-tissue compartment model. Comparison between arterial and venous manual blood samples yielded excellent reproducibility. Variability (VAR) was less than 10% for whole-blood activity (p>0.4) and below 2% for plasma to whole-blood ratios (p>0.4). Variability was slightly higher for parent fractions (VARmax=24% at 5 min, p<0.05 and VAR<13% after 20 min, p>0.1) but still within previously reported values. IDIFs after partial volume correction had peak values comparable to AIFs (mean difference Δ=-7.6 ± 16.9 kBq/ml, p>0.1), whereas AIFs exhibited a delay (Δ=4 ± 6.4s, p<0.05) and higher peak width (Δ=15.9 ± 5.2s, p<0.001). 
Linear regression analysis showed strong agreement for 5-HT(1A) binding as obtained with AIF and IDIF+VIF at baseline (R(2)=0.95), after treatment (R(2)=0.93) and when pooling all scans (R(2)=0.93), with slopes and intercepts in the range of 0.97 to 1.07 and -0.05 to 0.16, respectively. In addition to the region of interest analysis, the approach yielded virtually identical results for voxel-wise quantification as compared to the AIF. Despite the fast metabolism of the radioligand, manual arterial blood samples can be substituted by venous ones for parent fractions and plasma to whole-blood ratios. Moreover, the combination of image-derived and venous input functions provides a reliable quantification of 5-HT(1A) receptors. This holds true for 5-HT(1A) binding estimates before and after treatment for both regions of interest-based and voxel-wise modeling. Taken together, the approach provides less invasive receptor quantification by full independence of arterial cannulation. This offers great potential for the routine use in clinical research protocols and encourages further investigation for other radioligands with different kinetic characteristics. Copyright © 2012 Elsevier Inc. All rights reserved.
Seyer, Alexandre; Fenaille, François; Féraudet-Tarisse, Cecile; Volland, Hervé; Popoff, Michel R; Tabet, Jean-Claude; Junot, Christophe; Becher, François
2012-06-05
Epsilon toxin (ETX) is one of the most lethal toxins produced by Clostridium species and is considered as a potential bioterrorist weapon. Here, we present a rapid mass spectrometry-based method for ETX quantification in complex matrixes. As a prerequisite, naturally occurring prototoxin and toxin species were first structurally characterized by top-down and bottom-up experiments, to identify the most pertinent peptides for quantification. Following selective ETX immunoextraction and trypsin digestion, two proteotypic peptides shared by all the toxin forms were separated by ultraperformance liquid chromatography (UPLC) and monitored by ESI-MS (electrospray ionization-mass spectrometry) operating in the multiple reaction monitoring mode (MRM) with collision-induced dissociation. Thorough protocol optimization, i.e., a 15 min immunocapture, a 2 h enzymatic digestion, and an UPLC-MS/MS detection, allowed the whole quantification process including the calibration curve to be performed in less than 4 h, without compromising assay robustness and sensitivity. The assay sensitivity in milk and serum was estimated at 5 ng·mL(-1) for ETX, making this approach complementary to enzyme linked immunosorbent assay (ELISA) techniques.
EPRI/NRC-RES fire human reliability analysis guidelines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Stuart R.; Cooper, Susan E.; Najafi, Bijan
2010-03-01
During the 1990s, the Electric Power Research Institute (EPRI) developed methods for fire risk analysis to support its utility members in the preparation of responses to Generic Letter 88-20, Supplement 4, 'Individual Plant Examination - External Events' (IPEEE). This effort produced a Fire Risk Assessment methodology for operations at power that was used by the majority of U.S. nuclear power plants (NPPs) in support of the IPEEE program and several NPPs overseas. Although these methods were acceptable for accomplishing the objectives of the IPEEE, EPRI and the U.S. Nuclear Regulatory Commission (NRC) recognized that they required upgrades to support current requirements for risk-informed, performance-based (RI/PB) applications. In 2001, EPRI and the USNRC's Office of Nuclear Regulatory Research (RES) embarked on a cooperative project to improve the state-of-the-art in fire risk assessment to support a new risk-informed environment in fire protection. This project produced a consensus document, NUREG/CR-6850 (EPRI 1011989), entitled 'Fire PRA Methodology for Nuclear Power Facilities', which addressed fire risk for at-power operations. NUREG/CR-6850 developed high-level guidance on the process for identification and inclusion of human failure events (HFEs) into the fire PRA (FPRA), and a methodology for assigning quantitative screening values to these HFEs. It outlined the initial considerations of performance shaping factors (PSFs) and related fire effects that may need to be addressed in developing best-estimate human error probabilities (HEPs). However, NUREG/CR-6850 did not describe a methodology to develop best-estimate HEPs given the PSFs and the fire-related effects. In 2007, EPRI and RES embarked on another cooperative project to develop explicit guidance for estimating HEPs for human failure events under fire-generated conditions, building upon existing human reliability analysis (HRA) methods. 
This document provides a methodology and guidance for conducting a fire HRA. This process includes identification and definition of post-fire human failure events, qualitative analysis, quantification, recovery, dependency, and uncertainty. This document provides three approaches to quantification: screening, scoping, and detailed HRA. Screening is based on the guidance in NUREG/CR-6850, with some additional guidance for scenarios with long time windows. Scoping is a new approach to quantification developed specifically to support the iterative nature of fire PRA quantification. Scoping is intended to provide less conservative HEPs than screening, but requires fewer resources than a detailed HRA. For detailed HRA quantification, guidance has been developed on how to apply existing methods to assess post-fire HEPs.
Wang, Charlie Y; Liu, Yuchi; Huang, Shuying; Griswold, Mark A; Seiberlich, Nicole; Yu, Xin
2017-12-01
The purpose of this work was to develop a 31P spectroscopic magnetic resonance fingerprinting (MRF) method for fast quantification of the chemical exchange rate between phosphocreatine (PCr) and adenosine triphosphate (ATP) via creatine kinase (CK). A 31P MRF sequence (CK-MRF) was developed to quantify the forward rate constant of ATP synthesis via CK (kfCK), the T1 relaxation time of PCr (T1PCr), and the PCr-to-ATP concentration ratio (MRPCr). The CK-MRF sequence used a balanced steady-state free precession (bSSFP)-type excitation with ramped flip angles and a unique saturation scheme sensitive to the exchange between PCr and γATP. Parameter estimation was accomplished by matching the acquired signals to a dictionary generated using the Bloch-McConnell equation. Simulation studies were performed to examine the susceptibility of the CK-MRF method to several potential error sources. The accuracy of nonlocalized CK-MRF measurements before and after an ischemia-reperfusion (IR) protocol was compared with the magnetization transfer (MT-MRS) method in rat hindlimb at 9.4 T (n = 14). The reproducibility of CK-MRF was also assessed by comparing CK-MRF measurements with both MT-MRS (n = 17) and four-angle saturation transfer (FAST) (n = 7). Simulation results showed that CK-MRF quantification of kfCK was robust, with less than 5% error in the presence of model inaccuracies including dictionary resolution, metabolite T2 values, inorganic phosphate metabolism, and B1 miscalibration. Estimation of kfCK by CK-MRF (0.38 ± 0.02 s⁻¹ at baseline and 0.42 ± 0.03 s⁻¹ post-IR) showed strong agreement with MT-MRS (0.39 ± 0.03 s⁻¹ at baseline and 0.44 ± 0.04 s⁻¹ post-IR). kfCK estimation was also similar between CK-MRF and FAST (0.38 ± 0.02 s⁻¹ for CK-MRF and 0.38 ± 0.11 s⁻¹ for FAST). The coefficient of variation from a 20 s CK-MRF quantification of kfCK was 42% of that of a 150 s MT-MRS acquisition and 12% of that of a 20 s FAST acquisition.
This study demonstrates the potential of a 31P spectroscopic MRF framework for rapid, accurate, and reproducible quantification of the chemical exchange rate of CK in vivo. Copyright © 2017 John Wiley & Sons, Ltd.
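The core of any MRF approach is the dictionary-matching step the abstract describes: the acquired signal evolution is compared against simulated evolutions and the best match yields the parameter estimate. A minimal sketch of that step, with an invented toy dictionary (the sizes and candidate kfCK values are illustrative, not the authors' CK-MRF implementation):

```python
import numpy as np

# Sketch of MRF dictionary matching: the acquired signal evolution is
# compared against a precomputed dictionary and the entry with the
# largest normalized inner product yields the parameter estimate.
# Dictionary contents and candidate rate values are invented for
# illustration, not the authors' CK-MRF implementation.

def match_dictionary(signal, dictionary, params):
    """Return the parameters of the dictionary entry best matching `signal`.

    signal:     (T,) measured signal evolution
    dictionary: (N, T) simulated evolutions (e.g. from Bloch-McConnell)
    params:     (N, P) parameter tuples, e.g. candidate kfCK values
    """
    # Normalize so the match is insensitive to overall signal scale.
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    scores = np.abs(d @ s)                 # normalized inner products
    return params[np.argmax(scores)]

# Toy usage: three dictionary entries; the second matches best.
dic = np.array([[1.0, 0.5, 0.2],
                [0.9, 0.7, 0.4],
                [0.3, 0.3, 0.3]])
pars = np.array([[0.30], [0.38], [0.45]])  # candidate kfCK values (s^-1)
est = match_dictionary(np.array([0.91, 0.69, 0.41]), dic, pars)
```

In practice the dictionary is generated by solving the Bloch-McConnell equations over a grid of parameter combinations, so matching replaces iterative model fitting with a single search.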
Adjoint-Based Uncertainty Quantification with MCNP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seifried, Jeffrey E.
2011-09-01
This work serves to quantify the instantaneous uncertainties in neutron transport simulations arising from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties with respect to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
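Adjoint-derived sensitivities are conventionally folded together with a covariance matrix of the underlying nuclear data through the first-order "sandwich rule". A minimal sketch with invented sensitivity and covariance values (not the LIFE-blanket data):

```python
import numpy as np

# Sketch of first-order ("sandwich rule") uncertainty propagation, the
# standard way adjoint-based sensitivities are combined with a nuclear-
# data covariance matrix. Sensitivities and covariances are invented
# for illustration.

def sandwich_variance(S, cov):
    """Relative response variance: sigma^2 = S @ cov @ S."""
    return float(S @ cov @ S)

S = np.array([0.8, -0.2, 0.1])               # relative sensitivities dR/R per dx/x
cov = np.diag([0.02**2, 0.05**2, 0.01**2])   # uncorrelated 2%, 5%, 1% data uncertainties
rel_sigma = np.sqrt(sandwich_variance(S, cov))  # relative uncertainty of the response
```

With these toy numbers the propagated response uncertainty comes out just under 2%, on the same order as the small uncertainties reported for the blanket figures of merit.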
Quantification and characterization of leakage errors
NASA Astrophysics Data System (ADS)
Wood, Christopher J.; Gambetta, Jay M.
2018-03-01
We present a general framework for the quantification and characterization of leakage errors that result when a quantum system is encoded in the subspace of a larger system. To do this we introduce metrics for quantifying the coherent and incoherent properties of the resulting errors, and we illustrate this framework with several examples relevant to superconducting qubits. In particular, we propose two quantities, the leakage and seepage rates, which together with the average gate fidelity allow for characterizing the average performance of quantum gates in the presence of leakage, and we show how the randomized benchmarking protocol can be modified to enable robust estimation of all three quantities for a Clifford gate set.
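Under the simple rate model used in leakage randomized benchmarking, the population remaining in the computational subspace after n gates decays as p_n = p_inf + (p_0 − p_inf)(1 − L1 − L2)^n, where L1 is the leakage rate, L2 the seepage rate, and p_inf = L2/(L1 + L2). A hedged sketch of recovering both rates from that decay (the rate values are invented for illustration):

```python
# Hedged sketch of the leakage randomized benchmarking decay model:
# p_n = p_inf + (p_0 - p_inf) * (1 - L1 - L2)**n, with
# p_inf = L2 / (L1 + L2). The leakage rate L1 and seepage rate L2
# below are invented for illustration.

def subspace_population(n, L1, L2, p0=1.0):
    """Population left in the computational subspace after n gates."""
    p_inf = L2 / (L1 + L2)
    return p_inf + (p0 - p_inf) * (1.0 - L1 - L2) ** n

def estimate_rates(p1, p2):
    """Recover (L1, L2) from noiseless populations at n = 1, 2 (p_0 = 1)."""
    r = (p1 - p2) / (1.0 - p1)      # decay base: 1 - L1 - L2
    p_inf = (p1 - r) / (1.0 - r)    # steady-state population L2 / (L1 + L2)
    total = 1.0 - r                 # L1 + L2
    return (1.0 - p_inf) * total, p_inf * total

L1_true, L2_true = 0.01, 0.002
p1 = subspace_population(1, L1_true, L2_true)
p2 = subspace_population(2, L1_true, L2_true)
L1_est, L2_est = estimate_rates(p1, p2)   # recovers (0.01, 0.002)
```

With noisy data one would instead fit the same exponential model to populations averaged over many random sequences of each length, which is what the modified benchmarking protocol provides.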
Jain, P S; Patel, M K; Gorle, A P; Chaudhari, A J; Surana, S J
2012-09-01
A simple, specific, accurate and precise stability-indicating reversed-phase high-performance liquid chromatographic method was developed for simultaneous estimation of olmesartan medoxomil (OLME), amlodipine besylate (AMLO) and hydrochlorothiazide (HCTZ) in tablet dosage form. The method was developed using an RP C18 base-deactivated silica column (250 × 4.6 mm, 5 µm) with a mobile phase consisting of triethylamine (pH 3.0, adjusted with orthophosphoric acid) (A) and acetonitrile (B), with a timed gradient program of T/%B: 0/30, 7/70, 8/30, 10/30 and a flow rate of 1.4 mL/min. Ultraviolet detection was used at 236 nm. The retention times for OLME, AMLO and HCTZ were found to be 6.72, 4.28 and 2.30 min, respectively. The proposed method was validated for precision, accuracy, linearity, range, robustness, ruggedness and forced degradation. The calibration curves of OLME, AMLO and HCTZ were linear over the ranges of 50-150, 12.5-37.5 and 31-93 µg/mL, respectively. The method was found to be sensitive: the limits of detection of OLME, AMLO and HCTZ were determined to be 0.19, 0.16 and 0.22 µg/mL, and the limits of quantification 0.57, 0.49 and 0.66 µg/mL, respectively. The forced degradation study was performed according to International Conference on Harmonization guidelines.
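Detection and quantification limits of the kind quoted above are conventionally derived from the calibration regression per ICH guidance as LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation of the regression and S its slope. A hedged sketch with invented calibration data (not the paper's):

```python
import numpy as np

# Sketch of ICH-style limit estimation from a linear calibration curve:
# LOD = 3.3 * sigma / S, LOQ = 10 * sigma / S, where sigma is the
# residual standard deviation of the regression and S its slope.
# The calibration points below are invented for illustration.

def lod_loq(conc, response):
    """Return (LOD, LOQ) in the concentration units of `conc`."""
    slope, intercept = np.polyfit(conc, response, 1)
    resid = response - (slope * conc + intercept)
    sigma = np.sqrt(np.sum(resid**2) / (len(conc) - 2))  # regression SD
    return 3.3 * sigma / slope, 10.0 * sigma / slope

conc = np.array([50.0, 75.0, 100.0, 125.0, 150.0])    # µg/mL
resp = np.array([102.0, 151.0, 201.0, 252.0, 298.0])  # peak area (a.u.)
lod, loq = lod_loq(conc, resp)
```

By construction LOQ/LOD = 10/3.3 ≈ 3, which matches the roughly threefold gap between the reported detection and quantification limits.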