Sample records for human error quantification

  1. On the Confounding Effect of Temperature on Chemical Shift-Encoded Fat Quantification

    PubMed Central

    Hernando, Diego; Sharma, Samir D.; Kramer, Harald; Reeder, Scott B.

    2014-01-01

    Purpose To characterize the confounding effect of temperature on chemical shift-encoded (CSE) fat quantification. Methods The proton resonance frequency of water, unlike triglycerides, depends on temperature. This leads to a temperature dependence of the spectral models of fat (relative to water) that are commonly used by CSE-MRI methods. Simulation analysis was performed for 1.5 Tesla CSE fat–water signals at various temperatures and echo time combinations. Oil–water phantoms were constructed and scanned at temperatures between 0 and 40°C using spectroscopy and CSE imaging at three echo time combinations. An explanted human liver, rejected for transplantation due to steatosis, was scanned using spectroscopy and CSE imaging. Fat–water reconstructions were performed using four different techniques: magnitude and complex fitting, with standard or temperature-corrected signal modeling. Results In all experiments, magnitude fitting with standard signal modeling resulted in large fat quantification errors. Errors were largest for echo time combinations near TEinit ≈ 1.3 ms, ΔTE ≈ 2.2 ms. Errors in fat quantification caused by temperature-related frequency shifts were smaller with complex fitting, and were avoided using a temperature-corrected signal model. Conclusion Temperature is a confounding factor for fat quantification. If not accounted for, it can result in large errors in fat quantifications in phantom and ex vivo acquisitions. PMID:24123362
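
    The temperature dependence described above can be made concrete with a small numerical sketch. The snippet below is an illustration only (not the authors' reconstruction code): it shifts the water resonance by an assumed coefficient of roughly -0.01 ppm/°C while leaving the fat peaks fixed, and reports how the fat-water frequency offsets at 1.5 T change between room and body temperature. The six-peak fat model values are generic literature-style numbers, not values taken from this paper.

```python
# Sketch: temperature-dependent fat-water frequency offsets at 1.5 T.
# Assumptions (not from the paper): water shifts by ~ -0.01 ppm/degC,
# fat peak positions are temperature-independent, and a generic
# six-peak spectral model of fat is used.

GAMMA_HZ_PER_T = 42.577e6          # proton gyromagnetic ratio (Hz/T)
B0 = 1.5                           # field strength (T)
WATER_PPM_37C = 4.70               # water chemical shift near 37 degC (ppm)
DPPM_PER_DEGC = -0.01              # assumed water temperature coefficient

# Generic multi-peak fat model: (chemical shift in ppm, relative amplitude)
FAT_PEAKS = [(0.90, 0.087), (1.30, 0.693), (2.10, 0.128),
             (2.76, 0.004), (4.20, 0.039), (5.30, 0.048)]

def fat_water_offsets_hz(temperature_c):
    """Return fat-water frequency offsets (Hz) at the given temperature."""
    water_ppm = WATER_PPM_37C + DPPM_PER_DEGC * (temperature_c - 37.0)
    hz_per_ppm = GAMMA_HZ_PER_T * B0 * 1e-6
    return [(ppm - water_ppm) * hz_per_ppm for ppm, _ in FAT_PEAKS]

for temp in (20.0, 37.0):
    offsets = ", ".join(f"{f:7.1f}" for f in fat_water_offsets_hz(temp))
    print(f"T = {temp:4.1f} degC  fat-water offsets (Hz): {offsets}")
```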

  2. Fifty Years of THERP and Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring

    2012-06-01

    In 1962 at a Human Factors Society symposium, Alan Swain presented a paper introducing a Technique for Human Error Rate Prediction (THERP). This was followed in 1963 by a Sandia Laboratories monograph outlining basic human error quantification using THERP and, in 1964, by a special journal edition of Human Factors on quantification of human performance. Throughout the 1960s, Swain and his colleagues focused on collecting human performance data for the Sandia Human Error Rate Bank (SHERB), primarily in connection with supporting the reliability of nuclear weapons assembly in the US. In 1969, Swain met with Jens Rasmussen of Risø National Laboratory and discussed the applicability of THERP to nuclear power applications. By 1975, in WASH-1400, Swain had articulated the use of THERP for nuclear power applications, and the approach was finalized in the watershed publication of NUREG/CR-1278 in 1983. THERP is now 50 years old and remains the best known and most widely used HRA method. In this paper, the author discusses the history of THERP, based on published reports and personal communication and interviews with Swain. The author also outlines the significance of THERP. The foundations of human reliability analysis are found in THERP: human failure events, task analysis, performance shaping factors, human error probabilities, dependence, event trees, recovery, and pre- and post-initiating events were all introduced in THERP. While THERP is not without its detractors, and it is showing signs of its age in the face of newer technological applications, the longevity of THERP is a testament to its tremendous significance. THERP started the field of human reliability analysis. This paper concludes with a discussion of THERP in the context of newer methods, which can be seen as extensions of or departures from Swain's pioneering work.

  3. Modeling and Quantification of Team Performance in Human Reliability Analysis for Probabilistic Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey C. Joe; Ronald L. Boring

    Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission's (NRC) risk-informed and performance-based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial nuclear power plants (NPPs) in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are 1) distinct from individual human errors, and 2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating human error probabilities (HEPs). This paper describes research whose objective is to model and quantify team dynamics and teamwork within NPP control room crews for risk-informed applications, thereby improving the technical basis of HRA, which in turn improves the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.

  4. Dotette: Programmable, high-precision, plug-and-play droplet pipetting.

    PubMed

    Fan, Jinzhen; Men, Yongfan; Hao Tseng, Kuo; Ding, Yi; Ding, Yunfeng; Villarreal, Fernando; Tan, Cheemeng; Li, Baoqing; Pan, Tingrui

    2018-05-01

    Manual micropipettes are the most heavily used liquid handling devices in biological and chemical laboratories; however, they suffer from low precision for volumes under 1 μl and inevitable human errors. For a manual device, the human errors introduced pose potential risks of failed experiments, inaccurate results, and financial costs. Meanwhile, low precision under 1 μl can cause severe quantification errors and high heterogeneity of outcomes, becoming a bottleneck of reaction miniaturization for quantitative research in biochemical labs. Here, we report Dotette, a programmable, plug-and-play microfluidic pipetting device based on nanoliter liquid printing. With automated control, protocols designed on computers can be directly downloaded into Dotette, enabling programmable operation processes. Utilizing continuous nanoliter droplet dispensing, the precision of the volume control has been successfully improved from the traditional 20%-50% to less than 5% in the range of 100 nl to 1000 nl. Such a highly automated, plug-and-play add-on to existing pipetting devices not only improves precise quantification in low-volume liquid handling and reduces chemical consumption but also facilitates and automates a variety of biochemical and biological operations.

  5. Impact of gradient timing error on the tissue sodium concentration bioscale measured using flexible twisted projection imaging

    NASA Astrophysics Data System (ADS)

    Lu, Aiming; Atkinson, Ian C.; Vaughn, J. Thomas; Thulborn, Keith R.

    2011-12-01

    The rapid biexponential transverse relaxation of the sodium MR signal from brain tissue requires efficient k-space sampling for quantitative imaging in a time that is acceptable for human subjects. The flexible twisted projection imaging (flexTPI) sequence has been shown to be suitable for quantitative sodium imaging with an ultra-short echo time to minimize signal loss. The fidelity of the k-space center location is affected by the readout gradient timing errors on the three physical axes, which is known to cause image distortion for projection-based acquisitions. This study investigated the impact of these timing errors on the voxel-wise accuracy of the tissue sodium concentration (TSC) bioscale measured with the flexTPI sequence. Our simulations show greater than 20% spatially varying quantification errors when the gradient timing errors are larger than 10 μs on all three axes. The quantification is more tolerant of gradient timing errors on the Z-axis. An existing method was used to measure the gradient timing errors with <1 μs error. The gradient timing error measurement is shown to be RF coil dependent, and timing error differences of up to ˜16 μs have been observed between different RF coils used on the same scanner. The measured timing errors can be corrected prospectively or retrospectively to obtain accurate TSC values.

  6. Error quantification of abnormal extreme high waves in Operational Oceanographic System in Korea

    NASA Astrophysics Data System (ADS)

    Jeong, Sang-Hun; Kim, Jinah; Heo, Ki-Young; Park, Kwang-Soon

    2017-04-01

    In the winter season, large swell-like waves occur on the east coast of Korea, causing property damage and loss of human life. These waves are known to be generated by strong local winds associated with temperate cyclones moving eastward over the East Sea off the Korean peninsula. Because the waves often arrive in clear weather, the damage can be especially severe. It is therefore necessary to predict and forecast large swell-like waves in order to prevent and respond to coastal damage. In Korea, an operational oceanographic system (KOOS) has been developed by the Korea Institute of Ocean Science and Technology (KIOST). KOOS provides daily 72-hour ocean forecasts of wind, water elevation, sea currents, water temperature, salinity, and waves, computed from meteorological and hydrodynamic models (WRF, ROMS, MOM, and MOHID) as well as wave models (WW-III and SWAN). In order to evaluate model performance and guarantee a certain level of accuracy of the ocean forecasts, a Skill Assessment (SA) system was established as a module of KOOS. Skill assessment is performed by comparing model results with in-situ observation data, and model errors are quantified with skill scores. The statistics used in the skill assessment include measures of both error and correlation, such as root-mean-square error (RMSE), percentage root-mean-square error (RMSE%), mean bias (MB), correlation coefficient (R), scatter index (SI), circular correlation (CC), and central frequency (CF), the frequency with which errors lie within acceptable error criteria. These statistics are used not only to quantify errors but also to improve forecast accuracy by providing interactive feedback. However, abnormal phenomena such as the large swell-like waves on the east coast of Korea require a more advanced and optimized error quantification method, one that allows the abnormal waves to be predicted well and the accuracy of the forecasts to be improved by supporting modification of the physics and numerics of the models through sensitivity tests. In this study, we propose an error quantification method suited to abnormal high waves caused by local weather conditions. Furthermore, we show how the quantified errors contribute to improving wind-wave modeling through data assimilation and the use of reanalysis data.
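
    For reference, a minimal sketch of the kind of skill scores listed above is given below. It is not the KOOS SA implementation, just common definitions of RMSE, percentage RMSE, mean bias, correlation, scatter index, and central frequency, with the acceptance criterion treated as a user-supplied assumption.

```python
# Sketch of common skill-assessment statistics for model vs. observation
# time series. The acceptance criterion used for the central frequency
# is an arbitrary assumption here, not a KOOS setting.
import numpy as np

def skill_scores(model, obs, criterion):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    err = model - obs
    rmse = np.sqrt(np.mean(err ** 2))
    return {
        "RMSE": rmse,
        "RMSE%": 100.0 * rmse / np.mean(np.abs(obs)),
        "MB": np.mean(err),                              # mean bias
        "R": np.corrcoef(model, obs)[0, 1],              # correlation coefficient
        "SI": rmse / np.mean(obs),                       # scatter index
        "CF": 100.0 * np.mean(np.abs(err) <= criterion), # central frequency (%)
    }

# Toy example: significant wave height forecasts vs. observations (m).
obs = [1.2, 1.5, 2.1, 3.4, 2.8, 1.9]
model = [1.0, 1.7, 2.0, 3.9, 2.5, 2.0]
for name, value in skill_scores(model, obs, criterion=0.5).items():
    print(f"{name:6s} {value:7.3f}")
```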

  7. Simple Quantification of Pentosidine in Human Urine and Plasma by High-Performance Liquid Chromatography

    PubMed Central

    Lee, Ji Sang; Chung, Yoon-Sok; Chang, Sun Young

    2017-01-01

    Pentosidine is an advanced glycation end-product (AGE) and fluorescent cross-link compound. A simple high-performance liquid chromatographic (HPLC) method was developed for the detection and quantification of pentosidine in human urine and plasma. The mobile phase used a gradient system to improve separation of pentosidine from endogenous peaks, and chromatograms were monitored by a fluorescence detector set at excitation and emission wavelengths of 328 and 378 nm, respectively. The retention time for pentosidine was 24.3 min and the lower limits of quantification (LLOQ) in human urine and plasma were 1 nM. The intraday assay precisions (coefficients of variation) were generally low and found to be in the range of 5.19–7.49% and 4.96–8.78% for human urine and plasma, respectively. The corresponding values of the interday assay precisions were 9.45% and 4.27%. Accuracies (relative errors) ranged from 87.9% to 115%. Pentosidine was stable in a range of pH solutions, human urine, and plasma. In summary, this HPLC method can be applied in future preclinical and clinical evaluation of pentosidine in diabetic patients. PMID:29181026

  8. Metering error quantification under voltage and current waveform distortion

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Wang, Jia; Xie, Zhi; Zhang, Ran

    2017-09-01

    With the integration of more and more renewable energy sources and distorting loads into the power grid, voltage and current waveform distortion results in metering errors in smart meters. Because of the negative effects on metering accuracy and fairness, the combined error of energy metering is an important subject of study. In this paper, after comparing theoretical metering values with recorded values under different meter modes for linear and nonlinear loads, a quantification method for metering mode error under waveform distortion is proposed. Based on the metering and time-division multiplier principles, a quantification method for metering accuracy error is also proposed. By analyzing the mode error and accuracy error, a comprehensive error analysis method suitable for renewable energy and nonlinear loads is presented. The proposed method has been verified by simulation.

  9. Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

    PubMed

    Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R

    2014-12-01

    Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used to determine distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of expert behavior: confident, reasonably doubting, and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of expert behavior were compared. Variability of the scores, expressed as the standard deviation of the simulated score values from the distribution mean, was used for assessment of score robustness. A range of the score values, calculated directly from elicited data and simulated by the Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores obtained in the case study can be assessed as satisfactory for quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
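
    The simulation strategy can be illustrated with a short sketch: sample expert scores from a probability mass function representing a given expert behaviour, then summarize the resulting score distribution by its mean and standard deviation (used above as a robustness indicator). The 1-to-5 scoring scale and the pmfs below are illustrative assumptions, not the values elicited in the case study.

```python
# Sketch: Monte Carlo simulation of expert scoring behaviour.
# The score scale (1..5) and the pmfs for "confident", "doubting" and
# "irresolute" experts are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(seed=1)
SCORES = np.array([1, 2, 3, 4, 5])

PMFS = {
    "confident":  [0.00, 0.05, 0.10, 0.70, 0.15],  # mass concentrated on one score
    "doubting":   [0.05, 0.15, 0.30, 0.35, 0.15],  # mass spread around a mode
    "irresolute": [0.20, 0.20, 0.20, 0.20, 0.20],  # nearly uniform
}

def simulate_scores(pmf, n_draws=100_000):
    """Draw scores from the pmf and return (mean, standard deviation)."""
    draws = rng.choice(SCORES, size=n_draws, p=pmf)
    return draws.mean(), draws.std(ddof=1)

for behaviour, pmf in PMFS.items():
    mean, sd = simulate_scores(pmf)
    print(f"{behaviour:10s} mean score = {mean:4.2f}, variability (SD) = {sd:4.2f}")
```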

  10. Multi-muscle FES force control of the human arm for arbitrary goals.

    PubMed

    Schearer, Eric M; Liao, Yu-Wei; Perreault, Eric J; Tresch, Matthew C; Memberg, William D; Kirsch, Robert F; Lynch, Kevin M

    2014-05-01

    We present a method for controlling a neuroprosthesis for a paralyzed human arm using functional electrical stimulation (FES) and characterize the errors of the controller. The subject has surgically implanted electrodes for stimulating muscles in her shoulder and arm. Using input/output data, a model mapping muscle stimulations to isometric endpoint forces measured at the subject's hand was identified. We inverted the model of this redundant and coupled multiple-input multiple-output system by minimizing muscle activations and used this inverse for feedforward control. The magnitude of the total root mean square error over a grid in the volume of achievable isometric endpoint force targets was 11% of the total range of achievable forces. Major sources of error were random error due to trial-to-trial variability and model bias due to nonstationary system properties. Because the muscles working collectively are the actuators of the skeletal system, the quantification of errors in force control guides designs of motion controllers for multi-joint, multi-muscle FES systems that can achieve arbitrary goals.
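
    The model-inversion step described above can be sketched as a constrained least-squares problem: given a linear map from muscle activations to endpoint forces, find the non-negative, bounded activation vector that reproduces a target force with small total activation. The matrix and target below are made-up numbers, and the Tikhonov-style penalty on activation magnitude stands in for the paper's "minimize muscle activations" criterion; this is a sketch of the idea, not the identified subject model.

```python
# Sketch: feedforward inversion of a linear muscle-to-force model.
# Activations a in [0, 1] map to endpoint force f = A @ a; we pick small
# activations that approximately achieve a target force.
# A and the target are hypothetical numbers, not identified subject data.
import numpy as np
from scipy.optimize import lsq_linear

A = np.array([[ 4.0, -2.0,  1.0,  0.5],    # Fx contribution of each muscle (N)
              [ 1.0,  3.0, -1.5,  2.0],    # Fy
              [ 0.5,  1.0,  2.5, -1.0]])   # Fz
f_target = np.array([2.0, 3.0, 1.0])       # desired endpoint force (N)

# Append a small penalty on ||a|| to the force-matching objective so the
# redundant system picks low activations (Tikhonov-style regularization).
lam = 0.05
A_aug = np.vstack([A, lam * np.eye(A.shape[1])])
b_aug = np.concatenate([f_target, np.zeros(A.shape[1])])

res = lsq_linear(A_aug, b_aug, bounds=(0.0, 1.0))
a = res.x
f_achieved = A @ a
rms_error = np.sqrt(np.mean((f_achieved - f_target) ** 2))

print("activations:", np.round(a, 3))
print("achieved force:", np.round(f_achieved, 3), "RMS error:", round(rms_error, 3))
```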

  11. Simultaneous quantification of cholesterol sulfate, androgen sulfates, and progestagen sulfates in human serum by LC-MS/MS

    PubMed Central

    Sánchez-Guijo, Alberto; Oji, Vinzenz; Hartmann, Michaela F.; Traupe, Heiko; Wudy, Stefan A.

    2015-01-01

    Steroids are primarily present in human fluids in their sulfated forms. Profiling of these compounds is important from both diagnostic and physiological points of view. Here, we present a novel method for the quantification of 11 intact steroid sulfates in human serum by LC-MS/MS. The compounds analyzed in our method, some of which are quantified for the first time in blood, include cholesterol sulfate, pregnenolone sulfate, 17-hydroxy-pregnenolone sulfate, 16-α-hydroxy-dehydroepiandrosterone sulfate, dehydroepiandrosterone sulfate, androstenediol sulfate, androsterone sulfate, epiandrosterone sulfate, testosterone sulfate, epitestosterone sulfate, and dihydrotestosterone sulfate. The assay was conceived to quantify sulfated steroids in a broad range of concentrations, requiring only 300 μl of serum. The method has been validated and its performance was studied at three quality controls, selected for each compound according to its physiological concentration. The assay showed good linearity (R2 > 0.99) and recovery for all the compounds, with limits of quantification ranging between 1 and 80 ng/ml. Averaged intra-day and between-day precisions (coefficient of variation) and accuracies (relative errors) were below 10%. The method has been successfully applied to study the sulfated steroidome in diseases such as steroid sulfatase deficiency, proving its diagnostic value. This is, to our best knowledge, the most comprehensive method available for the quantification of sulfated steroids in human blood. PMID:26239050

  12. Statistical image quantification toward optimal scan fusion and change quantification

    NASA Astrophysics Data System (ADS)

    Potesil, Vaclav; Zhou, Xiang Sean

    2007-03-01

    Recent advances in imaging technology have brought new challenges and opportunities for automatic and quantitative analysis of medical images. With broader accessibility of more imaging modalities for more patients, fusion of modalities/scans from one time point and longitudinal analysis of changes across time points have become the two most critical differentiators to support more informed, more reliable, and more reproducible diagnosis and therapy decisions. Unfortunately, scan fusion and longitudinal analysis are both inherently plagued with increased levels of statistical errors. A lack of comprehensive analysis by imaging scientists and a lack of full awareness by physicians pose potential risks in clinical practice. In this paper, we discuss several key error factors affecting imaging quantification, study their interactions, and introduce a simulation strategy to establish general error bounds for change quantification across time. We quantitatively show that image resolution, voxel anisotropy, lesion size, eccentricity, and orientation are all contributing factors to quantification error, and that there is an intricate relationship between voxel anisotropy and lesion shape in affecting quantification error. Specifically, when two or more scans are to be fused at the feature level, optimal linear fusion analysis reveals that scans with voxel anisotropy aligned with lesion elongation should receive a higher weight than other scans. As a result of such optimal linear fusion, we achieve a lower variance than naïve averaging. Simulated experiments are used to validate theoretical predictions. Future work based on the proposed simulation methods may lead to general guidelines and error lower bounds for quantitative image analysis and change detection.
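
    The "optimal linear fusion" result mentioned above is essentially inverse-variance weighting: when the same quantity is measured by two scans with different error variances, weighting each measurement by the reciprocal of its variance minimizes the variance of the fused estimate and always does at least as well as naive averaging. The sketch below demonstrates this on simulated measurements with assumed noise levels; it is my own illustration, not the paper's simulation code.

```python
# Sketch: inverse-variance (optimal linear) fusion of two noisy measurements
# of the same lesion volume, compared with naive averaging.
# True value and noise standard deviations are assumed for illustration.
import numpy as np

rng = np.random.default_rng(seed=0)
true_volume = 10.0            # arbitrary units
sigma = np.array([0.5, 1.5])  # per-scan measurement noise (scan 1 is sharper)
n_trials = 200_000

# Simulated measurements from the two scans.
meas = true_volume + rng.normal(0.0, sigma, size=(n_trials, 2))

# Inverse-variance weights (normalized) vs. equal weights.
w_opt = (1.0 / sigma**2) / np.sum(1.0 / sigma**2)
fused_opt = meas @ w_opt
fused_avg = meas.mean(axis=1)

print("optimal weights          :", np.round(w_opt, 3))
print("variance, optimal fusion :", round(fused_opt.var(), 4))
print("variance, naive averaging:", round(fused_avg.var(), 4))
print("theoretical optimum      :", round(1.0 / np.sum(1.0 / sigma**2), 4))
```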

  13. Simultaneous quantification of cholesterol sulfate, androgen sulfates, and progestagen sulfates in human serum by LC-MS/MS.

    PubMed

    Sánchez-Guijo, Alberto; Oji, Vinzenz; Hartmann, Michaela F; Traupe, Heiko; Wudy, Stefan A

    2015-09-01

    Steroids are primarily present in human fluids in their sulfated forms. Profiling of these compounds is important from both diagnostic and physiological points of view. Here, we present a novel method for the quantification of 11 intact steroid sulfates in human serum by LC-MS/MS. The compounds analyzed in our method, some of which are quantified for the first time in blood, include cholesterol sulfate, pregnenolone sulfate, 17-hydroxy-pregnenolone sulfate, 16-α-hydroxy-dehydroepiandrosterone sulfate, dehydroepiandrosterone sulfate, androstenediol sulfate, androsterone sulfate, epiandrosterone sulfate, testosterone sulfate, epitestosterone sulfate, and dihydrotestosterone sulfate. The assay was conceived to quantify sulfated steroids in a broad range of concentrations, requiring only 300 μl of serum. The method has been validated and its performance was studied at three quality controls, selected for each compound according to its physiological concentration. The assay showed good linearity (R(2) > 0.99) and recovery for all the compounds, with limits of quantification ranging between 1 and 80 ng/ml. Averaged intra-day and between-day precisions (coefficient of variation) and accuracies (relative errors) were below 10%. The method has been successfully applied to study the sulfated steroidome in diseases such as steroid sulfatase deficiency, proving its diagnostic value. This is, to our best knowledge, the most comprehensive method available for the quantification of sulfated steroids in human blood. Copyright © 2015 by the American Society for Biochemistry and Molecular Biology, Inc.

  14. Automated quantification of renal interstitial fibrosis for computer-aided diagnosis: A comprehensive tissue structure segmentation method.

    PubMed

    Tey, Wei Keat; Kuang, Ye Chow; Ooi, Melanie Po-Leen; Khoo, Joon Joon

    2018-03-01

    Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses due to the uncertainties in human judgement. This study proposes an automated quantification system for accurately measuring the amount of interstitial fibrosis in renal biopsy images as a consistent basis of comparison among pathologists. The system identifies the renal tissue structures through knowledge-based rules employing colour space transformations and structural feature extraction from the images; in particular, glomerulus identification is based on multiscale textural feature analysis and a support vector machine. The regions in the biopsy representing interstitial fibrosis are deduced through the elimination of non-interstitial fibrosis structures from the biopsy area and quantified as a percentage of the total area of the biopsy sample. The experiments conducted evaluate the system in terms of quantification accuracy, intra- and inter-observer variability in visual quantification by pathologists, and the effect introduced by the automated quantification system on the pathologists' diagnosis. A 40-image ground truth dataset was manually prepared by consulting an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists demonstrated a good correlation between the automated system and the pathologists' visual evaluation, with an average error of 9 percentage points in the quantification result. Experiments investigating the variability in pathologists, involving samples from 70 kidney patients, also showed the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification. The accuracy of the proposed quantification system has been validated with the ground truth dataset and compared against the pathologists' quantification results. It has been shown that the correlation between different pathologists' estimation of interstitial fibrosis area improved significantly, demonstrating the effectiveness of the quantification system as a diagnostic aid. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Philosophy of ATHEANA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bley, D.C.; Cooper, S.E.; Forester, J.A.

    ATHEANA, a second-generation Human Reliability Analysis (HRA) method, integrates advances in psychology with engineering, human factors, and Probabilistic Risk Analysis (PRA) disciplines to provide an HRA quantification process and PRA modeling interface that can accommodate and represent human performance in real nuclear power plant events. The method uses the characteristics of serious accidents, identified through retrospective analysis of serious operational events, to set priorities in a search process for significant human failure events, unsafe acts, and error-forcing context (unfavorable plant conditions combined with negative performance-shaping factors). ATHEANA has been tested in a demonstration project at an operating pressurized water reactor.

  16. Challenges in leveraging existing human performance data for quantifying the IDHEAS HRA method

    DOE PAGES

    Liao, Huafei N.; Groth, Katrina; Stevens-Adams, Susan

    2015-07-29

    Our article documents an exploratory study for collecting and using human performance data to inform human error probability (HEP) estimates for a new human reliability analysis (HRA) method, the IntegrateD Human Event Analysis System (IDHEAS). The method is based on cognitive models and mechanisms underlying human behaviour and employs a framework of 14 crew failure modes (CFMs) to represent human failures typical of human performance in nuclear power plant (NPP) internal, at-power events [1]. A decision tree (DT) was constructed for each CFM to assess the probability of the CFM occurring in different contexts. Data needs for IDHEAS quantification are discussed. The data collection framework and process are then described, and how the collected data were used to inform HEP estimation is illustrated with two examples. Next, five major technical challenges are identified for leveraging human performance data for IDHEAS quantification. These challenges reflect the data needs specific to IDHEAS. More importantly, they also represent general issues with current human performance data and can provide insight for a path forward to support HRA data collection, use, and exchange for HRA method development, implementation, and validation.

  17. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
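
    As a concrete illustration of the Monte Carlo approach advocated above, the sketch below propagates loosely specified uncertainties through a simple multiplicative calculation (cases = reported cases × under-reporting factor, summed over two pathways). The structure and all distributions are invented for illustration; they are not the authors' foodborne-illness model.

```python
# Sketch: Monte Carlo propagation of non-sampling (systematic) uncertainty
# through a simple incidence calculation. All inputs are hypothetical:
# each quantity is given a distribution expressing expert judgement
# rather than random-sampling error.
import numpy as np

rng = np.random.default_rng(seed=42)
N = 100_000

# Pathway A: reported cases known fairly well, under-reporting factor uncertain.
reported_a = rng.normal(50_000, 2_000, N)          # reported cases
underreport_a = rng.triangular(10, 20, 38, N)      # multiplier (low, mode, high)

# Pathway B: order-of-magnitude knowledge only -> lognormal spread.
reported_b = rng.normal(5_000, 500, N)
underreport_b = rng.lognormal(mean=np.log(25), sigma=0.4, size=N)

total = reported_a * underreport_a + reported_b * underreport_b

lo, med, hi = np.percentile(total, [2.5, 50, 97.5])
print(f"median estimate : {med:,.0f} cases/year")
print(f"95% uncertainty : {lo:,.0f} -- {hi:,.0f}")
```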

  18. Quantification of the Uncertainties for the Ares I A106 Ascent Aerodynamic Database

    NASA Technical Reports Server (NTRS)

    Houlden, Heather P.; Favaregh, Amber L.

    2010-01-01

    A detailed description of the quantification of uncertainties for the Ares I ascent aero 6-DOF wind tunnel database is presented. The database was constructed from wind tunnel test data and CFD results. The experimental data came from tests conducted in the Boeing Polysonic Wind Tunnel in St. Louis and the Unitary Plan Wind Tunnel at NASA Langley Research Center. The major sources of error for this database were: experimental error (repeatability), database modeling errors, and database interpolation errors.
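
    When independent error sources such as repeatability, modeling, and interpolation error are quantified separately, a common way to report a total database uncertainty is to combine them in quadrature (root-sum-square). The numbers below are placeholders, and the quadrature rule is a standard convention rather than a statement of how the Ares I database team combined their terms.

```python
# Sketch: root-sum-square combination of independent uncertainty sources
# for a single aerodynamic coefficient. Values are placeholders.
import math

sources = {
    "experimental (repeatability)": 0.010,
    "database modeling":            0.020,
    "database interpolation":       0.005,
}

total = math.sqrt(sum(u ** 2 for u in sources.values()))
for name, u in sources.items():
    print(f"{name:30s} {u:.3f}")
print(f"{'combined (RSS)':30s} {total:.3f}")
```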

  19. Intellicount: High-Throughput Quantification of Fluorescent Synaptic Protein Puncta by Machine Learning

    PubMed Central

    Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.

    2017-01-01

    Abstract Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully-automated synapse quantification program which applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324

  20. Quantification of airport community noise impact in terms of noise levels, population density, and human subjective response

    NASA Technical Reports Server (NTRS)

    Deloach, R.

    1981-01-01

    The Fraction Impact Method (FIM), developed by the National Research Council (NRC) for assessing the amount and physiological effect of noise, is described. In this method, the number of people exposed to a given level of noise is multiplied by a weighting factor that depends on the noise level. It is pointed out that the Aircraft-noise Levels and Annoyance MOdel (ALAMO), recently developed at NASA Langley Research Center, can perform the NRC fractional impact calculations for given modes of operation at any U.S. airport. The sensitivity of these calculations to errors in estimates of population, noise level, and human subjective response is discussed. It is found that a change in source noise causes a substantially smaller change in contour area than would be predicted simply on the basis of inverse square law considerations. Another finding is that the impact calculations are generally less sensitive to source noise errors than to systematic errors in population or subjective response.
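
    The fractional-impact calculation described above amounts to a population-weighted sum: each noise-exposure band contributes its population multiplied by a level-dependent weighting factor. The sketch below uses a hypothetical logistic dose-response curve as the weighting function and made-up exposure bands; the actual NRC weighting function and ALAMO's population and contour data are not reproduced here.

```python
# Sketch: fractional impact = sum over exposure bands of
# population * weight(noise level). The weighting curve and the
# population/level bands are hypothetical placeholders.
import math

def weight(dnl_db):
    """Hypothetical dose-response weighting factor in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-(dnl_db - 72.0) / 5.0))

exposure_bands = [   # (day-night average sound level in dB, people exposed)
    (55.0, 120_000),
    (60.0,  60_000),
    (65.0,  25_000),
    (70.0,   8_000),
    (75.0,   1_500),
]

fractional_impact = sum(pop * weight(level) for level, pop in exposure_bands)
print(f"noise-impacted population (fractional impact): {fractional_impact:,.0f}")
```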

  1. A new analytical method for quantification of olive and palm oil in blends with other vegetable edible oils based on the chromatographic fingerprints from the methyl-transesterified fraction.

    PubMed

    Jiménez-Carvelo, Ana M; González-Casado, Antonio; Cuadros-Rodríguez, Luis

    2017-03-01

    A new analytical method for the quantification of olive oil and palm oil in blends with other vegetable edible oils (canola, safflower, corn, peanut, seeds, grapeseed, linseed, sesame and soybean) using normal-phase liquid chromatography and applying chemometric tools was developed. The procedure for obtaining the chromatographic fingerprint of the methyl-transesterified fraction of each blend is described. The multivariate quantification methods used were Partial Least Squares Regression (PLS-R) and Support Vector Regression (SVR). The quantification results were evaluated by several parameters such as the Root Mean Square Error of Validation (RMSEV), Mean Absolute Error of Validation (MAEV) and Median Absolute Error of Validation (MdAEV). It should be highlighted that, with the newly proposed analytical method, the chromatographic analysis takes only eight minutes; the results obtained showed the potential of this method and allowed quantification of mixtures of olive oil and palm oil with other vegetable oils. Copyright © 2016 Elsevier B.V. All rights reserved.
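
    For reference, the three validation metrics named above have straightforward definitions; the sketch below computes RMSEV, MAEV, and MdAEV for an arbitrary set of predicted vs. reference oil percentages (toy numbers, not data from the paper).

```python
# Sketch: validation error metrics for a multivariate calibration model.
# Predicted and reference olive-oil percentages are toy numbers.
import numpy as np

reference = np.array([10.0, 25.0, 40.0, 55.0, 70.0, 85.0])   # % olive oil
predicted = np.array([11.2, 23.5, 41.0, 53.8, 71.5, 84.1])

errors = predicted - reference
rmsev = np.sqrt(np.mean(errors ** 2))        # Root Mean Square Error of Validation
maev = np.mean(np.abs(errors))               # Mean Absolute Error of Validation
mdaev = np.median(np.abs(errors))            # Median Absolute Error of Validation

print(f"RMSEV = {rmsev:.2f} %, MAEV = {maev:.2f} %, MdAEV = {mdaev:.2f} %")
```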

  2. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: Dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, Sparse tensorization methods[2] utilizing node-nested hierarchies, Sampling methods[4] for high-dimensional random variable spaces.

  3. Quantification of the Uncertainties for the Space Launch System Liftoff/Transition and Ascent Databases

    NASA Technical Reports Server (NTRS)

    Favaregh, Amber L.; Houlden, Heather P.; Pinier, Jeremy T.

    2016-01-01

    A detailed description of the uncertainty quantification process for the Space Launch System Block 1 vehicle configuration liftoff/transition and ascent 6-Degree-of-Freedom (DOF) aerodynamic databases is presented. These databases were constructed from wind tunnel test data acquired in the NASA Langley Research Center 14- by 22-Foot Subsonic Wind Tunnel and the Boeing Polysonic Wind Tunnel in St. Louis, MO, respectively. The major sources of error for these databases were experimental error and database modeling errors.

  4. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yueqi; Lava, Pascal; Reu, Phillip

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
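
    The dependence quoted above (image noise, subset intensity-gradient sum, subpixel position, interpolation) is often summarized, for the idealized case of pure noise and exact interpolation, by a displacement variance of roughly 2·σ²/Σ(∂f/∂x)². The sketch below evaluates that approximation for a synthetic subset; the speckle pattern, noise level, and the neglect of interpolation and subpixel effects are simplifying assumptions, not results reproduced from this report.

```python
# Sketch: first-order estimate of the random displacement error of a DIC
# subset, var(u) ~= 2 * sigma_noise^2 / sum of squared intensity gradients.
# The synthetic speckle subset and noise level are illustrative assumptions;
# interpolation effects are ignored.
import numpy as np

rng = np.random.default_rng(seed=3)
subset = rng.uniform(0.0, 255.0, size=(21, 21))       # synthetic speckle subset
subset = (subset + np.roll(subset, 1, axis=1)) / 2.0  # mild smoothing

sigma_noise = 2.0                                     # grey-level noise (counts)
gx = np.gradient(subset, axis=1)                      # x intensity gradient
sum_gx2 = np.sum(gx ** 2)

var_u = 2.0 * sigma_noise ** 2 / sum_gx2
print(f"sum of squared gradients: {sum_gx2:.1f}")
print(f"predicted displacement noise (std, pixels): {np.sqrt(var_u):.4f}")
```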

  5. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE PAGES

    Wang, Yueqi; Lava, Pascal; Reu, Phillip; ...

    2015-12-23

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.

  6. Mapping DNA polymerase errors by single-molecule sequencing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, David F.; Lu, Jenny; Chang, Seungwoo

    Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.
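
    The barcoding idea can be sketched in a few lines: reads that share a barcode are copies of the same replication product, so a base call supported by (nearly) all reads in the group is attributed to the polymerase, while disagreements among reads are attributed to sequencing noise and discarded. The toy reads and the 90% consensus threshold below are illustrative, not the assay's actual pipeline.

```python
# Sketch: barcode-grouped consensus calling to separate polymerase errors
# from sequencing errors. Reads, barcodes and the 90% consensus threshold
# are illustrative assumptions.
from collections import Counter, defaultdict

reference = "ACGTACGT"
reads = [  # (barcode, read sequence) -- multiple reads per replication product
    ("BC1", "ACGTACGT"), ("BC1", "ACGTACGT"), ("BC1", "ACGAACGT"),
    ("BC2", "ACCTACGT"), ("BC2", "ACCTACGT"), ("BC2", "ACCTACGT"),
]

groups = defaultdict(list)
for barcode, seq in reads:
    groups[barcode].append(seq)

for barcode, seqs in groups.items():
    consensus = []
    for column in zip(*seqs):                     # same position across reads
        base, count = Counter(column).most_common(1)[0]
        # Require near-unanimity; otherwise treat as a sequencing artifact.
        consensus.append(base if count / len(column) >= 0.9 else "N")
    errors = [(i, r, c) for i, (r, c) in enumerate(zip(reference, consensus))
              if c not in ("N", r)]
    print(barcode, "consensus:", "".join(consensus), "polymerase errors:", errors)
```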

  7. Mapping DNA polymerase errors by single-molecule sequencing

    DOE PAGES

    Lee, David F.; Lu, Jenny; Chang, Seungwoo; ...

    2016-05-16

    Genomic integrity is compromised by DNA polymerase replication errors, which occur in a sequence-dependent manner across the genome. Accurate and complete quantification of a DNA polymerase's error spectrum is challenging because errors are rare and difficult to detect. We report a high-throughput sequencing assay to map in vitro DNA replication errors at the single-molecule level. Unlike previous methods, our assay is able to rapidly detect a large number of polymerase errors at base resolution over any template substrate without quantification bias. To overcome the high error rate of high-throughput sequencing, our assay uses a barcoding strategy in which each replication product is tagged with a unique nucleotide sequence before amplification. This allows multiple sequencing reads of the same product to be compared so that sequencing errors can be found and removed. We demonstrate the ability of our assay to characterize the average error rate, error hotspots and lesion bypass fidelity of several DNA polymerases.

  8. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2014-04-01

    Report TR-14-33: A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems; Donald Estep and Michael Holst; April 2014; grant HDTRA1-09-1-0036; approved for public release, distribution is unlimited. Cited work: "Barrier methods for critical exponent problems in geometric analysis and mathematical physics," J. Erway and M. Holst, submitted for publication.

  9. Image-guided spatial localization of heterogeneous compartments for magnetic resonance

    PubMed Central

    An, Li; Shen, Jun

    2015-01-01

    Purpose: The image-guided localization method SPectral Localization Achieved by Sensitivity Heterogeneity (SPLASH) allows rapid measurement of signals from irregularly shaped anatomical compartments without using phase encoding gradients. Here, the authors propose a novel method to address the issue of heterogeneous signal distribution within the localized compartments. Methods: Each compartment was subdivided into multiple subcompartments and their spectra were solved for by Tikhonov regularization to enforce smoothness within each compartment. The spectrum of a given compartment was generated by combining the spectra of the subcompartments of that compartment. The proposed method was first tested using Monte Carlo simulations and then applied to reconstructing in vivo spectra from irregularly shaped ischemic stroke and normal tissue compartments. Results: Monte Carlo simulations demonstrate that the proposed regularized SPLASH method significantly reduces localization and metabolite quantification errors. In vivo results show that the intracompartment regularization results in ∼40% reduction of error in metabolite quantification. Conclusions: The proposed method significantly reduces localization errors and metabolite quantification errors caused by intracompartment heterogeneous signal distribution. PMID:26328977
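
    Tikhonov regularization as used above can be written as an augmented least-squares problem: minimize ||Ax − b||² + λ²||Lx||², where L penalizes roughness across neighbouring subcompartments. A generic numerical sketch is shown below with a random system matrix and a first-difference smoothness operator; it illustrates the technique only and is not the SPLASH reconstruction.

```python
# Sketch: Tikhonov-regularized least squares, min ||A x - b||^2 + lam^2 ||L x||^2,
# with L a first-difference operator enforcing smoothness across neighbouring
# subcompartments. A, b and lam are arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(seed=7)
n = 8                                      # number of subcompartments
m = 5                                      # number of independent measurements
A = rng.normal(size=(m, n))                # toy sensitivity/encoding matrix
x_true = np.linspace(1.0, 2.0, n)          # smooth "true" amplitudes
b = A @ x_true + rng.normal(0.0, 0.05, size=m)

L = np.diff(np.eye(n), axis=0)             # first-difference (roughness) operator
lam = 1.0                                  # regularization weight (assumed)

# Solve the augmented least-squares system [A; lam*L] x ~= [b; 0].
A_aug = np.vstack([A, lam * L])
b_aug = np.concatenate([b, np.zeros(L.shape[0])])
x_reg, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
x_min_norm, *_ = np.linalg.lstsq(A, b, rcond=None)

print("regularized (smooth) estimate :", np.round(x_reg, 3))
print("unregularized estimate        :", np.round(x_min_norm, 3))
```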

  10. TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics.

    PubMed

    Röst, Hannes L; Liu, Yansheng; D'Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi

    2016-09-01

    Next-generation mass spectrometric (MS) techniques such as SWATH-MS have substantially increased the throughput and reproducibility of proteomic analysis, but ensuring consistent quantification of thousands of peptide analytes across multiple liquid chromatography-tandem MS (LC-MS/MS) runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we developed TRIC (http://proteomics.ethz.ch/tric/), a software tool that utilizes fragment-ion data to perform cross-run alignment, consistent peak-picking and quantification for high-throughput targeted proteomics. TRIC reduced the identification error compared to a state-of-the-art SWATH-MS analysis without alignment by more than threefold at constant recall while correcting for highly nonlinear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups. Thus, TRIC fills a gap in the pipeline for automated analysis of massively parallel targeted proteomics data sets.

  11. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    NASA Astrophysics Data System (ADS)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the identified accident sequences, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors differs significantly from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for estimating the human error probability (HEP) of the core relocation action. This model is based on two competing random variables: phenomenological time and performance time. Response surface and direct Monte Carlo simulation with Latin hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in the human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in the model parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential usefulness for quantifying model uncertainty through sensitivity analysis in the PRA model.
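
    The reliability-physics idea of two competing random variables can be sketched directly: the human error probability is the probability that the operators' performance time exceeds the phenomenological time available. The Weibull phenomenological-time and lognormal performance-time distributions below are illustrative stand-ins for the thermal-hydraulic response surface and interview data used in the thesis.

```python
# Sketch: HEP = P(performance time > phenomenological time), estimated by
# Monte Carlo from two competing random variables. Both distributions are
# illustrative assumptions, not the MNR data.
import numpy as np

rng = np.random.default_rng(seed=11)
N = 1_000_000

# Time available before core relocation becomes unavoidable (minutes).
phenomenological_time = rng.weibull(a=2.5, size=N) * 40.0
# Time the crew needs to diagnose and act (minutes).
performance_time = rng.lognormal(mean=np.log(15.0), sigma=0.5, size=N)

hep = np.mean(performance_time > phenomenological_time)
print(f"estimated human error probability: {hep:.3e}")
```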

  12. Quantification of GC-biased gene conversion in the human genome

    PubMed Central

    Glémin, Sylvain; Arndt, Peter F.; Messer, Philipp W.; Petrov, Dmitri; Galtier, Nicolas; Duret, Laurent

    2015-01-01

    Much evidence indicates that GC-biased gene conversion (gBGC) has a major impact on the evolution of mammalian genomes. However, a detailed quantification of the process is still lacking. The strength of gBGC can be measured from the analysis of derived allele frequency spectra (DAF), but this approach is sensitive to a number of confounding factors. In particular, we show by simulations that the inference is pervasively affected by polymorphism polarization errors and by spatial heterogeneity in gBGC strength. We propose a new general method to quantify gBGC from DAF spectra, incorporating polarization errors, taking spatial heterogeneity into account, and jointly estimating mutation bias. Applying it to human polymorphism data from the 1000 Genomes Project, we show that the strength of gBGC does not differ between hypermutable CpG sites and non-CpG sites, suggesting that in humans gBGC is not caused by the base-excision repair machinery. Genome-wide, the intensity of gBGC is in the nearly neutral area. However, given that recombination occurs primarily within recombination hotspots, 1%–2% of the human genome is subject to strong gBGC. On average, gBGC is stronger in African than in non-African populations, reflecting differences in effective population sizes. However, due to more heterogeneous recombination landscapes, the fraction of the genome affected by strong gBGC is larger in non-African than in African populations. Given that the location of recombination hotspots evolves very rapidly, our analysis predicts that, in the long term, a large fraction of the genome is affected by short episodes of strong gBGC. PMID:25995268

  13. Novel isotopic N, N-Dimethyl Leucine (iDiLeu) Reagents Enable Absolute Quantification of Peptides and Proteins Using a Standard Curve Approach

    NASA Astrophysics Data System (ADS)

    Greer, Tyler; Lietz, Christopher B.; Xiang, Feng; Li, Lingjun

    2015-01-01

    Absolute quantification of protein targets using liquid chromatography-mass spectrometry (LC-MS) is a key component of candidate biomarker validation. One popular method combines multiple reaction monitoring (MRM) using a triple quadrupole instrument with stable isotope-labeled standards (SIS) for absolute quantification (AQUA). LC-MRM AQUA assays are sensitive and specific, but they are also expensive because of the cost of synthesizing stable isotope peptide standards. While the chemical modification approach using mass differential tags for relative and absolute quantification (mTRAQ) represents a more economical approach when quantifying large numbers of peptides, these reagents are costly and still suffer from lower throughput because only two concentration values per peptide can be obtained in a single LC-MS run. Here, we have developed and applied a set of five novel mass difference reagents, isotopic N, N-dimethyl leucine (iDiLeu). These labels contain an amine-reactive group (triazine ester), are cost-effective because of their synthetic simplicity, and increase throughput compared with previous LC-MS quantification methods by allowing construction of a four-point standard curve in one run. iDiLeu-labeled peptides show remarkably similar retention time shifts, slightly lower energy thresholds for higher-energy collisional dissociation (HCD) fragmentation, and high quantification accuracy for trypsin-digested protein samples (median errors <15%). By spiking an iDiLeu-labeled neuropeptide, allatostatin, into mouse urine matrix, two quantification methods were validated. The first uses one labeled peptide as an internal standard to normalize labeled peptide peak areas across runs (<19% error), whereas the second enables standard curve creation and analyte quantification in one run (<8% error).
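
    The one-run, four-point standard curve approach can be sketched as an ordinary linear calibration: known amounts of the standard in four of the label channels define intensity-versus-amount points, a line is fitted, and the amount in the fifth (sample) channel is read off the fitted line. The intensities and amounts below are toy numbers, not iDiLeu data.

```python
# Sketch: one-run standard-curve quantification with four standard channels
# and one sample channel. All intensities/amounts are toy numbers.
import numpy as np

# Four standard points: known amount (fmol) vs. measured channel intensity.
amounts = np.array([10.0, 50.0, 100.0, 200.0])
intensities = np.array([1.1e4, 5.3e4, 1.04e5, 2.11e5])

# Least-squares fit of intensity = slope * amount + intercept.
slope, intercept = np.polyfit(amounts, intensities, deg=1)

sample_intensity = 7.8e4                       # fifth (sample) channel
sample_amount = (sample_intensity - intercept) / slope
print(f"slope = {slope:.1f}, intercept = {intercept:.1f}")
print(f"estimated analyte amount: {sample_amount:.1f} fmol")
```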

  14. Two low-cost digital camera-based platforms for quantitative creatinine analysis in urine.

    PubMed

    Debus, Bruno; Kirsanov, Dmitry; Yaroshenko, Irina; Sidorova, Alla; Piven, Alena; Legin, Andrey

    2015-10-01

    In clinical analysis, creatinine is a routine biomarker for the assessment of renal and muscular dysfunctions. Although several techniques have been proposed for a fast and accurate quantification of creatinine in human serum or urine, most of them require expensive or complex apparatus, advanced sample preparation or skilled operators. To circumvent these issues, we propose two home-made platforms based on a CD Spectroscope (CDS) and Computer Screen Photo-assisted Technique (CSPT) for the rapid assessment of creatinine level in human urine. Both systems display a linear range (r(2) = 0.9967 and 0.9972, respectively) from 160 μmol L(-1) to 1.6 mmol L(-1) for standard creatinine solutions (n = 15) with respective detection limits of 89 μmol L(-1) and 111 μmol L(-1). Good repeatability was observed for intra-day (1.7-2.9%) and inter-day (3.6-6.5%) measurements evaluated on three consecutive days. The performance of CDS and CSPT was also validated in real human urine samples (n = 26) using capillary electrophoresis data as reference. Corresponding Partial Least-Squares (PLS) regression models provided mean relative errors below 10% in creatinine quantification. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Quantification and characterization of leakage errors

    NASA Astrophysics Data System (ADS)

    Wood, Christopher J.; Gambetta, Jay M.

    2018-03-01

    We present a general framework for the quantification and characterization of leakage errors that result when a quantum system is encoded in the subspace of a larger system. To do this we introduce metrics for quantifying the coherent and incoherent properties of the resulting errors and we illustrate this framework with several examples relevant to superconducting qubits. In particular, we propose two quantities, the leakage and seepage rates, which together with average gate fidelity allow for characterizing the average performance of quantum gates in the presence of leakage and show how the randomized benchmarking protocol can be modified to enable the robust estimation of all three quantities for a Clifford gate set.

  16. Accelerated T1ρ acquisition for knee cartilage quantification using compressed sensing and data-driven parallel imaging: A feasibility study.

    PubMed

    Pandit, Prachi; Rivoire, Julien; King, Kevin; Li, Xiaojuan

    2016-03-01

    Quantitative T1ρ imaging is beneficial for early detection of osteoarthritis but has seen limited clinical use due to long scan times. In this study, we evaluated the feasibility of accelerated T1ρ mapping for knee cartilage quantification using a combination of compressed sensing (CS) and data-driven parallel imaging (ARC, Autocalibrating Reconstruction for Cartesian sampling). A sequential combination of ARC and CS, both during data acquisition and reconstruction, was used to accelerate the acquisition of T1ρ maps. Phantom, ex vivo (porcine knee), and in vivo (human knee) imaging was performed on a GE 3T MR750 scanner. T1ρ quantification after CS-accelerated acquisition was compared with non-CS-accelerated acquisition for various cartilage compartments. Accelerating image acquisition using CS did not introduce major deviations in quantification. The coefficient of variation for the root mean squared error increased with increasing acceleration, but for in vivo measurements, it stayed under 5% for a net acceleration factor up to 2, where the acquisition was 25% faster than the reference (only ARC). To the best of our knowledge, this is the first implementation of CS for in vivo T1ρ quantification. These early results show that this technique holds great promise in making quantitative imaging techniques more accessible for clinical applications. © 2015 Wiley Periodicals, Inc.

  17. Quantitative proton magnetic resonance spectroscopy without water suppression

    NASA Astrophysics Data System (ADS)

    Özdemir, M. S.; DeDeene, Y.; Fieremans, E.; Lemahieu, I.

    2009-06-01

    The suppression of the abundant water signal has traditionally been employed to decrease the dynamic range of the NMR signal in proton MRS (1H MRS) in vivo. When using this approach, if the intent is to utilize the water signal as an internal reference for the absolute quantification of metabolites, additional measurements are required for the acquisition of the water signal. This can be prohibitively time-consuming and is not desired clinically. Additionally, traditional water suppression can lead to metabolite alterations. This can be overcome by performing quantitative 1H MRS without water suppression. However, the non-water-suppressed spectra suffer from gradient-induced frequency modulations, resulting in sidebands in the spectrum. Sidebands may overlap with the metabolites, which renders the spectral analysis and quantification problematic. In this paper, we performed absolute quantification of metabolites without water suppression. Sidebands were removed by utilizing the phase of an external reference signal with a single resonance to observe the time-varying static field fluctuations induced by gradient vibrations, and deconvolving this phase contamination from the desired NMR signal. The metabolites were quantified after sideband correction by calibrating the metabolite signal intensities against the recorded water signal. The method was evaluated by phantom and in vivo measurements in the human brain. The maximum systematic error for the quantified metabolite concentrations was found to be 10.8%, showing the feasibility of quantification after sideband correction.

  18. Quantitative detection of codeine in human plasma using surface-enhanced Raman scattering via adaptation of the isotopic labelling principle.

    PubMed

    Subaihi, Abdu; Muhamadali, Howbeer; Mutter, Shaun T; Blanch, Ewan; Ellis, David I; Goodacre, Royston

    2017-03-27

    In this study surface enhanced Raman scattering (SERS) combined with the isotopic labelling (IL) principle has been used for the quantification of codeine spiked into both water and human plasma. Multivariate statistical approaches were employed for the analysis of these SERS spectral data, particularly partial least squares regression (PLSR), which was used to generate models using the full SERS spectral data for quantification of codeine with, and without, an internal isotopically labelled standard. The PLSR models provided accurate codeine quantification in water and human plasma with high prediction accuracy (Q²). In addition, the employment of codeine-d6 as the internal standard further improved the accuracy of the model, increasing Q² from 0.89 to 0.94 and decreasing the root-mean-square error of prediction (RMSEP) from 11.36 to 8.44. Using the peak area at 1281 cm⁻¹, assigned to C-N stretching, C-H wagging and ring breathing, the limit of detection was calculated in both water and human plasma to be 0.7 μM (209.55 ng mL⁻¹) and 1.39 μM (416.12 ng mL⁻¹), respectively. Due to a lack of definitive codeine vibrational assignments, density functional theory (DFT) calculations have also been used to assign the spectral bands with their corresponding vibrational modes, which were in excellent agreement with our experimental Raman and SERS findings. Thus, we have successfully demonstrated the application of SERS with isotope labelling for the absolute quantification of codeine in human plasma for the first time with a high degree of accuracy and reproducibility. The use of the IL principle, which employs an isotopolog (that is to say, a molecule that differs only by the substitution of atoms by isotopes), improves quantification and reproducibility because the competition of codeine and codeine-d6 for the metal surface used for SERS is equal, and this will offset any difference in the number of particles under analysis or any fluctuations in laser fluence. It is our belief that this may open up new exciting opportunities for testing SERS in real-world samples and applications, which would be an area of potential future studies.
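
    A schematic of the chemometric step (not the published model): PLS regression on simulated spectra, with each spectrum normalised to a band standing in for the isotopically labelled internal standard, reporting Q² and RMSEP on held-out samples. Spectral shapes, concentrations and the normalisation band are placeholders.

```python
# Hedged sketch: PLSR quantification with an internal-standard-style normalisation,
# evaluated by Q2 and RMSEP on a held-out set. Spectra are simulated stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n_samples, n_points = 60, 300
conc = rng.uniform(1, 50, n_samples)                    # analyte concentration (uM)

peak = np.exp(-0.5 * ((np.arange(n_points) - 150) / 5.0) ** 2)
X = np.outer(conc, peak) + rng.normal(0, 0.5, (n_samples, n_points))
# Normalising to a fixed band mimics referencing against the labelled standard.
X /= X[:, 220:230].mean(axis=1, keepdims=True) + 1.0

X_tr, X_te, y_tr, y_te = train_test_split(X, conc, test_size=0.3, random_state=1)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

print(f"Q2    = {r2_score(y_te, y_hat):.3f}")
print(f"RMSEP = {np.sqrt(mean_squared_error(y_te, y_hat)):.3f} uM")
```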

  19. Quantification of immobilized Candida antarctica lipase B (CALB) using ICP-AES combined with Bradford method.

    PubMed

    Nicolás, Paula; Lassalle, Verónica L; Ferreira, María L

    2017-02-01

    The aim of this manuscript was to study the application of a new method of protein quantification in Candida antarctica lipase B commercial solutions. Error sources associated with the traditional Bradford technique were demonstrated. Eight biocatalysts based on C. antarctica lipase B (CALB) immobilized onto magnetite nanoparticles were used. Magnetite nanoparticles were coated with chitosan (CHIT) and modified with glutaraldehyde (GLUT) and aminopropyltriethoxysilane (APTS). Later, CALB was adsorbed on the modified support. The proposed novel protein quantification method included the determination of sulfur (from protein in the CALB solution) by means of inductively coupled plasma atomic emission spectroscopy (ICP-AES). Four different protocols were applied combining ICP-AES and classical Bradford assays, besides carbon, hydrogen and nitrogen (CHN) analysis. The calculated error in protein content using the "classic" Bradford method with bovine serum albumin as standard ranged from 400 to 1200% when protein in the CALB solution was quantified. These errors were calculated considering as "true protein content values" the results of the amount of immobilized protein obtained with the improved method. The optimum quantification procedure involved the combination of the Bradford method, ICP and CHN analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
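
    The arithmetic behind sulfur-based protein quantification can be sketched as follows; the sulfur content, molar mass and measured value below are hypothetical placeholders, not CALB reference data.

```python
# Hedged sketch: inferring protein concentration from measured sulfur, using the
# protein's sulfur mass fraction (from its Cys/Met content). Numbers are invented.
S_measured_mg_per_L = 0.42        # sulfur found by ICP in the enzyme solution
n_S_per_molecule = 8              # hypothetical S atoms per protein molecule
M_protein = 33_000.0              # hypothetical protein molar mass (g/mol)
M_S = 32.06                       # molar mass of sulfur (g/mol)

sulfur_mass_fraction = n_S_per_molecule * M_S / M_protein
protein_mg_per_L = S_measured_mg_per_L / sulfur_mass_fraction
print(f"estimated protein concentration: {protein_mg_per_L:.0f} mg/L")
```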

  20. Precision and accuracy of clinical quantification of myocardial blood flow by dynamic PET: A technical perspective.

    PubMed

    Moody, Jonathan B; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L

    2015-10-01

    A number of exciting advances in PET/CT technology and improvements in methodology have recently converged to enhance the feasibility of routine clinical quantification of myocardial blood flow and flow reserve. Recent promising clinical results are pointing toward an important role for myocardial blood flow in the care of patients. Absolute blood flow quantification can be a powerful clinical tool, but its utility will depend on maintaining precision and accuracy in the face of numerous potential sources of methodological errors. Here we review recent data and highlight the impact of PET instrumentation, image reconstruction, and quantification methods, and we emphasize ⁸²Rb cardiac PET which currently has the widest clinical application. It will be apparent that more data are needed, particularly in relation to newer PET technologies, as well as clinical standardization of PET protocols and methods. We provide recommendations for the methodological factors considered here. At present, myocardial flow reserve appears to be remarkably robust to various methodological errors; however, with greater attention to and more detailed understanding of these sources of error, the clinical benefits of stress-only blood flow measurement may eventually be more fully realized.

  1. Errors in the estimation of approximate entropy and other recurrence-plot-derived indices due to the finite resolution of RR time series.

    PubMed

    García-González, Miguel A; Fernández-Chimeno, Mireya; Ramos-Castro, Juan

    2009-02-01

    An analysis of the errors due to the finite resolution of RR time series in the estimation of the approximate entropy (ApEn) is described. The quantification errors in the discrete RR time series produce considerable errors in the ApEn estimation (bias and variance) when the signal variability or the sampling frequency is low. Similar errors can be found in indices related to the quantification of recurrence plots. An easy way to calculate a figure of merit [the signal to resolution of the neighborhood ratio (SRN)] is proposed in order to predict when the bias in the indices could be high. When SRN is close to an integer value n, the bias is higher than when near n - 1/2 or n + 1/2. Moreover, if SRN is close to an integer value, the lower this value, the greater the bias is.
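
    The effect described above can be reproduced with a short simulation (a sketch under assumed definitions; the SRN is taken here simply as the ApEn tolerance r divided by the quantisation step): an RR series is rounded to a finite resolution and ApEn is recomputed.

```python
# Hedged sketch: ApEn of an RR series before and after quantisation to a finite
# resolution, plus an assumed signal-to-resolution-of-the-neighborhood ratio.
import numpy as np

def apen(x, m=2, r_frac=0.2):
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = r_frac * np.std(x)
    def phi(mm):
        emb = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        c = [np.mean(np.max(np.abs(emb - v), axis=1) <= r) for v in emb]
        return np.mean(np.log(c))
    return phi(m) - phi(m + 1), r

rng = np.random.default_rng(2)
rr = 0.85 + 0.05 * np.sin(np.linspace(0, 20, 400)) + 0.01 * rng.standard_normal(400)

resolution = 1.0 / 128.0                       # s, i.e. RR resolution of a 128 Hz recording
rr_q = np.round(rr / resolution) * resolution  # quantised RR series

apen_raw, r = apen(rr)
apen_q, _ = apen(rr_q)
srn = r / resolution                           # assumed form of the figure of merit
print(f"ApEn original = {apen_raw:.3f}, quantised = {apen_q:.3f}, SRN = {srn:.2f}")
```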

  2. TOWARD ERROR ANALYSIS OF LARGE-SCALE FOREST CARBON BUDGETS

    EPA Science Inventory

    Quantification of forest carbon sources and sinks is an important part of national inventories of net greenhouse gas emissions. Several such forest carbon budgets have been constructed, but little effort has been made to analyse the sources of error and how these errors propagate...

  3. Centrifugal ultrafiltration of human serum for improving immunoglobulin A quantification using attenuated total reflectance infrared spectroscopy.

    PubMed

    Elsohaby, Ibrahim; McClure, J Trenton; Riley, Christopher B; Bryanton, Janet; Bigsby, Kathryn; Shaw, R Anthony

    2018-02-20

    Attenuated total reflectance infrared (ATR-IR) spectroscopy is a simple, rapid and cost-effective method for the analysis of serum. However, the complex nature of serum remains a limiting factor to the reliability of this method. We investigated the benefits of coupling centrifugal ultrafiltration with ATR-IR spectroscopy for quantification of human serum IgA concentration. Human serum samples (n = 196) were analyzed for IgA using an immunoturbidimetric assay. ATR-IR spectra were acquired for whole serum samples and for the retentate (residue) reconstituted with saline following 300 kDa centrifugal ultrafiltration. IR-based analytical methods were developed for each of the two spectroscopic datasets, and the accuracies of the two methods were compared. Analytical methods were based upon partial least squares regression (PLSR) calibration models - one with 5 PLS factors (for whole serum) and the second with 9 PLS factors (for the reconstituted retentate). Comparison of the two sets of IR-based analytical results to reference IgA values revealed an improvement in the Pearson correlation coefficient (from 0.66 to 0.76) and a reduction in the root mean squared error of prediction for IR-based IgA concentrations (from 102 to 79 mg/dL) for the ultrafiltration retentate-based method as compared to the method built upon whole serum spectra. Depleting human serum of low molecular weight proteins using a 300 kDa centrifugal filter thus enhances the accuracy of IgA quantification by ATR-IR spectroscopy. Further evaluation and optimization of this general approach may ultimately lead to routine analysis of a range of high molecular-weight analytical targets that are otherwise unsuitable for IR-based analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. EPRI/NRC-RES fire human reliability analysis guidelines.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Stuart R.; Cooper, Susan E.; Najafi, Bijan

    2010-03-01

    During the 1990s, the Electric Power Research Institute (EPRI) developed methods for fire risk analysis to support its utility members in the preparation of responses to Generic Letter 88-20, Supplement 4, 'Individual Plant Examination - External Events' (IPEEE). This effort produced a Fire Risk Assessment methodology for operations at power that was used by the majority of U.S. nuclear power plants (NPPs) in support of the IPEEE program and several NPPs overseas. Although these methods were acceptable for accomplishing the objectives of the IPEEE, EPRI and the U.S. Nuclear Regulatory Commission (NRC) recognized that they required upgrades to support current requirements for risk-informed, performance-based (RI/PB) applications. In 2001, EPRI and the USNRC's Office of Nuclear Regulatory Research (RES) embarked on a cooperative project to improve the state-of-the-art in fire risk assessment to support a new risk-informed environment in fire protection. This project produced a consensus document, NUREG/CR-6850 (EPRI 1011989), entitled 'Fire PRA Methodology for Nuclear Power Facilities' which addressed fire risk for at power operations. NUREG/CR-6850 developed high level guidance on the process for identification and inclusion of human failure events (HFEs) into the fire PRA (FPRA), and a methodology for assigning quantitative screening values to these HFEs. It outlined the initial considerations of performance shaping factors (PSFs) and related fire effects that may need to be addressed in developing best-estimate human error probabilities (HEPs). However, NUREG/CR-6850 did not describe a methodology to develop best-estimate HEPs given the PSFs and the fire-related effects. In 2007, EPRI and RES embarked on another cooperative project to develop explicit guidance for estimating HEPs for human failure events under fire generated conditions, building upon existing human reliability analysis (HRA) methods. This document provides a methodology and guidance for conducting a fire HRA. This process includes identification and definition of post-fire human failure events, qualitative analysis, quantification, recovery, dependency, and uncertainty. This document provides three approaches to quantification: screening, scoping, and detailed HRA. Screening is based on the guidance in NUREG/CR-6850, with some additional guidance for scenarios with long time windows. Scoping is a new approach to quantification developed specifically to support the iterative nature of fire PRA quantification. Scoping is intended to provide less conservative HEPs than screening, but requires fewer resources than a detailed HRA analysis. For detailed HRA quantification, guidance has been developed on how to apply existing methods to assess post-fire HEPs.
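
    For orientation only, the sketch below shows one way a dependence adjustment enters HEP quantification, using the conditional-HEP expressions commonly attributed to THERP; treat the exact formulas and the example numbers as assumptions, not as the EPRI/NRC-RES guidance itself.

```python
# Hedged sketch: adjusting a nominal HEP for dependence between two successive
# post-fire operator actions, then combining them into a joint failure probability.
def conditional_hep(p, dependence):
    """Conditional HEP of the second action given failure of the first."""
    return {
        "zero":     p,
        "low":      (1 + 19 * p) / 20,
        "moderate": (1 + 6 * p) / 7,
        "high":     (1 + p) / 2,
        "complete": 1.0,
    }[dependence]

p1 = 5e-3                          # HEP of the first operator action (illustrative)
p2_nominal = 1e-2                  # nominal HEP of the dependent second action
p2_given_1 = conditional_hep(p2_nominal, "moderate")
joint_failure = p1 * p2_given_1    # probability that both actions fail
print(f"P(2nd fails | 1st failed) = {p2_given_1:.3e}, joint failure = {joint_failure:.3e}")
```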

  5. Online restricted-access material combined with high-performance liquid chromatography and tandem mass spectrometry for the simultaneous determination of vanillin and its vanillic acid metabolite in human plasma.

    PubMed

    Li, De-Qiang; Zhang, Zhi-Qing; Yang, Xiu-Ling; Zhou, Chun-Hua; Qi, Jin-Long

    2016-09-01

    An automated online solid-phase extraction method using restricted-access material combined with high-performance liquid chromatography and tandem mass spectrometry was developed and validated for the simultaneous quantification of vanillin and its vanillic acid metabolite in human plasma. After protein precipitation by methanol, which contained the internal standards, the supernatant of plasma samples was injected into the system, the endogenous large molecules were flushed out, and target analytes were trapped and enriched on the adsorbent, resulting in a minimization of sample complexity and ion suppression effects. Calibration curves were linear over the concentration ranges of 5-1000 ng/mL for vanillin and 10-5000 ng/mL for vanillic acid with a coefficient of determination >0.999 for the determined compounds. The lower limits of quantification of vanillin and vanillic acid were 5.0 and 10.0 ng/mL, respectively. The intra- and inter-run precisions expressed as the relative standard deviation were 2.6-8.6 and 3.2-10.2%, respectively, and the accuracies expressed as the relative error were in the range of -6.1 to 7.3%. Extraction recoveries of analytes were between 89.5 and 97.4%. There was no notable matrix effect for any analyte concentration. The developed method proved to be sensitive, repeatable, and accurate for the quantification of vanillin and its vanillic acid metabolite in human plasma. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
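
    The precision and accuracy figures quoted above are conventionally computed from quality-control replicates as shown in this small sketch (replicate values are invented).

```python
# Hedged sketch: intra-run precision (relative standard deviation, %) and
# accuracy (relative error, %) from QC replicates at one nominal concentration.
import numpy as np

nominal = 250.0                                        # QC nominal concentration (ng/mL)
replicates = np.array([243.0, 261.5, 248.2, 255.9, 246.4])

rsd = 100.0 * np.std(replicates, ddof=1) / np.mean(replicates)
re = 100.0 * (np.mean(replicates) - nominal) / nominal
print(f"precision (RSD) = {rsd:.1f} %, accuracy (RE) = {re:+.1f} %")
```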

  6. Radiofrequency Electromagnetic Radiation and Memory Performance: Sources of Uncertainty in Epidemiological Cohort Studies.

    PubMed

    Brzozek, Christopher; Benke, Kurt K; Zeleke, Berihun M; Abramson, Michael J; Benke, Geza

    2018-03-26

    Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship.
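
    A minimal Monte Carlo sketch of the kind of analysis suggested above (an assumed model, not taken from the cohort studies): a true exposure-outcome relationship is perturbed by an aleatory measurement error and an epistemic systematic bias whose size is itself uncertain, and the spread of the naively estimated effect is examined.

```python
# Hedged sketch: Monte Carlo propagation of random and epistemic exposure errors
# through a simple exposure-outcome regression. All distributions are invented.
import numpy as np

rng = np.random.default_rng(42)
n_sim, n_subjects = 2000, 300
beta_true = -0.02                      # assumed true effect of exposure on memory score

slopes = []
for _ in range(n_sim):
    exposure_true = rng.lognormal(0.0, 0.5, n_subjects)
    bias = rng.normal(0.0, 0.3)                          # epistemic: unknown systematic error
    noise = rng.normal(0.0, 0.2, n_subjects)             # aleatory: measurement variability
    exposure_obs = exposure_true * np.exp(bias + noise)
    outcome = 1.0 + beta_true * exposure_true + rng.normal(0, 0.05, n_subjects)
    slopes.append(np.polyfit(exposure_obs, outcome, 1)[0])   # naive regression estimate

lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"effect estimate: median {np.median(slopes):+.4f}, 95% interval [{lo:+.4f}, {hi:+.4f}]")
```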

  7. Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method

    PubMed Central

    Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.

    2012-01-01

    Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978

  8. 3.8 Proposed approach to uncertainty quantification and sensitivity analysis in the next PA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, Greg; Wohlwend, Jen

    2017-10-02

    This memorandum builds upon Section 3.8 of SRNL (2016) and Flach (2017) by defining key error analysis, uncertainty quantification, and sensitivity analysis concepts and terms, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision.

  9. UMI-tools: modeling sequencing errors in Unique Molecular Identifiers to improve quantification accuracy

    PubMed Central

    2017-01-01

    Unique Molecular Identifiers (UMIs) are random oligonucleotide barcodes that are increasingly used in high-throughput sequencing experiments. Through a UMI, identical copies arising from distinct molecules can be distinguished from those arising through PCR amplification of the same molecule. However, bioinformatic methods to leverage the information from UMIs have yet to be formalized. In particular, sequencing errors in the UMI sequence are often ignored or else resolved in an ad hoc manner. We show that errors in the UMI sequence are common and introduce network-based methods to account for these errors when identifying PCR duplicates. Using these methods, we demonstrate improved quantification accuracy both under simulated conditions and in real iCLIP and single-cell RNA-seq data sets. Reproducibility between iCLIP replicates and single-cell RNA-seq clustering are both improved using our proposed network-based method, demonstrating the value of properly accounting for errors in UMIs. These methods are implemented in the open source UMI-tools software package. PMID:28100584
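
    A simplified sketch of a directional network collapse of the kind UMI-tools implements; the exact linking rule used here, count(a) >= 2*count(b) - 1 for UMIs one mismatch apart, should be treated as an assumption rather than the package's definitive behaviour.

```python
# Hedged sketch: grouping UMIs that differ by one base when the more abundant UMI
# dominates the less abundant one, then counting connected groups as molecules.
from itertools import combinations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def dedup_umis(counts):
    """counts: dict UMI -> read count. Returns the number of inferred molecules."""
    umis = sorted(counts, key=counts.get, reverse=True)
    parent = {u: u for u in umis}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    for a, b in combinations(umis, 2):       # a is at least as abundant as b
        if hamming(a, b) == 1 and counts[a] >= 2 * counts[b] - 1:
            parent[find(b)] = find(a)        # absorb the minor UMI into the major one
    return len({find(u) for u in umis})

counts = {"ATCG": 120, "ATCA": 3, "TTCG": 2, "GGGG": 40}
print("inferred unique molecules:", dedup_umis(counts))   # 2 groups in this toy example
```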

  10. Quantification of strontium in human serum by ICP-MS using alternate analyte-free matrix and its application to a pilot bioequivalence study of two strontium ranelate oral formulations in healthy Chinese subjects.

    PubMed

    Zhang, Dan; Wang, Xiaolin; Liu, Man; Zhang, Lina; Deng, Ming; Liu, Huichen

    2015-01-01

    A rapid, sensitive and accurate ICP-MS method using an alternate analyte-free matrix for calibration standards preparation and a rapid direct dilution procedure for sample preparation was developed and validated for the quantification of exogenous strontium (Sr) from the drug in human serum. Serum was prepared by direct dilution (1:29, v/v) in an acidic solution consisting of nitric acid (0.1%) and germanium (Ge) added as internal standard (IS), to obtain a simple, high-throughput preparation procedure with minimized matrix effects and good repeatability. ICP-MS analysis was performed using collision cell technology (CCT) mode. The alternate matrix method, using distilled water as an alternate analyte-free matrix for the preparation of calibration standards (CS), was used to avoid the influence of endogenous Sr in serum on the quantification. The method was validated in terms of selectivity, carry-over, matrix effects, lower limit of quantification (LLOQ), linearity, precision and accuracy, and stability. Instrumental linearity was verified in the range of 1.00-500 ng/mL, corresponding to a concentration range of 0.0300-15.0 μg/mL in a 50 μL sample of serum matrix and alternate matrix. Intra- and inter-day precision as relative standard deviation (RSD) were less than 8.0% and accuracy as relative error (RE) was within ±3.0%. The method allowed a high sample throughput, and was sensitive and accurate enough for a pilot bioequivalence study in healthy male Chinese subjects following single oral administration of two strontium ranelate formulations containing 2 g strontium ranelate. Copyright © 2014 Elsevier GmbH. All rights reserved.
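
    The back-calculation against an analyte-free calibration matrix with an internal standard reduces to simple linear algebra, sketched below with invented intensity ratios.

```python
# Hedged sketch: linear calibration of Sr/Ge intensity ratios in an analyte-free
# matrix, then back-calculation of serum Sr including the 1:29 dilution.
import numpy as np

cal_conc = np.array([1.0, 5.0, 25.0, 100.0, 250.0, 500.0])        # ng/mL (diluted standards)
cal_ratio = np.array([0.011, 0.052, 0.262, 1.040, 2.570, 5.110])  # Sr/Ge intensity ratio

slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

sample_ratio = 1.83                                  # measured Sr/Ge ratio of a diluted serum sample
conc_diluted = (sample_ratio - intercept) / slope    # ng/mL in the diluted sample
serum_conc = conc_diluted * 30 / 1000.0              # 1:29 (v/v) dilution factor, converted to ug/mL
print(f"serum Sr = {serum_conc:.2f} ug/mL")
```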

  11. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
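
    To make the global sensitivity analysis step concrete, here is a self-contained sketch (not the authors' code) that estimates first-order and total Sobol indices with a standard pick-and-freeze Monte Carlo estimator, using a cheap analytic function (the Ishigami function) as a stand-in for the expensive flow simulation.

```python
# Hedged sketch: Sobol sensitivity indices via Saltelli/Jansen pick-and-freeze
# estimators, applied to the Ishigami function as a surrogate quantity of interest.
import numpy as np

def model(x):
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
n, d = 50_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                            # freeze all inputs except x_i
    fABi = model(ABi)
    S1 = np.mean(fB * (fABi - fA)) / var           # first-order index
    ST = 0.5 * np.mean((fA - fABi) ** 2) / var     # total-effect index
    print(f"x{i + 1}: S1 = {S1:.3f}, ST = {ST:.3f}")
```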

  12. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  13. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    NASA Astrophysics Data System (ADS)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  14. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    DOE PAGES

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; ...

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. In conclusion, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  15. Simple high-performance liquid chromatography method for formaldehyde determination in human tissue through derivatization with 2,4-dinitrophenylhydrazine.

    PubMed

    Yilmaz, Bilal; Asci, Ali; Kucukoglu, Kaan; Albayrak, Mevlut

    2016-08-01

    A simple high-performance liquid chromatography method has been developed for the determination of formaldehyde in human tissue. Formaldehyde was derivatized with 2,4-dinitrophenylhydrazine. It was extracted from human tissue with ethyl acetate by liquid-liquid extraction and analyzed by high-performance liquid chromatography. The calibration curve was linear in the concentration range of 5.0-200 μg/mL. Intra- and interday precision values for formaldehyde in tissue were <6.9%, and accuracy (relative error) was better than 6.5%. The extraction recoveries of formaldehyde from human tissue were between 88 and 98%. The limits of detection and quantification of formaldehyde were 1.5 and 5.0 μg/mL, respectively. Also, this assay was applied to liver samples taken from biopsy material. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
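
    Detection and quantification limits of this kind are commonly estimated from the calibration line as LOD ≈ 3.3·σ/S and LOQ ≈ 10·σ/S, with σ the residual standard deviation and S the slope; the sketch below uses invented calibration data and should not be read as the authors' calculation.

```python
# Hedged sketch: ICH-style LOD and LOQ estimates from a linear calibration of the
# DNPH derivative peak area versus formaldehyde concentration.
import numpy as np

conc = np.array([5.0, 25.0, 50.0, 100.0, 200.0])       # ug/mL
area = np.array([0.52, 2.48, 5.10, 9.85, 20.30])       # peak area (a.u.)

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = np.std(residuals, ddof=2)                      # residual SD (n - 2 degrees of freedom)

print(f"LOD ~ {3.3 * sigma / slope:.1f} ug/mL, LOQ ~ {10 * sigma / slope:.1f} ug/mL")
```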

  16. Impact of time-of-flight PET on quantification errors in MR imaging-based attenuation correction.

    PubMed

    Mehranian, Abolfazl; Zaidi, Habib

    2015-04-01

    Time-of-flight (TOF) PET/MR imaging is an emerging imaging technology with great capabilities offered by TOF to improve image quality and lesion detectability. We assessed, for the first time, the impact of TOF image reconstruction on PET quantification errors induced by MR imaging-based attenuation correction (MRAC) using simulation and clinical PET/CT studies. Standard 4-class attenuation maps were derived by segmentation of CT images of 27 patients undergoing PET/CT examinations into background air, lung, soft-tissue, and fat tissue classes, followed by the assignment of predefined attenuation coefficients to each class. For each patient, 4 PET images were reconstructed: non-TOF and TOF both corrected for attenuation using reference CT-based attenuation correction and the resulting 4-class MRAC maps. The relative errors of the non-TOF and TOF MRAC reconstructions were computed against their respective reference CT-based attenuation-corrected reconstructions. The bias was locally and globally evaluated using volumes of interest (VOIs) defined on lesions and normal tissues and CT-derived tissue classes containing all voxels in a given tissue, respectively. The impact of TOF on reducing the errors induced by metal-susceptibility and respiratory-phase mismatch artifacts was also evaluated using clinical and simulation studies. Our results show that TOF PET can remarkably reduce attenuation correction artifacts and quantification errors in the lungs and bone tissues. Using classwise analysis, it was found that the non-TOF MRAC method results in an error of -3.4% ± 11.5% in the lungs and -21.8% ± 2.9% in bones, whereas its TOF counterpart reduced the errors to -2.9% ± 7.1% and -15.3% ± 2.3%, respectively. The VOI-based analysis revealed that the non-TOF and TOF methods resulted in an average overestimation of 7.5% and 3.9% in or near lung lesions (n = 23) and underestimation of less than 5% for soft tissue and in or near bone lesions (n = 91). Simulation results showed that as TOF resolution improves, artifacts and quantification errors are substantially reduced. TOF PET substantially reduces artifacts and significantly improves the quantitative accuracy of standard MRAC methods. Therefore, MRAC should be less of a concern on future TOF PET/MR scanners with improved timing resolution. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  17. OR14-V-Uncertainty-PD2La Uncertainty Quantification for Nuclear Safeguards and Nondestructive Assay Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Andrew D.; Croft, Stephen; McElroy, Robert Dennis

    2017-08-01

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically provide error bars and also partition total uncertainty into “random” and “systematic” components so that, for example, an error bar can be developed for the total mass estimate in multiple items. Uncertainty Quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods.

  18. Development & validation of a quantitative anti-protective antigen IgG enzyme linked immunosorbent assay for serodiagnosis of cutaneous anthrax.

    PubMed

    Ghosh, N; Gunti, D; Lukka, H; Reddy, B R; Padmaja, Jyothi; Goel, A K

    2015-08-01

    Anthrax caused by Bacillus anthracis is primarily a disease of herbivorous animals, although several mammals are vulnerable to it. ELISA is the most widely accepted serodiagnostic assay for large scale surveillance of cutaneous anthrax. The aims of this study were to develop and evaluate a quantitative ELISA for determination of IgG antibodies against B. anthracis protective antigen (PA) in human cutaneous anthrax cases. Quantitative ELISA was developed using the recombinant PA for coating and standard reference serum AVR801 for quantification. A total of 116 human test and control serum samples were used in the study. The assay was evaluated for its precision, accuracy and linearity. The minimum detection limit and lower limit of quantification of the assay for anti-PA IgG were 3.2 and 4 µg/ml, respectively. The serum samples collected from the anthrax-infected patients were found to have anti-PA IgG concentrations of 5.2 to 166.3 µg/ml. The intra-assay precision per cent CV within an assay and within an operator ranged from 0.99 to 7.4 per cent and 1.7 to 3.9 per cent, respectively. The accuracy of the assay was high, with a per cent error of 6.5-24.1 per cent. The described assay was found to be linear over the range of 4 to 80 ng/ml (R² = 0.9982; slope = 0.9186; intercept = 0.1108). The results suggested that the developed assay could be a useful tool for quantification of the anti-PA IgG response in humans after anthrax infection or vaccination.
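
    A quantitative ELISA of this kind typically back-calculates unknowns from a four-parameter logistic (4PL) standard curve; the sketch below shows that step with invented absorbances and should not be mistaken for the study's actual curve.

```python
# Hedged sketch: fitting a 4PL standard curve to reference-serum dilutions and
# back-calculating an unknown anti-PA IgG concentration from its absorbance.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """a: lower asymptote, b: slope factor, c: inflection point, d: upper asymptote."""
    return d + (a - d) / (1.0 + (x / c) ** b)

std_conc = np.array([4.0, 8.0, 16.0, 32.0, 64.0, 80.0])    # ng/mL reference IgG
std_od = np.array([0.11, 0.22, 0.45, 0.90, 1.60, 1.85])    # measured absorbance

popt, _ = curve_fit(four_pl, std_conc, std_od, p0=(0.05, 1.5, 30.0, 2.5), maxfev=10000)

def invert_4pl(y, a, b, c, d):
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

unknown_od = 0.70
print(f"anti-PA IgG ~ {invert_4pl(unknown_od, *popt):.1f} ng/mL (before dilution correction)")
```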

  19. Radiofrequency Electromagnetic Radiation and Memory Performance: Sources of Uncertainty in Epidemiological Cohort Studies

    PubMed Central

    Zeleke, Berihun M.; Abramson, Michael J.; Benke, Geza

    2018-01-01

    Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship. PMID:29587425

  20. Analysis of laser fluorosensor systems for remote algae detection and quantification

    NASA Technical Reports Server (NTRS)

    Browell, E. V.

    1977-01-01

    The development and performance of single- and multiple-wavelength laser fluorosensor systems for use in the remote detection and quantification of algae are discussed. The appropriate equation for the fluorescence power received by a laser fluorosensor system is derived in detail. Experimental development of a single wavelength system and a four wavelength system, which selectively excites the algae contained in the four primary algal color groups, is reviewed, and test results are presented. A comprehensive error analysis is reported which evaluates the uncertainty in the remote determination of the chlorophyll a concentration contained in algae by single- and multiple-wavelength laser fluorosensor systems. Results of the error analysis indicate that the remote quantification of chlorophyll a by a laser fluorosensor system requires optimum excitation wavelength(s), remote measurement of marine attenuation coefficients, and supplemental instrumentation to reduce uncertainties in the algal fluorescence cross sections.

  1. Model Uncertainty Quantification Methods In Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is in model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.

  2. Quick, sensitive and specific detection and evaluation of quantification of minor variants by high-throughput sequencing.

    PubMed

    Leung, Ross Ka-Kit; Dong, Zhi Qiang; Sa, Fei; Chong, Cheong Meng; Lei, Si Wan; Tsui, Stephen Kwok-Wing; Lee, Simon Ming-Yuen

    2014-02-01

    Minor variants have significant implications in quasispecies evolution, early cancer detection and non-invasive fetal genotyping, but their accurate detection by next-generation sequencing (NGS) is hampered by sequencing errors. We generated sequencing data from mixtures at predetermined ratios in order to provide insight into sequencing errors and variations that arise in practice and cannot be captured by simulation. The information also enables better parameterization of depth of coverage, read quality and heterogeneity, library preparation techniques, and technical repeatability for mathematical modeling, theory development and simulation experimental design. We devised minor variant authentication rules that achieved 100% accuracy in both testing and validation experiments. The rules are free from tedious inspection of alignment accuracy, sequencing read quality or errors introduced by homopolymers. The authentication processes only require minor variants to: (1) have minimum depth of coverage larger than 30; (2) be reported by (a) four or more variant callers, or (b) DiBayes or LoFreq, plus SNVer (or BWA when no results are returned by SNVer), and with the interassay coefficient of variation (CV) no larger than 0.1. Quantification accuracy undermined by sequencing errors could neither be overcome by ultra-deep sequencing nor by recruiting more variant callers to reach a consensus, such that consistent underestimation and overestimation (i.e. low CV) were observed. To accommodate stochastic error and adjust the observed ratio within a specified accuracy, we presented a proof of concept for the use of a double calibration curve for quantification, which provides an important reference towards potential industrial-scale fabrication of calibrants for NGS.
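
    The authentication rules stated above translate directly into a small predicate, sketched here; the caller names follow the text, and this sketch assumes the CV criterion applies only to rule 2b, which is one reading of the text.

```python
# Hedged sketch: encoding the minor-variant authentication rules as a function.
def authenticate_minor_variant(depth, callers, interassay_cv, snver_has_results=True):
    """Return True if a candidate minor variant passes the stated rules."""
    if depth <= 30:                                          # rule 1: minimum depth of coverage
        return False
    if len(callers) >= 4:                                    # rule 2a: four or more callers agree
        return True
    anchor = "SNVer" if snver_has_results else "BWA"         # rule 2b fallback when SNVer is silent
    has_primary = "DiBayes" in callers or "LoFreq" in callers
    return has_primary and anchor in callers and interassay_cv <= 0.1

print(authenticate_minor_variant(120, {"LoFreq", "SNVer"}, 0.05))   # True
print(authenticate_minor_variant(25, {"LoFreq", "SNVer"}, 0.05))    # False (depth too low)
```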

  3. Automated solid-phase extraction-liquid chromatography-tandem mass spectrometry analysis of 6-acetylmorphine in human urine specimens: application for a high-throughput urine analysis laboratory.

    PubMed

    Robandt, P V; Bui, H M; Scancella, J M; Klette, K L

    2010-10-01

    An automated solid-phase extraction-liquid chromatography- tandem mass spectrometry (SPE-LC-MS-MS) method using the Spark Holland Symbiosis Pharma SPE-LC coupled to a Waters Quattro Micro MS-MS was developed for the analysis of 6-acetylmorphine (6-AM) in human urine specimens. The method was linear (R² = 0.9983) to 100 ng/mL, with no carryover at 200 ng/mL. Limits of quantification and detection were found to be 2 ng/mL. Interrun precision calculated as percent coefficient of variation (%CV) and evaluated by analyzing five specimens at 10 ng/mL over nine batches (n = 45) was 3.6%. Intrarun precision evaluated from 0 to 100 ng/mL ranged from 1.0 to 4.4%CV. Other opioids (codeine, morphine, oxycodone, oxymorphone, hydromorphone, hydrocodone, and norcodeine) did not interfere in the detection, quantification, or chromatography of 6-AM or the deuterated internal standard. The quantified values for 41 authentic human urine specimens previously found to contain 6-AM by a validated gas chromatography (GC)-MS method were compared to those obtained by the SPE-LC-MS-MS method. The SPE-LC-MS-MS procedure eliminates the human factors of specimen handling, extraction, and derivatization, thereby reducing labor costs and rework resulting from human error or technique issues. The time required for extraction and analysis was reduced by approximately 50% when compared to a validated 6-AM procedure using manual SPE and GC-MS analysis.

  4. Quantification of susceptibility change at high-concentrated SPIO-labeled target by characteristic phase gradient recognition.

    PubMed

    Zhu, Haitao; Nie, Binbin; Liu, Hua; Guo, Hua; Demachi, Kazuyuki; Sekino, Masaki; Shan, Baoci

    2016-05-01

    Phase map cross-correlation detection and quantification may produce highlighted signal at superparamagnetic iron oxide nanoparticles, and distinguish them from other hypointensities. The method may quantify susceptibility change by performing least squares analysis between a theoretically generated magnetic field template and an experimentally scanned phase image. Because characteristic phase recognition requires the removal of phase wrap and phase background, additional steps of phase unwrapping and filtering may increase the chance of computing error and enlarge the inconsistency among algorithms. To solve this problem, a phase gradient cross-correlation and quantification method is developed by recognizing the characteristic phase gradient pattern instead of the phase image, because the phase gradient operation inherently includes unwrapping and filtering functions. However, few studies have mentioned the detectable limit of currently used phase gradient calculation algorithms. The limit may lead to an underestimation of large magnetic susceptibility change caused by high-concentrated iron accumulation. In this study, mathematical derivation points out the value of maximum detectable phase gradient calculated by the differential chain algorithm in both the spatial and Fourier domains. To break through the limit, a modified quantification method is proposed by using unwrapped forward differentiation for phase gradient generation. The method enlarges the detectable range of phase gradient measurement and avoids the underestimation of magnetic susceptibility. Simulation and phantom experiments were used to quantitatively compare different methods. For in vivo application, MRI scanning was performed on nude mice implanted with iron-labeled human cancer cells. Results validate the limit of detectable phase gradient and the consequent susceptibility underestimation. Results also demonstrate the advantage of unwrapped forward differentiation compared with differential chain algorithms for susceptibility quantification at high-concentrated iron accumulation. Copyright © 2015 Elsevier Inc. All rights reserved.
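
    The detectability ceiling of the differential chain calculation can be seen in a one-dimensional toy example (this sketch only demonstrates the limit; it does not reproduce the paper's unwrapped-forward-differentiation remedy).

```python
# Hedged sketch: a phase gradient computed through the complex conjugate product
# ("differential chain") is confined to (-pi, pi] per voxel, so steeper true
# gradients are aliased to smaller or negative values.
import numpy as np

x = np.arange(64)
for true_grad in (1.0, 2.5, 3.5):                  # rad per voxel; pi ~ 3.14 is the ceiling
    z = np.exp(1j * true_grad * x)                 # complex MR-like signal
    chain = np.angle(z[1:] * np.conj(z[:-1]))      # differential chain phase gradient
    print(f"true {true_grad:.2f} rad/voxel -> chain estimate {chain.mean():+.2f} rad/voxel")
```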

  5. Liquid chromatography-tandem mass spectrometry method of loxoprofen in human plasma.

    PubMed

    Lee, Hye Won; Ji, Hye Young; Sohn, Dong Hwan; Kim, Se-Mi; Lee, Yong Bok; Lee, Hye Suk

    2009-07-01

    A rapid, sensitive and selective liquid chromatography-electrospray ionization mass spectrometric method for the determination of loxoprofen in human plasma was developed. Loxoprofen and ketoprofen (internal standard) were extracted from 20 microL of human plasma sample using ethyl acetate at acidic pH and analyzed on an Atlantis dC(18) column with the mobile phase of methanol:water (75:25, v/v). The analytes were quantified in the selected reaction monitoring mode. The standard curve was linear over the concentration range of 0.1-20 microg/mL with a lower limit of quantification of 0.1 microg/mL. The coefficient of variation and relative error for intra- and inter-assay at four quality control levels were 2.8-5.2 and 4.8-7.0%, respectively. The recoveries of loxoprofen and ketoprofen were 69.7 and 67.6%, respectively. The matrix effects for loxoprofen and ketoprofen were practically absent. This method was successfully applied to the pharmacokinetic study of loxoprofen in humans. (c) 2009 John Wiley & Sons, Ltd.

  6. The effects of cracks on the quantification of the cancellous bone fabric tensor in fossil and archaeological specimens: a simulation study.

    PubMed

    Bishop, Peter J; Clemente, Christofer J; Hocknull, Scott A; Barrett, Rod S; Lloyd, David G

    2017-03-01

    Cancellous bone is very sensitive to its prevailing mechanical environment, and study of its architecture has previously aided interpretations of locomotor biomechanics in extinct animals or archaeological populations. However, quantification of architectural features may be compromised by poor preservation in fossil and archaeological specimens, such as post mortem cracking or fracturing. In this study, the effects of post mortem cracks on the quantification of cancellous bone fabric were investigated through the simulation of cracks in otherwise undamaged modern bone samples. The effect on both scalar (degree of fabric anisotropy, fabric elongation index) and vector (principal fabric directions) variables was assessed through comparing the results of architectural analyses of cracked vs. non-cracked samples. Error was found to decrease as the relative size of the crack decreased, and as the orientation of the crack approached the orientation of the primary fabric direction. However, even in the best-case scenario simulated, error remained substantial, with at least 18% of simulations showing a > 10% error when scalar variables were considered, and at least 6.7% of simulations showing a > 10° error when vector variables were considered. As a 10% (scalar) or 10° (vector) difference is probably too large for reliable interpretation of a fossil or archaeological specimen, these results suggest that cracks should be avoided if possible when analysing cancellous bone architecture in such specimens. © 2016 Anatomical Society.

  7. Quantification of plume opacity by digital photography.

    PubMed

    Du, Ke; Rood, Mark J; Kim, Byung J; Kemme, Michael R; Franek, Bill; Mattison, Kevin

    2007-02-01

    The United States Environmental Protection Agency (USEPA) developed Method 9 to describe how plume opacity can be quantified by humans. However, use of observations by humans introduces subjectivity, and is expensive due to semiannual certification requirements of the observers. The Digital Opacity Method (DOM) was developed to quantify plume opacity at lower cost, with improved objectivity, and to provide a digital record. Photographs of plumes were taken with a calibrated digital camera under specified conditions. Pixel values from those photographs were then interpreted to quantify the plume's opacity using a contrast model and a transmission model. The contrast model determines plume opacity based on pixel values that are related to the change in contrast between two backgrounds that are located behind and next to the plume. The transmission model determines the plume's opacity based on pixel values that are related to radiances from the plume and its background. DOM was field tested with a smoke generator. The individual and average opacity errors of DOM were within the USEPA Method 9 acceptable error limits for both field campaigns. Such results are encouraging and support the use of DOM as an alternative to Method 9.

  8. The application of Gaussian mixture models for signal quantification in MALDI-TOF mass spectrometry of peptides.

    PubMed

    Spainhour, John Christian G; Janech, Michael G; Schwacke, John H; Velez, Juan Carlos Q; Ramakrishnan, Viswanathan

    2014-01-01

    Matrix assisted laser desorption/ionization time-of-flight (MALDI-TOF) coupled with stable isotope standards (SIS) has been used to quantify native peptides. This MALDI-TOF peptide quantification approach has difficulty quantifying samples containing peptides with ion currents in overlapping spectra. In these overlapping spectra the currents sum together, which modifies the peak heights and makes normal SIS estimation problematic. An approach using Gaussian mixtures based on known physical constants to model the isotopic cluster of a known compound is proposed here. The characteristics of this approach are examined for single and overlapping compounds. The approach is compared to two commonly used SIS quantification methods for single compounds, namely the peak intensity method and the Riemann sum area under the curve (AUC) method. For studying the characteristics of the Gaussian mixture method, Angiotensin II, Angiotensin-2-10, and Angiotensin-1-9 and their associated SIS peptides were used. The findings suggest that the Gaussian mixture method has characteristics similar to those of the two comparison methods for estimating the quantity of isolated isotopic clusters for single compounds. All three methods were tested using MALDI-TOF mass spectra collected for peptides of the renin-angiotensin system. The Gaussian mixture method accurately estimated the native to labeled ratio of several isolated angiotensin peptides (5.2% error in ratio estimation) with similar estimation errors to those calculated using peak intensity and Riemann sum AUC methods (5.9% and 7.7%, respectively). For overlapping angiotensin peptides (where the other two methods are not applicable), the estimation error of the Gaussian mixture was 6.8%, which is within the acceptable range. In summary, for single compounds the Gaussian mixture method is equivalent or marginally superior compared to the existing methods of peptide quantification and is capable of quantifying overlapping (convolved) peptides within the acceptable margin of error.
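
    The core idea can be sketched as fitting known isotopic-cluster templates for the native and labelled peptides to an overlapping spectrum and reading the ratio off the fitted amplitudes; peak positions, widths and abundances below are simplified placeholders, and the least-squares step stands in for the paper's Gaussian mixture estimation.

```python
# Hedged sketch: estimating the native/labelled ratio of two overlapping isotopic
# clusters modelled as sums of Gaussians at known m/z positions.
import numpy as np

mz = np.linspace(1045, 1056, 2200)
sigma = 0.08                                     # shared peak width (m/z)
iso = np.array([1.0, 0.55, 0.20])                # assumed isotopic abundance pattern

def cluster(mz0):
    """Template cluster: Gaussians ~1.003 m/z apart with fixed relative abundances."""
    return sum(a * np.exp(-0.5 * ((mz - (mz0 + k * 1.003)) / sigma) ** 2)
               for k, a in enumerate(iso))

native = cluster(1046.5)                         # e.g. the native peptide
labelled = cluster(1049.5)                       # its stable-isotope-labelled analogue

# Synthetic overlapping spectrum with a known 0.8 : 1.0 native:labelled ratio.
spectrum = 0.8 * native + 1.0 * labelled
spectrum += np.random.default_rng(1).normal(0, 0.01, mz.size)

A = np.column_stack([native, labelled])
coef, *_ = np.linalg.lstsq(A, spectrum, rcond=None)
print(f"estimated native/labelled ratio = {coef[0] / coef[1]:.3f}")   # expect ~0.8
```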

  9. TRIC: an automated alignment strategy for reproducible protein quantification in targeted proteomics

    PubMed Central

    Röst, Hannes L.; Liu, Yansheng; D’Agostino, Giuseppe; Zanella, Matteo; Navarro, Pedro; Rosenberger, George; Collins, Ben C.; Gillet, Ludovic; Testa, Giuseppe; Malmström, Lars; Aebersold, Ruedi

    2016-01-01

    Large scale, quantitative proteomic studies have become essential for the analysis of clinical cohorts, large perturbation experiments and systems biology studies. While next-generation mass spectrometric techniques such as SWATH-MS have substantially increased throughput and reproducibility, ensuring consistent quantification of thousands of peptide analytes across multiple LC-MS/MS runs remains a challenging and laborious manual process. To produce highly consistent and quantitatively accurate proteomics data matrices in an automated fashion, we have developed the TRIC software which utilizes fragment ion data to perform cross-run alignment, consistent peak-picking and quantification for high throughput targeted proteomics. TRIC uses a graph-based alignment strategy based on non-linear retention time correction to integrate peak elution information from all LC-MS/MS runs acquired in a study. When compared to state-of-the-art SWATH-MS data analysis, the algorithm was able to reduce the identification error by more than 3-fold at constant recall, while correcting for highly non-linear chromatographic effects. On a pulsed-SILAC experiment performed on human induced pluripotent stem (iPS) cells, TRIC was able to automatically align and quantify thousands of light and heavy isotopic peak groups and substantially increased the quantitative completeness and biological information in the data, providing insights into protein dynamics of iPS cells. Overall, this study demonstrates the importance of consistent quantification in highly challenging experimental setups, and proposes an algorithm to automate this task, constituting the last missing piece in a pipeline for automated analysis of massively parallel targeted proteomics datasets. PMID:27479329

  10. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, for example, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.

  11. Systematic Errors in Peptide and Protein Identification and Quantification by Modified Peptides*

    PubMed Central

    Bogdanow, Boris; Zauber, Henrik; Selbach, Matthias

    2016-01-01

    The principle of shotgun proteomics is to use peptide mass spectra in order to identify corresponding sequences in a protein database. The quality of peptide and protein identification and quantification critically depends on the sensitivity and specificity of this assignment process. Many peptides in proteomic samples carry biochemical modifications, and a large fraction of unassigned spectra arise from modified peptides. Spectra derived from modified peptides can erroneously be assigned to wrong amino acid sequences. However, the impact of this problem on proteomic data has not yet been investigated systematically. Here we use combinations of different database searches to show that modified peptides can be responsible for 20–50% of false positive identifications in deep proteomic data sets. These false positive hits are particularly problematic as they have significantly higher scores and higher intensities than other false positive matches. Furthermore, these wrong peptide assignments lead to hundreds of false protein identifications and systematic biases in protein quantification. We devise a “cleaned search” strategy to address this problem and show that this considerably improves the sensitivity and specificity of proteomic data. In summary, we show that modified peptides cause systematic errors in peptide and protein identification and quantification and should therefore be considered to further improve the quality of proteomic data annotation. PMID:27215553

  12. Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles

    PubMed Central

    Cortés, Camilo; Unzueta, Luis; de los Reyes-Guzmán, Ana; Ruiz, Oscar E.; Flórez, Julián

    2016-01-01

    In Robot-Assisted Rehabilitation (RAR), the accurate estimation of the patient's limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address these limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. The GH joint angles are then estimated by combining the estimated marker poses and the exoskeleton forward kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch between the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method's accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, the method's accuracy is adequate for RAR. PMID:27403044

  13. Generating standardized image data for testing and calibrating quantification of volumes, surfaces, lengths, and object counts in fibrous and porous materials using X-ray microtomography.

    PubMed

    Jiřík, Miroslav; Bartoš, Martin; Tomášek, Petr; Malečková, Anna; Kural, Tomáš; Horáková, Jana; Lukáš, David; Suchý, Tomáš; Kochová, Petra; Hubálek Kalbáčová, Marie; Králíčková, Milena; Tonar, Zbyněk

    2018-06-01

    Quantification of the structure and composition of biomaterials using micro-CT requires image segmentation due to the low contrast and overlapping radioopacity of biological materials. The amount of bias introduced by segmentation procedures is generally unknown. We aim to develop software that generates three-dimensional models of fibrous and porous structures with known volumes, surfaces, lengths, and object counts in fibrous materials, and to provide a software tool that calibrates quantitative micro-CT assessments. Virtual image stacks were generated using the newly developed software TeIGen, enabling the simulation of micro-CT scans of unconnected tubes, connected tubes, and porosities. A realistic noise generator was incorporated. Forty image stacks were evaluated using micro-CT, and the error between the known true and estimated data was quantified. Starting with geometric primitives, the error of the numerical estimation of surfaces and volumes was eliminated, thereby enabling the quantification of volumes and surfaces of colliding objects. Analysis of the sensitivity of the thresholding to the parameters of the generated testing image sets revealed the effects of decreasing resolution and increasing noise on the accuracy of the micro-CT quantification. The size of the error increased with decreasing resolution once the voxel size exceeded 1/10 of the typical object size, which simulated the effect of the smallest details that could still be reliably quantified. Open-source software for calibrating quantitative micro-CT assessments by producing and saving virtually generated image data sets with known morphometric data was made freely available to researchers involved in the morphometry of three-dimensional fibrillar and porous structures in micro-CT scans. © 2018 Wiley Periodicals, Inc.

  14. In vivo quantification of brain metabolites by 1H-MRS using water as an internal standard.

    PubMed

    Christiansen, P; Henriksen, O; Stubgaard, M; Gideon, P; Larsson, H B

    1993-01-01

    The reliability of absolute quantification of average metabolite concentrations in the human brain in vivo by 1H-MRS, using the fully relaxed water signal as an internal standard, was tested in a number of in vitro as well as in vivo measurements. The experiments were carried out on a SIEMENS HELICON SP 63/84 whole-body MR scanner operating at 1.5 T using a STEAM sequence. In vitro studies indicate a very high correlation between metabolite signals (area under peaks) and concentration (R = 0.99), as well as between metabolite signals and the volume of the selected voxel (R = 1.00). The error in quantification of N-acetyl aspartate (NAA) concentration was about 1-2 mM (6-12%). Good linearity between the water signal and selected voxel size was also seen in vivo. The same was true for the studied metabolites: N-acetyl aspartate (NAA), creatine/phosphocreatine (Cr/PCr), and choline (Cho). Calculated average concentrations of NAA, Cr/PCr, and Cho in the occipital lobe of the brain in five healthy volunteers were (mean +/- 1 SD) 11.6 +/- 1.3 mM, 7.6 +/- 1.4 mM, and 1.7 +/- 0.5 mM. The results indicate that the method presented offers reasonable estimation of metabolite concentrations in the brain in vivo and is therefore useful in clinical research.
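
    For illustration, a minimal Python sketch of the internal-standard calculation implied above: the metabolite concentration follows from the ratio of the metabolite peak to the fully relaxed water peak, scaled by the contributing proton counts and an assumed brain water concentration. All signal areas and the water-content value are hypothetical, not data from this study.

      # Minimal sketch: metabolite concentration from the metabolite/water peak ratio,
      # scaled by proton counts. All values below are illustrative assumptions.
      WATER_CONC_MM = 55.5e3 * 0.7        # ~70% tissue water content (assumed), in mM
      S_water = 1.0e6                     # integrated water peak area (arbitrary units)
      S_naa = 450.0                       # integrated NAA CH3 peak area (arbitrary units)
      PROTONS_WATER, PROTONS_NAA = 2, 3   # protons contributing to each peak

      naa_mM = (S_naa / S_water) * (PROTONS_WATER / PROTONS_NAA) * WATER_CONC_MM
      print(f"estimated [NAA] ~ {naa_mM:.1f} mM")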

  15. Tomographic brain imaging with nucleolar detail and automatic cell counting

    NASA Astrophysics Data System (ADS)

    Hieber, Simone E.; Bikis, Christos; Khimchenko, Anna; Schweighauser, Gabriel; Hench, Jürgen; Chicherova, Natalia; Schulz, Georg; Müller, Bert

    2016-09-01

    Brain tissue evaluation is essential for gaining in-depth insight into its diseases and disorders. Imaging the human brain in three dimensions has always been a challenge on the cell level. In vivo methods lack spatial resolution, and optical microscopy has a limited penetration depth. Herein, we show that hard X-ray phase tomography can visualise a volume of up to 43 mm3 of human post mortem or biopsy brain samples, by demonstrating the method on the cerebellum. We automatically identified 5,000 Purkinje cells, with an error of less than 5%, within their layer and determined the local surface density to be 165 cells per mm2 on average. Moreover, we highlight that the three-dimensional data allow for the segmentation of sub-cellular structures, including the dendritic tree and Purkinje cell nucleoli, without dedicated staining. The method suggests that automatic cell feature quantification of human tissues is feasible in phase tomograms obtained with isotropic resolution in a label-free manner.

  16. Application of synchrotron radiation computed microtomography for quantification of bone microstructure in human and rat bones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parreiras Nogueira, Liebert; Barroso, Regina Cely; Pereira de Almeida, Andre

    2012-05-17

    This work aims to evaluate histomorphometric quantification by synchrotron radiation computed microtomography in bones of human and rat specimens. Bone specimens are classified as normal and pathological (for human samples) and as irradiated and non-irradiated (for rat samples). Human bones are specimens which were affected by some injury, or not. Rat bones are specimens which were irradiated, simulating radiotherapy procedures, or not. Images were obtained on the SYRMEP beamline at the Elettra Synchrotron Laboratory in Trieste, Italy. The system generated 14 μm tomographic images. The quantification of bone structures was performed directly on the 3D rendered images using home-made software. The resolution achieved was excellent, which facilitated the quantification of bone microstructures.

  17. Detection and quantification of soil-transmitted helminths in environmental samples: A review of current state-of-the-art and future perspectives.

    PubMed

    Amoah, Isaac Dennis; Singh, Gulshan; Stenström, Thor Axel; Reddy, Poovendhree

    2017-05-01

    It is estimated that over a billion people are infected with soil-transmitted helminths (STHs) globally, with the majority of infections occurring in tropical and subtropical regions of the world. The roundworm (Ascaris lumbricoides), whipworm (Trichuris trichiura), and hookworms (Ancylostoma duodenale and Necator americanus) are the main species infecting people. These infections are mostly acquired through exposure to faecally contaminated water, soil or food, and the risk of infection increases with wastewater and sludge reuse in agriculture. Different methods have been developed for the detection and quantification of STH eggs in environmental samples. However, the lack of a universally accepted technique creates a challenge for comparative assessments of helminth egg concentrations, both in different sample matrices and between locations. This review presents a comparison of reported methodologies for the detection of STH eggs, an assessment of the relative performance of available detection methods, and a discussion of new emerging techniques that could be applied for detection and quantification. It is based on a literature search using PubMed and Science Direct considering all geographical locations. Original research articles were selected based on their methodology and results sections. Methods reported in these articles were grouped into conventional, molecular and emerging techniques, and the main steps in each method were then compared and discussed. The inclusion of a dissociation step aimed at detaching helminth eggs from particulate matter was found to improve the recovery of eggs. Additionally, the selection and application of flotation solutions that take into account the relative densities of the eggs of different STH species also results in higher egg recovery. In general, conventional methods were shown to be laborious, time-consuming and prone to human error. The alternative use of nucleic acid-based techniques has improved the sensitivity of detection and made species-specific identification possible. However, these nucleic acid-based methods are expensive and less suitable in regions with limited resources and skills. The loop-mediated isothermal amplification method shows promise for application in these settings due to its simplicity and use of basic equipment. In addition, the development of imaging software for the detection and quantification of STHs shows promise to further reduce the human error associated with the analysis of environmental samples. It may be concluded that there is a need to comparatively assess the performance of different methods to determine their applicability in different settings as well as for use with different sample matrices (wastewater, sludge, compost, soil, vegetables, etc.). Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  18. Measurement uncertainty associated with chromatic confocal profilometry for 3D surface texture characterization of natural human enamel.

    PubMed

    Mullan, F; Bartlett, D; Austin, R S

    2017-06-01

    To investigate the measurement performance of a chromatic confocal profilometer for quantification of the surface texture of natural human enamel in vitro. Contributions to the measurement uncertainty from all potential sources of measurement error using a chromatic confocal profilometer and surface metrology software were quantified using a series of surface metrology calibration artifacts and pre-worn enamel samples. The 3D surface texture analysis protocol was optimized across 0.04 mm2 of natural and unpolished enamel undergoing dietary acid erosion (pH 3.2, titratable acidity 41.3 mmol OH/L). Flatness deviations due to the x, y stage mechanical movement were the major contribution to the measurement uncertainty, with maximum Sz flatness errors of 0.49 μm, whereas measurement noise, non-linearities in x, y and z, and enamel sample dimensional instability contributed minimal errors. The measurement errors were propagated into an uncertainty budget following a Type B uncertainty evaluation in order to calculate the combined standard uncertainty (uc), which was ±0.28 μm. Statistically significant increases in the median (IQR) roughness (Sa) of the polished samples occurred after 15 (+0.17 (0.13) μm), 30 (+0.12 (0.09) μm) and 45 (+0.18 (0.15) μm) min of erosion (P<0.001 vs. baseline). In contrast, natural unpolished enamel samples revealed a statistically significant decrease in Sa roughness of -0.14 (0.34) μm only after 45 min of erosion (P<0.05 vs. baseline). The main contribution to measurement uncertainty using chromatic confocal profilometry was from flatness deviations; however, by optimizing measurement protocols, the profilometer successfully characterized surface texture changes in enamel from erosive wear in vitro. Copyright © 2017 The Academy of Dental Materials. All rights reserved.
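
    A minimal Python sketch of a Type B uncertainty budget of this kind follows; the dominant flatness term uses the reported maximum Sz error with an assumed rectangular distribution, and the three smaller components are placeholder values, not the study's actual budget entries.

      import math

      # Minimal Type B uncertainty budget sketch. The flatness term assumes a rectangular
      # distribution over the reported maximum Sz error; the other entries are assumed.
      components_um = {
          "stage flatness (0.49 um max, rectangular)": 0.49 / math.sqrt(3),
          "measurement noise": 0.01,
          "x/y/z non-linearity": 0.01,
          "sample dimensional instability": 0.01,
      }
      u_c = math.sqrt(sum(u ** 2 for u in components_um.values()))
      print(f"combined standard uncertainty u_c ~ +/-{u_c:.2f} um")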

  19. Map misclassifications can cause large errors in landscape pattern indices: examples from habitat fragmentation.

    Treesearch

    William T. Langford; Sarah E. Gergel; Thomas G. Dietterich; Warren Cohen

    2006-01-01

    Although habitat fragmentation is one of the greatest threats to biodiversity worldwide, virtually no attention has been paid to the quantification of error in fragmentation statistics. Landscape pattern indices (LPIs), such as mean patch size and number of patches, are routinely used to quantify fragmentation and are often calculated using remote sensing imagery that...

  20. Multiscale Modeling and Uncertainty Quantification for Nuclear Fuel Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estep, Donald; El-Azab, Anter; Pernice, Michael

    2017-03-23

    In this project, we will address the challenges associated with constructing high fidelity multiscale models of nuclear fuel performance. We (*) propose a novel approach for coupling mesoscale and macroscale models, (*) devise efficient numerical methods for simulating the coupled system, and (*) devise and analyze effective numerical approaches for error and uncertainty quantification for the coupled multiscale system. As an integral part of the project, we will carry out analysis of the effects of upscaling and downscaling, investigate efficient methods for stochastic sensitivity analysis of the individual macroscale and mesoscale models, and carry out a posteriori error analysis for computed results. We will pursue development and implementation of solutions in software used at Idaho National Laboratory on models of interest to the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program.

  1. Bayesian Treatment of Uncertainty in Environmental Modeling: Optimization, Sampling and Data Assimilation Using the DREAM Software Package

    NASA Astrophysics Data System (ADS)

    Vrugt, J. A.

    2012-12-01

    In the past decade, much progress has been made in the treatment of uncertainty in earth systems modeling. Whereas initial approaches focused mostly on the quantification of parameter and predictive uncertainty, recent methods attempt to disentangle the effects of parameter, forcing (input) data, model structural and calibration data errors. In this talk I will highlight some of our recent work involving the theory, concepts and applications of Bayesian parameter and/or state estimation. In particular, new methods for sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC) simulation will be presented, with emphasis on massively parallel distributed computing and quantification of model structural errors. The theoretical and numerical developments will be illustrated using model-data synthesis problems in hydrology, hydrogeology and geophysics.

  2. Toward computer-aided emphysema quantification on ultralow-dose CT: reproducibility of ventrodorsal gravity effect measurement and correction

    NASA Astrophysics Data System (ADS)

    Wiemker, Rafael; Opfer, Roland; Bülow, Thomas; Rogalla, Patrik; Steinberg, Amnon; Dharaiya, Ekta; Subramanyan, Krishna

    2007-03-01

    Computer-aided quantification of emphysema in high-resolution CT data is based on identifying low-attenuation areas below clinically determined Hounsfield thresholds. However, emphysema quantification is prone to error, since a gravity effect can influence the mean attenuation of healthy lung parenchyma by up to +/- 50 HU between ventral and dorsal lung areas. Comparing ultra-low-dose (7 mAs) and standard-dose (70 mAs) CT scans of each patient, we show that the measurement of the ventrodorsal gravity effect is patient-specific but reproducible. It can be measured and corrected in an unsupervised way using robust fitting of a linear function.
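
    For illustration, a minimal Python sketch of measuring and removing a linear ventrodorsal attenuation gradient; the positions and HU values are synthetic, and ordinary least squares stands in for the robust fitting used in the paper.

      import numpy as np

      # Minimal sketch: estimate and remove a linear ventrodorsal attenuation gradient from
      # mean parenchymal attenuation samples. Positions and HU values are synthetic.
      rng = np.random.default_rng(1)
      y = np.linspace(0.0, 1.0, 20)                          # 0 = ventral, 1 = dorsal
      hu = -880.0 + 50.0 * y + rng.normal(0.0, 5.0, y.size)  # synthetic gravity gradient

      slope, intercept = np.polyfit(y, hu, 1)                # robust fitting could replace this
      hu_corrected = hu - slope * (y - y.mean())             # flatten the gradient, keep the mean

      print(f"estimated ventrodorsal gradient: {slope:.1f} HU across the lung")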

  3. Variation of haemoglobin extinction coefficients can cause errors in the determination of haemoglobin concentration measured by near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Kim, J. G.; Liu, H.

    2007-10-01

    Near-infrared spectroscopy or imaging has been extensively applied to various biomedical applications, since it can detect the concentrations of oxyhaemoglobin (HbO2), deoxyhaemoglobin (Hb) and total haemoglobin (Hbtotal) from deep tissues. To quantify the concentrations of these haemoglobin derivatives, the extinction coefficient values of HbO2 and Hb have to be employed. However, it has not been well recognized among researchers that small differences in extinction coefficients can cause significant errors in quantifying the concentrations of haemoglobin derivatives. In this study, we derived equations to estimate the errors in haemoglobin derivatives caused by variation of the haemoglobin extinction coefficients. To validate our error analysis, we performed experiments using liquid tissue phantoms containing 1% Intralipid in a phosphate-buffered saline solution. Pure oxygen was introduced into the solution to examine oxygenation changes in the phantom, and 3 mL of human blood was added twice to show the changes in [Hbtotal]. The error calculation showed that even a small variation (0.01 cm-1 mM-1) in the extinction coefficients can produce appreciable relative errors in the quantification of Δ[HbO2], Δ[Hb] and Δ[Hbtotal]. We have also observed that the error of Δ[Hbtotal] is not always larger than those of Δ[HbO2] and Δ[Hb]. This study concludes that we need to be aware of any variation in haemoglobin extinction coefficients, which could result from changes in temperature, and to use the corresponding animal's haemoglobin extinction coefficients in animal experiments, in order to obtain more accurate values of Δ[HbO2], Δ[Hb] and Δ[Hbtotal] from in vivo tissue measurements.
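
    A minimal Python sketch of the error mechanism described above: concentration changes are recovered from optical density changes at two wavelengths via the modified Beer-Lambert law, and the extinction coefficients are then perturbed by 0.01 cm-1 mM-1 to show the induced quantification error. The coefficients, path length and optical density values are assumed illustrative numbers, not the study's data.

      import numpy as np

      # Minimal sketch: solve for concentration changes at two wavelengths, then perturb
      # the extinction coefficients to see the induced error. All values are assumed.
      eps = np.array([[0.30, 1.05],       # 750 nm: [HbO2, Hb] extinction (cm^-1 mM^-1)
                      [1.15, 0.78]])      # 830 nm: [HbO2, Hb]
      path_len = 1.0                      # effective optical path length (cm), assumed
      d_od = np.array([0.020, 0.035])     # measured optical density changes (assumed)

      d_conc = np.linalg.solve(eps * path_len, d_od)              # [dHbO2, dHb] in mM
      d_conc_pert = np.linalg.solve((eps + 0.01) * path_len, d_od)

      rel_err_pct = (d_conc_pert - d_conc) / d_conc * 100.0
      print("d[HbO2], d[Hb] (mM):", np.round(d_conc, 4))
      print("relative errors from the coefficient variation (%):", np.round(rel_err_pct, 1))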

  4. Itô-SDE MCMC method for Bayesian characterization of errors associated with data limitations in stochastic expansion methods for uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Arnst, M.; Abello Álvarez, B.; Ponthot, J.-P.; Boman, R.

    2017-11-01

    This paper is concerned with the characterization and the propagation of errors associated with data limitations in polynomial-chaos-based stochastic methods for uncertainty quantification. Such an issue can arise in uncertainty quantification when only a limited amount of data is available. When the available information does not suffice to accurately determine the probability distributions that must be assigned to the uncertain variables, the Bayesian method for assigning these probability distributions becomes attractive because it allows the stochastic model to account explicitly for insufficiency of the available information. In previous work, such applications of the Bayesian method had already been implemented by using the Metropolis-Hastings and Gibbs Markov Chain Monte Carlo (MCMC) methods. In this paper, we present an alternative implementation, which uses an alternative MCMC method built around an Itô stochastic differential equation (SDE) that is ergodic for the Bayesian posterior. We draw together from the mathematics literature a number of formal properties of this Itô SDE that lend support to its use in the implementation of the Bayesian method, and we describe its discretization, including the choice of the free parameters, by using the implicit Euler method. We demonstrate the proposed methodology on a problem of uncertainty quantification in a complex nonlinear engineering application relevant to metal forming.
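
    As a minimal Python sketch of SDE-based sampling of this kind, the following uses an explicit Euler-Maruyama discretization of an overdamped Langevin equation whose invariant distribution is the target density. The Gaussian target, step size and step count are assumptions for illustration; the paper itself uses an implicit Euler discretization of an Itô SDE ergodic for the Bayesian posterior.

      import numpy as np

      # Minimal sketch: explicit Euler-Maruyama discretization of the overdamped Langevin
      # equation dX = -grad U(X) dt + sqrt(2) dW, whose invariant density is prop. to exp(-U).
      def grad_U(x, mu=1.0, sigma=0.5):
          # negative log-density gradient of an assumed N(mu, sigma^2) "posterior"
          return (x - mu) / sigma ** 2

      rng = np.random.default_rng(0)
      dt, n_steps = 1.0e-3, 200_000
      x, samples = 0.0, np.empty(n_steps)
      for i in range(n_steps):
          x += -grad_U(x) * dt + np.sqrt(2.0 * dt) * rng.normal()
          samples[i] = x

      burn = n_steps // 10
      print("sample mean:", samples[burn:].mean(), "sample sd:", samples[burn:].std())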

  5. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malik, Afshan N., E-mail: afshan.malik@kcl.ac.uk; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana

    2011-08-19

    Highlights: Mitochondrial dysfunction is central to many diseases of oxidative stress; 95% of the mitochondrial genome is duplicated in the nuclear genome; dilution of untreated genomic DNA leads to dilution bias; unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real-time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved, and we have identified that current methods have at least one of the following three problems: (1) as much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA, which are repetitive and/or highly variable, for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes causes a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome, a unique single-copy region in the nuclear genome, and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
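
    For illustration, a minimal Python sketch of the Mt/N calculation once unique primers and template pretreatment are in place; the Cq values and the 100% amplification efficiency are hypothetical assumptions, not study data.

      # Minimal sketch: relative mitochondrial DNA content (Mt/N) from qPCR quantification
      # cycles, assuming unique (non-pseudogene) primers, pretreated template and ~100%
      # amplification efficiency. The Cq values below are illustrative.
      cq_mito = 16.2       # Cq of the unique mitochondrial amplicon
      cq_nuclear = 24.9    # Cq of the single-copy nuclear amplicon

      mt_over_n = 2 ** (cq_nuclear - cq_mito)
      print(f"Mt/N ~ {mt_over_n:.0f} mitochondrial copies per nuclear genome copy")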

  6. Recommendations for Improving Identification and Quantification in Non-Targeted, GC-MS-Based Metabolomic Profiling of Human Plasma

    PubMed Central

    Wang, Hanghang; Muehlbauer, Michael J.; O’Neal, Sara K.; Newgard, Christopher B.; Hauser, Elizabeth R.; Shah, Svati H.

    2017-01-01

    The field of metabolomics as applied to human disease and health is rapidly expanding. In recent efforts of metabolomics research, greater emphasis has been placed on quality control and method validation. In this study, we report an experience with quality control and a practical application of method validation. Specifically, we sought to identify and modify steps in gas chromatography-mass spectrometry (GC-MS)-based, non-targeted metabolomic profiling of human plasma that could influence metabolite identification and quantification. Our experimental design included two studies: (1) a limiting-dilution study, which investigated the effects of dilution on analyte identification and quantification; and (2) a concentration-specific study, which compared the optimal plasma extract volume established in the first study with the volume used in the current institutional protocol. We confirmed that contaminants, concentration, repeatability and intermediate precision are major factors influencing metabolite identification and quantification. In addition, we established methods for improved metabolite identification and quantification, which were summarized to provide recommendations for experimental design of GC-MS-based non-targeted profiling of human plasma. PMID:28841195

  7. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
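
    A minimal Python sketch of the central point, under an assumed linear, additive-Gaussian error model with illustrative parameter values: bias, mean square error and correlation follow directly from the error-model parameters, as the Monte Carlo check below confirms.

      import numpy as np

      # Minimal sketch: metrics derived from the parameters of the linear error model
      #   y = a + b * x + eps,  eps ~ N(0, s^2).  All parameter values are assumed.
      rng = np.random.default_rng(0)
      a, b, s = 0.3, 0.9, 0.5
      mu_x, sd_x = 2.0, 1.0

      x = rng.normal(mu_x, sd_x, 200_000)
      y = a + b * x + rng.normal(0.0, s, x.size)

      # metrics predicted from the error-model parameters alone
      bias_pred = a + (b - 1.0) * mu_x
      mse_pred = bias_pred ** 2 + (b - 1.0) ** 2 * sd_x ** 2 + s ** 2
      corr_pred = b * sd_x / np.sqrt(b ** 2 * sd_x ** 2 + s ** 2)

      print("bias:", bias_pred, "vs empirical", np.mean(y - x))
      print("MSE :", mse_pred, "vs empirical", np.mean((y - x) ** 2))
      print("corr:", corr_pred, "vs empirical", np.corrcoef(x, y)[0, 1])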

  8. Quantification Bias Caused by Plasmid DNA Conformation in Quantitative Real-Time PCR Assay

    PubMed Central

    Lin, Chih-Hui; Chen, Yu-Chieh; Pan, Tzu-Ming

    2011-01-01

    Quantitative real-time PCR (qPCR) is the gold standard for the quantification of specific nucleic acid sequences. However, a serious concern has been revealed in a recent report: supercoiled plasmid standards cause significant over-estimation in qPCR quantification. In this study, we investigated the effect of plasmid DNA conformation on the quantification of DNA and the efficiency of qPCR. Our results suggest that plasmid DNA conformation has significant impact on the accuracy of absolute quantification by qPCR. DNA standard curves shifted significantly among plasmid standards with different DNA conformations. Moreover, the choice of DNA measurement method and plasmid DNA conformation may also contribute to the measurement error of DNA standard curves. Due to the multiple effects of plasmid DNA conformation on the accuracy of qPCR, efforts should be made to assure the highest consistency of plasmid standards for qPCR. Thus, we suggest that the conformation, preparation, quantification, purification, handling, and storage of standard plasmid DNA should be described and defined in the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) to assure the reproducibility and accuracy of qPCR absolute quantification. PMID:22194997

  9. Computational analysis of PET by AIBL (CapAIBL): a cloud-based processing pipeline for the quantification of PET images

    NASA Astrophysics Data System (ADS)

    Bourgeat, Pierrick; Dore, Vincent; Fripp, Jurgen; Villemagne, Victor L.; Rowe, Chris C.; Salvado, Olivier

    2015-03-01

    With the advances in PET tracers for β-Amyloid (Aβ) detection in neurodegenerative diseases, automated quantification methods are desirable. For clinical use, there is a great need for a PET-only quantification method, as MR images are not always available. In this paper, we validate a previously developed PET-only quantification method against MR-based quantification using 6 tracers: 18F-Florbetaben (N=148), 18F-Florbetapir (N=171), 18F-NAV4694 (N=47), 18F-Flutemetamol (N=180), 11C-PiB (N=381) and 18F-FDG (N=34). The results show an overall mean absolute percentage error of less than 5% for each tracer. The method has been implemented as a remote service called CapAIBL (http://milxcloud.csiro.au/capaibl). PET images are uploaded to a cloud platform where they are spatially normalised to a standard template and quantified. A report containing global as well as local quantification, along with a surface projection of the β-Amyloid deposition, is automatically generated at the end of the pipeline and emailed to the user.

  10. An automated construction of error models for uncertainty quantification and model calibration

    NASA Astrophysics Data System (ADS)

    Josset, L.; Lunati, I.

    2015-12-01

    To reduce the computational cost of stochastic predictions, it is common practice to rely on approximate flow solvers (or «proxies»), which provide an inexact but computationally inexpensive response [1,2]. Error models can be constructed to correct the proxy response: based on a learning set of realizations for which both exact and proxy simulations are performed, a transformation is sought to map proxy responses onto exact responses. Once the error model is constructed, a prediction of the exact response is obtained at the cost of a proxy simulation for any new realization. Despite its effectiveness [2,3], the methodology relies on several user-defined parameters, which impact the accuracy of the predictions. To achieve a fully automated construction, we propose a novel methodology based on an iterative scheme: we first initialize the error model with a small training set of realizations; then, at each iteration, we add a new realization both to improve the model and to evaluate its performance. More specifically, at each iteration we use the responses predicted by the updated model to identify the realizations that need to be considered to compute the quantity of interest. Another user-defined parameter is the number of dimensions of the response spaces between which the mapping is sought. To identify the space dimensions that optimally balance mapping accuracy and the risk of overfitting, we follow a leave-one-out cross-validation. Also, the definition of a stopping criterion is central to an automated construction. We use a stability measure based on bootstrap techniques to stop the iterative procedure when the iterative model has converged. The methodology is illustrated with two test cases in which an inverse problem has to be solved, and we assess the performance of the method. We show that an iterative scheme is crucial to increase the applicability of the approach. [1] Josset, L., and I. Lunati, Local and global error models for improving uncertainty quantification, Mathematical Geosciences, 2013. [2] Josset, L., D. Ginsbourger, and I. Lunati, Functional error modeling for uncertainty quantification in hydrogeology, Water Resources Research, 2015. [3] Josset, L., V. Demyanov, A. H. Elsheikh, and I. Lunati, Accelerating Monte Carlo Markov chains with proxy and error models, Computers & Geosciences, 2015 (in press).

  11. Automated Quantification of Pneumothorax in CT

    PubMed Central

    Do, Synho; Salvaggio, Kristen; Gupta, Supriya; Kalra, Mannudeep; Ali, Nabeel U.; Pien, Homer

    2012-01-01

    An automated, computer-aided diagnosis (CAD) algorithm for the quantification of pneumothoraces from Multidetector Computed Tomography (MDCT) images has been developed. Algorithm performance was evaluated through comparison to manual segmentation by expert radiologists. A combination of two-dimensional and three-dimensional processing techniques was incorporated to reduce required processing time by two-thirds (as compared to similar techniques). Volumetric measurements on relative pneumothorax size were obtained and the overall performance of the automated method shows an average error of just below 1%. PMID:23082091

  12. Computational solution verification and validation applied to a thermal model of a ruggedized instrumentation package

    DOE PAGES

    Scott, Sarah Nicole; Templeton, Jeremy Alan; Hough, Patricia Diane; ...

    2014-01-01

    This study details a methodology for quantification of errors and uncertainties of a finite element heat transfer model applied to a Ruggedized Instrumentation Package (RIP). The proposed verification and validation (V&V) process includes solution verification to examine errors associated with the code's solution techniques, and model validation to assess the model's predictive capability for quantities of interest. The model was subjected to mesh resolution and numerical parameter sensitivity studies to determine reasonable parameter values and to understand how they change the overall model response and performance criteria. To facilitate quantification of the uncertainty associated with the mesh, automatic meshing and mesh refining/coarsening algorithms were created and implemented on the complex geometry of the RIP. Automated software to vary model inputs was also developed to determine the solution's sensitivity to numerical and physical parameters. The model was compared with an experiment to demonstrate its accuracy and determine the importance of both modelled and unmodelled physics in quantifying the results' uncertainty. An emphasis is placed on automating the V&V process to enable uncertainty quantification within tight development schedules.

  13. Learning and Information Approaches for Inference in Dynamic Data-Driven Geophysical Applications

    NASA Astrophysics Data System (ADS)

    Ravela, S.

    2015-12-01

    Many geophysical inference problems are characterized by non-linear processes, high-dimensional models and complex uncertainties. A dynamic coupling between models, estimation, and sampling is typically sought to efficiently characterize and reduce uncertainty. This process is, however, fraught with several difficulties, chief among them the ability to deal with model errors and the efficacy of uncertainty quantification and data assimilation. In this presentation, we present three key ideas from learning and intelligent systems theory and apply them to two geophysical applications. The first idea is the use of ensemble learning to compensate for model error, the second is to develop tractable information-theoretic learning to deal with non-Gaussianity in inference, and the third is a manifold resampling technique for effective uncertainty quantification. We apply these methods, first, to the development of a cooperative autonomous observing system using sUAS for studying coherent structures. Second, we apply them to the problem of quantifying risk from hurricanes and storm surges in a changing climate. Results indicate that learning approaches can enable new effectiveness in cases where standard approaches to model reduction, uncertainty quantification and data assimilation fail.

  14. PeakCaller: an automated graphical interface for the quantification of intracellular calcium obtained by high-content screening.

    PubMed

    Artimovich, Elena; Jackson, Russell K; Kilander, Michaela B C; Lin, Yu-Chih; Nestor, Michael W

    2017-10-16

    Intracellular calcium is an important ion involved in the regulation and modulation of many neuronal functions. From regulating cell cycle and proliferation to initiating signaling cascades and regulating presynaptic neurotransmitter release, the concentration and timing of calcium activity governs the function and fate of neurons. Changes in calcium transients can be used in high-throughput screening applications as a basic measure of neuronal maturity, especially in developing or immature neuronal cultures derived from stem cells. Using human induced pluripotent stem cell derived neurons and dissociated mouse cortical neurons combined with the calcium indicator Fluo-4, we demonstrate that PeakCaller reduces type I and type II error in automated peak calling when compared to the oft-used PeakFinder algorithm under both basal and pharmacologically induced conditions. Here we describe PeakCaller, a novel MATLAB script and graphical user interface for the quantification of intracellular calcium transients in neuronal cultures. PeakCaller allows the user to set peak parameters and smoothing algorithms to best fit their data set. This new analysis script will allow for automation of calcium measurements and is a powerful software tool for researchers interested in high-throughput measurements of intracellular calcium.
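
    As an illustration of automated peak calling of this kind, a minimal Python sketch applies a generic peak detector to a synthetic fluorescence trace; the smoothing and peak parameters are assumed values, not PeakCaller's actual settings or algorithm.

      import numpy as np
      from scipy.signal import find_peaks, savgol_filter

      # Minimal sketch of automated peak calling on a calcium fluorescence trace (dF/F0).
      # The trace is synthetic and all detection parameters are assumed.
      rng = np.random.default_rng(0)
      t = np.arange(0.0, 120.0, 0.1)                    # 120 s sampled at 10 Hz
      trace = 0.05 * rng.normal(size=t.size)
      for onset in (10.0, 35.0, 70.0, 95.0):            # transient onsets in seconds
          trace += 0.8 * np.exp(-(t - onset) / 3.0) * (t >= onset)

      smoothed = savgol_filter(trace, window_length=21, polyorder=3)
      peaks, props = find_peaks(smoothed, height=0.3, prominence=0.2, distance=50)

      print("detected peak times (s):", t[peaks])
      print("peak amplitudes (dF/F0):", np.round(props["peak_heights"], 2))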

  15. Quantification of Wine Mixtures with an Electronic Nose and a Human Panel.

    PubMed

    Aleixandre, Manuel; Cabellos, Juan M; Arroyo, Teresa; Horrillo, M C

    2018-01-01

    In this work, an electronic nose and a human panel were used for the quantification of wines formed by binary mixtures of four white grape varieties and two varieties of red wines at different percentages (from 0 to 100% in 10% steps for the electronic nose and from 0 to 100% in 25% steps for the human panel). The wines were prepared using the traditional method with commercial yeasts. Both techniques were able to quantify the mixtures tested, but it is important to note that the technology of the electronic nose is faster, simpler, and more objective than the human panel. In addition, better results of quantification were also obtained using the electronic nose.

  16. quantGenius: implementation of a decision support system for qPCR-based gene quantification.

    PubMed

    Baebler, Špela; Svalina, Miha; Petek, Marko; Stare, Katja; Rotter, Ana; Pompe-Novak, Maruša; Gruden, Kristina

    2017-05-25

    Quantitative molecular biology remains a challenge for researchers due to inconsistent approaches for control of errors in the final results. Due to several factors that can influence the final result, quantitative analysis and interpretation of qPCR data are still not trivial. Together with the development of high-throughput qPCR platforms, there is a need for a tool allowing for robust, reliable and fast nucleic acid quantification. We have developed "quantGenius" ( http://quantgenius.nib.si ), an open-access web application for a reliable qPCR-based quantification of nucleic acids. The quantGenius workflow interactively guides the user through data import, quality control (QC) and calculation steps. The input is machine- and chemistry-independent. Quantification is performed using the standard curve approach, with normalization to one or several reference genes. The special feature of the application is the implementation of user-guided QC-based decision support system, based on qPCR standards, that takes into account pipetting errors, assay amplification efficiencies, limits of detection and quantification of the assays as well as the control of PCR inhibition in individual samples. The intermediate calculations and final results are exportable in a data matrix suitable for further statistical analysis or visualization. We additionally compare the most important features of quantGenius with similar advanced software tools and illustrate the importance of proper QC system in the analysis of qPCR data in two use cases. To our knowledge, quantGenius is the only qPCR data analysis tool that integrates QC-based decision support and will help scientists to obtain reliable results which are the basis for biologically meaningful data interpretation.
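
    A minimal Python sketch of the standard-curve quantification and efficiency check that such a workflow builds on; the dilution series, Cq values, and the use of a single shared standard curve are assumptions for illustration, not quantGenius itself.

      import numpy as np

      # Minimal sketch of standard-curve qPCR quantification with normalization to a
      # reference gene. All dilution-series and sample Cq values are assumed.
      def fit_standard_curve(log10_copies, cq):
          slope, intercept = np.polyfit(log10_copies, cq, 1)
          efficiency = 10.0 ** (-1.0 / slope) - 1.0      # 1.0 corresponds to 100%
          return slope, intercept, efficiency

      def quantify(cq, slope, intercept):
          return 10.0 ** ((cq - intercept) / slope)      # copies in the reaction

      log10_copies = np.array([6.0, 5.0, 4.0, 3.0, 2.0])
      cq_std = np.array([15.1, 18.5, 21.9, 25.3, 28.8])

      slope, intercept, eff = fit_standard_curve(log10_copies, cq_std)
      print(f"amplification efficiency ~ {eff:.0%}")     # QC check: expect roughly 90-110%

      target = quantify(26.0, slope, intercept)          # gene of interest, sample Cq
      reference = quantify(19.2, slope, intercept)       # reference gene, sample Cq
      print(f"normalized target quantity: {target / reference:.3e}")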

  17. Quantification of fructo-oligosaccharides based on the evaluation of oligomer ratios using an artificial neural network.

    PubMed

    Onofrejová, Lucia; Farková, Marta; Preisler, Jan

    2009-04-13

    The application of an internal standard in quantitative analysis is desirable in order to correct for variations in sample preparation and instrumental response. In mass spectrometry of organic compounds, the internal standard is preferably labelled with a stable isotope, such as (18)O, (15)N or (13)C. In this study, a method for the quantification of fructo-oligosaccharides using matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI TOF MS) was proposed and tested on raftilose, a partially hydrolysed inulin with a degree of polymerisation of 2-7. The tetrasaccharide nystose, which is chemically identical to the raftilose tetramer, was used as an internal standard rather than an isotope-labelled analyte. Two mathematical approaches used for data processing, conventional calculations and artificial neural networks (ANN), were compared. The conventional data processing relies on the assumption that a constant oligomer dispersion profile will change after the addition of the internal standard, and on some simple numerical calculations. In contrast, the ANN was found to compensate for the non-linear MALDI response and for variations in the oligomer dispersion profile with raftilose concentration. As a result, the application of the ANN led to lower quantification errors and excellent day-to-day repeatability compared with the conventional data analysis. The developed method is feasible for MS quantification of raftilose in the range of 10-750 pg with errors below 7%. The content of raftilose was determined in dietary cream; the application can be extended to other similar polymers. It should be stressed that no special optimisation of the MALDI process was carried out. A common MALDI matrix and sample preparation were used, and only basic parameters, such as sampling and laser energy, were optimised prior to quantification.

  18. Patient motion effects on the quantification of regional myocardial blood flow with dynamic PET imaging.

    PubMed

    Hunter, Chad R R N; Klein, Ran; Beanlands, Rob S; deKemp, Robert A

    2016-04-01

    Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET-CT misalignment. A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. Simulations also indicated that the magnitude of MBF errors resulting from motion in the superior/inferior and anterior/posterior directions was similar (up to 250%). Body motion effects were more detrimental for higher resolution PET imaging (2 vs 10 mm full-width at half-maximum), and for motion occurring during the mid-to-late time-frames. Motion correction of the reconstructed dynamic image series resulted in significant reduction in MBF errors, but did not account for the residual PET-CTAC misalignment artifacts. MBF bias was reduced further using global partial-volume correction, and using dynamic alignment of the PET projection data to the CT scan for accurate attenuation correction during image reconstruction. Patient body motion can produce MBF estimation errors up to 500%. To reduce these errors, new motion correction algorithms must be effective in identifying motion in the left/right direction, and in the mid-to-late time-frames, since these conditions produce the largest errors in MBF, particularly for high resolution PET imaging. Ideally, motion correction should be done before or during image reconstruction to eliminate PET-CTAC misalignment artifacts.

  19. Feasibility and accuracy of dual-layer spectral detector computed tomography for quantification of gadolinium: a phantom study.

    PubMed

    van Hamersvelt, Robbert W; Willemink, Martin J; de Jong, Pim A; Milles, Julien; Vlassenbroek, Alain; Schilham, Arnold M R; Leiner, Tim

    2017-09-01

    The aim of this study was to evaluate the feasibility and accuracy of dual-layer spectral detector CT (SDCT) for the quantification of clinically encountered gadolinium concentrations. The cardiac chamber of an anthropomorphic thoracic phantom was equipped with 14 tubular inserts containing different gadolinium concentrations, ranging from 0 to 26.3 mg/mL (0.0, 0.1, 0.2, 0.4, 0.5, 1.0, 2.0, 3.0, 4.0, 5.1, 10.6, 15.7, 20.7 and 26.3 mg/mL). Images were acquired using a novel 64-detector row SDCT system at 120 and 140 kVp. Acquisitions were repeated five times to assess reproducibility. Regions of interest (ROIs) were drawn on three slices per insert. A spectral plot was extracted for every ROI and mean attenuation profiles were fitted to known attenuation profiles of water and pure gadolinium using in-house-developed software to calculate gadolinium concentrations. At both 120 and 140 kVp, excellent correlations between scan repetitions and true and measured gadolinium concentrations were found (R > 0.99, P < 0.001; ICCs > 0.99, CI 0.99-1.00). Relative mean measurement errors stayed below 10% down to 2.0 mg/mL true gadolinium concentration at 120 kVp and below 5% down to 1.0 mg/mL true gadolinium concentration at 140 kVp. SDCT allows for accurate quantification of gadolinium at both 120 and 140 kVp. Lowest measurement errors were found for 140 kVp acquisitions. • Gadolinium quantification may be useful in patients with contraindication to iodine. • Dual-layer spectral detector CT allows for overall accurate quantification of gadolinium. • Interscan variability of gadolinium quantification using SDCT material decomposition is excellent.
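
    A minimal Python sketch of the underlying material-decomposition idea: fitting a measured spectral attenuation profile to water and gadolinium reference profiles by least squares. All numerical profiles and values are made-up stand-ins, not actual SDCT spectral data or the in-house fitting software used in the study.

      import numpy as np

      # Minimal sketch: decompose a spectral attenuation profile into water and gadolinium
      # contributions by least squares. All profiles below are assumed illustrative numbers.
      energies_kev = np.array([50.0, 60.0, 70.0, 80.0, 90.0, 100.0])
      mu_water = np.array([0.227, 0.206, 0.193, 0.184, 0.177, 0.171])       # 1/cm (assumed)
      mu_gd_per_mg = np.array([0.040, 0.030, 0.023, 0.018, 0.015, 0.013])   # per mg/mL (assumed)

      true_conc = 5.0                                                       # mg/mL
      rng = np.random.default_rng(0)
      measured = mu_water + true_conc * mu_gd_per_mg + rng.normal(0.0, 1e-3, energies_kev.size)

      basis = np.column_stack([mu_water, mu_gd_per_mg])
      coeffs, *_ = np.linalg.lstsq(basis, measured, rcond=None)
      print(f"water weight ~ {coeffs[0]:.2f}, gadolinium ~ {coeffs[1]:.2f} mg/mL")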

  20. Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE

    NASA Astrophysics Data System (ADS)

    Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.

    2015-12-01

    Groundwater change is difficult to monitor over large scales. One of the most successful approaches is in the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time variable errors vs. time constant errors, across-scale errors v. across-model errors, and error spectral content (across scales and across model). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for more fair use in management applications and for better integration of GRACE-based measurements with observations from other sources.

  1. Quantification of HCV RNA in Clinical Specimens by Branched DNA (bDNA) Technology.

    PubMed

    Wilber, J C; Urdea, M S

    1999-01-01

    The diagnosis and monitoring of hepatitis C virus (HCV) infection have been aided by the development of HCV RNA quantification assays. As a direct measure of viral load, HCV RNA quantification has the advantage of providing information on viral kinetics and offers unique insight into the disease process. Branched DNA (bDNA) signal amplification technology provides a novel approach for the direct quantification of HCV RNA in patient specimens. The bDNA assay measures HCV RNA at physiological levels by boosting the reporter signal, rather than by replicating target sequences as the means of detection, and thus avoids the errors inherent in the extraction and amplification of target sequences. Inherently quantitative and nonradioactive, the bDNA assay is amenable to routine use in a clinical research setting and has been used by several groups to explore the natural history, pathogenesis, and treatment of HCV infection.

  2. Detection and Quantification of Human Fecal Pollution with Real-Time PCR

    EPA Science Inventory

    ABSTRACT Assessment of health risk and fecal bacteria loads associated with human fecal pollution requires a reliable host-specific genetic marker and a rapid quantification method. We report the development of quantitative PCR assays for enumeration of two recently described ...

  3. Determination of Flurbiprofen in Human Plasma by High-Performance Liquid Chromatography.

    PubMed

    Yilmaz, Bilal; Erdem, Ali Fuat

    2015-10-01

    A simple high-performance liquid chromatography method has been developed for the determination of flurbiprofen in human plasma. The method was validated on an Ace C18 column using UV detection. The mobile phase was acetonitrile-0.05 M potassium dihydrogen phosphate solution (60:40, v/v) adjusted to pH 3.5 with phosphoric acid. The calibration curve was linear over the concentration range of 0.10-5.0 μg/mL. Intra- and inter-day precision values for flurbiprofen in plasma were <4.47%, and accuracy (relative error) was better than 3.67%. The extraction recoveries of flurbiprofen from human plasma were between 93.0 and 98.9%. The limits of detection and quantification of flurbiprofen were 0.03 and 0.10 μg/mL, respectively. In addition, this assay was applied to determine the pharmacokinetic parameters of flurbiprofen in six healthy Turkish volunteers who had been given 100 mg flurbiprofen.

  4. Archetypal Analysis for Sparse Representation-Based Hyperspectral Sub-Pixel Quantification

    NASA Astrophysics Data System (ADS)

    Drees, L.; Roscher, R.

    2017-05-01

    This paper focuses on the quantification of land cover fractions in an urban area of Berlin, Germany, using simulated hyperspectral EnMAP data with a spatial resolution of 30 m × 30 m. For this, sparse representation is applied, where each pixel with unknown surface characteristics is expressed as a weighted linear combination of elementary spectra with known land cover classes. The elementary spectra are determined from image reference data using simplex volume maximization, which is a fast heuristic technique for archetypal analysis. In the experiments, the estimation of class fractions based on the archetypal spectral library is compared to the estimation obtained with a manually designed spectral library by means of the reconstruction error, the mean absolute error of the fraction estimates, the sum of fractions, and the number of elementary spectra used. We show that a collection of archetypes can be an adequate and efficient alternative to the spectral library with respect to the mentioned criteria.
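
    A minimal Python sketch of the fraction-estimation step follows; the random spectral library stands in for the archetypes obtained by simplex volume maximization, and non-negative least squares replaces the sparse solver used in the paper.

      import numpy as np
      from scipy.optimize import nnls

      # Minimal sketch of sub-pixel fraction estimation: express a mixed-pixel spectrum as a
      # non-negative weighted combination of elementary spectra and normalize the weights.
      rng = np.random.default_rng(0)
      n_bands, n_endmembers = 100, 4
      library = rng.uniform(0.0, 1.0, (n_bands, n_endmembers))   # stand-in elementary spectra

      true_fractions = np.array([0.5, 0.3, 0.2, 0.0])
      pixel = library @ true_fractions + rng.normal(0.0, 0.005, n_bands)

      weights, resid_norm = nnls(library, pixel)
      fractions = weights / weights.sum()

      print("estimated fractions:", np.round(fractions, 2))
      print("reconstruction error:", round(float(resid_norm), 4))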

  5. Easy, Fast, and Reproducible Quantification of Cholesterol and Other Lipids in Human Plasma by Combined High Resolution MSX and FTMS Analysis

    NASA Astrophysics Data System (ADS)

    Gallego, Sandra F.; Højlund, Kurt; Ejsing, Christer S.

    2018-01-01

    Reliable, cost-effective, and gold-standard absolute quantification of non-esterified cholesterol in human plasma is of paramount importance in clinical lipidomics and for the monitoring of metabolic health. Here, we compared the performance of three mass spectrometric approaches available for direct detection and quantification of cholesterol in extracts of human plasma. These approaches are high resolution full scan Fourier transform mass spectrometry (FTMS) analysis, parallel reaction monitoring (PRM), and novel multiplexed MS/MS (MSX) technology, where fragments from selected precursor ions are detected simultaneously. Evaluating the performance of these approaches in terms of dynamic quantification range, linearity, and analytical precision showed that the MSX-based approach is superior to that of the FTMS and PRM-based approaches. To further show the efficacy of this approach, we devised a simple routine for extensive plasma lipidome characterization using only 8 μL of plasma, using a new commercially available ready-to-spike-in mixture with 14 synthetic lipid standards, and executing a single 6 min sample injection with combined MSX analysis for cholesterol quantification and FTMS analysis for quantification of sterol esters, glycerolipids, glycerophospholipids, and sphingolipids. Using this simple routine afforded reproducible and absolute quantification of 200 lipid species encompassing 13 lipid classes in human plasma samples. Notably, the analysis time of this procedure can be shortened for high throughput-oriented clinical lipidomics studies or extended with more advanced MSALL technology (Almeida R. et al., J. Am. Soc. Mass Spectrom. 26, 133-148 [1]) to support in-depth structural elucidation of lipid molecules.

  6. Easy, Fast, and Reproducible Quantification of Cholesterol and Other Lipids in Human Plasma by Combined High Resolution MSX and FTMS Analysis.

    PubMed

    Gallego, Sandra F; Højlund, Kurt; Ejsing, Christer S

    2018-01-01

    Reliable, cost-effective, and gold-standard absolute quantification of non-esterified cholesterol in human plasma is of paramount importance in clinical lipidomics and for the monitoring of metabolic health. Here, we compared the performance of three mass spectrometric approaches available for direct detection and quantification of cholesterol in extracts of human plasma. These approaches are high resolution full scan Fourier transform mass spectrometry (FTMS) analysis, parallel reaction monitoring (PRM), and novel multiplexed MS/MS (MSX) technology, where fragments from selected precursor ions are detected simultaneously. Evaluating the performance of these approaches in terms of dynamic quantification range, linearity, and analytical precision showed that the MSX-based approach is superior to that of the FTMS and PRM-based approaches. To further show the efficacy of this approach, we devised a simple routine for extensive plasma lipidome characterization using only 8 μL of plasma, using a new commercially available ready-to-spike-in mixture with 14 synthetic lipid standards, and executing a single 6 min sample injection with combined MSX analysis for cholesterol quantification and FTMS analysis for quantification of sterol esters, glycerolipids, glycerophospholipids, and sphingolipids. Using this simple routine afforded reproducible and absolute quantification of 200 lipid species encompassing 13 lipid classes in human plasma samples. Notably, the analysis time of this procedure can be shortened for high throughput-oriented clinical lipidomics studies or extended with more advanced MS ALL technology (Almeida R. et al., J. Am. Soc. Mass Spectrom. 26, 133-148 [1]) to support in-depth structural elucidation of lipid molecules.

  7. An Improved Neutron Transport Algorithm for HZETRN2006

    NASA Astrophysics Data System (ADS)

    Slaba, Tony

    NASA's new space exploration initiative includes plans for a long-term human presence in space, thereby placing new emphasis on space radiation analyses. In particular, a systematic effort of verification, validation and uncertainty quantification of the tools commonly used for radiation analysis for vehicle design and mission planning has begun. In this paper, the numerical error associated with energy discretization in HZETRN2006 is addressed; large errors in the low-energy portion of the neutron fluence spectrum are produced by a numerical truncation error in the transport algorithm. It is shown that the truncation error results from the narrow energy domain of the neutron elastic spectral distributions, and that an extremely fine energy grid is required in order to adequately resolve the problem under the current formulation. Since adding a sufficient number of energy points would render the code computationally inefficient, we revisit the light-ion transport theory developed for HZETRN2006 and focus on neutron elastic interactions. The new approach numerically integrates with adequate resolution in the energy domain without affecting the run-time of the code and is easily incorporated into the current code. Efforts were also made to optimize the computational efficiency of the light-ion propagator; a brief discussion of these efforts is given along with run-time comparisons between the original and updated codes. Convergence testing is then completed by running the code for various environments and shielding materials with many different energy grids to ensure the stability of the proposed method.

  8. Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.

  9. Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan

    2013-01-01

    The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
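    For illustration only, a small Python sketch (not from the letter; the synthetic rain field, corruption model, and thresholds are assumptions) contrasting how residual spread behaves under the additive and multiplicative error models:

        # Sketch: contrast additive vs. multiplicative error models for daily precipitation.
        # The synthetic "truth" and "measurement" below are assumptions for illustration only.
        import numpy as np

        rng = np.random.default_rng(0)
        truth = rng.gamma(shape=0.5, scale=10.0, size=5000) + 0.1                 # daily rain, mm
        meas = 0.8 * truth ** 1.1 * np.exp(rng.normal(0, 0.3, truth.size))        # multiplicative-type corruption

        # Additive model: meas = truth + e  ->  residual spread depends strongly on rain amount
        resid_add = meas - truth

        # Multiplicative model: log(meas) = a + b*log(truth) + eps  ->  roughly constant spread
        a, b = np.polyfit(np.log(truth), np.log(meas), 1)[::-1]                   # intercept, slope
        resid_mul = np.log(meas) - (a + b * np.log(truth))

        print("additive residual std (light vs heavy rain):",
              resid_add[truth < 5].std(), resid_add[truth > 20].std())
        print("multiplicative residual std (light vs heavy rain):",
              resid_mul[truth < 5].std(), resid_mul[truth > 20].std())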

  10. Quantification of Wine Mixtures with an Electronic Nose and a Human Panel

    PubMed Central

    Aleixandre, Manuel; Cabellos, Juan M.; Arroyo, Teresa; Horrillo, M. C.

    2018-01-01

    In this work, an electronic nose and a human panel were used for the quantification of wines formed by binary mixtures of four white grape varieties and two varieties of red wines at different percentages (from 0 to 100% in 10% steps for the electronic nose and from 0 to 100% in 25% steps for the human panel). The wines were prepared using the traditional method with commercial yeasts. Both techniques were able to quantify the mixtures tested, but it is important to note that the electronic nose technology is faster, simpler, and more objective than the human panel. In addition, better quantification results were also obtained with the electronic nose. PMID:29484296

  11. Individual Differences in Human Reliability Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey C. Joe; Ronald L. Boring

    2014-06-01

    While human reliability analysis (HRA) methods include uncertainty in quantification, the nominal model of human error in HRA typically assumes that operator performance does not vary significantly when they are given the same initiating event, indicators, procedures, and training, and that any differences in operator performance are simply aleatory (i.e., random). While this assumption generally holds true when performing routine actions, variability in operator response has been observed in multiple studies, especially in complex situations that go beyond training and procedures. As such, complexity can lead to differences in operator performance (e.g., operator understanding and decision-making). Furthermore, psychological research has shown that there are a number of known antecedents (i.e., attributable causes) that consistently contribute to observable and systematically measurable (i.e., not random) differences in behavior. This paper reviews examples of individual differences taken from operational experience and the psychological literature. The impact of these differences in human behavior and their implications for HRA are then discussed. We propose that individual differences should not be treated as aleatory, but rather as epistemic. Ultimately, by understanding the sources of individual differences, it is possible to remove some epistemic uncertainty from analyses.

  12. A Robust Error Model for iTRAQ Quantification Reveals Divergent Signaling between Oncogenic FLT3 Mutants in Acute Myeloid Leukemia*

    PubMed Central

    Zhang, Yi; Askenazi, Manor; Jiang, Jingrui; Luckey, C. John; Griffin, James D.; Marto, Jarrod A.

    2010-01-01

    The FLT3 receptor tyrosine kinase plays an important role in normal hematopoietic development and leukemogenesis. Point mutations within the activation loop and in-frame tandem duplications of the juxtamembrane domain represent the most frequent molecular abnormalities observed in acute myeloid leukemia. Interestingly these gain-of-function mutations correlate with different clinical outcomes, suggesting that signals from constitutive FLT3 mutants activate different downstream targets. In principle, mass spectrometry offers a powerful means to quantify protein phosphorylation and identify signaling events associated with constitutively active kinases or other oncogenic events. However, regulation of individual phosphorylation sites presents a challenging case for proteomics studies whereby quantification is based on individual peptides rather than an average across different peptides derived from the same protein. Here we describe a robust experimental framework and associated error model for iTRAQ-based quantification on an Orbitrap mass spectrometer that relates variance of peptide ratios to mass spectral peak height and provides for assignment of p value, q value, and confidence interval to every peptide identification, all based on routine measurements, obviating the need for detailed characterization of individual ion peaks. Moreover, we demonstrate that our model is stable over time and can be applied in a manner directly analogous to ubiquitously used external mass calibration routines. Application of our error model to quantitative proteomics data for FLT3 signaling provides evidence that phosphorylation of tyrosine phosphatase SHP1 abrogates the transformative potential, but not overall kinase activity, of FLT3-D835Y in acute myeloid leukemia. PMID:20019052
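    The paper's error model ties the variance of peptide ratios to mass spectral peak height. The sketch below is a simplified illustration of that idea, not the authors' implementation; the var ≈ a + b/intensity form, the binning scheme, and the function names are assumptions:

        # Sketch: fit a variance-vs-intensity noise model for reporter-ion ratios and assign
        # per-peptide p-values. The var = a + b/intensity form is an assumed, simplified model.
        import numpy as np
        from scipy import stats

        def fit_noise_model(log_ratios, intensities):
            """Fit var(log ratio) ~ a + b / peak height from intensity-binned peptide data."""
            edges = np.quantile(intensities, np.linspace(0, 1, 21))
            idx = np.clip(np.digitize(intensities, edges) - 1, 0, 19)
            inv_i, var_b = [], []
            for k in range(20):
                sel = idx == k
                if sel.sum() > 2:                                # skip sparse bins
                    inv_i.append(1.0 / intensities[sel].mean())
                    var_b.append(log_ratios[sel].var(ddof=1))
            b, a = np.polyfit(inv_i, var_b, 1)                   # slope b, intercept a
            return a, b

        def peptide_pvalues(log_ratios, intensities, a, b):
            z = log_ratios / np.sqrt(a + b / intensities)        # intensity-dependent standardization
            return 2 * stats.norm.sf(np.abs(z))                  # two-sided p-value per peptide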

  13. Simultaneous quantification of 21 water soluble vitamin circulating forms in human plasma by liquid chromatography-mass spectrometry.

    PubMed

    Meisser Redeuil, Karine; Longet, Karin; Bénet, Sylvie; Munari, Caroline; Campos-Giménez, Esther

    2015-11-27

    This manuscript reports a validated analytical approach for the quantification of 21 water-soluble vitamins and their main circulating forms in human plasma. Isotope dilution-based sample preparation consisted of protein precipitation using acidic methanol enriched with stable isotope labelled internal standards. Separation was achieved by reversed-phase liquid chromatography and detection performed by tandem mass spectrometry in positive electrospray ionization mode. Instrumental lower limits of detection and quantification reached <0.1-10 nM and 0.2-25 nM, respectively. Commercially available pooled human plasma was used to build matrix-matched calibration curves ranging from 2-500, 5-1250, 20-5000, or 150-37500 nM depending on the analyte. The overall performance of the method was considered adequate, with 2.8-20.9% intra-day and 5.2-20.0% inter-day precision, and average accuracy of 91-108%. Recovery experiments were also performed and reached 82% on average. This analytical approach was then applied to the quantification of circulating water-soluble vitamins in single-donor human plasma samples. The present report provides a sensitive and reliable approach for the quantification of water-soluble vitamins and their main circulating forms in human plasma. In the future, this analytical approach will support comprehensive assessments of water-soluble vitamin nutritional status and bioavailability studies in humans. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Quantification of LiDAR measurement uncertainty through propagation of errors due to sensor sub-systems and terrain morphology

    NASA Astrophysics Data System (ADS)

    Goulden, T.; Hopkinson, C.

    2013-12-01

    The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessments of management decisions based on LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information on the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor-advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories: 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude, or laser beam incidence angle increases. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors from 5 cm at a nadir scan orientation to 8 cm at the scan edges, for an aircraft altitude of 1200 m and a half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, showed that 100%, 85%, and 75% of elevation residuals, respectively, fell below the error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.
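    As a rough illustration of the sensor sub-system error propagation described above, the following simplified flat-terrain sketch propagates ranging, angular, and GPS height uncertainties into a per-pulse vertical error; the nominal uncertainty values are assumptions, not the authors' full sensor model:

        # Sketch: first-order propagation of ranging and angular uncertainty into vertical error
        # for a single LiDAR pulse over flat terrain (simplified; nominal sigmas are assumed).
        import numpy as np

        def vertical_sigma(altitude_m, scan_angle_deg, sigma_range_m=0.02,
                           sigma_angle_deg=0.005, sigma_gps_z_m=0.05):
            theta = np.radians(scan_angle_deg)
            slant_range = altitude_m / np.cos(theta)          # flat-terrain slant range
            dz_drange = np.cos(theta)                         # dz/dR for z = R cos(theta)
            dz_dtheta = slant_range * np.sin(theta)           # |dz/dtheta|
            var = ((dz_drange * sigma_range_m) ** 2 +
                   (dz_dtheta * np.radians(sigma_angle_deg)) ** 2 +
                   sigma_gps_z_m ** 2)
            return np.sqrt(var)

        # Errors grow toward the swath edge, consistent with the nadir-vs-edge trend above.
        print(vertical_sigma(1200, 0), vertical_sigma(1200, 15))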

  15. A Study on Generic Representation of Skeletal Remains Replication of Prehistoric Burial

    NASA Astrophysics Data System (ADS)

    Shao, C.-W.; Chiu, H.-L.; Chang, S.-K.

    2015-08-01

    Generic representation of skeletal remains from burials involves three groups: physical anthropologists, replication technicians, and promotional educators. Because archaeological excavation is irreversible and disruptive, detailed documentation and replication technologies are needed for many purposes. Unearthed bones must go through a reverse-engineering procedure of 3D scanning, digital model superimposition, rapid prototyping, and mould making, and the integrated errors generated in the presentation of colours and textures are important issues for replicated skeletal remains, balancing the professional judgements of physical anthropologists, the subjective decisions of the makers, and the expectations of viewers. This study presents several cases and examines current issues in display and replication technologies for human skeletal remains from prehistoric burials, and documents the detailed colour changes of a human skeleton over time as a reference for reproduction. The tolerance errors of quantification and the required technical qualifications are determined by the precision of 3D scanning and the specifications of the rapid prototyping machine, while the mould-making process should follow the professional requirements of physical anthropological study. Additionally, a colorimeter was used to record and analyse the colour change of the human skeletal remains from wet to dry condition; this colour change was then used to evaluate the surface texture and colour presentation of the remains and to constrain artistic licence in the reproduction. The "Liangdao Man No. 1", a well-preserved burial of the early Neolithic period (8300 B.P.) excavated from the Liangdao-Daowei site, Matsu, Taiwan, served as the replicated object for this study. We examined the reproduction procedures step by step to ensure that the surface texture and colour of the replica match those of the real human skeletal remains as discovered. The colour change of the skeleton documented and quantified in this study can serve as a reference for future study and educational exhibition of human skeletal remain reproductions.

  16. Accuracy of iodine quantification using dual energy CT in latest generation dual source and dual layer CT.

    PubMed

    Pelgrim, Gert Jan; van Hamersvelt, Robbert W; Willemink, Martin J; Schmidt, Bernhard T; Flohr, Thomas; Schilham, Arnold; Milles, Julien; Oudkerk, Matthijs; Leiner, Tim; Vliegenthart, Rozemarijn

    2017-09-01

    To determine the accuracy of iodine quantification with dual energy computed tomography (DECT) in two high-end CT systems with different spectral imaging techniques. Five tubes with different iodine concentrations (0, 5, 10, 15, 20 mg/ml) were analysed in an anthropomorphic thoracic phantom. Adding two phantom rings simulated increased patient size. For third-generation dual source CT (DSCT), tube voltage combinations of 150Sn and 70, 80, 90, 100 kVp were analysed. For dual layer CT (DLCT), 120 and 140 kVp were used. Scans were repeated three times. Median normalized values and interquartile ranges (IQRs) were calculated for all kVp settings and phantom sizes. Correlation between measured and known iodine concentrations was excellent for both systems (R = 0.999-1.000, p < 0.0001). For DSCT, median measurement errors ranged from -0.5% (IQR -2.0, 2.0%) at 150Sn/70 kVp and -2.3% (IQR -4.0, -0.1%) at 150Sn/80 kVp to -4.0% (IQR -6.0, -2.8%) at 150Sn/90 kVp. For DLCT, median measurement errors ranged from -3.3% (IQR -4.9, -1.5%) at 140 kVp to -4.6% (IQR -6.0, -3.6%) at 120 kVp. Larger phantom sizes increased variability of iodine measurements (p < 0.05). Iodine concentration can be accurately quantified with state-of-the-art DECT systems from two vendors. The lowest absolute errors were found for DSCT using the 150Sn/70 kVp or 150Sn/80 kVp combinations, which was slightly more accurate than 140 kVp in DLCT. • High-end CT scanners allow accurate iodine quantification using different DECT techniques. • Lowest measurement error was found in scans with largest photon energy separation. • Dual-source CT quantified iodine slightly more accurately than dual layer CT.
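    A minimal sketch of the error metric reported above (median normalized measurement error with interquartile range across repeated measurements); the tube concentrations and repeat values are placeholders:

        # Sketch: median normalized error and IQR of measured vs. known iodine concentrations.
        import numpy as np

        known = np.array([5.0, 10.0, 15.0, 20.0])              # mg/ml, placeholder values
        measured = np.array([[4.9, 9.8, 14.6, 19.7],           # three repeated scans (placeholders)
                             [5.1, 9.9, 14.4, 19.5],
                             [4.95, 9.7, 14.5, 19.9]])

        errors_pct = 100.0 * (measured - known) / known        # normalized error per tube and repeat
        median_err = np.median(errors_pct)
        q1, q3 = np.percentile(errors_pct, [25, 75])
        print(f"median error {median_err:.1f}% (IQR {q1:.1f}, {q3:.1f}%)")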

  17. Accuracy of Area at Risk Quantification by Cardiac Magnetic Resonance According to the Myocardial Infarction Territory.

    PubMed

    Fernández-Friera, Leticia; García-Ruiz, José Manuel; García-Álvarez, Ana; Fernández-Jiménez, Rodrigo; Sánchez-González, Javier; Rossello, Xavier; Gómez-Talavera, Sandra; López-Martín, Gonzalo J; Pizarro, Gonzalo; Fuster, Valentín; Ibáñez, Borja

    2017-05-01

    Area at risk (AAR) quantification is important to evaluate the efficacy of cardioprotective therapies. However, postinfarction AAR assessment could be influenced by the infarcted coronary territory. Our aim was to determine the accuracy of T2-weighted short tau triple-inversion recovery (T2W-STIR) cardiac magnetic resonance (CMR) imaging for accurate AAR quantification in anterior, lateral, and inferior myocardial infarctions. Acute reperfused myocardial infarction was experimentally induced in 12 pigs, with 40-minute occlusion of the left anterior descending (n = 4), left circumflex (n = 4), and right coronary arteries (n = 4). Perfusion CMR was performed during selective intracoronary gadolinium injection at the coronary occlusion site (in vivo criterion standard) and, additionally, a 7-day CMR, including T2W-STIR sequences, was performed. Finally, all animals were sacrificed and underwent postmortem Evans blue staining (classic criterion standard). The concordance between the CMR-based criterion standard and T2W-STIR to quantify AAR was high for anterior and inferior infarctions (r = 0.73; P = .001; mean error = 0.50%; limits = -12.68% to 13.68% and r = 0.87; P = .001; mean error = -1.5%; limits = -8.0% to 5.8%, respectively). Conversely, the correlation for the circumflex territories was poor (r = 0.21, P = .37), showing a higher mean error and wider limits of agreement. A strong correlation between pathology and the CMR-based criterion standard was observed (r = 0.84, P < .001; mean error = 0.91%; limits = -7.55% to 9.37%). T2W-STIR CMR sequences are accurate to determine the AAR for anterior and inferior infarctions; however, their accuracy for lateral infarctions is poor. These findings may have important implications for the design and interpretation of clinical trials evaluating the effectiveness of cardioprotective therapies. Copyright © 2016 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.

  18. 21 CFR 530.24 - Procedure for announcing analytical methods for drug residue quantification.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS EXTRALABEL DRUG USE IN ANIMALS Specific Provisions Relating to Extralabel Use of Animal and Human Drugs in Food-Producing Animals § 530.24 Procedure for announcing analytical methods for drug residue quantification. (a...

  19. Uncertainty Quantification for Polynomial Systems via Bernstein Expansions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper presents a unifying framework for uncertainty quantification for systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The approach proposed, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties as well as the possibility of underpredicting the range of the statistic of interest that may result from searching for the best- and worst-case epistemic values via nonlinear optimization or sampling.
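    A univariate toy illustration of the range-bounding idea (the paper handles multivariate polynomial response metrics with mixed aleatory/epistemic inputs; this sketch only shows the classical Bernstein-coefficient enclosure on [0, 1], with an example polynomial chosen arbitrarily):

        # Sketch: Bernstein coefficients of a polynomial on [0, 1] bound its range;
        # subdividing the domain would tighten the bounds (univariate illustration only).
        from math import comb
        import numpy as np

        def bernstein_coeffs(a):
            """Power-basis coefficients a[i] of p(x) = sum a[i] x^i -> Bernstein coefficients on [0, 1]."""
            n = len(a) - 1
            return [sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1)) for k in range(n + 1)]

        a = [1.0, -3.0, 2.5, 0.5]                    # example: p(x) = 1 - 3x + 2.5x^2 + 0.5x^3
        b = bernstein_coeffs(a)
        x = np.linspace(0, 1, 1001)
        p = np.polyval(a[::-1], x)                   # polyval expects highest degree first
        print("Bernstein bounds:", min(b), max(b))   # guaranteed enclosure of the range on [0, 1]
        print("sampled range   :", p.min(), p.max())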

  20. [A Quality Assurance (QA) System with a Web Camera for High-dose-rate Brachytherapy].

    PubMed

    Hirose, Asako; Ueda, Yoshihiro; Oohira, Shingo; Isono, Masaru; Tsujii, Katsutomo; Inui, Shouki; Masaoka, Akira; Taniguchi, Makoto; Miyazaki, Masayoshi; Teshima, Teruki

    2016-03-01

    A quality assurance (QA) system that simultaneously quantifies the position and duration of a 192Ir source (dwell position and time) was developed, and the performance of this system was evaluated in high-dose-rate brachytherapy. This QA system uses a web camera to verify and quantify dwell position and time. The web camera records 30 images per second over a range from 1,425 mm to 1,505 mm. A user verifies the source position from the web camera in real time. The source position and duration were quantified from the recorded video using in-house software based on a template-matching technique. This QA system allowed verification of the absolute position in real time and simultaneous quantification of dwell position and time. Verification of the system showed that the mean step-size error was 0.31±0.1 mm and the mean dwell-time error 0.1±0.0 s. Absolute position errors can be determined with an accuracy of 1.0 mm at all dwell points for three step sizes, and dwell-time errors with an accuracy of 0.1% for planned times longer than 10.0 s. This system provides quick, accurate verification and quantification of dwell position and time at various dwell positions, independent of the step size.
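    The dwell-position tracking above relies on template matching of web-camera frames. A minimal OpenCV sketch of that general idea follows; the file names, pixel-to-millimetre scale, and match threshold are assumptions, not the in-house software:

        # Sketch: locate a source marker in a web-camera frame by template matching
        # and convert the pixel offset to millimetres (illustrative values only).
        import cv2

        MM_PER_PIXEL = 0.25                                              # assumed camera calibration

        frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)       # hypothetical frame file
        template = cv2.imread("source_template.png", cv2.IMREAD_GRAYSCALE)

        result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)                   # best-match corner and score
        if max_val > 0.8:                                                # assumed confidence threshold
            cx = max_loc[0] + template.shape[1] / 2.0                    # source centre, pixels
            position_mm = 1425.0 + cx * MM_PER_PIXEL                     # map into the 1425-1505 mm range
            print(f"dwell position ~ {position_mm:.1f} mm (score {max_val:.2f})")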

  1. Uncertainty quantification in application of the enrichment meter principle for nondestructive assay of special nuclear material

    DOE PAGES

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    2015-09-05

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
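    For orientation, a bare-bones calibration sketch in the spirit of the enrichment meter principle (net 186 keV count rate taken as proportional to U-235 enrichment); the standards, count rates, and simple unweighted fit are placeholders and deliberately ignore the errors-in-predictors, model-error, and item-specific-bias issues the paper addresses:

        # Sketch: enrichment-meter-style calibration, enrichment ~ a + b * (net 186 keV rate),
        # followed by inverse prediction for an unknown item (simplified; placeholder data).
        import numpy as np

        enrich_std = np.array([0.7, 3.0, 20.0, 93.0])           # % U-235 of calibration standards (placeholders)
        rate_std = np.array([12.0, 51.0, 340.0, 1580.0])        # net counts/s (placeholders)

        b, a = np.polyfit(rate_std, enrich_std, 1)              # slope, intercept
        resid = enrich_std - (a + b * rate_std)
        s = np.sqrt(resid.var(ddof=2))                          # residual scatter of the fit

        rate_item = 420.0                                       # unknown item (placeholder)
        e_item = a + b * rate_item
        print(f"estimated enrichment {e_item:.1f}% +/- {s:.1f}% (1-sigma, calibration scatter only)")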

  2. Efficacy and workload analysis of a fixed vertical couch position technique and a fixed‐action–level protocol in whole‐breast radiotherapy

    PubMed Central

    Verhoeven, Karolien; Weltens, Caroline; Van den Heuvel, Frank

    2015-01-01

    Quantification of the setup errors is vital for defining appropriate setup margins that prevent geographical misses. The no-action-level (NAL) correction protocol reduces the systematic setup errors and, hence, the setup margins. The manual entry of the setup corrections in the record-and-verify software, however, increases the susceptibility of the NAL protocol to human errors. Moreover, the impact of skin mobility on the anteroposterior patient setup reproducibility in whole-breast radiotherapy (WBRT) is unknown. In this study, we therefore investigated the potential of fixed vertical couch position-based patient setup in WBRT. The possibility of introducing a threshold for correction of the systematic setup errors was also explored. We measured the anteroposterior, mediolateral, and superior–inferior setup errors during fractions 1–12 and weekly thereafter with tangential angled single modality paired imaging. These setup data were used to simulate the residual setup errors of the NAL protocol, the fixed vertical couch position protocol, and the fixed-action-level protocol with different correction thresholds. Population statistics of the setup errors of 20 breast cancer patients and 20 breast cancer patients with additional regional lymph node (LN) irradiation were calculated to determine the setup margins of each off-line correction protocol. Our data showed the potential of the fixed vertical couch position protocol to restrict the systematic and random anteroposterior residual setup errors to 1.8 mm and 2.2 mm, respectively. Compared to the NAL protocol, a correction threshold of 2.5 mm reduced the frequency of mediolateral and superior–inferior setup corrections by 40% and 63%, respectively. The implementation of the correction threshold did not deteriorate the accuracy of the off-line setup correction compared to the NAL protocol. The combination of the fixed vertical couch position protocol, for correction of the anteroposterior setup error, and the fixed-action-level protocol with 2.5 mm correction threshold, for correction of the mediolateral and the superior–inferior setup errors, was shown to provide adequate and comparable patient setup accuracy in WBRT and WBRT with additional LN irradiation. PACS numbers: 87.53.Kn, 87.57.-s
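    A compact sketch of the kind of offline-correction simulation described above; the synthetic setup errors, the three reference fractions, the 2.5 mm action level, and the van Herk-style margin recipe (2.5Σ + 0.7σ) are stated assumptions rather than the study's exact protocol:

        # Sketch: simulate daily setup errors, apply a no-action-level (NAL) correction after the
        # first 3 fractions, or only when |mean shift| exceeds a 2.5 mm action level, then
        # summarize systematic (Sigma) and random (sigma) residuals and a setup margin.
        import numpy as np

        rng = np.random.default_rng(1)
        n_pat, n_frac, n_ref = 40, 25, 3
        sys_err = rng.normal(0, 2.0, n_pat)                          # per-patient systematic error, mm
        daily = sys_err[:, None] + rng.normal(0, 2.0, (n_pat, n_frac))

        def residuals(daily, threshold_mm=0.0):
            shift = daily[:, :n_ref].mean(axis=1)                    # measured systematic shift
            shift = np.where(np.abs(shift) > threshold_mm, shift, 0.0)   # threshold 0 reproduces NAL
            return daily[:, n_ref:] - shift[:, None]

        def margin(res):
            Sigma = res.mean(axis=1).std()                           # SD of per-patient means
            sigma = np.sqrt((res.std(axis=1, ddof=1) ** 2).mean())   # RMS of per-patient SDs
            return 2.5 * Sigma + 0.7 * sigma                         # van Herk-style margin, mm

        print("NAL margin          :", round(margin(residuals(daily)), 1), "mm")
        print("2.5 mm action level :", round(margin(residuals(daily, 2.5)), 1), "mm")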

  3. The Accuracy of Al and Cu Film Thickness Determinations and the Implications for Electron Probe Microanalysis.

    PubMed

    Matthews, Mike B; Kearns, Stuart L; Buse, Ben

    2018-04-01

    The accuracy to which Cu and Al coatings can be determined, and the effect this has on the quantification of the substrate, are investigated. Cu and Al coatings of nominally 5, 10, 15, and 20 nm were sputter coated onto polished Bi using two configurations of coater: one with the film thickness monitor (FTM) sensor colocated with the samples, and one with the sensor located to one side. The FTM thicknesses are compared against those calculated from measured Cu Lα and Al Kα k-ratios using PENEPMA, GMRFilm, and DTSA-II. Selected samples were also cross-sectioned using a focused ion beam. Both systems produced repeatable coatings, the thickest coating being approximately four times the thinnest coating. The side-located FTM sensor indicated thicknesses less than half those of the software-modeled results, propagating to 70% errors in substrate quantification at 5 kV. The colocated FTM sensor produced errors in film thickness and substrate quantification of 10-20%. Over the range of film thicknesses and accelerating voltages modeled, both the substrate and coating k-ratios can be approximated by linear trends as functions of film thickness. The Al films were found to have a reduced density of ~2 g/cm3.

  4. Determination of human insulin in dog plasma by a selective liquid chromatography-tandem mass spectrometry method: Application to a pharmacokinetic study.

    PubMed

    Dong, Shiqi; Zeng, Yong; Wei, Guangli; Si, Duanyun; Liu, Changxiao

    2018-03-01

    A simple, sensitive, and selective LC-MS/MS method for quantitative analysis of human insulin was developed and validated in dog plasma. Insulin glargine was used as the internal standard. After a simple solid-phase extraction step, the chromatographic separation of human insulin was achieved using an InertSustain Bio C18 column with a mobile phase of acetonitrile containing 1% formic acid (A)-water containing 1% formic acid (B). The detection was performed by positive ion electrospray ionization in multiple-reaction monitoring (MRM) mode. Good linearity was observed in the concentration range of 1-1000 μIU/mL (r² > 0.99), and the lower limit of quantification was 1 μIU/mL (equal to 38.46 pg/mL). The intra- and inter-day precision (expressed as relative standard deviation, RSD) of human insulin were ≤12.1% and ≤13.0%, respectively, and the accuracy (expressed as relative error, RE) was in the range of -7.23% to 11.9%. The recovery and matrix effect were both within acceptable limits. This method was successfully applied to a pharmacokinetic study of human insulin in dogs after subcutaneous administration. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Patient motion effects on the quantification of regional myocardial blood flow with dynamic PET imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, Chad R. R. N.; Kemp, Robert A. de, E-mail: RAdeKemp@ottawaheart.ca; Klein, Ran

    Purpose: Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET–CT misalignment. Methods: A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. Results: In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. Simulations also indicated that the magnitude of MBF errors resulting from motion in the superior/inferior and anterior/posterior directions was similar (up to 250%). Body motion effects were more detrimental for higher resolution PET imaging (2 vs 10 mm full-width at half-maximum), and for motion occurring during the mid-to-late time-frames. Motion correction of the reconstructed dynamic image series resulted in significant reduction in MBF errors, but did not account for the residual PET–CTAC misalignment artifacts. MBF bias was reduced further using global partial-volume correction, and using dynamic alignment of the PET projection data to the CT scan for accurate attenuation correction during image reconstruction. Conclusions: Patient body motion can produce MBF estimation errors up to 500%. To reduce these errors, new motion correction algorithms must be effective in identifying motion in the left/right direction, and in the mid-to-late time-frames, since these conditions produce the largest errors in MBF, particularly for high resolution PET imaging. Ideally, motion correction should be done before or during image reconstruction to eliminate PET-CTAC misalignment artifacts.

  6. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts: the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists of extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted

  7. Digital Droplet PCR: CNV Analysis and Other Applications.

    PubMed

    Mazaika, Erica; Homsy, Jason

    2014-07-14

    Digital droplet PCR (ddPCR) is an assay that combines state-of-the-art microfluidics technology with TaqMan-based PCR to achieve precise target DNA quantification at high levels of sensitivity and specificity. Because quantification is achieved without the need for standard assays in an easy to interpret, unambiguous digital readout, ddPCR is far simpler, faster, and less error prone than real-time qPCR. The basic protocol can be modified with minor adjustments to suit a wide range of applications, such as CNV analysis, rare variant detection, SNP genotyping, and transcript quantification. This unit describes the ddPCR workflow in detail for the Bio-Rad QX100 system, but the theory and data interpretation are generalizable to any ddPCR system. Copyright © 2014 John Wiley & Sons, Inc.
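    The digital readout reduces to Poisson statistics over the droplet partitions; a minimal sketch of that arithmetic (the droplet counts and the nominal droplet volume are placeholders, not values from this unit):

        # Sketch: Poisson-corrected target concentration from droplet digital PCR counts.
        import math

        positive, total = 8500, 18000          # placeholder droplet counts
        droplet_volume_nl = 0.85               # assumed nominal droplet volume, nanolitres

        lam = -math.log(1.0 - positive / total)            # mean target copies per droplet
        copies_per_ul = lam / (droplet_volume_nl * 1e-3)   # convert nl -> ul of reaction
        print(f"{copies_per_ul:.0f} copies/uL of reaction")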

  8. Markerless attenuation correction for carotid MRI surface receiver coils in combined PET/MR imaging

    NASA Astrophysics Data System (ADS)

    Eldib, Mootaz; Bini, Jason; Robson, Philip M.; Calcagno, Claudia; Faul, David D.; Tsoumpas, Charalampos; Fayad, Zahi A.

    2015-06-01

    The purpose of the study was to evaluate the effect of attenuation of MR coils on quantitative carotid PET/MR exams. Additionally, an automated attenuation correction method for flexible carotid MR coils was developed and evaluated. The attenuation of the carotid coil was measured by imaging a uniform water phantom injected with 37 MBq of 18F-FDG in a combined PET/MR scanner for 24 min with and without the coil. In the same session, an ultra-short echo time (UTE) image of the coil on top of the phantom was acquired. Using a combination of rigid and non-rigid registration, a CT-based attenuation map was registered to the UTE image of the coil for attenuation and scatter correction. After phantom validation, the effect of the carotid coil attenuation and the attenuation correction method were evaluated in five subjects. Phantom studies indicated that the overall loss of PET counts due to the coil was 6.3% with local region-of-interest (ROI) errors reaching up to 18.8%. Our registration method to correct for attenuation from the coil decreased the global error and local error (ROI) to 0.8% and 3.8%, respectively. The proposed registration method accurately captured the location and shape of the coil with a maximum spatial error of 2.6 mm. Quantitative analysis in human studies correlated with the phantom findings, but was dependent on the size of the ROI used in the analysis. MR coils result in significant error in PET quantification and thus attenuation correction is needed. The proposed strategy provides an operator-free method for attenuation and scatter correction for a flexible MRI carotid surface coil for routine clinical use.

  9. Improved uncertainty quantification in nondestructive assay for nonproliferation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Ken

    2016-12-01

    This paper illustrates methods to improve uncertainty quantification (UQ) for non-destructive assay (NDA) measurements used in nuclear nonproliferation. First, it is shown that current bottom-up UQ applied to calibration data is not always adequate, for three main reasons: (1) Because there are errors in both the predictors and the response, calibration involves a ratio of random quantities, and calibration data sets in NDA usually consist of only a modest number of samples (3–10); therefore, asymptotic approximations involving quantities needed for UQ such as means and variances are often not sufficiently accurate; (2) Common practice overlooks that calibration implies a partitioningmore » of total error into random and systematic error, and (3) In many NDA applications, test items exhibit non-negligible departures in physical properties from calibration items, so model-based adjustments are used, but item-specific bias remains in some data. Therefore, improved bottom-up UQ using calibration data should predict the typical magnitude of item-specific bias, and the suggestion is to do so by including sources of item-specific bias in synthetic calibration data that is generated using a combination of modeling and real calibration data. Second, for measurements of the same nuclear material item by both the facility operator and international inspectors, current empirical (top-down) UQ is described for estimating operator and inspector systematic and random error variance components. A Bayesian alternative is introduced that easily accommodates constraints on variance components, and is more robust than current top-down methods to the underlying measurement error distributions.« less

  10. Rapid LC-MRM-MS assay for simultaneous quantification of choline, betaine, trimethylamine, trimethylamine N-oxide, and creatinine in human plasma and urine.

    PubMed

    Zhao, Xueqing; Zeisel, Steven H; Zhang, Shucha

    2015-06-17

    There is a growing interest in analyzing choline, betaine, and their gut microbial metabolites including trimethylamine (TMA) and trimethylamine N-oxide (TMAO) in body fluids due to the high relevance of these compounds for human health and diseases. A stable isotope dilution (SID)-LC-MRM-MS assay was developed for the simultaneous determination of choline, betaine, TMA, TMAO, and creatinine in human plasma and urine. The assay was validated using quality control (QC) plasma samples, spiked at low, medium, and high levels. Freeze-thaw stability was also evaluated. The utility of this assay for urine was demonstrated using a nutritional clinical study on the effect of various egg doses on TMAO production in humans. This assay has a wide dynamic range (R² > 0.994) for all the analytes (choline: 0.122-250 μM; betaine: 0.488-1000 μM; TMA: 0.244-250 μM; TMAO: 0.061-62.5 μM; and creatinine: 0.977-2000 μM). High intra- and inter-day precision (CV < 6%) and high accuracy (< 15% error) were observed from the QC plasma samples. The assay is reliable for samples undergoing multiple freeze-thaw cycles (tested up to eight cycles). The assay also works for urine samples as demonstrated by a clinical study in which we observed a significant, positive linear response to various egg doses for urinary concentrations of all the analytes except creatinine. A rapid SID-LC-MRM-MS assay for simultaneous quantification of choline, betaine, TMA, TMAO, and creatinine has been developed and validated, and is expected to find wide application in nutrition and cardiovascular studies as well as diagnosis and management of trimethylaminuria. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Selective Distance-Based K+ Quantification on Paper-Based Microfluidics.

    PubMed

    Gerold, Chase T; Bakker, Eric; Henry, Charles S

    2018-04-03

    In this study, paper-based microfluidic devices (μPADs) capable of K+ quantification in aqueous samples, as well as in human serum, using both colorimetric and distance-based methods are described. A lipophilic phase containing potassium ionophore I (valinomycin) was utilized to achieve highly selective quantification of K+ in the presence of Na+, Li+, and Mg2+ ions. Successful addition of a suspended lipophilic phase to a wax printed paper-based device is described and offers a solution to current approaches that rely on organic solvents, which damage wax barriers. The approach provides an avenue for future alkali/alkaline quantification utilizing μPADs. Colorimetric spot tests allowed for K+ quantification from 0.1-5.0 mM using only 3.00 μL of sample solution. Selective distance-based quantification required small sample volumes (6.00 μL) and gave responses sensitive enough to distinguish between 1.0 and 2.5 mM of sample K+. μPADs using distance-based methods were also capable of differentiating between 4.3 and 6.9 mM K+ in human serum samples. Distance-based methods required no digital analysis, electronic hardware, or pumps; any steps required for quantification could be carried out using the naked eye.

  12. Improvements in GRACE Gravity Field Determination through Stochastic Observation Modeling

    NASA Astrophysics Data System (ADS)

    McCullough, C.; Bettadpur, S. V.

    2016-12-01

    Current unconstrained Release 05 GRACE gravity field solutions from the Center for Space Research (CSR RL05) assume random observation errors following an independent multivariate Gaussian distribution. This modeling of observations, a simplifying assumption, fails to account for long period, correlated errors arising from inadequacies in the background force models. Fully modeling the errors inherent in the observation equations, through the use of a full observation covariance (modeling colored noise), enables optimal combination of GPS and inter-satellite range-rate data and obviates the need for estimating kinematic empirical parameters during the solution process. Most importantly, fully modeling the observation errors drastically improves formal error estimates of the spherical harmonic coefficients, potentially enabling improved uncertainty quantification of scientific results derived from GRACE and optimizing combinations of GRACE with independent data sets and a priori constraints.
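    A toy sketch of the modeling change described above: replacing the white-noise (identity-covariance) least-squares estimator with a generalized least-squares estimator built on a full observation covariance; the AR(1)-style covariance and problem dimensions are illustrative stand-ins for the long-period correlated errors:

        # Sketch: ordinary vs. generalized least squares when observation errors are correlated.
        import numpy as np

        rng = np.random.default_rng(2)
        m, n = 400, 6
        A = rng.normal(size=(m, n))                        # toy design matrix
        x_true = rng.normal(size=n)

        # Correlated (AR(1)-like) observation noise with full covariance C
        rho, sig = 0.9, 1.0
        C = sig**2 * rho ** np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
        L = np.linalg.cholesky(C)
        y = A @ x_true + L @ rng.normal(size=m)

        # White-noise assumption (ordinary least squares)
        x_ols = np.linalg.lstsq(A, y, rcond=None)[0]

        # Full-covariance (generalized least squares): x = (A^T C^-1 A)^-1 A^T C^-1 y
        Ci = np.linalg.inv(C)
        N = A.T @ Ci @ A
        x_gls = np.linalg.solve(N, A.T @ Ci @ y)
        formal_cov = np.linalg.inv(N)                      # more realistic formal errors

        print("OLS error :", np.linalg.norm(x_ols - x_true))
        print("GLS error :", np.linalg.norm(x_gls - x_true))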

  13. Adobe photoshop quantification (PSQ) rather than point-counting: A rapid and precise method for quantifying rock textural data and porosities

    NASA Astrophysics Data System (ADS)

    Zhang, Xuefeng; Liu, Bo; Wang, Jieqiong; Zhang, Zhe; Shi, Kaibo; Wu, Shuanglin

    2014-08-01

    Commonly used petrological quantification methods are visual estimation, point counting, and image analysis. In this article, however, an Adobe Photoshop-based analysis method (PSQ) is recommended for quantifying rock textural data and porosities. The Adobe Photoshop system provides versatile tools for selecting an area of interest, and the pixel count of a selection can be read and used to calculate its area percentage. Adobe Photoshop can therefore be used to rapidly quantify textural components, such as the content of grains, cements, and porosities, including total porosity and porosities of different genetic types. This method was named Adobe Photoshop Quantification (PSQ). The workflow of the PSQ method is introduced using oolitic dolomite samples from the Triassic Feixianguan Formation, northeastern Sichuan Basin, China, as an example. The method was tested by comparison with Folk's and Shvetsov's "standard" diagrams. In both cases, there is close agreement between the "standard" percentages and those determined by the PSQ method, with small counting and operator errors, small standard deviations, and high confidence levels. The porosities quantified by PSQ were also evaluated against those determined by the whole-rock helium gas expansion method to test specimen errors. Results show that the porosities quantified by PSQ correlate well with those determined by the conventional helium gas expansion method. The generally small discrepancies (mostly ranging from -3% to 3%) are caused by microporosity, which leads to a systematic underestimation of about 2%, and/or by macroporosity, which causes underestimation or overestimation in different cases. Adobe Photoshop can thus be used to quantify rock textural components and porosities; the method has been shown to be precise and accurate, and it is time-saving compared with conventional methods.
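    The pixel-counting arithmetic behind PSQ is straightforward; a sketch using a labeled mask in place of Photoshop selections (the label image and label values are hypothetical):

        # Sketch: area percentages from pixel counts of labeled regions (PSQ-style arithmetic).
        import numpy as np

        # Hypothetical label image: 0 = grains, 1 = cement, 2 = pores
        labels = np.random.default_rng(3).integers(0, 3, size=(1024, 1024))

        total = labels.size
        for name, value in [("grains", 0), ("cement", 1), ("porosity", 2)]:
            pct = 100.0 * np.count_nonzero(labels == value) / total
            print(f"{name}: {pct:.1f}%")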

  14. Sensitive determination of THC and main metabolites in human plasma by means of microextraction in packed sorbent and gas chromatography-tandem mass spectrometry.

    PubMed

    Rosado, T; Fernandes, L; Barroso, M; Gallardo, E

    2017-02-01

    Cannabis is one of the most widely available and consumed illicit drugs in the world, and its identification and quantification in biological specimens can be a challenge given its low concentrations in body fluids. The present work describes a fast and fully validated procedure for the simultaneous detection and quantification of Δ9-tetrahydrocannabinol (Δ9-THC) and its two main metabolites, 11-hydroxy-Δ9-tetrahydrocannabinol (11-OH-THC) and 11-nor-9-carboxy-Δ9-tetrahydrocannabinol (THC-COOH), in plasma samples using microextraction by packed sorbent (MEPS) and gas chromatography-tandem mass spectrometry (GC-MS/MS). A small plasma volume (0.25 mL), pre-diluted (1:20), was extracted with MEPS M1 sorbent as follows: conditioning (4 cycles of 250 μL methanol and 4 cycles of 250 μL 0.1% formic acid in water); sample load (26 cycles of 250 μL); wash (100 μL of 3% acetic acid in water followed by 100 μL of 5% methanol in water); and elution (6 cycles of 100 μL of 10% ammonium hydroxide in methanol). The procedure allowed the quantification of all analytes in the range of 0.1-30 ng/mL. Recoveries ranged from 53 to 78% (THC), 57 to 66% (11-OH-THC) and 62 to 65% (THC-COOH), allowing the limits of detection and quantification to be set at 0.1 ng/mL for all compounds. Intra-day precision and accuracy revealed coefficients of variation (CVs) lower than 10% at the studied concentrations, with a mean relative error within ±9%, while inter-day precision and accuracy showed CVs lower than 15% for all analytes at the tested concentrations, with an inaccuracy within ±8%. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Preclinical evaluation of parametric image reconstruction of [18F]FMISO PET: correlation with ex vivo immunohistochemistry

    NASA Astrophysics Data System (ADS)

    Cheng, Xiaoyin; Bayer, Christine; Maftei, Constantin-Alin; Astner, Sabrina T.; Vaupel, Peter; Ziegler, Sibylle I.; Shi, Kuangyu

    2014-01-01

    Compared to indirect methods, direct parametric image reconstruction (PIR) has the advantage of high quality and low statistical errors. However, it is not yet clear if this improvement in quality is beneficial for physiological quantification. This study aimed to evaluate direct PIR for the quantification of tumor hypoxia using the hypoxic fraction (HF) assessed from immunohistological data as a physiological reference. Sixteen mice with xenografted human squamous cell carcinomas were scanned with dynamic [18F]FMISO PET. Afterward, tumors were sliced and stained with H&E and the hypoxia marker pimonidazole. The hypoxic signal was segmented using k-means clustering and HF was specified as the ratio of the hypoxic area over the viable tumor area. The parametric Patlak slope images were obtained by indirect voxel-wise modeling on reconstructed images using filtered back projection and ordered-subset expectation maximization (OSEM) and by direct PIR (e.g., parametric-OSEM, POSEM). The mean and maximum Patlak slopes of the tumor area were investigated and compared with HF. POSEM resulted in generally higher correlations between slope and HF among the investigated methods. A strategy for the delineation of the hypoxic tumor volume based on thresholding parametric images at half maximum of the slope is recommended based on the results of this study.
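    The Patlak slope referenced above comes from a simple graphical analysis. The sketch below is an indirect (post-reconstruction) region-of-interest version under assumed time-activity curves, not the direct parametric reconstruction evaluated in the paper:

        # Sketch: Patlak graphical analysis for one ROI time-activity curve (indirect method).
        # x(t) = integral of plasma input / Cp(t), y(t) = Ct(t) / Cp(t); the late-frame slope ~ Ki.
        import numpy as np

        t = np.array([0.5, 1, 2, 3, 5, 10, 15, 20, 30, 40, 50, 60], float)   # frame mid-times, min (placeholders)
        cp = 10.0 * np.exp(-0.15 * t) + 0.5                                   # assumed plasma input function
        int_cp = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (cp[1:] + cp[:-1]))))  # trapezoidal integral
        ct = 0.8 * cp + 0.02 * int_cp                      # synthetic tissue curve with true Ki = 0.02 /min

        x, y = int_cp / cp, ct / cp                        # Patlak coordinates
        late = t >= 15                                     # assumed steady-state (linear) portion
        Ki, intercept = np.polyfit(x[late], y[late], 1)
        print(f"Patlak slope Ki = {Ki:.3f} /min")          # recovers ~0.020 for this synthetic curve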

  16. Multiplex quantification of protein toxins in human biofluids and food matrices using immunoextraction and high-resolution targeted mass spectrometry.

    PubMed

    Dupré, Mathieu; Gilquin, Benoit; Fenaille, François; Feraudet-Tarisse, Cécile; Dano, Julie; Ferro, Myriam; Simon, Stéphanie; Junot, Christophe; Brun, Virginie; Becher, François

    2015-08-18

    The development of rapid methods for unambiguous identification and precise quantification of protein toxins in various matrices is essential for public health surveillance. Nowadays, analytical strategies classically rely on sensitive immunological assays, but mass spectrometry constitutes an attractive complementary approach thanks to direct measurement and protein characterization ability. We developed here an innovative multiplex immuno-LC-MS/MS method for the simultaneous and specific quantification of the three potential biological warfare agents, ricin, staphylococcal enterotoxin B, and epsilon toxin, in complex human biofluids and food matrices. At least 7 peptides were targeted for each toxin (43 peptides in total) with a quadrupole-Orbitrap high-resolution instrument for exquisite detection specificity. Quantification was performed using stable isotope-labeled toxin standards spiked early in the sample. Lower limits of quantification were determined at or close to 1 ng·mL⁻¹. The whole process was successfully applied to the quantitative analysis of toxins in complex samples such as milk, human urine, and plasma. Finally, we report new data on toxin stability with no evidence of toxin degradation in milk in a 48 h time frame, allowing relevant quantitative toxin analysis for samples collected in this time range.

  17. Uncertainty quantification and propagation in dynamic models using ambient vibration measurements, application to a 10-story building

    NASA Astrophysics Data System (ADS)

    Behmanesh, Iman; Yousefianmoghadam, Seyedsina; Nozari, Amin; Moaveni, Babak; Stavridis, Andreas

    2018-07-01

    This paper investigates the application of Hierarchical Bayesian model updating for uncertainty quantification and response prediction of civil structures. In this updating framework, structural parameters of an initial finite element (FE) model (e.g., stiffness or mass) are calibrated by minimizing error functions between the identified modal parameters and the corresponding parameters of the model. These error functions are assumed to have Gaussian probability distributions with unknown parameters to be determined. The estimated parameters of the error functions represent the uncertainty of the calibrated model in predicting the building's response (modal parameters here). The focus of this paper is to answer whether the model uncertainties quantified from dynamic measurements at the building's reference/calibration state can be used to improve the model prediction accuracies at a different structural state, e.g., the damaged structure. Also, the effects of prediction error bias on the uncertainty of the predicted values are studied. The test structure considered here is a ten-story concrete building located in Utica, NY. The modal parameters of the building at its reference state are identified from ambient vibration data and used to calibrate parameters of the initial FE model as well as the error functions. Before demolishing the building, six of its exterior walls were removed, and ambient vibration measurements were also collected from the structure after the wall removal. These data are not used to calibrate the model; they are only used to assess the predicted results. The model updating framework proposed in this paper is applied to estimate the modal parameters of the building at its reference state as well as two damaged states: moderate damage (removal of four walls) and severe damage (removal of six walls). Good agreement is observed between the model-predicted modal parameters and those identified from vibration tests. Moreover, it is shown that including prediction error bias in the updating process instead of the commonly used zero-mean error functions can significantly reduce the prediction uncertainties.

  18. Aniseikonia quantification: error rate of rule of thumb estimation.

    PubMed

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

    To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measured with the classical Essilor Projection Space Eikonometer. Study 2: a parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: For Group 1 (24 cases): 5 (or 21%) were equal (i.e., 1% or less difference); 16 (or 67%) were greater (more than 1% different); and 3 (13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases): 45 (or 73%) were equal (1% or less); 10 (or 16%) were greater; and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: In Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation are disturbingly large. This problem is greatly magnified by the time, effort, and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.

  19. AQuA: An Automated Quantification Algorithm for High-Throughput NMR-Based Metabolomics and Its Application in Human Plasma.

    PubMed

    Röhnisch, Hanna E; Eriksson, Jan; Müllner, Elisabeth; Agback, Peter; Sandström, Corine; Moazzami, Ali A

    2018-02-06

    A key limiting step for high-throughput NMR-based metabolomics is the lack of rapid and accurate tools for absolute quantification of many metabolites. We developed, implemented, and evaluated an algorithm, AQuA (Automated Quantification Algorithm), for targeted metabolite quantification from complex 1H NMR spectra. AQuA operates based on spectral data extracted from a library consisting of one standard calibration spectrum for each metabolite. It uses one preselected NMR signal per metabolite for determining absolute concentrations and does so by effectively accounting for interferences caused by other metabolites. AQuA was implemented and evaluated using experimental NMR spectra from human plasma. The accuracy of AQuA was tested and confirmed in comparison with a manual spectral fitting approach using the ChenomX software, in which 61 out of 67 metabolites quantified in 30 human plasma spectra showed a goodness-of-fit (r2) close to or exceeding 0.9 between the two approaches. In addition, three quality indicators generated by AQuA, namely, occurrence, interference, and positional deviation, were studied. These quality indicators permit evaluation of the results each time the algorithm is operated. The efficiency was tested and confirmed by implementing AQuA for quantification of 67 metabolites in a large data set comprising 1342 experimental spectra from human plasma, in which the whole computation took less than 1 s.
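    The interference-accounting idea behind this kind of targeted quantification can be sketched as a small linear system: each preselected signal is modeled as its own metabolite's response plus known per-unit contributions from overlapping metabolites. The response matrix and intensities below are hypothetical and do not come from AQuA's library.

```python
# Minimal sketch of interference-corrected targeted quantification (hypothetical values).
import numpy as np

# response[i, j]: intensity contributed to preselected signal i by 1 mM of metabolite j,
# as would be derived from single-metabolite calibration spectra.
response = np.array([
    [1.00, 0.05, 0.00],   # signal chosen for metabolite A (slight overlap with B)
    [0.02, 0.80, 0.10],   # signal chosen for metabolite B (overlaps with A and C)
    [0.00, 0.00, 0.95],   # signal chosen for metabolite C (interference-free)
])

observed = np.array([1.23, 0.95, 1.90])        # measured signal intensities

concentrations = np.linalg.solve(response, observed)
print(dict(zip("ABC", np.round(concentrations, 3))))   # concentration estimates, mM
```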

  20. Droplet Digital™ PCR Next-Generation Sequencing Library QC Assay.

    PubMed

    Heredia, Nicholas J

    2018-01-01

    Digital PCR is a valuable tool to quantify next-generation sequencing (NGS) libraries precisely and accurately. Accurately quantifying NGS libraries enables accurate loading of the libraries onto the sequencer and thus improves sequencing performance by reducing under- and overloading errors. Accurate quantification also benefits users by enabling uniform loading of indexed/barcoded libraries, which in turn greatly improves sequencing uniformity of the indexed/barcoded samples. The advantages gained by employing the Droplet Digital PCR (ddPCR™) library QC assay include precise and accurate quantification in addition to size quality assessment, enabling users to QC their sequencing libraries with confidence.

  1. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    PubMed

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models. Inaccuracy of influenza forecasts based on dynamical models is partly due to nonlinear error growth. Here the authors address the error structure of a compartmental influenza model, and develop a new improved forecast approach combining dynamical error correction and statistical filtering techniques.
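    The "error breeding" diagnostic mentioned above can be sketched on a toy compartmental model: a perturbed run is integrated alongside a control run and periodically rescaled toward it, so that the fastest-growing error structure emerges. The SIR model, step sizes, and rescaling interval below are assumptions for illustration, not the authors' forecasting framework.

```python
# Toy error-breeding sketch on a simple SIR model (illustrative parameters only).
import numpy as np

def sir_step(state, beta=0.6, gamma=0.2, dt=0.1):
    S, I = state
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    return np.array([S + dt * dS, I + dt * dI])

control = np.array([0.99, 0.01])
bred = control + np.array([1e-3, -1e-3])        # small initial perturbation
amplitude = np.linalg.norm(bred - control)

for step in range(1, 501):
    control = sir_step(control)
    bred = sir_step(bred)
    if step % 50 == 0:                          # periodically rescale the perturbation
        diff = bred - control
        bred = control + amplitude * diff / np.linalg.norm(diff)

# The direction of the bred vector approximates the dominant structural error
# growth direction, which a forecast system could counteract before filtering.
print((bred - control) / np.linalg.norm(bred - control))
```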

  2. Generation of digitized microfluidic filling flow by vent control.

    PubMed

    Yoon, Junghyo; Lee, Eundoo; Kim, Jaehoon; Han, Sewoon; Chung, Seok

    2017-06-15

    Quantitative microfluidic point-of-care testing has been translated into clinical applications to support a prompt decision on patient treatment. A nanointerstice-driven filling technique has been developed to realize the fast and robust filling of microfluidic channels with liquid samples, but it has failed to provide a consistent filling time owing to the wide variation in liquid viscosity, resulting in an increase in quantification errors. There is a strong demand for simple and quick flow control to ensure accurate quantification, without a serious increase in system complexity. A new control mechanism employing two-beam refraction and one solenoid valve was developed and found to successfully generate digitized filling flow, completely free from errors due to changes in viscosity. The validity of digitized filling flow was evaluated by the immunoassay, using liquids with a wide range of viscosity. This digitized microfluidic filling flow is a novel approach that could be applied in conventional microfluidic point-of-care testing. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. BATMAN--an R package for the automated quantification of metabolites from nuclear magnetic resonance spectra using a Bayesian model.

    PubMed

    Hao, Jie; Astle, William; De Iorio, Maria; Ebbels, Timothy M D

    2012-08-01

    Nuclear Magnetic Resonance (NMR) spectra are widely used in metabolomics to obtain metabolite profiles in complex biological mixtures. Common methods used to assign and estimate concentrations of metabolites involve either an expert manual peak fitting or extra pre-processing steps, such as peak alignment and binning. Peak fitting is very time consuming and is subject to human error. Conversely, alignment and binning can introduce artefacts and limit immediate biological interpretation of models. We present the Bayesian automated metabolite analyser for NMR spectra (BATMAN), an R package that deconvolutes peaks from one-dimensional NMR spectra, automatically assigns them to specific metabolites from a target list and obtains concentration estimates. The Bayesian model incorporates information on characteristic peak patterns of metabolites and is able to account for shifts in the position of peaks commonly seen in NMR spectra of biological samples. It applies a Markov chain Monte Carlo algorithm to sample from a joint posterior distribution of the model parameters and obtains concentration estimates with reduced error compared with conventional numerical integration and comparable to manual deconvolution by experienced spectroscopists. http://www1.imperial.ac.uk/medicine/people/t.ebbels/ t.ebbels@imperial.ac.uk.

  4. Human Error: A Concept Analysis

    NASA Technical Reports Server (NTRS)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  5. Quantification of error associated with stormwater and wastewater flow measurement devices

    EPA Science Inventory

    A novel flow testbed has been designed to evaluate the performance of flumes as flow measurement devices. The newly constructed testbed produces both steady and unsteady flows ranging from 10 to 1500 gpm. Two types of flumes (Parshall and trapezoidal) are evaluated under differen...

  6. Physics-based Simulation of Human Posture Using 3D Whole Body Scanning Technology for Astronaut Space Suit Evaluation

    NASA Technical Reports Server (NTRS)

    Kim, Kyu-Jung

    2005-01-01

    Over the past few years high precision three-dimensional (3D) full body laser scanners have been developed to be used as a powerful anthropometry tool for quantification of the morphology of the human body. The full body scanner can quickly extract body characteristics in non-contact fashion. It is required for the Anthropometry and Biomechanics Facility (ABF) to have capabilities for kinematics simulation of a digital human at various postures, whereas the laser scanner only allows capturing a single static posture at a time. During this summer fellowship period, a theoretical study was conducted to estimate an arbitrary posture from a series of example postures through finite element (FE) approximation; it found that four-point isoparametric FE approximation would result in reasonable maximum position errors of less than 5%. Subsequent pilot scan experiments demonstrated that a bead marker with a nominal size of 6 mm could be used as a marker for digitizing 3-D coordinates of anatomical landmarks for further kinematic analysis. Two sessions of human subject testing were conducted for reconstruction of an arbitrary posture from a set of example postures for each joint motion for the forearm/hand complex and the whole upper extremity.

  7. HPLC/Fluorometric Detection of Carvedilol in Real Human Plasma Samples Using Liquid-Liquid Extraction.

    PubMed

    Yilmaz, Bilal; Arslan, Sakir

    2016-03-01

    A simple, rapid and sensitive high-performance liquid chromatography (HPLC) method has been developed to quantify carvedilol in human plasma using an isocratic system with fluorescence detection. The method included a single-step liquid-liquid extraction with diethylether and ethylacetate mixture (3 : 1, v/v). HPLC separation was carried out by reversed-phase chromatography with a mobile phase composed of 20 mM phosphate buffer (pH 7)-acetonitrile (65 : 35, v/v), pumped at a flow rate of 1.0 mL/min. Fluorescence detection was performed at 240 nm (excitation) and 330 nm (emission). The calibration curve for carvedilol was linear from 10 to 250 ng/mL. Intra- and interday precision values for carvedilol in human plasma were <4.93%, and accuracy (relative error) was better than 4.71%. The analytical recovery of carvedilol from human plasma averaged out to 91.8%. The limits of detection and quantification of carvedilol were 3.0 and 10 ng/mL, respectively. Also, the method was successfully applied to three patients with hypertension who had been given an oral tablet of 25 mg carvedilol. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Quantification of wall shear stress in large blood vessels using Lagrangian interpolation functions with cine phase-contrast magnetic resonance imaging.

    PubMed

    Cheng, Christopher P; Parker, David; Taylor, Charles A

    2002-09-01

    Arterial wall shear stress is hypothesized to be an important factor in the localization of atherosclerosis. Current methods to compute wall shear stress from magnetic resonance imaging (MRI) data do not account for flow profiles characteristic of pulsatile flow in noncircular vessel lumens. We describe a method to quantify wall shear stress in large blood vessels by differentiating velocity interpolation functions defined using cine phase-contrast MRI data on a band of elements in the neighborhood of the vessel wall. Validation was performed with software phantoms and an in vitro flow phantom. At an image resolution corresponding to in vivo imaging data of the human abdominal aorta, time-averaged, spatially averaged wall shear stress for steady and pulsatile flow were determined to be within 16% and 23% of the analytic solution, respectively. These errors were reduced to 5% and 8% with doubling in image resolution. For the pulsatile software phantom, the oscillation in shear stress was predicted to within 5%. The mean absolute error of circumferentially resolved shear stress for the nonaxisymmetric phantom decreased from 28% to 15% with a doubling in image resolution. The irregularly shaped phantom and in vitro investigation demonstrated convergence of the calculated values with increased image resolution. We quantified the shear stress at the supraceliac and infrarenal regions of a human abdominal aorta to be 3.4 and 2.3 dyn/cm2, respectively.
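    The core calculation described here, shear stress from the wall-normal velocity gradient, can be sketched with a few near-wall velocity samples: fit an interpolation function through them and differentiate it at the wall. The viscosity, distances, and velocities below are illustrative placeholders, not values from the cine PC-MRI protocol.

```python
# Minimal wall shear stress sketch: tau_w = mu * du/dn evaluated at the wall (n = 0).
import numpy as np

mu = 0.004                                       # blood viscosity, Pa*s (~4 cP)
n = np.array([0.0, 0.5e-3, 1.0e-3, 1.5e-3])      # distance from the wall, m
u = np.array([0.0, 0.05, 0.09, 0.12])            # velocity component, m/s

coeffs = np.polyfit(n, u, deg=2)                 # u(n) ~ a*n^2 + b*n + c
dudn_wall = np.polyval(np.polyder(coeffs), 0.0)  # derivative of the fit at n = 0

tau_wall = mu * dudn_wall                        # Pa; 1 Pa = 10 dyn/cm^2
print(f"wall shear stress: {tau_wall * 10:.2f} dyn/cm^2")
```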

  9. Factors controlling volume errors through 2D gully erosion assessment: guidelines for optimal survey design

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; Pérez, Rafael

    2017-04-01

    The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches have been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in the literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, the application of methods based on 2D approaches can be the most cost-effective approach in many situations, such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles in order to 1) contribute to a better understanding of the drivers and magnitude of uncertainty in 2D gully erosion surveys and 2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and significant set of gully reach configurations to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment. For this purpose, a simulation algorithm in Matlab® code was written, involving the following stages:
    - Generation of synthetic gully area profiles with different degrees of complexity (characterized by the cross-section variability)
    - Simulation of field measurements characterised by a survey intensity and the precision of the measurement method
    - Quantification of the volume error uncertainty as a function of the key factors
    In this communication we present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey densities required to achieve a certain accuracy given the cross-sectional variability of a gully and the measurement method applied. Reference: Casalí, J., Loizu, J., Campo, M.A., De Santisteban, L.M., Alvarez-Mozos, J., 2006. Accuracy of methods for field assessment of rill and ephemeral gully erosion. Catena 67, 128-138. doi:10.1016/j.catena.2006.03.005
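    A stripped-down version of the stochastic experiment described above can be sketched in a few lines: generate a synthetic cross-sectional area profile, "survey" it at a chosen number of stations with measurement noise, and summarise the volume error over many realisations. The profile generator, noise level, and survey densities below are assumptions, not the authors' Matlab implementation.

```python
# Monte Carlo sketch of 2D gully volume error vs. survey intensity (illustrative parameters).
import numpy as np

rng = np.random.default_rng(1)
reach_length = 100.0                                  # m
x_true = np.linspace(0.0, reach_length, 1001)

def volume_error_percentiles(n_sections, rel_noise=0.05, n_trials=2000):
    errors = []
    for _ in range(n_trials):
        # Synthetic cross-sectional area profile (m^2) with some variability.
        area = (1.0 + 0.4 * np.sin(2 * np.pi * x_true / 25.0)
                + 0.1 * rng.standard_normal(x_true.size).cumsum() / 30.0)
        area = np.clip(area, 0.05, None)
        true_volume = np.trapz(area, x_true)

        # Simulated survey: areas measured at n_sections stations with relative error.
        xs = np.linspace(0.0, reach_length, n_sections)
        measured = np.interp(xs, x_true, area) * (1 + rel_noise * rng.standard_normal(n_sections))
        est_volume = np.trapz(measured, xs)
        errors.append(100.0 * (est_volume - true_volume) / true_volume)
    return np.percentile(errors, [5, 50, 95])

for n in (5, 10, 20, 40):
    print(n, "cross-sections -> volume error percentiles (%):",
          np.round(volume_error_percentiles(n), 2))
```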

  10. EasyLCMS: an asynchronous web application for the automated quantification of LC-MS data

    PubMed Central

    2012-01-01

    Background Downstream applications in metabolomics, as well as mathematical modelling, require data in a quantitative format, which may also necessitate the automated and simultaneous quantification of numerous metabolites. Although numerous applications have been previously developed for metabolomics data handling, automated calibration and calculation of the concentrations in terms of μmol have not been carried out. Moreover, most of the metabolomics applications are designed for GC-MS, and would not be suitable for LC-MS, since in LC, the deviation in the retention time is not linear, which is not taken into account in these applications. Moreover, only a few are web-based applications, which could improve stand-alone software in terms of compatibility, sharing capabilities and hardware requirements, even though a strong bandwidth is required. Furthermore, none of these incorporate asynchronous communication to allow real-time interaction with pre-processed results. Findings Here, we present EasyLCMS (http://www.easylcms.es/), a new application for automated quantification which was validated using more than 1000 concentration comparisons in real samples with manual operation. The results showed that only 1% of the quantifications presented a relative error higher than 15%. Using clustering analysis, the metabolites with the highest relative error distributions were identified and studied to solve recurrent mistakes. Conclusions EasyLCMS is a new web application designed to quantify numerous metabolites, simultaneously integrating LC distortions and asynchronous web technology to present a visual interface with dynamic interaction which allows checking and correction of LC-MS raw data pre-processing results. Moreover, quantified data obtained with EasyLCMS are fully compatible with numerous downstream applications, as well as for mathematical modelling in the systems biology field. PMID:22884039

  11. In-line monitoring of cocrystallization process and quantification of carbamazepine-nicotinamide cocrystal using Raman spectroscopy and chemometric tools.

    PubMed

    Soares, Frederico L F; Carneiro, Renato L

    2017-06-05

    A cocrystallization process may involve several molecular species, which are generally solid under ambient conditions. Thus, accurate monitoring of different components that might appear during the reaction is necessary, as well as quantification of the final product. This work reports for the first time the synthesis of carbamazepine-nicotinamide cocrystal in aqueous media with a full conversion. The reactions were monitored by Raman spectroscopy coupled with Multivariate Curve Resolution - Alternating Least Squares, and the quantification of the final product among its coformers was performed using Raman spectroscopy and Partial Least Squares regression. The slurry reaction was carried out under four different conditions: room temperature, 40°C, 60°C and 80°C. The slurry reaction at 80°C enabled a full conversion of initial substrates into the cocrystal form, using water as solvent for a greener method. The employment of MCR-ALS coupled with Raman spectroscopy made it possible to observe the main steps of the reactions, such as drug dissolution, nucleation and crystallization of the cocrystal. The PLS models gave mean errors of cross validation around 2.0 (% wt/wt), and errors of validation between 2.5 and 8.2 (% wt/wt) for all components. These were good results since the spectra of cocrystals and the physical mixture of the coformers present some similar peaks. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. In-line monitoring of cocrystallization process and quantification of carbamazepine-nicotinamide cocrystal using Raman spectroscopy and chemometric tools

    NASA Astrophysics Data System (ADS)

    Soares, Frederico L. F.; Carneiro, Renato L.

    2017-06-01

    A cocrystallization process may involve several molecular species, which are generally solid under ambient conditions. Thus, accurate monitoring of different components that might appear during the reaction is necessary, as well as quantification of the final product. This work reports for the first time the synthesis of carbamazepine-nicotinamide cocrystal in aqueous media with a full conversion. The reactions were monitored by Raman spectroscopy coupled with Multivariate Curve Resolution - Alternating Least Squares, and the quantification of the final product among its coformers was performed using Raman spectroscopy and Partial Least Squares regression. The slurry reaction was carried out under four different conditions: room temperature, 40 °C, 60 °C and 80 °C. The slurry reaction at 80 °C enabled a full conversion of initial substrates into the cocrystal form, using water as solvent for a greener method. The employment of MCR-ALS coupled with Raman spectroscopy made it possible to observe the main steps of the reactions, such as drug dissolution, nucleation and crystallization of the cocrystal. The PLS models gave mean errors of cross validation around 2.0 (% wt/wt), and errors of validation between 2.5 and 8.2 (% wt/wt) for all components. These were good results since the spectra of cocrystals and the physical mixture of the coformers present some similar peaks.
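    The MCR-ALS step used in these two records can be sketched generically as alternating least squares on D ≈ C·S with non-negativity enforced by clipping. The simulated spectra, two-component system, and initialisation below are illustrative and do not reproduce the constraints used in the published work.

```python
# Generic MCR-ALS sketch on simulated two-component Raman-like data.
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: 50 reaction time points x 200 spectral channels, 2 components.
t = np.linspace(0, 1, 50)[:, None]
C_true = np.hstack([1 - t, t])                        # reactant converts to cocrystal
axis = np.linspace(0, 1, 200)
S_true = np.vstack([np.exp(-((axis - 0.3) / 0.05) ** 2),
                    np.exp(-((axis - 0.6) / 0.05) ** 2)])
D = C_true @ S_true + 0.01 * rng.standard_normal((50, 200))

# Alternating least squares with non-negativity clipping.
S = np.abs(rng.standard_normal(S_true.shape))          # random initial spectra
for _ in range(100):
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)

relative_residual = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
print(f"relative residual after ALS: {relative_residual:.3f}")
```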

  13. Multi-analyte quantification in bioprocesses by Fourier-transform-infrared spectroscopy by partial least squares regression and multivariate curve resolution.

    PubMed

    Koch, Cosima; Posch, Andreas E; Goicoechea, Héctor C; Herwig, Christoph; Lendl, Bernhard

    2014-01-07

    This paper presents the quantification of Penicillin V and phenoxyacetic acid, a precursor, inline during Penicillium chrysogenum fermentations by FTIR spectroscopy and partial least squares (PLS) regression and multivariate curve resolution - alternating least squares (MCR-ALS). First, the applicability of an attenuated total reflection FTIR fiber optic probe was assessed offline by measuring standards of the analytes of interest and investigating matrix effects of the fermentation broth. Then measurements were performed inline during four fed-batch fermentations with online HPLC for the determination of Penicillin V and phenoxyacetic acid as reference analysis. PLS and MCR-ALS models were built using these data and validated by comparison of single analyte spectra with the selectivity ratio of the PLS models and the extracted spectral traces of the MCR-ALS models, respectively. The achieved root mean square errors of cross-validation for the PLS regressions were 0.22 g L(-1) for Penicillin V and 0.32 g L(-1) for phenoxyacetic acid and the root mean square errors of prediction for MCR-ALS were 0.23 g L(-1) for Penicillin V and 0.15 g L(-1) for phenoxyacetic acid. A general work-flow for building and assessing chemometric regression models for the quantification of multiple analytes in bioprocesses by FTIR spectroscopy is given. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
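    The reported cross-validation errors correspond to the standard RMSECV figure of merit for a PLS calibration. A minimal sketch of how such a value is computed is given below; the simulated spectra and concentrations are placeholders for the FTIR fermentation data and the sketch assumes scikit-learn is available.

```python
# RMSECV sketch for a PLS calibration on simulated single-analyte spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n_samples, n_channels = 80, 300
conc = rng.uniform(0.0, 5.0, n_samples)                       # g/L, simulated reference values
band = np.exp(-((np.arange(n_channels) - 150) / 20.0) ** 2)   # pure-component absorption band
X = np.outer(conc, band) + 0.02 * rng.standard_normal((n_samples, n_channels))

pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, X, conc, cv=10).ravel()         # 10-fold cross-validated predictions
rmsecv = np.sqrt(np.mean((pred - conc) ** 2))
print(f"RMSECV: {rmsecv:.3f} g/L")
```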

  14. Recurrence plots and recurrence quantification analysis of human motion data

    NASA Astrophysics Data System (ADS)

    Josiński, Henryk; Michalczuk, Agnieszka; Świtoński, Adam; Szczesna, Agnieszka; Wojciechowski, Konrad

    2016-06-01

    The authors present exemplary application of recurrence plots, cross recurrence plots and recurrence quantification analysis for the purpose of exploration of experimental time series describing selected aspects of human motion. Time series were extracted from treadmill gait sequences which were recorded in the Human Motion Laboratory (HML) of the Polish-Japanese Academy of Information Technology in Bytom, Poland by means of the Vicon system. Analysis was focused on the time series representing movements of hip, knee, ankle and wrist joints in the sagittal plane.
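    The basic objects of this analysis, a recurrence plot and a recurrence-rate measure, can be sketched for a single joint-angle time series as follows; the simulated signal, embedding parameters, and threshold are illustrative, not the Vicon gait data used in the study.

```python
# Recurrence plot and recurrence rate for a simulated joint-angle signal.
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 500)
angle = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)

# Time-delay embedding (dimension 2, delay 10 samples).
delay, dim = 10, 2
emb = np.column_stack([angle[i * delay: angle.size - (dim - 1 - i) * delay]
                       for i in range(dim)])

# Recurrence matrix: pairs of embedded points closer than eps are "recurrent".
dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
eps = 0.1 * dists.max()
recurrence = dists < eps                    # boolean recurrence plot

recurrence_rate = recurrence.mean()         # simplest recurrence quantification measure
print(f"recurrence rate: {recurrence_rate:.3f}")
```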

  15. Deep-Dive Targeted Quantification for Ultrasensitive Analysis of Proteins in Nondepleted Human Blood Plasma/Serum and Tissues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, Song; Shi, Tujin; Fillmore, Thomas L.

    Mass spectrometry-based targeted proteomics (e.g., selected reaction monitoring, SRM) is emerging as an attractive alternative to immunoassays for protein quantification. Recently we have made significant progress in SRM sensitivity for enabling quantification of low ng/mL to sub-ng/mL level proteins in nondepleted human blood plasma/serum without affinity enrichment. However, precise quantification of extremely low abundant but biologically important proteins (e.g., ≤100 pg/mL in blood plasma/serum) using targeted proteomics approaches still remains challenging. To address this need, we have developed an antibody-independent Deep-Dive SRM (DD-SRM) approach that capitalizes on multidimensional high-resolution reversed-phase liquid chromatography (LC) separation for target peptide enrichment combined with precise selection of target peptide fractions of interest, significantly improving SRM sensitivity by ~5 orders of magnitude when compared to conventional LC-SRM. Application of DD-SRM to human serum and tissue has been demonstrated to enable precise quantification of endogenous proteins at ~10 pg/mL level in nondepleted serum and at <10 copies per cell level in tissue. Thus, DD-SRM holds great promise for precisely measuring extremely low abundance proteins or protein modifications, especially when high-quality antibody is not available.

  16. Targeted quantification of low ng/mL level proteins in human serum without immunoaffinity depletion

    PubMed Central

    Shi, Tujin; Sun, Xuefei; Gao, Yuqian; Fillmore, Thomas L.; Schepmoes, Athena A.; Zhao, Rui; He, Jintang; Moore, Ronald J.; Kagan, Jacob; Rodland, Karin D.; Liu, Tao; Liu, Alvin Y.; Smith, Richard D.; Tang, Keqi; Camp, David G.; Qian, Wei-Jun

    2013-01-01

    We recently reported an antibody-free targeted protein quantification strategy, termed high-pressure, high-resolution separations with intelligent selection and multiplexing (PRISM) for achieving significantly enhanced sensitivity using selected reaction monitoring (SRM) mass spectrometry. Integrating PRISM with front-end IgY14 immunoaffinity depletion, sensitive detection of targeted proteins at 50–100 pg/mL levels in human blood plasma/serum was demonstrated. However, immunoaffinity depletion is often associated with undesired losses of target proteins of interest. Herein we report further evaluation of PRISM-SRM quantification of low-abundance serum proteins without immunoaffinity depletion. Limits of quantification (LOQ) at low ng/mL levels with a median coefficient of variation (CV) of ~12% were achieved for proteins spiked into human female serum. PRISM-SRM provided >100-fold improvement in the LOQ when compared to conventional LC-SRM measurements. PRISM-SRM was then applied to measure several low-abundance endogenous serum proteins, including prostate-specific antigen (PSA), in clinical prostate cancer patient sera. PRISM-SRM enabled confident detection of all target endogenous serum proteins except the low pg/mL-level cardiac troponin T. A correlation coefficient >0.99 was observed for PSA between the results from PRISM-SRM and immunoassays. Our results demonstrate that PRISM-SRM can successfully quantify low ng/mL proteins in human plasma or serum without depletion. We anticipate broad applications for PRISM-SRM quantification of low-abundance proteins in candidate biomarker verification and systems biology studies. PMID:23763644

  17. Absolute Quantification of Rifampicin by MALDI Imaging Mass Spectrometry Using Multiple TOF/TOF Events in a Single Laser Shot

    NASA Astrophysics Data System (ADS)

    Prentice, Boone M.; Chumbley, Chad W.; Caprioli, Richard M.

    2017-01-01

    Matrix-assisted laser desorption/ionization imaging mass spectrometry (MALDI IMS) allows for the visualization of molecular distributions within tissue sections. While providing excellent molecular specificity and spatial information, absolute quantification by MALDI IMS remains challenging. Especially in the low molecular weight region of the spectrum, analysis is complicated by matrix interferences and ionization suppression. Though tandem mass spectrometry (MS/MS) can be used to ensure chemical specificity and improve sensitivity by eliminating chemical noise, typical MALDI MS/MS modalities only scan for a single MS/MS event per laser shot. Herein, we describe TOF/TOF instrumentation that enables multiple fragmentation events to be performed in a single laser shot, allowing the intensity of the analyte to be referenced to the intensity of the internal standard in each laser shot while maintaining the benefits of MS/MS. This approach is illustrated by the quantitative analyses of rifampicin (RIF), an antibiotic used to treat tuberculosis, in pooled human plasma using rifapentine (RPT) as an internal standard. The results show greater than 4-fold improvements in relative standard deviation as well as improved coefficients of determination (R2) and accuracy (>93% quality controls, <9% relative errors). This technology is used as an imaging modality to measure absolute RIF concentrations in liver tissue from an animal dosed in vivo. Each microspot in the quantitative image measures the local RIF concentration in the tissue section, providing absolute pixel-to-pixel quantification from different tissue microenvironments. The average concentration determined by IMS is in agreement with the concentration determined by HPLC-MS/MS, showing a percent difference of 10.6%.
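    The quantification step described here, referencing the analyte intensity to the internal-standard intensity in the same laser shot and mapping the ratio to concentration through a calibration curve, can be sketched as follows. All numbers are invented for illustration and are not the RIF/RPT calibration from the paper.

```python
# Internal-standard calibration sketch: concentration from analyte/IS intensity ratio.
import numpy as np

# Calibration standards: known concentrations and their measured intensity ratios.
cal_conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])        # e.g. µg analyte per g tissue
cal_ratio = np.array([0.11, 0.21, 0.41, 1.02, 2.05])   # I_analyte / I_internal_standard

slope, intercept = np.polyfit(cal_ratio, cal_conc, deg=1)

# Unknown pixel/microspot: measured ratio in a single laser shot.
unknown_ratio = 0.78
estimate = slope * unknown_ratio + intercept
print(f"estimated concentration: {estimate:.2f} µg/g")
```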

  18. Localized 2D COSY sequences: Method and experimental evaluation for a whole metabolite quantification approach

    NASA Astrophysics Data System (ADS)

    Martel, Dimitri; Tse Ve Koon, K.; Le Fur, Yann; Ratiney, Hélène

    2015-11-01

    Two-dimensional spectroscopy offers the possibility to unambiguously distinguish metabolites by spreading out the multiplet structure of J-coupled spin systems into a second dimension. Quantification methods that perform parametric fitting of the 2D MRS signal have recently been proposed for J-resolved PRESS (JPRESS) but not explicitly for Localized Correlation Spectroscopy (LCOSY). Here, through a whole metabolite quantification approach, correlation spectroscopy quantification performance is studied. The ability to quantify metabolite relaxation time constants is studied for three localized 2D MRS sequences (LCOSY, LCTCOSY and JPRESS) in vitro on preclinical MR systems. The issues encountered during implementation and quantification strategies are discussed with the help of the Fisher matrix formalism. The described parameterized models enable the computation of the lower bound on error variance - generally known as the Cramér-Rao bounds (CRBs), a standard of precision - for the parameters estimated from these 2D MRS signal fittings. LCOSY has a theoretical net signal loss of a factor of two per unit of acquisition time compared to JPRESS. A quick analysis would suggest that the relative CRBs of LCOSY compared to JPRESS (expressed as a percentage of the concentration values) should be doubled, but we show that this is not necessarily true. Finally, the LCOSY quantification procedure has been applied to data acquired in vivo on a mouse brain.
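    The Cramér-Rao bound machinery referred to above can be sketched on a much simpler signal model: build the Jacobian of the model with respect to its parameters, form the Fisher information matrix for white Gaussian noise, and invert it. The mono-exponential model and values below are placeholders, far simpler than the 2D MRS signal models discussed in the record.

```python
# CRB sketch for a toy model s(t) = A * exp(-t / T2) with white Gaussian noise.
import numpy as np

A, T2, sigma = 1.0, 0.08, 0.02                 # amplitude, relaxation time (s), noise std
t = np.linspace(0.0, 0.3, 256)

# Analytic derivatives of the model with respect to (A, T2).
dA = np.exp(-t / T2)
dT2 = A * t / T2 ** 2 * np.exp(-t / T2)
J = np.column_stack([dA, dT2])                 # Jacobian, shape (n_samples, 2)

fisher = J.T @ J / sigma ** 2                  # Fisher information matrix
crb_std = np.sqrt(np.diag(np.linalg.inv(fisher)))

print(f"CRB std of A:  {crb_std[0]:.4f} ({100 * crb_std[0] / A:.1f}% of A)")
print(f"CRB std of T2: {crb_std[1]:.4f} s ({100 * crb_std[1] / T2:.1f}% of T2)")
```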

  19. Analogs and the BHP Risk Reduction Strategy for Future Spaceflight Missions

    NASA Technical Reports Server (NTRS)

    Whitmire, Sandra; Leveton, Lauren

    2011-01-01

    In preparation for future exploration missions to distant destinations (e.g., Moon, Near Earth Objects (NEO), and Mars), the NASA Human Research Program's (HRP) Behavioral Health and Performance Element (BHP) conducts and supports research to address four human health risks: Risk of Behavioral Conditions; Risk of Psychiatric Conditions; Risk of Performance Decrements Due to Inadequate Cooperation, Coordination, Communication, and Psychosocial Adaptation within a Team; and Risk of Performance Errors due to Sleep Loss, Fatigue, Circadian Desynchronization, and Work Overload (HRP Science Management Plan, 2008). BHP Research, in collaboration with internal and external research investigators, as well as subject matter experts within NASA operations including flight surgeons, astronauts, and mission planners and others within the Mission Operations Directorate (MOD), identifies knowledge and technology gaps within each Risk. BHP Research subsequently manages and conducts research tasks to address and close the gaps, either through risk assessment and quantification, or the development of countermeasures and monitoring technologies. The resulting deliverables, in many instances, also support current Medical Operations and/or Mission Operations for the International Space Station (ISS).

  20. Isotropic resolution diffusion tensor imaging of lumbosacral and sciatic nerves using a phase‐corrected diffusion‐prepared 3D turbo spin echo

    PubMed Central

    Van, Anh T.; Weidlich, Dominik; Kooijman, Hendrick; Hock, Andreas; Rummeny, Ernst J.; Gersing, Alexandra; Kirschke, Jan S.; Karampinos, Dimitrios C.

    2018-01-01

    Purpose: To perform in vivo isotropic-resolution diffusion tensor imaging (DTI) of lumbosacral and sciatic nerves with a phase-navigated diffusion-prepared (DP) 3D turbo spin echo (TSE) acquisition and modified reconstruction incorporating intershot phase-error correction, and to investigate the improvement in image quality and diffusion quantification with the proposed phase correction. Methods: Phase-navigated DP 3D TSE included magnitude stabilizers to minimize motion and eddy-current effects on the signal magnitude. Phase navigation of motion-induced phase errors was introduced before readout in 3D TSE. DTI of lower back nerves was performed in vivo using 3D TSE and single-shot echo planar imaging (ss-EPI) in 13 subjects. Diffusion data were phase-corrected per kz plane with respect to T2-weighted data. The effects of motion-induced phase errors on DTI quantification were assessed for 3D TSE and compared with ss-EPI. Results: Non-phase-corrected 3D TSE resulted in artifacts in diffusion-weighted images and overestimated DTI parameters in the sciatic nerve (mean diffusivity [MD] = 2.06 ± 0.45). Phase correction of 3D TSE DTI data resulted in reductions in all DTI parameters (MD = 1.73 ± 0.26) of statistical significance (P ≤ 0.001) and in closer agreement with ss-EPI DTI parameters (MD = 1.62 ± 0.21). Conclusion: DP 3D TSE with phase correction allows distortion-free isotropic diffusion imaging of lower back nerves with robustness to motion-induced artifacts and DTI quantification errors. Magn Reson Med 80:609-618, 2018. © 2018 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes. PMID:29380414

  1. High-throughput telomere length quantification by FISH and its application to human population studies.

    PubMed

    Canela, Andrés; Vera, Elsa; Klatt, Peter; Blasco, María A

    2007-03-27

    A major limitation of studies of the relevance of telomere length to cancer and age-related diseases in human populations and to the development of telomere-based therapies has been the lack of suitable high-throughput (HT) assays to measure telomere length. We have developed an automated HT quantitative telomere FISH platform, HT quantitative FISH (Q-FISH), which allows the quantification of telomere length as well as percentage of short telomeres in large human sample sets. We show here that this technique provides the accuracy and sensitivity to uncover associations between telomere length and human disease.

  2. Quantification of optical absorption coefficient from acoustic spectra in the optical diffusive regime using photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Guo, Zijian; Favazza, Christopher; Wang, Lihong V.

    2012-02-01

    Photoacoustic (PA) tomography (PAT) can image optical absorption contrast with ultrasonic spatial resolution in the optical diffusive regime. Multi-wavelength PAT can noninvasively monitor hemoglobin oxygen saturation (sO2) with high sensitivity and fine spatial resolution. However, accurate quantification in PAT requires knowledge of the optical fluence distribution, acoustic wave attenuation, and detection system bandwidth. We propose a method to circumvent this requirement using acoustic spectra of PA signals acquired at two optical wavelengths. With the acoustic spectral method, the absorption coefficients of an oxygenated bovine blood phantom at 560 and 575 nm were quantified with errors of less than 5%.

  3. Uncertainty Quantification in Alchemical Free Energy Methods.

    PubMed

    Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V

    2018-06-12

    Alchemical free energy methods have gained much importance recently from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods implementing different accelerated sampling techniques and free energy estimators are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation-an ensemble of independent MD simulations-which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of time of the molecular dynamics simulations performed.
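    The ensemble-based error estimate advocated here reduces, in its simplest form, to running independent replicas and summarising the spread of their free energy estimates. The replica values below are invented for illustration; they are not results from the cited study.

```python
# Ensemble-based uncertainty sketch for a binding free energy estimate.
import numpy as np

rng = np.random.default_rng(5)
replica_dG = np.array([-7.8, -8.3, -7.5, -8.1, -7.9, -8.6, -7.7, -8.0])  # kcal/mol, hypothetical

mean_dG = replica_dG.mean()
sem = replica_dG.std(ddof=1) / np.sqrt(replica_dG.size)    # standard error of the mean

# Bootstrap 95% confidence interval over replicas.
boot_means = [rng.choice(replica_dG, replica_dG.size, replace=True).mean() for _ in range(10000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"dG = {mean_dG:.2f} +/- {sem:.2f} kcal/mol (bootstrap 95% CI: {lo:.2f} to {hi:.2f})")
```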

  4. Experimental validation of 2D uncertainty quantification for DIC.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reu, Phillip L.

    Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images will have a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Any pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true with high-speed applications where actual test images are often less than ideal. Work has previously been completed on the mathematical underpinnings of DIC uncertainty quantification and is already published; this paper presents corresponding experimental work used to check the validity of the uncertainty equations.

  5. Experimental validation of 2D uncertainty quantification for digital image correlation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reu, Phillip L.

    Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images will have a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Any pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true with high-speed applications where actual test images are often less than ideal. Work has previously been completed on the mathematical underpinnings of DIC uncertainty quantification and is already published; this paper presents corresponding experimental work used to check the validity of the uncertainty equations.

  6. How smart is your BEOL? productivity improvement through intelligent automation

    NASA Astrophysics Data System (ADS)

    Schulz, Kristian; Egodage, Kokila; Tabbone, Gilles; Garetto, Anthony

    2017-07-01

    The back end of line (BEOL) workflow in the mask shop still has crucial issues throughout all standard steps, which are inspection, disposition, photomask repair and verification of repair success. All involved tools are typically run by highly trained operators or engineers who set up jobs and recipes, execute tasks, analyze data and make decisions based on the results. No matter how experienced operators are and how good the systems perform, there is one aspect that always limits the productivity and effectiveness of the operation: the human aspect. Human errors can range from seemingly rather harmless slip-ups to mistakes with serious and direct economic impact including mask rejects, customer returns and line stops in the wafer fab. Even with the introduction of quality control mechanisms that help to reduce these critical but unavoidable faults, they can never be completely eliminated. Therefore the mask shop BEOL cannot run in the most efficient manner as unnecessary time and money are spent on processes that still remain labor intensive. The best way to address this issue is to automate critical segments of the workflow that are prone to human errors. In fact, manufacturing errors can occur for each BEOL step where operators intervene. These processes comprise image evaluation, setting up tool recipes, data handling and all other tedious but required steps. With the help of smart solutions, operators can work more efficiently and dedicate their time to less mundane tasks. Smart solutions connect tools, taking over the data handling and analysis typically performed by operators and engineers. These solutions not only eliminate the human error factor in the manufacturing process but can provide benefits in terms of shorter cycle times, reduced bottlenecks and prediction of an optimized workflow. In addition, such software solutions consist of building blocks that seamlessly integrate applications and allow the customers to use tailored solutions. To accommodate the variability and complexity in mask shops today, individual workflows can be supported according to the needs of any particular manufacturing line with respect to necessary measurement and production steps. At the same time the efficiency of assets is increased by avoiding unneeded cycle time and waste of resources due to the presence of process steps that are very crucial for a given technology. In this paper we present details of which areas of the BEOL can benefit most from intelligent automation, what solutions exist and the quantification of benefits to a mask shop with full automation by the use of a back end of line model.

  7. Blood pool and tissue phase patient motion effects on 82rubidium PET myocardial blood flow quantification.

    PubMed

    Lee, Benjamin C; Moody, Jonathan B; Poitrasson-Rivière, Alexis; Melvin, Amanda C; Weinberg, Richard L; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L

    2018-03-23

    Patient motion can lead to misalignment of left ventricular volumes of interest and subsequently inaccurate quantification of myocardial blood flow (MBF) and flow reserve (MFR) from dynamic PET myocardial perfusion images. We aimed to identify the prevalence of patient motion in both blood and tissue phases and analyze the effects of this motion on MBF and MFR estimates. We selected 225 consecutive patients who underwent dynamic stress/rest rubidium-82 chloride (82Rb) PET imaging. Dynamic image series were iteratively reconstructed with 5- to 10-second frame durations over the first 2 minutes for the blood phase and 10 to 80 seconds for the tissue phase. Motion shifts were assessed by 3 physician readers from the dynamic series and analyzed for frequency, magnitude, time, and direction of motion. The effects of this motion isolated in time, direction, and magnitude on global and regional MBF and MFR estimates were evaluated. Flow estimates derived from the motion-corrected images were used as the error references. Mild to moderate motion (5-15 mm) was most prominent in the blood phase in 63% and 44% of the stress and rest studies, respectively. This motion was observed with frequencies of 75% in the septal and inferior directions for stress and 44% in the septal direction for rest. Images with blood phase isolated motion had mean global MBF and MFR errors of 2%-5%. Isolating blood phase motion in the inferior direction resulted in mean MBF and MFR errors of 29%-44% in the RCA territory. Flow errors due to tissue phase isolated motion were within 1%. Patient motion was most prevalent in the blood phase and MBF and MFR errors increased most substantially with motion in the inferior direction. Motion correction focused on these motions is needed to reduce MBF and MFR errors.

  8. Simultaneous quantification of selective serotonin reuptake inhibitors and metabolites in human plasma by liquid chromatography-electrospray mass spectrometry for therapeutic drug monitoring.

    PubMed

    Ansermot, Nicolas; Brawand-Amey, Marlyse; Eap, Chin B

    2012-02-15

    A simple and sensitive liquid chromatography-electrospray ionization mass spectrometry method was developed for the simultaneous quantification in human plasma of all selective serotonin reuptake inhibitors (citalopram, fluoxetine, fluvoxamine, paroxetine and sertraline) and their main active metabolites (desmethyl-citalopram and norfluoxetine). A stable isotope-labeled internal standard was used for each analyte to compensate for the global method variability, including extraction and ionization variations. After sample (250μl) pre-treatment with acetonitrile (500μl) to precipitate proteins, a fast solid-phase extraction procedure was performed using mixed mode Oasis MCX 96-well plate. Chromatographic separation was achieved in less than 9.0min on a XBridge C18 column (2.1×100mm; 3.5μm) using a gradient of ammonium acetate (pH 8.1; 50mM) and acetonitrile as mobile phase at a flow rate of 0.3ml/min. The method was fully validated according to Société Française des Sciences et Techniques Pharmaceutiques protocols and the latest Food and Drug Administration guidelines. Six point calibration curves were used to cover a large concentration range of 1-500ng/ml for citalopram, desmethyl-citalopram, paroxetine and sertraline, 1-1000ng/ml for fluoxetine and fluvoxamine, and 2-1000ng/ml for norfluoxetine. Good quantitative performances were achieved in terms of trueness (84.2-109.6%), repeatability (0.9-14.6%) and intermediate precision (1.8-18.0%) in the entire assay range including the lower limit of quantification. Internal standard-normalized matrix effects were lower than 13%. The accuracy profiles (total error) were mainly included in the acceptance limits of ±30% for biological samples. The method was successfully applied for routine therapeutic drug monitoring of more than 1600 patient plasma samples over 9 months. The β-expectation tolerance intervals determined during the validation phase were coherent with the results of quality control samples analyzed during routine use. This method is therefore precise and suitable both for therapeutic drug monitoring and pharmacokinetic studies in most clinical laboratories. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Joint penalized-likelihood reconstruction of time-activity curves and regions-of-interest from projection data in brain PET

    NASA Astrophysics Data System (ADS)

    Krestyannikov, E.; Tohka, J.; Ruotsalainen, U.

    2008-06-01

    This paper presents a novel statistical approach for joint estimation of regions-of-interest (ROIs) and the corresponding time-activity curves (TACs) from dynamic positron emission tomography (PET) brain projection data. It is based on optimizing the joint objective function that consists of a data log-likelihood term and two penalty terms reflecting the available a priori information about the human brain anatomy. The developed local optimization strategy iteratively updates both the ROI and TAC parameters and is guaranteed to monotonically increase the objective function. The quantitative evaluation of the algorithm is performed with numerically and Monte Carlo-simulated dynamic PET brain data of the 11C-Raclopride and 18F-FDG tracers. The results demonstrate that the method outperforms the existing sequential ROI quantification approaches in terms of accuracy, and can noticeably reduce the errors in TACs arising due to the finite spatial resolution and ROI delineation.
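    The structure of the joint objective described above, a data log-likelihood plus weighted penalty terms, can be sketched generically as follows. The Poisson forward model, penalties, and weights are placeholders, not the paper's ROI/TAC parameterisation.

```python
# Generic penalized-likelihood objective: Poisson log-likelihood + two weighted penalties.
import numpy as np

def objective(theta, counts, forward, penalty_a, penalty_b, beta1=0.1, beta2=0.05):
    """Joint objective to be maximised over the parameters theta."""
    expected = forward(theta)                                # mean projection counts
    log_lik = np.sum(counts * np.log(expected) - expected)   # Poisson log-likelihood (up to a constant)
    return log_lik - beta1 * penalty_a(theta) - beta2 * penalty_b(theta)

# Toy usage: theta is a small image, the forward model is a fixed system matrix.
rng = np.random.default_rng(6)
A = rng.uniform(0.0, 1.0, (40, 25))                          # toy system (projection) matrix
truth = rng.uniform(1.0, 3.0, 25)
counts = rng.poisson(A @ truth)                              # simulated projection counts

value = objective(
    truth, counts,
    forward=lambda th: A @ th + 1e-9,                        # keep the mean strictly positive
    penalty_a=lambda th: np.sum((th - truth.mean()) ** 2),   # pull toward an "anatomical" prior mean
    penalty_b=lambda th: np.sum(np.diff(th) ** 2),           # roughness (smoothness) penalty
)
print(f"objective evaluated at the true image: {value:.1f}")
```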

  10. A mixed reality approach for stereo-tomographic quantification of lung nodules.

    PubMed

    Chen, Mianyi; Kalra, Mannudeep K; Yun, Wenbing; Cong, Wenxiang; Yang, Qingsong; Nguyen, Terry; Wei, Biao; Wang, Ge

    2016-05-25

    To reduce the radiation dose and the equipment cost associated with lung CT screening, in this paper we propose a mixed reality based nodule measurement method with an active shutter stereo imaging system. Without involving hundreds of projection views and subsequent image reconstruction, we generated two projections of an iteratively placed ellipsoidal volume in the field of view and merged these synthetic projections with two original CT projections. We then demonstrated the feasibility of measuring the position and size of a nodule by observing, through the active shutter 3D vision glasses, whether the projections of the ellipsoidal volume and the nodule overlap in the human observer's visual perception. The average errors of the measured nodule parameters were less than 1 mm in the simulated experiment with 8 viewers. Hence, the method could measure real nodules accurately in the experiments with physically measured projections.

  11. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. Here, Bayesian inference is facilitated using high performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by the classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with the error model gives significantly more accurate predictions along with reasonable credible intervals.

  12. Human Error In Complex Systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1991-01-01

    Report presents results of research aimed at understanding causes of human error in such complex systems as aircraft, nuclear powerplants, and chemical processing plants. Research considered both slips (errors of action) and mistakes (errors of intention), and the influence of workload on them. Results indicated that humans respond to conditions in which errors are expected by attempting to reduce the incidence of errors, and that adaptation to conditions is a potent influence on human behavior in discretionary situations.

  13. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and/or classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS) as analysis tools to identify contributing factors and their impact on human error events, and to predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.
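    The quantification step of HEART referred to in this record multiplies a nominal human error probability for a generic task type by a factor for each applicable error-producing condition (EPC), weighted by the assessor's judged proportion of effect. A minimal sketch follows; the nominal value, EPC multipliers, and weightings are illustrative choices, not values taken from the study.

```python
# HEART-style human error probability (HEP) sketch with illustrative numbers.
def heart_hep(nominal_hep, epcs):
    """epcs: list of (max_effect_multiplier, assessed_proportion_of_affect) pairs."""
    hep = nominal_hep
    for max_effect, apoa in epcs:
        hep *= (max_effect - 1.0) * apoa + 1.0   # standard HEART weighting of each EPC
    return min(hep, 1.0)                         # a probability cannot exceed 1

# Example: a fairly simple task performed rapidly (illustrative nominal HEP of 0.09)
# with two EPCs judged partially present.
hep = heart_hep(
    nominal_hep=0.09,
    epcs=[(11.0, 0.4),    # e.g. shortage of time, judged 40% present
          (3.0, 0.2)],    # e.g. operator inexperience, judged 20% present
)
print(f"assessed HEP: {hep:.3f}")
```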

  14. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and/or classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS) as analysis tools to identify contributing factors and their impact on human error events, and to predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  15. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    NASA Technical Reports Server (NTRS)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and/or classified in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS) as analysis tools to identify contributing factors and their impact on human error events, and to predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  16. Evaluation of an Automatic Registration-Based Algorithm for Direct Measurement of Volume Change in Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkar, Saradwata; Johnson, Timothy D.; Ma, Bing

    2012-07-01

    Purpose: Assuming that early tumor volume change is a biomarker for response to therapy, accurate quantification of early volume changes could aid in adapting an individual patient's therapy and lead to shorter clinical trials. We investigated an image registration-based approach for tumor volume change quantification that may more reliably detect smaller changes that occur in shorter intervals than can be detected by existing algorithms. Methods and Materials: Variance and bias of the registration-based approach were evaluated using, respectively, retrospective in vivo very-short-interval diffusion magnetic resonance imaging scans (where true zero tumor volume change is unequivocally known) and synthetic data. The interval scans were nonlinearly registered using two similarity measures: mutual information (MI) and normalized cross-correlation (NCC). Results: The 95% confidence interval of the percentage volume change error was (-8.93% to 10.49%) for MI-based and (-7.69%, 8.83%) for NCC-based registrations. Linear mixed-effects models demonstrated that error in measuring volume change increased with increase in tumor volume and decreased with the increase in the tumor's normalized mutual information, even when NCC was the similarity measure being optimized during registration. The 95% confidence interval of the relative volume change error for the synthetic examinations with known changes over ±80% of reference tumor volume was (-3.02% to 3.86%). Statistically significant bias was not demonstrated. Conclusion: A low-noise, low-bias tumor volume change measurement algorithm using nonlinear registration is described. Errors in change measurement were a function of tumor volume and the normalized mutual information content of the tumor.

  17. Image processing for identification and quantification of filamentous bacteria in in situ acquired images.

    PubMed

    Dias, Philipe A; Dunkel, Thiemo; Fajado, Diego A S; Gallegos, Erika de León; Denecke, Martin; Wiedemann, Philipp; Schneider, Fabio K; Suhr, Hajo

    2016-06-11

    In the activated sludge process, problems of filamentous bulking and foaming can occur due to overgrowth of certain filamentous bacteria. Nowadays, these microorganisms are typically monitored by means of light microscopy, commonly combined with staining techniques. As drawbacks, these methods are susceptible to human errors and subjectivity and are limited by the use of discontinuous microscopy. The in situ microscope appears to be a suitable tool for continuous monitoring of filamentous bacteria, providing real-time examination and automated analysis and eliminating sampling, preparation and transport of samples. In this context, a proper image processing algorithm is proposed for automated recognition and measurement of filamentous objects. This work introduces a method for real-time evaluation of images without any staining, phase-contrast or dilution techniques, differently from studies present in the literature. Moreover, we introduce an algorithm which estimates the total extended filament length based on geodesic distance calculation. For a period of twelve months, samples from an industrial activated sludge plant were collected weekly and imaged without any prior conditioning, replicating real environment conditions. Trends of filament growth rate, the most important parameter for decision making, are correctly identified. For reference images whose filaments were marked by specialists, the algorithm correctly recognized 72% of the filament pixels, with a false positive rate of at most 14%. An average execution time of 0.7 s per image was achieved. Experiments have shown that the designed algorithm provided a suitable quantification of filaments when compared with human perception and standard methods. The algorithm's average execution time proved its suitability for being optimally mapped into a computational architecture to provide real-time monitoring.
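
    The total extended filament length is the key output here; the snippet below is a deliberately crude stand-in that approximates it by skeletonizing a pre-segmented binary image and counting skeleton pixels, rather than the geodesic-distance calculation used by the authors.

```python
import numpy as np
from skimage.morphology import remove_small_objects, skeletonize

def total_extended_filament_length(binary_image, min_size=30, pixel_size_um=1.0):
    """Rough estimate of total filament length (micrometres) from a binary segmentation."""
    cleaned = remove_small_objects(np.asarray(binary_image, bool), min_size=min_size)
    skeleton = skeletonize(cleaned)               # one-pixel-wide centrelines
    return skeleton.sum() * pixel_size_um         # pixel count as a length proxy
```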

  18. Quantitative determination of rosuvastatin in human plasma by liquid chromatography with electrospray ionization tandem mass spectrometry.

    PubMed

    Xu, Dong-Hang; Ruan, Zou-Rong; Zhou, Quan; Yuan, Hong; Jiang, Bo

    2006-01-01

    A simple and sensitive liquid chromatography/tandem mass spectrometry method was developed and validated for determining rosuvastatin, a new synthetic hydroxymethylglutaryl-coenzyme A reductase inhibitor, in human plasma. The analyte and internal standard (IS; cilostazol) were extracted by simple one-step liquid/liquid extraction with ether. The organic layer was separated and evaporated under a gentle stream of nitrogen at 40 degrees C. The chromatographic separation was performed on an Atlantis C18 column (2.1 mm x 150 mm, 5.0 microm) with a mobile phase consisting of 0.2% formic acid/methanol (30:70, v/v) at a flow rate of 0.20 mL/min. The analyses were carried out by multiple reaction monitoring (MRM) using the precursor-to-product combinations of m/z 482 → 258 and m/z 370 → 288. The areas of peaks from the analyte and the IS were used for quantification of rosuvastatin. The method was validated according to the FDA guidelines on bioanalytical method validation. Validation results indicated that the lower limit of quantification (LLOQ) was 0.2 ng/mL and the assay exhibited a linear range of 0.2-50.0 ng/mL and gave a correlation coefficient (r) of 0.9991 or better. Quality control samples (0.4, 8, 25 and 40 ng/mL) in six replicates from three different runs of analysis demonstrated an intra-assay precision (RSD) of 7.97-15.94%, an inter-assay precision of 3.19-15.27%, and an overall accuracy (relative error) of < 3.7%. The method can be applied to pharmacokinetic or bioequivalence studies of rosuvastatin.
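
    The back-calculation step behind numbers such as the LLOQ and linear range is a weighted calibration curve fit of analyte/IS peak-area ratios against nominal concentrations. The values below are invented for illustration; only the fitting pattern is meant to be representative.

```python
import numpy as np

conc  = np.array([0.2, 0.5, 2.0, 10.0, 25.0, 50.0])        # nominal concentrations, ng/mL
ratio = np.array([0.011, 0.027, 0.105, 0.52, 1.31, 2.62])  # analyte/IS peak-area ratios

# 1/x-type weights so low-concentration points are not swamped by the high standards
slope, intercept = np.polyfit(conc, ratio, 1, w=1.0 / conc)
r = np.corrcoef(conc, ratio)[0, 1]

def back_calculate(area_ratio):
    """Convert an unknown sample's peak-area ratio to a concentration (ng/mL)."""
    return (area_ratio - intercept) / slope

print(f"r = {r:.4f}; back-calculated LLOQ standard = {back_calculate(ratio[0]):.2f} ng/mL")
```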

  19. Global Proteomic Analysis of Human Liver Microsomes: Rapid Characterization and Quantification of Hepatic Drug-Metabolizing Enzymes.

    PubMed

    Achour, Brahim; Al Feteisi, Hajar; Lanucara, Francesco; Rostami-Hodjegan, Amin; Barber, Jill

    2017-06-01

    Many genetic and environmental factors lead to interindividual variations in the metabolism and transport of drugs, profoundly affecting efficacy and toxicity. Precision dosing, that is, targeting drug dose to a well characterized subpopulation, is dependent on quantitative models of the profiles of drug-metabolizing enzymes (DMEs) and transporters within that subpopulation, informed by quantitative proteomics. We report the first use of ion mobility-mass spectrometry for this purpose, allowing rapid, robust, label-free quantification of human liver microsomal (HLM) proteins from distinct individuals. Approximately 1000 proteins were identified and quantified in four samples, including an average of 70 DMEs. Technical and biological variabilities were distinguishable, with technical variability accounting for about 10% of total variability. The biological variation between patients was clearly identified, with samples showing a range of expression profiles for cytochrome P450 and uridine 5'-diphosphoglucuronosyltransferase enzymes. Our results showed excellent agreement with previous data from targeted methods. The label-free method, however, allowed a fuller characterization of the in vitro system, showing, for the first time, that HLMs are significantly heterogeneous. Further, the traditional units of measurement of DMEs (pmol mg(-1) HLM protein) are shown to introduce error arising from variability in unrelated, highly abundant proteins. Simulations of this variability suggest that up to 1.7-fold variation in apparent CYP3A4 abundance is artifactual, as are background positive correlations of up to 0.2 (Spearman correlation coefficient) between the abundances of DMEs. We suggest that protein concentrations used in pharmacokinetic predictions and scaling to in vivo clinical situations (physiologically based pharmacokinetics and in vitro-in vivo extrapolation) should be referenced instead to tissue mass. Copyright © 2017 by The American Society for Pharmacology and Experimental Therapeutics.
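
    The "background positive correlation" described above is a classic spurious-ratio effect: normalising two unrelated abundances to the same total-protein denominator induces correlation between them. The toy simulation below (invented lognormal parameters, not the study's data) reproduces the effect.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 1000
cyp3a4 = rng.lognormal(mean=3.0, sigma=0.4, size=n)   # uncorrelated "true" amounts per unit tissue
ugt1a1 = rng.lognormal(mean=2.5, sigma=0.4, size=n)
other  = rng.lognormal(mean=6.0, sigma=0.2, size=n)   # unrelated, highly abundant proteins

total_hlm_protein = cyp3a4 + ugt1a1 + other
rho_tissue, _ = spearmanr(cyp3a4, ugt1a1)             # close to zero by construction
rho_per_mg, _ = spearmanr(cyp3a4 / total_hlm_protein, ugt1a1 / total_hlm_protein)
print(f"per-tissue rho = {rho_tissue:.2f}, per-mg-HLM-protein rho = {rho_per_mg:.2f}")
```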

  20. Chemometrics enhanced HPLC-DAD performance for rapid quantification of carbamazepine and phenobarbital in human serum samples.

    PubMed

    Vosough, Maryam; Ghafghazi, Shiva; Sabetkasaei, Masoumeh

    2014-02-01

    This paper describes the development and validation of a simple and efficient bioanalytical procedure for the simultaneous determination of phenobarbital and carbamazepine in human serum samples using high performance liquid chromatography with photodiode-array detection (HPLC-DAD), with a fast elution methodology of less than 5 min. Briefly, this method consisted of a simple deproteinization step of serum samples followed by HPLC analysis on a Bonus-RP column using an isocratic mode of elution with acetonitrile/K2HPO4 (pH=7.5) buffer solution (45:55). Due to the presence of serum endogenous components as non-calibrated components in the sample, second-order calibration based on multivariate curve resolution-alternating least squares (MCR-ALS) has been applied to a set of absorbance matrices collected as a function of retention time and wavelength. Acceptable resolution and quantification results were achieved in the presence of matrix interferences and the second-order advantage was fully exploited. The average recoveries for carbamazepine and phenobarbital were 89.7% and 86.1%, and relative standard deviation values were lower than 9%. Additionally, the computed elliptical joint confidence region (EJCR) confirmed the accuracy of the proposed method and indicated the absence of both constant and proportional errors in the predicted concentrations. The developed method enabled the determination of the analytes in different serum samples in the presence of overlapped profiles, while keeping experimental time and extraction steps to a minimum. Finally, the serum concentration levels of carbamazepine in three time intervals were reported for morphine-dependent patients who had received carbamazepine for treating their neuropathic pain. © 2013 Elsevier B.V. All rights reserved.

  1. Measurement of yunaconitine and crassicauline A in small-volume blood serum samples by LC-MS/MS: tracing of aconite poisoning in clinical diagnosis.

    PubMed

    Ka-Wing Chung, Karen; Pak-Lam Chen, Sammy; Ng, Sau-Wah; Wing-Lai Mak, Tony; Sze-Yin Leung, Kelvin

    2012-08-15

    Aconite poisoning is one of the most serious types of herb-related medical emergencies. In Hong Kong, many if not most of these poisoning cases are due to confusion in herbal species; that is, the wrong herbs are used in prescriptions. Such human errors, while perhaps inevitable, can be serious, and sometimes fatal. The chemical components responsible for aconite poisoning are yunaconitine and crassicauline A. In the present study, a rapid and sensitive method for the screening and quantification of yunaconitine and crassicauline A in human serum, using LC-MS/MS, was developed and validated. Methyllycaconitine was chosen as the internal standard. The limits of detection (LOD) of yunaconitine and crassicauline A were found to be 0.022 and 0.021 ng/mL, respectively. The limit of quantification (LOQ) was 0.1 ng/mL for both yunaconitine and crassicauline A. The recovery of yunaconitine and crassicauline A ranged from 78.6% to 84.9% and 78.3% to 87.2%, respectively. The matrix effect of yunaconitine and crassicauline A ranged from 110.0% to 130.4% and 121.2% to 130.0%, respectively. Both yunaconitine and crassicauline A were stable in serum for at least 3 months at -20 °C, and the extracts were stable for at least 7 days. For clinical applications, serum samples of two patients confirmed to have had aconite herb poisoning in 2008 were quantified using the developed method. The results showed that this method can be utilized in routine clinical applications. This screening method expedites the diagnosis in cases of suspected aconite poisoning, thus enabling doctors to treat the condition more quickly and effectively. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Development and validation of a UHPLC-MS/MS assay for colistin methanesulphonate (CMS) and colistin in human plasma and urine using weak-cation exchange solid-phase extraction.

    PubMed

    Zhao, Miao; Wu, Xiao-Jie; Fan, Ya-Xin; Guo, Bei-Ning; Zhang, Jing

    2016-05-30

    A rapid ultra high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) assay method was developed for the determination of CMS and formed colistin in human plasma and urine. After extraction on a 96-well SPE Supra-Clean Weak Cation Exchange (WCX) plate, the eluents were mixed and injected into the UHPLC-MS/MS system directly. A Phenomenex Kinetex XB-C18 analytical column was employed with a mobile phase consisting of solution "A" (acetonitrile:methanol, 1:1, v/v) and solution "B" (0.1% formic acid in water, v/v). The flow rate was 0.4 mL/min with gradient elution over 3.5 min. Ions were detected in ESI positive ion mode and the precursor-product ion pairs were m/z 390.7/101.3 for colistin A, m/z 386.0/101.2 for colistin B, and m/z 402.3/101.2 for polymyxin B1 (IS), respectively. The lower limit of quantification (LLOQ) was 0.0130 and 0.0251 mg/L for colistin A and colistin B in both plasma and urine, with accuracy (relative error) within ±12.6% and precision (relative standard deviation) of no more than 10.8%. Stability of CMS was demonstrated in biological samples before and during sample treatment, and in the extract. This new analytical method provides high-throughput treatment and optimized quantification of CMS and colistin, which offers a highly efficient tool for the analysis of a large number of clinical samples as well as routine therapeutic drug monitoring. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Fast quantification of ten psychotropic drugs and metabolites in human plasma by ultra-high performance liquid chromatography tandem mass spectrometry for therapeutic drug monitoring.

    PubMed

    Ansermot, Nicolas; Brawand-Amey, Marlyse; Kottelat, Astrid; Eap, Chin B

    2013-05-31

    A sensitive and selective ultra-high performance liquid chromatography (UHPLC) tandem mass spectrometry (MS/MS) method was developed for the fast quantification of ten psychotropic drugs and metabolites in human plasma for the needs of our laboratory (amisulpride, asenapine, desmethyl-mirtazapine, iloperidone, mirtazapine, norquetiapine, olanzapine, paliperidone, quetiapine and risperidone). Stable isotope-labeled internal standards were used for all analytes, to compensate for the global method variability, including extraction and ionization variations. Sample preparation was performed by generic protein precipitation with acetonitrile. Chromatographic separation was achieved in less than 3.0 min on an Acquity UPLC BEH Shield RP18 column (2.1 mm × 50 mm; 1.7 μm), using a gradient elution of 10 mM ammonium formate buffer pH 3.0 and acetonitrile at a flow rate of 0.4 ml/min. The compounds were quantified on a tandem quadrupole mass spectrometer operating in positive electrospray ionization mode, using multiple reaction monitoring. The method was fully validated according to the latest recommendations of international guidelines. Eight-point calibration curves were used to cover a large concentration range: 0.5-200 ng/ml for asenapine, desmethyl-mirtazapine, iloperidone, mirtazapine, olanzapine, paliperidone and risperidone, and 1-1500 ng/ml for amisulpride, norquetiapine and quetiapine. Good quantitative performances were achieved in terms of trueness (93.1-111.2%), repeatability (1.3-8.6%) and intermediate precision (1.8-11.5%). Internal standard-normalized matrix effects ranged between 95 and 105%, with a variability never exceeding 6%. The accuracy profiles (total error) were included in the acceptance limits of ±30% for biological samples. This method is therefore suitable for both therapeutic drug monitoring and pharmacokinetic studies. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Understanding human management of automation errors

    PubMed Central

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  5. Understanding human management of automation errors.

    PubMed

    McBride, Sara E; Rogers, Wendy A; Fisk, Arthur D

    2014-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance.

  6. Technical Note: Deep learning based MRAC using rapid ultra-short echo time imaging.

    PubMed

    Jang, Hyungseok; Liu, Fang; Zhao, Gengyan; Bradshaw, Tyler; McMillan, Alan B

    2018-05-15

    In this study, we explore the feasibility of a novel framework for MR-based attenuation correction for PET/MR imaging based on deep learning via convolutional neural networks, which enables fully automated and robust estimation of a pseudo CT image based on ultrashort echo time (UTE), fat, and water images obtained by a rapid MR acquisition. MR images for MRAC are acquired using dual echo ramped hybrid encoding (dRHE), where both UTE and out-of-phase echo images are obtained within a short single acquisition (35 sec). Tissue labeling of air, soft tissue, and bone in the UTE image is accomplished via a deep learning network that was pre-trained with T1-weighted MR images. UTE images are used as input to the network, which was trained using labels derived from co-registered CT images. The tissue labels estimated by deep learning are refined by a conditional random field based correction. The soft tissue labels are further separated into fat and water components using the two-point Dixon method. The estimated bone, air, fat, and water images are then assigned appropriate Hounsfield units, resulting in a pseudo CT image for PET attenuation correction. To evaluate the proposed MRAC method, PET/MR imaging of the head was performed on 8 human subjects, where Dice similarity coefficients of the estimated tissue labels and relative PET errors were evaluated through comparison to a registered CT image. Dice coefficients for air (within the head), soft tissue, and bone labels were 0.76±0.03, 0.96±0.006, and 0.88±0.01. In PET quantification, the proposed MRAC method produced relative PET errors less than 1% within most brain regions. The proposed MRAC method utilizing deep learning with transfer learning and an efficient dRHE acquisition enables reliable PET quantification with accurate and rapid pseudo CT generation. This article is protected by copyright. All rights reserved.
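
    The two headline metrics in this record, Dice similarity and relative PET error, are simple to compute once the label maps and PET volumes are available; a minimal NumPy sketch (generic, not the authors' evaluation code) is given below.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def relative_pet_error(pet_test, pet_reference, roi_mask):
    """Percent difference in mean PET uptake within a region of interest."""
    m = np.asarray(roi_mask, bool)
    return 100.0 * (pet_test[m].mean() - pet_reference[m].mean()) / pet_reference[m].mean()
```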

  7. Quantification of Human and Animal Viruses to Differentiate the Origin of the Fecal Contamination Present in Environmental Samples

    PubMed Central

    Bofill-Mas, Sílvia; Rusiñol, Marta; Fernandez-Cassi, Xavier; Carratalà, Anna; Hundesa, Ayalkibet

    2013-01-01

    Many different viruses are excreted by humans and animals and are frequently detected in fecal contaminated waters, causing public health concerns. Classical bacterial indicators such as E. coli and enterococci can fail to predict the risk for waterborne pathogens such as viruses. Moreover, the presence and levels of bacterial indicators do not always correlate with the presence and concentration of viruses, especially when these indicators are present in low concentrations. Our research group has proposed new viral indicators and methodologies for determining the presence of fecal pollution in environmental samples as well as for tracing the origin of this fecal contamination (microbial source tracking). In this paper, we examine to what extent these indicators have been applied by the scientific community. Recently, quantitative assays for quantification of poultry and ovine viruses have also been described. Overall, quantification by qPCR of human adenoviruses and human polyomavirus JC, porcine adenoviruses, bovine polyomaviruses, chicken/turkey parvoviruses, and ovine polyomaviruses is suggested as a toolbox for the identification of human, porcine, bovine, poultry, and ovine fecal pollution in environmental samples. PMID:23762826

  8. Human operator response to error-likely situations in complex engineering systems

    NASA Technical Reports Server (NTRS)

    Morris, Nancy M.; Rouse, William B.

    1988-01-01

    The causes of human error in complex systems are examined. First, a conceptual framework is provided in which two broad categories of error are discussed: errors of action, or slips, and errors of intention, or mistakes. Conditions in which slips and mistakes might be expected to occur are identified, based on existing theories of human error. Regarding the role of workload, it is hypothesized that workload may act as a catalyst for error. Two experiments are presented in which humans' responses to error-likely situations were examined. Subjects controlled PLANT under a variety of conditions and periodically provided subjective ratings of mental effort. A complex pattern of results was obtained, which was not consistent with predictions. Generally, the results of this research indicate that: (1) humans respond to conditions in which errors might be expected by attempting to reduce the possibility of error, and (2) adaptation to conditions is a potent influence on human behavior in discretionary situations. Subjects' explanations for changes in effort ratings are also explored.

  9. Development and validation of a rapid and simple LC-MS/MS method for quantification of vemurafenib in human plasma: application to a human pharmacokinetic study.

    PubMed

    Bihan, Kevin; Sauzay, Chloé; Goldwirt, Lauriane; Charbonnier-Beaupel, Fanny; Hulot, Jean-Sebastien; Funck-Brentano, Christian; Zahr, Noël

    2015-02-01

    Vemurafenib (Zelboraf) is a new tyrosine kinase inhibitor that selectively targets the activated BRAF V600E gene and is indicated for the treatment of advanced BRAF mutation-positive melanoma. We developed a simple method for vemurafenib quantification using liquid chromatography-tandem mass spectrometry. A stability study of vemurafenib in human plasma was also performed. (13)C(6)-vemurafenib was used as the internal standard. A single-step protein precipitation was used for plasma sample preparation. Chromatography was performed on an Acquity UPLC system (Waters) with chromatographic separation on an Acquity UPLC BEH C18 column (2.1 × 50 mm, 1.7-μm particle size; Waters). Quantification was performed using multiple reaction monitoring of the following transitions: m/z 488.2 → 381.0 for vemurafenib and m/z 494.2 → 387.0 for the internal standard. This method was linear over the range from 1.0 to 100.0 mcg/mL. The lower limit of quantification was 0.1 mcg/mL for vemurafenib in plasma. Vemurafenib remained stable for 1 month at all levels tested, when stored indifferently at room temperature (20 °C), at +4 °C, or at -20 °C. This method was used successfully to perform a plasma pharmacokinetic study of vemurafenib in a patient after oral administration at steady state. This liquid chromatography-tandem mass spectrometry method for vemurafenib quantification in human plasma is simple, rapid, specific, sensitive, accurate, precise, and reliable.

  10. A novel framework for the local extraction of extra-axial cerebrospinal fluid from MR brain images

    NASA Astrophysics Data System (ADS)

    Mostapha, Mahmoud; Shen, Mark D.; Kim, SunHyung; Swanson, Meghan; Collins, D. Louis; Fonov, Vladimir; Gerig, Guido; Piven, Joseph; Styner, Martin A.

    2018-03-01

    The quantification of cerebrospinal fluid (CSF) in the human brain has been shown to play an important role in early postnatal brain development. Extra-axial fluid (EA-CSF), which is characterized by the CSF in the subarachnoid space, is promising for the early detection of children at risk for neurodevelopmental disorders. Currently, though, there is no tool to extract local EA-CSF measurements in a way that is suitable for localized analysis. In this paper, we propose a novel framework for the localized, cortical surface based analysis of EA-CSF. In our proposed processing, we combine probabilistic brain tissue segmentation, cortical surface reconstruction as well as streamline based local EA-CSF quantification. For streamline computation, we employ the vector field generated by solving a Laplacian partial differential equation (PDE) between the cortical surface and the outer CSF hull. To achieve sub-voxel accuracy while minimizing numerical errors, fourth-order Runge-Kutta (RK4) integration was used to generate the streamlines. Finally, the local EA-CSF is computed by integrating the CSF probability along the generated streamlines. The proposed local EA-CSF extraction tool was used to study early postnatal brain development in typically developing infants. The results show that the proposed localized EA-CSF extraction pipeline can produce statistically significant regions that are not observed in the previous global approach.
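
    The streamline step is the part most readers will want to see spelled out: starting from a cortical surface vertex, the position is advanced through the (assumed pre-normalised) gradient field of the Laplace solution with classical RK4 steps. The sketch below is a generic illustration; field normalisation, stopping at the outer CSF hull and the CSF-probability integration are omitted.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_field(field, point):
    """Trilinearly interpolate a (3, Z, Y, X) vector field at a fractional voxel coordinate."""
    return np.array([map_coordinates(field[i], point.reshape(3, 1), order=1)[0]
                     for i in range(3)])

def rk4_streamline(field, start, step=0.25, n_steps=400):
    """Integrate one streamline from a cortical-surface point through the vector field."""
    points = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        p = points[-1]
        k1 = sample_field(field, p)
        k2 = sample_field(field, p + 0.5 * step * k1)
        k3 = sample_field(field, p + 0.5 * step * k2)
        k4 = sample_field(field, p + step * k3)
        points.append(p + (step / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4))
    return np.array(points)
```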

  11. A single-sampling hair trap for mesocarnivores

    Treesearch

    Jonathan N. Pauli; Matthew B. Hamilton; Edward B. Crain; Steven W. Buskirk

    2007-01-01

    Although techniques to analyze and quantify DNA-based data have progressed, methods to noninvasively collect samples lag behind. Samples are generally collected from devices that permit coincident sampling of multiple individuals. Because of cross-contamination, substantive genotyping errors can arise. We developed a cost-effective (US$4.60/trap) single-capture hair...

  12. Stochastic output error vibration-based damage detection and assessment in structures under earthquake excitation

    NASA Astrophysics Data System (ADS)

    Sakellariou, J. S.; Fassois, S. D.

    2006-11-01

    A stochastic output error (OE) vibration-based methodology for damage detection and assessment (localization and quantification) in structures under earthquake excitation is introduced. The methodology is intended for assessing the state of a structure following potential damage occurrence by exploiting vibration signal measurements produced by low-level earthquake excitations. It is based upon (a) stochastic OE model identification, (b) statistical hypothesis testing procedures for damage detection, and (c) a geometric method (GM) for damage assessment. The methodology's advantages include the effective use of the non-stationary and limited duration earthquake excitation, the handling of stochastic uncertainties, the tackling of the damage localization and quantification subproblems, the use of "small" size, simple and partial (in both the spatial and frequency bandwidth senses) identified OE-type models, and the use of a minimal number of measured vibration signals. Its feasibility and effectiveness are assessed via Monte Carlo experiments employing a simple simulation model of a 6-storey building. It is demonstrated that damage levels of 5% and 20% reduction in a storey's stiffness characteristics may be properly detected and assessed using noise-corrupted vibration signals.

  13. The Qiagen Investigator® Quantiplex HYres as an alternative kit for DNA quantification.

    PubMed

    Frégeau, Chantal J; Laurin, Nancy

    2015-05-01

    The Investigator® Quantiplex HYres kit was evaluated as a potential replacement for dual DNA quantification of casework samples. This kit was determined to be highly sensitive with a limit of quantification and limit of detection of 0.0049 ng/μL and 0.0003 ng/μL, respectively, for both human and male DNA, using full or half reaction volumes. It was also accurate in assessing the amount of male DNA present in 96 mock and actual casework male:female mixtures (various ratios) processed in this exercise. The close correlation between the male/human DNA ratios expressed in percentages derived from the Investigator® Quantiplex HYres quantification results and the male DNA proportion calculated in mixed AmpFlSTR® Profiler® Plus or AmpFlSTR® Identifiler® Plus profiles, using the Amelogenin Y peak and STR loci, allowed guidelines to be developed to facilitate decisions regarding when to submit samples to Y-STR rather than autosomal STR profiling. The internal control (IC) target was shown to be more sensitive to inhibitors compared to the human and male DNA targets included in the Investigator® Quantiplex HYres kit, serving as a good quality assessor of DNA extracts. The new kit met our criteria of enhanced sensitivity, accuracy, consistency, reliability and robustness for casework DNA quantification. Crown Copyright © 2015. Published by Elsevier Ireland Ltd. All rights reserved.

  14. Human error and human factors engineering in health care.

    PubMed

    Welch, D L

    1997-01-01

    Human error is inevitable. It happens in health care systems as it does in all other complex systems, and no measure of attention, training, dedication, or punishment is going to stop it. The discipline of human factors engineering (HFE) has been dealing with the causes and effects of human error since the 1940s. Originally applied to the design of increasingly complex military aircraft cockpits, HFE has since been effectively applied to the problem of human error in such diverse systems as nuclear power plants, NASA spacecraft, the process control industry, and computer software. Today the health care industry is becoming aware of the costs of human error and is turning to HFE for answers. Just as early experimental psychologists went beyond the label of "pilot error" to explain how the design of cockpits led to air crashes, today's HFE specialists are assisting the health care industry in identifying the causes of significant human errors in medicine and developing ways to eliminate or ameliorate them. This series of articles will explore the nature of human error and how HFE can be applied to reduce the likelihood of errors and mitigate their effects.

  15. Applications of integrated human error identification techniques on the chemical cylinder change task.

    PubMed

    Cheng, Ching-Min; Hwang, Sheue-Ling

    2015-03-01

    This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  16. Green method by diffuse reflectance infrared spectroscopy and spectral region selection for the quantification of sulphamethoxazole and trimethoprim in pharmaceutical formulations.

    PubMed

    da Silva, Fabiana E B; Flores, Érico M M; Parisotto, Graciele; Müller, Edson I; Ferrão, Marco F

    2016-03-01

    An alternative method for the quantification of sulphamethoxazole (SMZ) and trimethoprim (TMP) using diffuse reflectance infrared Fourier-transform spectroscopy (DRIFTS) and partial least squares regression (PLS) was developed. Interval Partial Least Squares (iPLS) and Synergy Interval Partial Least Squares (siPLS) were applied to select a spectral range that provided the lowest prediction error in comparison to the full-spectrum model. Fifteen commercial tablet formulations and forty-nine synthetic samples were used. The concentration ranges considered were 400 to 900 mg g(-1) SMZ and 80 to 240 mg g(-1) TMP. Spectral data were recorded between 600 and 4000 cm(-1) with a 4 cm(-1) resolution by diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS). The proposed procedure was compared to high performance liquid chromatography (HPLC). The results obtained from the root mean square error of prediction (RMSEP), during the validation of the models for samples of sulphamethoxazole (SMZ) and trimethoprim (TMP) using siPLS, demonstrate that this approach is a valid technique for use in quantitative analysis of pharmaceutical formulations. The selected interval algorithm allowed building regression models with smaller errors when compared to the full-spectrum PLS model. An RMSEP of 13.03 mg g(-1) for SMZ and 4.88 mg g(-1) for TMP was obtained after selection of the best spectral regions by siPLS.
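
    The RMSEP figures quoted above come from comparing model predictions on a validation set against the reference (HPLC) values. The snippet below shows the calculation with a generic PLS model on synthetic stand-in data; the interval selection performed by iPLS/siPLS is not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred).ravel()) ** 2)))

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 120))                               # rows: samples, columns: selected wavenumbers
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=40)   # stand-in reference (HPLC) values

pls = PLSRegression(n_components=5)                  # latent variables chosen by cross-validation in practice
pls.fit(X[:30], y[:30])                              # calibration set
print("RMSEP:", rmsep(y[30:], pls.predict(X[30:])))  # external validation set
```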

  17. The development of an hourly gridded rainfall product for hydrological applications in England and Wales

    NASA Astrophysics Data System (ADS)

    Liguori, Sara; O'Loughlin, Fiachra; Souvignet, Maxime; Coxon, Gemma; Freer, Jim; Woods, Ross

    2014-05-01

    This research presents a newly developed observed sub-daily gridded precipitation product for England and Wales. Importantly, our analysis specifically allows a quantification of rainfall errors from the grid to the catchment scale, useful for hydrological model simulation and the evaluation of prediction uncertainties. Our methodology involves the disaggregation of the current one-kilometre daily gridded precipitation records available for the United Kingdom [1]. The hourly product is created using information from: 1) 2000 tipping-bucket rain gauges; and 2) the United Kingdom Met Office weather radar network. These two independent datasets provide rainfall estimates at temporal resolutions much smaller than the current daily gridded rainfall product, thus allowing the disaggregation of the daily rainfall records to an hourly timestep. Our analysis is conducted for the period 2004 to 2008, limited by the current availability of the datasets. We analyse the uncertainty components affecting the accuracy of this product. Specifically, we explore how these uncertainties vary spatially, temporally and with climatic regimes. Preliminary results indicate scope for improvement of hydrological model performance by the utilisation of this new hourly gridded rainfall product. Such a product will improve our ability to diagnose and identify structural errors in hydrological modelling by including the quantification of input errors. References: [1] Keller V, Young AR, Morris D, Davies H (2006) Continuous Estimation of River Flows. Technical Report: Estimation of Precipitation Inputs. Environment Agency.
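
    The core disaggregation idea, spreading each daily gridded total over 24 hours according to the sub-daily pattern seen by nearby gauges and radar, can be sketched very simply; the function below is a schematic under assumed inputs, not the product's actual processing chain.

```python
import numpy as np

def disaggregate_daily_total(daily_total_mm, hourly_fractions):
    """Split a daily rainfall total across 24 hours using gauge/radar-derived fractions."""
    f = np.asarray(hourly_fractions, dtype=float)
    if f.sum() <= 0.0:
        f = np.full(24, 1.0 / 24.0)        # no sub-daily signal: spread uniformly
    else:
        f = f / f.sum()                    # force the fractions to sum to one
    return daily_total_mm * f

# Example: a 12 mm day where radar suggests most rain fell between 14:00 and 18:00.
pattern = np.zeros(24)
pattern[14:18] = [0.2, 0.4, 0.3, 0.1]
print(disaggregate_daily_total(12.0, pattern))
```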

  18. Definition of the limit of quantification in the presence of instrumental and non-instrumental errors. Comparison among various definitions applied to the calibration of zinc by inductively coupled plasma-mass spectrometry

    NASA Astrophysics Data System (ADS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo

    2015-12-01

    A definition of the limit of quantification (LOQ) in the presence of instrumental and non-instrumental errors is proposed. It is defined theoretically by combining the two-component variance regression and LOQ schemas already present in the literature, and it is applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should always be used, especially when a clean room is not available. Three LOQ definitions were considered: one in the concentration domain and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, was completed by adding the third-order terms in the Taylor expansion, because they are of the same order of magnitude as the second-order terms and cannot be neglected. In this context, the error propagation was simplified by eliminating the correlation contributions through the use of independent random variables. Among the signal-domain definitions, particular attention was devoted to the recently proposed approach based on at least one significant digit in the measurement; the resulting LOQ values were very large, preventing quantitative analysis. It was found that the Currie schemas in the signal and concentration domains gave similar LOQ values, but the former formulation is to be preferred as it is more easily computed.
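
    A simplified closed form of the concentration-domain idea can make the two-component variance model concrete: if the signal noise is s_y(c)^2 = s0^2 + (k*c)^2, the Currie-style condition c = factor * s_y(c) / slope can be solved for c directly. The sketch below uses that simplification and omits the Taylor-expansion and correlation terms treated in the paper; the example numbers are hypothetical.

```python
import numpy as np

def loq_two_component(s0, k, slope, factor=10.0):
    """LOQ (concentration units) for a two-component variance model.

    s0: constant (instrumental) signal noise; k: proportional noise coefficient
    (e.g. contamination-related); slope: calibration sensitivity; factor: Currie's 10.
    Solves c = factor * sqrt(s0**2 + (k*c)**2) / slope for c.
    """
    a = (factor / slope) ** 2
    denominator = 1.0 - a * k ** 2
    if denominator <= 0.0:
        raise ValueError("Proportional error too large: the LOQ is undefined.")
    return float(np.sqrt(a * s0 ** 2 / denominator))

# Hypothetical numbers: blank noise 50 counts, 2% proportional noise, slope 1.2e4 counts per unit.
print(f"LOQ = {loq_two_component(s0=50.0, k=0.02, slope=1.2e4):.4f} concentration units")
```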

  19. Quantification of Coffea arabica and Coffea canephora var. robusta concentration in blends by means of synchronous fluorescence and UV-Vis spectroscopies.

    PubMed

    Dankowska, A; Domagała, A; Kowalewski, W

    2017-09-01

    The potential of fluorescence and UV-Vis spectroscopies, as well as the low- and mid-level data fusion of both spectroscopies, for the quantification of the concentrations of roasted Coffea arabica and Coffea canephora var. robusta in coffee blends was investigated. Principal component analysis was used to reduce data multidimensionality. To calculate the level of undeclared addition, multiple linear regression (PCA-MLR) models were used, with a lowest root mean square error of calibration (RMSEC) of 3.6% and root mean square error of cross-validation (RMSECV) of 7.9%. LDA analysis was applied to fluorescence intensities and UV spectra of Coffea arabica and canephora samples, and their mixtures, in order to examine classification ability. The best performance of PCA-LDA analysis was observed for the data fusion of UV and fluorescence intensity measurements at a wavelength interval of 60 nm. LDA showed that data fusion can achieve over 96% correct classifications (sensitivity) in the test set and 100% correct classifications in the training set, with low-level data fusion. The corresponding results for the individual spectroscopies ranged from 90% (UV-Vis spectroscopy) to 77% (synchronous fluorescence) in the test set, and from 93% to 97% in the training set. The results demonstrate that fluorescence, UV, and visible spectroscopies complement each other, giving a complementary effect for the quantification of roasted Coffea arabica and Coffea canephora var. robusta concentration in blends. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Automation of a Nile red staining assay enables high throughput quantification of microalgal lipid production.

    PubMed

    Morschett, Holger; Wiechert, Wolfgang; Oldiges, Marco

    2016-02-09

    Within the context of microalgal lipid production for biofuels and bulk chemical applications, specialized higher throughput devices for small scale parallelized cultivation are expected to boost the time efficiency of phototrophic bioprocess development. However, the increasing number of possible experiments is directly coupled to the demand for lipid quantification protocols that enable reliably measuring large sets of samples within a short time and that can deal with the reduced sample volume typically generated at screening scale. To meet these demands, a dye-based assay was established using a liquid handling robot to provide reproducible high throughput quantification of lipids with minimized hands-on time. Lipid production was monitored using the fluorescent dye Nile red with dimethyl sulfoxide as solvent facilitating dye permeation. The staining kinetics of cells at different concentrations and physiological states were investigated to successfully down-scale the assay to 96 well microtiter plates. Gravimetric calibration against a well-established extractive protocol enabled absolute quantification of intracellular lipids, improving precision from ±8% to ±2% on average. Implementation into an automated liquid handling platform allows for measuring up to 48 samples within 6.5 h, reducing hands-on time to a third compared to manual operation. Moreover, it was shown that automation enhances accuracy and precision compared to manual preparation. It was revealed that established protocols relying on optical density or cell number for biomass adjustment prior to staining may suffer from errors due to significant changes of the cells' optical and physiological properties during cultivation. Alternatively, the biovolume was used as a measure for biomass concentration so that errors from morphological changes can be excluded. The newly established assay proved to be applicable for absolute quantification of algal lipids, avoiding limitations of currently established protocols, namely biomass adjustment and limited throughput. Automation was shown to improve data reliability as well as experimental throughput, simultaneously minimizing the needed hands-on time to a third. Thereby, the presented protocol meets the demands for the analysis of samples generated by the upcoming generation of devices for higher throughput phototrophic cultivation and thereby contributes to boosting the time efficiency for setting up algae lipid production processes.

  1. Quantification of breast density with spectral mammography based on a scanned multi-slit photon-counting detector: a feasibility study.

    PubMed

    Ding, Huanjun; Molloi, Sabee

    2012-08-07

    A simple and accurate measurement of breast density is crucial for the understanding of its impact in breast cancer risk models. The feasibility of quantifying volumetric breast density with a photon-counting spectral mammography system was investigated using both computer simulations and physical phantom studies. A computer simulation model involving polyenergetic spectra from a tungsten anode x-ray tube and a Si-based photon-counting detector was evaluated for breast density quantification. The figure-of-merit (FOM), which was defined as the signal-to-noise ratio of the dual energy image with respect to the square root of mean glandular dose, was chosen to optimize the imaging protocols, in terms of tube voltage and splitting energy. A scanning multi-slit photon-counting spectral mammography system was employed in the experimental study to quantitatively measure breast density using dual energy decomposition with glandular and adipose equivalent phantoms of uniform thickness. Four different phantom studies were designed to evaluate the accuracy of the technique, each of which addressed one specific variable in the phantom configurations, including thickness, density, area and shape. In addition to the standard calibration fitting function used for dual energy decomposition, a modified fitting function has been proposed, which incorporated the tube voltage used in the imaging task as a third variable in the dual energy decomposition. For an average-sized 4.5 cm thick breast, the FOM was maximized with a tube voltage of 46 kVp and a splitting energy of 24 keV. To be consistent with the tube voltage used in current clinical screening exams (∼32 kVp), the optimal splitting energy was proposed to be 22 keV, which offered a FOM greater than 90% of the optimal value. In the experimental investigation, the root-mean-square (RMS) error in breast density quantification for all four phantom studies was estimated to be approximately 1.54% using the standard calibration function. The results from the modified fitting function, which integrated the tube voltage as a variable in the calibration, indicated an RMS error of approximately 1.35% for all four studies. The results of the current study suggest that photon-counting spectral mammography systems may potentially be implemented for an accurate quantification of volumetric breast density, with an RMS error of less than 2%, using the proposed dual energy imaging technique.
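
    The "calibration fitting function" mentioned above maps the low- and high-energy detector signals (and, in the modified version, the tube voltage) to equivalent glandular thickness. The least-squares fit below is a simplified stand-in with an assumed second-order polynomial form, not the paper's actual function.

```python
import numpy as np

def fit_dual_energy_calibration(s_low, s_high, t_glandular, kvp=None):
    """Fit t_glandular = f(s_low, s_high[, kVp]) with a second-order polynomial surface."""
    columns = [np.ones_like(s_low), s_low, s_high,
               s_low ** 2, s_high ** 2, s_low * s_high]
    if kvp is not None:                       # "modified" fit: tube voltage as a third variable
        columns.append(kvp)
    design = np.column_stack(columns)
    coefficients, *_ = np.linalg.lstsq(design, t_glandular, rcond=None)
    return coefficients
```

    Volumetric breast density for each pixel would then follow as the fitted glandular thickness divided by the total breast thickness.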

  2. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve errors (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves, and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast strongly depending on the site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating between systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
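
    A stripped-down Monte Carlo version of the propagation step helps fix ideas: perturb the observed stages with systematic (shared) and non-systematic (independent) errors and push each realisation through a rating curve. The power-law curve and error magnitudes below are hypothetical, and the rating curve's own parametric and structural errors, which the full method also carries, are ignored here.

```python
import numpy as np

rng = np.random.default_rng(42)

def rating_curve(stage_m, a=35.0, h0=0.12, b=1.6):
    """Hypothetical power-law rating curve Q = a * (h - h0)**b, in SI units."""
    return a * np.clip(stage_m - h0, 0.0, None) ** b

h_obs = np.array([0.45, 0.62, 0.80, 1.10])                 # observed stages (m), illustrative
n = 10_000
systematic = rng.normal(0.0, 0.005, size=(n, 1))           # gauge-calibration error, shared in time
random_err = rng.normal(0.0, 0.010, size=(n, h_obs.size))  # resolution, precision, waves

q_samples = rating_curve(h_obs + systematic + random_err)
q_low, q_high = np.percentile(q_samples, [2.5, 97.5], axis=0)
print(np.round(q_low, 2), np.round(q_high, 2))             # 95% streamflow intervals per time step
```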

  3. Data-Independent MS/MS Quantification of Neuropeptides for Determination of Putative Feeding-Related Neurohormones in Microdialysate

    PubMed Central

    2015-01-01

    Food consumption is an important behavior that is regulated by an intricate array of neuropeptides (NPs). Although many feeding-related NPs have been identified in mammals, precise mechanisms are unclear and difficult to study in mammals, as current methods are not highly multiplexed and require extensive a priori knowledge about analytes. New advances in data-independent acquisition (DIA) MS/MS and the open-source quantification software Skyline have opened up the possibility to identify hundreds of compounds and quantify them from a single DIA MS/MS run. An untargeted DIA MS(E) quantification method using Skyline software for multiplexed, discovery-driven quantification was developed and found to produce linear calibration curves for peptides at physiologically relevant concentrations using a protein digest as internal standard. By using this method, preliminary relative quantification of the crab Cancer borealis neuropeptidome (<2 kDa, 137 peptides from 18 families) was possible in microdialysates from 8 replicate feeding experiments. Of these NPs, 55 were detected with an average mass error below 10 ppm. The time-resolved profiles of relative concentration changes for 6 are shown, and there is great potential for the use of this method in future experiments to aid in correlation of NP changes with behavior. This work presents an unbiased approach to winnowing candidate NPs related to a behavior of interest in a functionally relevant manner, and demonstrates the success of such a UPLC-MS(E) quantification method using the open source software Skyline. PMID:25552291

  4. Data-independent MS/MS quantification of neuropeptides for determination of putative feeding-related neurohormones in microdialysate.

    PubMed

    Schmerberg, Claire M; Liang, Zhidan; Li, Lingjun

    2015-01-21

    Food consumption is an important behavior that is regulated by an intricate array of neuropeptides (NPs). Although many feeding-related NPs have been identified in mammals, precise mechanisms are unclear and difficult to study in mammals, as current methods are not highly multiplexed and require extensive a priori knowledge about analytes. New advances in data-independent acquisition (DIA) MS/MS and the open-source quantification software Skyline have opened up the possibility to identify hundreds of compounds and quantify them from a single DIA MS/MS run. An untargeted DIA MS(E) quantification method using Skyline software for multiplexed, discovery-driven quantification was developed and found to produce linear calibration curves for peptides at physiologically relevant concentrations using a protein digest as internal standard. By using this method, preliminary relative quantification of the crab Cancer borealis neuropeptidome (<2 kDa, 137 peptides from 18 families) was possible in microdialysates from 8 replicate feeding experiments. Of these NPs, 55 were detected with an average mass error below 10 ppm. The time-resolved profiles of relative concentration changes for 6 are shown, and there is great potential for the use of this method in future experiments to aid in correlation of NP changes with behavior. This work presents an unbiased approach to winnowing candidate NPs related to a behavior of interest in a functionally relevant manner, and demonstrates the success of such a UPLC-MS(E) quantification method using the open source software Skyline.

  5. Human error in airway facilities.

    DOT National Transportation Integrated Search

    2001-01-01

    This report examines human errors in Airway Facilities (AF) with the intent of preventing these errors from being passed on to the new Operations Control Centers. To effectively manage errors, they first have to be identified. Human factors engin...

  6. Sources of uncertainty in the quantification of genetically modified oilseed rape contamination in seed lots.

    PubMed

    Begg, Graham S; Cullen, Danny W; Iannetta, Pietro P M; Squire, Geoff R

    2007-02-01

    Testing of seed and grain lots is essential in the enforcement of GM labelling legislation and needs reliable procedures for which associated errors have been identified and minimised. In this paper we consider the testing of oilseed rape seed lots obtained from the harvest of a non-GM crop known to be contaminated by volunteer plants from a GM herbicide tolerant variety. The objective was to identify and quantify the error associated with the testing of these lots from the initial sampling to completion of the real-time PCR assay with which the level of GM contamination was quantified. The results showed that, under the controlled conditions of a single laboratory, the error associated with the real-time PCR assay to be negligible in comparison with sampling error, which was exacerbated by heterogeneity in the distribution of GM seeds, most notably at a small scale, i.e. 25 cm3. Sampling error was reduced by one to two thirds on the application of appropriate homogenisation procedures.

  7. Accounting for uncertainty in DNA sequencing data.

    PubMed

    O'Rawe, Jason A; Ferson, Scott; Lyon, Gholson J

    2015-02-01

    Science is defined in part by an honest exposition of the uncertainties that arise in measurements and propagate through calculations and inferences, so that the reliabilities of its conclusions are made apparent. The recent rapid development of high-throughput DNA sequencing technologies has dramatically increased the number of measurements made at the biochemical and molecular level. These data come from many different DNA-sequencing technologies, each with their own platform-specific errors and biases, which vary widely. Several statistical studies have tried to measure error rates for basic determinations, but there are no general schemes to project these uncertainties so as to assess the surety of the conclusions drawn about genetic, epigenetic, and more general biological questions. We review here the state of uncertainty quantification in DNA sequencing applications, describe sources of error, and propose methods that can be used for accounting and propagating these errors and their uncertainties through subsequent calculations. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Prefrontal vulnerabilities and whole brain connectivity in aging and depression.

    PubMed

    Lamar, Melissa; Charlton, Rebecca A; Ajilore, Olusola; Zhang, Aifeng; Yang, Shaolin; Barrick, Thomas R; Rhodes, Emma; Kumar, Anand

    2013-07-01

    Studies exploring the underpinnings of age-related neurodegeneration suggest fronto-limbic alterations that are increasingly vulnerable in the presence of disease including late life depression. Less work has assessed the impact of this specific vulnerability on widespread brain circuitry. Seventy-nine older adults (healthy controls=45; late life depression=34) completed translational tasks shown in non-human primates to rely on fronto-limbic networks involving dorsolateral (Self-Ordered Pointing Task) or orbitofrontal (Object Alternation Task) cortices. A sub-sample of participants also completed diffusion tensor imaging for white matter tract quantification (uncinate and cingulum bundle; n=58) and whole brain tract-based spatial statistics (n=62). Despite task associations to specific white matter tracts across both groups, only healthy controls demonstrated significant correlations between widespread tract integrity and cognition. Thus, increasing Object Alternation Task errors were associated with decreasing fractional anisotropy in the uncinate in late life depression; however, only in healthy controls was the uncinate incorporated into a larger network of white matter vulnerability associating fractional anisotropy with Object Alternation Task errors using whole brain tract-based spatial statistics. It appears that the whole brain impact of specific fronto-limbic vulnerabilities in aging may be eclipsed in the presence of disease-specific neuropathology like that seen in late life depression. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    PubMed

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
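
    Readers unfamiliar with TOPSIS may find a crisp (non-fuzzy) version of the ranking core useful; the fuzzy variant used in the study replaces the crisp scores with triangular fuzzy numbers and fuzzy distances, but the closeness-coefficient logic is the same. The scores, weights and criteria below are hypothetical.

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Crisp TOPSIS: rank alternatives by closeness to the ideal solution.

    scores: (n_alternatives, n_criteria) decision matrix; weights: criterion weights;
    benefit: True for benefit criteria, False for cost criteria.
    """
    normalised = scores / np.linalg.norm(scores, axis=0)     # vector normalisation
    weighted = normalised * weights
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti_ideal = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    d_pos = np.linalg.norm(weighted - ideal, axis=1)
    d_neg = np.linalg.norm(weighted - anti_ideal, axis=1)
    return d_neg / (d_pos + d_neg)                           # closeness coefficient, higher = better

scores = np.array([[7.0, 6.0, 8.0, 5.0],     # hypothetical error factors scored on four criteria
                   [5.0, 7.0, 6.0, 6.0],
                   [8.0, 5.0, 7.0, 7.0]])
cc = topsis(scores, weights=np.array([0.3, 0.2, 0.3, 0.2]),
            benefit=np.array([True, True, True, True]))
print(np.argsort(cc)[::-1])                  # factors ordered by priority for improvement
```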

  10. A novel liquid chromatography-tandem mass spectrometry method for determination of menadione in human plasma after derivatization with 3-mercaptopropionic acid.

    PubMed

    Liu, Ruijuan; Wang, Mengmeng; Ding, Li

    2014-10-01

    Menadione (VK3), an essential fat-soluble naphthoquinone, plays very important physiological and pathological roles, but its detection and quantification are challenging. Herein, a new method was developed for quantification of VK3 in human plasma by liquid chromatography-tandem mass spectrometry (LC-MS/MS) after derivatization with 3-mercaptopropionic acid via a Michael addition reaction. The derivative was identified from the mass spectra and the derivatization conditions were optimized by considering different parameters. The method demonstrated high sensitivity, with a low limit of quantification of 0.03 ng mL(-1) for VK3, which is about 33-fold better than that for the direct analysis of the underivatized compound. The method also had good precision and reproducibility. It was applied to the determination of basal VK3 in human plasma and a clinical pharmacokinetic study of menadiol sodium diphosphate. Furthermore, this is the first report of the quantification of VK3 using LC-MS/MS, and it will provide an important strategy for further research on VK3 and menadione analogs. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Validation of a commercially available enzyme-linked immunoabsorbent assay for the quantification of human α-Synuclein in cerebrospinal fluid.

    PubMed

    Kruse, Niels; Mollenhauer, Brit

    2015-11-01

    The quantification of α-Synuclein in cerebrospinal fluid (CSF) as a biomarker has gained tremendous interest in recent years. Several commercially available immunoassays are emerging. We here describe the full validation of one commercially available ELISA assay for the quantification of α-Synuclein in human CSF (Covance alpha-Synuclein ELISA kit). The study was conducted within the BIOMARKAPD project in the European initiative Joint Program for Neurodegenerative Diseases (JPND). We investigated the effect of several pre-analytical and analytical confounders, i.e. (1) need for centrifugation of freshly drawn CSF, (2) sample stability, (3) delay of freezing, (4) volume of storage aliquots, (5) freeze/thaw cycles, (6) thawing conditions, (7) dilution linearity, (8) parallelism, (9) spike recovery, and (10) precision. None of these confounders influenced the levels of α-Synuclein in CSF significantly. We found a very high intra-assay precision. The inter-assay precision was lower than expected due to different performances of the kit lots used. Overall, the validated immunoassay is useful for the quantification of α-Synuclein in human CSF. Copyright © 2015 Elsevier B.V. All rights reserved.
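
    Two of the validation quantities above, assay precision and spike recovery, reduce to one-line calculations once the replicate readings are in hand; the helper functions below use hypothetical numbers purely for illustration.

```python
import numpy as np

def cv_percent(replicates):
    """Coefficient of variation (%) of replicate measurements."""
    values = np.asarray(replicates, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

def spike_recovery_percent(measured_spiked, measured_neat, amount_spiked):
    """Spike recovery (%) for a sample spiked with a known amount of analyte."""
    return 100.0 * (measured_spiked - measured_neat) / amount_spiked

print(f"intra-assay CV = {cv_percent([1510.0, 1475.0, 1530.0]):.1f}%")          # hypothetical readings
print(f"spike recovery = {spike_recovery_percent(2310.0, 1500.0, 800.0):.1f}%")  # hypothetical readings
```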

  12. Model Update of a Micro Air Vehicle (MAV) Flexible Wing Frame with Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.; Waszak, Martin R.; Morgan, Benjamin G.

    2004-01-01

    This paper describes a procedure to update parameters in the finite element model of a Micro Air Vehicle (MAV) to improve displacement predictions under aerodynamic loads. Because of fabrication, materials, and geometric uncertainties, a statistical approach combined with Multidisciplinary Design Optimization (MDO) is used to modify key model parameters. Static test data collected using photogrammetry are used to correlate with model predictions. Results show significant improvements in model predictions after parameters are updated; however, computed probability values indicate low confidence in the updated values and/or the presence of model structure errors. Lessons learned in the areas of wing design, test procedures, modeling approaches with geometric nonlinearities, and uncertainty quantification are all documented.

  13. Absolute measurement of species differences in sodium taurocholate cotransporting polypeptide (NTCP/Ntcp) and its modulation in cultured hepatocytes.

    PubMed

    Qiu, Xi; Bi, Yi-An; Balogh, Larissa M; Lai, Yurong

    2013-09-01

    Species differences among membrane transporters can be remarkable and difficult to properly assess by conventional methods. Herein, we employed the first use of stable isotope labeling in mammals or stable isotope-labeled peptides combined with mass spectrometry to identify species differences in sodium taurocholate cotransporting polypeptide (NTCP/Ntcp) protein expression in liver tissue and to characterize the modulation of protein expression in sandwich-cultured human (SCHH) and rat hepatocytes (SCRH). The lower limit of quantification was established to be 5 fmol on column, with a standard curve that was linear up to 2000 fmol. The accuracy and precision were evaluated with three quality control samples and known amounts of synthetic proteotypic peptides that were spiked into the membrane protein extracts. The overall relative error and coefficient of variation were less than 10%. The expression of Ntcp in mouse and rat was significantly higher than that in human (five-fold) and monkey (two-fold), and ranked as mouse > rat > monkey > human. In the cultured hepatocytes, although significant downregulation of Ntcp expression in SCRH was detected at day 5 of culture, NTCP expression in SCHH was comparable to that in suspension hepatocytes. The results suggest that NTCP/Ntcp modulation in cultured hepatocytes is species specific. Copyright © 2013 Wiley Periodicals, Inc.

  14. Determination of moclobemide in human plasma by high-performance liquid chromatography with spectrophotometric detection.

    PubMed

    Amini, Hossein; Shahmir, Badri; Ahmadiani, Abolhassan

    2004-08-05

    A simple and sensitive high-performance liquid chromatographic (HPLC) method with spectrophotometric detection was developed for the determination of moclobemide in human plasma. Plasma samples were extracted under basic conditions with dichloromethane, followed by back-extraction into dilute phosphoric acid. Isocratic separation was performed on an ODS column (250 mm × 4.6 mm, 5 μm) at room temperature. The mobile phase consisted of 5 mM NaH2PO4-acetonitrile-triethylamine (1000:350:10, v/v/v; pH 3.4). Analyses were run at a flow rate of 1.0 ml/min, and ultraviolet (UV) detection was carried out at 240 nm. The method was specific and sensitive, with a quantification limit of 15.6 ng/ml and a detection limit of 5 ng/ml at a signal-to-noise ratio of 3:1. The mean absolute recovery was about 98.2%, while the intra- and inter-day coefficients of variation and percent error values of the assay were all at acceptable levels. Linearity was assessed in the range of 15.6-2000 ng/ml in plasma, with a correlation coefficient greater than 0.999. This method has been used to analyze several hundred human plasma samples for bioavailability studies.

  15. Comparison of a New Cobinamide-Based Method to a Standard Laboratory Method for Measuring Cyanide in Human Blood

    PubMed Central

    Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R.; Hammer, Gregory B.; Schulman, Scott R.; Zajicek, Anne; Jett, David A.; Boss, Gerry R.

    2013-01-01

    Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories. A simple method is needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog, cobinamide, for cyanide and the major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, and the findings include a mean inter-assay accuracy of 99.1%, a precision of 8.75%, and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside, and it yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%). In all 19 samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by the standard error of the estimate or a paired t-test. Differences in results between the two methods may have arisen because samples were assayed at different times and on different sample types. The cobinamide-based method is applicable to human blood and can be used in hospital laboratories and emergency rooms. PMID:23653045
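
    The agreement checks mentioned above (linear regression, the standard error of the estimate, and a paired t-test) can be sketched as follows; the paired cyanide values are invented, not the study's measurements.

    ```python
    # Sketch of a two-method agreement analysis: ordinary linear regression,
    # standard error of the estimate, and a paired t-test. Data are made up.
    import numpy as np
    from scipy import stats

    cobinamide = np.array([4.1, 6.8, 9.5, 12.2, 20.4, 33.0, 41.8])  # µM, hypothetical
    reference  = np.array([3.8, 7.2, 9.1, 13.0, 19.5, 35.1, 40.2])  # µM, hypothetical

    # Linear regression of one method on the other (slope ~1, intercept ~0 = agreement)
    res = stats.linregress(reference, cobinamide)
    print(f"slope={res.slope:.2f}, intercept={res.intercept:.2f}, r={res.rvalue:.3f}")

    # Standard error of the estimate: scatter of points about the fitted line
    pred = res.intercept + res.slope * reference
    see = np.sqrt(np.sum((cobinamide - pred) ** 2) / (len(reference) - 2))
    print(f"standard error of the estimate = {see:.2f} µM")

    # Paired t-test for a systematic difference between the two methods
    t, p = stats.ttest_rel(cobinamide, reference)
    print(f"paired t-test: t={t:.2f}, p={p:.3f}")
    ```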

  16. Error quantification of a high-resolution coupled hydrodynamic-ecosystem coastal-ocean model: Part 2. Chlorophyll-a, nutrients and SPM

    NASA Astrophysics Data System (ADS)

    Allen, J. Icarus; Holt, Jason T.; Blackford, Jerry; Proctor, Roger

    2007-12-01

    Marine systems models are becoming increasingly complex and sophisticated, but far too little attention has been paid to model errors and the extent to which model outputs actually relate to ecosystem processes. Here we describe the application of summary error statistics to a complex 3D model (POLCOMS-ERSEM) run for the period 1988-1989 in the southern North Sea, utilising information from the North Sea Project, which collected a wealth of observational data. We demonstrate that to understand model-data misfit and the mechanisms creating errors, we need to use a hierarchy of techniques, including simple correlations, model bias, model efficiency, binary discriminator analysis, and the distribution of model errors, to assess model errors spatially and temporally. We also demonstrate that a linear cost function is an inappropriate measure of misfit. This analysis indicates that the model has some skill for all variables analysed. A summary plot of model performance indicates that model performance deteriorates as we move through the ecosystem from the physics to the nutrients and plankton.
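
    For reference, the summary misfit statistics named above can be computed as in the following sketch (bias, correlation, Nash-Sutcliffe model efficiency, and a linear cost function); the observation and model values are invented.

    ```python
    # Sketch of summary misfit statistics for one model variable.
    # Observations and model values are invented, not North Sea Project data.
    import numpy as np

    obs   = np.array([1.2, 2.5, 3.1, 4.8, 2.2, 1.9, 3.6])   # e.g. chlorophyll-a
    model = np.array([1.0, 2.9, 2.7, 5.5, 2.0, 2.4, 3.1])

    bias = np.mean(model - obs)
    corr = np.corrcoef(model, obs)[0, 1]

    # Model efficiency (Nash-Sutcliffe): 1 = perfect, <0 = worse than the obs mean
    me = 1.0 - np.sum((model - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Linear cost function: mean absolute error scaled by the observation std
    # (the measure the paper argues is an inappropriate gauge of misfit)
    cost = np.mean(np.abs(model - obs)) / obs.std(ddof=1)

    print(f"bias={bias:.2f}  r={corr:.2f}  efficiency={me:.2f}  cost={cost:.2f}")
    ```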

  17. Information systems and human error in the lab.

    PubMed

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that the introduction of these systems leaves operators with less practice in dealing with unexpected events, or deskilled in problem solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; an understanding of the origin of errors, since this knowledge can be used to reduce their occurrence; the design of systems that are forgiving to the operator by absorbing errors, at least for a time; the application of what industrial psychologists know about writing operating procedures and instructions in ways that reduce the probability of error, expertise that is hardly ever put to use in the laboratory; and a feedback mechanism designed into the system that enables the operator to recognize in real time that an error has occurred.

  18. Objectified quantification of uncertainties in Bayesian atmospheric inversions

    NASA Astrophysics Data System (ADS)

    Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.

    2015-05-01

    Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator representing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization over a large set of plausible errors that can be prescribed in the system. The marginalization consists in computing inversions for all possible error distributions, weighted by the probability of occurrence of each error distribution. The posterior distribution of the fluxes calculated by the marginalization cannot be described explicitly. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and robustness of the method are tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia. Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices. An in-depth interpretation of the inversion results is then possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.
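
    A toy sketch of the marginalization idea is given below: analytical inversions are computed for many sampled error covariances and averaged with weights proportional to the marginal likelihood of the observations. The observation operator, prior, and observations are invented, and the sketch omits the aggregation and sampling-protocol choices described in the abstract.

    ```python
    # Toy Monte Carlo marginalization over uncertain error statistics in a
    # linear Bayesian inversion. All numbers (H, prior, observations) are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    H = np.array([[1.0, 0.4],
                  [0.2, 1.1],
                  [0.7, 0.7]])            # observation operator (3 obs, 2 fluxes)
    x_b = np.array([10.0, 5.0])           # prior fluxes
    y = np.array([13.0, 8.5, 11.0])       # observations

    def analytic_inversion(b_scale, r_scale):
        B = b_scale * np.eye(2)                     # prior-error covariance
        R = r_scale * np.eye(3)                     # observation-error covariance
        S = H @ B @ H.T + R
        K = B @ H.T @ np.linalg.inv(S)              # Kalman-type gain
        x_a = x_b + K @ (y - H @ x_b)               # posterior fluxes
        innov = y - H @ x_b
        # log of the marginal likelihood of y under these error statistics
        loglik = -0.5 * (innov @ np.linalg.solve(S, innov) + np.log(np.linalg.det(S)))
        return x_a, loglik

    # Monte Carlo sample over the "set of plausible errors"
    b_samples = rng.uniform(0.5, 10.0, 500)
    r_samples = rng.uniform(0.1, 5.0, 500)
    posteriors, logliks = zip(*(analytic_inversion(b, r)
                                for b, r in zip(b_samples, r_samples)))

    w = np.exp(np.array(logliks) - max(logliks))    # likelihood weights
    w /= w.sum()
    x_marginal = (np.array(posteriors) * w[:, None]).sum(axis=0)
    print("marginalized posterior fluxes:", np.round(x_marginal, 2))
    ```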

  19. Simultaneous quantification of α-lactalbumin and β-casein in human milk using ultra-performance liquid chromatography with tandem mass spectrometry based on their signature peptides and winged isotope internal standards.

    PubMed

    Chen, Qi; Zhang, Jingshun; Ke, Xing; Lai, Shiyun; Li, Duo; Yang, Jinchuan; Mo, Weimin; Ren, Yiping

    2016-09-01

    In recent years, there has been an increasing need to measure the concentration of individual proteins in human milk, instead of total human milk protein. Due to the lack of human milk protein standards, only a few quantification methods have been established. The objective of the present work was to develop a simple and rapid quantification method for the simultaneous determination of α-lactalbumin and β-casein in human milk using signature peptides, according to a modified quantitative proteomics strategy. The internal standards containing the signature peptide sequences were synthesized with isotope-labeled amino acids. The purity of the synthesized peptide standards was determined by an amino acid analysis method and an area normalization method. The contents of α-lactalbumin and β-casein in human milk were measured according to the equimolar relationship between the two proteins and their corresponding signature peptides. The method validation results showed satisfactory linearity (R² > 0.99) and recoveries (97.2-102.5% for α-lactalbumin and 99.5-100.3% for β-casein). The limits of quantification for α-lactalbumin and β-casein were 8.0 mg/100 g and 1.2 mg/100 g, respectively. CVs for α-lactalbumin and β-casein in human milk were 5.2% and 3.0%. The contents of α-lactalbumin and β-casein in 147 human milk samples were successfully determined by the established method and ranged from 205.5 to 578.2 mg/100 g and from 116.4 to 467.4 mg/100 g, respectively, at different lactation stages. The developed method allows the simultaneous determination of α-lactalbumin and β-casein in human milk. The quantitative strategy based on signature peptides should be applicable to other endogenous proteins in breast milk and other body fluids. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Polarization-sensitive optical coherence tomography-based imaging, parameterization, and quantification of human cartilage degeneration

    NASA Astrophysics Data System (ADS)

    Brill, Nicolai; Wirtz, Mathias; Merhof, Dorit; Tingart, Markus; Jahr, Holger; Truhn, Daniel; Schmitt, Robert; Nebelung, Sven

    2016-07-01

    Polarization-sensitive optical coherence tomography (PS-OCT) is a light-based, high-resolution, real-time, noninvasive, and nondestructive imaging modality yielding quasimicroscopic cross-sectional images of cartilage. As yet, comprehensive parameterization and quantification of birefringence and tissue properties have not been performed on human cartilage. PS-OCT and algorithm-based image analysis were used to objectively grade human cartilage degeneration in terms of surface irregularity, tissue homogeneity, signal attenuation, as well as birefringence coefficient and band width, height, depth, and number. Degeneration-dependent changes were noted only for the first three parameters, thereby questioning the diagnostic value of PS-OCT in the assessment of human cartilage degeneration.

  1. Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors

    NASA Astrophysics Data System (ADS)

    Marti, Alejandro; Folch, Arnau

    2018-03-01

    Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at the specified coupling intervals. Despite the concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with the offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors by employing different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those from an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70 % of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally efficient online dispersal models.

  2. Accuracy of 1H magnetic resonance spectroscopy for quantification of 2-hydroxyglutarate using linear combination and J-difference editing at 9.4T.

    PubMed

    Neuberger, Ulf; Kickingereder, Philipp; Helluy, Xavier; Fischer, Manuel; Bendszus, Martin; Heiland, Sabine

    2017-12-01

    Non-invasive detection of 2-hydroxyglutarate (2HG) by magnetic resonance spectroscopy is attractive since it is related to tumor metabolism. Here, we compare the detection accuracy of 2HG in a controlled phantom setting via widely used localized spectroscopy sequences quantified by linear combination of metabolite signals vs. a more complex approach applying a J-difference editing technique at 9.4T. Different phantoms, composed of a concentration series of 2HG and overlapping brain metabolites, were measured with an optimized point-resolved-spectroscopy sequence (PRESS) and an in-house developed J-difference editing sequence. The acquired spectra were post-processed with LCModel and a simulated metabolite set (PRESS) or with a quantification formula for J-difference editing. Linear regression analysis demonstrated a high correlation of real 2HG values with those measured with the PRESS method (adjusted R-squared: 0.700, p<0.001) as well as with those measured with the J-difference editing method (adjusted R-squared: 0.908, p<0.001). The regression model based on the J-difference editing method, however, had a significantly higher explanatory value than the regression model based on the PRESS method (p<0.0001). Moreover, with J-difference editing 2HG was discernible down to 1 mM, whereas with the PRESS method 2HG values were not discernible below 2 mM and showed higher systematic errors, particularly in phantoms with high concentrations of N-acetyl-aspartate (NAA) and glutamate (Glu). In summary, quantification of 2HG with linear combination of metabolite signals shows high systematic errors, particularly at low 2HG concentrations and high concentrations of confounding metabolites such as NAA and Glu. In contrast, J-difference editing offers more accurate quantification even at low 2HG concentrations, which outweighs the downsides of longer measurement time and more complex post-processing. Copyright © 2017. Published by Elsevier GmbH.

  3. Stochastic Models of Human Errors

    NASA Technical Reports Server (NTRS)

    Elshamy, Maged; Elliott, Dawn M. (Technical Monitor)

    2002-01-01

    Humans play an important role in the overall reliability of engineering systems. More often than not, accidents and system failures are traced to human errors. Therefore, in order to have a meaningful system risk analysis, the reliability of the human element must be taken into consideration. Describing the human error process by mathematical models is key to analyzing contributing factors. Therefore, the objective of this research effort is to establish stochastic models, substantiated by a sound theoretic foundation, to address the occurrence of human errors in Space Shuttle processing.
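
    One simple stochastic formulation of error occurrence, offered here only as an illustration of the kind of model the abstract refers to, treats errors during a task as a homogeneous Poisson process; the rate below is hypothetical, not a Shuttle-processing estimate.

    ```python
    # Homogeneous Poisson model of human error occurrence during a task.
    # The error rate is purely illustrative.
    import math

    error_rate_per_hour = 0.002      # assumed mean rate of error occurrence
    task_duration_hours = 40.0

    expected_errors = error_rate_per_hour * task_duration_hours
    p_at_least_one = 1.0 - math.exp(-expected_errors)        # P(N >= 1) for Poisson N
    p_exactly_two = math.exp(-expected_errors) * expected_errors ** 2 / math.factorial(2)

    print(f"expected errors: {expected_errors:.3f}")
    print(f"P(at least one error): {p_at_least_one:.3f}")
    print(f"P(exactly two errors): {p_exactly_two:.5f}")
    ```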

  4. Operational Interventions to Maintenance Error

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Dulchinos, VIcki

    1997-01-01

    A significant proportion of aviation accidents and incidents are known to be tied to human error. However, research of flight operational errors has shown that so-called pilot error often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: 1) to develop human factors interventions which are directly supported by reliable human error data, and 2) to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  5. Reduction of Maintenance Error Through Focused Interventions

    NASA Technical Reports Server (NTRS)

    Kanki, Barbara G.; Walter, Diane; Rosekind, Mark R. (Technical Monitor)

    1997-01-01

    It is well known that a significant proportion of aviation accidents and incidents are tied to human error. In flight operations, research of operational errors has shown that so-called "pilot error" often involves a variety of human factors issues and not a simple lack of individual technical skills. In aircraft maintenance operations, there is similar concern that maintenance errors which may lead to incidents and accidents are related to a large variety of human factors issues. Although maintenance error data and research are limited, industry initiatives involving human factors training in maintenance have become increasingly accepted as one type of maintenance error intervention. Conscientious efforts have been made in re-inventing the "team" concept for maintenance operations and in tailoring programs to fit the needs of technical operations. Nevertheless, there remains a dual challenge: to develop human factors interventions which are directly supported by reliable human error data, and to integrate human factors concepts into the procedures and practices of everyday technical tasks. In this paper, we describe several varieties of human factors interventions and focus on two specific alternatives which target problems related to procedures and practices; namely, 1) structured on-the-job training and 2) procedure re-design. We hope to demonstrate that the key to leveraging the impact of these solutions comes from focused interventions; that is, interventions which are derived from a clear understanding of specific maintenance errors, their operational context and human factors components.

  6. Testing 3D landform quantification methods with synthetic drumlins in a real digital elevation model

    NASA Astrophysics Data System (ADS)

    Hillier, John K.; Smith, Mike J.

    2012-06-01

    Metrics such as height and volume quantifying the 3D morphology of landforms are important observations that reflect and constrain Earth surface processes. Errors in such measurements are, however, poorly understood. A novel approach, using statistically valid 'synthetic' landscapes to quantify the errors, is presented. The utility of the approach is illustrated using a case study of 184 drumlins observed in Scotland as quantified from a Digital Elevation Model (DEM) by the 'cookie cutter' extraction method. To create the synthetic DEMs, observed drumlins were removed from the measured DEM and replaced by elongate 3D Gaussian ones of equivalent dimensions, positioned randomly with respect to the 'noise' (e.g. trees) and regional trends (e.g. hills) that cause the errors. Then, errors in the cookie cutter extraction method were investigated by using it to quantify these 'synthetic' drumlins, whose location and size are known. Thus, the approach determines which key metrics are recovered accurately. For example, a mean height of 6.8 m is recovered poorly, at 12.5 ± 0.6 (2σ) m, but mean volume is recovered correctly. Additionally, quantification methods can be compared: a variant on the cookie cutter using an un-tensioned spline induced about twice (× 1.79) as much error. Finally, a previously reported statistically significant (p = 0.007) difference in mean volume between sub-populations of different ages, which may reflect formational processes, is demonstrated to be only 30-50% likely to exist in reality. Critically, the synthetic DEMs are demonstrated to realistically model parameter recovery, primarily because they are still almost entirely the original landscape. Results are insensitive to the exact method used to create the synthetic DEMs, and the approach could be readily adapted to assess a variety of landforms (e.g. craters, dunes and volcanoes).
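
    The core of the synthetic-DEM approach can be sketched as follows: add an elongate 3D Gaussian 'drumlin' of known height and volume to a noisy, trending surface, extract it again with a crude cookie-cutter footprint, and compare recovered with true metrics. The grid, noise level, regional trend, and drumlin dimensions below are invented.

    ```python
    # Sketch: insert a synthetic Gaussian drumlin into a noisy, sloping DEM,
    # then recover its height/volume with a crude cookie-cutter extraction.
    import numpy as np

    rng = np.random.default_rng(1)
    nx, ny, cell = 200, 200, 5.0                         # 5 m DEM cells
    x, y = np.meshgrid(np.arange(nx) * cell, np.arange(ny) * cell)

    # background landscape: gentle regional slope + short-wavelength noise (trees etc.)
    dem = 0.01 * x + rng.normal(0.0, 0.5, size=x.shape)

    # synthetic elongate Gaussian drumlin with known height and footprint
    h_true, sx, sy, x0, y0 = 6.8, 120.0, 40.0, 500.0, 500.0
    drumlin = h_true * np.exp(-(((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2) / 2.0)
    dem_with_drumlin = dem + drumlin
    vol_true = drumlin.sum() * cell ** 2

    # crude "cookie cutter": clip a rectangular footprint around the drumlin and
    # subtract a single basal level; the regional trend and noise left in by this
    # simple step are exactly the error sources such experiments quantify
    win = (np.abs(x - x0) < 3 * sx) & (np.abs(y - y0) < 3 * sy)
    base = np.median(dem_with_drumlin[win & (drumlin < 0.1)])
    relief = np.clip(dem_with_drumlin[win] - base, 0.0, None)

    h_rec = relief.max()
    vol_rec = relief.sum() * cell ** 2
    print(f"height: true {h_true:.1f} m, recovered {h_rec:.1f} m")
    print(f"volume: true {vol_true:.0f} m^3, recovered {vol_rec:.0f} m^3")
    ```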

  7. Effect of grid resolution on large eddy simulation of wall-bounded turbulence

    NASA Astrophysics Data System (ADS)

    Rezaeiravesh, S.; Liefvendahl, M.

    2018-05-01

    The effect of grid resolution on a large eddy simulation (LES) of a wall-bounded turbulent flow is investigated. A channel flow simulation campaign involving a systematic variation of the streamwise (Δx) and spanwise (Δz) grid resolution is used for this purpose. The main friction-velocity-based Reynolds number investigated is 300. Near the walls, the grid cell size is determined by the frictional scaling, Δx+ and Δz+, with strongly anisotropic cells and a first Δy+ ≈ 1, thus aiming for wall-resolving LES. Results are compared to direct numerical simulations, and several quality measures are investigated, including the error in the predicted mean friction velocity and the error in cross-channel profiles of flow statistics. To reduce the total number of channel flow simulations, techniques from the framework of uncertainty quantification are employed. In particular, a generalized polynomial chaos expansion (gPCE) is used to create metamodels for the errors over the allowed parameter ranges. The differing behavior of the different quality measures is demonstrated and analyzed. It is shown that the friction velocity and the profiles of the velocity and Reynolds stress tensor are most sensitive to Δz+, while the error in the turbulent kinetic energy is mostly influenced by Δx+. Recommendations for grid resolution requirements are given, together with a quantification of the resulting predictive accuracy. The sensitivity of the results to the subgrid-scale (SGS) model and varying Reynolds number is also investigated. All simulations are carried out with the second-order accurate finite-volume solver OpenFOAM. It is shown that the choice of numerical scheme for the convective term significantly influences the error portraits. It is emphasized that the proposed methodology, involving the gPCE, can be applied to other modeling approaches, i.e., other numerical methods and choices of SGS model.
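
    The metamodel idea can be sketched with a non-intrusive, least-squares polynomial chaos fit over the (Δx+, Δz+) range; the tabulated error values below are invented, and the study's actual gPCE construction may differ in basis, order, and sampling.

    ```python
    # Non-intrusive polynomial-chaos-style surrogate of an LES error measure
    # over (dx+, dz+). The campaign data here are invented placeholders.
    import numpy as np
    from numpy.polynomial import legendre

    # hypothetical campaign: grid spacings in wall units and resulting u_tau error (%)
    dx_plus = np.array([ 25.,  50., 100.,  25.,  50., 100.,  25.,  50., 100.])
    dz_plus = np.array([ 12.,  12.,  12.,  25.,  25.,  25.,  50.,  50.,  50.])
    err_utau = np.array([0.5, 1.0, 2.2, 1.1, 1.8, 3.0, 2.6, 3.9, 6.1])

    def to_unit(v, lo, hi):
        """Map a physical parameter range onto [-1, 1] for the Legendre basis."""
        return 2.0 * (v - lo) / (hi - lo) - 1.0

    xi = to_unit(dx_plus, 25.0, 100.0)
    zi = to_unit(dz_plus, 12.0, 50.0)

    # 2D Legendre Vandermonde matrix up to degree 2 in each direction (9 terms);
    # least-squares coefficients give a non-intrusive gPCE-type metamodel
    V = legendre.legvander2d(xi, zi, [2, 2])
    coef, *_ = np.linalg.lstsq(V, err_utau, rcond=None)

    # evaluate the metamodel at a new grid resolution
    query = legendre.legvander2d(to_unit(np.array([75.0]), 25.0, 100.0),
                                 to_unit(np.array([20.0]), 12.0, 50.0), [2, 2])
    print(f"predicted u_tau error at dx+=75, dz+=20: {(query @ coef)[0]:.2f} %")
    ```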

  8. Analytical Methods for Quantification of Vitamin D and Implications for Research and Clinical Practice.

    PubMed

    Stokes, Caroline S; Lammert, Frank; Volmer, Dietrich A

    2018-02-01

    A plethora of contradictory research surrounds vitamin D and its influence on health and disease. This may, in part, result from analytical difficulties with regard to measuring vitamin D metabolites in serum. Indeed, variation exists between the analytical techniques and assays used for the determination of serum 25-hydroxyvitamin D. Research studies into the effects of vitamin D on clinical endpoints rely heavily on the accurate assessment of vitamin D status. This has important implications, as findings from vitamin D-related studies to date may potentially have been hampered by the quantification techniques used. Likewise, healthcare professionals are increasingly incorporating vitamin D testing and supplementation regimens into their practice, and measurement errors may also be confounding clinical decisions. Importantly, the Vitamin D Standardisation Programme is an initiative that aims to standardise the measurement of vitamin D metabolites. Such a programme is anticipated to eliminate the inaccuracies surrounding vitamin D quantification. Copyright© 2018, International Institute of Anticancer Research (Dr. George J. Delinasios), All rights reserved.

  9. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: Describe mission; Define system; Identify human-machine interfaces; List human actions; Identify potential errors; Identify factors that affect error; Determine likelihood of error; Determine potential effects of errors; Evaluate risk; Generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  10. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  11. A liquid chromatography/tandem mass spectrometry assay for the analysis of atomoxetine in human plasma and in vitro cellular samples

    PubMed Central

    Appel, David I.; Brinda, Bryan; Markowitz, John S.; Newcorn, Jeffrey H.; Zhu, Hao-Jie

    2012-01-01

    A simple, rapid and sensitive method for quantification of atomoxetine by liquid chromatography- tandem mass spectrometry (LC-MS/MS) was developed. This assay represents the first LC-MS/MS quantification method for atomoxetine utilizing electrospray ionization. Deuterated atomoxetine (d3-atomoxetine) was adopted as the internal standard. Direct protein precipitation was utilized for sample preparation. This method was validated for both human plasma and in vitro cellular samples. The lower limit of quantification was 3 ng/ml and 10 nM for human plasma and cellular samples, respectively. The calibration curves were linear within the ranges of 3 ng/ml to 900 ng/ml and 10 nM to 10 μM for human plasma and cellular samples, respectively (r2 > 0.999). The intra- and inter-day assay accuracy and precision were evaluated using quality control samples at 3 different concentrations in both human plasma and cellular lysate. Sample run stability, assay selectivity, matrix effect, and recovery were also successfully demonstrated. The present assay is superior to previously published LC-MS and LC-MS/MS methods in terms of sensitivity or the simplicity of sample preparation. This assay is applicable to the analysis of atomoxetine in both human plasma and in vitro cellular samples. PMID:22275222

  12. Full-field transient vibrometry of the human tympanic membrane by local phase correlation and high-speed holography

    NASA Astrophysics Data System (ADS)

    Dobrev, Ivo; Furlong, Cosme; Cheng, Jeffrey T.; Rosowski, John J.

    2014-09-01

    Understanding the human hearing process would be helped by quantification of the transient mechanical response of the human ear, including the human tympanic membrane (TM, or eardrum). We propose a new hybrid high-speed holographic system (HHS) for acquisition and quantification of the full-field, nanometer-scale transient (i.e., >10 kHz) displacement of the human TM. We have optimized and implemented a 2+1 frame local correlation (LC)-based phase sampling method in combination with a high-speed (i.e., >40,000 fps) camera acquisition system. To our knowledge, there is currently no existing system that provides such capabilities for the study of the human TM. The LC sampling method has a displacement difference of <11 nm relative to measurements obtained by a four-phase step algorithm. Comparisons between our high-speed acquisition system and a laser Doppler vibrometer indicate differences of <10 μs. The high temporal (i.e., >40 kHz) and spatial (i.e., >100 k data points) resolution of our HHS enables parallel measurements of all points on the surface of the TM, which allows quantification of spatially dependent motion parameters, such as modal frequencies and acoustic delays. Such capabilities could allow inferring local material properties across the surface of the TM.

  13. The contributions of human factors on human error in Malaysia aviation maintenance industries

    NASA Astrophysics Data System (ADS)

    Padil, H.; Said, M. N.; Azizan, A.

    2018-05-01

    Aviation maintenance is a multitasking activity in which individuals perform varied tasks under constant pressure to meet deadlines, as well as challenging work conditions. These situational characteristics, combined with human factors, can lead to various types of human-related errors. The primary objective of this research is to develop a structural relationship model that incorporates human factors, organizational factors, and their impact on human errors in aviation maintenance. Towards that end, a questionnaire was developed and administered to Malaysian aviation maintenance professionals. A Structural Equation Modelling (SEM) approach was used in this study, utilizing AMOS software. Results showed a significant relationship between human factors and human errors in the tested model. Human factors had a partial effect on organizational factors, while organizational factors had a direct and positive impact on human errors. It was also revealed that organizational factors contributed to human errors when coupled with the human factors construct. This study has contributed to the advancement of knowledge on human factors affecting safety, has provided guidelines for improving human factors performance relating to aviation maintenance activities, and could be used as a reference for improving safety performance in Malaysian aviation maintenance companies.

  14. A theory of human error

    NASA Technical Reports Server (NTRS)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1980-01-01

    Human error, a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents, is investigated. Correction of the sources of human error requires that one attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations is presented. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  15. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, Diana

    2014-01-01

    Most businesses recognize that people will make mistakes and assume errors are just part of the cost of doing business, but does that need to be the case? Companies facing high risk, or major consequences, should consider the effect of human error. In a variety of industries, human errors have caused costly failures and workplace injuries: airline mishaps, medical malpractice, medication administration errors, and major oil spills have all been blamed on human error. A technique to mitigate or even eliminate some of these costly human errors is the use of Human Reliability Analysis (HRA). Various methodologies are available to perform Human Reliability Assessments, ranging from identifying the most likely areas of concern to detailed assessments in which human error failure probabilities are calculated. Which methodology to use depends on a variety of factors, including: 1) how people react and act in different industries, and differing expectations based on industry standards; 2) factors that influence how human errors could occur, such as tasks, tools, environment, workplace, support, training, and procedures; 3) the type and availability of data; and 4) how the industry views risk and reliability influences (types of emergencies, contingencies, and routine tasks versus cost-based concerns). Human Reliability Assessments should be the first step in reducing, mitigating, or eliminating costly mistakes or catastrophic failures. Using human reliability techniques to identify and classify human error risks gives a company more opportunities to mitigate or eliminate these risks and prevent costly failures.

  16. Human Reliability and the Cost of Doing Business

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2014-01-01

    Human error cannot be defined unambiguously before it happens; an action often becomes an error only after the fact. The same action can result in a tragic accident in one situation or be seen as heroic given a more favorable outcome. People often forget that we employ humans in business and industry for their flexibility and capability to change when needed. In complex systems, operations are driven by the specifications of the system and the system structure; people provide the flexibility to make it work. Human error has been reported as being responsible for 60%-80% of failures, accidents, and incidents in high-risk industries. We do not have to accept that all human errors are inevitable. Through the use of some basic techniques, many potential human error events can be addressed. There are actions that can be taken to reduce the risk of human error.

  17. Impact and quantification of the sources of error in DNA pooling designs.

    PubMed

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and the corresponding individual genotyping counts, applying two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
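
    A minimal sketch of the differential-allelic-amplification correction discussed above is given below, assuming the common approach in which the correction factor is estimated from heterozygous individuals (where the true allele ratio is 1:1); the peak heights are invented.

    ```python
    # Pooled-DNA allele frequency estimation with a differential amplification
    # correction estimated from heterozygous individuals. Signals are invented.
    import numpy as np

    # peak heights for alleles A and B measured in known heterozygous individuals
    het_A = np.array([1520., 1480., 1610., 1390.])
    het_B = np.array([1010.,  980., 1100.,  905.])
    k = np.mean(het_A / het_B)          # differential amplification factor (A vs B)

    # peak heights measured in the DNA pool
    pool_A, pool_B = 8400., 3900.

    p_uncorrected = pool_A / (pool_A + pool_B)
    p_corrected = (pool_A / k) / (pool_A / k + pool_B)

    print(f"k = {k:.2f}")
    print(f"allele A frequency, uncorrected: {p_uncorrected:.3f}")
    print(f"allele A frequency, corrected:   {p_corrected:.3f}")
    ```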

  18. Single molecule counting and assessment of random molecular tagging errors with transposable giga-scale error-correcting barcodes.

    PubMed

    Lau, Billy T; Ji, Hanlee P

    2017-09-21

    RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes, commonly in the form of random nucleotides, were recently introduced to improve gene expression measures by detecting amplification duplicates, but they are susceptible to errors generated during PCR and sequencing. This results in false positive counts, leading to inaccurate transcriptome quantification, especially at the low input and single-cell RNA amounts where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed that random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. This is the first study to use transposable molecular barcodes, and to use them for studying random-mer molecular barcode errors. The extensive errors found in random-mer molecular barcodes may warrant the use of error-correcting barcodes for transcriptome analysis as input amounts decrease.

  19. Spectral performance of a whole-body research photon counting detector CT: quantitative accuracy in derived image sets

    NASA Astrophysics Data System (ADS)

    Leng, Shuai; Zhou, Wei; Yu, Zhicong; Halaweish, Ahmed; Krauss, Bernhard; Schmidt, Bernhard; Yu, Lifeng; Kappler, Steffen; McCollough, Cynthia

    2017-09-01

    Photon-counting computed tomography (PCCT) uses a photon counting detector to count individual photons and allocate them to specific energy bins by comparing photon energy to preset thresholds. This enables simultaneous multi-energy CT with a single source and detector. Phantom studies were performed to assess the spectral performance of a research PCCT scanner by assessing the accuracy of derived image sets. Specifically, we assessed the accuracy of iodine quantification in iodine map images and the CT number accuracy in virtual monoenergetic images (VMI). Vials containing iodine at five known concentrations were scanned on the PCCT scanner after being placed in phantoms representing the attenuation of different size patients. For comparison, the same vials and phantoms were also scanned on 2nd and 3rd generation dual-source, dual-energy scanners. After material decomposition, iodine maps were generated, from which the iodine concentration was measured for each vial and phantom size and compared with the known concentration. Additionally, VMIs were generated and CT number accuracy was compared to the reference standard, which was calculated based on the known iodine concentrations and attenuation coefficients at each keV obtained from the U.S. National Institute of Standards and Technology (NIST). Results showed accurate iodine quantification (root mean square error of 0.5 mgI/cc) and accurate CT numbers in VMIs (percentage error of 8.9%) using the PCCT scanner. The overall performance of the PCCT scanner, in terms of iodine quantification and VMI CT number accuracy, was comparable to that of EID-based dual-source, dual-energy scanners.
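
    The iodine maps above come from a material decomposition step; a generic image-domain two-material decomposition can be sketched as follows. The calibration matrix, pixel values, and reference concentrations are invented and do not correspond to the research scanner's actual calibration or energy bins.

    ```python
    # Generic image-domain two-material (water/iodine) decomposition sketch
    # followed by an RMSE check against known concentrations. Numbers invented.
    import numpy as np

    # columns: water, iodine; rows: low bin, high bin (CT number per unit amount)
    A = np.array([[1.00, 28.0],
                  [1.00, 14.0]])

    # measured CT numbers in the two bins for a few pixels (rows = pixels)
    pixels = np.array([[60.0, 40.0],     # some iodine present
                       [30.0, 25.0],
                       [ 5.0,  4.0]])    # essentially water

    # solve A @ [c_water, c_iodine] = [HU_low, HU_high] for every pixel
    concentrations = np.linalg.solve(A, pixels.T).T
    iodine_map = concentrations[:, 1]
    print("iodine concentration per pixel (arb. units):", np.round(iodine_map, 2))

    # accuracy check against a known truth, as in a phantom study
    truth = np.array([1.4, 0.4, 0.1])
    rmse = np.sqrt(np.mean((iodine_map - truth) ** 2))
    print(f"RMSE vs known concentrations: {rmse:.2f}")
    ```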

  20. WE-AB-204-05: Harmonizing PET/CT Quantification in Multicenter Studies: A Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marques da Silva, A; Fischer, A

    2015-06-15

    Purpose: To present the implementation of a strategy to harmonize FDG PET/CT quantification (SUV) performed with different scanner models and manufacturers. Methods: The strategy was based on Boellaard (2011) and the EARL FDG-PET/CT accreditation program, which propose quality control measurements for harmonizing scanner performance. A NEMA IEC Body phantom study was performed using four different devices: PHP-1 (Gemini TF Base, Philips); PHP-2 (Gemini GXL, Philips); GEH (Discovery 600, General Electric); SMS (Biograph Hi-Rez 16, Siemens). The SUV Recovery Coefficient (RC) was calculated using the clinical protocol and other clinically relevant reconstruction parameters. The most appropriate reconstruction parameters (MARP) for SUV harmonization, in each scanner, are those which achieve the EARL harmonizing standards. They were identified using the lowest root mean square errors (RMSE). To evaluate the strategy's effectiveness, the Maximum Differences (MD) between the clinical and MARP RC values were calculated. Results: The reconstruction parameters that obtained the lowest RMSE are: FBP 5mm (PHP-1); LOR-RAMLA 2i0.008l (PHP-2); VuePointHD 2i32s10mm (GEH); and FORE+OSEM 4i8s6mm (SMS). Thus, to ensure that quantitative PET image measurements are interchangeable between these sites, images must be reconstructed with the above-mentioned parameters. However, a decoupling between the best image for PET/CT qualitative analysis and the best image for quantification studies was observed. The MD showed that the strategy was effective in reducing the variability of SUV quantification for small structures (<17 mm). Conclusion: The harmonization strategy for SUV quantification implemented with these devices was effective in reducing the variability of small-structure quantification, minimizing inter-scanner and inter-institution differences in quantification. However, it is essential that, in addition to the harmonization of quantification, the standardization of the patient preparation methodology be maintained, in order to minimize the SUV variability due to biological factors. Financial support by CAPES.
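
    The parameter-selection step described above can be sketched as picking, per scanner, the reconstruction whose recovery coefficients have the lowest RMSE against the harmonization targets; the RC values and target midpoints below are invented and are not the EARL specifications or the study's measurements.

    ```python
    # Pick the reconstruction whose sphere recovery coefficients (RC) best match
    # harmonization targets, using RMSE. All RC and target values are invented.
    import numpy as np

    sphere_mm = [10, 13, 17, 22, 28, 37]
    target_rc = np.array([0.40, 0.55, 0.68, 0.78, 0.85, 0.90])   # hypothetical midpoints

    candidate_recons = {
        "recon A (iterative, 10 mm filter)": np.array([0.31, 0.50, 0.66, 0.77, 0.86, 0.91]),
        "recon B (iterative, 6 mm filter)":  np.array([0.45, 0.62, 0.74, 0.83, 0.89, 0.93]),
        "recon C (FBP, 5 mm filter)":        np.array([0.38, 0.54, 0.69, 0.79, 0.84, 0.90]),
    }

    rmse = {name: np.sqrt(np.mean((rc - target_rc) ** 2))
            for name, rc in candidate_recons.items()}
    for name, value in sorted(rmse.items(), key=lambda kv: kv[1]):
        print(f"{name}: RMSE = {value:.3f}")
    print(f"-> harmonizing choice: {min(rmse, key=rmse.get)}")
    ```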

  1. Multiple Velocity Profile Measurements in Hypersonic Flows Using Sequentially-Imaged Fluorescence Tagging

    NASA Technical Reports Server (NTRS)

    Bathel, Brett F.; Danehy, Paul M.; Inman, Jennifer A.; Jones, Stephen B.; Ivey, Christopher B.; Goyne, Christopher P.

    2010-01-01

    Nitric-oxide planar laser-induced fluorescence (NO PLIF) was used to perform velocity measurements in hypersonic flows by generating multiple tagged lines which fluoresce as they convect downstream. For each laser pulse, a single interline, progressive scan intensified CCD (charge-coupled device) camera was used to obtain two sequential images of the NO molecules that had been tagged by the laser. The CCD configuration allowed for sub-microsecond acquisition of both images, resulting in sub-microsecond temporal resolution as well as sub-mm spatial resolution (0.5 mm horizontal, 0.7 mm vertical). The axial velocity was determined by applying a cross-correlation analysis to the horizontal shift of individual tagged lines. A numerical study of the measured velocity error due to uniform and linearly varying collisional rate distributions was performed. Systematic errors, the contribution of gating/exposure duration errors, and the influence of collision rate on temporal uncertainty were quantified. The spatial uncertainty depended upon the signal-to-noise ratio of the acquired profiles. This velocity measurement technique has been demonstrated for two hypersonic flow experiments: (1) a reaction control system (RCS) jet on an Orion Crew Exploration Vehicle (CEV) wind tunnel model and (2) a 10-degree half-angle wedge containing a 2-mm tall, 4-mm wide cylindrical boundary layer trip. The experiments were performed at the NASA Langley Research Center's 31-Inch Mach 10 Air Tunnel.
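
    The velocity extraction described above reduces to finding the horizontal shift of a tagged line between the two sequential images; a minimal sketch of that cross-correlation step is given below. The line profiles, pixel pitch, and interframe time are invented, not the experiment's values.

    ```python
    # Cross-correlate a tagged-line intensity profile between two sequential
    # images to recover the pixel shift and hence the axial velocity.
    # Profiles, pixel pitch and interframe delay are invented.
    import numpy as np

    pixel_size_mm = 0.05          # mm per pixel (hypothetical magnification)
    dt_s = 1.0e-6                 # sub-microsecond interframe delay

    x = np.arange(400)
    line_t0 = np.exp(-((x - 180.0) / 6.0) ** 2)                     # tagged line, image 1
    true_shift_px = 23
    line_t1 = np.exp(-((x - 180.0 - true_shift_px) / 6.0) ** 2)     # same line, image 2
    line_t1 += 0.02 * np.random.default_rng(2).normal(size=x.size)  # camera noise

    # cross-correlate (mean-subtracted) profiles and find the lag of the peak
    corr = np.correlate(line_t1 - line_t1.mean(), line_t0 - line_t0.mean(), mode="full")
    lag = np.argmax(corr) - (x.size - 1)

    velocity = lag * pixel_size_mm * 1e-3 / dt_s     # m/s
    print(f"recovered shift: {lag} px -> axial velocity ~ {velocity:.0f} m/s")
    ```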

  2. Simultaneous quantification of the boar-taint compounds skatole and androstenone by surface-enhanced Raman scattering (SERS) and multivariate data analysis.

    PubMed

    Sørensen, Klavs M; Westley, Chloe; Goodacre, Royston; Engelsen, Søren Balling

    2015-10-01

    This study investigates the feasibility of using surface-enhanced Raman scattering (SERS) for the quantification of absolute levels of the boar-taint compounds skatole and androstenone in porcine fat. By investigating different types of nanoparticles, pH values and aggregating agents, an optimized environment that promotes SERS of the analytes was developed and tested with different multivariate spectral pre-processing techniques, combined with variable selection on a series of analytical standards. The resulting method exhibited prediction errors (root mean square error of cross validation, RMSECV) of 2.4 × 10⁻⁶ M for skatole and 1.2 × 10⁻⁷ M for androstenone, with limits of detection corresponding to approximately 2.1 × 10⁻¹¹ M for skatole and approximately 1.8 × 10⁻¹⁰ M for androstenone. The method was subsequently tested on porcine fat extract, leading to prediction errors (RMSECV) of 0.17 μg/g for skatole and 1.5 μg/g for androstenone. It is clear that this optimized SERS method, when combined with multivariate analysis, shows great potential for optimization into an on-line application, which would be the first of its kind, and opens up possibilities for simultaneous detection of other meat-quality metabolites or pathogen markers. Graphical abstract: Artistic rendering of a laser-illuminated gold colloid sphere with skatole and androstenone adsorbed on the surface.

  3. Direct and simultaneous quantification of tannin mean degree of polymerization and percentage of galloylation in grape seeds using diffuse reflectance Fourier transform-infrared spectroscopy.

    PubMed

    Pappas, Christos; Kyraleou, Maria; Voskidi, Eleni; Kotseridis, Yorgos; Taranilis, Petros A; Kallithraka, Stamatina

    2015-02-01

    The mean degree of polymerization (mDP) and the degree of galloylation (%G) of tannins in grape seeds were quantified directly and simultaneously using diffuse reflectance infrared Fourier transform spectroscopy and partial least squares (PLS) regression. The results were compared with those obtained using the conventional analysis, which employs phloroglucinolysis as a pretreatment followed by high-performance liquid chromatography with UV and mass spectrometry detection. Infrared spectra were recorded on solid-state samples after freeze-drying. The 2nd derivative of the 1832-1416 and 918-739 cm⁻¹ spectral regions was used for the quantification of mDP, the 2nd derivative of the 1813-607 cm⁻¹ spectral region was used for the determination of %G, and PLS regression was applied in both cases. The determination coefficients (R²) for mDP and %G were 0.99 and 0.98, respectively. The corresponding values of the root-mean-square error of calibration were 0.506 and 0.692, of the root-mean-square error of cross validation 0.811 and 0.921, and of the root-mean-square error of prediction 0.612 and 0.801. Compared with the conventional method, the proposed method is simpler, less time consuming, and more economical, and it requires smaller quantities of chemical reagents and fewer sample pretreatment steps. It could be a starting point for the design of more specific models according to the requirements of wineries. © 2015 Institute of Food Technologists®
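
    A minimal sketch of the chemometric pipeline described above (second-derivative preprocessing followed by PLS regression with a cross-validated RMSE) is given below, using randomly generated spectra and reference values in place of the DRIFTS measurements and phloroglucinolysis results.

    ```python
    # Second-derivative preprocessing + PLS regression with cross-validated RMSE.
    # Spectra and reference mDP values are synthetic stand-ins.
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(3)
    n_samples, n_wavenumbers = 40, 300
    mdp = rng.uniform(2.0, 12.0, n_samples)                    # reference mDP values

    # fake spectra: a broad band whose intensity tracks mDP, plus baseline and noise
    wn = np.linspace(0, 1, n_wavenumbers)
    band = np.exp(-((wn - 0.4) / 0.05) ** 2)
    spectra = (mdp[:, None] * band[None, :]
               + rng.normal(0, 0.05, (n_samples, n_wavenumbers))
               + rng.uniform(0, 2, (n_samples, 1)))            # random additive baseline

    # Savitzky-Golay 2nd derivative suppresses the baseline offsets
    d2 = savgol_filter(spectra, window_length=15, polyorder=3, deriv=2, axis=1)

    pls = PLSRegression(n_components=4)
    pred = cross_val_predict(pls, d2, mdp, cv=5).ravel()
    rmsecv = np.sqrt(np.mean((pred - mdp) ** 2))
    print(f"RMSECV for mDP on the synthetic data: {rmsecv:.2f}")
    ```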

  4. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

    PubMed Central

    Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar

    2015-01-01

    Background: A permit to work (PTW) is a formal written system used to control certain types of work that are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods: This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviews with the personnel and study of the plant's procedure, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and the detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied to estimate human error probability. Results: The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion: The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the measures required for reducing the error probabilities in the PTW system. Some suggestions for reducing the likelihood of errors, especially by modifying the performance shaping factors and dependencies among tasks, are provided. PMID:27014485
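
    The SPAR-H quantification used above combines a nominal human error probability with performance shaping factor (PSF) multipliers; a minimal sketch of that calculation is shown below. The PSF multipliers are hypothetical and do not reproduce the study's assessments.

    ```python
    # SPAR-H style human error probability: nominal HEP times PSF multipliers,
    # with the standard adjustment when three or more PSFs are negative.
    # The PSF multipliers below are hypothetical.
    NOMINAL_HEP = {"diagnosis": 0.01, "action": 0.001}

    def spar_h_hep(task_type, psf_multipliers):
        """Return the human error probability for one task."""
        nhep = NOMINAL_HEP[task_type]
        composite = 1.0
        for m in psf_multipliers.values():
            composite *= m
        negative_psfs = sum(1 for m in psf_multipliers.values() if m > 1.0)
        if negative_psfs >= 3:
            # adjustment keeps the probability below 1 for large composites
            return nhep * composite / (nhep * (composite - 1.0) + 1.0)
        return min(nhep * composite, 1.0)

    psfs = {"available time": 1.0, "stress": 2.0, "complexity": 2.0,
            "experience/training": 1.0, "procedures": 5.0,
            "ergonomics/HMI": 1.0, "fitness for duty": 1.0, "work processes": 1.0}

    print(f"HEP for an action task with these PSFs: {spar_h_hep('action', psfs):.4f}")
    ```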

  5. Sensitive and selective liquid chromatography-tandem mass spectrometry method for the quantification of aniracetam in human plasma.

    PubMed

    Zhang, Jingjing; Liang, Jiabi; Tian, Yuan; Zhang, Zunjian; Chen, Yun

    2007-10-15

    A rapid, sensitive and selective LC-MS/MS method was developed and validated for the quantification of aniracetam in human plasma using estazolam as internal standard (IS). Following liquid-liquid extraction, the analytes were separated using a mobile phase of methanol-water (60:40, v/v) on a reverse phase C18 column and analyzed by a triple-quadrupole mass spectrometer in the selected reaction monitoring (SRM) mode using the respective [M+H]+ ions, m/z 220 → 135 for aniracetam and m/z 295 → 205 for the IS. The assay exhibited a linear dynamic range of 0.2-100 ng/mL for aniracetam in human plasma. The lower limit of quantification (LLOQ) was 0.2 ng/mL with a relative standard deviation of less than 15%. Acceptable precision and accuracy were obtained for concentrations over the standard curve range. The validated LC-MS/MS method has been successfully applied to study the pharmacokinetics of aniracetam in healthy male Chinese volunteers.

  6. Determination of rifampicin in human plasma by high-performance liquid chromatography coupled with ultraviolet detection after automatized solid-liquid extraction.

    PubMed

    Louveau, B; Fernandez, C; Zahr, N; Sauvageon-Martre, H; Maslanka, P; Faure, P; Mourah, S; Goldwirt, L

    2016-12-01

    A precise and accurate high-performance liquid chromatography (HPLC) method for the quantification of rifampicin in human plasma was developed and validated using ultraviolet detection after an automated solid-phase extraction. The method was validated with respect to selectivity, extraction recovery, linearity, intra- and inter-day precision, accuracy, lower limit of quantification and stability. Chromatographic separation was performed on a Chromolith RP 8 column using a mixture of 0.05 M acetate buffer (pH 5.7) and acetonitrile (35:65, v/v) as the mobile phase. The compounds were detected at a wavelength of 335 nm, with a lower limit of quantification of 0.05 mg/L in human plasma. Retention times for rifampicin and for 6,7-dimethyl-2,3-di(2-pyridyl)quinoxaline, used as the internal standard, were 3.77 and 4.81 min, respectively. This robust and accurate method was successfully applied in routine practice for therapeutic drug monitoring in patients treated with rifampicin. Copyright © 2016 John Wiley & Sons, Ltd.

  7. Quantification of neutral human milk oligosaccharides by graphitic carbon HPLC with tandem mass spectrometry

    PubMed Central

    Bao, Yuanwu; Chen, Ceng; Newburg, David S.

    2012-01-01

    Defining the biologic roles of human milk oligosaccharides (HMOS) requires an efficient, simple, reliable, and robust analytical method for simultaneous quantification of oligosaccharide profiles from multiple samples. The HMOS fraction of milk is a complex mixture of polar, highly branched, isomeric structures that contain no intrinsic facile chromophore, making their resolution and quantification challenging. A liquid chromatography-mass spectrometry (LC-MS) method was devised to resolve and quantify 11 major neutral oligosaccharides of human milk simultaneously. Crude HMOS fractions are reduced, resolved by porous graphitic carbon HPLC with a water/acetonitrile gradient, detected by mass spectrometric specific ion monitoring, and quantified. The HPLC separates isomers of identical molecular weights, allowing 11 peaks to be fully resolved and quantified by monitoring mass to charge (m/z) ratios of the deprotonated negative ions. The standard curve for each of the 11 oligosaccharides is linear from 0.078 or 0.156 to 20 μg/mL (R2 > 0.998). Precision (CV) ranges from 1% to 9%. Accuracy is from 86% to 104%. This analytical technique provides sensitive, precise, accurate quantification for each of the 11 milk oligosaccharides and allows measurement of differences in milk oligosaccharide patterns between individuals and at different stages of lactation. PMID:23068043

  8. Nondestructive Detection and Quantification of Blueberry Bruising using Near-infrared (NIR) Hyperspectral Reflectance Imaging

    NASA Astrophysics Data System (ADS)

    Jiang, Yu; Li, Changying; Takeda, Fumiomi

    2016-10-01

    Currently, blueberry bruising is evaluated by either human visual/tactile inspection or firmness measurement instruments. These methods are destructive, time-consuming, and subjective. The goal of this paper was to develop a non-destructive approach for blueberry bruising detection and quantification. Experiments were conducted on 300 samples of southern highbush blueberry (Camellia, Rebel, and Star) and on 1500 samples of northern highbush blueberry (Bluecrop, Jersey, and Liberty) for hyperspectral imaging analysis, firmness measurement, and human evaluation. An algorithm was developed to automatically calculate a bruise ratio index (ratio of bruised to whole fruit area) for bruise quantification. The spectra of bruised and healthy tissues were statistically separated, and the separation was independent of cultivar. Support vector machine (SVM) classification of the spectra from the regions of interest (ROIs) achieved over 94%, 92%, and 96% accuracy on the training set, independent testing set, and combined set, respectively. The statistical results showed that the bruise ratio index was equivalent to the measured firmness but better than the predicted firmness with regard to the effectiveness of bruise quantification, and the bruise ratio index had a strong correlation with human assessment (R2 = 0.78 - 0.83). Therefore, the proposed approach and the bruise ratio index are effective for non-destructive detection and quantification of blueberry bruising.
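
    The two computational steps described above (pixel-spectrum classification and the bruise ratio index) can be sketched as follows; the spectra are synthetic and the code is only an illustration of the general idea, not the authors' implementation.

        # Illustrative sketch: (1) train an SVM to separate bruised from healthy
        # reflectance spectra, (2) classify every pixel spectrum of one berry and
        # report the bruise ratio index = bruised pixels / whole-fruit pixels.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_bands = 100
        healthy = rng.normal(0.60, 0.05, (200, n_bands))   # synthetic ROI spectra
        bruised = rng.normal(0.45, 0.05, (200, n_bands))
        X = np.vstack([healthy, bruised])
        y = np.array([0] * 200 + [1] * 200)

        clf = SVC(kernel="rbf").fit(X, y)

        fruit_pixels = rng.normal(0.55, 0.08, (5000, n_bands))  # one berry's fruit mask
        labels = clf.predict(fruit_pixels)
        bruise_ratio_index = labels.mean()
        print(round(bruise_ratio_index, 3))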

  9. Nondestructive Detection and Quantification of Blueberry Bruising using Near-infrared (NIR) Hyperspectral Reflectance Imaging.

    PubMed

    Jiang, Yu; Li, Changying; Takeda, Fumiomi

    2016-10-21

    Currently, blueberry bruising is evaluated by either human visual/tactile inspection or firmness measurement instruments. These methods are destructive, time-consuming, and subjective. The goal of this paper was to develop a non-destructive approach for blueberry bruising detection and quantification. Experiments were conducted on 300 samples of southern highbush blueberry (Camellia, Rebel, and Star) and on 1500 samples of northern highbush blueberry (Bluecrop, Jersey, and Liberty) for hyperspectral imaging analysis, firmness measurement, and human evaluation. An algorithm was developed to automatically calculate a bruise ratio index (ratio of bruised to whole fruit area) for bruise quantification. The spectra of bruised and healthy tissues were statistically separated, and the separation was independent of cultivar. Support vector machine (SVM) classification of the spectra from the regions of interest (ROIs) achieved over 94%, 92%, and 96% accuracy on the training set, independent testing set, and combined set, respectively. The statistical results showed that the bruise ratio index was equivalent to the measured firmness but better than the predicted firmness with regard to the effectiveness of bruise quantification, and the bruise ratio index had a strong correlation with human assessment (R2 = 0.78 - 0.83). Therefore, the proposed approach and the bruise ratio index are effective for non-destructive detection and quantification of blueberry bruising.

  10. Non-destructive detection and quantification of blueberry bruising using near-infrared (NIR) hyperspectral reflectance imaging

    USDA-ARS?s Scientific Manuscript database

    Currently, blueberry bruising is evaluated by either human visual/tactile inspection or firmness measurement instruments. These methods are destructive and time-consuming. The goal of this paper was to develop a non-destructive approach for blueberry bruising detection and quantification. The spe...

  11. Managing human fallibility in critical aerospace situations

    NASA Astrophysics Data System (ADS)

    Tew, Larry

    2014-11-01

    Human fallibility is pervasive in the aerospace industry, where over 50% of errors are attributed to human causes. Consider the benefits to any organization if those errors were significantly reduced. Aerospace manufacturing involves high value, high profile systems with significant complexity and often repetitive build, assembly, and test operations. In spite of extensive analysis, planning, training, and detailed procedures, human factors can cause unexpected errors. Handling such errors involves extensive cause and corrective action analysis and invariably leads to schedule slips and cost growth. We will discuss success stories, including those associated with electro-optical systems, where very significant reductions in human errors were achieved after receiving adapted and specialized training. In the eyes of company and customer leadership, the steps used to achieve these results led to a major culture change in both the workforce and the supporting management organization. This approach has proven effective in other industries such as medicine, firefighting, law enforcement, and aviation. The roadmap to success and the steps to minimize human error are known. They can be used by any organization willing to accept human fallibility and take a proactive approach to incorporating the steps needed to manage and minimize error.

  12. Prediction of human errors by maladaptive changes in event-related brain networks.

    PubMed

    Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus

    2008-04-22

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations.

  13. Prediction of human errors by maladaptive changes in event-related brain networks

    PubMed Central

    Eichele, Tom; Debener, Stefan; Calhoun, Vince D.; Specht, Karsten; Engel, Andreas K.; Hugdahl, Kenneth; von Cramon, D. Yves; Ullsperger, Markus

    2008-01-01

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve ≈30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations. PMID:18427123

  14. Approximating prediction uncertainty for random forest regression models

    Treesearch

    John W. Coulston; Christine E. Blinn; Valerie A. Thomas; Randolph H. Wynne

    2016-01-01

    Machine learning approaches such as random forest are increasingly used for the spatial modeling and mapping of continuous variables. Random forest is a non-parametric ensemble approach, and unlike traditional regression approaches there is no direct quantification of prediction error. Understanding prediction uncertainty is important when using model-based continuous maps as...
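
    One common way to approximate the missing prediction uncertainty for random forest regression is to use the spread of the individual tree predictions around the ensemble mean; the sketch below (synthetic data, scikit-learn) shows that idea and is not necessarily the estimator developed in the cited work.

        # Approximate per-prediction uncertainty from the spread of tree predictions.
        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor

        X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
        forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

        x_new = X[:5]
        per_tree = np.stack([tree.predict(x_new) for tree in forest.estimators_])
        mean_pred = per_tree.mean(axis=0)       # ensemble prediction
        std_pred = per_tree.std(axis=0)         # crude uncertainty proxy per location
        print(mean_pred.round(1), std_pred.round(1))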

  15. The influence of landscape characteristics and home-range size on the quantification of landscape-genetics relationships

    Treesearch

    Tabitha A. Graves; Tzeidle N. Wasserman; Milton Cezar Ribeiro; Erin L. Landguth; Stephen F. Spear; Niko Balkenhol; Colleen B. Higgins; Marie-Josee Fortin; Samuel A. Cushman; Lisette P. Waits

    2012-01-01

    A common approach used to estimate landscape resistance involves comparing correlations of ecological and genetic distances calculated among individuals of a species. However, the location of sampled individuals may contain some degree of spatial uncertainty due to the natural variation of animals moving through their home range or measurement error in plant or animal...

  16. Quantification of model uncertainty in aerosol optical thickness retrieval from Ozone Monitoring Instrument (OMI) measurements

    NASA Astrophysics Data System (ADS)

    Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.

    2013-09-01

    We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top of the atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). Focus is on the uncertainty in aerosol model selection of pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies between the modelled and observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud-free, over-land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.

  17. MRMPlus: an open source quality control and assessment tool for SRM/MRM assay development.

    PubMed

    Aiyetan, Paul; Thomas, Stefani N; Zhang, Zhen; Zhang, Hui

    2015-12-12

    Selected and multiple reaction monitoring involves monitoring a multiplexed assay of proteotypic peptides and associated transitions in mass spectrometry runs. To describe peptides and associated transitions as stable, quantifiable, and reproducible representatives of proteins of interest, experimental and analytical validation is required. However, inadequate and disparate analytical tools and validation methods predispose assay performance measures to errors and inconsistencies. Implemented as a freely available, open-source tool in the platform-independent Java programming language, MRMPlus computes analytical measures as recommended recently by the Clinical Proteomics Tumor Analysis Consortium Assay Development Working Group for "Tier 2" assays - that is, non-clinical assays sufficient to measure changes due to both biological and experimental perturbations. Computed measures include limit of detection, lower limit of quantification, linearity, carry-over, partial validation of specificity, and upper limit of quantification. MRMPlus streamlines the assay development analytical workflow and therefore minimizes error predisposition. MRMPlus may also be used for performance estimation for targeted assays not described by the Assay Development Working Group. MRMPlus' source code and compiled binaries can be freely downloaded from https://bitbucket.org/paiyetan/mrmplusgui and https://bitbucket.org/paiyetan/mrmplusgui/downloads respectively.
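
    Two of the figures of merit listed above can be illustrated with generic formulas; the sketch below uses the common blank-based definitions of limit of detection and lower limit of quantification plus a least-squares linearity check, and is not taken from the MRMPlus source code.

        # Generic figures of merit for a targeted assay (illustrative data).
        import numpy as np

        blank = np.array([12.0, 15.0, 9.0, 14.0, 11.0, 13.0])     # blank responses
        lod = blank.mean() + 3 * blank.std(ddof=1)                 # limit of detection
        lloq = blank.mean() + 10 * blank.std(ddof=1)               # lower limit of quantification

        conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])         # spiked levels
        resp = np.array([105.0, 212.0, 498.0, 1010.0, 1985.0, 5050.0])
        slope, intercept = np.polyfit(conc, resp, 1)
        r2 = np.corrcoef(conc, resp)[0, 1] ** 2                    # linearity
        print(round(lod, 1), round(lloq, 1), round(r2, 4))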

  18. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model uncertainty quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. Notably, these realisations are binned conditional on the previous model state during the minimization, allowing complex error structures to be recovered. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.

  19. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Eas M.

    2003-01-01

    The modus operandi in addressing human error in aviation systems is predominantly that of technological interventions or fixes. Such interventions exhibit considerable variability both in terms of sophistication and application. Some technological interventions address human error directly while others do so only indirectly. Some attempt to eliminate the occurrence of errors altogether whereas others look to reduce the negative consequences of these errors. In any case, technological interventions add to the complexity of the systems and may interact with other system components in unforeseeable ways and often create opportunities for novel human errors. Consequently, there is a need to develop standards for evaluating the potential safety benefit of each of these intervention products so that resources can be effectively invested to produce the biggest benefit to flight safety as well as to mitigate any adverse ramifications. The purpose of this project was to help define the relationship between human error and technological interventions, with the ultimate goal of developing a set of standards for evaluating or measuring the potential benefits of new human error fixes.

  20. Automating cell detection and classification in human brain fluorescent microscopy images using dictionary learning and sparse coding.

    PubMed

    Alegro, Maryana; Theofilas, Panagiotis; Nguy, Austin; Castruita, Patricia A; Seeley, William; Heinsen, Helmut; Ushizima, Daniela M; Grinberg, Lea T

    2017-04-15

    Immunofluorescence (IF) plays a major role in quantifying protein expression in situ and understanding cell function. It is widely applied in assessing disease mechanisms and in drug discovery research. Automation of IF analysis can transform studies using experimental cell models. However, IF analysis of postmortem human tissue relies mostly on manual interaction, which is often low-throughput and prone to error, leading to low inter- and intra-observer reproducibility. Human postmortem brain samples challenge neuroscientists because of the high level of autofluorescence caused by accumulation of lipofuscin pigment during aging, hindering systematic analyses. We propose a method for automating cell counting and classification in IF microscopy of human postmortem brains. Our algorithm speeds up the quantification task while improving reproducibility. Dictionary learning and sparse coding allow for constructing improved cell representations using IF images. These models are input for detection and segmentation methods. Classification occurs by means of color distances between cells and a learned set. Our method successfully detected and classified cells in 49 human brain images. We evaluated our results regarding true positive, false positive, false negative, precision, recall, false positive rate and F1 score metrics. We also measured user experience and time saved compared to manual counting. We compared our results to four open-access IF-based cell-counting tools available in the literature. Our method showed improved accuracy for all data samples. The proposed method satisfactorily detects and classifies cells from human postmortem brain IF images, with potential to be generalized for applications in other counting tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
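
    The dictionary-learning and sparse-coding building block mentioned above can be sketched with scikit-learn as shown below; the image is synthetic and the paper's full pipeline (detection, segmentation, and color-distance classification) is not reproduced here.

        # Learn a patch dictionary and sparse-code image patches (illustration only).
        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.feature_extraction.image import extract_patches_2d

        rng = np.random.default_rng(0)
        image = rng.random((128, 128))                      # stand-in for one IF channel
        patches = extract_patches_2d(image, (8, 8), max_patches=2000, random_state=0)
        X = patches.reshape(len(patches), -1)
        X -= X.mean(axis=1, keepdims=True)                  # remove per-patch DC offset

        dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0,
                                           transform_algorithm="omp",
                                           transform_n_nonzero_coefs=5,
                                           random_state=0).fit(X)
        codes = dico.transform(X)                           # sparse codes as patch features
        print(codes.shape, round((codes != 0).mean(), 3))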

  1. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1999-01-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is developing a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper will describe previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS. © 1999 American Institute of Physics.

  2. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) is developing a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  3. Structured methods for identifying and correcting potential human errors in aviation operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, W.R.

    1997-10-01

    Human errors have been identified as the source of approximately 60% of the incidents and accidents that occur in commercial aviation. It can be assumed that a very large number of human errors occur in aviation operations, even though in most cases the redundancies and diversities built into the design of aircraft systems prevent the errors from leading to serious consequences. In addition, when it is acknowledged that many system failures have their roots in human errors that occur in the design phase, it becomes apparent that the identification and elimination of potential human errors could significantly decrease the risks of aviation operations. This will become even more critical during the design of advanced automation-based aircraft systems as well as next-generation systems for air traffic management. Structured methods to identify and correct potential human errors in aviation operations have been developed and are currently undergoing testing at the Idaho National Engineering and Environmental Laboratory (INEEL).

  4. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists.

    PubMed

    Kraemer, Sara; Carayon, Pascale

    2007-03-01

    This paper describes human errors and violations of end users and network administration in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e. to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audio taped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, while viewing errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.

  5. Simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) for dynamic contrast-enhanced MRI of liver.

    PubMed

    Ning, Jia; Sun, Yongliang; Xie, Sheng; Zhang, Bida; Huang, Feng; Koken, Peter; Smink, Jouke; Yuan, Chun; Chen, Huijun

    2018-05-01

    To propose a simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) for liver dynamic contrast-enhanced MRI. The proposed SAHA simultaneously acquired high temporal-resolution 2D images for vascular input function extraction using Cartesian sampling and 3D large-coverage, high spatial-resolution liver dynamic contrast-enhanced images using golden-angle stack-of-stars acquisition in an interleaved way. Simulations were conducted to investigate the accuracy of SAHA in pharmacokinetic analysis. A healthy volunteer and three patients with cirrhosis or hepatocellular carcinoma were included in the study to investigate the feasibility of SAHA in vivo. Simulation studies showed that SAHA provided results closer to the true values and lower root mean square error of the estimated pharmacokinetic parameters in all of the tested scenarios. The in vivo scans provided fair image quality for both the 2D images used to extract the arterial and portal venous input functions and the 3D whole-liver images. The in vivo fitting results showed that the perfusion parameters of healthy liver were significantly different from those of cirrhotic liver and HCC. The proposed SAHA can provide improved accuracy in pharmacokinetic modeling and is feasible in human liver dynamic contrast-enhanced MRI, suggesting that SAHA is a potential tool for liver dynamic contrast-enhanced MRI. Magn Reson Med 79:2629-2641, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  6. Intervention strategies for the management of human error

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1993-01-01

    This report examines the management of human error in the cockpit. The principles probably apply as well to other applications in the aviation realm (e.g. air traffic control, dispatch, weather, etc.) as well as other high-risk systems outside of aviation (e.g. shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention. It is a more encompassing term, which includes not only the prevention of error, but also a means of disallowing an error, once made, from adversely affecting system output. Such techniques include: traditional human factors engineering, improvement of feedback and feedforward of information from system to crew, 'error-evident' displays which make erroneous input more obvious to the crew, trapping of errors within a system, goal-sharing between humans and machines (also called 'intent-driven' systems), paperwork management, and behaviorally based approaches, including procedures, standardization, checklist design, training, cockpit resource management, etc. Fifteen guidelines for the design and implementation of intervention strategies are included.

  7. Evaluation of dry blood spot technique for quantification of an Anti-CD20 monoclonal antibody drug in human blood samples.

    PubMed

    Lin, Yong-Qing; Zhang, Yilu; Li, Connie; Li, Louis; Zhang, Kelley; Li, Shawn

    2012-01-01

    To evaluate the dried blood spot (DBS) technique in ELISA quantification of larger biomolecular drugs, an anti-CD20 monoclonal antibody drug was used as an example. A method for the quantification of the anti-CD20 drug in human DBS was developed and validated. The drug standard and quality control samples prepared in fresh human blood were spotted on DBS cards and then extracted. A luminescent ELISA was used for quantification of the drug from DBS samples. The assay range of the anti-CD20 drug standards in DBS was 100-2500 ng/mL. The intra-assay precision (%CV) ranged from 0.4% to 10.1%, and the accuracy (%recovery) ranged from 77.9% to 113.9%. The inter-assay precision (%CV) ranged from 5.9% to 17.4%, and the accuracy ranged from 81.5% to 110.5%. The DBS samples diluted 500- and 50-fold yielded recoveries of 88.7% and 90.7%, respectively. The preparation of DBS in higher and lower hematocrit (53% and 35%) conditions did not affect the recovery of the drug. Furthermore, the storage stability of the anti-CD20 drug on DBS cards was tested at various conditions. It was found that the anti-CD20 drug was stable for one week in DBS stored at room temperature. However, it was determined that the stability was compromised in DBS stored at high humidity, high temperature (55°C), and exposed to direct daylight for a week, as well as for samples stored at room temperature and high humidity conditions for a month. Stability did not change significantly in samples that underwent 3 freeze/thaw cycles. Our results demonstrated a successful use of the DBS technique in ELISA quantification of an anti-CD20 monoclonal antibody drug in human blood. The stability data provide information regarding sample storage and shipping for future clinical studies. It is, therefore, concluded that the DBS technique is applicable in the quantification of other large biomolecule drugs or biomarkers. Copyright © 2011 Elsevier Inc. All rights reserved.
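
    The precision and accuracy figures quoted above follow the usual %CV and %recovery definitions; the sketch below shows those calculations for one hypothetical quality-control level (values are invented, not taken from the study).

        # Precision (%CV) and accuracy (%recovery) for one QC level (illustrative data).
        import numpy as np

        nominal = 500.0                                              # ng/mL
        measured = np.array([462.0, 471.0, 455.0, 489.0, 470.0, 480.0])

        cv = 100 * measured.std(ddof=1) / measured.mean()            # precision, %CV
        recovery = 100 * measured.mean() / nominal                   # accuracy, %recovery
        print(round(cv, 1), round(recovery, 1))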

  8. Multiple-approaches to the identification and quantification of cytochromes P450 in human liver tissue by mass spectrometry.

    PubMed

    Seibert, Cathrin; Davidson, Brian R; Fuller, Barry J; Patterson, Laurence H; Griffiths, William J; Wang, Yuqin

    2009-04-01

    Here we report the identification and approximate quantification of cytochrome P450 (CYP) proteins in human liver microsomes as determined by nano-LC-MS/MS with application of the exponentially modified protein abundance index (emPAI) algorithm during database searching. Protocols based on 1D-gel protein separation and 2D-LC peptide separation gave comparable results. In total, 18 CYP isoforms were unambiguously identified based on unique peptide matches. Further, we have determined the absolute quantity of two CYP enzymes (2E1 and 1A2) in human liver microsomes using stable-isotope dilution mass spectrometry, where microsomal proteins were separated by 1D-gel electrophoresis, digested with trypsin in the presence of either a CYP2E1- or 1A2-specific stable-isotope labeled tryptic peptide and analyzed by LC-MS/MS. Using multiple reaction monitoring (MRM) for the isotope-labeled tryptic peptides and their natural unlabeled analogues quantification could be performed over the range of 0.1-1.5 pmol on column. Liver microsomes from four individuals were analyzed for CYP2E1 giving values of 88-200 pmol/mg microsomal protein. The CYP1A2 content of microsomes from a further three individuals ranged from 165 to 263 pmol/mg microsomal protein. Although, in this proof-of-concept study for CYP quantification, the two CYP isoforms were quantified from different samples, there are no practical reasons to prevent multiplexing the method to allow the quantification of multiple CYP isoforms in a single sample.
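
    The stable-isotope dilution quantification described above reduces to a simple ratio calculation; the sketch below shows the generic single-point version (known spike of labeled peptide, MRM peak-area ratio of unlabeled to labeled), with illustrative numbers rather than the study's data.

        # Single-point stable-isotope dilution quantification (illustrative numbers).
        spike_pmol = 1.0                  # labeled peptide added to the digest
        area_light = 3.2e6                # MRM peak area, native (unlabeled) peptide
        area_heavy = 2.1e6                # MRM peak area, labeled internal standard
        microsomal_protein_mg = 0.010     # amount of microsomal protein analyzed

        pmol_native = spike_pmol * area_light / area_heavy
        cyp_content = pmol_native / microsomal_protein_mg   # pmol per mg microsomal protein
        print(round(cyp_content, 1))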

  9. Multiple-approaches to the identification and quantification of cytochromes P450 in human liver tissue by mass spectrometry

    PubMed Central

    Seibert, Cathrin; Davidson, Brian R.; Fuller, Barry J.; Patterson, Laurence H.; Griffiths, William J.; Wang, Yuqin

    2009-01-01

    Here we report the identification and approximate quantification of cytochrome P450 (CYP) proteins in human liver microsomes as determined by nano-LC-MS/MS with application of the exponentially modified protein abundance index (emPAI) algorithm during database searching. Protocols based on 1D-gel protein separation and 2D-LC peptide separation gave comparable results. In total 18 CYP isoforms were unambiguously identified based on unique peptide matches. Further, we have determined the absolute quantity of two CYP enzymes (2E1 and 1A2) in human liver microsomes using stable-isotope dilution mass spectrometry, where microsomal proteins were separated by 1D-gel electrophoresis, digested with trypsin in the presence of either a CYP2E1- or 1A2-specific stable-isotope labelled tryptic peptide and analysed by LC-MS/MS. Using multiple reaction monitoring (MRM) for the isotope-labelled tryptic peptides and their natural unlabelled analogues quantification could be performed over the range of 0.1 – 1.5 pmol on column. Liver microsomes from four individuals were analysed for CYP2E1 giving values of 88 - 200 pmol/mg microsomal protein. The CYP1A2 content of microsomes from a further three individuals ranged from 165 – 263 pmol/mg microsomal protein. Although, in this proof-of-concept study for CYP quantification, the two CYP-isoforms were quantified from different samples, there are no practical reasons to prevent multiplexing the method to allow the quantification of multiple CYP-isoforms in a single sample. PMID:19714871

  10. Validation of a dilute and shoot method for quantification of 12 elements by inductively coupled plasma tandem mass spectrometry in human milk and in cow milk preparations.

    PubMed

    Dubascoux, Stéphane; Andrey, Daniel; Vigo, Mario; Kastenmayer, Peter; Poitevin, Eric

    2018-09-01

    Nutritional information about human milk is essential, as early human growth and development have been closely linked to the status and requirements of several macro- and micro-elements. However, methods addressing whole mineral profiling in human milk have been scarce, due in part to the technical complexity of accurately and simultaneously measuring the concentrations of micro- and macro-trace elements in small volumes of human milk. In the present study, a single laboratory validation has been performed using a "dilute and shoot" approach for the quantification of sodium (Na), magnesium (Mg), phosphorus (P), potassium (K), calcium (Ca), manganese (Mn), iron (Fe), copper (Cu), zinc (Zn), selenium (Se), molybdenum (Mo) and iodine (I) in both human milk and milk preparations. Performance in terms of limits of detection and quantification, repeatability, reproducibility and trueness has been assessed and verified using various reference or certified materials. For the certified human milk sample (NIST 1953), recoveries against reference or spiked values ranged from 93% to 108% (except for Mn, at 151%). This robust ICP-MS/MS method, which does not require high-pressure digestion, is suited both to routine, rapid analysis of human milk micro-samples (i.e., less than 250 μL) in clinical trials and to the mineral profiling of milk preparations such as infant formula and adult nutritionals. Copyright © 2018 Elsevier GmbH. All rights reserved.

  11. Error and uncertainty in Raman thermal conductivity measurements

    DOE PAGES

    Thomas Edwin Beechem; Yates, Luke; Graham, Samuel

    2015-04-22

    We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer. Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious.

  12. Error Reduction Analysis and Optimization of Varying GRACE-Type Micro-Satellite Constellations

    NASA Astrophysics Data System (ADS)

    Widner, M. V., IV; Bettadpur, S. V.; Wang, F.; Yunck, T. P.

    2017-12-01

    The Gravity Recovery and Climate Experiment (GRACE) mission has been a principal contributor to the study and quantification of Earth's time-varying gravity field. Both GRACE and its successor, GRACE Follow-On, are limited by their paired-satellite design, which provides a full map of Earth's gravity field only approximately every thirty days and at large spatial resolutions of over 300 km. Micro-satellite technology has made it feasible to improve the architecture of future missions to address these issues by implementing a constellation of satellites with characteristics similar to GRACE. To optimize the constellation's architecture, several scenarios are evaluated to determine how this configuration affects the resultant gravity field maps, and to characterize which instrument system errors improve, which do not, and how changes in constellation architecture affect these errors.

  13. Fundamental Analysis of the Linear Multiple Regression Technique for Quantification of Water Quality Parameters from Remote Sensing Data. Ph.D. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H., III

    1977-01-01

    Constituents with linear radiance gradients with concentration may be quantified from signals that contain nonlinear atmospheric and surface reflection effects, for both homogeneous and non-homogeneous water bodies, provided accurate data can be obtained and the nonlinearities are constant with wavelength. Statistical parameters must be used that give an indication of bias as well as total squared error to ensure that an equation with an optimum combination of bands is selected. It is concluded that the effect of error in upwelled radiance measurements is to reduce the accuracy of the least squares fitting process and to increase the number of points required to obtain a satisfactory fit. The problem of obtaining a multiple regression equation that is extremely sensitive to error is discussed.
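
    A minimal version of the multiple regression described above, reporting both a total-squared-error measure and the bias that the thesis argues must also be inspected, might look like the following (synthetic radiances, not the thesis data).

        # Multiple linear regression of a water-quality constituent on band radiances.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 60
        bands = rng.normal(size=(n, 4))                               # radiances in 4 bands
        concentration = 5.0 + 2.0 * bands[:, 0] - 1.0 * bands[:, 2] + rng.normal(0, 0.5, n)

        X = np.column_stack([np.ones(n), bands])                      # intercept + bands
        coef, *_ = np.linalg.lstsq(X, concentration, rcond=None)
        residual = X @ coef - concentration

        rmse = np.sqrt(np.mean(residual ** 2))                        # total squared error measure
        bias = residual.mean()                                        # indication of bias
        print(coef.round(2), round(rmse, 3), round(bias, 3))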

  14. Exploring Reactions to Pilot Reliability Certification and Changing Attitudes on the Reduction of Errors

    ERIC Educational Resources Information Center

    Boedigheimer, Dan

    2010-01-01

    Approximately 70% of aviation accidents are attributable to human error. The greatest opportunity for further improving aviation safety is found in reducing human errors in the cockpit. The purpose of this quasi-experimental, mixed-method research was to evaluate whether there was a difference in pilot attitudes toward reducing human error in the…

  15. Assessment and quantification of patient set-up errors in nasopharyngeal cancer patients and their biological and dosimetric impact in terms of generalized equivalent uniform dose (gEUD), tumour control probability (TCP) and normal tissue complication probability (NTCP).

    PubMed

    Boughalia, A; Marcie, S; Fellah, M; Chami, S; Mekki, F

    2015-06-01

    The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy-oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and the organ at risk (OARs) coverage were assessed using calculation of dose-volume histogram, gEUD, TCP and NTCP. For this purpose, an in-house software was developed and used. The standard deviations (1 SD) of the systematic set-up and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: Σ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors have shown increased values for tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The developed in-house software using the concept of gEUD, TCP and NTCP biological models has been successfully used in this study. It can be used also to optimize the treatment plan established for our patients. The gEUD, TCP and NTCP may be more suitable tools to assess the treatment plans before treating the patients.
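
    For comparison with the fixed 5-mm CTV and 3-mm PRV margins used above, the sketch below shows how population systematic (Σ) and random (σ) components are typically extracted from per-patient set-up shifts and fed into the widely used van Herk margin recipe (2.5Σ + 0.7σ); the shift data are simulated, not the study's measurements.

        # Population set-up error components and a margin recipe (illustration only).
        import numpy as np

        rng = np.random.default_rng(1)
        # shifts[i, j] = set-up shift (mm) of patient i at fraction j (simulated)
        patient_bias = rng.normal(0.0, 1.0, (20, 1))
        shifts = rng.normal(loc=patient_bias, scale=3.5, size=(20, 30))

        patient_means = shifts.mean(axis=1)
        Sigma = patient_means.std(ddof=1)                   # systematic component
        sigma = shifts.std(axis=1, ddof=1).mean()           # random component (pooled)

        margin_mm = 2.5 * Sigma + 0.7 * sigma               # van Herk CTV-to-PTV recipe
        print(round(Sigma, 2), round(sigma, 2), round(margin_mm, 1))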

  16. Specific and quantitative detection of human polyomaviruses BKV, JCV, and SV40 by real time PCR.

    PubMed

    McNees, Adrienne L; White, Zoe S; Zanwar, Preeti; Vilchez, Regis A; Butel, Janet S

    2005-09-01

    The polyomaviruses that infect humans, BK virus (BKV), JC virus (JCV), and simian virus 40 (SV40), typically establish subclinical persistent infections. However, reactivation of these viruses in immunocompromised hosts is associated with renal nephropathy and hemorrhagic cystitis (HC) caused by BKV and with progressive multifocal leukoencephalopathy (PML) caused by JCV. Additionally, SV40 is associated with several types of human cancers including primary brain and bone cancers, mesotheliomas, and non-Hodgkin's lymphoma. Advancements in detection of these viruses may contribute to improved diagnosis and treatment of affected patients. To develop sensitive and specific real time quantitative polymerase chain reaction (RQ-PCR) assays for the detection of T-antigen DNA sequences of the human polyomaviruses BKV, JCV, and SV40 using the ABI Prism 7000 Sequence Detection System. Assays for absolute quantification of the viral T-ag sequences were designed and the sensitivity and specificity were evaluated. A quantitative assay to measure the single copy human RNAse P gene was also developed and evaluated in order to normalize viral gene copy numbers to cell numbers. Quantification of the target genes is sensitive and specific over a 7 log dynamic range. Ten copies each of the viral and cellular genes are reproducibly and accurately detected. The sensitivity of detection of the RQ-PCR assays is increased 10- to 100-fold compared to conventional PCR and agarose gel protocols. The primers and probes used to detect the viral genes are specific for each virus and there is no cross reactivity within the dynamic range of the standard dilutions. The sensitivity of detection for these assays is not reduced in human cellular extracts; however, different DNA extraction protocols may affect quantification. These assays provide a technique for rapid and specific quantification of polyomavirus genomes per cell in human samples.
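
    Absolute quantification with a real-time PCR assay of this kind typically proceeds from a standard curve of Ct versus log10 copy number, with normalization to cell number via the single-copy RNase P gene; the sketch below shows that generic calculation with invented Ct values.

        # Ct-based absolute quantification and per-cell normalization (illustrative).
        import numpy as np

        std_copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7])
        std_ct = np.array([36.1, 32.8, 29.4, 26.1, 22.7, 19.3, 16.0])

        slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

        def copies_from_ct(ct):
            """Invert the standard curve Ct = slope*log10(copies) + intercept."""
            return 10 ** ((ct - intercept) / slope)

        viral_copies = copies_from_ct(30.2)                 # T-antigen target
        rnasep_copies = copies_from_ct(24.5)                # single-copy cellular gene
        cells = rnasep_copies / 2.0                         # two RNase P alleles per diploid cell
        print(round(viral_copies / cells, 3))               # viral genomes per cell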

  17. Evaluating a medical error taxonomy.

    PubMed

    Brixey, Juliana; Johnson, Todd R; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a standard language for reporting medication errors. This project maps the NCC MERP taxonomy of medication error to MedWatch medical errors involving infusion pumps. Of particular interest are human factors associated with medical device errors. The NCC MERP taxonomy of medication errors is limited in mapping information from MEDWATCH because of the focus on the medical device and the format of reporting.

  18. Albertian errors in head-mounted displays: I. Choice of eye-point location for a near- or far-field task visualization.

    PubMed

    Rolland, Jannick; Ha, Yonggang; Fidopiastis, Cali

    2004-06-01

    A theoretical investigation of rendered depth and angular errors, or Albertian errors, linked to natural eye movements in binocular head-mounted displays (HMDs) is presented for three possible eye-point locations: the center of the entrance pupil, the nodal point, and the center of rotation of the eye. A numerical quantification was conducted for both the pupil and the center of rotation of the eye under the assumption that the user will operate solely in either the near field under an associated instrumentation setting or the far field under a different setting. Under these conditions, the eyes are taken to gaze in the plane of the stereoscopic images. Across conditions, results show that the center of the entrance pupil minimizes rendered angular errors, while the center of rotation minimizes rendered position errors. Significantly, this investigation quantifies that under proper setting of the HMD and correct choice of the eye points, rendered depth and angular errors can be brought to be either negligible or within specification of even the most stringent applications in performance of tasks in either the near field or the far field.

  19. An Evaluation of Departmental Radiation Oncology Incident Reports: Anticipating a National Reporting System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terezakis, Stephanie A., E-mail: stereza1@jhmi.edu; Harris, Kendra M.; Ford, Eric

    Purpose: Systems to ensure patient safety are of critical importance. The electronic incident reporting systems (IRS) of 2 large academic radiation oncology departments were evaluated for events that may be suitable for submission to a national reporting system (NRS). Methods and Materials: All events recorded in the combined IRS were evaluated from 2007 through 2010. Incidents were graded for potential severity using the validated French Nuclear Safety Authority (ASN) 5-point scale. These incidents were categorized into 7 groups: (1) human error, (2) software error, (3) hardware error, (4) error in communication between 2 humans, (5) error at the human-software interface, (6) error at the software-hardware interface, and (7) error at the human-hardware interface. Results: Between the 2 systems, 4407 incidents were reported. Of these events, 1507 (34%) were considered to have the potential for clinical consequences. Of these 1507 events, 149 (10%) were rated as having a potential severity of ≥2. Of these 149 events, the committee determined that 79 (53%) of these events would be submittable to a NRS, of which the majority was related to human error or to the human-software interface. Conclusions: A significant number of incidents were identified in this analysis. The majority of events in this study were related to human error and to the human-software interface, further supporting the need for a NRS to facilitate field-wide learning and system improvement.

  20. Designing a compact high performance brain PET scanner—simulation study

    NASA Astrophysics Data System (ADS)

    Gong, Kuang; Majewski, Stan; Kinahan, Paul E.; Harrison, Robert L.; Elston, Brian F.; Manjeshwar, Ravindra; Dolinsky, Sergei; Stolin, Alexander V.; Brefczynski-Lewis, Julie A.; Qi, Jinyi

    2016-05-01

    The desire to understand normal and disordered human brain function of upright, moving persons in natural environments motivates the development of the ambulatory micro-dose brain PET imager (AMPET). An ideal system would be light weight but with high sensitivity and spatial resolution, although these requirements are often in conflict with each other. One potential approach to meet the design goals is a compact brain-only imaging device with a head-sized aperture. However, a compact geometry increases parallax error in peripheral lines of response, which increases bias and variance in region of interest (ROI) quantification. Therefore, we performed simulation studies to search for the optimal system configuration and to evaluate the potential improvement in quantification performance over existing scanners. We used the Cramér-Rao variance bound to compare the performance for ROI quantification using different scanner geometries. The results show that while a smaller ring diameter can increase photon detection sensitivity and hence reduce the variance at the center of the field of view, it can also result in higher variance in peripheral regions when the length of detector crystal is 15 mm or more. This variance can be substantially reduced by adding depth-of-interaction (DOI) measurement capability to the detector modules. Our simulation study also shows that the relative performance depends on the size of the ROI, and a large ROI favors a compact geometry even without DOI information. Based on these results, we propose a compact ‘helmet’ design using detectors with DOI capability. Monte Carlo simulations show the helmet design can achieve four-fold higher sensitivity and resolve smaller features than existing cylindrical brain PET scanners. The simulations also suggest that improving TOF timing resolution from 400 ps to 200 ps also results in noticeable improvement in image quality, indicating better timing resolution is desirable for brain imaging.

  1. Designing a compact high performance brain PET scanner—simulation study

    PubMed Central

    Gong, Kuang; Majewski, Stan; Kinahan, Paul E; Harrison, Robert L; Elston, Brian F; Manjeshwar, Ravindra; Dolinsky, Sergei; Stolin, Alexander V; Brefczynski-Lewis, Julie A; Qi, Jinyi

    2016-01-01

    The desire to understand normal and disordered human brain function of upright, moving persons in natural environments motivates the development of the ambulatory micro-dose brain PET imager (AMPET). An ideal system would be light weight but with high sensitivity and spatial resolution, although these requirements are often in conflict with each other. One potential approach to meet the design goals is a compact brain-only imaging device with a head-sized aperture. However, a compact geometry increases parallax error in peripheral lines of response, which increases bias and variance in region of interest (ROI) quantification. Therefore, we performed simulation studies to search for the optimal system configuration and to evaluate the potential improvement in quantification performance over existing scanners. We used the Cramér–Rao variance bound to compare the performance for ROI quantification using different scanner geometries. The results show that while a smaller ring diameter can increase photon detection sensitivity and hence reduce the variance at the center of the field of view, it can also result in higher variance in peripheral regions when the length of detector crystal is 15 mm or more. This variance can be substantially reduced by adding depth-of-interaction (DOI) measurement capability to the detector modules. Our simulation study also shows that the relative performance depends on the size of the ROI, and a large ROI favors a compact geometry even without DOI information. Based on these results, we propose a compact ‘helmet’ design using detectors with DOI capability. Monte Carlo simulations show the helmet design can achieve four-fold higher sensitivity and resolve smaller features than existing cylindrical brain PET scanners. The simulations also suggest that improving TOF timing resolution from 400 ps to 200 ps also results in noticeable improvement in image quality, indicating better timing resolution is desirable for brain imaging. PMID:27081753

  2. Sleep quality, posttraumatic stress, depression, and human errors in train drivers: a population-based nationwide study in South Korea.

    PubMed

    Jeon, Hong Jin; Kim, Ji-Hae; Kim, Bin-Na; Park, Seung Jin; Fava, Maurizio; Mischoulon, David; Kang, Eun-Ho; Roh, Sungwon; Lee, Dongsoo

    2014-12-01

    Human error is defined as an unintended error attributable to humans rather than machines, and avoiding it is important for preventing accidents. We aimed to investigate the association between sleep quality and human errors among train drivers. Cross-sectional. Population-based. A sample of 5,480 subjects who were actively working as train drivers were recruited in South Korea. The participants were 4,634 drivers who completed all questionnaires (response rate 84.6%). None. The Pittsburgh Sleep Quality Index (PSQI), the Center for Epidemiologic Studies Depression Scale (CES-D), the Impact of Event Scale-Revised (IES-R), the State-Trait Anxiety Inventory (STAI), and the Korean Occupational Stress Scale (KOSS). Of 4,634 train drivers, 349 (7.5%) showed more than one human error per 5 y. Human errors were associated with poor sleep quality, higher PSQI total scores, short sleep duration at night, and longer sleep latency. Among train drivers with poor sleep quality, those who experienced severe posttraumatic stress showed a significantly higher number of human errors than those without. Multiple logistic regression analysis showed that human errors were significantly associated with poor sleep quality and posttraumatic stress, whereas there were no significant associations with depression, trait and state anxiety, and work stress after adjusting for age, sex, education years, marital status, and career duration. Poor sleep quality was found to be associated with more human errors in train drivers, especially in those who experienced severe posttraumatic stress. © 2014 Associated Professional Sleep Societies, LLC.

  3. Quantification of breast density with spectral mammography based on a scanned multi-slit photon-counting detector: A feasibility study

    PubMed Central

    Ding, Huanjun; Molloi, Sabee

    2012-01-01

    Purpose A simple and accurate measurement of breast density is crucial for the understanding of its impact in breast cancer risk models. The feasibility of quantifying volumetric breast density with a photon-counting spectral mammography system has been investigated using both computer simulations and physical phantom studies. Methods A computer simulation model involving polyenergetic spectra from a tungsten anode x-ray tube and a Si-based photon-counting detector was evaluated for breast density quantification. The figure-of-merit (FOM), which was defined as the signal-to-noise ratio (SNR) of the dual energy image with respect to the square root of mean glandular dose (MGD), was chosen to optimize the imaging protocols in terms of tube voltage and splitting energy. A scanning multi-slit photon-counting spectral mammography system was employed in the experimental study to quantitatively measure breast density using dual energy decomposition with glandular and adipose equivalent phantoms of uniform thickness. Four different phantom studies were designed to evaluate the accuracy of the technique, each of which addressed one specific variable in the phantom configurations, including thickness, density, area and shape. In addition to the standard calibration fitting function used for dual energy decomposition, a modified fitting function was proposed, which introduced the tube voltage used in the imaging task as a third variable in the dual energy decomposition. Results For an average-sized breast of 4.5 cm thickness, the FOM was maximized with a tube voltage of 46 kVp and a splitting energy of 24 keV. To be consistent with the tube voltage used in current clinical screening exams (~32 kVp), the optimal splitting energy was proposed to be 22 keV, which offered a FOM greater than 90% of the optimal value. In the experimental investigation, the root-mean-square (RMS) error in breast density quantification for all four phantom studies was estimated to be approximately 1.54% using the standard calibration function. The results from the modified fitting function, which integrated the tube voltage as a variable in the calibration, indicated an RMS error of approximately 1.35% for all four studies. Conclusions The results of the current study suggest that photon-counting spectral mammography systems may potentially be implemented for an accurate quantification of volumetric breast density, with an RMS error of less than 2%, using the proposed dual energy imaging technique. PMID:22771941
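
    The calibration fitting function referred to above maps the low- and high-energy signals of phantoms with known composition to glandular and adipose thicknesses; the sketch below uses an assumed second-order polynomial surface and invented signal values purely to illustrate the decomposition, and is not the authors' calibration.

        # Illustrative dual-energy calibration and breast-density computation.
        import numpy as np

        # Calibration phantoms: known glandular/adipose thicknesses (cm) and
        # hypothetical low-/high-energy log-signals for each configuration.
        t_g = np.array([0.0, 1.0, 2.0, 0.0, 1.5, 3.0, 2.0, 4.0])
        t_a = np.array([4.0, 3.0, 2.0, 2.0, 1.5, 1.0, 3.0, 0.5])
        rng = np.random.default_rng(0)
        L = 0.50 * t_g + 0.40 * t_a + rng.normal(0, 0.02, t_g.size)
        H = 0.30 * t_g + 0.27 * t_a + rng.normal(0, 0.02, t_g.size)

        def design(l, h):
            # Second-order polynomial surface in the two log-signals.
            return np.column_stack([np.ones_like(l), l, h, l * h, l ** 2, h ** 2])

        coef_g, *_ = np.linalg.lstsq(design(L, H), t_g, rcond=None)
        coef_a, *_ = np.linalg.lstsq(design(L, H), t_a, rcond=None)

        def breast_density(l, h):
            row = design(np.atleast_1d(l), np.atleast_1d(h))
            g, a = row @ coef_g, row @ coef_a
            return (g / (g + a)).item()

        print(round(breast_density(1.8, 1.14), 3))   # roughly 50% glandular by construction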

  4. Quantitative Evaluation of Segmentation- and Atlas-Based Attenuation Correction for PET/MR on Pediatric Patients.

    PubMed

    Bezrukov, Ilja; Schmidt, Holger; Gatidis, Sergios; Mantlik, Frédéric; Schäfer, Jürgen F; Schwenzer, Nina; Pichler, Bernd J

    2015-07-01

    Pediatric imaging is regarded as a key application for combined PET/MR imaging systems. Because existing MR-based attenuation-correction methods were not designed specifically for pediatric patients, we assessed the impact of 2 potentially influential factors: inter- and intrapatient variability of attenuation coefficients and anatomic variability. Furthermore, we evaluated the quantification accuracy of 3 methods for MR-based attenuation correction without (SEGbase) and with bone prediction using an adult and a pediatric atlas (SEGwBONEad and SEGwBONEpe, respectively) on PET data of pediatric patients. The variability of attenuation coefficients between and within pediatric (5-17 y, n = 17) and adult (27-66 y, n = 16) patient collectives was assessed on volumes of interest (VOIs) in CT datasets for different tissue types. Anatomic variability was assessed on SEGwBONEad/pe attenuation maps by computing mean differences to CT-based attenuation maps for regions of bone tissue, lungs, and soft tissue. PET quantification was evaluated on VOIs with physiologic uptake and on 80% isocontour VOIs with elevated uptake in the thorax and abdomen/pelvis. Inter- and intrapatient variability of the bias was assessed for each VOI group and method. Statistically significant differences in mean VOI Hounsfield unit values and linear attenuation coefficients between adult and pediatric collectives were found in the lungs and femur. The prediction of attenuation maps using the pediatric atlas showed a reduced error in bone tissue and better delineation of bone structure. Evaluation of PET quantification accuracy showed statistically significant mean errors in mean standardized uptake values of -14% ± 5% and -23% ± 6% in bone marrow and femur-adjacent VOIs with physiologic uptake for SEGbase, which could be reduced to 0% ± 4% and -1% ± 5% using SEGwBONEpe attenuation maps. Bias in soft-tissue VOIs was less than 5% for all methods. Lung VOIs showed high SDs in the range of 15% for all methods. For VOIs with elevated uptake, mean and SD were less than 5% except in the thorax. The use of a dedicated atlas for the pediatric patient collective resulted in improved attenuation map prediction in osseous regions and reduced interpatient bias variation in femur-adjacent VOIs. For the lungs, in which intrapatient variation was higher for the pediatric collective, a patient- or group-specific attenuation coefficient might improve attenuation map accuracy. Mean errors of -14% and -23% in bone marrow and femur-adjacent VOIs can affect PET quantification in these regions when bone tissue is ignored. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  5. Addressing Systematic Errors in Correlation Tracking on HMI Magnetograms

    NASA Astrophysics Data System (ADS)

    Mahajan, Sushant S.; Hathaway, David H.; Munoz-Jaramillo, Andres; Martens, Petrus C.

    2017-08-01

    Correlation tracking in solar magnetograms is an effective method to measure the differential rotation and meridional flow on the solar surface. However, since the tracking accuracy required to successfully measure meridional flow is very high, small systematic errors have a noticeable impact on the measured meridional flow profiles. Additionally, the uncertainties of these measurements have historically been underestimated, leading to controversy regarding flow profiles at high latitudes extracted from measurements that are unreliable near the solar limb. Here we present a set of systematic errors we have identified (and potential solutions), including bias caused by physical pixel sizes, center-to-limb systematics, and discrepancies between measurements performed using different time intervals. We have developed numerical techniques to remove these systematic errors and, in the process, improve the accuracy of the measurements by an order of magnitude. We also present a detailed analysis of uncertainties in these measurements using synthetic magnetograms, and quantify, as a function of latitude, an upper limit below which meridional flow measurements cannot be trusted.

  6. Application of Rapid Visco Analyser (RVA) viscograms and chemometrics for maize hardness characterisation.

    PubMed

    Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena

    2015-04-15

    It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. The RVA can therefore complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.
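    A minimal sketch of how the reported prediction-error metric can be computed; the reference and predicted hardness values below are hypothetical placeholders, not data from the study.

```python
import numpy as np

def rmsep(y_ref, y_pred):
    """Root mean square error of prediction between reference and predicted values."""
    y_ref = np.asarray(y_ref, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_ref - y_pred) ** 2)))

# Hypothetical hardness values (e.g., hectoliter mass) from a reference method
# and from an RVA-viscogram-based LW-PLS model.
reference = [74.2, 71.8, 69.5, 76.1]
predicted = [73.6, 72.5, 70.1, 75.4]
print(f"RMSEP = {rmsep(reference, predicted):.2f}")
```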

  7. In-line multipoint near-infrared spectroscopy for moisture content quantification during freeze-drying.

    PubMed

    Kauppinen, Ari; Toiviainen, Maunu; Korhonen, Ossi; Aaltonen, Jaakko; Järvinen, Kristiina; Paaso, Janne; Juuti, Mikko; Ketolainen, Jarkko

    2013-02-19

    During the past decade, near-infrared (NIR) spectroscopy has been applied for in-line moisture content quantification during a freeze-drying process. However, NIR has been used as a single-vial technique and thus is not representative of the entire batch. This has been considered as one of the main barriers for NIR spectroscopy becoming widely used in process analytical technology (PAT) for freeze-drying. Clearly it would be essential to monitor samples that reliably represent the whole batch. The present study evaluated multipoint NIR spectroscopy for in-line moisture content quantification during a freeze-drying process. Aqueous sucrose solutions were used as model formulations. NIR data was calibrated to predict the moisture content using partial least-squares (PLS) regression with Karl Fischer titration being used as a reference method. PLS calibrations resulted in root-mean-square error of prediction (RMSEP) values lower than 0.13%. Three noncontact, diffuse reflectance NIR probe heads were positioned on the freeze-dryer shelf to measure the moisture content in a noninvasive manner, through the side of the glass vials. The results showed that the detection of unequal sublimation rates within a freeze-dryer shelf was possible with the multipoint NIR system in use. Furthermore, in-line moisture content quantification was reliable especially toward the end of the process. These findings indicate that the use of multipoint NIR spectroscopy can achieve representative quantification of moisture content and hence a drying end point determination to a desired residual moisture level.
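    The calibration step described above can be illustrated with a short sketch using scikit-learn's PLS regression; the spectra, moisture values, and number of latent variables below are invented stand-ins, not the study's data or settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 30 NIR spectra (200 wavelength channels each) and their
# Karl Fischer reference moisture contents (% w/w).
spectra = rng.normal(size=(30, 200))
moisture = rng.uniform(0.1, 3.0, size=30)

# Calibrate a PLS model mapping spectra to moisture, analogous to the workflow above.
pls = PLSRegression(n_components=5)
pls.fit(spectra, moisture)

# Predict the moisture content of a new in-line measurement.
new_spectrum = rng.normal(size=(1, 200))
print(pls.predict(new_spectrum))
```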

  8. Analyte quantification with comprehensive two-dimensional gas chromatography: assessment of methods for baseline correction, peak delineation, and matrix effect elimination for real samples.

    PubMed

    Samanipour, Saer; Dimitriou-Christidis, Petros; Gros, Jonas; Grange, Aureline; Samuel Arey, J

    2015-01-02

    Comprehensive two-dimensional gas chromatography (GC×GC) is used widely to separate and measure organic chemicals in complex mixtures. However, approaches to quantify analytes in real, complex samples have not been critically assessed. We quantified 7 PAHs in a certified diesel fuel using GC×GC coupled to a flame ionization detector (FID), and we quantified 11 target chlorinated hydrocarbons in a lake water extract using GC×GC with an electron capture detector (μECD), further confirmed qualitatively by GC×GC with an electron capture negative chemical ionization time-of-flight mass spectrometer (ENCI-TOFMS). Target analyte peak volumes were determined using several existing baseline correction and peak delineation algorithms. Analyte quantifications were conducted using external standards and also using standard additions, enabling us to diagnose matrix effects. We then applied several chemometric tests to these data. We find that the choice of baseline correction algorithm and peak delineation algorithm strongly influences the reproducibility of the analyte signal, the error of the calibration offset, the proportionality of the integrated signal response, and the accuracy of quantifications. Additionally, the choice of baseline correction and peak delineation algorithms is essential for correctly discriminating analyte signal from unresolved complex mixture signal, and this is the chief consideration for controlling matrix effects during quantification. The diagnostic approaches presented here provide guidance for analyte quantification using GC×GC. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Exploiting multicompartment effects in triple-echo steady-state T2 mapping for fat fraction quantification.

    PubMed

    Liu, Dian; Steingoetter, Andreas; Curcic, Jelena; Kozerke, Sebastian

    2018-01-01

    To investigate and exploit the effect of intravoxel off-resonance compartments in the triple-echo steady-state (TESS) sequence without fat suppression for T2 mapping, and to leverage the results for fat fraction quantification. In multicompartment tissue, where at least one compartment is excited off-resonance, the total signal exhibits periodic modulations as a function of echo time (TE). Simulated multicompartment TESS signals were synthesized at various TEs. Fat emulsion phantoms were prepared and scanned at the same TE combinations using TESS. In vivo knee data were obtained with TESS to validate the simulations. The multicompartment effect was exploited for fat fraction quantification in the stomach by acquiring TESS signals at two TE combinations. Simulated and measured multicompartment signal intensities were in good agreement. Multicompartment effects caused erroneous T2 offsets, even at low water-fat ratios. The choice of TE caused T2 variations of as much as 28% in cartilage. The feasibility of fat fraction quantification to monitor the decrease of fat content in the stomach during digestion is demonstrated. Intravoxel off-resonance compartments are a confounding factor for T2 quantification using TESS, causing errors that are dependent on the TE. At the same time, off-resonance effects may allow for efficient fat fraction mapping using steady-state imaging. Magn Reson Med 79:423-429, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  10. Full-field transient vibrometry of the human tympanic membrane by local phase correlation and high-speed holography

    PubMed Central

    Dobrev, Ivo; Furlong, Cosme; Cheng, Jeffrey T.; Rosowski, John J.

    2014-01-01

    Understanding the human hearing process would be helped by quantification of the transient mechanical response of the human ear, including the human tympanic membrane (TM or eardrum). We propose a new hybrid high-speed holographic system (HHS) for acquisition and quantification of the full-field nanometer transient (i.e., >10 kHz) displacement of the human TM. We have optimized and implemented a 2+1 frame local correlation (LC) based phase sampling method in combination with a high-speed (i.e., >40 kfps) camera acquisition system. To our knowledge, there is currently no existing system that provides such capabilities for the study of the human TM. The LC sampling method has a displacement difference of <11 nm relative to measurements obtained by a four-phase step algorithm. Comparisons between our high-speed acquisition system and a laser Doppler vibrometer indicate differences of <10 μs. The high temporal (i.e., >40 kHz) and spatial (i.e., >100 k data points) resolution of our HHS enables parallel measurements of all points on the surface of the TM, which allows quantification of spatially dependent motion parameters, such as modal frequencies and acoustic delays. Such capabilities could allow inferring local material properties across the surface of the TM. PMID:25191832

  11. Analyzing human errors in flight mission operations

    NASA Technical Reports Server (NTRS)

    Bruno, Kristin J.; Welz, Linda L.; Barnes, G. Michael; Sherif, Josef

    1993-01-01

    A long-term program is in progress at JPL to reduce the cost and risk of flight mission operations through a defect prevention/error management program. The main thrust of this program is to create an environment in which the performance of the total system, both the human operator and the computer system, is optimized. To this end, 1580 Incident Surprise Anomaly reports (ISAs) from 1977-1991 were analyzed from the Voyager and Magellan projects. A Pareto analysis revealed that 38 percent of the errors were classified as human errors. A preliminary cluster analysis based on the Magellan human errors (204 ISAs) is presented here; the resulting clusters describe the underlying relationships among the ISAs. Initial models of human error in flight mission operations are presented. Next, the Voyager ISAs will be scored and included in the analysis. Eventually, these relationships will be used to derive a theoretically motivated and empirically validated model of human error in flight mission operations. Ultimately, this analysis will be used to make continuous process improvements to end-user applications and training requirements. This Total Quality Management approach will enable the management and prevention of errors in the future.
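    A Pareto-style breakdown like the one described above can be reproduced with a few lines of code; the incident categories below are hypothetical stand-ins for the ISA classifications, not the actual Voyager/Magellan data.

```python
from collections import Counter

# Hypothetical ISA classifications; the real study analyzed 1580 reports.
isa_classes = ["human", "software", "human", "hardware", "procedure", "human"]

counts = Counter(isa_classes)
total = sum(counts.values())

# Pareto-style ranking: categories sorted by their share of all incidents.
for category, n in counts.most_common():
    print(f"{category:10s} {n:3d} ({100 * n / total:.1f}%)")
```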

  12. freeQuant: A Mass Spectrometry Label-Free Quantification Software Tool for Complex Proteome Analysis.

    PubMed

    Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong

    2015-01-01

    The study of complex proteomes places greater demands on quantification methods based on mass spectrometry. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis that makes full use of tandem mass spectrometry (MS/MS) spectral counts, protein sequence length, shared peptides, and ion intensity. It adopts spectral counting for quantitative analysis and introduces a new method for handling shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, the MS/MS total ion count is combined with the spectral count to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant improve the accuracy of quantification and provide a better dynamic range.

  13. A new restricted access molecularly imprinted polymer capped with albumin for direct extraction of drugs from biological matrices: the case of chlorpromazine in human plasma.

    PubMed

    de Oliveira Isac Moraes, Gabriel; da Silva, Larissa Meirelles Rodrigues; dos Santos-Neto, Alvaro José; Florenzano, Fábio Herbst; Figueiredo, Eduardo Costa

    2013-09-01

    A new restricted access molecularly imprinted polymer coated with bovine serum albumin (RAMIP-BSA) was developed, characterized, and used for direct analysis of chlorpromazine in human plasma samples. The RAMIP-BSA was synthesized using chlorpromazine, methacrylic acid, and ethylene glycol dimethacrylate as template, functional monomer, and cross-linker, respectively. Glycerol dimethacrylate and hydroxy methyl methacrylate were used to promote a hydrophilic surface (high density of hydroxyl groups). Afterward, the polymer was coated with BSA using glutaraldehyde as cross-linker, resulting in a protein chemical shield around it. The material was able to eliminate ca. 99% of protein when a 44 mg mL−1 BSA aqueous solution was passed through it. The RAMIP-BSA was packed in a column and used for direct analysis of chlorpromazine in human plasma samples in an online column-switching high-performance liquid chromatography system. The analytical calibration curve was prepared in a pool of human plasma samples with chlorpromazine concentrations ranging from 30 to 350 μg L−1. The correlation coefficient obtained was 0.995 and the limit of quantification was 30 μg L−1. Intra-day and inter-day precision and accuracy showed coefficients of variation lower than 15% and relative errors within ±15%, respectively. The sample throughput was 3 h−1 (sample preparation and chromatographic analysis steps) and the same RAMIP-BSA column was efficiently used for about 90 cycles.

  14. Targeted quantification of low ng/mL level proteins in human serum without immunoaffinity depletion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Tujin; Sun, Xuefei; Gao, Yuqian

    2013-07-05

    We recently reported an antibody-free targeted protein quantification strategy, termed high-pressure, high-resolution separations with intelligent selection and multiplexing (PRISM), for achieving significantly enhanced sensitivity using selected reaction monitoring (SRM) mass spectrometry. Integrating PRISM with front-end IgY14 immunoaffinity depletion, sensitive detection of targeted proteins at 50-100 pg/mL levels in human blood plasma/serum was demonstrated. However, immunoaffinity depletion is often associated with undesired losses of target proteins of interest. Herein we report further evaluation of PRISM-SRM quantification of low-abundance serum proteins without immunoaffinity depletion and the multiplexing potential of this technique. Limits of quantification (LOQs) at low ng/mL levels with a median CV of ~12% were achieved for proteins spiked into human female serum using as little as 2 µL of serum. PRISM-SRM provided up to ~1000-fold improvement in the LOQ when compared to conventional SRM measurements. The multiplexing capability of PRISM-SRM was also evaluated using two sets of serum samples with 6 and 21 target peptides spiked at low attomole/µL levels. The results from SRM measurements for pooled or post-concatenated samples were comparable to those obtained from individual peptide fractions in terms of signal-to-noise ratios and SRM peak area ratios of light to heavy peptides. PRISM-SRM was applied to measure several ng/mL-level endogenous plasma proteins, including prostate-specific antigen, in clinical patient sera, where correlation coefficients > 0.99 were observed between the results from PRISM-SRM and ELISA assays. Our results demonstrate that PRISM-SRM can be successfully used for quantification of low-abundance endogenous proteins in highly complex samples. Moderate throughput (50 samples/week) can be achieved by applying the post-concatenation or fraction multiplexing strategies. We anticipate broad applications for targeted PRISM-SRM quantification of low-abundance cellular proteins in systems biology studies as well as candidate biomarkers in biofluids.
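    The light-to-heavy SRM peak area ratio mentioned above is the basis of quantification when an isotopically labeled peptide is spiked in; a minimal sketch, assuming equal response factors for the light and heavy forms (all numbers hypothetical):

```python
def analyte_amount(light_area, heavy_area, heavy_spike_fmol):
    """Estimate the endogenous peptide amount from the light/heavy SRM peak area ratio,
    assuming the light and heavy forms respond identically in the mass spectrometer."""
    return (light_area / heavy_area) * heavy_spike_fmol

# Hypothetical transition peak areas and a 10 fmol heavy-peptide spike per injection.
print(analyte_amount(light_area=3.2e4, heavy_area=6.4e4, heavy_spike_fmol=10.0))  # 5.0 fmol
```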

  15. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    NASA Technical Reports Server (NTRS)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS), where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and a human-centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and a human-centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  16. Use of a portable, automated, open-circuit gas quantification system and the sulfur hexafluoride tracer technique for measuring enteric methane emissions in Holstein cows fed ad libitum or restricted

    USDA-ARS?s Scientific Manuscript database

    The sulfur hexafluoride tracer technique (SF6) is a commonly used method for measuring enteric CH4 emissions in ruminants. Studies using SF6 have shown large variation in CH4 emissions data, inconsistencies in CH4 emissions across studies, and potential methodological errors. Therefore, th...

  17. Blade Sections in Streamwise Oscillations into Reverse Flow

    DTIC Science & Technology

    2015-05-07

    Keywords: reverse flow, oscillating airfoils, oscillating freestream. ... plate or bluff body rather than an airfoil. Reverse flow operation requires investigation and quantification to accurately capture these ... airfoil integrated quantities (lift, drag, moment) in reverse flow and developed new algorithms for comprehensive codes, reducing errors from 30%-50

  18. Uncertainty quantification metrics for whole product life cycle cost estimates in aerospace innovation

    NASA Astrophysics Data System (ADS)

    Schwabe, O.; Shehab, E.; Erkoyuncu, J.

    2015-08-01

    The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations, such as propulsion systems or airframes, poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates, based on a literature review, an evaluation of publicly funded projects such as those within the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art), and those suggested for future exploration (state-of-future). The insights gained are that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and on the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and the number of whole product life cycle phases can thereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified are the lack of theoretically grounded guidance for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis for future work in this field.

  19. Quantification of Kryptofix 2.2.2 in [18F]fluorine-labelled radiopharmaceuticals by rapid-resolution liquid chromatography.

    PubMed

    Lao, Yexing; Yang, Cuiping; Zou, Wei; Gan, Manquan; Chen, Ping; Su, Weiwei

    2012-05-01

    The cryptand Kryptofix 2.2.2 is used extensively as a phase-transfer reagent in the preparation of [18F]fluoride-labelled radiopharmaceuticals. However, it has considerable acute toxicity. The aim of this study was to develop and validate a method for rapid (within 1 min), specific and sensitive quantification of Kryptofix 2.2.2 at trace levels. Chromatographic separations were carried out by rapid-resolution liquid chromatography (Agilent ZORBAX SB-C18 rapid-resolution column, 2.1 × 30 mm, 3.5 μm). Tandem mass spectra were acquired using a triple quadrupole mass spectrometer equipped with an electrospray ionization interface. Quantitative mass spectrometric analysis was conducted in positive ion mode and multiple reaction monitoring mode for the m/z 377.3 → 114.1 transition for Kryptofix 2.2.2. The external standard method was used for quantification. The method met the precision and efficiency requirements for PET radiopharmaceuticals, providing satisfactory results for specificity, matrix effect, stability, linearity (0.5-100 ng/ml, r2 = 0.9975), precision (coefficient of variation < 5%), accuracy (relative error < ±3%), sensitivity (lower limit of quantification = 0.5 ng/ml) and detection time (<1 min). Fluorodeoxyglucose (n=6) was analysed, and the Kryptofix 2.2.2 content was found to be well below the maximum permissible levels approved by the US Food and Drug Administration. The developed method has a short analysis time (<1 min) and high sensitivity (lower limit of quantification = 0.5 ng/ml) and can be successfully applied to rapid quantification of Kryptofix 2.2.2 at trace levels in fluorodeoxyglucose. This method could also be applied to other [18F]fluorine-labelled radiopharmaceuticals that use Kryptofix 2.2.2 as a phase-transfer reagent.

  20. Quantitative PCR for Genetic Markers of Human Fecal Pollution

    EPA Science Inventory

    Assessment of health risk and fecal bacteria loads associated with human fecal pollution requires reliable host-specific analytical methods and a rapid quantification approach. We report the development of quantitative PCR assays for quantification of two recently described human-...

  1. Modeling human response errors in synthetic flight simulator domain

    NASA Technical Reports Server (NTRS)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight quality handling simulation.

  2. Defining the Relationship Between Human Error Classes and Technology Intervention Strategies

    NASA Technical Reports Server (NTRS)

    Wiegmann, Douglas A.; Rantanen, Esa; Crisp, Vicki K. (Technical Monitor)

    2002-01-01

    One of the main factors in all aviation accidents is human error. The NASA Aviation Safety Program (AvSP), therefore, has identified several human-factors safety technologies to address this issue. Some technologies directly address human error, either by attempting to reduce the occurrence of errors or by mitigating the negative consequences of errors. However, new technologies and system changes may also introduce new error opportunities or even induce different types of errors. Consequently, a thorough understanding of the relationship between error classes and technology "fixes" is crucial for the evaluation of intervention strategies outlined in the AvSP, so that resources can be effectively directed to maximize the benefit to flight safety. The purpose of the present project, therefore, was to examine repositories of human factors data to identify possible relationships between different error classes and technology intervention strategies. The first phase of the project, which is summarized here, involved the development of prototype data structures or matrices that map errors onto "fixes" (and vice versa), with the hope of facilitating the development of standards for evaluating safety products. Possible follow-on phases of this project are also discussed. These additional efforts include a thorough and detailed review of the literature to fill in the data matrix and the construction of a complete database and standards checklists.

  3. Nondestructive Detection and Quantification of Blueberry Bruising using Near-infrared (NIR) Hyperspectral Reflectance Imaging

    PubMed Central

    Jiang, Yu; Li, Changying; Takeda, Fumiomi

    2016-01-01

    Currently, blueberry bruising is evaluated by either human visual/tactile inspection or firmness measurement instruments. These methods are destructive, time-consuming, and subjective. The goal of this paper was to develop a non-destructive approach for blueberry bruising detection and quantification. Experiments were conducted on 300 samples of southern highbush blueberry (Camellia, Rebel, and Star) and on 1500 samples of northern highbush blueberry (Bluecrop, Jersey, and Liberty) for hyperspectral imaging analysis, firmness measurement, and human evaluation. An algorithm was developed to automatically calculate a bruise ratio index (ratio of bruised to whole fruit area) for bruise quantification. The spectra of bruised and healthy tissues were statistically separated and the separation was independent of cultivars. Support vector machine (SVM) classification of the spectra from the regions of interest (ROIs) achieved over 94%, 92%, and 96% accuracy on the training set, independent testing set, and combined set, respectively. The statistical results showed that the bruise ratio index was equivalent to the measured firmness but better than the predicted firmness in regard to effectiveness of bruise quantification, and the bruise ratio index had a strong correlation with human assessment (R2 = 0.78 − 0.83). Therefore, the proposed approach and the bruise ratio index are effective to non-destructively detect and quantify blueberry bruising. PMID:27767050
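    The bruise ratio index described above is a simple area ratio computed from segmentation masks; below is a minimal sketch assuming hypothetical binary masks, not the study's actual hyperspectral classifier output.

```python
import numpy as np

def bruise_ratio_index(bruise_mask, fruit_mask):
    """Ratio of bruised pixel area to whole-fruit pixel area in a segmented image."""
    bruised = np.count_nonzero(np.logical_and(bruise_mask, fruit_mask))
    whole = np.count_nonzero(fruit_mask)
    return bruised / whole if whole else 0.0

# Hypothetical 4x4 segmentation masks (True = pixel belongs to the class).
fruit = np.array([[0, 1, 1, 0], [1, 1, 1, 1], [1, 1, 1, 1], [0, 1, 1, 0]], dtype=bool)
bruise = np.array([[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]], dtype=bool)
print(bruise_ratio_index(bruise, fruit))  # 3 bruised of 12 fruit pixels -> 0.25
```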

  4. Development and implementation of a human accuracy program in patient foodservice.

    PubMed

    Eden, S H; Wood, S M; Ptak, K M

    1987-04-01

    For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent, quality-controlled product increases consumer satisfaction and repeat purchase of the product. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. The human error rate was used to monitor and evaluate trayline employee performance and to evaluate the layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluation. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
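    The error-rate definition used above (errors divided by opportunities for error) is straightforward to compute; a small illustrative sketch with hypothetical trayline audit numbers follows.

```python
def human_error_rate(errors, opportunities):
    """Error rate as defined in the abstract: errors divided by opportunities for error."""
    if opportunities <= 0:
        raise ValueError("opportunities must be positive")
    return errors / opportunities

# Hypothetical trayline audit: 21 tray errors out of 700 tray assembly opportunities.
rate = human_error_rate(21, 700)
print(f"daily error rate = {rate:.1%}")  # 3.0%, matching the acceptable level cited above
```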

  5. Analysis of uncertainties and convergence of the statistical quantities in turbulent wall-bounded flows by means of a physically based criterion

    NASA Astrophysics Data System (ADS)

    Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu

    2018-04-01

    The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.

  6. Reflections on human error - Matters of life and death

    NASA Technical Reports Server (NTRS)

    Wiener, Earl L.

    1989-01-01

    The last two decades have witnessed a rapid growth in the introduction of automatic devices into aircraft cockpits, and elsewhere in human-machine systems. This was motivated in part by the assumption that when human functioning is replaced by machine functioning, human error is eliminated. Experience to date shows that this is far from true, and that automation does not replace humans, but changes their role in the system, as well as the types and severity of the errors they make. This altered role may lead to fewer, but more critical, errors. Intervention strategies to prevent these errors, or ameliorate their consequences, include basic human factors engineering of the interface, enhanced warning and alerting systems, and more intelligent interfaces that understand the strategic intent of the crew and can detect and trap inconsistent or erroneous input before it affects the system.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sommer, Karsten, E-mail: sommerk@uni-mainz.de, E-mail: Schreiber-L@ukw.de; Bernat, Dominik; Schmidt, Regine

    Purpose: The extent to which atherosclerotic plaques affect contrast agent (CA) transport in the coronary arteries and, hence, quantification of myocardial blood flow (MBF) using magnetic resonance imaging (MRI) is unclear. The purpose of this work was to evaluate the influence of plaque-induced stenosis both on CA transport and on the accuracy of MBF quantification. Methods: Computational fluid dynamics simulations in a highly detailed, realistic vascular model were employed to investigate CA bolus transport in the coronary arteries. The impact of atherosclerosis was analyzed by inserting various medium- to high-grade stenoses in the vascular model. The influence of stenosis morphology was examined by varying the stenosis shapes while keeping the area reduction constant. Errors due to CA bolus transport were analyzed using the tracer-kinetic model MMID4. Results: Dispersion of the CA bolus was found in all models and for all outlets, but with varying magnitude. The impact of stenosis was complex: while high-grade stenoses amplified dispersion, mild stenoses reduced the effect. Morphology was found to have a marked influence on dispersion for a small number of outlets in the post-stenotic region. Despite this marked influence on the concentration–time curves, MBF errors were less affected by stenosis. In total, MBF was underestimated by −7.9% to −44.9%. Conclusions: The presented results reveal that local hemodynamics in the coronary vasculature appears to have a direct impact on CA bolus dispersion. Inclusion of atherosclerotic plaques resulted in a complex alteration of this effect, with both the degree of area reduction and the stenosis morphology affecting the amount of dispersion. This strong influence of vascular transport effects impairs the accuracy of MRI-based MBF quantification techniques and, potentially, other bolus-based perfusion measurement techniques such as computed tomography perfusion imaging.

  8. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for the treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction, and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction: the quantification error relative to a dose calibrator derived measurement was <1%, −26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
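    For context, the triple energy window correction mentioned above is conventionally computed as a trapezoidal estimate of the scatter in the photopeak window from two narrow flanking windows; the sketch below uses that standard formulation with hypothetical counts and window widths, since the abstract does not spell out the formula or the acquisition settings.

```python
def tew_scatter_estimate(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Standard triple-energy-window (TEW) scatter estimate for the photopeak window:
    trapezoidal interpolation of the count rates in the two narrow flanking windows."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# Hypothetical per-pixel counts: a wide photopeak window flanked by two narrow
# scatter windows (widths in keV are illustrative, not the study's settings).
c_peak, c_low, c_high = 1200.0, 90.0, 40.0
scatter = tew_scatter_estimate(c_low, c_high, w_lower=6.0, w_upper=6.0, w_peak=60.0)
primary = max(c_peak - scatter, 0.0)
print(f"scatter = {scatter:.1f}, primary = {primary:.1f}")
```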

  9. Reliable estimates of predictive uncertainty for an Alpine catchment using a non-parametric methodology

    NASA Astrophysics Data System (ADS)

    Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.

    2017-04-01

    Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.

  10. A stochastic dynamic model for human error analysis in nuclear power plants

    NASA Astrophysics Data System (ADS)

    Delgado-Loperena, Dharma

    Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore-disassociated disciplines (behavior specialists and technical specialists) that have historically studied the nature of error and human behavior independently; it incorporates concepts derived from fractal and chaos theory and suggests a re-evaluation of base theory regarding human error. The results of this research were based on comprehensive analysis of patterns of error, with the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve any other formula used to study the consequences of human error. The literature search regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or who employed the ecological model in their work. The study of patterns obtained from a simulation of a steam generator tube rupture (SGTR) event provided a direct application to aspects of control room operations in nuclear power plants. In doing so, a conceptual foundation based on the understanding of the patterns of human error analysis can be gleaned, helping to reduce and prevent undesirable events.

  11. BIOLOGICAL BASIS OF SIMILARITIES AND DIFFERENCES BETWEEN HUMAN AND ECOSYSTEM HEALTH

    EPA Science Inventory

    In this chapter, we seek to promote the identification, quantification, and application of biological indicators of human health - ecological integrity interconnections and to determine the similarities and differences in ecological and human health responses to stress. Our objec...

  12. Ultra-high Performance Liquid Chromatography Tandem Mass-Spectrometry for Simple and Simultaneous Quantification of Cannabinoids

    PubMed Central

    Jamwal, Rohitash; Topletz, Ariel R.; Ramratnam, Bharat; Akhlaghi, Fatemeh

    2017-01-01

    Cannabis is used widely in the United States, both recreationally and for medical purposes. Current methods for analysis of cannabinoids in human biological specimens rely on complex extraction processes and lengthy analysis times. We established a rapid and simple assay for quantification of Δ9-tetrahydrocannabinol (THC), cannabidiol (CBD), 11-hydroxy Δ9-tetrahydrocannabinol (11-OH THC) and 11-nor-9-carboxy-Δ9-tetrahydrocannabinol (THC-COOH) in human plasma by U-HPLC-MS/MS using Δ9-tetrahydrocannabinol-D3 as the internal standard. Chromatographic separation was achieved on an Acquity BEH C18 column using a gradient comprising water (0.1% formic acid) and methanol (0.1% formic acid) over a 6 min run time. Analytes from 200 µL of plasma were extracted using acetonitrile (containing 1% formic acid and THC-D3). Mass spectrometry was performed in positive ionization mode, and the total ion chromatogram was used for quantification of the analytes. The assay was validated according to guidelines set forth by the United States Food and Drug Administration. An eight-point calibration curve was fitted with quadratic regression (r2>0.99) from 1.56 to 100 ng mL−1, and a lower limit of quantification (LLOQ) of 1.56 ng mL−1 was achieved. Accuracy and precision calculated from six calibration curves were between 85 and 115%, while the mean extraction recovery was >90% for all analytes. Several plasma phospholipids eluted after the analytes and thus did not interfere with the assay. Bench-top, freeze-thaw, auto-sampler and short-term stability ranged from 92.7 to 106.8% of nominal values. The method was applied to the quantification of the analytes in human plasma from six subjects. PMID:28192758
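    The quadratic calibration described above can be sketched as follows; the concentrations and responses are invented placeholders, and the back-calculation simply solves the fitted quadratic for an unknown response.

```python
import numpy as np

# Hypothetical eight-point calibration: nominal THC concentration (ng/mL) versus
# instrument response (e.g., analyte/internal-standard peak area ratio).
conc = np.array([1.56, 3.13, 6.25, 12.5, 25.0, 50.0, 75.0, 100.0])
response = np.array([0.021, 0.043, 0.088, 0.180, 0.372, 0.760, 1.150, 1.540])

# Quadratic calibration fit, as described in the abstract.
a, b, c = np.polyfit(conc, response, deg=2)
fitted = np.polyval([a, b, c], conc)

# Coefficient of determination of the fit.
ss_res = np.sum((response - fitted) ** 2)
ss_tot = np.sum((response - response.mean()) ** 2)
print("r^2 =", 1.0 - ss_res / ss_tot)

# Back-calculate the concentration of an unknown sample from its measured response
# by solving the fitted quadratic; keep the root inside the calibrated range.
unknown_response = 0.500
roots = np.roots([a, b, c - unknown_response])
estimate = [r.real for r in roots if abs(r.imag) < 1e-9 and 0 < r.real <= 120]
print("estimated concentration (ng/mL):", estimate)
```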

  13. Uncertainty Quantification in Geomagnetic Field Modeling

    NASA Astrophysics Data System (ADS)

    Chulliat, A.; Nair, M. C.; Alken, P.; Meyer, B.; Saltus, R.; Woods, A.

    2017-12-01

    Geomagnetic field models are mathematical descriptions of the various sources of the Earth's magnetic field, and are generally obtained by solving an inverse problem. They are widely used in research to separate and characterize field sources, but also in many practical applications such as aircraft and ship navigation, smartphone orientation, satellite attitude control, and directional drilling. In recent years, more sophisticated models have been developed, thanks to the continuous availability of high quality satellite data and to progress in modeling techniques. Uncertainty quantification has become an integral part of model development, both to assess the progress made and to address specific users' needs. Here we report on recent advances made by our group in quantifying the uncertainty of geomagnetic field models. We first focus on NOAA's World Magnetic Model (WMM) and the International Geomagnetic Reference Field (IGRF), two reference models of the main (core) magnetic field produced every five years. We describe the methods used in quantifying the model commission error as well as the omission error attributed to various un-modeled sources such as magnetized rocks in the crust and electric current systems in the atmosphere and near-Earth environment. A simple error model was derived from this analysis, to facilitate usage in practical applications. We next report on improvements brought by combining a main field model with a high resolution crustal field model and a time-varying, real-time external field model, like in NOAA's High Definition Geomagnetic Model (HDGM). The obtained uncertainties are used by the directional drilling industry to mitigate health, safety and environment risks.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aldegunde, Manuel, E-mail: M.A.Aldegunde-Rodriguez@warwick.ac.uk; Kermode, James R., E-mail: J.R.Kermode@warwick.ac.uk; Zabaras, Nicholas

    This paper presents the development of a new exchange–correlation functional from the point of view of machine learning. Using atomization energies of solids and small molecules, we train a linear model for the exchange enhancement factor using a Bayesian approach which allows for the quantification of uncertainties in the predictions. A relevance vector machine is used to automatically select the most relevant terms of the model. We then test this model on atomization energies and also on bulk properties. The average model provides a mean absolute error of only 0.116 eV for the test points of the G2/97 set but a larger 0.314 eV for the test solids. In terms of bulk properties, the prediction for transition metals and monovalent semiconductors has a very low test error. However, as expected, predictions for types of materials not represented in the training set, such as ionic solids, show much larger errors.
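    As an illustration of the sparse Bayesian linear modelling with uncertainty estimates described above, the sketch below uses scikit-learn's ARDRegression (an automatic-relevance-determination analogue of the relevance vector machine, not the authors' code) on synthetic stand-in data.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(1)

# Hypothetical stand-in data: descriptors of an exchange-enhancement-factor basis
# (features) versus reference atomization energies (targets, eV).
X = rng.normal(size=(60, 8))
true_weights = np.array([1.2, 0.0, -0.7, 0.0, 0.3, 0.0, 0.0, 0.5])
y = X @ true_weights + rng.normal(scale=0.05, size=60)

# Sparse Bayesian linear regression (ARD): irrelevant terms are pruned automatically,
# and predictions come with per-point uncertainty estimates.
model = ARDRegression()
model.fit(X, y)

mean, std = model.predict(rng.normal(size=(3, 8)), return_std=True)
print(mean, std)
```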

  15. Determination of nutritional parameters of yoghurts by FT Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Czaja, Tomasz; Baranowska, Maria; Mazurek, Sylwester; Szostak, Roman

    2018-05-01

    FT-Raman quantitative analysis of nutritional parameters of yoghurts was performed with the help of partial least squares models. The relative standard errors of prediction for fat, lactose and protein determination in the quantified commercial samples were 3.9, 3.2 and 3.6%, respectively. Models based on attenuated total reflectance spectra of liquid yoghurt samples and of dried yoghurt films collected with a single reflection diamond accessory showed relative standard errors of prediction of 1.6-5.0% and 2.7-5.2%, respectively, for the analysed components. Despite a relatively low signal-to-noise ratio in the obtained spectra, Raman spectroscopy, combined with chemometrics, constitutes a fast and powerful tool for macronutrient quantification in yoghurts. Errors obtained with the attenuated total reflectance method were somewhat higher than those for Raman spectroscopy due to inhomogeneity of the analysed samples.
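    The relative standard error of prediction quoted above is not defined in the abstract; one common definition, given here only for orientation, is

```latex
\mathrm{RSEP}\,(\%) \;=\; 100 \times
\sqrt{\frac{\sum_{i=1}^{n}\left(y_i^{\mathrm{pred}} - y_i^{\mathrm{ref}}\right)^{2}}
           {\sum_{i=1}^{n}\left(y_i^{\mathrm{ref}}\right)^{2}}}
```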

  16. Quantitative bioanalysis of strontium in human serum by inductively coupled plasma-mass spectrometry

    PubMed Central

    Somarouthu, Srikanth; Ohh, Jayoung; Shaked, Jonathan; Cunico, Robert L; Yakatan, Gerald; Corritori, Suzana; Tami, Joe; Foehr, Erik D

    2015-01-01

    Aim: A bioanalytical method using inductively-coupled plasma-mass spectrometry to measure endogenous levels of strontium in human serum was developed and validated. Results & methodology: This article details the experimental procedures used for the method development and validation thus demonstrating the application of the inductively-coupled plasma-mass spectrometry method for quantification of strontium in human serum samples. The assay was validated for specificity, linearity, accuracy, precision, recovery and stability. Significant endogenous levels of strontium are present in human serum samples ranging from 19 to 96 ng/ml with a mean of 34.6 ± 15.2 ng/ml (SD). Discussion & conclusion: Calibration procedures and sample pretreatment were simplified for high throughput analysis. The validation demonstrates that the method was sensitive, selective for quantification of strontium (88Sr) and is suitable for routine clinical testing of strontium in human serum samples. PMID:28031925

  17. Simultaneous enantioselective quantification of fluoxetine and norfluoxetine in human milk by direct sample injection using 2-dimensional liquid chromatography-tandem mass spectrometry.

    PubMed

    Alvim, Joel; Lopes, Bianca Rebelo; Cass, Quezia Bezerra

    2016-06-17

    A two-dimensional liquid chromatography system coupled to a triple quadrupole tandem mass spectrometer (2D LC-MS/MS) was employed for the simultaneous quantification of fluoxetine (FLX) and norfluoxetine (NFLX) enantiomers in human milk by direct injection of samples. A restricted access media bovine serum albumin octadecyl column (RAM-BSAC18) was used in the first dimension for depletion of the milk proteins, while an antibiotic-based chiral column was used in the second dimension. The results described herein show good selectivity, extraction efficiency, accuracy, and precision, with limits of quantification on the order of 7.5 ng mL−1 for the FLX enantiomers and 10.0 ng mL−1 for the NFLX enantiomers. Furthermore, the method represents a practical tool in terms of sustainability for the sample preparation of such a difficult matrix. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Quantification of the Triazole Antifungal Compounds Voriconazole and Posaconazole in Human Serum or Plasma Using Liquid Chromatography Electrospray Tandem Mass Spectrometry (HPLC-ESI-MS/MS).

    PubMed

    Molinelli, Alejandro R; Rose, Charles H

    2016-01-01

    Voriconazole and posaconazole are triazole antifungal compounds used in the treatment of fungal infections. Therapeutic drug monitoring of both compounds is recommended in order to guide drug dosing to achieve optimal blood concentrations. In this chapter we describe an HPLC-ESI-MS/MS method for the quantification of both compounds in human plasma or serum following a simple specimen preparation procedure. Specimen preparation consists of protein precipitation using methanol and acetonitrile followed by a cleanup step that involves filtration through a cellulose acetate membrane. The specimen is then injected into an HPLC-ESI-MS/MS equipped with a C18 column and separated over an acetonitrile gradient. Quantification of the drugs in the specimen is achieved by comparing the response of the unknown specimen to that of the calibrators in the standard curve using multiple reaction monitoring.

  19. High throughput, parallel imaging and biomarker quantification of human spermatozoa by ImageStream flow cytometry.

    PubMed

    Buckman, Clayton; George, Thaddeus C; Friend, Sherree; Sutovsky, Miriam; Miranda-Vizuete, Antonio; Ozanon, Christophe; Morrissey, Phil; Sutovsky, Peter

    2009-12-01

    Spermatid-specific thioredoxin-3 protein (SPTRX-3) accumulates in the superfluous cytoplasm of defective human spermatozoa. Novel ImageStream technology combining flow cytometry with cell imaging was used for parallel quantification and visualization of SPTRX-3 protein in defective spermatozoa of five men from infertile couples. The SPTRX-3-containing cells were overwhelmingly spermatozoa with a variety of morphological defects, detectable in the ImageStream-recorded images. Quantitative parameters of relative SPTRX-3-induced fluorescence measured by ImageStream correlated closely with conventional flow cytometric measurements of the same sample set and reflected the results of clinical semen evaluation. ImageStream quantification of SPTRX-3 combines and surpasses the informative value of both conventional flow cytometry and light microscopic semen evaluation. The observed patterns of SPTRX-3 retention in the sperm samples from infertility patients support the view that SPTRX-3 is a biomarker of male infertility.

  20. Fluorescent quantification of melanin.

    PubMed

    Fernandes, Bruno; Matamá, Teresa; Guimarães, Diana; Gomes, Andreia; Cavaco-Paulo, Artur

    2016-11-01

    Melanin quantification is reportedly performed by absorption spectroscopy, commonly at 405 nm. Here, we propose the implementation of fluorescence spectroscopy for melanin assessment. In a typical in vitro assay to assess melanin production in response to an external stimulus, absorption spectroscopy clearly overvalues melanin content. This method is also incapable of distinguishing non-melanotic/amelanotic control cells from those that are actually capable of performing melanogenesis. Therefore, fluorescence spectroscopy is the best method for melanin quantification as it proved to be highly specific and accurate, detecting even small variations in the synthesis of melanin. This method can also be applied to the quantification of melanin in more complex biological matrices like zebrafish embryos and human hair. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. An Informatics-assisted Label-free Approach for Personalized Tissue Membrane Proteomics: Case Study on Colorectal Cancer*

    PubMed Central

    Han, Chia-Li; Chen, Jinn-Shiun; Chan, Err-Cheng; Wu, Chien-Peng; Yu, Kun-Hsing; Chen, Kuei-Tien; Tsou, Chih-Chiang; Tsai, Chia-Feng; Chien, Chih-Wei; Kuo, Yung-Bin; Lin, Pei-Yi; Yu, Jau-Song; Hsueh, Chuen; Chen, Min-Chi; Chan, Chung-Chuan; Chang, Yu-Sun; Chen, Yu-Ju

    2011-01-01

    We developed a multiplexed label-free quantification strategy, which integrates an efficient gel-assisted digestion protocol, high-performance liquid chromatography tandem MS analysis, and a bioinformatics alignment method to determine personalized proteomic profiles for membrane proteins in human tissues. This strategy provided accurate (6% error) and reproducible (34% relative S.D.) quantification of three independently purified membrane fractions from the same human colorectal cancer (CRC) tissue. Using CRC as a model, we constructed the personalized membrane protein atlas of paired tumor and adjacent normal tissues from 28 patients with different stages of CRC. Without fractionation, this strategy confidently quantified 856 proteins (≥2 unique peptides) across different patients, including the first and robust detection (Mascot score: 22,074) of the well-documented CRC marker, carcinoembryonic antigen 5 by a discovery-type proteomics approach. Further validation of a panel of proteins, annexin A4, neutrophils defensin A1, and claudin 3, confirmed differential expression levels and high occurrences (48–70%) in 60 CRC patients. The most significant discovery is the overexpression of stomatin-like 2 (STOML2) for early diagnostic and prognostic potential. Increased expression of STOML2 was associated with decreased CRC-related survival; the mean survival period was 34.77 ± 2.03 months in patients with high STOML2 expression, whereas 53.67 ± 3.46 months was obtained for patients with low STOML2 expression. Further analysis by ELISA verified that plasma concentrations of STOML2 in early-stage CRC patients were elevated as compared with those of healthy individuals (p < 0.001), suggesting that STOML2 may be a noninvasive serological biomarker for early CRC diagnosis. The overall sensitivity of STOML2 for CRC detection was 71%, which increased to 87% when combined with CEA measurements. This study demonstrated a sensitive, label-free strategy for differential analysis of tissue membrane proteome, which may provide a roadmap for the subsequent identification of molecular target candidates of multiple cancer types. PMID:21209152

  2. Immobilized Metal Affinity Chromatography Coupled to Multiple Reaction Monitoring Enables Reproducible Quantification of Phospho-signaling*

    PubMed Central

    Kennedy, Jacob J.; Yan, Ping; Zhao, Lei; Ivey, Richard G.; Voytovich, Uliana J.; Moore, Heather D.; Lin, Chenwei; Pogosova-Agadjanyan, Era L.; Stirewalt, Derek L.; Reding, Kerryn W.; Whiteaker, Jeffrey R.; Paulovich, Amanda G.

    2016-01-01

    A major goal in cell signaling research is the quantification of phosphorylation pharmacodynamics following perturbations. Traditional methods of studying cellular phospho-signaling measure one analyte at a time with poor standardization, rendering them inadequate for interrogating network biology and contributing to the irreproducibility of preclinical research. In this study, we test the feasibility of circumventing these issues by coupling immobilized metal affinity chromatography (IMAC)-based enrichment of phosphopeptides with targeted, multiple reaction monitoring (MRM) mass spectrometry to achieve precise, specific, standardized, multiplex quantification of phospho-signaling responses. A multiplex immobilized metal affinity chromatography- multiple reaction monitoring assay targeting phospho-analytes responsive to DNA damage was configured, analytically characterized, and deployed to generate phospho-pharmacodynamic curves from primary and immortalized human cells experiencing genotoxic stress. The multiplexed assays demonstrated linear ranges of ≥3 orders of magnitude, median lower limit of quantification of 0.64 fmol on column, median intra-assay variability of 9.3%, median inter-assay variability of 12.7%, and median total CV of 16.0%. The multiplex immobilized metal affinity chromatography- multiple reaction monitoring assay enabled robust quantification of 107 DNA damage-responsive phosphosites from human cells following DNA damage. The assays have been made publicly available as a resource to the community. The approach is generally applicable, enabling wide interrogation of signaling networks. PMID:26621847
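
    The intra-assay, inter-assay, and total variabilities reported above are typically derived from a variance-component (one-way ANOVA) analysis of replicate measurements across assay runs. The Python sketch below shows one common way to compute such CVs; the data, the balanced runs-by-replicates design, and the variable names are assumptions for illustration, not values from the study.

      import numpy as np

      # Illustrative peak-area measurements: rows = assay runs (days), columns = replicates.
      data = np.array([
          [102.0,  98.5, 101.2],
          [ 95.3,  97.8,  96.1],
          [105.0, 103.2, 106.4],
      ])
      n_runs, n_rep = data.shape
      grand_mean = data.mean()
      run_means = data.mean(axis=1)

      # Within-run (intra-assay) variance: pooled variance of replicates about their run mean.
      ms_within = ((data - run_means[:, None]) ** 2).sum() / (n_runs * (n_rep - 1))
      # Between-run mean square and the run-to-run variance component.
      ms_between = n_rep * ((run_means - grand_mean) ** 2).sum() / (n_runs - 1)
      var_between = max((ms_between - ms_within) / n_rep, 0.0)

      cv_intra = np.sqrt(ms_within) / grand_mean * 100
      cv_inter = np.sqrt(var_between) / grand_mean * 100
      cv_total = np.sqrt(ms_within + var_between) / grand_mean * 100
      print(cv_intra, cv_inter, cv_total)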

  3. Simultaneous quantification of Δ9-tetrahydrocannabinol, 11-hydroxy-Δ9-tetrahydrocannabinol, and 11-nor-Δ9-tetrahydrocannabinol-9-carboxylic acid in human plasma using two-dimensional gas chromatography, cryofocusing, and electron impact-mass spectrometry

    PubMed Central

    Lowe, Ross H.; Karschner, Erin L.; Schwilke, Eugene W.; Barnes, Allan J.; Huestis, Marilyn A.

    2009-01-01

    A two-dimensional (2D) gas chromatography/electron impact-mass spectrometry (GC/EI-MS) method for simultaneous quantification of Δ9-tetrahydrocannabinol (THC), 11-hydroxy-Δ9-tetrahydrocannabinol (11-OH-THC), and 11-nor-Δ9-tetrahydrocannabinol-9-carboxylic acid (THCCOOH) in human plasma was developed and validated. The method employs 2D capillary GC and cryofocusing for enhanced resolution and sensitivity. THC, 11-OH-THC, and THCCOOH were extracted by precipitation with acetonitrile followed by solid-phase extraction. GC separation of trimethylsilyl derivatives of analytes was accomplished with two capillary columns in series coupled via a pneumatic Deans switch system. Detection and quantification were accomplished with a bench-top single quadrupole mass spectrometer operated in electron impact-selected ion monitoring mode. Limits of quantification (LOQ) were 0.125, 0.25 and 0.125 ng/mL for THC, 11-OH-THC, and THCCOOH, respectively. Accuracy ranged from 86.0 to 113.0% for all analytes. Intra- and inter-assay precision, as percent relative standard deviation, was less than 14.1% for THC, 11-OH-THC, and THCCOOH. The method was successfully applied to quantification of THC and its 11-OH-THC and THCCOOH metabolites in plasma specimens following controlled administration of THC. PMID:17640656

  4. Human factors process failure modes and effects analysis (HF PFMEA) software tool

    NASA Technical Reports Server (NTRS)

    Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)

    2011-01-01

    Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified; the human error is related to the verb used in describing the task. The likelihood of occurrence, detection, and correction of the human error is identified, as is the severity of its effect. From the likelihood of occurrence and the severity of the effect, the risk of potential harm is identified. The risk of potential harm is compared with a risk threshold to determine whether corrective measures are appropriate.
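
    The abstract describes comparing a risk of potential harm, built from likelihoods and a severity, against a risk threshold. The Python sketch below illustrates that idea with an FMEA-style scoring function; the 1-5 scales, the scoring formula, and the threshold are assumptions for illustration and are not taken from the patented tool.

      # Minimal sketch of an FMEA-style risk screen in the spirit of the tool described above.
      # The 1-5 scales, the scoring formula, and the threshold are assumed for illustration.

      def risk_score(occurrence, detection, correction, severity):
          """Higher scores mean riskier: likely, hard to detect, hard to correct, severe."""
          # Treat detection/correction as mitigations: 5 = very likely to be caught/fixed.
          mitigation = (detection + correction) / 2.0
          return occurrence * (6 - mitigation) * severity  # each factor on a 1-5 scale

      RISK_THRESHOLD = 40  # assumed cut-off above which corrective measures are flagged

      error = {"task": "enter command sequence", "verb": "type",
               "occurrence": 4, "detection": 2, "correction": 3, "severity": 5}
      score = risk_score(error["occurrence"], error["detection"],
                         error["correction"], error["severity"])
      print(score, "-> corrective measures recommended" if score > RISK_THRESHOLD else "-> acceptable")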

  5. Air Force Academy Homepage

    Science.gov Websites

    Index of Air Force Academy Institutional Review Board pages, including "Not Human Subjects Research" requirements and forms, researcher instructions for activities submitted to the DoD IRB, and guidance on the most frequent errors in "Not Human Subjects Research" and exempt research submissions.

  6. Development of an FAA-EUROCONTROL technique for the analysis of human error in ATM : final report.

    DOT National Transportation Integrated Search

    2002-07-01

    Human error has been identified as a dominant risk factor in safety-oriented industries such as air traffic control (ATC). However, little is known about the factors leading to human errors in current air traffic management (ATM) systems. The first s...

  7. Human Error: The Stakes Are Raised.

    ERIC Educational Resources Information Center

    Greenberg, Joel

    1980-01-01

    Mistakes related to the operation of nuclear power plants and other technologically complex systems are discussed. Recommendations are given for decreasing the chance of human error in the operation of nuclear plants. The causes of the Three Mile Island incident are presented in terms of the human error element. (SA)

  8. Avoiding Human Error in Mission Operations: Cassini Flight Experience

    NASA Technical Reports Server (NTRS)

    Burk, Thomas A.

    2012-01-01

    Operating spacecraft is a never-ending challenge and the risk of human error is ever-present. Many missions have been significantly affected by human error on the part of ground controllers. The Cassini mission at Saturn has not been immune to human error, but Cassini operations engineers use tools and follow processes that find and correct most human errors before they reach the spacecraft. What is needed are skilled engineers with good technical knowledge, good interpersonal communications, quality ground software, regular peer reviews, up-to-date procedures, as well as careful attention to detail and the discipline to test and verify all commands that will be sent to the spacecraft. Two areas of special concern are changes to flight software and response to in-flight anomalies. The Cassini team has a lot of practical experience in all these areas and they have found that well-trained engineers with good tools who follow clear procedures can catch most errors before they get into command sequences to be sent to the spacecraft. Finally, having a robust and fault-tolerant spacecraft that allows ground controllers excellent visibility of its condition is the most important way to ensure human error does not compromise the mission.

  9. Good people who try their best can have problems: recognition of human factors and how to minimise error.

    PubMed

    Brennan, Peter A; Mitchell, David A; Holmes, Simon; Plint, Simon; Parry, David

    2016-01-01

    Human error is as old as humanity itself and is an appreciable cause of mistakes by both organisations and people. Much of the work related to human factors in causing error has originated from aviation where mistakes can be catastrophic not only for those who contribute to the error, but for passengers as well. The role of human error in medical and surgical incidents, which are often multifactorial, is becoming better understood, and includes both organisational issues (by the employer) and potential human factors (at a personal level). Mistakes as a result of individual human factors and surgical teams should be better recognised and emphasised. Attitudes and acceptance of preoperative briefing has improved since the introduction of the World Health Organization (WHO) surgical checklist. However, this does not address limitations or other safety concerns that are related to performance, such as stress and fatigue, emotional state, hunger, awareness of what is going on (situational awareness), and other factors that could potentially lead to error. Here we attempt to raise awareness of these human factors, and highlight how they can lead to error, and how they can be minimised in our day-to-day practice. Can hospitals move from being "high risk industries" to "high reliability organisations"? Copyright © 2015 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  10. Using a Delphi Method to Identify Human Factors Contributing to Nursing Errors.

    PubMed

    Roth, Cheryl; Brewer, Melanie; Wieck, K Lynn

    2017-07-01

    The purpose of this study was to identify human factors associated with nursing errors. Using a Delphi technique, this study used feedback from a panel of nurse experts (n = 25) on an initial qualitative survey questionnaire followed by summarizing the results with feedback and confirmation. Synthesized factors regarding causes of errors were incorporated into a quantitative Likert-type scale, and the original expert panel participants were queried a second time to validate responses. The list identified 24 items as most common causes of nursing errors, including swamping and errors made by others that nurses are expected to recognize and fix. The responses provided a consensus top 10 errors list based on means with heavy workload and fatigue at the top of the list. The use of the Delphi survey established consensus and developed a platform upon which future study of nursing errors can evolve as a link to future solutions. This list of human factors in nursing errors should serve to stimulate dialogue among nurses about how to prevent errors and improve outcomes. Human and system failures have been the subject of an abundance of research, yet nursing errors continue to occur. © 2016 Wiley Periodicals, Inc.

  11. SU-G-IeP1-07: Inaccuracy of Lesion Blood Flow Quantification Related to the Proton Density Reference Image in Arterial Spin Labeling MRI of Brain Tumors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jen, M; Johnson, J; Hou, P

    Purpose: Cerebral blood flow quantification in arterial spin labeling (ASL) MRI requires an estimate of the equilibrium magnetization of blood, which is often obtained from a proton density (PD) reference image. Normally, a constant blood-brain partition coefficient is assumed across the brain. However, this assumption may not be valid for brain lesions. This study aimed to evaluate the impact of lesion-related PD variations on ASL quantification in patients with brain tumors. Methods: MR images for posttreatment evaluation of 42 patients with brain tumors were retrospectively analyzed. These images were acquired on a 3T MRI scanner, including T2-weighted FLAIR, 3D pseudo-continuous ASL and post-contrast T1-weighted images. Anatomical images were coregistered with ASL images using the SPM software. Regions of interest (ROIs) of the enhancing and FLAIR lesions were manually drawn on the coregistered images. ROIs of the contralateral normal appearing tissues were also determined, with the consideration of approximating coil sensitivity patterns in lesion ROIs. Relative lesion blood flow (lesion/contralateral tissue) was calculated from both the CBF map (dependent on the PD) and the ΔM map for comparison. Results: The signal intensities in both enhancing and FLAIR lesions were significantly different from contralateral tissues on the PD reference image (p<0.001). The percent signal difference ranged from −15.9 to 19.2%, with a mean of 5.4% for the enhancing lesion, and from −2.8 to 22.9% with a mean of 10.1% for the FLAIR lesion. The high/low lesion-related PD signal resulted in inversely proportional under-/over-estimation of blood flow in both enhancing and FLAIR lesions. Conclusion: Significant signal differences were found between lesions and contralateral tissues in the PD reference image, which introduced errors in blood flow quantification in ASL. The error can be up to 20% in individual patients, with an average of 5-10% for the group of patients with brain tumors.
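
    The ROI comparison described above can be summarized in a few lines: compute the percent signal difference on the PD reference image and the relative lesion blood flow from both the CBF and ΔM maps. The Python sketch below uses invented ROI means purely to illustrate the bookkeeping; real values come from the coregistered images.

      # Illustrative ROI means; real values come from the coregistered CBF, delta-M, and
      # PD images described above. Numbers are invented for illustration.
      lesion = {"cbf": 59.4, "dM": 1.9e-2, "pd": 1.10}   # enhancing-lesion ROI
      contra = {"cbf": 55.0, "dM": 1.6e-2, "pd": 1.00}   # contralateral normal-appearing ROI

      # Percent signal difference on the PD reference image (lesion vs. contralateral).
      pd_diff_pct = (lesion["pd"] - contra["pd"]) / contra["pd"] * 100

      # Relative lesion blood flow computed two ways, as in the study:
      rel_flow_cbf = lesion["cbf"] / contra["cbf"]   # from the CBF map (PD-dependent)
      rel_flow_dM = lesion["dM"] / contra["dM"]      # from the delta-M map (PD-independent)

      print(pd_diff_pct, rel_flow_cbf, rel_flow_dM)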

  12. Quantitative Determination of Noa (Naturally Occurring Asbestos) in Rocks : Comparison Between Pcom and SEM Analysis

    NASA Astrophysics Data System (ADS)

    Baietto, Oliviero; Amodeo, Francesco; Giorgis, Ilaria; Vitaliti, Martina

    2017-04-01

    The quantification of NOA (naturally occurring asbestos) in a rock or soil matrix is complex and subject to numerous errors. The purpose of this study is to compare two fundamental methodologies used for the analysis: the first uses phase contrast optical microscopy (PCOM), while the second uses scanning electron microscopy (SEM). Although the two methods provide the same result, the ratio of asbestos mass to total mass, they have completely different characteristics, each with pros and cons. Current legislation in Italy allows the use of SEM, DRX, FTIR and PCOM (DM 6/9/94) for the quantification of asbestos in bulk materials and soils, and the threshold beyond which the material is considered hazardous waste is an asbestos fibre concentration of 1000 mg/kg (DM 161/2012). The most widely used technology is SEM, which has the best analytical sensitivity of these techniques (120 mg/kg, DM 6/9/94). The fundamental differences among the analyses are mainly the amount of sample analysed, the representativeness of the sample, the resolution, the analytical precision, the uncertainty of the methodology and operator errors. Because of the quantification limits of DRX and FTIR (1%, DM 6/9/94), our asbestos laboratory (DIATI POLITO) has applied the PCOM methodology for more than twenty years and, in recent years, the SEM methodology for quantification of asbestos content. The aim of our research is to compare the results obtained from PCOM analysis with those provided by SEM analysis on the basis of more than 100 natural samples, both from cores (tunnel-boring or exploratory drilling) and from tunnelling excavation. The results show, in most cases, a good correlation between the two techniques. Of particular relevance is the fact that both techniques are reliable for very low quantities of asbestos, even lower than the analytical sensitivity. This work highlights the comparison between the two techniques, emphasizing the strengths and weaknesses of the two procedures, and suggests that an integrated approach, together with the skills and experience of the operator, may be the best way forward to achieve a constructive improvement of the analysis techniques.

  13. Automating Cell Detection and Classification in Human Brain Fluorescent Microscopy Images Using Dictionary Learning and Sparse Coding

    PubMed Central

    Alegro, Maryana; Theofilas, Panagiotis; Nguy, Austin; Castruita, Patricia A.; Seeley, William; Heinsen, Helmut; Ushizima, Daniela M.

    2017-01-01

    Background: Immunofluorescence (IF) plays a major role in quantifying protein expression in situ and understanding cell function. It is widely applied in assessing disease mechanisms and in drug discovery research. Automation of IF analysis can transform studies using experimental cell models. However, IF analysis of postmortem human tissue relies mostly on manual interaction, which is often low-throughput and prone to error, leading to low inter- and intra-observer reproducibility. Human postmortem brain samples challenge neuroscientists because of the high level of autofluorescence caused by accumulation of lipofuscin pigment during aging, hindering systematic analyses. We propose a method for automating cell counting and classification in IF microscopy of human postmortem brains. Our algorithm speeds up the quantification task while improving reproducibility. New method: Dictionary learning and sparse coding allow for constructing improved cell representations using IF images. These models are input for detection and segmentation methods. Classification occurs by means of color distances between cells and a learned set. Results: Our method successfully detected and classified cells in 49 human brain images. We evaluated our results regarding true positive, false positive, false negative, precision, recall, false positive rate and F1 score metrics. We also measured user experience and time saved compared to manual counting. Comparison with existing methods: We compared our results to four open-access IF-based cell-counting tools available in the literature. Our method showed improved accuracy for all data samples. Conclusion: The proposed method satisfactorily detects and classifies cells from human postmortem brain IF images, with potential to be generalized for applications in other counting tasks. PMID:28267565
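
    The sketch below is a toy illustration of the two ingredients named in the abstract, sparse coding of image patches with a learned dictionary and classification by color distance to a learned reference set. It is not the authors' pipeline: the random patches, the scikit-learn estimator settings, and the reference class colors are all assumptions for illustration.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning

      rng = np.random.default_rng(0)

      # Toy stand-in for image patches extracted around candidate cells (n_patches x patch_pixels).
      patches = rng.normal(size=(200, 64))
      patches -= patches.mean(axis=1, keepdims=True)

      # Learn a sparse dictionary and encode the patches (codes can feed a detector/segmenter).
      dico = MiniBatchDictionaryLearning(n_components=16, alpha=1.0, random_state=0)
      codes = dico.fit(patches).transform(patches)

      # Classification by color distance: compare a cell's mean RGB to a learned set of class colors.
      class_colors = {"neuron": np.array([0.8, 0.2, 0.2]),
                      "glia":   np.array([0.2, 0.7, 0.3])}   # illustrative reference colors

      def classify(cell_rgb):
          return min(class_colors, key=lambda k: np.linalg.norm(cell_rgb - class_colors[k]))

      print(codes.shape, classify(np.array([0.75, 0.25, 0.2])))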

  14. Short RNA indicator sequences are not completely degraded by autoclaving

    PubMed Central

    Unnithan, Veena V.; Unc, Adrian; Joe, Valerisa; Smith, Geoffrey B.

    2014-01-01

    Short indicator RNA sequences (<100 bp) persist after autoclaving and are recovered intact by molecular amplification. Primers targeting longer sequences are most likely to produce false positives due to amplification errors, which are easily verified by melting curve analyses. If short indicator RNA sequences are used for virus identification and quantification, then a post-autoclave RNA degradation methodology should be employed, which may include further autoclaving. PMID:24518856

  15. Error Quantification and Confidence Assessment of Aerothermal Model Predictions for Hypersonic Aircraft (Preprint)

    DTIC Science & Technology

    2013-09-01

    A confidence metric is used to compare several different aerothermal model predictions with the experimental data; a 5% measurement uncertainty is assumed for the aerodynamic pressure and heat-flux measurements, and Bayesian updating is applied to assess the confidence associated with the model predictions.

  16. Sequencing small genomic targets with high efficiency and extreme accuracy

    PubMed Central

    Schmitt, Michael W.; Fox, Edward J.; Prindle, Marc J.; Reid-Bayliss, Kate S.; True, Lawrence D.; Radich, Jerald P.; Loeb, Lawrence A.

    2015-01-01

    The detection of minority variants in mixed samples demands methods for enrichment and accurate sequencing of small genomic intervals. We describe an efficient approach based on sequential rounds of hybridization with biotinylated oligonucleotides, enabling more than one-million fold enrichment of genomic regions of interest. In conjunction with error correcting double-stranded molecular tags, our approach enables the quantification of mutations in individual DNA molecules. PMID:25849638

  17. Multiple Velocity Profile Measurements in Hypersonic Flows using Sequentially-Imaged Fluorescence Tagging

    NASA Technical Reports Server (NTRS)

    Bathel, Brett F.; Danehy, Paul M.; Inmian, Jennifer A.; Jones, Stephen B.; Ivey, Christopher B.; Goyne, Christopher P.

    2010-01-01

    Nitric-oxide planar laser-induced fluorescence (NO PLIF) was used to perform velocity measurements in hypersonic flows by generating multiple tagged lines which fluoresce as they convect downstream. For each laser pulse, a single interline, progressive scan intensified CCD camera was used to obtain separate images of the initial undelayed and delayed NO molecules that had been tagged by the laser. The CCD configuration allowed for sub-microsecond acquisition of both images, resulting in sub-microsecond temporal resolution as well as sub-mm spatial resolution (0.5-mm x 0.7-mm). Axial velocity was determined by applying a cross-correlation analysis to the horizontal shift of individual tagged lines. Systematic errors, the contribution of gating/exposure-duration errors, and the influence of collision rate on fluorescence were quantified with respect to the temporal uncertainty. Quantification of the spatial uncertainty depended upon the analysis technique and signal-to-noise of the acquired profiles. This investigation focused on two hypersonic flow experiments: (1) a reaction control system (RCS) jet on an Orion Crew Exploration Vehicle (CEV) wind tunnel model and (2) a 10-degree half-angle wedge containing a 2-mm tall, 4-mm wide cylindrical boundary layer trip. The experiments were performed at the NASA Langley Research Center's 31-inch Mach 10 wind tunnel.
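
    The cross-correlation step used to extract axial velocity from the shift of a tagged line between the undelayed and delayed images can be sketched as below. The Gaussian line profiles, the pixel size, and the inter-image delay are invented for illustration; the study's actual calibration and uncertainty analysis are more involved.

      import numpy as np

      # Illustrative 1-D intensity profiles across a tagged line in the two images.
      x = np.arange(200)
      undelayed = np.exp(-0.5 * ((x - 80) / 3.0) ** 2)
      delayed = np.exp(-0.5 * ((x - 92) / 3.0) ** 2)   # line has convected 12 pixels downstream

      # Cross-correlate to find the pixel shift that best aligns the two profiles.
      corr = np.correlate(delayed - delayed.mean(), undelayed - undelayed.mean(), mode="full")
      shift_px = corr.argmax() - (len(undelayed) - 1)

      pixel_size_mm = 0.05   # assumed spatial calibration, mm per pixel
      delay_us = 1.0         # assumed delay between the two exposures, microseconds
      velocity_m_per_s = shift_px * pixel_size_mm * 1e-3 / (delay_us * 1e-6)
      print(shift_px, velocity_m_per_s)   # 12 px -> 600 m/s with these assumed numbers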

  18. An experimental design for quantification of cardiovascular responses to music stimuli in humans.

    PubMed

    Chang, S-H; Luo, C-H; Yeh, T-L

    2004-01-01

    There have been several studies on the relationship between music and human physiological or psychological responses. However, there are cardiovascular index factors that have not been explored quantitatively, due to the qualitative nature of acoustic stimuli. This study proposes and demonstrates an experimental design for quantification of cardiovascular responses to music stimuli in humans. The system comprises two components: a unit for generating and monitoring quantitative acoustic stimuli and a portable autonomic nervous system (ANS) analysis unit for quantitative recording and analysis of the cardiovascular responses. The experimental results indicate that the proposed system achieves full control and measurement of the music stimuli and effectively supports many quantitative indices of cardiovascular response in humans. In addition, the analysis results are discussed with a view to future clinical research.

  19. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  20. Validated Method for the Quantification of Baclofen in Human Plasma Using Solid-Phase Extraction and Liquid Chromatography–Tandem Mass Spectrometry

    PubMed Central

    Nahar, Limon Khatun; Cordero, Rosa Elena; Nutt, David; Lingford-Hughes, Anne; Turton, Samuel; Durant, Claire; Wilson, Sue; Paterson, Sue

    2016-01-01

    Abstract A highly sensitive and fully validated method was developed for the quantification of baclofen in human plasma. After adjusting the pH of the plasma samples using a phosphate buffer solution (pH 4), baclofen was purified using mixed mode (C8/cation exchange) solid-phase extraction (SPE) cartridges. Endogenous water-soluble compounds and lipids were removed from the cartridges before the samples were eluted and concentrated. The samples were analyzed using triple-quadrupole liquid chromatography–tandem mass spectrometry (LC–MS-MS) with triggered dynamic multiple reaction monitoring mode for simultaneous quantification and confirmation. The assay was linear from 25 to 1,000 ng/mL (r2 > 0.999; n = 6). Intraday (n = 6) and interday (n = 15) imprecisions (% relative standard deviation) were <5%, and the average recovery was 30%. The limit of detection of the method was 5 ng/mL, and the limit of quantification was 25 ng/mL. Plasma samples from healthy male volunteers (n = 9, median age: 22) given two single oral doses of baclofen (10 and 60 mg) on nonconsecutive days were analyzed to demonstrate method applicability. PMID:26538544

  1. Application of Targeted Mass Spectrometry for the Quantification of Sirtuins in the Central Nervous System

    NASA Astrophysics Data System (ADS)

    Jayasena, T.; Poljak, A.; Braidy, N.; Zhong, L.; Rowlands, B.; Muenchhoff, J.; Grant, R.; Smythe, G.; Teo, C.; Raftery, M.; Sachdev, P.

    2016-10-01

    Sirtuin proteins have a variety of intracellular targets, thereby regulating multiple biological pathways including neurodegeneration. However, relatively little is currently known about the role or expression of the 7 mammalian sirtuins in the central nervous system. Western blotting, PCR and ELISA are the main techniques currently used to measure sirtuin levels. To achieve sufficient sensitivity and selectivity in a multiplex-format, a targeted mass spectrometric assay was developed and validated for the quantification of all seven mammalian sirtuins (SIRT1-7). Quantification of all peptides was by multiple reaction monitoring (MRM) using three mass transitions per protein-specific peptide, two specific peptides for each sirtuin and a stable isotope labelled internal standard. The assay was applied to a variety of samples including cultured brain cells, mammalian brain tissue, CSF and plasma. All sirtuin peptides were detected in the human brain, with SIRT2 being the most abundant. Sirtuins were also detected in human CSF and plasma, and guinea pig and mouse tissues. In conclusion, we have successfully applied MRM mass spectrometry for the detection and quantification of sirtuin proteins in the central nervous system, paving the way for more quantitative and functional studies.
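
    Quantification against a stable-isotope-labelled internal standard, as used in this assay, reduces to forming light/heavy peak-area ratios for each transition and scaling by the spiked amount. The Python sketch below illustrates that calculation with invented peak areas and an assumed spike amount; the published assay applies its own calibration and quality criteria.

      import numpy as np

      # Illustrative MRM peak areas for one SIRT2-specific peptide: three transitions each for
      # the endogenous ("light") peptide and the spiked stable-isotope-labelled ("heavy") standard.
      light_areas = np.array([12400.0, 9800.0, 7600.0])
      heavy_areas = np.array([25100.0, 19600.0, 15400.0])
      heavy_spiked_fmol = 50.0   # assumed amount of labelled standard added to the digest

      # Light/heavy ratio per transition, then averaged; amount = ratio x spiked standard amount.
      ratios = light_areas / heavy_areas
      peptide_fmol = ratios.mean() * heavy_spiked_fmol
      print(ratios, peptide_fmol)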

  2. Sensitivity analysis of brain morphometry based on MRI-derived surface models

    NASA Astrophysics Data System (ADS)

    Klein, Gregory J.; Teng, Xia; Schoenemann, P. T.; Budinger, Thomas F.

    1998-07-01

    Quantification of brain structure is important for evaluating changes in brain size with growth and aging and for characterizing neurodegenerative disorders. Previous quantification efforts using ex vivo techniques suffered considerable error due to shrinkage of the cerebrum after extraction from the skull, deformation of slices during sectioning, and numerous other factors. In vivo imaging studies of brain anatomy avoid these problems and allow repetitive studies following progression of brain structure changes due to disease or natural processes. We have developed a methodology for obtaining triangular mesh models of the cortical surface from MRI brain datasets. The cortex is segmented from nonbrain tissue using a 2D region-growing technique combined with occasional manual edits. Once segmented, thresholding and image morphological operations (erosions and openings) are used to expose the regions between adjacent surfaces in deep cortical folds. A 2D region-following procedure is then used to find a set of contours outlining the cortical boundary on each slice. The contours on all slices are tiled together to form a closed triangular mesh model approximating the cortical surface. This model can be used for calculation of cortical surface area and volume, as well as other parameters of interest. Except for the initial segmentation of the cortex from the skull, the technique is automatic and requires only modest computation time on modern workstations. Though the use of image data avoids many of the pitfalls of ex vivo and sectioning techniques, our MRI-based technique is still vulnerable to errors that may impact the accuracy of estimated brain structure parameters. Potential inaccuracies include segmentation errors due to incorrect thresholding, missed deep sulcal surfaces, falsely segmented holes due to image noise and surface tiling artifacts. The focus of this paper is the characterization of these errors and how they affect measurements of cortical surface area and volume.
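
    Once the tiled triangular mesh is available, cortical surface area and enclosed volume follow from simple sums over the triangles. The Python sketch below demonstrates both calculations on a toy closed mesh (a tetrahedron) standing in for the cortical surface; a real mesh from the contour-tiling step would simply replace the vertex and face arrays.

      import numpy as np

      # Toy closed triangular mesh (a tetrahedron) standing in for a tiled cortical surface.
      verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
      faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])  # consistent outward winding

      v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]

      # Surface area: sum of triangle areas, each half the norm of the edge cross-product.
      area = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()

      # Enclosed volume via the divergence theorem: sum of signed tetrahedron volumes to the origin.
      volume = np.abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0

      print(area, volume)   # ~2.366 and ~0.1667 for this toy mesh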

  3. Analysis of measured data of human body based on error correcting frequency

    NASA Astrophysics Data System (ADS)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the development and operation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analysed by examining the error frequency and by applying the analysis-of-variance method from mathematical statistics. The paper also addresses the determination of the accuracy of the measured data, the difficulty of measuring particular parts of the human body, further study of the causes of data errors, and a summary of the key points for minimizing errors as far as possible. By analysing the measured data on the basis of error frequency, the paper provides reference material that can support the development of the garment industry.

  4. StakeMeter: value-based stakeholder identification and quantification framework for value-based software systems.

    PubMed

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N A; Bin Zaheer, Kashif

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called 'StakeMeter'. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error.

  5. Uncertainty Quantification of CFD Data Generated for a Model Scramjet Isolator Flowfield

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.; Axdahl, E. L.

    2017-01-01

    Computational fluid dynamics is now considered to be an indispensable tool for the design and development of scramjet engine components. Unfortunately, the quantification of uncertainties is rarely addressed with anything other than sensitivity studies, so the degree of confidence associated with the numerical results remains exclusively with the subject matter expert that generated them. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Given the limitations of current hypersonic ground test facilities, this expanded role is believed to be a requirement by some in the hypersonics community if scramjet engines are to be given serious consideration as a viable propulsion system. The present effort describes a simple, relatively low cost, nonintrusive approach to uncertainty quantification that includes the basic ingredients required to handle both aleatoric (random) and epistemic (lack of knowledge) sources of uncertainty. The nonintrusive nature of the approach allows the computational fluid dynamicist to perform the uncertainty quantification with the flow solver treated as a "black box". Moreover, a large fraction of the process can be automated, allowing the uncertainty assessment to be readily adapted into the engineering design and development workflow. In the present work, the approach is applied to a model scramjet isolator problem where the desire is to validate turbulence closure models in the presence of uncertainty. In this context, the relevant uncertainty sources are determined and accounted for to allow the analyst to delineate turbulence model-form errors from other sources of uncertainty associated with the simulation of the facility flow.
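
    A minimal, nonintrusive sampling loop of the kind alluded to above treats the flow solver as a black box: draw samples of the uncertain inputs, run the solver for each, and summarize the spread of the quantity of interest. In the Python sketch below the "solver" is a stand-in algebraic function, and the input distributions and intervals are assumptions for illustration only.

      import numpy as np

      rng = np.random.default_rng(1)

      def black_box_solver(mach, wall_temp_k, turb_constant):
          """Stand-in for the CFD solver; returns a scalar quantity of interest."""
          return 0.4 * mach**2 + 0.001 * wall_temp_k + 2.0 * turb_constant

      n_samples = 500
      # Aleatoric (random) inflow/boundary uncertainties sampled from assumed distributions...
      mach = rng.normal(2.5, 0.05, n_samples)
      wall_temp = rng.uniform(290.0, 310.0, n_samples)
      # ...and an epistemic turbulence-model constant swept over an assumed interval.
      turb_const = rng.uniform(0.07, 0.11, n_samples)

      qoi = np.array([black_box_solver(m, t, c) for m, t, c in zip(mach, wall_temp, turb_const)])
      print(qoi.mean(), qoi.std(), np.percentile(qoi, [2.5, 97.5]))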

  6. Ultra-high-performance liquid chromatography-tandem mass spectrometry measurement of climbazole deposition from hair care products onto artificial skin and human scalp.

    PubMed

    Chen, Guoqiang; Hoptroff, Michael; Fei, Xiaoqing; Su, Ya; Janssen, Hans-Gerd

    2013-11-22

    A sensitive and specific ultra-high-performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for the measurement of climbazole deposition from hair care products onto artificial skin and human scalp. Deuterated climbazole was used as the internal standard. Atmospheric pressure chemical ionization (APCI) in positive mode was applied for the detection of climbazole. For quantification, multiple reaction monitoring (MRM) transition 293.0>69.0 was monitored for climbazole, and MRM transition 296.0>225.1 for the deuterated climbazole. The linear range ran from 4 to 2000 ng mL(-1). The limit of detection (LOD) and the limit of quantification (LOQ) were 1 ng mL(-1) and 4 ng mL(-1), respectively, which enabled quantification of climbazole on artificial skin and human scalp at ppb level (corresponding to 16 ng cm(-2)). For the sampling of climbazole from human scalp the buffer scrub method using a surfactant-modified phosphate buffered saline (PBS) solution was selected based on a performance comparison of tape stripping, the buffer scrub method and solvent extraction in in vitro studies. Using this method, climbazole deposition in in vitro and in vivo studies was successfully quantified. Copyright © 2013 Elsevier B.V. All rights reserved.

  7. Quantification of nerve agent VX-butyrylcholinesterase adduct biomarker from an accidental exposure.

    PubMed

    Solano, Maria I; Thomas, Jerry D; Taylor, James T; McGuire, Jeffrey M; Jakubowski, Edward M; Thomson, Sandra A; Maggio, Vincent L; Holland, Kerry E; Smith, J Richard; Capacio, Benedict; Woolfitt, Adrian R; Ashley, David L; Barr, John R

    2008-01-01

    The lack of data in the open literature on human exposure to the nerve agent O-ethyl-S-(2-diisopropylaminoethyl) methylphosphonothioate (VX) gives a special relevance to the data presented in this study in which we report the quantification of VX-butyrylcholinesterase adduct from a relatively low-level accidental human exposure. The samples were analyzed by gas chromatography-high resolution mass spectrometry using the fluoride ion regeneration method for the quantification of multiple nerve agents including VX. Six human plasma samples from the same individual were collected after the patient had been treated once with oxime immediately after exhibiting signs of exposure. Detection limits of approximately 5.5 pg/mL plasma were achieved for the G-analogue of VX (G-VX). Levels of the G-VX ranged from 81.4 pg/mL on the first day after the exposure to 6.9 pg/mL in the sample taken 27 days after the exposure. Based on the reported concentration of human butyrylcholinesterase in plasma of approximately 80 nM, it can be calculated that inhibition levels of ≥0.05% of BuChE can be accurately quantified. These data further indicate that the fluoride ion regeneration method is a potentially powerful tool that can be used to assess low-level exposure to VX.
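
    A back-of-the-envelope check of the quoted inhibition level is straightforward: convert the measured G-analogue concentration from pg/mL to nM and divide by the plasma BuChE concentration. In the Python sketch below, the molar mass of roughly 126 g/mol for the regenerated G-analogue is an assumption for illustration; with it, the 5.5 pg/mL detection limit corresponds to about 0.05% of an 80 nM BuChE pool, consistent with the abstract.

      # Rough check of the inhibition figure quoted above. The molar mass of the regenerated
      # G-analogue (~126 g/mol) is an assumed, approximate value used only for illustration.
      MW_G_VX = 126.0   # g/mol, approximate
      BUCHE_NM = 80.0   # nM, reported plasma butyrylcholinesterase concentration

      def inhibition_pct(g_vx_pg_per_ml):
          adduct_nM = g_vx_pg_per_ml * 1e-12 / MW_G_VX * 1e3 * 1e9  # pg/mL -> g/L -> mol/L -> nM
          return adduct_nM / BUCHE_NM * 100

      for level in (5.5, 6.9, 81.4):   # detection limit and the last/first patient samples (pg/mL)
          print(level, "pg/mL ->", round(inhibition_pct(level), 3), "% of BuChE")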

  8. Visual Field Defects and Retinal Ganglion Cell Losses in Human Glaucoma Patients

    PubMed Central

    Harwerth, Ronald S.; Quigley, Harry A.

    2007-01-01

    Objective: The depths of visual field defects are correlated with retinal ganglion cell densities in experimental glaucoma. This study was designed to determine whether a similar structure-function relationship holds for human glaucoma. Methods: The study was based on retinal ganglion cell densities and visual thresholds of patients with documented glaucoma (Kerrigan-Baumrind et al.). The data were analyzed by a model that predicted ganglion cell densities from standard clinical perimetry, which were then compared to histologic cell counts. Results: The model, without free parameters, produced accurate and relatively precise quantification of ganglion cell densities associated with visual field defects. For 437 sets of data, the unity correlation for predicted vs. measured cell densities had a coefficient of determination of 0.39. The mean absolute deviation of the predicted vs. measured values was 2.59 dB, and the mean and SD of the distribution of residual errors of prediction were -0.26 ± 3.22 dB. Conclusions: Visual field defects by standard clinical perimetry are proportional to neural losses caused by glaucoma. Clinical Relevance: The evidence for quantitative structure-function relationships provides a scientific basis for interpreting glaucomatous neuropathy from visual thresholds and supports the application of standard perimetry to establish the stage of the disease. PMID:16769839

  9. Rapid, automated, parallel quantitative immunoassays using highly integrated microfluidics and AlphaLISA

    PubMed Central

    Tak For Yu, Zeta; Guan, Huijiao; Ki Cheung, Mei; McHugh, Walker M.; Cornell, Timothy T.; Shanley, Thomas P.; Kurabayashi, Katsuo; Fu, Jianping

    2015-01-01

    Immunoassays represent one of the most popular analytical methods for detection and quantification of biomolecules. However, conventional immunoassays such as ELISA and flow cytometry, even though providing high sensitivity and specificity and multiplexing capability, can be labor-intensive and prone to human error, making them unsuitable for standardized clinical diagnoses. Using a commercialized no-wash, homogeneous immunoassay technology (‘AlphaLISA’) in conjunction with integrated microfluidics, herein we developed a microfluidic immunoassay chip capable of rapid, automated, parallel immunoassays of microliter quantities of samples. Operation of the microfluidic immunoassay chip entailed rapid mixing and conjugation of AlphaLISA components with target analytes before quantitative imaging for analyte detections in up to eight samples simultaneously. Aspects such as fluid handling and operation, surface passivation, imaging uniformity, and detection sensitivity of the microfluidic immunoassay chip using AlphaLISA were investigated. The microfluidic immunoassay chip could detect one target analyte simultaneously for up to eight samples in 45 min with a limit of detection down to 10 pg mL−1. The microfluidic immunoassay chip was further utilized for functional immunophenotyping to examine cytokine secretion from human immune cells stimulated ex vivo. Together, the microfluidic immunoassay chip provides a promising high-throughput, high-content platform for rapid, automated, parallel quantitative immunosensing applications. PMID:26074253

  10. Tailoring a Human Reliability Analysis to Your Industry Needs

    NASA Technical Reports Server (NTRS)

    DeMott, D. L.

    2016-01-01

    Companies at risk of accidents caused by human error that result in catastrophic consequences include: airline industry mishaps, medical malpractice, medication mistakes, aerospace failures, major oil spills, transportation mishaps, power production failures and manufacturing facility incidents. Human Reliability Assessment (HRA) is used to analyze the inherent risk of human behavior or actions introducing errors into the operation of a system or process. These assessments can be used to identify where errors are most likely to arise and the potential risks involved if they do occur. Using the basic concepts of HRA, an evolving group of methodologies are used to meet various industry needs. Determining which methodology or combination of techniques will provide a quality human reliability assessment is a key element to developing effective strategies for understanding and dealing with risks caused by human errors. There are a number of concerns and difficulties in "tailoring" a Human Reliability Assessment (HRA) for different industries. Although a variety of HRA methodologies are available to analyze human error events, determining the most appropriate tools to provide the most useful results can depend on industry specific cultures and requirements. Methodology selection may be based on a variety of factors that include: 1) how people act and react in different industries, 2) expectations based on industry standards, 3) factors that influence how the human errors could occur such as tasks, tools, environment, workplace, support, training and procedure, 4) type and availability of data, 5) how the industry views risk & reliability, and 6) types of emergencies, contingencies and routine tasks. Other considerations for methodology selection should be based on what information is needed from the assessment. If the principal concern is determination of the primary risk factors contributing to the potential human error, a more detailed analysis method may be employed versus a requirement to provide a numerical value as part of a probabilistic risk assessment. Industries involved with humans operating large equipment or transport systems (ex. railroads or airlines) would have more need to address the man machine interface than medical workers administering medications. Human error occurs in every industry; in most cases the consequences are relatively benign and occasionally beneficial. In cases where the results can have disastrous consequences, the use of Human Reliability techniques to identify and classify the risk of human errors allows a company more opportunities to mitigate or eliminate these types of risks and prevent costly tragedies.

  11. Evaluation of genomic high-throughput sequencing data generated on Illumina HiSeq and Genome Analyzer systems

    PubMed Central

    2011-01-01

    Background The generation and analysis of high-throughput sequencing data are becoming a major component of many studies in molecular biology and medical research. Illumina's Genome Analyzer (GA) and HiSeq instruments are currently the most widely used sequencing devices. Here, we comprehensively evaluate properties of genomic HiSeq and GAIIx data derived from two plant genomes and one virus, with read lengths of 95 to 150 bases. Results We provide quantifications and evidence for GC bias, error rates, error sequence context, effects of quality filtering, and the reliability of quality values. By combining different filtering criteria we reduced error rates 7-fold at the expense of discarding 12.5% of alignable bases. While overall error rates are low in HiSeq data we observed regions of accumulated wrong base calls. Only 3% of all error positions accounted for 24.7% of all substitution errors. Analyzing the forward and reverse strands separately revealed error rates of up to 18.7%. Insertions and deletions occurred at very low rates on average but increased to up to 2% in homopolymers. A positive correlation between read coverage and GC content was found depending on the GC content range. Conclusions The errors and biases we report have implications for the use and the interpretation of Illumina sequencing data. GAIIx and HiSeq data sets show slightly different error profiles. Quality filtering is essential to minimize downstream analysis artifacts. Supporting previous recommendations, the strand-specificity provides a criterion to distinguish sequencing errors from low abundance polymorphisms. PMID:22067484

  12. Rapid quantification and sex determination of forensic evidence materials.

    PubMed

    Andréasson, Hanna; Allen, Marie

    2003-11-01

    DNA quantification of forensic evidence is very valuable for an optimal use of the available biological material. Moreover, sex determination is of great importance as additional information in criminal investigations as well as in identification of missing persons, no suspect cases, and ancient DNA studies. While routine forensic DNA analysis based on short tandem repeat markers includes a marker for sex determination, analysis of samples containing scarce amounts of DNA is often based on mitochondrial DNA, and sex determination is not performed. In order to allow quantification and simultaneous sex determination on minute amounts of DNA, an assay based on real-time PCR analysis of a marker within the human amelogenin gene has been developed. The sex determination is based on melting curve analysis, while an externally standardized kinetic analysis allows quantification of the nuclear DNA copy number in the sample. This real-time DNA quantification assay has proven to be highly sensitive, enabling quantification of single DNA copies. Although certain limitations were apparent, the system is a rapid, cost-effective, and flexible assay for analysis of forensic casework samples.
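
    The externally standardized kinetic analysis mentioned above is, in essence, a standard-curve calculation: Ct values of external standards with known copy numbers are regressed against log copy number, and unknowns are back-calculated from their Ct. The Python sketch below illustrates this with invented standards and Ct values; the assay's real standards, chemistry, and efficiency differ.

      import numpy as np

      # Illustrative external standards: known nuclear DNA copy numbers and measured Ct values.
      std_copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
      std_ct = np.array([35.1, 31.7, 28.4, 25.0, 21.6])

      # Ct is (approximately) linear in log10(copies): Ct = slope * log10(N) + intercept.
      slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
      efficiency = 10 ** (-1 / slope) - 1   # ~1.0 means perfect doubling per cycle

      def copies_from_ct(ct):
          return 10 ** ((ct - intercept) / slope)

      print(round(efficiency, 2), copies_from_ct(33.0))   # e.g. a low-copy forensic sample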

  13. Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid

    NASA Technical Reports Server (NTRS)

    VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)

    1997-01-01

    The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht races. The analysis demonstrates how human-system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).

  14. PET Quantification of the Norepinephrine Transporter in Human Brain with (S,S)-18F-FMeNER-D2.

    PubMed

    Moriguchi, Sho; Kimura, Yasuyuki; Ichise, Masanori; Arakawa, Ryosuke; Takano, Harumasa; Seki, Chie; Ikoma, Yoko; Takahata, Keisuke; Nagashima, Tomohisa; Yamada, Makiko; Mimura, Masaru; Suhara, Tetsuya

    2017-07-01

    Norepinephrine transporter (NET) in the brain plays important roles in human cognition and the pathophysiology of psychiatric disorders. Two radioligands, (S,S)-11C-MRB and (S,S)-18F-FMeNER-D2, have been used for imaging NETs in the thalamus and midbrain (including locus coeruleus) using PET in humans. However, NET density in the equally important cerebral cortex has not been well quantified because of unfavorable kinetics with (S,S)-11C-MRB and defluorination with (S,S)-18F-FMeNER-D2, which can complicate NET quantification in the cerebral cortex adjacent to the skull containing defluorinated 18F radioactivity. In this study, we have established analysis methods for quantification of NET density in the brain including the cerebral cortex using (S,S)-18F-FMeNER-D2 PET. Methods: We analyzed our previous (S,S)-18F-FMeNER-D2 PET data of 10 healthy volunteers dynamically acquired for 240 min with arterial blood sampling. The effects of defluorination on the NET quantification in the superficial cerebral cortex were evaluated by establishing the time stability of NET density estimations with an arterial input 2-tissue-compartment model, which guided the less-invasive reference tissue model and area under the time-activity curve methods to accurately quantify NET density in all brain regions including the cerebral cortex. Results: Defluorination of (S,S)-18F-FMeNER-D2 became prominent toward the latter half of the 240-min scan. Total distribution volumes in the superficial cerebral cortex increased with the scan duration beyond 120 min. We verified that 90-min dynamic scans provided a sufficient amount of data for quantification of NET density unaffected by defluorination. Reference tissue model binding potential values from the 90-min scan data and area under the time-activity curve ratios of 70- to 90-min data allowed for the accurate quantification of NET density in the cerebral cortex. Conclusion: We have established methods of quantification of NET densities in the brain including the cerebral cortex unaffected by defluorination using (S,S)-18F-FMeNER-D2. These results suggest that we can accurately quantify NET density with a 90-min (S,S)-18F-FMeNER-D2 scan in broad brain areas. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
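
    The area-under-the-curve ratio mentioned above (70- to 90-min target-to-reference ratio) is simple to compute once the time-activity curves are in hand. The Python sketch below uses invented time-activity curves and an assumed NET-poor reference region purely to illustrate the arithmetic; it is not the authors' kinetic analysis, and the subtraction of 1 is only a rough reference-tissue-style binding index.

      import numpy as np

      # Illustrative time-activity curves (kBq/mL) over a 70-90 min window, sampled every 5 min.
      t_min = np.array([70, 75, 80, 85, 90], dtype=float)
      cortex_tac = np.array([4.1, 4.0, 3.8, 3.7, 3.6])      # target region
      reference_tac = np.array([3.0, 2.9, 2.8, 2.7, 2.6])   # assumed NET-poor reference region

      # Area-under-the-curve ratio over 70-90 min, and a simple equilibrium-style binding index.
      auc_ratio = np.trapz(cortex_tac, t_min) / np.trapz(reference_tac, t_min)
      bp_estimate = auc_ratio - 1.0   # rough analogue of a reference-tissue binding potential
      print(round(auc_ratio, 3), round(bp_estimate, 3))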

  15. Effects of RNA integrity on transcript quantification by total RNA sequencing of clinically collected human placental samples.

    PubMed

    Reiman, Mario; Laan, Maris; Rull, Kristiina; Sõber, Siim

    2017-08-01

    RNA degradation is a ubiquitous process that occurs in living and dead cells, as well as during handling and storage of extracted RNA. Reduced RNA quality caused by degradation is an established source of uncertainty for all RNA-based gene expression quantification techniques. RNA sequencing is an increasingly preferred method for transcriptome analyses, and dependence of its results on input RNA integrity is of significant practical importance. This study aimed to characterize the effects of varying input RNA integrity [estimated as RNA integrity number (RIN)] on transcript level estimates and delineate the characteristic differences between transcripts that differ in degradation rate. The study used ribodepleted total RNA sequencing data from a real-life clinically collected set ( n = 32) of human solid tissue (placenta) samples. RIN-dependent alterations in gene expression profiles were quantified by using DESeq2 software. Our results indicate that small differences in RNA integrity affect gene expression quantification by introducing a moderate and pervasive bias in expression level estimates that significantly affected 8.1% of studied genes. The rapidly degrading transcript pool was enriched in pseudogenes, short noncoding RNAs, and transcripts with extended 3' untranslated regions. Typical slowly degrading transcripts (median length, 2389 nt) represented protein coding genes with 4-10 exons and high guanine-cytosine content.-Reiman, M., Laan, M., Rull, K., Sõber, S. Effects of RNA integrity on transcript quantification by total RNA sequencing of clinically collected human placental samples. © FASEB.

  16. Quantitative photoacoustic microscopy of optical absorption coefficients from acoustic spectra in the optical diffusive regime

    NASA Astrophysics Data System (ADS)

    Guo, Zijian; Favazza, Christopher; Garcia-Uribe, Alejandro; Wang, Lihong V.

    2012-06-01

    Photoacoustic (PA) microscopy (PAM) can image optical absorption contrast with ultrasonic spatial resolution in the optical diffusive regime. Conventionally, accurate quantification in PAM requires knowledge of the optical fluence attenuation, acoustic pressure attenuation, and detection bandwidth. We circumvent this requirement by quantifying the optical absorption coefficients from the acoustic spectra of PA signals acquired at multiple optical wavelengths. With the acoustic spectral method, the absorption coefficients of an oxygenated bovine blood phantom at 560, 565, 570, and 575 nm were quantified with errors of <3%. We also quantified the total hemoglobin concentration and hemoglobin oxygen saturation in a live mouse. Compared with the conventional amplitude method, the acoustic spectral method provides greater quantification accuracy in the optical diffusive regime. The limitations of the acoustic spectral method were also discussed.

  17. Quantitative photoacoustic microscopy of optical absorption coefficients from acoustic spectra in the optical diffusive regime

    PubMed Central

    Guo, Zijian; Favazza, Christopher; Garcia-Uribe, Alejandro

    2012-01-01

    Photoacoustic (PA) microscopy (PAM) can image optical absorption contrast with ultrasonic spatial resolution in the optical diffusive regime. Conventionally, accurate quantification in PAM requires knowledge of the optical fluence attenuation, acoustic pressure attenuation, and detection bandwidth. We circumvent this requirement by quantifying the optical absorption coefficients from the acoustic spectra of PA signals acquired at multiple optical wavelengths. With the acoustic spectral method, the absorption coefficients of an oxygenated bovine blood phantom at 560, 565, 570, and 575 nm were quantified with errors of <3%. We also quantified the total hemoglobin concentration and hemoglobin oxygen saturation in a live mouse. Compared with the conventional amplitude method, the acoustic spectral method provides greater quantification accuracy in the optical diffusive regime. The limitations of the acoustic spectral method were also discussed. PMID:22734767

  18. Quantitative photoacoustic microscopy of optical absorption coefficients from acoustic spectra in the optical diffusive regime.

    PubMed

    Guo, Zijian; Favazza, Christopher; Garcia-Uribe, Alejandro; Wang, Lihong V

    2012-06-01

    Photoacoustic (PA) microscopy (PAM) can image optical absorption contrast with ultrasonic spatial resolution in the optical diffusive regime. Conventionally, accurate quantification in PAM requires knowledge of the optical fluence attenuation, acoustic pressure attenuation, and detection bandwidth. We circumvent this requirement by quantifying the optical absorption coefficients from the acoustic spectra of PA signals acquired at multiple optical wavelengths. With the acoustic spectral method, the absorption coefficients of an oxygenated bovine blood phantom at 560, 565, 570, and 575 nm were quantified with errors of <3%. We also quantified the total hemoglobin concentration and hemoglobin oxygen saturation in a live mouse. Compared with the conventional amplitude method, the acoustic spectral method provides greater quantification accuracy in the optical diffusive regime. The limitations of the acoustic spectral method were also discussed.

  19. Assessment and quantification of patient set-up errors in nasopharyngeal cancer patients and their biological and dosimetric impact in terms of generalized equivalent uniform dose (gEUD), tumour control probability (TCP) and normal tissue complication probability (NTCP)

    PubMed Central

    Marcie, S; Fellah, M; Chami, S; Mekki, F

    2015-01-01

    Objective: The aim of this study is to assess and quantify patients' set-up errors using an electronic portal imaging device and to evaluate their dosimetric and biological impact in terms of generalized equivalent uniform dose (gEUD) on predictive models, such as the tumour control probability (TCP) and the normal tissue complication probability (NTCP). Methods: 20 patients treated for nasopharyngeal cancer were enrolled in the radiotherapy–oncology department of HCA. Systematic and random errors were quantified. The dosimetric and biological impact of these set-up errors on the target volume and the organs at risk (OARs) coverage were assessed using calculation of dose–volume histograms, gEUD, TCP and NTCP. For this purpose, in-house software was developed and used. Results: The standard deviations (1 SD) of the systematic set-up and random set-up errors were calculated for the lateral and subclavicular fields and gave the following results: Σ = 0.63 ± (0.42) mm and σ = 3.75 ± (0.79) mm, respectively. Thus a planning organ at risk volume (PRV) margin of 3 mm was defined around the OARs, and a 5-mm margin was used around the clinical target volume. The gEUD, TCP and NTCP calculations obtained with and without set-up errors showed increased values for the tumour, where ΔgEUD (tumour) = 1.94% Gy (p = 0.00721) and ΔTCP = 2.03%. The toxicity of OARs was quantified using gEUD and NTCP. The values of ΔgEUD (OARs) vary from 0.78% to 5.95% in the case of the brainstem and the optic chiasm, respectively. The corresponding ΔNTCP varies from 0.15% to 0.53%, respectively. Conclusion: The quantification of set-up errors has a dosimetric and biological impact on the tumour and on the OARs. The developed in-house software using the concept of gEUD, TCP and NTCP biological models has been successfully used in this study. It can also be used to optimize the treatment plan established for our patients. Advances in knowledge: The gEUD, TCP and NTCP may be more suitable tools to assess the treatment plans before treating the patients. PMID:25882689
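
    The gEUD values reported above are conventionally computed from a differential dose-volume histogram as gEUD = (sum_i v_i * D_i^a)^(1/a), where v_i are fractional volumes and a is a tissue-specific parameter (negative for tumours, large and positive for serial organs at risk). A minimal sketch of that textbook formula follows; it is not the authors' in-house software, and the dose bins and a values are illustrative.

    ```python
    import numpy as np

    def geud(dose_bins_gy, frac_volumes, a):
        """Generalized equivalent uniform dose from a differential DVH.
        dose_bins_gy: bin doses (Gy); frac_volumes: fractional volumes summing to 1."""
        v = np.asarray(frac_volumes, dtype=float)
        v = v / v.sum()  # renormalize in case of rounding
        return float(np.sum(v * np.asarray(dose_bins_gy, dtype=float) ** a)) ** (1.0 / a)

    # Illustrative DVH bins and volumes
    dose = np.array([66.0, 68.0, 70.0, 72.0])
    vol = np.array([0.05, 0.15, 0.60, 0.20])
    print(geud(dose, vol, a=-10))  # tumour-like behaviour (dominated by cold spots)
    print(geud(dose, vol, a=8))    # serial-OAR-like behaviour (dominated by hot spots)
    ```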

  20. Exploring human error in military aviation flight safety events using post-incident classification systems.

    PubMed

    Hooper, Brionny J; O'Hare, David P A

    2013-08-01

    Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error. The CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice the proportion of rotary-wing (44%) compared with fixed-wing (23%) incidents. Both classificatory systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary-wing incidents and are related to higher-level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.

  1. Combination of Complex-Based and Magnitude-Based Multiecho Water-Fat Separation for Accurate Quantification of Fat-Fraction

    PubMed Central

    Yu, Huanzhou; Shimakawa, Ann; Hines, Catherine D. G.; McKenzie, Charles A.; Hamilton, Gavin; Sirlin, Claude B.; Brittain, Jean H.; Reeder, Scott B.

    2011-01-01

    Multipoint water–fat separation techniques rely on different water–fat phase shifts generated at multiple echo times to decompose water and fat. Therefore, these methods require complex source images and allow unambiguous separation of water and fat signals. However, complex-based water–fat separation methods are sensitive to phase errors in the source images, which may lead to clinically important errors. An alternative approach to quantify fat is through “magnitude-based” methods that acquire multiecho magnitude images. Magnitude-based methods are insensitive to phase errors, but cannot estimate fat-fraction greater than 50%. In this work, we introduce a water–fat separation approach that combines the strengths of both complex and magnitude reconstruction algorithms. A magnitude-based reconstruction is applied after complex-based water–fat separation to remove the effect of phase errors. The results from the two reconstructions are then combined. We demonstrate that using this hybrid method, 0–100% fat-fraction can be estimated with improved accuracy at low fat-fractions. PMID:21695724
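
    The quantity being estimated throughout this record is the signal fat-fraction FF = |F| / (|W| + |F|) from the separated water and fat signals. The sketch below shows that computation and the water-fat dominance ambiguity that limits magnitude-only fitting to the 0-50% range; it is a schematic illustration, not the hybrid reconstruction itself, and the example values are invented.

    ```python
    import numpy as np

    def fat_fraction(water, fat):
        """Signal fat-fraction from separated water and fat signals (complex or real)."""
        w, f = np.abs(water), np.abs(fat)
        return f / (w + f)

    def magnitude_folded(ff):
        """Magnitude-only fitting cannot distinguish FF from 1 - FF, so its
        estimates fold into the 0-50% range (the limitation noted in the abstract)."""
        return np.minimum(ff, 1.0 - ff)

    ff_true = np.array([0.05, 0.30, 0.70, 0.95])
    print(fat_fraction(1 - ff_true, ff_true))  # complex-based estimate spans 0-100%
    print(magnitude_folded(ff_true))           # magnitude-only estimate folds to 0-50%
    ```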

  2. Study of a selection of 10 historical types of dosemeter: variation of the response to Hp(10) with photon energy and geometry of exposure.

    PubMed

    Thierry-Chef, I; Pernicka, F; Marshall, M; Cardis, E; Andreo, P

    2002-01-01

    An international collaborative study of cancer risk among workers in the nuclear industry is under way to estimate directly the cancer risk following protracted low-dose exposure to ionising radiation. An essential aspect of this study is the characterisation and quantification of errors in available dose estimates. One major source of errors is dosemeter response in workplace exposure conditions. Little information is available on energy and geometry response for most of the 124 different dosemeters used historically in participating facilities. Experiments were therefore set up to assess this, using 10 dosemeter types representative of those used over time. Results show that the largest errors were associated with the response of early dosemeters to low-energy photon radiation. Good response was found with modern dosemeters, even at low energy. These results are being used to estimate errors in the response for each dosemeter type used in the participating facilities, so that these can be taken into account in the estimates of cancer risk.

  3. Minimizing finite-volume discretization errors on polyhedral meshes

    NASA Astrophysics Data System (ADS)

    Mouly, Quentin; Evrard, Fabien; van Wachem, Berend; Denner, Fabian

    2017-11-01

    Tetrahedral meshes are widely used in CFD to simulate flows in and around complex geometries, as automatic generation tools now allow tetrahedral meshes to represent arbitrary domains in a relatively accessible manner. Polyhedral meshes, however, are an increasingly popular alternative. While tetrahedra have at most four neighbours, the higher number of neighbours per polyhedral cell leads to a more accurate evaluation of gradients, essential for the numerical resolution of PDEs. The use of polyhedral meshes, nonetheless, introduces discretization errors for finite-volume methods: skewness and non-orthogonality, which occur with all sorts of unstructured meshes, as well as errors due to non-planar faces, specific to polygonal faces with more than three vertices. Indeed, polyhedral mesh generation algorithms cannot, in general, guarantee to produce meshes free of non-planar faces. The presented work focuses on the quantification and optimization of discretization errors on polyhedral meshes in the context of finite-volume methods. A quasi-Newton method is employed to optimize the relevant mesh quality measures. Various meshes are optimized, and CFD results for cases with known solutions are presented to assess the improvements the optimization approach can provide.

  4. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    PubMed

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  5. An Analysis of U.S. Army Fratricide Incidents during the Global War on Terror (11 September 2001 to 31 March 2008)

    DTIC Science & Technology

    2010-03-15

    The classification of incidents is based on Reason's "Swiss cheese" model of human error (1990). Figure 1 of the report (the Swiss cheese model of human error causation) describes how an accident is likely to occur when all of the errors, or "holes", align. A detailed description of HFACS can be found in Wiegmann and Shappell (2003).

  6. Towards high-resolution 4D flow MRI in the human aorta using kt-GRAPPA and B1+ shimming at 7T.

    PubMed

    Schmitter, Sebastian; Schnell, Susanne; Uğurbil, Kâmil; Markl, Michael; Van de Moortele, Pierre-François

    2016-08-01

    To evaluate the feasibility of aortic 4D flow magnetic resonance imaging (MRI) at 7T with improved spatial resolution using kt-GRAPPA acceleration while restricting acquisition time, and to address radiofrequency (RF) excitation heterogeneities with B1+ shimming. 4D flow MRI data were obtained in the aorta of eight subjects using a 16-channel transmit/receive coil array at 7T. Flow quantification and acquisition time were compared for a kt-GRAPPA accelerated (R = 5) and a standard GRAPPA (R = 2) accelerated protocol. The impact of different dynamic B1+ shimming strategies on flow quantification was investigated. Two kt-GRAPPA accelerated protocols with 1.2 × 1.2 × 1.2 mm^3 and 1.8 × 1.8 × 2.4 mm^3 spatial resolution were compared. Using kt-GRAPPA, we achieved a 4.3-fold reduction in net acquisition time, resulting in scan times of about 10 minutes. No significant effect on flow quantification was observed compared to standard GRAPPA with R = 2. Optimizing the B1+ fields for the aorta significantly affected flow quantification (P < 0.05), while specific B1+ settings were required for respiration navigators. The high-resolution protocol yielded similar flow quantification, but allowed the depiction of branching vessels. 7T in combination with B1+ shimming allows for high-resolution 4D flow MRI acquisitions in the human aorta, while kt-GRAPPA limits total scan times without affecting flow quantification. J. Magn. Reson. Imaging 2016;44:486-499. © 2016 Wiley Periodicals, Inc.

  7. A Quality Improvement Project to Decrease Human Milk Errors in the NICU.

    PubMed

    Oza-Frank, Reena; Kachoria, Rashmi; Dail, James; Green, Jasmine; Walls, Krista; McClead, Richard E

    2017-02-01

    Ensuring safe human milk in the NICU is a complex process with many potential points for error, of which one of the most serious is administration of the wrong milk to the wrong infant. Our objective was to describe a quality improvement initiative that was associated with a reduction in human milk administration errors identified over a 6-year period in a typical, large NICU setting. We employed a quasi-experimental time series quality improvement initiative by using tools from the model for improvement, Six Sigma methodology, and evidence-based interventions. Scanned errors were identified from the human milk barcode medication administration system. Scanned errors of interest were wrong-milk-to-wrong-infant, expired-milk, or preparation errors. The scanned error rate and the impact of additional improvement interventions from 2009 to 2015 were monitored by using statistical process control charts. From 2009 to 2015, the total number of errors scanned declined from 97.1 per 1000 bottles to 10.8. Specifically, the number of expired milk error scans declined from 84.0 per 1000 bottles to 8.9. The number of preparation errors (4.8 per 1000 bottles to 2.2) and wrong-milk-to-wrong-infant errors scanned (8.3 per 1000 bottles to 2.0) also declined. By reducing the number of errors scanned, the number of opportunities for errors also decreased. Interventions that likely had the greatest impact on reducing the number of scanned errors included installation of bedside (versus centralized) scanners and dedicated staff to handle milk. Copyright © 2017 by the American Academy of Pediatrics.

  8. 31 P magnetic resonance fingerprinting for rapid quantification of creatine kinase reaction rate in vivo.

    PubMed

    Wang, Charlie Y; Liu, Yuchi; Huang, Shuying; Griswold, Mark A; Seiberlich, Nicole; Yu, Xin

    2017-12-01

    The purpose of this work was to develop a 31P spectroscopic magnetic resonance fingerprinting (MRF) method for fast quantification of the chemical exchange rate between phosphocreatine (PCr) and adenosine triphosphate (ATP) via creatine kinase (CK). A 31P MRF sequence (CK-MRF) was developed to quantify the forward rate constant of ATP synthesis via CK (kfCK), the T1 relaxation time of PCr (T1PCr), and the PCr-to-ATP concentration ratio (MRPCr). The CK-MRF sequence used a balanced steady-state free precession (bSSFP)-type excitation with ramped flip angles and a unique saturation scheme sensitive to the exchange between PCr and γATP. Parameter estimation was accomplished by matching the acquired signals to a dictionary generated using the Bloch-McConnell equation. Simulation studies were performed to examine the susceptibility of the CK-MRF method to several potential error sources. The accuracy of nonlocalized CK-MRF measurements before and after an ischemia-reperfusion (IR) protocol was compared with the magnetization transfer (MT-MRS) method in rat hindlimb at 9.4 T (n = 14). The reproducibility of CK-MRF was also assessed by comparing CK-MRF measurements with both MT-MRS (n = 17) and four angle saturation transfer (FAST) (n = 7). Simulation results showed that CK-MRF quantification of kfCK was robust, with less than 5% error in the presence of model inaccuracies including dictionary resolution, metabolite T2 values, inorganic phosphate metabolism, and B1 miscalibration. Estimation of kfCK by CK-MRF (0.38 ± 0.02 s^-1 at baseline and 0.42 ± 0.03 s^-1 post-IR) showed strong agreement with MT-MRS (0.39 ± 0.03 s^-1 at baseline and 0.44 ± 0.04 s^-1 post-IR). kfCK estimation was also similar between CK-MRF and FAST (0.38 ± 0.02 s^-1 for CK-MRF and 0.38 ± 0.11 s^-1 for FAST). The coefficient of variation from 20 s CK-MRF quantification of kfCK was 42% of that by 150 s MT-MRS acquisition and was 12% of that by 20 s FAST acquisition. This study demonstrates the potential of a 31P spectroscopic MRF framework for rapid, accurate and reproducible quantification of the chemical exchange rate of CK in vivo. Copyright © 2017 John Wiley & Sons, Ltd.
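
    MRF parameter estimation of the kind described here matches each measured signal evolution to the dictionary atom with the highest normalized inner product and reads off that atom's simulated parameters. The sketch below shows only this generic matching step with a random stand-in dictionary; the parameter ranges are illustrative and the Bloch-McConnell simulation used by the authors is not reproduced.

    ```python
    import numpy as np

    def mrf_match(signal, dictionary, params):
        """Return parameters of the dictionary atom with the highest normalized
        inner product with the measured signal evolution."""
        d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
        s = signal / np.linalg.norm(signal)
        best = np.argmax(np.abs(d @ s))
        return params[best]

    rng = np.random.default_rng(1)
    n_atoms, n_timepoints = 5000, 64
    dictionary = rng.normal(size=(n_atoms, n_timepoints))       # stand-in for simulated signal evolutions
    params = np.column_stack([rng.uniform(0.1, 0.8, n_atoms),   # kfCK (s^-1), illustrative range
                              rng.uniform(2.0, 5.0, n_atoms)])  # T1 of PCr (s), illustrative range
    measured = dictionary[1234] + 0.05 * rng.normal(size=n_timepoints)
    print(mrf_match(measured, dictionary, params), params[1234])
    ```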

  9. Candida tropicalis biofilm and human epithelium invasion is highly influenced by environmental pH.

    PubMed

    Ferreira, Carina; Gonçalves, Bruna; Vilas Boas, Diana; Oliveira, Hugo; Henriques, Mariana; Azeredo, Joana; Silva, Sónia

    2016-11-01

    The main goal of this study was to investigate the role of pH on Candida tropicalis virulence determinants, namely the ability to form biofilms and to colonize/invade reconstituted human vaginal epithelia. Biofilm formation was evaluated by enumeration of cultivable cells, total biomass quantification and structural analysis by scanning electron microscopy and confocal laser scanning microscopy. Candida tropicalis human vaginal epithelium colonization and invasiveness were examined qualitatively by epifluorescence microscopy and quantitatively by a novel quantitative real-time PCR protocol for Candida quantification in tissues. The results revealed that environmental pH influences C. tropicalis biofilm formation as well as the colonization and potential to invade human epithelium with intensification at neutral and alkaline conditions compared to acidic conditions. For the first time, we have demonstrated that C. tropicalis biofilm formation and invasion is highly influenced by environmental pH. © Crown copyright 2016.

  10. Sensitive Targeted Quantification of ERK Phosphorylation Dynamics and Stoichiometry in Human Cells without Affinity Enrichment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Tujin; Gao, Yuqian; Gaffrey, Matthew J.

    2014-12-17

    Mass spectrometry-based targeted quantification is a promising technology for site-specific quantification of posttranslational modifications (PTMs). However, a major constraint of most targeted MS approaches is the limited sensitivity for quantifying low-abundance PTMs, requiring the use of affinity reagents to enrich specific PTMs. Herein, we demonstrate the direct site-specific quantification of ERK phosphorylation isoforms (pT, pY, pTpY) and their relative stoichiometries using a highly sensitive targeted MS approach termed high-pressure, high-resolution separations with intelligent selection and multiplexing (PRISM). PRISM provides effective enrichment of target peptides within a given fraction from a complex biological matrix with minimal sample losses, followed by selected reaction monitoring (SRM) quantification. The PRISM-SRM approach enabled direct quantification of ERK phosphorylation in human mammary epithelial cells (HMEC) from as little as 25 µg of tryptic peptides from whole cell lysates. Compared to immobilized metal-ion affinity chromatography, PRISM provided >10-fold improvement in signal intensities, presumably due to the better peptide recovery of PRISM for handling small-size samples. This approach was applied to quantify ERK phosphorylation dynamics in HMEC treated with different doses of EGF at both the peak activation (10 min) and steady state (2 h). At 10 min, the maximal ERK activation was observed with the 0.3 ng/mL dose, whereas the maximal steady state level of ERK activation at 2 h was at the 3 ng/mL dose, corresponding to 1200 and 9000 occupied receptors, respectively. At 10 min, the maximally activated pTpY isoform represented ~40% of total ERK, falling to less than 10% at 2 h. The time course and dose-response profiles of individual phosphorylated ERK isoforms indicated that singly phosphorylated pT-ERK never increases significantly, while the increase of pY-ERK paralleled that of pTpY-ERK. These data support a processive, rather than distributed, model of ERK phosphorylation. The PRISM-SRM quantification of protein phosphorylation illustrates the potential for simultaneous quantification of multiple PTMs.

  11. Quantification of cytokine mRNA in peripheral blood mononuclear cells using branched DNA (bDNA) technology.

    PubMed

    Shen, L P; Sheridan, P; Cao, W W; Dailey, P J; Salazar-Gonzalez, J F; Breen, E C; Fahey, J L; Urdea, M S; Kolberg, J A

    1998-06-01

    Changes in the patterns of cytokine expression are thought to be of central importance in human infectious and inflammatory diseases. As such, there is a need for precise, reproducible assays for quantification of cytokine mRNA that are amenable to routine use in a clinical setting. In this report, we describe the design and performance of a branched DNA (bDNA) assay for the direct quantification of multiple cytokine mRNA levels in peripheral blood mononuclear cells (PBMCs). Oligonucleotide target probe sets were designed for several human cytokines, including TNFalpha, IL-2, IL-4, IL-6, IL-10, and IFNgamma. The bDNA assay yielded highly reproducible quantification of cytokine mRNAs, exhibited a broad linear dynamic range of over 3-log10, and showed a sensitivity sufficient to measure at least 3000 molecules. The potential clinical utility of the bDNA assay was explored by measuring cytokine mRNA levels in PBMCs from healthy and immunocompromised individuals. Cytokine expression levels in PBMCs from healthy blood donors were found to remain relatively stable over a one-month period of time. Elevated levels of IFNgamma mRNA were detected in PBMCs from HIV-1 seropositive individuals, but no differences in mean levels of TNFalpha or IL-6 mRNA were detected between seropositive and seronegative individuals. By providing a reproducible method for quantification of low abundance transcripts in clinical specimens, the bDNA assay may be useful for studies addressing the role of cytokine expression in disease.

  12. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2013-06-24


  13. Incorporation of a Redfern Integrated Optics ORION Laser Module with an IPG Photonics Erbium Fiber Laser to Create a Frequency Conversion Photon Doppler Velocimeter for US Army Research Laboratory Measurements: Hardware, Data Analysis, and Error Quantification

    DTIC Science & Technology

    2017-04-01

    ...measurements of oscillating surfaces, such as a vehicle hull subjected to an under-body blast, or a reactive armor tile subject to nearest-neighbor ... Cast steel (CS) subscale tub test: hull displacement measurements made via photon Doppler velocimetry. Aberdeen Proving Ground (MD): Army Research ...

  14. Multi-Scale Fusion of Information for Uncertainty Quantification and Management in Large-Scale Simulations

    DTIC Science & Technology

    2015-12-02

    ...simplification of the equations, but at the expense of introducing modeling errors. We have shown that the Wick solutions have accuracy comparable to ... the system of equations for the coefficients of formal power series solutions. Moreover, the structure of this propagator is seemingly universal, i.e. ... the problem of computing the numerical solution to kinetic partial differential equations involving many phase variables. These types of equations ...

  15. Interest of fluorine-19 nuclear magnetic resonance spectroscopy in the detection, identification and quantification of metabolites of anticancer and antifungal fluoropyrimidine drugs in human biofluids.

    PubMed

    Martino, Robert; Gilard, Véronique; Desmoulin, Franck; Malet-Martino, Myriam

    2006-01-01

    The metabolism of fluorouracil and fluorocytosine, two 5-fluoropyrimidine drugs in clinical use, was investigated. (19)F nuclear magnetic resonance (NMR) spectroscopy was used as an analytical technique for the detection, identification and quantification of fluorinated metabolites of these drugs in intact human biofluids as well as fluorinated degradation compounds of fluorouracil in commercial vials. (19)F NMR provides a highly specific tool for the detection and absolute quantification, in a single run, of all the fluorinated species, including unexpected substances, present in biofluids of patients treated with fluorouracil or fluorocytosine. Besides the parent drug and the already known fluorinated metabolites, nine new metabolites were identified for the first time with (19)F NMR in human biofluids. Six of them can only be observed with this technique: fluoride ion, N-carboxy-alpha-fluoro-beta-alanine, alpha-fluoro-beta-alanine conjugate with deoxycholic acid, 2-fluoro-3-hydroxypropanoic acid, fluoroacetic acid, O(2)-beta-glucuronide of fluorocytosine. (19)F NMR studies of biological fluids of patients treated with anticancer fluorouracil or antifungal fluorocytosine have furthered the understanding of their catabolic pathways.

  16. Fully automated, deep learning segmentation of oxygen-induced retinopathy images

    PubMed Central

    Xiao, Sa; Bucher, Felicitas; Wu, Yue; Rokem, Ariel; Lee, Cecilia S.; Marra, Kyle V.; Fallon, Regis; Diaz-Aguilar, Sophia; Aguilar, Edith; Friedlander, Martin; Lee, Aaron Y.

    2017-01-01

    Oxygen-induced retinopathy (OIR) is a widely used model to study ischemia-driven neovascularization (NV) in the retina and to serve in proof-of-concept studies in evaluating antiangiogenic drugs for ocular, as well as nonocular, diseases. The primary parameters that are analyzed in this mouse model include the percentage of retina with vaso-obliteration (VO) and NV areas. However, quantification of these two key variables is challenging because it requires human experts to read the images. Human readers are costly, time-consuming, and subject to bias. Using recent advances in machine learning and computer vision, we trained deep learning neural networks using over a thousand segmentations to fully automate segmentation in OIR images. In determining the percentage area of VO, our algorithm achieved a range of correlation coefficients similar to that of inter-expert correlation coefficients. In addition, our algorithm achieved a higher range of correlation coefficients compared with inter-expert correlation coefficients for quantification of the percentage area of neovascular tufts. In summary, we have created an open-source, fully automated pipeline for the quantification of key values of OIR images using deep learning neural networks. PMID:29263301

  17. DNA methylation analysis from saliva samples for epidemiological studies.

    PubMed

    Nishitani, Shota; Parets, Sasha E; Haas, Brian W; Smith, Alicia K

    2018-06-18

    Saliva is a non-invasive, easily accessible tissue, which is regularly collected in large epidemiological studies to examine genetic questions. Recently, it is becoming more common to use saliva to assess DNA methylation. However, DNA extracted from saliva is a mixture of both bacterial and human DNA derived from epithelial and immune cells in the mouth. Thus, there are unique challenges to using salivary DNA in methylation studies that can influence data quality. This study assesses: (1) quantification of human DNA after extraction; (2) delineation of human and bacterial DNA; (3) bisulfite conversion (BSC); (4) quantification of BSC DNA; (5) PCR amplification of BSC DNA from saliva and; (6) quantitation of DNA methylation with a targeted assay. The framework proposed will allow saliva samples to be more widely used in targeted epigenetic studies.

  18. Human errors and measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Kuselman, Ilya; Pennecchi, Francesca

    2015-04-01

    Evaluation of the residual risk of human errors in a measurement and testing laboratory, remaining after error reduction by the laboratory quality system, and quantification of the consequences of this risk for the quality of the measurement/test results are discussed based on expert judgments and Monte Carlo simulations. A procedure for evaluating the contribution of this residual risk to the measurement uncertainty budget is proposed. Examples are provided using earlier published sets of expert judgments on human errors in pH measurement of groundwater, elemental analysis of geological samples by inductively coupled plasma mass spectrometry, and multi-residue analysis of pesticides in fruits and vegetables. The human error contribution to the measurement uncertainty budget in the examples was not negligible, yet also not dominant. This was assessed as a good risk management result.
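
    The idea of folding a residual risk of human error into an uncertainty budget can be illustrated with a generic Monte Carlo mixture model: with a small probability an undetected error adds a bias to an otherwise in-control measurement, and the standard deviation of the mixture is compared with the error-free standard uncertainty. The probability and magnitudes below are invented for illustration and are not taken from the cited pH, ICP-MS, or pesticide examples.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 200_000
    u_measurement = 0.02     # standard uncertainty without human error (pH units), illustrative
    p_error = 0.01           # residual probability that a human error goes undetected, illustrative
    error_magnitude = 0.10   # typical bias introduced by such an error, illustrative

    clean = rng.normal(0.0, u_measurement, n)
    occurs = rng.random(n) < p_error
    with_error = clean + occurs * rng.normal(error_magnitude, 0.02, n)

    u_total = with_error.std(ddof=1)
    print(f"u(no human error)               = {u_measurement:.4f}")
    print(f"u(with residual human error)    = {u_total:.4f}")
    print(f"human-error contribution (quad) = {np.sqrt(u_total**2 - u_measurement**2):.4f}")
    ```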

  19. Noninvasive quantification of human brain antioxidant concentrations after an intravenous bolus of vitamin C

    USDA-ARS?s Scientific Manuscript database

    Background: Until now, antioxidant based initiatives for preventing dementia have lacked a means to detect deficiency or measure pharmacologic effect in the human brain in situ. Objective: Our objective was to apply a novel method to measure key human brain antioxidant concentrations throughout the ...

  20. Human factors evaluation of remote afterloading brachytherapy: Human error and critical tasks in remote afterloading brachytherapy and approaches for improved system performance. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L.

    1995-05-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety-significant problems related to human error.

  1. Human error analysis of commercial aviation accidents using the human factors analysis and classification system (HFACS)

    DOT National Transportation Integrated Search

    2001-02-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based upon ...

  2. Diagnosis and therapeutic monitoring of inborn errors of creatine metabolism and transport using liquid chromatography-tandem mass spectrometry in urine, plasma and CSF.

    PubMed

    Haas, Dorothea; Gan-Schreier, Hongying; Langhans, Claus-Dieter; Anninos, Alexandros; Haege, Gisela; Burgard, Peter; Schulze, Andreas; Hoffmann, Georg F; Okun, Jürgen G

    2014-03-15

    Biochemical detection of inborn errors of creatine metabolism or transport relies on the analysis of three main metabolites in biological fluids: guanidinoacetate (GAA), creatine (CT) and creatinine (CTN). The nonspecific clinical presentation of these diseases may explain why only a few patients have been diagnosed so far. We describe an LC-MS/MS method allowing fast and reliable diagnosis by simultaneous quantification of GAA, CT and CTN in urine, plasma and cerebrospinal fluid (CSF), and we established reference values for each material. For quantification, deuterated stable isotopes of each analyte were used as internal standards. GAA, CT and CTN were separated by reversed-phase HPLC. The characterization was carried out by scanning the ions of each compound by negative ion tandem mass spectrometry. Butylation is needed to achieve sufficient signal intensity for GAA and CT, but it is not useful for analyzing CTN. The assay is linear over a broad range of analyte concentrations usually found in urine, plasma and CSF. Comparison of the "traditional" cation-exchange chromatography and LC-MS/MS showed proportional differences but linear relationships between the two methods. The described method is characterized by high speed and linearity over large concentration ranges comparable to other published LC-MS methods, but with higher sensitivity for GAA and CT. In addition, we present the largest reference group ever published for guanidino compounds in all relevant body fluids. This method is therefore applicable for high-throughput approaches for diagnosis and follow-up of inborn errors of creatine metabolism and transport. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. High-speed holographic system for full-field transient vibrometry of the human tympanic membrane

    NASA Astrophysics Data System (ADS)

    Dobrev, I.; Harrington, E. J.; Cheng, T.; Furlong, C.; Rosowski, J. J.

    2014-07-01

    Understanding of the human hearing process requires the quantification of the transient response of the human ear, and of the human tympanic membrane (TM or eardrum) in particular. Current state-of-the-art medical methods to quantify the transient acousto-mechanical response of the TM provide only averaged acoustic information or local information at a few points. This may be insufficient to fully describe the complex patterns unfolding across the full surface of the TM. Existing engineering systems for full-field nanometer measurements of transient events, typically based on holographic methods, constrain the maximum sampling speed and/or require complex experimental setups. We have developed and implemented a new high-speed (i.e., >40 kfps) holographic system (HHS) with a hybrid spatio-temporal local correlation phase-sampling method that allows quantification of the full-field nanometer transient (i.e., >10 kHz) displacement of the human TM. The temporal accuracy and resolution of the HHS are validated against laser Doppler vibrometry (LDV) on both artificial membranes and human TMs. The high temporal (i.e., <24 μs) and spatial (i.e., >100k data points) resolution of our HHS enables simultaneous measurement of the time waveform of the full surface of the TM. These capabilities allow for quantification of spatially dependent motion parameters such as energy propagation delays and surface wave speeds, which can be used to infer local material properties across the surface of the TM. The HHS could provide a new tool for the investigation of the auditory system, with applications in medical research, in-vivo clinical diagnosis, and hearing aid design.

  4. Her2Net: A Deep Framework for Semantic Segmentation and Classification of Cell Membranes and Nuclei in Breast Cancer Evaluation.

    PubMed

    Saha, Monjoy; Chakraborty, Chandan

    2018-05-01

    We present an efficient deep learning framework for identifying, segmenting, and classifying cell membranes and nuclei from human epidermal growth factor receptor-2 (HER2)-stained breast cancer images with minimal user intervention. This is a long-standing issue for pathologists because the manual quantification of HER2 is error-prone, costly, and time-consuming. Hence, we propose a deep learning-based HER2 deep neural network (Her2Net) to solve this issue. The convolutional and deconvolutional parts of the proposed Her2Net framework consisted mainly of multiple convolution layers, max-pooling layers, spatial pyramid pooling layers, deconvolution layers, up-sampling layers, and trapezoidal long short-term memory (TLSTM). A fully connected layer and a softmax layer were also used for classification and error estimation. Finally, HER2 scores were calculated based on the classification results. The main contribution of our proposed Her2Net framework includes the implementation of TLSTM and a deep learning framework for cell membrane and nucleus detection, segmentation, and classification and HER2 scoring. Our proposed Her2Net achieved 96.64% precision, 96.79% recall, 96.71% F-score, 93.08% negative predictive value, 98.33% accuracy, and a 6.84% false-positive rate. Our results demonstrate the high accuracy and wide applicability of the proposed Her2Net in the context of HER2 scoring for breast cancer evaluation.
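
    The performance figures quoted for Her2Net (precision, recall, F-score, negative predictive value, accuracy, false-positive rate) all follow from confusion-matrix counts. A generic sketch of those formulas is given below with made-up counts; it is not code from the paper.

    ```python
    def classification_metrics(tp, fp, tn, fn):
        """Standard binary-classification metrics from confusion-matrix counts."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)                  # sensitivity
        f_score = 2 * precision * recall / (precision + recall)
        npv = tn / (tn + fn)                     # negative predictive value
        accuracy = (tp + tn) / (tp + fp + tn + fn)
        fpr = fp / (fp + tn)                     # false-positive rate
        return dict(precision=precision, recall=recall, f_score=f_score,
                    npv=npv, accuracy=accuracy, fpr=fpr)

    print(classification_metrics(tp=960, fp=33, tn=980, fn=32))  # illustrative counts only
    ```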

  5. An accurate proteomic quantification method: fluorescence labeling absolute quantification (FLAQ) using multidimensional liquid chromatography and tandem mass spectrometry.

    PubMed

    Liu, Junyan; Liu, Yang; Gao, Mingxia; Zhang, Xiangmin

    2012-08-01

    A facile proteomic quantification method, fluorescent labeling absolute quantification (FLAQ), was developed. Instead of using MS for quantification, FLAQ is a chromatography-based quantification combined with MS for identification. Multidimensional liquid chromatography (MDLC) with high-accuracy laser-induced fluorescence (LIF) detection and a tandem MS system were employed for FLAQ. Several requirements should be met for fluorescent labeling in MS identification: labeling completeness, minimal side reactions, simple MS spectra, and no extra tandem MS fragmentations for structure elucidation. A fluorescent dye, 5-iodoacetamidofluorescein, was finally chosen to label proteins on all cysteine residues. The dye was compatible with the process of trypsin digestion and MALDI MS identification. Quantitative labeling was achieved with optimization of the reaction conditions. A synthesized peptide and the model proteins BSA (35 cysteines) and OVA (five cysteines) were used to verify the completeness of labeling. Proteins were separated through MDLC and quantified based on fluorescence intensities, followed by MS identification. High accuracy (RSD < 1.58%) and wide linearity of quantification (1-10^5) were achieved by LIF detection. The limit of quantitation for the model protein was as low as 0.34 amol. As a demonstration, a subset of proteins in the human liver proteome was quantified using FLAQ. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. PSEA-Quant: a protein set enrichment analysis on label-free and label-based protein quantification data.

    PubMed

    Lavallée-Adam, Mathieu; Rauniyar, Navin; McClatchy, Daniel B; Yates, John R

    2014-12-05

    The majority of large-scale proteomics quantification methods yield long lists of quantified proteins that are often difficult to interpret and poorly reproduced. Computational approaches are required to analyze such intricate quantitative proteomics data sets. We propose a statistical approach to computationally identify protein sets (e.g., Gene Ontology (GO) terms) that are significantly enriched with abundant proteins with reproducible quantification measurements across a set of replicates. To this end, we developed PSEA-Quant, a protein set enrichment analysis algorithm for label-free and label-based protein quantification data sets. It offers an alternative approach to classic GO analyses, models protein annotation biases, and allows the analysis of samples originating from a single condition, unlike analogous approaches such as GSEA and PSEA. We demonstrate that PSEA-Quant produces results complementary to GO analyses. We also show that PSEA-Quant provides valuable information about the biological processes involved in cystic fibrosis using label-free protein quantification of a cell line expressing a CFTR mutant. Finally, PSEA-Quant highlights the differences in the mechanisms taking place in the human, rat, and mouse brain frontal cortices based on tandem mass tag quantification. Our approach, which is available online, will thus improve the analysis of proteomics quantification data sets by providing meaningful biological insights.

  7. PSEA-Quant: A Protein Set Enrichment Analysis on Label-Free and Label-Based Protein Quantification Data

    PubMed Central

    2015-01-01

    The majority of large-scale proteomics quantification methods yield long lists of quantified proteins that are often difficult to interpret and poorly reproduced. Computational approaches are required to analyze such intricate quantitative proteomics data sets. We propose a statistical approach to computationally identify protein sets (e.g., Gene Ontology (GO) terms) that are significantly enriched with abundant proteins with reproducible quantification measurements across a set of replicates. To this end, we developed PSEA-Quant, a protein set enrichment analysis algorithm for label-free and label-based protein quantification data sets. It offers an alternative approach to classic GO analyses, models protein annotation biases, and allows the analysis of samples originating from a single condition, unlike analogous approaches such as GSEA and PSEA. We demonstrate that PSEA-Quant produces results complementary to GO analyses. We also show that PSEA-Quant provides valuable information about the biological processes involved in cystic fibrosis using label-free protein quantification of a cell line expressing a CFTR mutant. Finally, PSEA-Quant highlights the differences in the mechanisms taking place in the human, rat, and mouse brain frontal cortices based on tandem mass tag quantification. Our approach, which is available online, will thus improve the analysis of proteomics quantification data sets by providing meaningful biological insights. PMID:25177766

  8. Source separation on hyperspectral cube applied to dermatology

    NASA Astrophysics Data System (ADS)

    Mitra, J.; Jolivot, R.; Vabres, P.; Marzani, F. S.

    2010-03-01

    This paper proposes a method for quantifying the components underlying human skin that are assumed to be responsible for the effective reflectance spectrum of the skin over the visible wavelength range. The method is based on independent component analysis, assuming that the epidermal melanin and the dermal haemoglobin absorbance spectra are independent of each other. The method extracts the source spectra that correspond to the ideal absorbance spectra of melanin and haemoglobin. The noisy melanin spectrum is corrected using a polynomial fit, and the quantifications associated with it are re-estimated. The results produce feasible quantifications of each source component in the examined skin patch.
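
    The source-separation step described here can be sketched with FastICA from scikit-learn: each pixel's absorbance spectrum is modelled as a mixture of two independent source spectra (nominally melanin and haemoglobin), and the per-pixel activations serve as quantifications up to scale and sign. The synthetic spectra below are placeholders, and the sketch omits the authors' polynomial correction of the melanin source.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_wavelengths, n_pixels = 36, 500

    # Placeholder "melanin-like" and "haemoglobin-like" absorbance spectra
    wl = np.linspace(450, 700, n_wavelengths)
    melanin = np.exp(-0.005 * (wl - 450))
    haemoglobin = (np.exp(-0.5 * ((wl - 560) / 25) ** 2)
                   + 0.6 * np.exp(-0.5 * ((wl - 578) / 10) ** 2))

    # Mix the sources with random per-pixel concentrations plus a little noise
    conc = rng.uniform(0.2, 1.5, size=(n_pixels, 2))
    absorbance = conc @ np.vstack([melanin, haemoglobin]) \
        + 0.01 * rng.normal(size=(n_pixels, n_wavelengths))

    ica = FastICA(n_components=2, random_state=0)
    conc_est = ica.fit_transform(absorbance)  # per-pixel quantifications, up to scale/sign
    spectra_est = ica.mixing_.T               # recovered source spectra, shape (2, n_wavelengths)
    print(conc_est.shape, spectra_est.shape)
    ```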

  9. Quantification of Crypt and Stem Cell Evolution in the Normal and Neoplastic Human Colon

    PubMed Central

    Baker, Ann-Marie; Cereser, Biancastella; Melton, Samuel; Fletcher, Alexander G.; Rodriguez-Justo, Manuel; Tadrous, Paul J.; Humphries, Adam; Elia, George; McDonald, Stuart A.C.; Wright, Nicholas A.; Simons, Benjamin D.; Jansen, Marnix; Graham, Trevor A.

    2014-01-01

    Summary Human intestinal stem cell and crypt dynamics remain poorly characterized because transgenic lineage-tracing methods are impractical in humans. Here, we have circumvented this problem by quantitatively using somatic mtDNA mutations to trace clonal lineages. By analyzing clonal imprints on the walls of colonic crypts, we show that human intestinal stem cells conform to one-dimensional neutral drift dynamics with a “functional” stem cell number of five to six in both normal patients and individuals with familial adenomatous polyposis (germline APC−/+). Furthermore, we show that, in adenomatous crypts (APC−/−), there is a proportionate increase in both functional stem cell number and the loss/replacement rate. Finally, by analyzing fields of mtDNA mutant crypts, we show that a normal colon crypt divides around once every 30–40 years, and the division rate is increased in adenomas by at least an order of magnitude. These data provide in vivo quantification of human intestinal stem cell and crypt dynamics. PMID:25127143
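
    The one-dimensional neutral drift dynamics invoked above can be illustrated with a small Moran-type simulation on a ring of N functional stem cells, where at each step a random cell is replaced by a copy of a neighbour until one clone fixes. The sketch below is generic, uses N = 5 as suggested by the abstract, and is not the authors' inference code.

    ```python
    import random

    def time_to_monoclonality(n_stem=5, seed=0):
        """Neutral drift in a ring of n_stem functional stem cells: at each step a
        random cell is replaced by a copy of one of its neighbours. Returns the
        number of replacement events until the crypt becomes monoclonal."""
        rng = random.Random(seed)
        cells = list(range(n_stem))  # each cell starts as its own clone
        steps = 0
        while len(set(cells)) > 1:
            i = rng.randrange(n_stem)                      # cell that is lost
            neighbour = (i + rng.choice([-1, 1])) % n_stem
            cells[i] = cells[neighbour]                    # neighbour's clone replaces it
            steps += 1
        return steps

    # Mean number of replacement events to fixation over 1000 simulated crypts
    print(sum(time_to_monoclonality(5, s) for s in range(1000)) / 1000)
    ```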

  10. Improved HPLC method for determination of four PPIs, omeprazole, pantoprazole, lansoprazole and rabeprazole in human plasma.

    PubMed

    Noubarani, Maryam; Keyhanfar, Fariborz; Motevalian, Manijeh; Mahmoudian, Masoud

    2010-01-01

    To develop a simple and rapid HPLC method for measuring four proton-pump inhibitors (PPIs), omeprazole (OPZ), pantoprazole (PPZ), lansoprazole (LPZ) and rabeprazole (RPZ), in human plasma. Following a single-step liquid-liquid extraction, analytes along with an internal standard (IS) were separated using an isocratic mobile phase of phosphate buffer (10 mM)/acetonitrile (53/47, v/v, pH adjusted to 7.3 with triethylamine) at a flow rate of 1 mL/min on a reverse-phase TRACER EXCEL 120 ODS-A column at room temperature. The total analytical run time for the selected PPIs was 10 min. The assays exhibited good linearity (r^2 > 0.99) over the studied ranges of 20 to 2500 ng/mL for OPZ, 20 to 4000 ng/mL for PPZ, 20 to 3000 ng/mL for LPZ and 20 to 1500 ng/mL for RPZ. The recovery of the method was equal to or greater than 80%, and the lower limit of quantification (LLOQ) was 20 ng/mL for all four PPIs. The coefficient of variation and error at all of the intra-day and inter-day assessments were less than 9.2% for all compounds. The results indicated that this method is a simple, rapid, precise and accurate assay for determination of the four PPIs' concentrations in human plasma. This validated method is sensitive and reproducible enough to be used in pharmacokinetic studies and is also time- and cost-effective when the selected PPIs need to be analyzed.

  11. Simultaneous quantification of alternatively spliced transcripts in a single droplet digital PCR reaction.

    PubMed

    Sun, Bing; Tao, Lian; Zheng, Yun-Ling

    2014-06-01

    Human telomerase reverse transcriptase (hTERT) is an essential component required for telomerase activity and telomere maintenance. Several alternatively spliced forms of hTERT mRNA have been reported in human primary and tumor cells. Currently, however, there is no sensitive and accurate method for the simultaneous quantification of multiple alternatively spliced RNA transcripts, such as in the case of hTERT. Here we show droplet digital PCR (ddPCR) provides sensitive, simultaneous digital quantification in a single reaction of two alternatively spliced single deletion hTERT transcripts (α-/β+ and α+/β-) as well as the opportunity to manually quantify non-deletion (α+/β+) and double deletion (α-/β-) transcripts. Our ddPCR method enables direct comparison among four alternatively spliced mRNAs without the need for internal standards or multiple primer pairs specific for each variant as real-time PCR (qPCR) requires, thus eliminating potential variation due to differences in PCR amplification efficiency.
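
    Absolute quantification in ddPCR rests on Poisson statistics: if a fraction p of droplets is positive, the mean copies per droplet is lambda = -ln(1 - p), and the concentration is lambda divided by the droplet volume. The sketch below applies this to two hypothetical splice-variant channels; the droplet counts and the 0.85 nL droplet volume are illustrative values, not numbers from the paper.

    ```python
    import math

    def ddpcr_concentration(positive, total, droplet_volume_nl=0.85):
        """Copies per microlitre of reaction from droplet counts via the Poisson correction."""
        p = positive / total
        lam = -math.log(1.0 - p)                 # mean copies per droplet
        return lam / (droplet_volume_nl * 1e-3)  # convert nL -> uL

    # Hypothetical counts for two alternatively spliced transcripts in one reaction
    for name, pos in [("alpha-/beta+", 1200), ("alpha+/beta-", 450)]:
        print(name, round(ddpcr_concentration(pos, total=15000), 1), "copies/uL")
    ```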

  12. Error correction in multi-fidelity molecular dynamics simulations using functional uncertainty quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reeve, Samuel Temple; Strachan, Alejandro, E-mail: strachan@purdue.edu

    We use functional, Fréchet, derivatives to quantify how thermodynamic outputs of a molecular dynamics (MD) simulation depend on the potential used to compute atomic interactions. Our approach quantifies the sensitivity of the quantities of interest with respect to the input functions, as opposed to its parameters, as is done in typical uncertainty quantification methods. We show that the functional sensitivity of the average potential energy and pressure in isothermal, isochoric MD simulations using Lennard–Jones two-body interactions can be used to accurately predict those properties for other interatomic potentials (with different functional forms) without re-running the simulations. This is demonstrated under three different thermodynamic conditions, namely a crystal at room temperature, a liquid at ambient pressure, and a high pressure liquid. The method provides accurate predictions as long as the change in potential can be reasonably described to first order and does not significantly affect the region in phase space explored by the simulation. The functional uncertainty quantification approach can be used to estimate the uncertainties associated with constitutive models used in the simulation and to correct predictions if a more accurate representation becomes available.

  13. Robust high-throughput batch screening method in 384-well format with optical in-line resin quantification.

    PubMed

    Kittelmann, Jörg; Ottens, Marcel; Hubbuch, Jürgen

    2015-04-15

    High-throughput batch screening technologies have become an important tool in downstream process development. Although continued miniaturization saves time and sample consumption, no screening process has yet been described in the 384-well microplate format. Several processes are established in the 96-well format to investigate protein-adsorbent interactions, utilizing between 6.8 and 50 μL of resin per well. However, as sample consumption scales with resin volume and throughput scales with the number of experiments per microplate, these processes are limited in the costs and time they can save. In this work, a new method for in-well resin quantification by optical means, applicable in the 384-well format with resin volumes as small as 0.1 μL, is introduced. A HTS batch isotherm process is described that utilizes this new method in combination with optical sample volume quantification for screening of isotherm parameters in 384-well microplates. Results are qualified by confidence bounds determined by bootstrap analysis and a comprehensive Monte Carlo study of error propagation. This new approach opens the door to a variety of screening processes in the 384-well format on HTS stations, higher-quality screening data and an increase in throughput. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Mixture quantification using PLS in plastic scintillation measurements.

    PubMed

    Bagán, H; Tarancón, A; Rauret, G; García, J F

    2011-06-01

    This article reports the capability of plastic scintillation (PS) combined with multivariate calibration (partial least squares, PLS) to detect and quantify alpha and beta emitters in mixtures. While several attempts have been made with this purpose in mind using liquid scintillation (LS), none had been made using PS, which has the great advantage of not producing mixed waste after the measurements are performed. With this objective, ternary mixtures of alpha and beta emitters ((241)Am, (137)Cs and (90)Sr/(90)Y) were quantified. Procedure optimisation evaluated the use of the net spectra or the sample spectra, the inclusion of different spectra obtained at different values of the Pulse Shape Analysis parameter, and the application of the PLS1 or PLS2 algorithms. The conclusions show that the use of PS+PLS2 applied to the sample spectra, without any pulse shape discrimination, allows quantification of the activities with relative errors of less than 10% in most cases. This procedure not only allows quantification of mixtures but also reduces measurement time (no blanks are required), and its application does not require detectors that include the pulse shape analysis parameter. Copyright © 2011 Elsevier Ltd. All rights reserved.
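
    The PS + PLS2 calibration described above amounts to regressing the known activities of the three radionuclides on the measured spectra, with all three responses handled in one model (the PLS2 case of scikit-learn's PLSRegression). The sketch below uses synthetic stand-in spectra and activities; it only illustrates the calibration and prediction steps, not the authors' optimisation of spectra type or pulse-shape settings.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    n_channels, n_standards = 256, 40

    # Placeholder pure-component spectra standing in for 241Am, 137Cs and 90Sr/90Y
    channels = np.arange(n_channels)
    pure = np.vstack([np.exp(-0.5 * ((channels - c) / w) ** 2)
                      for c, w in [(40, 10), (120, 30), (180, 45)]])

    # Calibration set: known activities (Bq) and their mixed, noisy spectra
    activities = rng.uniform(10, 200, size=(n_standards, 3))
    spectra = activities @ pure + rng.normal(0, 2.0, size=(n_standards, n_channels))

    pls2 = PLSRegression(n_components=5).fit(spectra, activities)  # PLS2: 3 responses at once
    test_activity = np.array([[50.0, 120.0, 80.0]])
    test_spectrum = test_activity @ pure + rng.normal(0, 2.0, size=(1, n_channels))
    print(pls2.predict(test_spectrum).round(1))  # estimated activities of the unknown mixture
    ```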

  15. Comparison of high-resolution ultrasonic resonator technology and Raman spectroscopy as novel process analytical tools for drug quantification in self-emulsifying drug delivery systems.

    PubMed

    Stillhart, Cordula; Kuentz, Martin

    2012-02-05

    Self-emulsifying drug delivery systems (SEDDS) are complex mixtures in which drug quantification can become a challenging task. Thus, a general need exists for novel analytical methods, and particular interest lies in techniques with the potential for process monitoring. This article compares Raman spectroscopy with high-resolution ultrasonic resonator technology (URT) for drug quantification in SEDDS. The model drugs fenofibrate, indomethacin, and probucol were quantitatively assayed in different self-emulsifying formulations. We measured ultrasound velocity and attenuation in the bulk formulation containing drug at different concentrations. The formulations were also studied by Raman spectroscopy. We used both an in-line immersion probe for the bulk formulation and a multi-fiber sensor for measuring through hard-gelatin capsules filled with SEDDS. Each method was assessed by calculating the relative standard error of prediction (RSEP) as well as the limit of quantification (LOQ) and the mean recovery. Raman spectroscopy led to excellent calibration models for the bulk formulation as well as the capsules. The RSEP depended on the SEDDS type, with values of 1.5-3.8%, while the LOQ was between 0.04 and 0.35% (w/w) for drug quantification in the bulk. Similarly, the analysis of the capsules led to RSEP of 1.9-6.5% and LOQ of 0.01-0.41% (w/w). On the other hand, ultrasound attenuation resulted in RSEP of 2.3-4.4% and LOQ of 0.1-0.6% (w/w). Moreover, ultrasound velocity provided an interesting analytical response in cases where the drug strongly affected the density or compressibility of the SEDDS. We conclude that ultrasonic resonator technology and Raman spectroscopy constitute suitable methods for drug quantification in SEDDS, which is promising for their use as process analytical technologies. Copyright © 2011 Elsevier B.V. All rights reserved.

  16. The Ethics of Quantification and Why It Doesn't Work; or Life among the Numerologists, Inumerates, and Qualitatives.

    ERIC Educational Resources Information Center

    Sax, Gilbert

    The paper states that quantification is neither ethical nor unethical, but is ethically neutral. It is the behavior or intent of the human being that is clearly a matter of ethical concern. Like numerology and the sects of inumerates and qualitatives, there is not so much an unethical practice that is supported as there is a lack of vision and…

  17. Quantitative interaction proteomics using mass spectrometry.

    PubMed

    Wepf, Alexander; Glatter, Timo; Schmidt, Alexander; Aebersold, Ruedi; Gstaiger, Matthias

    2009-03-01

    We present a mass spectrometry-based strategy for the absolute quantification of protein complex components isolated through affinity purification. We quantified bait proteins via isotope-labeled reference peptides corresponding to an affinity tag sequence and prey proteins by label-free correlational quantification using the precursor ion signal intensities of proteotypic peptides generated in reciprocal purifications. We used this method to quantitatively analyze interaction stoichiometries in the human protein phosphatase 2A network.

  18. The Challenges of Credible Thermal Protection System Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2013-01-01

    The paper discusses several of the challenges associated with developing a credible reliability estimate for a human-rated crew capsule thermal protection system. The process of developing such a credible estimate is subject to the quantification, modeling and propagation of numerous uncertainties within a probabilistic analysis. The development of specific investment recommendations, to improve the reliability prediction, among various potential testing and programmatic options is then accomplished through Bayesian analysis.

  19. Validated Method for the Quantification of Baclofen in Human Plasma Using Solid-Phase Extraction and Liquid Chromatography-Tandem Mass Spectrometry.

    PubMed

    Nahar, Limon Khatun; Cordero, Rosa Elena; Nutt, David; Lingford-Hughes, Anne; Turton, Samuel; Durant, Claire; Wilson, Sue; Paterson, Sue

    2016-03-01

    A highly sensitive and fully validated method was developed for the quantification of baclofen in human plasma. After adjusting the pH of the plasma samples using a phosphate buffer solution (pH 4), baclofen was purified using mixed mode (C8/cation exchange) solid-phase extraction (SPE) cartridges. Endogenous water-soluble compounds and lipids were removed from the cartridges before the samples were eluted and concentrated. The samples were analyzed using triple-quadrupole liquid chromatography-tandem mass spectrometry (LC-MS-MS) with triggered dynamic multiple reaction monitoring mode for simultaneous quantification and confirmation. The assay was linear from 25 to 1,000 ng/mL (r(2) > 0.999; n = 6). Intraday (n = 6) and interday (n = 15) imprecisions (% relative standard deviation) were <5%, and the average recovery was 30%. The limit of detection of the method was 5 ng/mL, and the limit of quantification was 25 ng/mL. Plasma samples from healthy male volunteers (n = 9, median age: 22) given two single oral doses of baclofen (10 and 60 mg) on nonconsecutive days were analyzed to demonstrate method applicability. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  20. Rapid Quantification of 25-Hydroxyvitamin D3 in Human Serum by Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Qi, Yulin; Müller, Miriam; Stokes, Caroline S.; Volmer, Dietrich A.

    2018-04-01

    LC-MS/MS is widely utilized today for quantification of vitamin D in biological fluids. However, mass spectrometric assays for vitamin D require very careful method optimization for precise, interference-free, and accurate analyses. Here, we explore chemical derivatization and matrix-assisted laser desorption/ionization (MALDI) as a rapid alternative for quantitative measurement of 25-hydroxyvitamin D3 in human serum, and compare it to results from LC-MS/MS. The method implemented an automated imaging step of each MALDI spot to locate areas of high intensity, avoid sweet-spot phenomena, and thus improve precision. There was no statistically significant difference in vitamin D quantification between MALDI-MS/MS and LC-MS/MS: the mean ± standard deviation was 29.4 ± 10.3 ng/mL for MALDI-MS/MS versus 30.3 ± 11.2 ng/mL for LC-MS/MS (P = 0.128) for the sum of the 25-hydroxyvitamin D epimers. The MALDI-based assay avoided time-consuming chromatographic separation steps and was thus much faster than the LC-MS/MS assay. It also consumed less sample, required no organic solvents, and was readily automated. In this proof-of-concept study, MALDI-MS readily demonstrated its potential for mass spectrometric quantification of vitamin D compounds in biological fluids.

  1. Volumetric adsorptive microsampling-liquid chromatography tandem mass spectrometry assay for the simultaneous quantification of four antibiotics in human blood: Method development, validation and comparison with dried blood spot.

    PubMed

    Barco, Sebastiano; Castagnola, Elio; Moscatelli, Andrea; Rudge, James; Tripodi, Gino; Cangemi, Giuliana

    2017-10-25

    In this paper we show the development and validation of a volumetric absorptive microsampling (VAMS™)-LC-MS/MS method for the simultaneous quantification of four antibiotics: piperacillin-tazobactam, meropenem, linezolid and ceftazidime in 10 μL of human blood. The novel VAMS-LC-MS/MS method has been compared with a dried blood spot (DBS)-based method in terms of impact of hematocrit (HCT) on accuracy, reproducibility, recovery and matrix effect. Antibiotics were extracted from VAMS and DBS by protein precipitation with methanol after a re-hydration step at 37°C for 10 min. LC-MS/MS was carried out on a Thermo Scientific™ TSQ Quantum™ Access MAX triple quadrupole coupled to an Accela™ UHPLC system. The VAMS-LC-MS/MS method is selective, precise and reproducible. In contrast to DBS, it allows an accurate quantification without any HCT influence. It has been applied to samples derived from pediatric patients under therapy. VAMS is a valid alternative sampling strategy for the quantification of antibiotics and is valuable in support of clinical PK/PD studies and consequently therapeutic drug monitoring (TDM) in pediatrics. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Compound Stimulus Presentation Does Not Deepen Extinction in Human Causal Learning

    PubMed Central

    Griffiths, Oren; Holmes, Nathan; Westbrook, R. Fred

    2017-01-01

    Models of associative learning have proposed that cue-outcome learning critically depends on the degree of prediction error encountered during training. Two experiments examined the role of error-driven extinction learning in a human causal learning task. Target cues underwent extinction in the presence of additional cues, which differed in the degree to which they predicted the outcome, thereby manipulating outcome expectancy and, in the absence of any change in reinforcement, prediction error. These prediction error manipulations have each been shown to modulate extinction learning in aversive conditioning studies. While both manipulations resulted in increased prediction error during training, neither enhanced extinction in the present human learning task (one manipulation resulted in less extinction at test). The results are discussed with reference to the types of associations that are regulated by prediction error, the types of error terms involved in their regulation, and how these interact with parameters involved in training. PMID:28232809

  3. StakeMeter: Value-Based Stakeholder Identification and Quantification Framework for Value-Based Software Systems

    PubMed Central

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N. A.; Zaheer, Kashif Bin

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called ‘StakeMeter’. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error. PMID:25799490

  4. Lesion detection and quantification performance of the Tachyon-I time-of-flight PET scanner: phantom and human studies.

    PubMed

    Zhang, Xuezhu; Peng, Qiyu; Zhou, Jian; Huber, Jennifer S; Moses, William W; Qi, Jinyi

    2018-03-16

    The first generation Tachyon PET (Tachyon-I) is a demonstration single-ring PET scanner that reaches a coincidence timing resolution of 314 ps using LSO scintillator crystals coupled to conventional photomultiplier tubes. The objective of this study was to quantify the improvement in both lesion detection and quantification performance resulting from the improved time-of-flight (TOF) capability of the Tachyon-I scanner. We developed a quantitative TOF image reconstruction method for the Tachyon-I and evaluated its TOF gain for lesion detection and quantification. Scans of either a standard NEMA torso phantom or healthy volunteers were used as the normal background data. Separately scanned point source and sphere data were superimposed onto the phantom or human data after accounting for the object attenuation. We used the bootstrap method to generate multiple independent noisy datasets with and without a lesion present. The signal-to-noise ratio (SNR) of a channelized hotelling observer (CHO) was calculated for each lesion size and location combination to evaluate the lesion detection performance. The bias versus standard deviation trade-off of each lesion uptake was also calculated to evaluate the quantification performance. The resulting CHO-SNR measurements showed improved performance in lesion detection with better timing resolution. The detection performance was also dependent on the lesion size and location, in addition to the background object size and shape. The results of bias versus noise trade-off showed that the noise (standard deviation) reduction ratio was about 1.1-1.3 over the TOF 500 ps and 1.5-1.9 over the non-TOF modes, similar to the SNR gains for lesion detection. In conclusion, this Tachyon-I PET study demonstrated the benefit of improved time-of-flight capability on lesion detection and ROI quantification for both phantom and human subjects.
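
    A channelized Hotelling observer SNR of the kind used above can be sketched in a few lines. The version below uses synthetic 32x32 patches, difference-of-Gaussians channels and a pooled channel covariance; all of these choices (and the lesion model) are assumptions, and bootstrap resampling of the patches would be needed to attach error bars as in the study.

      # Hedged CHO-SNR sketch on synthetic lesion-present/absent patches.
      import numpy as np

      rng = np.random.default_rng(1)
      n, size = 200, 32
      yy, xx = np.mgrid[:size, :size]
      r = np.hypot(xx - size / 2, yy - size / 2)

      sigmas = [1, 2, 4, 8]                                    # difference-of-Gaussians channels
      channels = np.stack([np.exp(-r**2 / (2 * (1.66 * s)**2)) - np.exp(-r**2 / (2 * s**2))
                           for s in sigmas]).reshape(len(sigmas), -1)

      lesion = 0.8 * np.exp(-r**2 / (2 * 2.0**2)).ravel()      # assumed signal profile
      absent = rng.normal(0, 1, (n, size * size))
      present = rng.normal(0, 1, (n, size * size)) + lesion

      v_a, v_p = absent @ channels.T, present @ channels.T     # channel outputs
      dv = v_p.mean(0) - v_a.mean(0)
      S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))                # pooled channel covariance
      snr = np.sqrt(dv @ np.linalg.solve(S, dv))               # CHO detectability index
      print("CHO SNR:", round(float(snr), 2))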

  5. Quantification of Flavin-containing Monooxygenases 1, 3, and 5 in Human Liver Microsomes by UPLC-MRM-Based Targeted Quantitative Proteomics and Its Application to the Study of Ontogeny.

    PubMed

    Chen, Yao; Zane, Nicole R; Thakker, Dhiren R; Wang, Michael Zhuo

    2016-07-01

    Flavin-containing monooxygenases (FMOs) have a significant role in the metabolism of small molecule pharmaceuticals. Among the five human FMOs, FMO1, FMO3, and FMO5 are the most relevant to hepatic drug metabolism. Although age-dependent hepatic protein expression, based on immunoquantification, has been reported previously for FMO1 and FMO3, there is very little information on hepatic FMO5 protein expression. To overcome the limitations of immunoquantification, an ultra-performance liquid chromatography (UPLC)-multiple reaction monitoring (MRM)-based targeted quantitative proteomic method was developed and optimized for the quantification of FMO1, FMO3, and FMO5 in human liver microsomes (HLM). A post-in silico product ion screening process was incorporated to verify LC-MRM detection of potential signature peptides before their synthesis. The developed method was validated by correlating marker substrate activity and protein expression in a panel of adult individual donor HLM (age 39-67 years). The mean (range) protein expression of FMO3 and FMO5 was 46 (26-65) pmol/mg HLM protein and 27 (11.5-49) pmol/mg HLM protein, respectively. To demonstrate quantification of FMO1, a panel of fetal individual donor HLM (gestational age 14-20 weeks) was analyzed. The mean (range) FMO1 protein expression was 7.0 (4.9-9.7) pmol/mg HLM protein. Furthermore, the ontogenetic protein expression of FMO5 was evaluated in fetal, pediatric, and adult HLM. The quantification of FMO proteins also was compared using two different calibration standards, recombinant proteins versus synthetic signature peptides, to assess the ratio between holoprotein versus total protein. In conclusion, a UPLC-MRM-based targeted quantitative proteomic method has been developed for the quantification of FMO enzymes in HLM. Copyright © 2016 by The American Society for Pharmacology and Experimental Therapeutics.

  6. Quantification of Flavin-containing Monooxygenases 1, 3, and 5 in Human Liver Microsomes by UPLC-MRM-Based Targeted Quantitative Proteomics and Its Application to the Study of Ontogeny

    PubMed Central

    Chen, Yao; Zane, Nicole R.; Thakker, Dhiren R.

    2016-01-01

    Flavin-containing monooxygenases (FMOs) have a significant role in the metabolism of small molecule pharmaceuticals. Among the five human FMOs, FMO1, FMO3, and FMO5 are the most relevant to hepatic drug metabolism. Although age-dependent hepatic protein expression, based on immunoquantification, has been reported previously for FMO1 and FMO3, there is very little information on hepatic FMO5 protein expression. To overcome the limitations of immunoquantification, an ultra-performance liquid chromatography (UPLC)-multiple reaction monitoring (MRM)-based targeted quantitative proteomic method was developed and optimized for the quantification of FMO1, FMO3, and FMO5 in human liver microsomes (HLM). A post-in silico product ion screening process was incorporated to verify LC-MRM detection of potential signature peptides before their synthesis. The developed method was validated by correlating marker substrate activity and protein expression in a panel of adult individual donor HLM (age 39–67 years). The mean (range) protein expression of FMO3 and FMO5 was 46 (26–65) pmol/mg HLM protein and 27 (11.5–49) pmol/mg HLM protein, respectively. To demonstrate quantification of FMO1, a panel of fetal individual donor HLM (gestational age 14–20 weeks) was analyzed. The mean (range) FMO1 protein expression was 7.0 (4.9–9.7) pmol/mg HLM protein. Furthermore, the ontogenetic protein expression of FMO5 was evaluated in fetal, pediatric, and adult HLM. The quantification of FMO proteins also was compared using two different calibration standards, recombinant proteins versus synthetic signature peptides, to assess the ratio between holoprotein versus total protein. In conclusion, a UPLC-MRM-based targeted quantitative proteomic method has been developed for the quantification of FMO enzymes in HLM. PMID:26839369

  7. Lesion detection and quantification performance of the Tachyon-I time-of-flight PET scanner: phantom and human studies

    NASA Astrophysics Data System (ADS)

    Zhang, Xuezhu; Peng, Qiyu; Zhou, Jian; Huber, Jennifer S.; Moses, William W.; Qi, Jinyi

    2018-03-01

    The first generation Tachyon PET (Tachyon-I) is a demonstration single-ring PET scanner that reaches a coincidence timing resolution of 314 ps using LSO scintillator crystals coupled to conventional photomultiplier tubes. The objective of this study was to quantify the improvement in both lesion detection and quantification performance resulting from the improved time-of-flight (TOF) capability of the Tachyon-I scanner. We developed a quantitative TOF image reconstruction method for the Tachyon-I and evaluated its TOF gain for lesion detection and quantification. Scans of either a standard NEMA torso phantom or healthy volunteers were used as the normal background data. Separately scanned point source and sphere data were superimposed onto the phantom or human data after accounting for the object attenuation. We used the bootstrap method to generate multiple independent noisy datasets with and without a lesion present. The signal-to-noise ratio (SNR) of a channelized hotelling observer (CHO) was calculated for each lesion size and location combination to evaluate the lesion detection performance. The bias versus standard deviation trade-off of each lesion uptake was also calculated to evaluate the quantification performance. The resulting CHO-SNR measurements showed improved performance in lesion detection with better timing resolution. The detection performance was also dependent on the lesion size and location, in addition to the background object size and shape. The results of bias versus noise trade-off showed that the noise (standard deviation) reduction ratio was about 1.1–1.3 over the TOF 500 ps and 1.5–1.9 over the non-TOF modes, similar to the SNR gains for lesion detection. In conclusion, this Tachyon-I PET study demonstrated the benefit of improved time-of-flight capability on lesion detection and ROI quantification for both phantom and human subjects.

  8. SU-E-T-789: Validation of 3DVH Accuracy On Quantifying Delivery Errors Based On Clinical Relevant DVH Metrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, T; Kumaraswamy, L

    Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software's ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted), and (3) gantry angle shift error (3° uniform shift). 2D and 3D gamma evaluations were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation seemed to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans were high, the DVHs showed significant differences between the original and error-induced plans in both Eclipse and 3DVH analysis. Conclusion: The 3DVH software is shown to accurately quantify errors in delivered dose based on clinically relevant DVH metrics, errors that a conventional gamma-based pre-treatment QA might not necessarily detect.
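
    For readers unfamiliar with the gamma evaluation mentioned above, the brute-force sketch below computes a global gamma pass rate for a 1D dose profile. The 3%/3 mm criteria and the toy profiles are illustrative assumptions; clinical tools work on 2D/3D dose grids.

      # Rough 1D global gamma-index pass rate (criteria and profiles are made up).
      import numpy as np

      def gamma_pass_rate(x, dose_ref, dose_eval, dose_tol=0.03, dta_mm=3.0):
          d_norm = dose_tol * dose_ref.max()                  # global dose normalization
          gammas = []
          for xi, di in zip(x, dose_ref):
              g = np.sqrt(((x - xi) / dta_mm) ** 2 + ((dose_eval - di) / d_norm) ** 2)
              gammas.append(g.min())
          return 100 * np.mean(np.array(gammas) <= 1.0)

      x = np.linspace(0, 100, 201)                            # positions in mm
      ref = np.exp(-((x - 50) / 20) ** 2)                     # reference dose profile
      meas = 1.02 * np.exp(-((x - 51) / 20) ** 2)             # 2% hotter, shifted 1 mm
      print(gamma_pass_rate(x, ref, meas), "% of points pass 3%/3 mm")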

  9. Automatic cerebrospinal fluid segmentation in non-contrast CT images using a 3D convolutional network

    NASA Astrophysics Data System (ADS)

    Patel, Ajay; van de Leemput, Sil C.; Prokop, Mathias; van Ginneken, Bram; Manniesing, Rashindra

    2017-03-01

    Segmentation of anatomical structures is fundamental in the development of computer aided diagnosis systems for cerebral pathologies. Manual annotations are laborious, time consuming and subject to human error and observer variability. Accurate quantification of cerebrospinal fluid (CSF) can be employed as a morphometric measure for diagnosis and patient outcome prediction. However, segmenting CSF in non-contrast CT images is complicated by low soft tissue contrast and image noise. In this paper we propose a state-of-the-art method using a multi-scale three-dimensional (3D) fully convolutional neural network (CNN) to automatically segment all CSF within the cranial cavity. The method is trained on a small dataset comprised of four manually annotated cerebral CT images. Quantitative evaluation of a separate test dataset of four images shows a mean Dice similarity coefficient of 0.87 ± 0.01 and a mean absolute volume difference of 4.77 ± 2.70%. The average prediction time was 68 seconds. Our method allows for fast and fully automated 3D segmentation of cerebral CSF in non-contrast CT, and shows promising results despite a limited amount of training data.
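
    The two evaluation metrics quoted above are straightforward to compute from binary masks; the sketch below shows one plausible implementation with made-up arrays.

      # Dice similarity coefficient and absolute volume difference for binary masks.
      import numpy as np

      def dice(a, b):
          a, b = a.astype(bool), b.astype(bool)
          return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      def abs_volume_diff_pct(pred, ref):
          return 100 * abs(int(pred.sum()) - int(ref.sum())) / ref.sum()

      ref = np.zeros((40, 40, 40), dtype=bool); ref[10:30, 10:30, 10:30] = True
      pred = np.zeros_like(ref);                pred[11:30, 10:29, 10:30] = True
      print(round(dice(pred, ref), 3), round(abs_volume_diff_pct(pred, ref), 2))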

  10. Human errors and occupational injuries of older female workers in the residential healthcare facilities for the elderly.

    PubMed

    Kim, Jun Sik; Jeong, Byung Yong

    2018-05-03

    The study aimed to describe the characteristics of occupational injuries of female workers in residential healthcare facilities for the elderly and to analyze human errors as causes of accidents. From the national industrial accident compensation data, 506 injuries of female workers were analyzed by age and occupation. The results showed that medical service workers were the most frequently injured (54.1%), followed by social welfare workers (20.4%). Among the injured, 55.7% had <1 year of work experience, and 37.9% were ≥60 years old. Slips/falls were the most common type of accident (42.7%), and the proportion of workers injured by slips/falls increases with age. Among human errors, action errors were the primary cause, followed by perception errors and cognition errors. Moreover, the proportions of injuries attributable to perception errors and to action errors each increase with age. The findings of this study suggest that there is a need to design workplaces that accommodate the characteristics of older female workers.

  11. Quantitative PCR for genetic markers of human fecal pollution

    EPA Science Inventory

    Assessment of health risk and fecal bacteria loads associated with human fecal pollution requires reliable host-specific analytical methods and a rapid quantification approach. We report the development of quantitative PCR assays for enumeration of two recently described hum...

  12. Mass measurement errors of Fourier-transform mass spectrometry (FTMS): distribution, recalibration, and application.

    PubMed

    Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu

    2009-02-01

    The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphic user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate the LDSF can significantly improve the sensitivity of the result validation procedure.
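
    The quantities that the LDSF strategy manipulates, the precursor mass error in ppm and a statistical mass error tolerance estimated from high-confidence matches, can be sketched as below. The 95% window and the toy numbers are assumptions.

      # Precursor mass error (ppm) and a statistical MET window (illustrative only).
      import numpy as np

      def ppm_error(measured_mz, theoretical_mz):
          return 1e6 * (measured_mz - theoretical_mz) / theoretical_mz

      rng = np.random.default_rng(2)
      errors = rng.normal(loc=1.5, scale=2.0, size=5000)      # ppm errors of confident IDs

      mu, sd = errors.mean(), errors.std(ddof=1)
      met_low, met_high = mu - 1.96 * sd, mu + 1.96 * sd      # ~95% tolerance window

      candidate = ppm_error(1234.5702, 1234.5678)
      print(round(candidate, 2), met_low <= candidate <= met_high)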

  13. High-speed broadband nanomechanical property quantification and imaging of life science materials using atomic force microscope

    NASA Astrophysics Data System (ADS)

    Ren, Juan

    Nanoscale morphological characterization and mechanical properties quantification of soft and biological materials play an important role in areas ranging from nano-composite material synthesis and characterization, cellular mechanics to drug design. Frontier studies in these areas demand the coordination between nanoscale morphological evolution and mechanical behavior variations through simultaneous measurement of these two aspects of properties. Atomic force microscope (AFM) is very promising in achieving such simultaneous measurements at high-speed and broadband owing to its unique capability in applying force stimuli and then, measuring the response at specific locations in a physiologically friendly environment with pico-newton force and nanometer spatial resolution. Challenges, however, arise as current AFM systems are unable to account for the complex and coupled dynamics of the measurement system and probe-sample interaction during high-speed imaging and broadband measurements. In this dissertation, the creation of a set of dynamics and control tools to probe-based high-speed imaging and rapid broadband nanomechanical spectroscopy of soft and biological materials are presented. Firstly, advanced control-based approaches are presented to improve the imaging performance of AFM imaging both in air and in liquid. An adaptive contact mode (ACM) imaging scheme is proposed to replace the traditional contact mode (CM) imaging by addressing the major concerns in both the speed and the force exerted to the sample. In this work, the image distortion caused by the topography tracking error is accounted for in the topography quantification and the quantified sample topography is utilized in a gradient-based optimization method to adjust the cantilever deflection set-point for each scanline closely around the minimal level needed for maintaining a stable probe-sample contact, and a data-driven iterative feedforward control that utilizes a prediction of the next-line tracking is implemented to enhance the sample topography tracking. An adaptive multi-loop mode (AMLM) imaging approach is proposed to substantially increase the imaging speed of tapping mode (TM) while preserving the advantages of TM over CM by integrating an inner-outer feedback control loop to regulate the TM-deflection on top of the conventional TM-amplitude feedback control to improve the sample topography tracking. Experiments demonstrated that the proposed ACM and AMLM are capable of increasing the imaging speed by at least 20 times for conventional contact and tapping mode imaging, respectively, with no loss of imaging quality and well controlled tip-sample interaction force. In addition, an adaptive mode imaging for in-liquid topography quantification on live cells is presented. The experiment results demonstrated that instead of keeping constant scanning speed, the proposed speed optimization scheme is able to increase the imaging speed on live human prostate cancer cells by at least eight-fold with no loss of imaging quality. Secondly, control based approaches to accurate nanomechanical quantification on soft materials for both broadband and in-liquid force-curve measurements are proposed to address the adverse effects caused by the system coupling dynamics and the cantilever acceleration, which were not compensated for by the conventional AFM measurement approach. 
The proposed nanomechanical measurement approaches are demonstrated through experiments to measure the viscoelastic properties of different polymer samples in air and live human cells in liquid to study the variation of rate-dependent elastic modulus of cervix cancer cell during the epithelial-mesenchymal transition process.

  14. Quantification of Ethanol's Anti-Punishment Effect in Humans Using the Generalized Matching Equation

    ERIC Educational Resources Information Center

    Rasmussen, Erin B.; Newland, M. Christopher

    2009-01-01

    Increases in rates of punished behavior by the administration of drugs with anxiolytic effects (called antipunishment effects) are well established in animals but not humans. The present study examined antipunishment effects of ethanol in humans using a choice procedure. The behavior of 5 participants was placed under six concurrent…

  15. The "Discernment of a Human Face" in Research: A Reaction to Van Hesteren's Human Science and Counselling.

    ERIC Educational Resources Information Center

    Knowles, Don

    1986-01-01

    Responds to Van Hesteren's advocacy of a "human science" orientation, using constructs and analytical methods from phenomenology to enhance methodology. Approves of a philosophy of science and of a self-perspective. Questions the need for complex terminology, Van Hesteren's definition of what constitutes research, quantification, and…

  16. 21 CFR 530.24 - Procedure for announcing analytical methods for drug residue quantification.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ..., DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS EXTRALABEL DRUG USE IN ANIMALS Specific Provisions Relating to Extralabel Use of Animal and Human Drugs in Food... approved human drug or an approved animal drug. The agency will publish in the Federal Register a notice of...

  17. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Crevillén-García, D.; Power, H.

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
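
    The contrast between plain Monte Carlo and quasi-Monte Carlo sampling can be illustrated with a one-parameter stand-in for the travel-time problem: a log-Gaussian conductivity reduced to a single Gaussian variable rather than a full Karhunen-Loève field. The model and constants below are assumptions.

      # Toy MC vs quasi-MC (Sobol') comparison for a travel-time-like mean.
      import numpy as np
      from scipy.stats import norm, qmc

      MU, SIGMA, LENGTH, GRADIENT = -2.0, 1.0, 1.0, 0.01

      def travel_time(z):
          k = np.exp(MU + SIGMA * z)          # log-Gaussian hydraulic conductivity
          return LENGTH / (k * GRADIENT)      # Darcy-like travel time

      n = 2 ** 12
      z_mc = np.random.default_rng(3).standard_normal(n)
      z_qmc = norm.ppf(qmc.Sobol(d=1, scramble=True, seed=3).random(n).ravel())

      exact = np.exp(-MU + SIGMA**2 / 2) * LENGTH / GRADIENT   # analytic mean travel time
      print("MC  error:", abs(travel_time(z_mc).mean() - exact))
      print("QMC error:", abs(travel_time(z_qmc).mean() - exact))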

  18. Identification and quantification of nitrofurazone metabolites by ultraperformance liquid chromatography-quadrupole time-of-flight high-resolution mass spectrometry with precolumn derivatization.

    PubMed

    Zhang, Shuai; Li, PeiPei; Yan, Zhongyong; Long, Ju; Zhang, Xiaojun

    2017-03-01

    An ultraperformance liquid chromatography-quadrupole time-of-flight high-resolution mass spectrometry method was developed and validated for the determination of nitrofurazone metabolites. Precolumn derivatization with 2,4-dinitrophenylhydrazine and p-dimethylaminobenzaldehyde as an internal standard was used successfully to determine the biomarker 5-nitro-2-furaldehyde. In negative electrospray ionization mode, the precise molecular weights of the derivatives were 320.0372 for the biomarker and 328.1060 for the internal standard (relative error 1.08 ppm). The matrix effect was evaluated and the analytical characteristics of the method and derivatization reaction conditions were validated. For comparison purposes, spiked samples were tested by both internal and external standard methods. The results show that high precision can be obtained with p-dimethylaminobenzaldehyde as an internal standard for the identification and quantification of nitrofurazone metabolites in complex biological samples. Graphical abstract: a simplified preparation strategy for biological samples.

  19. A comparative study of the use of powder X-ray diffraction, Raman and near infrared spectroscopy for quantification of binary polymorphic mixtures of piracetam.

    PubMed

    Croker, Denise M; Hennigan, Michelle C; Maher, Anthony; Hu, Yun; Ryder, Alan G; Hodnett, Benjamin K

    2012-04-07

    Diffraction and spectroscopic methods were evaluated for quantitative analysis of binary powder mixtures of FII(6.403) and FIII(6.525) piracetam. The two polymorphs of piracetam could be distinguished using powder X-ray diffraction (PXRD), Raman and near-infrared (NIR) spectroscopy. The results demonstrated that Raman and NIR spectroscopy are most suitable for quantitative analysis of this polymorphic mixture. When the spectra are treated with the combination of multiplicative scatter correction (MSC) and second derivative data pretreatments, the partial least squares (PLS) regression models gave root mean square errors of calibration (RMSEC) of 0.94% and 0.99%, respectively. FIII(6.525) demonstrated some preferred orientation in PXRD analysis, making PXRD the least preferred method of quantification. Copyright © 2012 Elsevier B.V. All rights reserved.
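
    The spectral pretreatments named above (multiplicative scatter correction followed by a second derivative) can be sketched as below before feeding the spectra to a PLS model; the window and polynomial settings are assumptions, not the paper's values.

      # MSC followed by a Savitzky-Golay second derivative (settings assumed).
      import numpy as np
      from scipy.signal import savgol_filter

      def msc(spectra, reference=None):
          """Multiplicative scatter correction against the mean (or given) spectrum."""
          ref = spectra.mean(axis=0) if reference is None else reference
          corrected = np.empty_like(spectra)
          for i, s in enumerate(spectra):
              slope, intercept = np.polyfit(ref, s, 1)
              corrected[i] = (s - intercept) / slope
          return corrected

      rng = np.random.default_rng(4)
      raw = rng.normal(1.0, 0.05, (10, 700)) * np.linspace(0.8, 1.2, 10)[:, None]
      pretreated = savgol_filter(msc(raw), window_length=15, polyorder=2, deriv=2, axis=1)
      print(pretreated.shape)      # rows are now ready for PLS regression against % FII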

  20. Raman spectroscopy for DNA quantification in cell nucleus.

    PubMed

    Okotrub, K A; Surovtsev, N V; Semeshin, V F; Omelyanchuk, L V

    2015-01-01

    Here we demonstrate the feasibility of a novel approach to quantify DNA in cell nuclei. This approach is based on spectroscopic analysis of Raman light scattering, and avoids the problem of nonstoichiometric binding of dyes to DNA, as it directly measures the signal from DNA. Quantitative analysis of the nuclear DNA contribution to the Raman spectrum could be reliably performed using the intensity of the phosphate mode at 1096 cm(-1). When compared to the known DNA standards from cells of different animals, our results matched those values within an error of 10%. We therefore suggest that this approach will be useful to expand the list of DNA standards, to properly adjust the duration of hydrolysis in Feulgen staining, to assay the applicability of fuchsines for DNA quantification, as well as to measure DNA content in cells with complex hydrolysis patterns, when Feulgen densitometry is inappropriate. © 2014 International Society for Advancement of Cytometry.

  1. On a PLIF quantification methodology in a nonlinear dye response regime

    NASA Astrophysics Data System (ADS)

    Baj, P.; Bruce, P. J. K.; Buxton, O. R. H.

    2016-06-01

    A new technique of planar laser-induced fluorescence calibration is presented in this work. It accounts for the nonlinear dye response at high concentrations, attenuation of the illumination light and, in particular, the influence of secondary fluorescence. An analytical approximation of a generic solution of the Beer-Lambert law is provided and utilized for effective concentration evaluation. These features make the technique particularly well suited for high-concentration measurements, or those with a large range of concentration values, c, present (i.e. a high dynamic range of c). The method is applied to data gathered in a water flume experiment where a stream of a fluorescent dye (rhodamine 6G) was released into a grid-generated turbulent flow. Based on these results, it is shown that the illumination attenuation and the secondary fluorescence introduce a significant error into the data quantification (up to 15% and 80%, respectively, for the case considered in this work) unless properly accounted for.
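
    The attenuation part of the correction can be pictured with a simple marching scheme along one laser ray: the local illumination is updated from the concentrations already recovered upstream via the Beer-Lambert law. This is an illustrative sketch with a linear dye response and made-up constants, not the paper's nonlinear calibration.

      # Beer-Lambert attenuation correction along one PLIF ray (constants made up).
      import numpy as np

      EPS, K, DX, I0 = 0.15, 1.0, 0.01, 1.0           # absorptivity, gain, step (m), source

      c_true = 0.5 + 0.5 * np.sin(np.linspace(0, np.pi, 200))        # synthetic dye field
      attn = np.exp(-EPS * DX * (np.cumsum(c_true) - c_true))        # cells upstream only
      F = K * (I0 * attn) * c_true                                   # "measured" fluorescence

      c_naive = F / (K * I0)                          # ignores attenuation

      c_corr, I_est = np.empty_like(F), I0            # march along the ray
      for i, f in enumerate(F):
          c_corr[i] = f / (K * I_est)
          I_est *= np.exp(-EPS * c_corr[i] * DX)

      print(np.abs(c_naive - c_true).max(), np.abs(c_corr - c_true).max())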

  2. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media.

    PubMed

    Crevillén-García, D; Power, H

    2017-08-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.

  3. Multilevel and quasi-Monte Carlo methods for uncertainty quantification in particle travel times through random heterogeneous porous media

    PubMed Central

    Power, H.

    2017-01-01

    In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen–Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error. PMID:28878974

  4. Detailed Uncertainty Analysis of the ZEM-3 Measurement System

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    The measurement of the Seebeck coefficient and electrical resistivity is critical to the investigation of all thermoelectric systems. It follows that the measurement uncertainty must be well understood in order to report ZT values that are accurate and trustworthy. A detailed uncertainty analysis of the ZEM-3 measurement system has been performed. The uncertainty analysis calculates error in the electrical resistivity measurement as a result of sample geometry tolerance, probe geometry tolerance, statistical error, and multi-meter uncertainty. The uncertainty in the Seebeck coefficient includes probe wire correction factors, statistical error, multi-meter uncertainty, and most importantly the cold-finger effect. The cold-finger effect plagues all potentiometric (four-probe) Seebeck measurement systems, as heat parasitically transfers through the thermocouple probes. The effect leads to an asymmetric over-estimation of the Seebeck coefficient. A thermal finite element analysis allows for quantification of the phenomenon and provides an estimate of the uncertainty of the Seebeck coefficient. The thermoelectric power factor has been found to have an uncertainty of +9-14 at high temperature and 9 near room temperature.
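
    Once the individual contributions are estimated, independent relative uncertainties are commonly combined in quadrature; the sketch below does this for the power factor S^2/rho with placeholder component values (they are not the ZEM-3 figures).

      # Quadrature combination of relative uncertainties for the power factor S^2/rho.
      import math

      u_seebeck = 0.05      # relative uncertainty of S (cold finger, wires, statistics, ...)
      u_resistivity = 0.03  # relative uncertainty of rho (geometry, multi-meter, ...)

      u_power_factor = math.sqrt((2 * u_seebeck) ** 2 + u_resistivity ** 2)
      print(f"power factor relative uncertainty ~ {100 * u_power_factor:.1f} %")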

  5. Quantification of evaporation induced error in atom probe tomography using molecular dynamics simulation.

    PubMed

    Chen, Shu Jian; Yao, Xupei; Zheng, Changxi; Duan, Wen Hui

    2017-11-01

    Non-equilibrium molecular dynamics was used to simulate the dynamics of atoms at the atom probe surface and five objective functions were used to quantify errors. The results suggested that before ionization, thermal vibration and collision caused the atoms to displace up to 1 Å and 25 Å, respectively. The average atom displacements were found to vary between 0.2 and 0.5 Å. About 9 to 17% of the atoms were affected by collision. Due to the effects of collision and ion-ion repulsion, the back-calculated positions were on average 0.3-0.5 Å different from the pre-ionized positions of the atoms when the number of ions generated per pulse was minimal. This difference could increase up to 8-10 Å when 1.5 ions/nm(2) were evaporated per pulse. On the basis of the results, surface ion density was considered an important factor that needed to be controlled to minimize error in the evaporation process. Copyright © 2017. Published by Elsevier B.V.

  6. [Risk and risk management in aviation].

    PubMed

    Müller, Manfred

    2004-10-01

    RISK MANAGEMENT: The large proportion of human errors in aviation accidents suggested an apparently brilliant solution: replace the fallible human being with an "infallible", digitally operating computer. However, even after the introduction of the so-called HITEC airplanes, human error still accounts for 75% of all accidents. Thus, if the computer is ruled out as the ultimate safety system, how else can complex operations involving quick and difficult decisions be controlled? OPTIMIZED TEAM INTERACTION/PARALLEL CONNECTION OF THOUGHT MACHINES: Since a single person is always "highly error-prone", support and control have to be guaranteed by a second person. The independent thinking of two minds results in a safety net that cushions human errors more effectively. NON-PUNITIVE ERROR MANAGEMENT: To be able to tackle the actual problems, open discussion of errors that have occurred must not be endangered by the threat of punishment. It has been shown in the past that progress is primarily achieved by investigating and following up on mistakes, failures and catastrophes shortly after they happen. HUMAN FACTOR RESEARCH PROJECT: A comprehensive survey showed the following result: by far the most frequent safety-critical situation (37.8% of all events) consists of the following combination of risk factors: (1) a complication develops; (2) in this situation of increased stress a human error occurs; (3) the negative effects of the error cannot be corrected or eased because of deficiencies in team interaction on the flight deck. This means, for example, that a negative social climate acts like a "turbocharger" when a human error occurs. It needs to be pointed out that a negative social climate is not the same as an open dispute. In many cases the working climate is burdened without the responsible person even noticing it: a first negative impression, too much or too little respect, contempt, misunderstandings, unvoiced concerns, etc. can considerably reduce the efficiency of a team.

  7. Developing and Implementing the Data Mining Algorithms in RAVEN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data, and post-processing and analyzing such data might, in some cases, take longer than the initial software runtime. Data mining algorithms and methods help in recognizing and understanding patterns in the data, and thus in discovering knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics code models the dynamics deterministically. A typical analysis is performed by sampling a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand the data, i.e., to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
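
    In the spirit of the report, the sketch below clusters a synthetic table of scenario outcomes so that patterns in a large probabilistic run set are easier to inspect; the two features and the cluster count are assumptions, and RAVEN's own API is not used here.

      # Clustering sampled scenario outcomes (synthetic stand-in data).
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(5)
      # e.g. columns: peak temperature (K) and time of peak (s) for 300 sampled runs
      scenarios = np.vstack([rng.normal([1100, 250], [30, 15], (150, 2)),
                             rng.normal([1350, 180], [40, 10], (150, 2))])

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scenarios)
      for k in range(2):
          members = scenarios[labels == k]
          print(f"cluster {k}: {len(members)} runs, mean = {members.mean(axis=0).round(1)}")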

  8. Imaging-based quantification of hepatic fat: methods and clinical applications.

    PubMed

    Ma, Xiaozhou; Holalkere, Nagaraj-Setty; Kambadakone R, Avinash; Mino-Kenudson, Mari; Hahn, Peter F; Sahani, Dushyant V

    2009-01-01

    Fatty liver disease comprises a spectrum of conditions (simple hepatic steatosis, steatohepatitis with inflammatory changes, and end-stage liver disease with fibrosis and cirrhosis). Hepatic steatosis is often associated with diabetes and obesity and may be secondary to alcohol and drug use, toxins, viral infections, and metabolic diseases. Detection and quantification of liver fat have many clinical applications, and early recognition is crucial to institute appropriate management and prevent progression. Histopathologic analysis is the reference standard to detect and quantify fat in the liver, but results are vulnerable to sampling error. Moreover, it can cause morbidity and complications and cannot be repeated often enough to monitor treatment response. Imaging can be repeated regularly and allows assessment of the entire liver, thus avoiding sampling error. Selection of appropriate imaging methods demands understanding of their advantages and limitations and the suitable clinical setting. Ultrasonography is effective for detecting moderate or severe fatty infiltration but is limited by lack of interobserver reliability and intraobserver reproducibility. Computed tomography allows quantitative and qualitative evaluation and is generally highly accurate and reliable; however, the results may be confounded by hepatic parenchymal changes due to cirrhosis or depositional diseases. Magnetic resonance (MR) imaging with appropriate sequences (eg, chemical shift techniques) has similarly high sensitivity, and MR spectroscopy provides unique advantages for some applications. However, both are expensive and too complex to be used to monitor steatosis. (c) RSNA, 2009.

  9. Analysis of pork adulteration in beef meatball using Fourier transform infrared (FTIR) spectroscopy.

    PubMed

    Rohman, A; Sismindari; Erwanto, Y; Che Man, Yaakob B

    2011-05-01

    Meatballs are among the favorite foods in Indonesia, and adulteration of beef meatballs with pork occurs frequently. This study aimed to develop a fast and non-destructive technique for the detection and quantification of pork in beef meatballs using Fourier transform infrared (FTIR) spectroscopy and partial least squares (PLS) calibration. The spectral bands associated with pork fat (PF), beef fat (BF), and their mixtures in meatball formulations were scanned, interpreted, and identified by relating them to those spectroscopically representative of pure PF and BF. For quantitative analysis, PLS regression was used to develop a calibration model in the selected fingerprint region of 1200-1000 cm(-1). The equation obtained for the relationship between the actual PF values and the FTIR-predicted values in the PLS calibration model was y = 0.999x + 0.004, with a coefficient of determination (R(2)) of 0.999 and a root mean square error of calibration of 0.442. The PLS calibration model was subsequently used for the prediction of independent, laboratory-made meatball samples containing mixtures of BF and PF. Using four principal components, the root mean square error of prediction was 0.742. The results showed that FTIR spectroscopy can be used for the detection and quantification of pork in beef meatball formulations for Halal verification purposes. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.

  10. Competition between learned reward and error outcome predictions in anterior cingulate cortex.

    PubMed

    Alexander, William H; Brown, Joshua W

    2010-02-15

    The anterior cingulate cortex (ACC) is implicated in performance monitoring and cognitive control. Non-human primate studies of ACC show prominent reward signals, but these are elusive in human studies, which instead show mainly conflict and error effects. Here we demonstrate distinct appetitive and aversive activity in human ACC. The error likelihood hypothesis suggests that ACC activity increases in proportion to the likelihood of an error, and ACC is also sensitive to the consequence magnitude of the predicted error. Previous work further showed that error likelihood effects reach a ceiling as the potential consequences of an error increase, possibly due to reductions in the average reward. We explored this issue by independently manipulating reward magnitude of task responses and error likelihood while controlling for potential error consequences in an Incentive Change Signal Task. The fMRI results ruled out a modulatory effect of expected reward on error likelihood effects in favor of a competition effect between expected reward and error likelihood. Dynamic causal modeling showed that error likelihood and expected reward signals are intrinsic to the ACC rather than received from elsewhere. These findings agree with interpretations of ACC activity as signaling both perceptions of risk and predicted reward. Copyright 2009 Elsevier Inc. All rights reserved.

  11. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers’ test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
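
    The selection idea can be pictured with a toy criterion: keep a snapshot only when the current basis reconstructs the new state worse than a tolerance. The estimator below is just the relative projection error and the travelling-pulse "solution" is synthetic, so this is a sketch of the idea rather than the paper's incremental algorithm.

      # Error-controlled snapshot selection for a POD basis (toy criterion).
      import numpy as np

      def maybe_add_snapshot(basis, u, tol):
          if basis is None:
              return u[:, None] / np.linalg.norm(u)
          residual = u - basis @ (basis.T @ u)                # basis kept orthonormal
          if np.linalg.norm(residual) > tol * np.linalg.norm(u):
              basis = np.hstack([basis, residual[:, None] / np.linalg.norm(residual)])
          return basis

      x = np.linspace(0, 1, 400)
      basis = None
      for t in np.linspace(0, 2, 200):                        # fake time marching
          u = np.exp(-100 * (x - 0.2 - 0.3 * t) ** 2)         # travelling pulse "solution"
          basis = maybe_add_snapshot(basis, u, tol=1e-2)
      print("snapshots kept:", basis.shape[1], "of 200 steps")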

  12. Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?

    PubMed

    Taylor, Jonathan Christopher; Fenner, John Wesley

    2017-11-29

    Semi-quantification methods are well established in the clinic for assisted reporting of (I123) Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. The machine learning algorithms were based on support vector machine classifiers with three different sets of features: (i) voxel intensities, (ii) principal components of image voxel intensities, and (iii) striatal binding ratios from the putamen and caudate. The semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (i) the minimum of age-matched controls; (ii) the mean minus 1/1.5/2 standard deviations of age-matched controls; (iii) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); and (iv) selection of the optimum operating point on the receiver operating characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92 for local data and between 0.95 and 0.97 for PPMI data. Classification performance was lower for the local database than for the research database for both semi-quantitative and machine learning algorithms. However, for both databases, the machine learning methods generated equal or higher mean accuracies (with lower variance) than any of the semi-quantification approaches. The gain in performance from using machine learning algorithms as compared to semi-quantification was relatively small and may be insufficient, when considered in isolation, to offer significant advantages in the clinical context.
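
    A minimal version of the machine-learning arm, an SVM on striatal binding ratios evaluated with stratified, repeated 10-fold cross-validation, might look like the sketch below. The four SBR features and group means are randomly generated stand-ins, not PPMI or local data.

      # SVM on synthetic SBR features with repeated stratified 10-fold cross-validation.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

      rng = np.random.default_rng(6)
      # columns: left/right putamen and caudate SBRs; "PD-like" cases lose putamen signal
      healthy = rng.normal([2.0, 2.0, 2.8, 2.8], 0.3, (100, 4))
      pd_like = rng.normal([1.0, 1.1, 2.3, 2.3], 0.3, (100, 4))
      X, y = np.vstack([healthy, pd_like]), np.r_[np.zeros(100), np.ones(100)]

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
      scores = cross_val_score(clf, X, y, cv=cv)
      print(f"mean accuracy {scores.mean():.3f} +/- {scores.std():.3f}")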

  13. Temporal uncertainty analysis of human errors based on interrelationships among multiple factors: a case of Minuteman III missile accident.

    PubMed

    Rong, Hao; Tian, Jin; Zhao, Tingdi

    2016-01-01

    In traditional approaches to human reliability assessment (HRA), the definition of the error-producing conditions (EPCs) and the supporting guidance are such that some of the conditions (especially organizational or managerial conditions) can hardly be included; the analysis is therefore incomplete and does not reflect the temporal trend of human reliability. A method based on system dynamics (SD), which highlights interrelationships among technical and organizational aspects that may contribute to human errors, is presented to facilitate quantitative estimation of the human error probability (HEP) and its related variables as they change over a long period of time. Taking the Minuteman III missile accident of 2008 as a case, the proposed HRA method is applied to assess HEP during missile operations over 50 years by analyzing the interactions among the variables involved in human-related risks; the critical factors are also determined in terms of the impact that the variables have on risk in different time periods. It is indicated that both technical and organizational aspects should be focused on to minimize human errors in the long run. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
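
    The flavour of a system-dynamics HEP model can be conveyed with two coupled "stocks" integrated over time; the structure, coefficients and the HEP relationship below are illustrative assumptions and not the paper's calibrated model.

      # Toy system-dynamics-style evolution of a human error probability over 50 years.
      import numpy as np

      dt = 1.0                                     # years
      competence, condition = 1.0, 1.0             # normalized stocks at year 0
      training, aging, maintenance = 0.02, 0.03, 0.015

      hep = []
      for year in range(51):
          hep.append(1e-3 / (competence * condition))              # assumed HEP relationship
          competence += dt * (training - 0.025 * (1 - condition))  # eroded by poor equipment
          condition += dt * (maintenance - aging)
          competence, condition = np.clip([competence, condition], 0.2, 1.5)

      print(f"HEP year 0: {hep[0]:.2e}, year 25: {hep[25]:.2e}, year 50: {hep[50]:.2e}")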

  14. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    PubMed

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  15. Large-Scale Uncertainty and Error Analysis for Time-dependent Fluid/Structure Interactions in Wind Turbine Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alonso, Juan J.; Iaccarino, Gianluca

    2013-08-25

    The following is the final report covering the entire period of this aforementioned grant, June 1, 2011 - May 31, 2013 for the portion of the effort corresponding to Stanford University (SU). SU has partnered with Sandia National Laboratories (PI: Mike S. Eldred) and Purdue University (PI: Dongbin Xiu) to complete this research project and this final report includes those contributions made by the members of the team at Stanford. Dr. Eldred is continuing his contributions to this project under a no-cost extension and his contributions to the overall effort will be detailed at a later time (once his effort has concluded) on a separate project submitted by Sandia National Laboratories. At Stanford, the team is made up of Profs. Alonso, Iaccarino, and Duraisamy, post-doctoral researcher Vinod Lakshminarayan, and graduate student Santiago Padron. At Sandia National Laboratories, the team includes Michael Eldred, Matt Barone, John Jakeman, and Stefan Domino, and at Purdue University, we have Prof. Dongbin Xiu as our main collaborator. The overall objective of this project was to develop a novel, comprehensive methodology for uncertainty quantification by combining stochastic expansions (nonintrusive polynomial chaos and stochastic collocation), the adjoint approach, and fusion with experimental data to account for aleatory and epistemic uncertainties from random variable, random field, and model form sources. The expected outcomes of this activity were detailed in the proposal and are repeated here to set the stage for the results that we have generated during the time period of execution of this project: 1. The rigorous determination of an error budget comprising numerical errors in physical space and statistical errors in stochastic space and its use for optimal allocation of resources; 2. A considerable increase in efficiency when performing uncertainty quantification with a large number of uncertain variables in complex non-linear multi-physics problems; 3. A solution to the long-time integration problem of spectral chaos approaches; 4. A rigorous methodology to account for aleatory and epistemic uncertainties, to emphasize the most important variables via dimension reduction and dimension-adaptive refinement, and to support fusion with experimental data using Bayesian inference; 5. The application of novel methodologies to time-dependent reliability studies in wind turbine applications including a number of efforts relating to the uncertainty quantification in vertical-axis wind turbine applications. In this report, we summarize all accomplishments in the project (during the time period specified) focusing on advances in UQ algorithms and deployment efforts to the wind turbine application area. Detailed publications in each of these areas have also been completed and are available from the respective conference proceedings and journals as detailed in a later section.
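
    One building block mentioned above, non-intrusive stochastic collocation, can be sketched with a Gauss-Hermite quadrature rule: the model is evaluated at a small set of nodes and weighted sums give the output mean and variance. The placeholder model and single Gaussian input below are assumptions for illustration; they are not the project's wind-turbine solvers.

    ```python
    # Minimal sketch of non-intrusive stochastic collocation for one Gaussian input.
    import numpy as np

    def model(x):
        # hypothetical nonlinear response to the uncertain parameter x
        return np.sin(x) + 0.1 * x**2

    def collocation_moments(n_nodes=10, mu=0.0, sigma=1.0):
        # nodes/weights for weight exp(-t^2); change of variables x = mu + sqrt(2)*sigma*t
        t, w = np.polynomial.hermite.hermgauss(n_nodes)
        x = mu + np.sqrt(2.0) * sigma * t
        w = w / np.sqrt(np.pi)          # weights now sum to 1 (Gaussian measure)
        f = model(x)
        mean = np.sum(w * f)
        var = np.sum(w * f**2) - mean**2
        return mean, var

    if __name__ == "__main__":
        mean, var = collocation_moments()
        print(f"quadrature: mean ≈ {mean:.4f}, variance ≈ {var:.4f}")
        # cross-check against brute-force Monte Carlo
        x = np.random.default_rng(0).normal(size=200_000)
        print(f"Monte Carlo: mean ≈ {model(x).mean():.4f}, variance ≈ {model(x).var():.4f}")
    ```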

  16. Breast density quantification with cone-beam CT: A post-mortem study

    PubMed Central

    Johnson, Travis; Ding, Huanjun; Le, Huy Q.; Ducote, Justin L.; Molloi, Sabee

    2014-01-01

    Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The percent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification with high linear coefficients between the right and left breast of each pair. When comparing with the gold standard using %FGV from chemical analysis, Pearson’s r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate (SEE) was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the postmortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. In the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. PMID:24254317
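
    The fuzzy c-means step can be sketched for one-dimensional voxel intensities: soft memberships and cluster centers are updated alternately, and %FGV is read off from the memberships. The synthetic intensities, iteration count, and two-cluster setup below are assumptions for illustration, not the study's implementation.

    ```python
    # Minimal sketch of fuzzy c-means (FCM) segmentation of CT voxel intensities.
    import numpy as np

    def fcm_1d(values, n_clusters=2, m=2.0, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = rng.choice(values, size=n_clusters, replace=False).astype(float)
        for _ in range(n_iter):
            d = np.abs(values[:, None] - centers[None, :]) + 1e-12   # distances to centers
            # standard FCM membership update: u_ik = 1 / sum_j (d_ik/d_jk)^(2/(m-1))
            u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
            centers = (u ** m).T @ values / np.sum(u ** m, axis=0)   # weighted center update
        return centers, u

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # synthetic voxel intensities: 70% adipose-like, 30% fibroglandular-like
        voxels = np.concatenate([rng.normal(-100, 20, 7000), rng.normal(40, 20, 3000)])
        centers, u = fcm_1d(voxels)
        fg = np.argmax(centers)                 # cluster with the higher center
        labels = np.argmax(u, axis=1)
        pct_fgv = 100.0 * np.mean(labels == fg)
        print(f"centers = {np.sort(centers)}, %FGV ≈ {pct_fgv:.1f}%")
    ```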

  17. Overall Key Performance Indicator to Optimizing Operation of High-Pressure Homogenizers for a Reliable Quantification of Intracellular Components in Pichia pastoris.

    PubMed

    Garcia-Ortega, Xavier; Reyes, Cecilia; Montesinos, José Luis; Valero, Francisco

    2015-01-01

    The most commonly used cell disruption procedures may lack reproducibility, which introduces significant errors into the quantification of intracellular components. In this work, an approach consisting in the definition of an overall key performance indicator (KPI) was implemented for a lab-scale high-pressure homogenizer (HPH) in order to determine the disruption settings that allow the reliable quantification of a wide range of intracellular components. This innovative KPI was based on the combination of three independent reporting indicators: decrease of absorbance, release of total protein, and release of alkaline phosphatase activity. The yeast Pichia pastoris growing on methanol was selected as the model microorganism because it presents an important thickening of the cell wall, requiring more severe methods and operating conditions than Escherichia coli and Saccharomyces cerevisiae. From the outcome of the reporting indicators, the cell disruption efficiency achieved using HPH was about fourfold higher than that of other standard lab-scale cell disruption methodologies, such as bead milling or cell permeabilization. This approach was also applied to a pilot-plant-scale HPH, validating the methodology in a scale-up of the disruption process. This innovative yet simple approach for evaluating the efficacy of a disruption procedure or equipment can be easily applied to optimize the most common disruption processes, in order to achieve not only reliable quantification but also recovery of intracellular components from cell factories of interest.

  18. Overall Key Performance Indicator to Optimizing Operation of High-Pressure Homogenizers for a Reliable Quantification of Intracellular Components in Pichia pastoris

    PubMed Central

    Garcia-Ortega, Xavier; Reyes, Cecilia; Montesinos, José Luis; Valero, Francisco

    2015-01-01

    The most commonly used cell disruption procedures may lack reproducibility, which introduces significant errors into the quantification of intracellular components. In this work, an approach consisting in the definition of an overall key performance indicator (KPI) was implemented for a lab-scale high-pressure homogenizer (HPH) in order to determine the disruption settings that allow the reliable quantification of a wide range of intracellular components. This innovative KPI was based on the combination of three independent reporting indicators: decrease of absorbance, release of total protein, and release of alkaline phosphatase activity. The yeast Pichia pastoris growing on methanol was selected as the model microorganism because it presents an important thickening of the cell wall, requiring more severe methods and operating conditions than Escherichia coli and Saccharomyces cerevisiae. From the outcome of the reporting indicators, the cell disruption efficiency achieved using HPH was about fourfold higher than that of other standard lab-scale cell disruption methodologies, such as bead milling or cell permeabilization. This approach was also applied to a pilot-plant-scale HPH, validating the methodology in a scale-up of the disruption process. This innovative yet simple approach for evaluating the efficacy of a disruption procedure or equipment can be easily applied to optimize the most common disruption processes, in order to achieve not only reliable quantification but also recovery of intracellular components from cell factories of interest. PMID:26284241

  19. LoC-SERS toward clinical application: quantification of antibiotics in human urine samples

    NASA Astrophysics Data System (ADS)

    Hidi, I. J.; Jahn, M.; Weber, K.; Pletz, M. W.; Bocklitz, T. W.; Cialla-May, D.; Popp, J.

    2017-02-01

    The determination of the concentration of xenobiotics in biological matrices, followed by adjustment of the prescribing procedure, plays a major role in the transition from general to personalized medicine. For this contribution, human urine samples collected from healthy volunteers and from patients with urinary tract infection were used as the biological matrix to assess the potential and limitations of LoC-SERS to detect levofloxacin and nitroxoline. The determination of both antibiotics at clinically relevant concentrations, 1.38 mM +/- 0.68 mM for levofloxacin and 10-40 µM for nitroxoline, will be presented. For quantification purposes, the standard addition method is combined with LoC-SERS.
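
    The standard addition principle mentioned above can be sketched in a few lines: known amounts of analyte are spiked into aliquots of the sample, the signal is regressed against the added concentration, and the magnitude of the x-intercept estimates the concentration originally present. The numbers below are illustrative, not measured SERS intensities.

    ```python
    # Minimal sketch of quantification by standard addition in a complex matrix.
    import numpy as np

    # spiked (added) analyte concentrations in µM and the corresponding signals
    added = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
    signal = np.array([120.0, 178.0, 242.0, 301.0, 359.0])   # hypothetical intensities

    slope, intercept = np.polyfit(added, signal, 1)
    c_sample = intercept / slope       # magnitude of the x-intercept of the fitted line
    print(f"estimated sample concentration ≈ {c_sample:.2f} µM")
    ```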

  20. Headspace Solid Phase Micro Extraction Gas Chromatographic Determination of Fenthion in Human Serum

    PubMed Central

    Kasiotis, Konstantinos M.; Souki, Helen; Tsakirakis, Angelos N.; Carageorgiou, Haris; Theotokatos, Spiridon A.; Haroutounian, Serkos A.; Machera, Kyriaki

    2008-01-01

    A simple and effective analytical procedure was developed for the determination of fenthion residues in human serum samples. Sample treatment was performed using headspace solid-phase micro extraction with a polyacrylate fiber, which has the advantage of requiring only a small amount of serum (1 mL) without tedious pre-treatment. The quantification of fenthion was carried out by gas chromatography-mass spectrometry, and the recoveries ranged from 79 to 104% at two spiking levels for 6 replicates. Detection and quantification limits were calculated as 1.51 and 4.54 ng/mL of serum, respectively. Two fenthion metabolites, fenoxon and fenthion sulfoxide, were also identified. PMID:19325792
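
    The abstract does not state how the detection and quantification limits were derived; a common convention estimates them from the calibration line as LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation and S the slope. The sketch below assumes that convention and uses illustrative calibration data, not the study's measurements.

    ```python
    # Minimal sketch of LOD/LOQ estimation from a calibration line (assumed convention).
    import numpy as np

    conc = np.array([2.0, 5.0, 10.0, 20.0, 40.0])              # ng/mL spiked in serum
    response = np.array([0.041, 0.102, 0.198, 0.405, 0.801])   # hypothetical peak-area ratios

    slope, intercept = np.polyfit(conc, response, 1)
    residuals = response - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)              # ddof=2: two fitted parameters

    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    print(f"LOD ≈ {lod:.2f} ng/mL, LOQ ≈ {loq:.2f} ng/mL")
    ```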

  1. Chemometrics resolution and quantification power evaluation: Application on pharmaceutical quaternary mixture of Paracetamol, Guaifenesin, Phenylephrine and p-aminophenol

    NASA Astrophysics Data System (ADS)

    Yehia, Ali M.; Mohamed, Heba M.

    2016-01-01

    Three advanced chemometric-assisted spectrophotometric methods, namely Concentration Residuals Augmented Classical Least Squares (CRACLS), Multivariate Curve Resolution-Alternating Least Squares (MCR-ALS) and Principal Component Analysis-Artificial Neural Networks (PCA-ANN), were developed, validated and benchmarked against PLS calibration to resolve the severely overlapped spectra and simultaneously determine Paracetamol (PAR), Guaifenesin (GUA) and Phenylephrine (PHE) in their ternary mixture and in the presence of p-aminophenol (AP), the main degradation product and synthesis impurity of Paracetamol. The analytical performance of the proposed methods was described by percentage recoveries, root mean square error of calibration and standard error of prediction. The four multivariate calibration methods could be used directly without any preliminary separation step and were successfully applied to pharmaceutical formulation analysis, showing no interference from excipients.

  2. Criteria for the use of regression analysis for remote sensing of sediment and pollutants

    NASA Technical Reports Server (NTRS)

    Whitlock, C. H.; Kuo, C. Y.; Lecroy, S. R.

    1982-01-01

    An examination of limitations, requirements, and precision of the linear multiple-regression technique for quantification of marine environmental parameters is conducted. Both environmental and optical physics conditions have been defined for which an exact solution to the signal response equations is of the same form as the multiple regression equation. Various statistical parameters are examined to define a criteria for selection of an unbiased fit when upwelled radiance values contain error and are correlated with each other. Field experimental data are examined to define data smoothing requirements in order to satisfy the criteria of Daniel and Wood (1971). Recommendations are made concerning improved selection of ground-truth locations to maximize variance and to minimize physical errors associated with the remote sensing experiment.

  3. Human error analysis of commercial aviation accidents: application of the Human Factors Analysis and Classification system (HFACS).

    PubMed

    Wiegmann, D A; Shappell, S A

    2001-11-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based on Reason's (1990) model of latent and active failures, HFACS addresses human error at all levels of the system, including the condition of aircrew and organizational factors. The purpose of the present study was to assess the utility of the HFACS framework as an error analysis and classification tool outside the military. The HFACS framework was used to analyze human error data associated with aircrew-related commercial aviation accidents that occurred between January 1990 and December 1996 using database records maintained by the NTSB and the FAA. Investigators were able to reliably accommodate all the human causal factors associated with the commercial aviation accidents examined in this study using the HFACS system. In addition, the classification of data using HFACS highlighted several critical safety issues in need of intervention research. These results demonstrate that the HFACS framework can be a viable tool for use within the civil aviation arena. However, additional research is needed to examine its applicability to areas outside the flight deck, such as aircraft maintenance and air traffic control domains.

  4. To Err Is Human; To Structurally Prime from Errors Is Also Human

    ERIC Educational Resources Information Center

    Slevc, L. Robert; Ferreira, Victor S.

    2013-01-01

    Natural language contains disfluencies and errors. Do listeners simply discard information that was clearly produced in error, or can erroneous material persist to affect subsequent processing? Two experiments explored this question using a structural priming paradigm. Speakers described dative-eliciting pictures after hearing prime sentences that…

  5. Human factors analysis and classification system-HFACS.

    DOT National Transportation Integrated Search

    2000-02-01

    Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident : reporting systems are not designed around any theoretical framework of human error. As a result, most : accident databases are not conduci...

  6. Technical approaches for measurement of human errors

    NASA Technical Reports Server (NTRS)

    Clement, W. F.; Heffley, R. K.; Jewell, W. F.; Mcruer, D. T.

    1980-01-01

    Human error is a significant contributing factor in a very high proportion of civil transport, general aviation, and rotorcraft accidents. The technical details of a variety of proven approaches for the measurement of human errors in the context of the national airspace system are presented. Unobtrusive measurements suitable for cockpit operations and procedures in part- or full-mission simulation are emphasized. Procedure, system performance, and human operator centered measurements are discussed as they apply to the manual control, communication, supervisory, and monitoring tasks which are relevant to aviation operations.

  7. Biochemical Fractionation and Stable Isotope Dilution Liquid Chromatography-mass Spectrometry for Targeted and Microdomain-specific Protein Quantification in Human Postmortem Brain Tissue*

    PubMed Central

    MacDonald, Matthew L.; Ciccimaro, Eugene; Prakash, Amol; Banerjee, Anamika; Seeholzer, Steven H.; Blair, Ian A.; Hahn, Chang-Gyu

    2012-01-01

    Synaptic architecture and its adaptive changes require numerous molecular events that are both highly ordered and complex. A majority of neuropsychiatric illnesses are complex trait disorders, in which multiple etiologic factors converge at the synapse via many signaling pathways. Investigating the protein composition of synaptic microdomains from human patient brain tissues will yield valuable insights into the interactions of risk genes in many disorders. These types of studies in postmortem tissues have been limited by the lack of proper study paradigms. Thus, it is necessary not only to develop strategies to quantify protein and post-translational modifications at the synapse, but also to rigorously validate them for use in postmortem human brain tissues. In this study we describe the development of a liquid chromatography-selected reaction monitoring method, using a stable isotope-labeled neuronal proteome standard prepared from the brain tissue of a stable isotope-labeled mouse, for the multiplexed quantification of target synaptic proteins in mammalian samples. Additionally, we report the use of this method to validate a biochemical approach for the preparation of synaptic microdomain enrichments from human postmortem prefrontal cortex. Our data demonstrate that a targeted mass spectrometry approach with a true neuronal proteome standard facilitates accurate and precise quantification of over 100 synaptic proteins in mammalian samples, with the potential to quantify over 1000 proteins. Using this method, we found that protein enrichments in subcellular fractions prepared from human postmortem brain tissue were strikingly similar to those prepared from fresh mouse brain tissue. These findings demonstrate that biochemical fractionation methods paired with targeted proteomic strategies can be used in human brain tissues, with important implications for the study of neuropsychiatric disease. PMID:22942359
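
    The quantification principle behind such stable-isotope-dilution SRM assays can be sketched simply: the endogenous peptide amount follows from the light/heavy peak-area ratio and the amount of labeled standard in the sample. In this study the standard is a whole labeled proteome, so quantification is typically relative; the sketch below shows the simpler absolute case with a known spike amount, as an assumption for illustration, using made-up numbers.

    ```python
    # Minimal sketch of ratio-based quantification against a stable isotope-labeled standard.
    light_area = 8.4e5          # integrated SRM peak area of the endogenous ("light") peptide
    heavy_area = 2.1e6          # peak area of the isotope-labeled ("heavy") standard
    heavy_spiked_fmol = 50.0    # assumed known amount of labeled standard added to the sample

    light_fmol = (light_area / heavy_area) * heavy_spiked_fmol
    print(f"estimated endogenous peptide ≈ {light_fmol:.1f} fmol")
    ```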

  8. The Zugspitze radiative closure experiment for quantifying water vapor absorption over the terrestrial and solar infrared - Part 2: Accurate calibration of high spectral-resolution infrared measurements of surface solar radiation

    NASA Astrophysics Data System (ADS)

    Reichert, Andreas; Rettinger, Markus; Sussmann, Ralf

    2016-09-01

    Quantitative knowledge of water vapor absorption is crucial for accurate climate simulations. An open science question in this context concerns the strength of the water vapor continuum in the near infrared (NIR) at atmospheric temperatures, which is still to be quantified by measurements. This issue can be addressed with radiative closure experiments using solar absorption spectra. However, the spectra used for water vapor continuum quantification have to be radiometrically calibrated. We present for the first time a method that yields sufficient calibration accuracy for NIR water vapor continuum quantification in an atmospheric closure experiment. Our method combines the Langley method with spectral radiance measurements of a high-temperature blackbody calibration source (< 2000 K). The calibration scheme is demonstrated in the spectral range 2500 to 7800 cm-1, but minor modifications to the method enable calibration also throughout the remainder of the NIR spectral range. The resulting uncertainty (2σ) excluding the contribution due to inaccuracies in the extra-atmospheric solar spectrum (ESS) is below 1 % in window regions and up to 1.7 % within absorption bands. The overall radiometric accuracy of the calibration depends on the ESS uncertainty, on which at present no firm consensus has been reached in the NIR. However, as is shown in the companion publication Reichert and Sussmann (2016), ESS uncertainty is only of minor importance for the specific aim of this study, i.e., the quantification of the water vapor continuum in a closure experiment. The calibration uncertainty estimate is substantiated by the investigation of calibration self-consistency, which yields compatible results within the estimated errors for 91.1 % of the 2500 to 7800 cm-1 range. Additionally, a comparison of a set of calibrated spectra to radiative transfer model calculations yields consistent results within the estimated errors for 97.7 % of the spectral range.
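
    The Langley component of the calibration can be sketched as a regression of the logarithm of the measured signal on airmass, with extrapolation to zero airmass yielding the extraterrestrial signal. The airmass values, optical depth, and noise level below are synthetic assumptions, not Zugspitze data.

    ```python
    # Minimal sketch of a Langley extrapolation on synthetic data.
    import numpy as np

    airmass = np.array([1.2, 1.5, 2.0, 2.5, 3.0, 4.0])
    true_v0, true_tau = 1000.0, 0.12                       # assumed "truth" for the demo
    rng = np.random.default_rng(0)
    signal = true_v0 * np.exp(-true_tau * airmass) * (1 + rng.normal(0, 0.002, airmass.size))

    slope, intercept = np.polyfit(airmass, np.log(signal), 1)
    v0_est = np.exp(intercept)       # extrapolated zero-airmass (extraterrestrial) signal
    tau_est = -slope                 # retrieved optical depth
    print(f"V0 ≈ {v0_est:.1f} (true {true_v0}), tau ≈ {tau_est:.4f} (true {true_tau})")
    ```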

  9. [To consider negative viral loads below the limit of quantification can lead to errors in the diagnosis and treatment of hepatitis C virus infection].

    PubMed

    Acero Fernández, Doroteo; Ferri Iglesias, María José; López Nuñez, Carme; Louvrie Freire, René; Aldeguer Manté, Xavier

    2013-01-01

    For years many clinical laboratories have routinely classified undetectable and unquantifiable levels of hepatitis C virus RNA (HCV-RNA) determined by RT-PCR as below the limit of quantification (BLOQ). This practice might result in erroneous clinical decisions. The aim was to assess the frequency and clinical relevance of assuming that BLOQ samples are negative. We performed a retrospective analysis of RNA determinations performed between 2009 and 2011 (Cobas/Taqman, lower LOQ: 15 IU/mL). We distinguished between samples classified as «undetectable» and those classified as «<1.50E+01 IU/mL» (BLOQ). We analyzed 2,432 HCV-RNA measurements in 1,371 patients. RNA was BLOQ in 26 samples (1.07%) from 23 patients (1.68%). BLOQ results were highly prevalent among patients receiving Peg-Riba: 23 of 216 samples (10.6%) from 20 of 88 patients receiving treatment (22.7%). The clinical impact of BLOQ RNA samples was as follows: a) 2 patients initially considered to have negative results subsequently showed quantifiable RNA; b) 8 of 9 patients (88.9%) with BLOQ RNA at week 4 of treatment later showed sustained viral response; c) 3 patients with BLOQ RNA at weeks 12 and 48 of treatment relapsed; d) 4 patients with BLOQ RNA at week 24 and/or later had partial or breakthrough treatment responses; and e) in 5 patients the impact was null or could not be ascertained. This study suggests that BLOQ HCV-RNA indicates viremia and that equating a BLOQ result with a negative result can lead to treatment errors. BLOQ results are highly prevalent in on-treatment patients. The results of HCV-RNA quantification should be classified clearly, distinguishing between undetectable levels and levels that are BLOQ. Copyright © 2013 Elsevier España, S.L. and AEEH y AEG. All rights reserved.

  10. Segmentation and quantification of materials with energy discriminating computed tomography: A phantom study

    PubMed Central

    Le, Huy Q.; Molloi, Sabee

    2011-01-01

    Purpose: To experimentally investigate whether a computed tomography (CT) system based on CdZnTe (CZT) detectors in conjunction with a least-squares parameter estimation technique can be used to decompose four different materials. Methods: The material decomposition process was divided into a segmentation task and a quantification task. A least-squares minimization algorithm was used to decompose materials with five measurements of the energy dependent linear attenuation coefficients. A small field-of-view energy discriminating CT system was built. The CT system consisted of an x-ray tube, a rotational stage, and an array of CZT detectors. The CZT array was composed of 64 pixels, each of which is 0.8×0.8×3 mm. Images were acquired at 80 kVp in fluoroscopic mode at 50 ms per frame. The detector resolved the x-ray spectrum into energy bins of 22–32, 33–39, 40–46, 47–56, and 57–80 keV. Four phantoms were constructed from polymethylmethacrylate (PMMA), polyethylene, polyoxymethylene, hydroxyapatite, and iodine. Three phantoms were composed of three materials with embedded hydroxyapatite (50, 150, 250, and 350 mg/ml) and iodine (4, 8, 12, and 16 mg/ml) contrast elements. One phantom was composed of four materials with embedded hydroxyapatite (150 and 350 mg/ml) and iodine (8 and 16 mg/ml). Calibrations consisted of PMMA phantoms with either hydroxyapatite (100, 200, 300, 400, and 500 mg/ml) or iodine (5, 15, 25, 35, and 45 mg/ml) embedded. Filtered backprojection and a ramp filter were used to reconstruct images from each energy bin. Material segmentation and quantification were performed and compared between different phantoms. Results: All phantoms were decomposed accurately, but some voxels in the base material regions were incorrectly identified. Average quantification errors of hydroxyapatite/iodine were 9.26/7.13%, 7.73/5.58%, and 12.93/8.23% for the three-material PMMA, polyethylene, and polyoxymethylene phantoms, respectively. The average errors for the four-material phantom were 15.62% and 2.76% for hydroxyapatite and iodine, respectively. Conclusions: The calibrated least-squares minimization technique of decomposition performed well in breast imaging tasks with an energy resolving detector. This method can provide material basis images containing concentrations of the relevant materials that can potentially be valuable in the diagnostic process. PMID:21361191
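
    The least-squares decomposition step can be sketched as solving a small linear system per voxel: the measured attenuation in the five energy bins equals a basis matrix of calibrated material coefficients times the unknown composition. The coefficient values below are placeholders, not the study's calibration data.

    ```python
    # Minimal sketch of per-voxel least-squares material decomposition.
    import numpy as np

    # rows: 5 energy bins; columns: basis materials (PMMA, hydroxyapatite, iodine)
    A = np.array([
        [0.230, 0.90, 3.10],
        [0.215, 0.70, 2.20],
        [0.205, 0.55, 1.60],
        [0.198, 0.45, 1.20],
        [0.190, 0.35, 0.80],
    ])

    # simulate a voxel that is mostly PMMA with some hydroxyapatite and iodine
    x_true = np.array([0.95, 0.10, 0.02])
    b = A @ x_true + np.random.default_rng(0).normal(0, 0.002, 5)   # noisy measurement

    x_est, *_ = np.linalg.lstsq(A, b, rcond=None)
    for name, v in zip(["PMMA", "hydroxyapatite", "iodine"], x_est):
        print(f"{name:15s} fraction ≈ {v:.3f}")
    ```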

  11. Multi-atlas attenuation correction supports full quantification of static and dynamic brain PET data in PET-MR

    NASA Astrophysics Data System (ADS)

    Mérida, Inés; Reilhac, Anthonin; Redouté, Jérôme; Heckemann, Rolf A.; Costes, Nicolas; Hammers, Alexander

    2017-04-01

    In simultaneous PET-MR, attenuation maps are not directly available. Essential for absolute radioactivity quantification, they need to be derived from MR or PET data to correct for gamma photon attenuation by the imaged object. We evaluate a multi-atlas attenuation correction method for brain imaging (MaxProb) on static [18F]FDG PET and, for the first time, on dynamic PET, using the serotoninergic tracer [18F]MPPF. A database of 40 MR/CT image pairs (atlases) was used. The MaxProb method synthesises subject-specific pseudo-CTs by registering each atlas to the target subject space. Atlas CT intensities are then fused via label propagation and majority voting. Here, we compared these pseudo-CTs with the real CTs in a leave-one-out design, contrasting the MaxProb approach with a simplified single-atlas method (SingleAtlas). We evaluated the impact of pseudo-CT accuracy on reconstructed PET images, compared to PET data reconstructed with real CT, at the regional and voxel levels for the following: radioactivity images; time-activity curves; and kinetic parameters (non-displaceable binding potential, BPND). On static [18F]FDG, the mean bias for MaxProb ranged between 0 and 1% for 73 out of 84 regions assessed, and exceptionally peaked at 2.5% for only one region. Statistical parametric map analysis of MaxProb-corrected PET data showed significant differences in less than 0.02% of the brain volume, whereas SingleAtlas-corrected data showed significant differences in 20% of the brain volume. On dynamic [18F]MPPF, most regional errors on BPND ranged from -1 to  +3% (maximum bias 5%) for the MaxProb method. With SingleAtlas, errors were larger and had higher variability in most regions. PET quantification bias increased over the duration of the dynamic scan for SingleAtlas, but not for MaxProb. We show that this effect is due to the interaction of the spatial tracer-distribution heterogeneity variation over time with the degree of accuracy of the attenuation maps. This work demonstrates that inaccuracies in attenuation maps can induce bias in dynamic brain PET studies. Multi-atlas attenuation correction with MaxProb enables quantification on hybrid PET-MR scanners, eschewing the need for CT.
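
    The fusion step of a multi-atlas approach can be sketched as a per-voxel vote, assuming the atlas CTs have already been registered to the target: each atlas contributes a coarse tissue label, the majority label wins, and a representative CT number is assigned for attenuation correction. The thresholds, HU values, and toy image patches below are illustrative assumptions, not the MaxProb implementation.

    ```python
    # Minimal sketch of pseudo-CT synthesis by majority voting over registered atlas CTs.
    import numpy as np

    def discretize(ct_hu):
        # 0 = air, 1 = soft tissue, 2 = bone (assumed thresholds)
        labels = np.ones_like(ct_hu, dtype=np.int8)
        labels[ct_hu < -500] = 0
        labels[ct_hu > 300] = 2
        return labels

    def fuse_pseudo_ct(registered_atlas_cts, hu_per_label=(-1000.0, 40.0, 800.0)):
        labels = np.stack([discretize(ct) for ct in registered_atlas_cts])   # (n_atlas, ...)
        votes = np.stack([(labels == k).sum(axis=0) for k in range(3)])      # per-voxel counts
        majority = votes.argmax(axis=0)                                      # winning label
        return np.choose(majority, hu_per_label)                             # representative HU

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # three fake "registered" atlas CT patches (8x8) around a bone/soft-tissue edge
        base = np.where(np.arange(64).reshape(8, 8) % 8 < 4, 40.0, 700.0)
        atlases = [base + rng.normal(0, 50, base.shape) for _ in range(3)]
        print(fuse_pseudo_ct(atlases))
    ```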

  12. Multi-atlas attenuation correction supports full quantification of static and dynamic brain PET data in PET-MR.

    PubMed

    Mérida, Inés; Reilhac, Anthonin; Redouté, Jérôme; Heckemann, Rolf A; Costes, Nicolas; Hammers, Alexander

    2017-04-07

    In simultaneous PET-MR, attenuation maps are not directly available. Essential for absolute radioactivity quantification, they need to be derived from MR or PET data to correct for gamma photon attenuation by the imaged object. We evaluate a multi-atlas attenuation correction method for brain imaging (MaxProb) on static [18F]FDG PET and, for the first time, on dynamic PET, using the serotoninergic tracer [18F]MPPF. A database of 40 MR/CT image pairs (atlases) was used. The MaxProb method synthesises subject-specific pseudo-CTs by registering each atlas to the target subject space. Atlas CT intensities are then fused via label propagation and majority voting. Here, we compared these pseudo-CTs with the real CTs in a leave-one-out design, contrasting the MaxProb approach with a simplified single-atlas method (SingleAtlas). We evaluated the impact of pseudo-CT accuracy on reconstructed PET images, compared to PET data reconstructed with real CT, at the regional and voxel levels for the following: radioactivity images; time-activity curves; and kinetic parameters (non-displaceable binding potential, BPND). On static [18F]FDG, the mean bias for MaxProb ranged between 0 and 1% for 73 out of 84 regions assessed, and exceptionally peaked at 2.5% for only one region. Statistical parametric map analysis of MaxProb-corrected PET data showed significant differences in less than 0.02% of the brain volume, whereas SingleAtlas-corrected data showed significant differences in 20% of the brain volume. On dynamic [18F]MPPF, most regional errors on BPND ranged from -1 to  +3% (maximum bias 5%) for the MaxProb method. With SingleAtlas, errors were larger and had higher variability in most regions. PET quantification bias increased over the duration of the dynamic scan for SingleAtlas, but not for MaxProb. We show that this effect is due to the interaction of the spatial tracer-distribution heterogeneity variation over time with the degree of accuracy of the attenuation maps. This work demonstrates that inaccuracies in attenuation maps can induce bias in dynamic brain PET studies. Multi-atlas attenuation correction with MaxProb enables quantification on hybrid PET-MR scanners, eschewing the need for CT.

  13. High precision quantification of human plasma proteins using the automated SISCAPA Immuno-MS workflow.

    PubMed

    Razavi, Morteza; Leigh Anderson, N; Pope, Matthew E; Yip, Richard; Pearson, Terry W

    2016-09-25

    Efficient robotic workflows for trypsin digestion of human plasma and subsequent antibody-mediated peptide enrichment (the SISCAPA method) were developed with the goal of improving assay precision and throughput for multiplexed protein biomarker quantification. First, an 'addition only' tryptic digestion protocol was simplified from classical methods, eliminating the need for sample cleanup, while improving reproducibility, scalability and cost. Second, methods were developed to allow multiplexed enrichment and quantification of peptide surrogates of protein biomarkers representing a very broad range of concentrations and widely different molecular masses in human plasma. The total workflow coefficients of variation (including the 3 sequential steps of digestion, SISCAPA peptide enrichment and mass spectrometric analysis) for 5 proteotypic peptides measured in 6 replicates of each of 6 different samples repeated over 6 days averaged 3.4% within-run and 4.3% across all runs. An experiment to identify sources of variation in the workflow demonstrated that MRM measurement and tryptic digestion steps each had average CVs of ∼2.7%. Because of the high purity of the peptide analytes enriched by antibody capture, the liquid chromatography step is minimized and in some cases eliminated altogether, enabling throughput levels consistent with requirements of large biomarker and clinical studies. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. High-throughput monitoring of major cell functions by means of lensfree video microscopy

    PubMed Central

    Kesavan, S. Vinjimore; Momey, F.; Cioni, O.; David-Watine, B.; Dubrulle, N.; Shorte, S.; Sulpice, E.; Freida, D.; Chalmond, B.; Dinten, J. M.; Gidrol, X.; Allier, C.

    2014-01-01

    Quantification of basic cell functions is a preliminary step to understand complex cellular mechanisms, e.g., to test the compatibility of biomaterials, to assess the effectiveness of drugs and siRNAs, and to control cell behavior. However, commonly used quantification methods are label-dependent end-point assays. As an alternative, using our lensfree video microscopy platform to perform high-throughput real-time monitoring of cell culture, we introduce specifically devised metrics that are capable of non-invasive quantification of cell functions such as cell-substrate adhesion, cell spreading, cell division, cell division orientation and cell death. Unlike existing methods, our platform and associated metrics embrace the entire population of thousands of cells whilst monitoring the fate of every single cell within the population. This results in a high content description of cell functions that typically contains 25,000–900,000 measurements per experiment depending on cell density and period of observation. As proof of concept, we monitored cell-substrate adhesion and spreading kinetics of human Mesenchymal Stem Cells (hMSCs) and primary human fibroblasts, we determined the cell division orientation of hMSCs, and we observed the effect of transfection of siCellDeath (siRNA known to induce cell death) on hMSCs and human Osteo Sarcoma (U2OS) Cells. PMID:25096726

  15. The dynamics of error processing in the human brain as reflected by high-gamma activity in noninvasive and intracranial EEG.

    PubMed

    Völker, Martin; Fiederer, Lukas D J; Berberich, Sofie; Hammer, Jiří; Behncke, Joos; Kršek, Pavel; Tomášek, Martin; Marusič, Petr; Reinacher, Peter C; Coenen, Volker A; Helias, Moritz; Schulze-Bonhage, Andreas; Burgard, Wolfram; Ball, Tonio

    2018-06-01

    Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. This phase lasted for more than 1 s after the error onset. On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Accounting for the Confound of Meninges in Segmenting Entorhinal and Perirhinal Cortices in T1-Weighted MRI.

    PubMed

    Xie, Long; Wisse, Laura E M; Das, Sandhitsu R; Wang, Hongzhi; Wolk, David A; Manjón, Jose V; Yushkevich, Paul A

    2016-10-01

    Quantification of medial temporal lobe (MTL) cortices, including entorhinal cortex (ERC) and perirhinal cortex (PRC), from in vivo MRI is desirable for studying the human memory system as well as in early diagnosis and monitoring of Alzheimer's disease. However, ERC and PRC are commonly over-segmented in T1-weighted (T1w) MRI because of the adjacent meninges that have similar intensity to gray matter in T1 contrast. This introduces errors in the quantification and could potentially confound imaging studies of ERC/PRC. In this paper, we propose to segment MTL cortices along with the adjacent meninges in T1w MRI using an established multi-atlas segmentation framework together with a super-resolution technique. Experimental results comparing the proposed pipeline with existing pipelines support the notion that a large portion of meninges is segmented as gray matter by existing algorithms but not by our algorithm. Cross-validation experiments demonstrate promising segmentation accuracy. Further, agreement between the volume and thickness measures from the proposed pipeline and those from the manual segmentations increases dramatically as a result of accounting for the confound of meninges. Evaluated in the context of group discrimination between patients with amnestic mild cognitive impairment and normal controls, the proposed pipeline generates more biologically plausible results and improves the statistical power in discriminating groups in absolute terms compared to other techniques using T1w MRI. Although the performance of the proposed pipeline is inferior to that using T2-weighted MRI, which is optimized to image MTL sub-structures, the proposed pipeline could still provide important utilities in analyzing many existing large datasets that only have T1w MRI available.

  17. Quantification of trace metals in infant formula premixes using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Cama-Moncunill, Raquel; Casado-Gavalda, Maria P.; Cama-Moncunill, Xavier; Markiewicz-Keszycka, Maria; Dixit, Yash; Cullen, Patrick J.; Sullivan, Carl

    2017-09-01

    Infant formula is a human milk substitute generally based upon fortified cow milk components. In order to mimic the composition of breast milk, trace elements such as copper, iron and zinc are usually added in a single operation using a premix. The correct addition of premixes must be verified to ensure that the target levels in infant formulae are achieved. In this study, a laser-induced breakdown spectroscopy (LIBS) system was assessed as a fast validation tool for trace element premixes. LIBS is a promising emission spectroscopic technique for elemental analysis, which offers real-time analyses, little to no sample preparation and ease of use. LIBS was employed for copper and iron determinations of premix samples ranging approximately from 0 to 120 mg/kg Cu/1640 mg/kg Fe. LIBS spectra are affected by several parameters, hindering subsequent quantitative analyses. This work aimed at testing three matrix-matched calibration approaches (simple-linear regression, multi-linear regression and partial least squares regression (PLS)) as means for precision and accuracy enhancement of LIBS quantitative analysis. All calibration models were first developed using a training set and then validated with an independent test set. PLS yielded the best results. For instance, the PLS model for copper provided a coefficient of determination (R2) of 0.995 and a root mean square error of prediction (RMSEP) of 14 mg/kg. Furthermore, LIBS was employed to penetrate through the samples by repetitively measuring the same spot. Consequently, LIBS spectra can be obtained as a function of sample layers. This information was used to explore whether measuring deeper into the sample could reduce possible surface-contaminant effects and provide better quantifications.
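
    A matrix-matched PLS calibration of the kind described can be sketched with scikit-learn: spectra are regressed on reference concentrations, the model is validated on a held-out test set, and RMSEP and R² are reported. The synthetic spectra, component count, and noise level below are assumptions for illustration, not the LIBS data.

    ```python
    # Minimal sketch of a PLS calibration/validation workflow on synthetic spectra.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    n_samples, n_channels = 60, 300
    concentration = rng.uniform(0, 120, n_samples)                  # mg/kg Cu (synthetic)
    peak = np.exp(-0.5 * ((np.arange(n_channels) - 150) / 4.0) ** 2)  # one emission "line"
    spectra = concentration[:, None] * peak[None, :] \
              + rng.normal(0, 0.5, (n_samples, n_channels))          # line + noise

    X_train, X_test, y_train, y_test = train_test_split(
        spectra, concentration, test_size=0.3, random_state=1)

    pls = PLSRegression(n_components=3)
    pls.fit(X_train, y_train)
    y_pred = pls.predict(X_test).ravel()

    rmsep = np.sqrt(mean_squared_error(y_test, y_pred))
    r2 = np.corrcoef(y_test, y_pred)[0, 1] ** 2
    print(f"RMSEP ≈ {rmsep:.1f} mg/kg, R² ≈ {r2:.3f}")
    ```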

  18. An overview of the main foodstuff sample preparation technologies for tetracycline residue determination.

    PubMed

    Pérez-Rodríguez, Michael; Pellerano, Roberto Gerardo; Pezza, Leonardo; Pezza, Helena Redigolo

    2018-05-15

    Tetracyclines are widely used for both the treatment and prevention of diseases in animals as well as for the promotion of rapid animal growth and weight gain. This practice may result in trace amounts of these drugs in products of animal origin, such as milk and eggs, posing serious risks to human health. The presence of tetracycline residues in foods can lead to the transmission of antibiotic-resistant pathogenic bacteria through the food chain. In order to ensure food safety and avoid exposure to these substances, national and international regulatory agencies have established tolerance levels for authorized veterinary drugs, including tetracycline antimicrobials. In view of that, numerous sensitive and specific methods have been developed for the quantification of these compounds in different food matrices. One will note, however, that the determination of trace residues in foods such as milk and eggs often requires extensive sample extraction and preparation prior to conducting instrumental analysis. Sample pretreatment is usually the most complicated step in the analytical process and covers both cleaning and pre-concentration. Optimal sample preparation can reduce analysis time and sources of error, enhance sensitivity, apart from enabling unequivocal identification, confirmation and quantification of target analytes. The development and implementation of more environmentally friendly analytical procedures, which involve the use of less hazardous solvents and smaller sample sizes compared to traditional methods, is a rapidly increasing trend in analytical chemistry. This review seeks to provide an updated overview of the main trends in sample preparation for the determination of tetracycline residues in foodstuffs. The applicability of several extraction and clean-up techniques employed in the analysis of foodstuffs, especially milk and egg samples, is also thoroughly discussed. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Quantification of the toxic hexavalent chromium content in an organic matrix by X-ray photoelectron spectroscopy (XPS) and ultra-low-angle microtomy (ULAM)

    NASA Astrophysics Data System (ADS)

    Greunz, Theresia; Duchaczek, Hubert; Sagl, Raffaela; Duchoslav, Jiri; Steinberger, Roland; Strauß, Bernhard; Stifter, David

    2017-02-01

    Cr(VI) is known for its corrosion inhibitive properties and is, despite legal regulations, still a potential candidate to be added to thin (1-3 μm) protective coatings applied on, e.g., electrical steel as used for transformers. However, Cr(VI) is harmful to the environment and to human health. Hence, its reliable quantification is of decisive interest. Commonly, an alkaline extraction with photometric endpoint detection of Cr(VI) is used for such material systems. However, this procedure requires accurate knowledge of sample parameters such as dry film thickness and coating density, which are occasionally associated with significant experimental errors. We present a comprehensive study of a coating system with a defined Cr(VI) pigment concentration applied on electrical steel. X-ray photoelectron spectroscopy (XPS) was employed to resolve the elemental chromium concentration and the chemical state. Given that XPS is extremely surface sensitive (<10 nm) and that the lowest commonly achievable lateral resolution is several times larger than the coating thickness (∼2 μm), bulk analysis was achieved with XPS line scans on extended wedge-shaped tapers through the coating. For that purpose a special sample preparation step performed on an ultra-microtome was required prior to analysis. Since a temperature increase leads to a reduction of Cr(VI), we extended our method to samples subjected to different curing temperatures. We show that our proposed approach allows the elemental and Cr(VI) concentration and distribution inside the coating to be determined.

  20. Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar.

    PubMed

    Li, Zhan; Jupp, David L B; Strahler, Alan H; Schaaf, Crystal B; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S; Chakrabarti, Supriya; Cook, Timothy A; Paynter, Ian; Saenz, Edward J; Schaefer, Michael

    2016-03-02

    Radiometric calibration of the Dual-Wavelength Echidna(®) Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρ(app)), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρ(app) are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρ(app) error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρ(app) from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars.

  1. Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar

    PubMed Central

    Li, Zhan; Jupp, David L. B.; Strahler, Alan H.; Schaaf, Crystal B.; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S.; Chakrabarti, Supriya; Cook, Timothy A.; Paynter, Ian; Saenz, Edward J.; Schaefer, Michael

    2016-01-01

    Radiometric calibration of the Dual-Wavelength Echidna® Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρapp), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρapp are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρapp error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρapp from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars. PMID:26950126

  2. Development of an errorable car-following driver model

    NASA Astrophysics Data System (ADS)

    Yang, H.-H.; Peng, H.

    2010-06-01

    An errorable car-following driver model is presented in this paper. An errorable driver model is one that emulates a human driver's functions and can generate both nominal (error-free) and devious (with error) behaviours. This model was developed for the evaluation and design of active safety systems. The car-following data used for developing and validating the model were obtained from a large-scale naturalistic driving database. The stochastic car-following behaviour was first analysed and modelled as a random process. Three error-inducing behaviours were then introduced. First, human perceptual limitation was studied and implemented. Distraction due to non-driving tasks was then identified based on statistical analysis of the driving data. Finally, the time delay of human drivers was estimated through a recursive least-squares identification process. By including these three error-inducing behaviours, rear-end collisions with the lead vehicle could occur. The simulated crash rate was found to be similar to, but somewhat higher than, that reported in traffic statistics.
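
    The recursive least-squares identification mentioned above can be sketched as follows: for each candidate delay, a linear car-following law is fitted with an RLS update, and the delay giving the smallest residual is retained. The model form, synthetic signals, and forgetting factor are assumptions for illustration, not the paper's exact procedure.

    ```python
    # Minimal sketch of recursive least squares (RLS) identification of gains and time delay
    # for a linear car-following law a(t) = k1*dv(t-d) + k2*dx(t-d).
    import numpy as np

    def rls_fit(Phi, y, lam=0.999):
        n = Phi.shape[1]
        theta = np.zeros(n)
        P = np.eye(n) * 1e3
        sse = 0.0
        for phi, target in zip(Phi, y):
            err = target - phi @ theta
            K = P @ phi / (lam + phi @ P @ phi)     # gain vector
            theta = theta + K * err                 # parameter update
            P = (P - np.outer(K, phi @ P)) / lam    # covariance update with forgetting
            sse += err ** 2
        return theta, sse

    rng = np.random.default_rng(0)
    T = 2000
    dv = rng.normal(0, 1, T)            # relative speed (synthetic)
    dx = rng.normal(0, 1, T)            # spacing error (synthetic)
    true_d, k1, k2 = 8, 0.6, 0.2        # true delay in samples and gains
    a = k1 * np.roll(dv, true_d) + k2 * np.roll(dx, true_d) + rng.normal(0, 0.05, T)

    best = None
    for d in range(0, 16):
        Phi = np.column_stack([np.roll(dv, d), np.roll(dx, d)])[20:]   # drop wrap-around samples
        theta, sse = rls_fit(Phi, a[20:])
        if best is None or sse < best[0]:
            best = (sse, d, theta)
    print(f"estimated delay = {best[1]} samples, gains ≈ {best[2].round(2)}")
    ```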

  3. In vivo Magnetic Resonance Spectroscopy of cerebral glycogen metabolism in animals and humans.

    PubMed

    Khowaja, Ameer; Choi, In-Young; Seaquist, Elizabeth R; Öz, Gülin

    2015-02-01

    Glycogen serves as an important energy reservoir in the human body. Despite the abundance of glycogen in the liver and skeletal muscles, its concentration in the brain is relatively low, hence its significance has been questioned. A major challenge in studying brain glycogen metabolism has been the lack of availability of non-invasive techniques for quantification of brain glycogen in vivo. Invasive methods for brain glycogen quantification such as post mortem extraction following high energy microwave irradiation are not applicable in the human brain. With the advent of (13)C Magnetic Resonance Spectroscopy (MRS), it has been possible to measure brain glycogen concentrations and turnover in physiological conditions, as well as under the influence of stressors such as hypoglycemia and visual stimulation. This review presents an overview of the principles of the (13)C MRS methodology and its applications in both animals and humans to further our understanding of glycogen metabolism under normal physiological and pathophysiological conditions such as hypoglycemia unawareness.

  4. Automated quantification of myocardial infarction from MR images by accounting for partial volume effects: animal, phantom, and human study.

    PubMed

    Heiberg, Einar; Ugander, Martin; Engblom, Henrik; Götberg, Matthias; Olivecrona, Göran K; Erlinge, David; Arheden, Håkan

    2008-02-01

    Ethics committees approved human and animal study components; informed written consent was provided (prospective human study [20 men; mean age, 62 years]) or waived (retrospective human study [16 men, four women; mean age, 59 years]). The purpose of this study was to prospectively evaluate a clinically applicable method, accounting for the partial volume effect, to automatically quantify myocardial infarction from delayed contrast material-enhanced magnetic resonance images. Pixels were weighted according to signal intensity to calculate infarct fraction for each pixel. Mean bias +/- variability (or standard deviation), expressed as percentage left ventricular myocardium (%LVM), were -0.3 +/- 1.3 (animals), -1.2 +/- 1.7 (phantoms), and 0.3 +/- 2.7 (patients), respectively. Algorithm had lower variability than dichotomous approach (2.7 vs 7.7 %LVM, P < .01) and did not differ from interobserver variability for bias (P = .31) or variability (P = .38). The weighted approach provides automatic quantification of myocardial infarction with higher accuracy and lower variability than a dichotomous algorithm. (c) RSNA, 2007.
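
    The weighted approach can be sketched by assigning each myocardial pixel an infarct fraction that scales linearly with signal intensity between the remote-myocardium and infarct-core levels; averaging the fractions gives infarct size in %LVM. The intensity levels and synthetic pixel values below are illustrative assumptions, not the published algorithm's exact parameters.

    ```python
    # Minimal sketch of partial-volume-aware (weighted) infarct quantification.
    import numpy as np

    def infarct_percent(myocardial_pixels, remote_mean, core_mean):
        # per-pixel infarct fraction in [0, 1], linear in signal intensity
        frac = (myocardial_pixels - remote_mean) / (core_mean - remote_mean)
        frac = np.clip(frac, 0.0, 1.0)
        return 100.0 * frac.mean()          # infarct size as % of LV myocardium (%LVM)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        remote = rng.normal(300, 20, 800)    # non-infarcted myocardium intensities
        border = rng.normal(650, 60, 100)    # partial-volume border zone
        core = rng.normal(1000, 40, 100)     # bright infarct core
        pixels = np.concatenate([remote, border, core])
        print(f"infarct size ≈ {infarct_percent(pixels, 300.0, 1000.0):.1f} %LVM")
    ```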

  5. In vivo Magnetic Resonance Spectroscopy of cerebral glycogen metabolism in animals and humans

    PubMed Central

    Khowaja, Ameer; Choi, In-Young; Seaquist, Elizabeth R.; Öz, Gülin

    2015-01-01

    Glycogen serves as an important energy reservoir in the human body. Despite the abundance of glycogen in the liver and skeletal muscles, its concentration in the brain is relatively low, hence its significance has been questioned. A major challenge in studying brain glycogen metabolism has been the lack of availability of non-invasive techniques for quantification of brain glycogen in vivo. Invasive methods for brain glycogen quantification such as post mortem extraction following high energy microwave irradiation are not applicable in the human brain. With the advent of 13C Magnetic Resonance Spectroscopy (MRS), it has been possible to measure brain glycogen concentrations and turnover in physiological conditions, as well as under the influence of stressors such as hypoglycemia and visual stimulation. This review presents an overview of the principles of the 13C MRS methodology and its applications in both animals and humans to further our understanding of glycogen metabolism under normal physiological and pathophysiological conditions such as hypoglycemia unawareness. PMID:24676563

  6. Lost in Translation: the Case for Integrated Testing

    NASA Technical Reports Server (NTRS)

    Young, Aaron

    2017-01-01

    The building of a spacecraft is complex and often involves multiple suppliers and companies that have their own designs and processes. Standards have been developed across the industries to reduce the chances for critical flight errors at the system level, but the spacecraft is still vulnerable to the introduction of critical errors during integration of these systems. Critical errors can occur at any time during the process, and in many cases human reliability analysis (HRA) identifies human error as a risk driver. Most programs have a test plan in place that is intended to catch these errors, but it is not uncommon for schedule and cost stress to result in less testing than initially planned. Therefore, integrated testing, or "testing as you fly," is essential as a final check on the design and assembly to catch any errors prior to the mission. This presentation will outline the unique benefits of integrated testing in catching critical flight errors that can otherwise go undetected, discuss HRA methods that are used to identify opportunities for human error, and review lessons learned and challenges over ownership of testing.

  7. Human factors in aircraft incidents - Results of a 7-year study (Andre Allard Memorial Lecture)

    NASA Technical Reports Server (NTRS)

    Billings, C. E.; Reynard, W. D.

    1984-01-01

    It is pointed out that nearly all fatal aircraft accidents are preventable, and that most such accidents are due to human error. The present discussion is concerned with the results of a seven-year study of the data collected by the NASA Aviation Safety Reporting System (ASRS). The Aviation Safety Reporting System was designed to stimulate as large a flow as possible of information regarding errors and operational problems in the conduct of air operations. It was implemented in April, 1976. In the following 7.5 years, 35,000 reports have been received from pilots, controllers, and the armed forces. Human errors are found in more than 80 percent of these reports. Attention is given to the types of events reported, possible causal factors in incidents, the relationship of incidents and accidents, and sources of error in the data. ASRS reports include sufficient detail to permit authorities to institute changes in the national aviation system designed to minimize the likelihood of human error, and to insulate the system against the effects of errors.

  8. Human factors in surgery: from Three Mile Island to the operating room.

    PubMed

    D'Addessi, Alessandro; Bongiovanni, Luca; Volpe, Andrea; Pinto, Francesco; Bassi, PierFrancesco

    2009-01-01

    Human factors is a discipline that encompasses the science of understanding the properties of human capability, the application of this understanding to the design and development of systems and services, and the art of ensuring their successful application to a program. The field of human factors traces its origins to the Second World War, but Three Mile Island has been the best example of how groups of people react and make decisions under stress: this nuclear accident was exacerbated by wrong decisions made because the operators were overwhelmed with irrelevant, misleading or incorrect information. Errors and their nature are the same in all human activities. The predisposition for error is so intrinsic to human nature that scientifically it is best considered as inherently biologic. The causes of error in medical care may not be easily generalized. Surgery differs in important ways: most errors occur in the operating room and are technical in nature. Surgical error has commonly been thought of as the consequence of a lack of skill or ability, or the result of thoughtless actions. Moreover, the 'operating theatre' has a unique set of team dynamics: professionals from multiple disciplines are required to work in a closely coordinated fashion. This complex environment provides multiple opportunities for unclear communication, clashing motivations, and errors arising not from technical incompetence but from poor interpersonal skills. Surgeons have to work closely with human factors specialists in future studies. By improving processes already in place in many operating rooms, safety will be enhanced and quality increased.

  9. Liquid chromatography-tandem MS/MS method for simultaneous quantification of paracetamol, chlorzoxazone and aceclofenac in human plasma: An application to a clinical pharmacokinetic study.

    PubMed

    Mohamed, Dalia; Hegazy, Maha A; Elshahed, Mona S; Toubar, Safaa S; Helmy, Marwa I

    2018-07-01

    A facile, fast and specific method based on liquid chromatography-tandem mass spectrometry (LC-MS/MS) for the simultaneous quantitation of paracetamol, chlorzoxazone and aceclofenac in human plasma was developed and validated. Sample preparation was achieved by liquid-liquid extraction. The analysis was performed on a reversed-phase C18 HPLC column (5 μm, 4.6 × 50 mm) using acetonitrile-10 mM ammonium formate pH 3.0 (65:35, v/v) as the mobile phase, with atorvastatin used as the internal standard. A very small injection volume (3 μL) was applied and the run time was 2.0 min. The detection was carried out by electrospray positive and negative ionization mass spectrometry in the multiple-reaction monitoring mode. The developed method was capable of determining the analytes over the concentration ranges of 0.03-30.0, 0.015-15.00 and 0.15-15.00 μg/mL for paracetamol, chlorzoxazone and aceclofenac, respectively. Intraday and interday precisions (as coefficient of variation) were found to be ≤12.3% with an accuracy (as relative error) of ±5.0%. The method was successfully applied to a pharmacokinetic study of the three analytes after oral administration to six healthy volunteers. Copyright © 2018 John Wiley & Sons, Ltd.

  10. A fully validated method for the determination of lacosamide in human plasma using gas chromatography with mass spectrometry: application for therapeutic drug monitoring.

    PubMed

    Nikolaou, Panagiota; Papoutsis, Ioannis; Spiliopoulou, Chara; Voudris, Constantinos; Athanaselis, Sotiris

    2015-01-01

    A simple gas chromatographic method with mass spectrometry detection was developed and validated for the determination of lacosamide in human plasma. Lacosamide and the internal standard, levetiracetam-d6, were extracted from 200 μL of plasma by solid-phase extraction through HF Bond Elut C18 columns, and derivatized using N-methyl-N-tert-butyldimethylsilyltrifluoroacetamide with 1% tert-butyldimethylsilylchloride in acetonitrile. The limit of quantification was found to be 0.20 μg/mL and the assay was linear up to 20.0 μg/mL with a correlation coefficient ≥0.994. The intra- and interday precision values were <4.1% in terms of relative standard deviation (%) and the values of intra- and interday accuracy were found to be within -7.2 and 5.3% in terms of relative error (%). Absolute recovery of the method for lacosamide was determined at three concentration levels and ranged from 92.5 to 97.6%. The developed method uses small volumes of plasma and proved to be simple, rapid, and sensitive for the determination of lacosamide in plasma. This method can be used in routine, everyday analysis of plasma samples obtained from patients receiving the respective antiepileptic treatment and for the investigation of clinical and forensic cases where lacosamide is involved. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. The Human Factors Analysis and Classification System : HFACS : final report.

    DOT National Transportation Integrated Search

    2000-02-01

    Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident reporting systems are not designed around any theoretical framework of human error. As a result, most accident databases are not conducive t...

  12. Special Issue on Uncertainty Quantification in Multiscale System Design and Simulation

    DOE PAGES

    Wang, Yan; Swiler, Laura

    2017-09-07

    The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, wrong model predictions could potentially cause catastrophic consequences. Therefore, uncertainty and associated risks due to model errors should be quantified to support robust systems engineering.

  13. Special Issue on Uncertainty Quantification in Multiscale System Design and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Swiler, Laura

    The importance of uncertainty has been recognized in various modeling, simulation, and analysis applications, where inherent assumptions and simplifications affect the accuracy of model predictions for physical phenomena. As model predictions are now heavily relied upon for simulation-based system design, which includes new materials, vehicles, mechanical and civil structures, and even new drugs, wrong model predictions could potentially cause catastrophic consequences. Therefore, uncertainty and associated risks due to model errors should be quantified to support robust systems engineering.

  14. Practical uncertainty reduction and quantification in shock physics measurements

    DOE PAGES

    Akin, M. C.; Nguyen, J. H.

    2015-04-20

    We report the development of a simple error analysis sampling method for identifying intersections and inflection points to reduce total uncertainty in experimental data. This technique was used to reduce uncertainties in sound speed measurements by 80% over conventional methods. Here, we focused on its impact on a previously published set of Mo sound speed data and possible implications for phase transition and geophysical studies. However, this technique's application can be extended to a wide range of experimental data.
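
    As a purely generic illustration of sampling-based error analysis for an intersection point, the sketch below bootstraps two noisy linear segments and reports the spread of their fitted crossing; the data, noise level, and bootstrap scheme are assumptions for demonstration, not the authors' specific procedure.

      # Generic bootstrap estimate of the intersection of two fitted segments;
      # all data and noise parameters below are invented.
      import numpy as np

      rng = np.random.default_rng(1)
      x1 = np.linspace(0, 5, 10)
      x2 = np.linspace(4, 9, 10)
      y1 = 2.0 * x1 + 1.0 + rng.normal(0, 0.2, x1.size)   # segment 1 (made up)
      y2 = 0.5 * x2 + 7.0 + rng.normal(0, 0.2, x2.size)   # segment 2 (made up)

      crossings = []
      for _ in range(2000):
          i1 = rng.integers(0, x1.size, x1.size)   # resample each segment
          i2 = rng.integers(0, x2.size, x2.size)
          a1, b1 = np.polyfit(x1[i1], y1[i1], 1)
          a2, b2 = np.polyfit(x2[i2], y2[i2], 1)
          crossings.append((b2 - b1) / (a1 - a2))  # crossing of the two fits

      print(f"intersection at x = {np.mean(crossings):.2f} +/- {np.std(crossings):.2f}")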

  15. Differential reliance of chimpanzees and humans on automatic and deliberate control of motor actions.

    PubMed

    Kaneko, Takaaki; Tomonaga, Masaki

    2014-06-01

    Humans are often unaware of how they control their limb motor movements. People pay attention to their own motor movements only when their usual motor routines encounter errors. Yet little is known about the extent to which voluntary actions rely on automatic control and when automatic control shifts to deliberate control in nonhuman primates. In this study, we demonstrate that chimpanzees and humans showed similar limb motor adjustment in response to feedback error during reaching actions, whereas attentional allocation inferred from gaze behavior differed. We found that humans shifted attention to their own motor kinematics as errors were induced in motor trajectory feedback, regardless of whether the errors actually prevented them from reaching their action goals. In contrast, chimpanzees shifted attention to motor execution only when errors actually interfered with achieving a planned action goal. These results indicate that the species differed in their criteria for shifting from automatic to deliberate control of motor actions. It is widely accepted that sophisticated motor repertoires have evolved in humans. Our results suggest that the deliberate monitoring of one's own motor kinematics may have evolved in the human lineage. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Preventable Medical Errors Driven Modeling of Medical Best Practice Guidance Systems.

    PubMed

    Ou, Andrew Y-Z; Jiang, Yu; Wu, Po-Liang; Sha, Lui; Berlin, Richard B

    2017-01-01

    In a medical environment such as an Intensive Care Unit, there are many possible sources of error, and one important source is the effect of human intellectual tasks. When designing an interactive healthcare system such as a medical Cyber-Physical-Human System (CPHSystem), it is important to consider whether or not the system design can mitigate the errors caused by these tasks. In this paper, we first introduce five categories of generic human intellectual tasks, where tasks in each category may lead to potential medical errors. Then, we present an integrated modeling framework to model a medical CPHSystem and use UPPAAL as the foundation to integrate and verify the whole medical CPHSystem design model. With a verified and comprehensive model capturing the effects of human intellectual tasks, we can design a more accurate and acceptable system. We use a cardiac arrest resuscitation guidance and navigation system (CAR-GNSystem) for such medical CPHSystem modeling. Experimental results show that the CPHSystem models help determine system design flaws and can mitigate the potential medical errors caused by human intellectual tasks.

  17. Quantification of free and total desmosine and isodesmosine in human urine by liquid chromatography tandem mass spectrometry: a comparison of the surrogate-analyte and the surrogate-matrix approach for quantitation.

    PubMed

    Ongay, Sara; Hendriks, Gert; Hermans, Jos; van den Berge, Maarten; ten Hacken, Nick H T; van de Merbel, Nico C; Bischoff, Rainer

    2014-01-24

    Although available data suggest the potential of urinary desmosine (DES) and isodesmosine (IDS) as biomarkers for elevated lung elastic fiber turnover, further validation in large-scale studies of COPD populations, as well as the analysis of longitudinal samples, is required. Validated analytical methods that allow the accurate and precise quantification of DES and IDS in human urine are mandatory in order to properly evaluate the outcome of such clinical studies. In this work, we present the development and full validation of two methods that allow DES and IDS measurement in human urine, one for the free and one for the total (free + peptide-bound) forms. To this end we compared the two principal approaches that are used for the absolute quantification of endogenous compounds in biological samples: analysis against calibrators containing authentic analyte in a surrogate matrix or containing a surrogate analyte in the authentic matrix. The validated methods were employed for the analysis of a small set of samples including healthy never-smokers, healthy current-smokers and COPD patients. This is the first time that the analysis of urinary free DES, free IDS, total DES, and total IDS has been fully validated and that the surrogate analyte approach has been evaluated for their quantification in biological samples. Results indicate that the presented methods have the necessary quality and level of validation to assess the potential of urinary DES and IDS levels as biomarkers for the progression of COPD and the effect of therapeutic interventions. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Quantification of Human Fecal Bifidobacterium Species by Use of Quantitative Real-Time PCR Analysis Targeting the groEL Gene

    PubMed Central

    Junick, Jana

    2012-01-01

    Quantitative real-time PCR assays targeting the groEL gene for the specific enumeration of 12 human fecal Bifidobacterium species were developed. The housekeeping gene groEL (HSP60 in eukaryotes) was used as a discriminative marker for the differentiation of Bifidobacterium adolescentis, B. angulatum, B. animalis, B. bifidum, B. breve, B. catenulatum, B. dentium, B. gallicum, B. longum, B. pseudocatenulatum, B. pseudolongum, and B. thermophilum. The bifidobacterial chromosome contains a single copy of the groEL gene, allowing the determination of the cell number by quantification of the groEL copy number. Real-time PCR assays were validated by comparing fecal samples spiked with known numbers of a given Bifidobacterium species. Independent of the Bifidobacterium species tested, the proportion of groEL copies recovered from fecal samples spiked with 5 to 9 log10 cells/g feces was approximately 50%. The quantification limit was 5 to 6 log10 groEL copies/g feces. The interassay variability was less than 10%, and variability between different DNA extractions was less than 23%. The method developed was applied to fecal samples from healthy adults and full-term breast-fed infants. Bifidobacterial diversity in both adults and infants was low, with mostly ≤3 Bifidobacterium species and B. longum frequently detected. The predominant species in infant and adult fecal samples were B. breve and B. adolescentis, respectively. It was possible to distinguish B. catenulatum and B. pseudocatenulatum. We conclude that the groEL gene is a suitable molecular marker for the specific and accurate quantification of human fecal Bifidobacterium species by real-time PCR. PMID:22307308
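
    As a rough illustration of the kind of calculation behind such an assay, the sketch below converts a quantification cycle (Cq) into groEL copies (and hence cells, given the single groEL copy per chromosome) per gram of feces; the standard-curve slope and intercept, the volumes, and the use of the ~50% recovery reported above are illustrative assumptions.

      # Hypothetical qPCR back-calculation against a standard curve.
      import math

      def copies_per_gram(cq, slope=-3.32, intercept=38.0,
                          eluate_volume_ul=100.0, template_ul=5.0,
                          feces_g=0.2, recovery=0.5):
          """Convert a Cq value to groEL copies per gram of feces."""
          copies_per_reaction = 10 ** ((cq - intercept) / slope)
          copies_per_eluate = copies_per_reaction * (eluate_volume_ul / template_ul)
          # Correct for the ~50% recovery observed for spiked samples.
          return copies_per_eluate / (feces_g * recovery)

      print(f"{math.log10(copies_per_gram(22.0)):.1f} log10 copies/g feces")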

  19. Noninvasive quantification of T2 and concentrations of ascorbate and glutathione in the human brain from the same double-edited spectra.

    PubMed

    Emir, Uzay E; Deelchand, Dinesh; Henry, Pierre-Gilles; Terpstra, Melissa

    2011-04-01

    The transverse relaxation times (T(2)) and concentrations of Ascorbate (Asc) and glutathione (GSH) were measured from a single dataset of double-edited spectra that were acquired at several TEs at 4 T in the human brain. Six TEs between 102 and 152 ms were utilized to calculate T(2) for the group of 12 subjects scanned five times each. Spectra measured at all six TEs were summed to quantify the concentration in each individual scan. LCModel fitting was optimized for the quantification of the Asc and GSH double-edited spectra. When the fitted baseline was constrained to be flat, T(2) was found to be 67 ms (95% confidence interval, 50-83 ms) for GSH and ≤115 ms for Asc using the sum of spectra measured over 60 scans. The Asc and GSH concentrations quantified in each of the 60 scans were 0.62 ± 0.08 and 0.81 ± 0.11 µmol/g [mean ± standard deviation (SD), n = 60], respectively, using 10 µmol/g N-acetylaspartate as an internal reference and assuming a constant influence of N-acetylaspartate and antioxidant T(2) relaxation in the reference solution and in vivo. The T(2) value of GSH was measured for the first time in the human brain. The data are consistent with short T(2) for both antioxidants. These T(2) values are essential for the absolute quantification of Asc and GSH concentrations measured at long TE, and provide a critical step towards addressing assumptions about T(2), and therefore towards the quantification of concentrations without the possibility of systematic bias. Copyright © 2010 John Wiley & Sons, Ltd.
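
    The T(2) estimates rest on a monoexponential fit of edited signal versus TE; the sketch below shows that fit with synthetic signal values generated to decay with a T2 near 67 ms, so both the data and the starting guesses are assumptions.

      # Monoexponential T2 fit S(TE) = S0 * exp(-TE/T2) over six TEs.
      import numpy as np
      from scipy.optimize import curve_fit

      te_ms = np.array([102, 112, 122, 132, 142, 152], dtype=float)
      signal = np.array([1.00, 0.86, 0.74, 0.64, 0.55, 0.47])  # synthetic

      def decay(te, s0, t2):
          return s0 * np.exp(-te / t2)

      (s0, t2), _ = curve_fit(decay, te_ms, signal, p0=(2.0, 70.0))
      print(f"fitted T2 = {t2:.0f} ms")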

  20. Sensitive and rapid liquid chromatography/tandem mass spectrometric assay for the quantification of piperaquine in human plasma.

    PubMed

    Singhal, Puran; Gaur, Ashwani; Gautam, Anirudh; Varshney, Brijesh; Paliwal, Jyoti; Batra, Vijay

    2007-11-01

    A simple, sensitive and rapid liquid chromatography/tandem mass spectrometric (LC-MS/MS) method was developed and validated for quantification of piperaquine, an antimalarial drug, in human plasma using its structural analogue, piperazine bis chloroquinoline, as internal standard (IS). The method involved a simple protein precipitation with methanol followed by rapid isocratic elution of analytes with 10 mM ammonium acetate buffer/methanol/formic acid/ammonia solution (25/75/0.2/0.15, v/v) on a Chromolith SpeedROD RP-18e reversed phase chromatographic column and quantification by mass spectrometry in the multiple reaction monitoring mode (MRM). The precursor to product ion transitions of m/z 535.3-->288.2 and m/z 409.1-->205.2 were used to measure the analyte and the IS, respectively. The assay exhibited a linear dynamic range of 1.0-250.2 ng/mL for piperaquine in plasma. The limit of detection (LOD) and lower limit of quantification (LLOQ) in plasma were 0.2 and 1.0 ng/mL, respectively. Acceptable precision and accuracy (+/-20% deviation for the LLOQ standard and +/-15% deviation for other standards from the respective nominal concentration) were obtained for concentrations over the standard curve ranges. A run time of 2.5 min for a sample made it possible to achieve a throughput of more than 400 plasma samples analyzed per day. The validated method was successfully applied to analyze human plasma samples from phase-1 clinical studies. The mean pharmacokinetic parameters of piperaquine following a 1000 mg oral dose, the observed maximum plasma concentration (Cmax), time to maximum plasma concentration (Tmax) and elimination half-life (T1/2), were 46.1 ng/mL, 3.8 h and 13 days, respectively.
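
    For readers unfamiliar with how Cmax, Tmax and the terminal half-life are extracted, the sketch below applies a simple noncompartmental reading to an invented concentration-time profile; none of the numbers are taken from the study.

      # Noncompartmental Cmax, Tmax and terminal half-life from made-up data.
      import numpy as np

      t_h = np.array([1, 2, 4, 8, 24, 72, 168, 336.0])      # hours post-dose
      conc = np.array([20, 35, 44, 40, 30, 22, 17, 12.0])   # ng/mL (made up)

      cmax, tmax = conc.max(), t_h[conc.argmax()]
      # Terminal half-life from a log-linear fit of the last four points.
      slope = np.polyfit(t_h[-4:], np.log(conc[-4:]), 1)[0]
      t_half_days = np.log(2) / (-slope) / 24
      print(f"Cmax={cmax} ng/mL, Tmax={tmax} h, t1/2 = {t_half_days:.0f} days")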

  1. Development of an analytical method for the targeted screening and multi-residue quantification of environmental contaminants in urine by liquid chromatography coupled to high resolution mass spectrometry for evaluation of human exposures.

    PubMed

    Cortéjade, A; Kiss, A; Cren, C; Vulliet, E; Buleté, A

    2016-01-01

    The aim of this study was to develop an analytical method and contribute to the assessment of the Exposome. Thus, a targeted analysis in urine of a wide range of contaminants that humans come into contact with in their daily routines was developed. The method focused on a list of 38 contaminants, including 12 pesticides, one metabolite of a pesticide, seven veterinary drugs, five parabens, one UV filter, one plastic additive, two surfactants and nine substances found in different products present in the everyday human environment. These contaminants were analyzed by high performance liquid chromatography coupled to high resolution mass spectrometry (HPLC-HRMS) with a quadrupole-time-of-flight (QqToF) instrument from a raw urinary matrix. A validation according to the FDA guidelines was employed to evaluate the specificity, linear or quadratic curve fitting, inter- and intra-day precision, accuracy and limits of detection and quantification (LOQ). The developed analysis allows for the quantification of 23 contaminants in the urine samples, with LOQs ranging between 4.3 ng/mL and 113.2 ng/mL. This method was applied to 17 urine samples. Among the targeted contaminants, four compounds were detected in the samples. One of the contaminants (tributyl phosphate) was detected below the LOQ. The three others (4-hydroxybenzoic acid, sodium dodecylbenzenesulfonate and O,O-diethyl thiophosphate potassium) were detected but did not fulfill the validation criteria for quantification. Among these four compounds, two of them were found in all samples: tributyl phosphate and the surfactant sodium dodecylbenzenesulfonate. Copyright © 2015 Elsevier B.V. All rights reserved.
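
    Multi-residue quantification of this kind ultimately comes down to a calibration curve and a back-calculation; the sketch below shows a weighted linear calibration with invented standards, responses, and weighting, purely as an illustration of that step.

      # Weighted linear calibration and back-calculation (illustrative values).
      import numpy as np

      conc = np.array([5, 10, 25, 50, 100, 250.0])       # ng/mL standards (made up)
      resp = np.array([0.9, 2.1, 5.2, 9.8, 20.5, 49.0])  # peak-area ratios (made up)

      # w = 1/sqrt(x) makes the squared residuals effectively 1/x weighted.
      slope, intercept = np.polyfit(conc, resp, 1, w=1 / np.sqrt(conc))

      unknown_response = 7.4
      print(f"back-calculated: {(unknown_response - intercept) / slope:.1f} ng/mL")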

  2. Ultrasensitive quantification of the CYP2E1 probe chlorzoxazone and its main metabolite 6-hydroxychlorzoxazone in human plasma using ultra performance liquid chromatography coupled to tandem mass spectrometry after chlorzoxazone microdosing.

    PubMed

    Witt, Lukas; Suzuki, Yosuke; Hohmann, Nicolas; Mikus, Gerd; Haefeli, Walter E; Burhenne, Jürgen

    2016-08-01

    Chlorzoxazone is a probe drug to assess cytochrome P450 (CYP) 2E1 activity (phenotyping). If the pharmacokinetics of the probe drug is linear, pharmacologically ineffective doses are sufficient for the purpose of phenotyping and adverse effects can thus be avoided. For this reason, we developed and validated an assay for the ultrasensitive quantification of chlorzoxazone and 6-hydroxychlorzoxazone in human plasma. Plasma (0.5 mL) and liquid/liquid partitioning were used for sample preparation. Extraction recoveries ranged between 76 and 93% for both analytes. Extracts were separated within 3 min on a Waters BEH C18 Shield 1.7 μm UPLC column with a fast gradient consisting of aqueous formic acid and acetonitrile. Quantification was achieved using internal standards labeled with deuterium or (13)C and tandem mass spectrometry in the multiple reaction monitoring mode using negative electrospray ionization, which yielded lower limits of quantification of 2.5 pg/mL, while maintaining a precision always below 15%. The calibrated concentration ranges were linear for both analytes (2.5-1000 pg/mL) with correlation coefficients of >0.99. Within-batch and batch-to-batch precision in the calibrated ranges for both analytes were <15% and <11% and plasma matrix effects always were below 50%. The assay was successfully applied to assess the pharmacokinetics of chlorzoxazone in two human volunteers after administration of single oral doses (2.5-5000 μg). This ultrasensitive assay allowed the determination of chlorzoxazone pharmacokinetics for 8 h after microdosing of 25 μg chlorzoxazone. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Proteomic and N-glycoproteomic quantification reveal aberrant changes in the human saliva of oral ulcer patients.

    PubMed

    Zhang, Ying; Wang, Xi; Cui, Dan; Zhu, Jun

    2016-12-01

    Human whole saliva is a vital body fluid for studying the physiology and pathology of the oral cavity. As a powerful technique for biomarker discovery, MS-based proteomic strategies have been introduced for saliva analysis and identified hundreds of proteins and N-glycosylation sites. However, there is still a lack of quantitative analysis, which is necessary for biomarker screening and biological research. In this study, we establish an integrated workflow combining stable isotope dimethyl labeling, HILIC enrichment, and high resolution MS for quantification of both the global proteome and the N-glycoproteome of human saliva from oral ulcer patients. With the help of advanced bioinformatics, we comprehensively studied oral ulcers at both protein and glycoprotein scales. Bioinformatics analyses revealed that starch digestion and protein degradation activities are inhibited while the immune response is promoted in oral ulcer saliva. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Simplified quantification and whole-body distribution of [18F]FE-PE2I in nonhuman primates: prediction for human studies.

    PubMed

    Varrone, Andrea; Gulyás, Balázs; Takano, Akihiro; Stabin, Michael G; Jonsson, Cathrine; Halldin, Christer

    2012-02-01

    [(18)F]FE-PE2I is a promising dopamine transporter (DAT) radioligand. In nonhuman primates, we examined the accuracy of simplified quantification methods and the estimates of radiation dose of [(18)F]FE-PE2I. In the quantification study, binding potential (BP(ND)) values previously reported in three rhesus monkeys using kinetic and graphical analyses of [(18)F]FE-PE2I were used for comparison. BP(ND) using the cerebellum as reference region was obtained with four reference tissue methods applied to the [(18)F]FE-PE2I data that were compared with the kinetic and graphical analyses. In the whole-body study, estimates of absorbed radiation were obtained in two cynomolgus monkeys. All reference tissue methods provided BP(ND) values within 5% of the values obtained with the kinetic and graphical analyses. The shortest imaging time for stable BP(ND) estimation was 54 min. The average effective dose of [(18)F]FE-PE2I was 0.021 mSv/MBq, similar to 2-deoxy-2-[(18)F]fluoro-d-glucose. The results in nonhuman primates suggest that [(18)F]FE-PE2I is suitable for accurate and stable DAT quantification, and its radiation dose estimates would allow for a maximal administered radioactivity of 476 MBq in human subjects. Copyright © 2012 Elsevier Inc. All rights reserved.
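
    As a simplified illustration of reference-tissue quantification (not one of the four methods evaluated in the study), the sketch below uses the late-frame tissue ratio BP_ND = C_target/C_reference - 1 with invented time-activity values.

      # Late-frame tissue-ratio estimate of BP_ND; all values are hypothetical.
      import numpy as np

      striatum = np.array([14.2, 13.5, 12.9])    # kBq/mL, last frames (made up)
      cerebellum = np.array([3.1, 2.9, 2.8])     # reference region (made up)

      bp_nd = striatum.mean() / cerebellum.mean() - 1
      print(f"BP_ND = {bp_nd:.2f}")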

  5. Quantification In Situ of Crystalline Cholesterol and Calcium Phosphate Hydroxyapatite in Human Atherosclerotic Plaques by Solid-State Magic Angle Spinning NMR

    PubMed Central

    Guo, Wen; Morrisett, Joel D.; DeBakey, Michael E.; Lawrie, Gerald M.; Hamilton, James A.

    2010-01-01

    Because of renewed interest in the progression, stabilization, and regression of atherosclerotic plaques, it has become important to develop methods for characterizing structural features of plaques in situ and noninvasively. We present a nondestructive method for ex vivo quantification of 2 solid-phase components of plaques: crystalline cholesterol and calcium phosphate salts. Magic angle spinning (MAS) nuclear magnetic resonance (NMR) spectra of human carotid endarterectomy plaques revealed 13C resonances of crystalline cholesterol monohydrate and a 31P resonance of calcium phosphate hydroxyapatite (CPH). The spectra were obtained under conditions in which there was little or no interference from other chemical components and were suitable for quantification in situ of the crystalline cholesterol and CPH. Carotid atherosclerotic plaques showed a wide variation in their crystalline cholesterol content. The calculated molar ratio of liquid-crystalline cholesterol to phospholipid ranged from 1.1 to 1.7, demonstrating different capabilities of the phospholipids to reduce crystallization of cholesterol. The spectral properties of the phosphate groups in CPH in carotid plaques were identical to those of CPH in bone. 31P MAS NMR is a simple, rapid method for quantification of calcium phosphate salts in tissue without extraction and time-consuming chemical analysis. Crystalline phases in intact atherosclerotic plaques (ex vivo) can be quantified accurately by solid-state 13C and 31P MAS NMR spectroscopy. PMID:10845882

  6. Simultaneous determination of blonanserin and its metabolite in human plasma and urine by liquid chromatography-tandem mass spectrometry: application to a pharmacokinetic study.

    PubMed

    Wen, Yu-Guan; Ni, Xiao-Jia; Zhang, Ming; Liu, Xia; Shang, De-Wei

    2012-08-15

    Blonanserin is a novel atypical antipsychotic with highly selective antagonist activity at dopamine D₂ and 5-HT(2A) receptors. N-desethyl blonanserin (blonanserin C) is its major active metabolite in human plasma. Herein we report a new highly sensitive, selective, and rapid liquid chromatography-tandem mass spectrometry method to determine blonanserin and blonanserin C simultaneously in human plasma and urine, with N-desethyl-chlor-blonanserin (blonanserin D) as internal standard (IS). Blonanserin and blonanserin C were extracted from aliquots of plasma and urine with ethyl acetate and dichloromethane (4:1) as the solvent, and chromatographic separation was performed using an Agilent Eclipse Plus C₁₈ column. The mobile phase was composed of acetonitrile and ammonium formate-formic acid buffer containing 5 mM ammonium formate and 0.1% formic acid (87:13, v/v). To quantify blonanserin, blonanserin C, and blonanserin D, respectively, the multiple reaction monitoring (MRM) transitions m/z 368.2→297.2, m/z 340.2→297.1, and m/z 356.2→313.3 were monitored in positive mode. The analysis time was about 5.5 min. The calibration curve was linear in the concentration range of 0.01-2 ng/ml. The lower limit of quantification reached 0.01 ng/ml. The intra- and inter-day precision and relative errors were less than 8.0% and 6.6% for three QC levels in plasma and urine. The current LC-MS/MS method was validated as simple, sensitive, and accurate and has been successfully applied to investigate the pharmacokinetics of blonanserin and blonanserin C in humans. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Automated solid-phase extraction-liquid chromatography-tandem mass spectrometry analysis of 11-nor-Delta9-tetrahydrocannabinol-9-carboxylic acid in human urine specimens: application to a high-throughput urine analysis laboratory.

    PubMed

    Robandt, P V; Klette, K L; Sibum, M

    2009-10-01

    An automated solid-phase extraction coupled with liquid chromatography and tandem mass spectrometry (SPE-LC-MS-MS) method for the analysis of 11-nor-Delta(9)-tetrahydrocannabinol-9-carboxylic acid (THC-COOH) in human urine specimens was developed. The method was linear (R(2) = 0.9986) to 1000 ng/mL with no carryover evidenced at 2000 ng/mL. Limits of quantification and detection were found to be 2 ng/mL. Interrun precision was evaluated at the 15 ng/mL level over nine batches spanning 15 days (n = 45). The coefficient of variation (%CV) was found to be 5.5% over the course of the validation. Intrarun precision of a 15 ng/mL control (n = 5) ranged from 0.58% CV to 7.4% CV for the same set of analytical batches. Interference was tested using (+/-)-11-hydroxy-Delta(9)-tetrahydrocannabinol, cannabidiol, (-)-Delta(8)-tetrahydrocannabinol, and cannabinol. One hundred and nineteen specimens previously found to contain THC-COOH by a previously validated gas chromatographic mass spectrometry (GC-MS) procedure were compared to the SPE-LC-MS-MS method. Excellent agreement was found (R(2) = 0.9925) for the parallel comparison study. The automated SPE procedure eliminates the human factors of specimen handling, extraction, and derivatization, thereby reducing labor costs and rework resulting from human error or technique issues. Additionally, method runtime is greatly reduced (e.g., during parallel studies the SPE-LC-MS-MS instrument was often finished with analysis by the time the technician finished the offline SPE and derivatization procedure prior to the GC-MS analysis).

  8. Quality Assessments of Long-Term Quantitative Proteomic Analysis of Breast Cancer Xenograft Tissues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Jian-Ying; Chen, Lijun; Zhang, Bai

    The identification of protein biomarkers requires large-scale analysis of human specimens to achieve statistical significance. In this study, we evaluated the long-term reproducibility of an iTRAQ (isobaric tags for relative and absolute quantification) based quantitative proteomics strategy using one channel for universal normalization across all samples. A total of 307 liquid chromatography tandem mass spectrometric (LC-MS/MS) analyses were completed, generating 107 one-dimensional (1D) LC-MS/MS datasets and 8 offline two-dimensional (2D) LC-MS/MS datasets (25 fractions for each set) for human-in-mouse breast cancer xenograft tissues representative of basal and luminal subtypes. Such large-scale studies require the implementation of robust metrics to assess the contributions of technical and biological variability in the qualitative and quantitative data. Accordingly, we developed a quantification confidence score based on the quality of each peptide-spectrum match (PSM) to remove quantification outliers from each analysis. After combining confidence score filtering and statistical analysis, reproducible protein identification and quantitative results were achieved from LC-MS/MS datasets collected over a 16 month period.

  9. Sulfur-based absolute quantification of proteins using isotope dilution inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Lee, Hyun-Seok; Heun Kim, Sook; Jeong, Ji-Seon; Lee, Yong-Moon; Yim, Yong-Hyeon

    2015-10-01

    An element-based reductive approach provides an effective means of realizing International System of Units (SI) traceability for high-purity biological standards. Here, we develop an absolute protein quantification method using double isotope dilution (ID) inductively coupled plasma mass spectrometry (ICP-MS) combined with microwave-assisted acid digestion for the first time. We validated the method and applied it to certify the candidate protein certified reference material (CRM) of human growth hormone (hGH). The concentration of hGH was determined by analysing the total amount of sulfur in hGH. Next, the size-exclusion chromatography method was used with ICP-MS to characterize and quantify sulfur-containing impurities. By subtracting the contribution of sulfur-containing impurities from the total sulfur content in the hGH CRM, we obtained a SI-traceable certification value. The quantification result obtained with the present method based on sulfur analysis was in excellent agreement with the result determined via a well-established protein quantification method based on amino acid analysis using conventional acid hydrolysis combined with an ID liquid chromatography-tandem mass spectrometry. The element-based protein quantification method developed here can be generally used for SI-traceable absolute quantification of proteins, especially pure-protein standards.
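
    The core arithmetic of sulfur-based protein quantification is a conversion from impurity-corrected sulfur content to protein mass; the sketch below works through that conversion with invented sulfur values, taking ~22 kDa and 7 sulfur atoms (Met plus Cys) per hGH molecule as nominal assumptions.

      # Impurity-corrected sulfur content converted to hGH mass concentration.
      M_S = 32.06          # g/mol of sulfur
      MW_HGH = 22_125.0    # g/mol, nominal molar mass of hGH
      N_S = 7              # sulfur atoms per molecule (Met + Cys), nominal

      total_s_ug_per_g = 10.5      # ug S per g of solution (made up)
      impurity_s_ug_per_g = 0.4    # sulfur bound in impurities (made up)

      hgh_ug_per_g = (total_s_ug_per_g - impurity_s_ug_per_g) / (N_S * M_S) * MW_HGH
      print(f"hGH = {hgh_ug_per_g / 1000:.2f} mg/g")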

  10. MODELING ENERGY EXPENDITURE AND OXYGEN CONSUMPTION IN HUMAN EXPOSURE MODELS: ACCOUNTING FOR FATIGUE AND EPOC

    EPA Science Inventory

    Human exposure and dose models often require a quantification of oxygen consumption for a simulated individual. Oxygen consumption is dependent on the modeled Individual's physical activity level as described in an activity diary. Activity level is quantified via standardized val...

  11. Telomere length analysis.

    PubMed

    Canela, Andrés; Klatt, Peter; Blasco, María A

    2007-01-01

    Most somatic cells of long-lived species undergo telomere shortening throughout life. Critically short telomeres trigger loss of cell viability in tissues, which has been related to alteration of tissue function and loss of regenerative capabilities in aging and aging-related diseases. Hence, telomere length is an important biomarker for aging and can be used in the prognosis of aging diseases. These facts highlight the importance of developing methods for telomere length determination that can be employed to evaluate telomere length during the human aging process. Telomere length quantification methods have improved greatly in accuracy and sensitivity since the development of the conventional telomeric Southern blot. Here, we describe the different methodologies recently developed for telomere length quantification, as well as their potential applications for human aging studies.

  12. Modeling human tracking error in several different anti-tank systems

    NASA Technical Reports Server (NTRS)

    Kleinman, D. L.

    1981-01-01

    An optimal control model for generating time histories of human tracking errors in antitank systems is outlined. Monte Carlo simulations of human operator responses for three Army antitank systems are compared. System/manipulator dependent data comparisons reflecting human operator limitations in perceiving displayed quantities and executing intended control motions are presented. Motor noise parameters are also discussed.
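
    The sketch below is a toy Monte Carlo of a tracking loop with observation and motor noise, a crude stand-in for the optimal control model described above; the gain and noise parameters are invented.

      # Toy Monte Carlo of tracking error with perceptual and motor noise.
      import numpy as np

      rng = np.random.default_rng(0)
      dt, steps, runs = 0.05, 400, 200
      gain, obs_sd, motor_sd = 2.0, 0.02, 0.05   # hypothetical operator parameters

      errors = np.zeros((runs, steps))
      for r in range(runs):
          target, aim = 0.0, 0.0
          for k in range(steps):
              target += 0.1 * dt                            # slowly moving target
              perceived = (target - aim) + rng.normal(0, obs_sd)
              aim += gain * perceived * dt + rng.normal(0, motor_sd) * dt
              errors[r, k] = target - aim

      print(f"RMS tracking error = {np.sqrt((errors ** 2).mean()):.3f}")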

  13. [Using some modern mathematical models of postmortem cooling of the human body for the time of death determination].

    PubMed

    Vavilov, A Iu; Viter, V I

    2007-01-01

    Mathematical aspects of the measurement errors of modern thermometric models of postmortem cooling of the human body are considered. The main diagnostic body sites used for thermometry are analyzed with a view to minimizing these errors. The authors propose practical recommendations for reducing errors in the determination of the time since death.
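
    A deliberately simplified version of such a thermometric model is a single-exponential (Newtonian) cooling law inverted for time since death; published models are more elaborate, and the cooling constant below is an assumption used only for illustration.

      # Newton-cooling inversion for time since death (simplified illustration).
      import math

      def hours_since_death(t_body, t_ambient, t_initial=37.0, k=0.08):
          """k is an assumed cooling constant per hour."""
          return -math.log((t_body - t_ambient) / (t_initial - t_ambient)) / k

      print(f"{hours_since_death(t_body=28.0, t_ambient=18.0):.1f} h since death")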

  14. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    PubMed

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), analysis across a range of scores ("shift") has been proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by these uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" approach compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty were taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates are achieved with dichotomization. While using the full range of mRS is conceptually appealing, the gain in information is counterbalanced by a decrease in reliability. The resulting errors need to be considered, since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
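
    The sketch below conveys the spirit of the comparison, propagating inter-rater uncertainty through an outcome distribution for the full scale versus a dichotomized cut-point; the mRS distribution and confusion matrix are invented, and this is not the authors' information-theoretic calculation.

      # Misclassification rates for full-scale vs dichotomized mRS (toy numbers).
      import numpy as np

      mrs_dist = np.array([0.10, 0.15, 0.20, 0.20, 0.20, 0.10, 0.05])  # mRS 0..6

      # P(assigned j | true i): mostly correct, some spill to neighbours (made up).
      confusion = np.zeros((7, 7))
      for i in range(7):
          confusion[i, i] = 0.80
          if i > 0:
              confusion[i, i - 1] = 0.10
          if i < 6:
              confusion[i, i + 1] = 0.10
      confusion /= confusion.sum(axis=1, keepdims=True)

      full_scale_error = 1 - (mrs_dist * np.diag(confusion)).sum()

      cut = 1   # dichotomize at mRS <= 1: only errors crossing the cut matter
      crossing_error = sum(mrs_dist[i] * confusion[i, j]
                           for i in range(7) for j in range(7)
                           if (i <= cut) != (j <= cut))
      print(f"full scale: {full_scale_error:.1%}, dichotomized: {crossing_error:.1%}")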

  15. The Hinton train disaster.

    PubMed

    Smiley, A M

    1990-10-01

    In February of 1986 a head-on collision occurred between a freight train and a passenger train in western Canada killing 23 people and causing over $30 million of damage. A Commission of Inquiry appointed by the Canadian government concluded that human error was the major reason for the collision. This report discusses the factors contributing to the human error: mainly poor work-rest schedules, the monotonous nature of the train driving task, insufficient information about train movements, and the inadequate backup systems in case of human error.

  16. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  17. What is the value of biomass remote sensing data for blue carbon inventories?

    NASA Astrophysics Data System (ADS)

    Byrd, K. B.; Simard, M.; Crooks, S.; Windham-Myers, L.

    2015-12-01

    The U.S. is testing approaches for accounting for carbon emissions and removals associated with wetland management according to 2013 IPCC Wetlands Supplement guidelines. Quality of reporting is measured from low (Tier 1) to high (Tier 3) depending upon data availability and analytical capacity. The use of satellite remote sensing data to derive carbon stocks and flux provides a practical approach for moving beyond IPCC Tier 1, the global default factor approach, to support Tier 2 or Tier 3 quantification of carbon emissions or removals. We are determining the "price of precision," or the extent to which improved satellite data will continue to increase the accuracy of "blue carbon" accounting. Tidal marsh biomass values are needed to quantify aboveground carbon stocks and stock changes, and to run process-based models of carbon accumulation. Maps of tidal marsh biomass have been produced from high resolution commercial and moderate resolution Landsat satellite data with relatively low error [percent normalized RMSE (%RMSE) from 7 to 14%]. Recently for a brackish marsh in Suisun Bay, California, we used Landsat 8 data to produce a biomass map that applied the Wide Dynamic Range Vegetation Index (WDRVI) (ρNIR*0.2 - ρR)/(ρNIR*0.2+ρR) to fully vegetated pixels and the Simple Ratio index (ρRed/ρGreen) to pixels with a mix of vegetation and water. Overall RMSE was 208 g/m2, while %RMSE = 13.7%. Also, preliminary use of airborne and spaceborne RADAR data in coastal Louisiana produced a marsh biomass map with 30% error. The integration of RADAR and LiDAR with optical remote sensing data has the potential to further reduce error in biomass estimation. In 2017, nations will report back to the U.N. Framework Convention on Climate Change on their experience in applying the Wetlands Supplement guidelines. These remote sensing efforts will mark an important step toward quantifying human impacts to wetlands within the global carbon cycle.
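
    For concreteness, the sketch below computes the WDRVI as given above and a range-normalized RMSE; the reflectance and biomass numbers are invented, and normalizing by the observed range is an assumption about the %RMSE convention.

      # WDRVI and a range-normalized RMSE with illustrative numbers.
      import numpy as np

      nir = np.array([0.42, 0.38, 0.45])   # hypothetical NIR reflectance
      red = np.array([0.06, 0.08, 0.05])   # hypothetical red reflectance
      wdrvi = (0.2 * nir - red) / (0.2 * nir + red)

      predicted = np.array([1450, 1320, 1600.0])   # g/m^2 biomass (made up)
      observed = np.array([1500, 1250, 1700.0])
      rmse = np.sqrt(((predicted - observed) ** 2).mean())
      pct_rmse = 100 * rmse / (observed.max() - observed.min())
      print(wdrvi.round(2), f"RMSE={rmse:.0f} g/m^2, %RMSE={pct_rmse:.1f}%")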

  18. Technical Note: Millimeter precision in ultrasound based patient positioning: Experimental quantification of inherent technical limitations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballhausen, Hendrik, E-mail: hendrik.ballhausen@med.uni-muenchen.de; Hieber, Sheila; Li, Minglun

    2014-08-15

    Purpose: To identify the relevant technical sources of error of a system based on three-dimensional ultrasound (3D US) for patient positioning in external beam radiotherapy. To quantify these sources of error in a controlled laboratory setting. To estimate the resulting end-to-end geometric precision of the intramodality protocol. Methods: Two identical free-hand 3D US systems at both the planning-CT and the treatment room were calibrated to the laboratory frame of reference. Every step of the calibration chain was repeated multiple times to estimate its contribution to overall systematic and random error. Optimal margins were computed given the identified and quantified systematic and random errors. Results: In descending order of magnitude, the identified and quantified sources of error were: alignment of calibration phantom to laser marks 0.78 mm, alignment of lasers in treatment vs planning room 0.51 mm, calibration and tracking of 3D US probe 0.49 mm, alignment of stereoscopic infrared camera to calibration phantom 0.03 mm. Under ideal laboratory conditions, these errors are expected to limit ultrasound-based positioning to an accuracy of 1.05 mm radially. Conclusions: The investigated 3D ultrasound system achieves an intramodal accuracy of about 1 mm radially in a controlled laboratory setting. The identified systematic and random errors require an optimal clinical tumor volume to planning target volume margin of about 3 mm. These inherent technical limitations do not prevent clinical use, including hypofractionation or stereotactic body radiation therapy.
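
    Combining the four reported error sources in quadrature reproduces the ~1.05 mm radial figure; the margin line below uses the common 2.5*Sigma + 0.7*sigma recipe with an assumed, purely illustrative split into systematic and random components, not the authors' exact margin calculation.

      # Quadrature sum of the reported error sources and an illustrative margin.
      import math

      errors_mm = [0.78, 0.51, 0.49, 0.03]
      radial = math.sqrt(sum(e ** 2 for e in errors_mm))
      print(f"combined radial error = {radial:.2f} mm")   # about 1.05 mm

      sigma_systematic, sigma_random = 0.9, 0.9            # mm, assumed split
      print(f"margin = {2.5 * sigma_systematic + 0.7 * sigma_random:.1f} mm")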

  19. Development and validation of a bioanalytical LC-MS method for the quantification of GHRP-6 in human plasma.

    PubMed

    Gil, Jeovanis; Cabrales, Ania; Reyes, Osvaldo; Morera, Vivian; Betancourt, Lázaro; Sánchez, Aniel; García, Gerardo; Moya, Galina; Padrón, Gabriel; Besada, Vladimir; González, Luis Javier

    2012-02-23

    Growth hormone-releasing peptide 6 (GHRP-6, His-(DTrp)-Ala-Trp-(DPhe)-Lys-NH₂, MW=872.44 Da) is a potent growth hormone secretagogue that exhibits a cytoprotective effect, maintaining tissue viability during acute ischemia/reperfusion episodes in different organs such as the small bowel, liver and kidneys. In the present work a quantitative method to analyze GHRP-6 in human plasma was developed and fully validated following FDA guidelines. The method uses an internal standard (IS) of GHRP-6 with ¹³C-labeled alanine for quantification. Sample processing includes a precipitation step with cold acetone to remove the most abundant plasma proteins, recovering the GHRP-6 peptide with a high yield. Quantification was achieved by LC-MS in positive full scan mode in a Q-Tof mass spectrometer. The sensitivity of the method was evaluated, establishing the lower limit of quantification at 5 ng/mL and a range for the calibration curve from 5 ng/mL to 50 ng/mL. A dilution integrity test was performed to analyze samples at higher concentrations of GHRP-6. The validation process involved five calibration curves and the analysis of quality control samples to determine accuracy and precision. The calibration curves showed R² higher than 0.988. The stability of the analyte and the IS was demonstrated under all conditions the samples would experience during real sample analysis. This method was applied to the quantification of GHRP-6 in plasma from nine healthy volunteers participating in a phase I clinical trial. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Quantification of silver nanoparticle uptake and distribution within individual human macrophages by FIB/SEM slice and view.

    PubMed

    Guehrs, Erik; Schneider, Michael; Günther, Christian M; Hessing, Piet; Heitz, Karen; Wittke, Doreen; López-Serrano Oliver, Ana; Jakubowski, Norbert; Plendl, Johanna; Eisebitt, Stefan; Haase, Andrea

    2017-03-21

    Quantification of nanoparticle (NP) uptake in cells or tissues is very important for safety assessment. Often, electron microscopy based approaches are used for this purpose, which allow imaging at very high resolution. However, precise quantification of NP numbers in cells and tissues remains challenging. The aim of this study was to present a novel approach, that combines precise quantification of NPs in individual cells together with high resolution imaging of their intracellular distribution based on focused ion beam/ scanning electron microscopy (FIB/SEM) slice and view approaches. We quantified cellular uptake of 75 nm diameter citrate stabilized silver NPs (Ag 75 Cit) into an individual human macrophage derived from monocytic THP-1 cells using a FIB/SEM slice and view approach. Cells were treated with 10 μg/ml for 24 h. We investigated a single cell and found in total 3138 ± 722 silver NPs inside this cell. Most of the silver NPs were located in large agglomerates, only a few were found in clusters of fewer than five NPs. Furthermore, we cross-checked our results by using inductively coupled plasma mass spectrometry and could confirm the FIB/SEM results. Our approach based on FIB/SEM slice and view is currently the only one that allows the quantification of the absolute dose of silver NPs in individual cells and at the same time to assess their intracellular distribution at high resolution. We therefore propose to use FIB/SEM slice and view to systematically analyse the cellular uptake of various NPs as a function of size, concentration and incubation time.
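
    In the spirit of the ICP-MS cross-check, the sketch below converts a hypothetical per-cell silver mass into an equivalent number of 75 nm spheres using the bulk density of silver; the per-cell mass is invented, chosen only to show that a few picograms corresponds to a few thousand particles.

      # Per-cell silver mass converted to an equivalent count of 75 nm particles.
      import math

      d_nm = 75.0
      rho_ag = 10.49e-21                                # g per nm^3 (10.49 g/cm^3)
      mass_per_np = rho_ag * math.pi / 6 * d_nm ** 3    # sphere volume * density

      ag_per_cell_pg = 7.3                              # hypothetical ICP-MS value
      n_particles = ag_per_cell_pg * 1e-12 / mass_per_np
      print(f"~{n_particles:.0f} particles per cell")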
