Sample records for "provide quantitative estimates"

  1. Stroke onset time estimation from multispectral quantitative magnetic resonance imaging in a rat model of focal permanent cerebral ischemia.

    PubMed

    McGarry, Bryony L; Rogers, Harriet J; Knight, Michael J; Jokivarsi, Kimmo T; Sierra, Alejandra; Gröhn, Olli Hj; Kauppinen, Risto A

    2016-08-01

    Quantitative T2 relaxation magnetic resonance imaging allows estimation of stroke onset time. We aimed to examine the accuracy of quantitative T1 and quantitative T2 relaxation times alone and in combination to provide estimates of stroke onset time in a rat model of permanent focal cerebral ischemia and map the spatial distribution of elevated quantitative T1 and quantitative T2 to assess tissue status. Permanent middle cerebral artery occlusion was induced in Wistar rats. Animals were scanned at 9.4T for quantitative T1, quantitative T2, and Trace of Diffusion Tensor (Dav) up to 4 h post-middle cerebral artery occlusion. Time courses of differentials of quantitative T1 and quantitative T2 in ischemic and non-ischemic contralateral brain tissue (ΔT1, ΔT2) and volumes of tissue with elevated T1 and T2 relaxation times (f1, f2) were determined. TTC staining was used to highlight permanent ischemic damage. ΔT1, ΔT2, f1, f2, and the volume of tissue with both elevated quantitative T1 and quantitative T2 (V(Overlap)) increased with time post-middle cerebral artery occlusion, allowing stroke onset time to be estimated. V(Overlap) provided the most accurate estimate, with an uncertainty of ±25 min. At all time-points, regions with elevated relaxation times were smaller than areas with Dav-defined ischemia. Stroke onset time can be determined from quantitative T1 and quantitative T2 relaxation times and tissue volumes. Combining quantitative T1 and quantitative T2 provides the most accurate estimate and potentially identifies irreversibly damaged brain tissue. © 2016 World Stroke Organization.
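
    As a toy illustration of the onset-time idea (not the fitting procedure used in the study; all numbers below are invented), one can fit a linear trend of ΔT2 against time post-occlusion on reference data and invert it for a new observation:

    ```python
    import numpy as np

    # Hypothetical reference data: time post-occlusion (min) vs. Delta-T2 (ms).
    # Values are illustrative only and are not taken from the study.
    t_ref = np.array([30.0, 60.0, 120.0, 180.0, 240.0])   # minutes
    dT2_ref = np.array([1.5, 2.8, 5.1, 7.4, 9.6])          # ms

    # Simple linear trend dT2 = a * t + b fitted to the reference animals.
    a, b = np.polyfit(t_ref, dT2_ref, 1)

    def estimate_onset_minutes(dT2_observed: float) -> float:
        """Invert the fitted trend to estimate minutes elapsed since stroke onset."""
        return (dT2_observed - b) / a

    print(f"Estimated time since onset: {estimate_onset_minutes(4.0):.0f} min")
    ```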

  2. Quantitative genetic tools for insecticide resistance risk assessment: estimating the heritability of resistance

    Treesearch

    Michael J. Firko; Jane Leslie Hayes

    1990-01-01

    Quantitative genetic studies of resistance can provide estimates of genetic parameters not available with other types of genetic analyses. Three methods are discussed for estimating the amount of additive genetic variation in resistance to individual insecticides and subsequent estimation of heritability (h2) of resistance. Sibling analysis and...

  3. Quantitative subsurface analysis using frequency modulated thermal wave imaging

    NASA Astrophysics Data System (ADS)

    Subhani, S. K.; Suresh, B.; Ghali, V. S.

    2018-01-01

    Quantitative depth analysis of an anomaly with enhanced depth resolution is a challenging task in estimating the depth of a subsurface anomaly using thermography. Frequency modulated thermal wave imaging, introduced earlier, provides a complete depth scan of the object by stimulating it with a suitable band of frequencies and then analyzing the subsequent thermal response with a suitable post-processing approach to resolve subsurface details. But conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with a limited frequency resolution and thus contribute to a finite depth resolution. The spectral zooming provided by the chirp z-transform facilitates enhanced frequency resolution, which can further improve the depth resolution to axially explore the finest subsurface features. Quantitative depth analysis with this augmented depth resolution is proposed to provide a close estimate of the actual depth of the subsurface anomaly. This manuscript experimentally validates this enhanced depth resolution using non-stationary thermal wave imaging and offers a first, unique solution for quantitative depth estimation in frequency modulated thermal wave imaging.

  4. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding

    PubMed Central

    Smith, Eric G.

    2015-01-01

    Background:  Nonrandomized studies typically cannot account for confounding from unmeasured factors.  Method:  A method is presented that exploits the recently-identified phenomenon of  “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors.  Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure.  Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results:  Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met.  Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations:  Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions:  To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward.  The method's routine usefulness, however, has not yet been established, nor has the method been fully validated. Rapid further investigation of this novel method is clearly indicated, given the potential value of its quantitative or qualitative output. PMID:25580226
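
    To make the arithmetic at the heart of the method concrete, a minimal sketch follows. The numbers are invented, and the exact operationalization (dividing by the incremental amplification, i.e., the amplification factor minus one) is our reading of the description above rather than code from the paper: if the base model carries residual confounding C and the amplified model carries A x C, the observed change between models is (A - 1) x C, so C is recovered by division after adjusting for any direct variable-outcome association.

    ```python
    # Hypothetical treatment-effect estimates (e.g. on a log-odds-ratio scale).
    effect_base = 0.40        # estimate from the base propensity-score model
    effect_amplified = 0.52   # estimate after adding the exposure-predicting variable(s)
    amplification = 1.25      # estimated confounding-amplification factor A
    outcome_adjustment = 0.02 # correction for a direct variable(s)-outcome association

    # Residual confounding: adjusted change between nested models divided by (A - 1).
    residual_confounding = (effect_amplified - effect_base - outcome_adjustment) / (amplification - 1.0)
    print(f"Estimated residual confounding: {residual_confounding:.2f}")
    ```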

  5. Quantitative method of medication system interface evaluation.

    PubMed

    Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F

    2007-01-01

    The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of estimated failure rates provided quantitative data for fault analysis. Authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making a system inefficient. This method of interface analysis, called Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.

  6. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction: Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e., precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods: A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results: Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion: Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
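
    A toy Monte Carlo along these lines (not the authors' simulation design; the distributional choices are assumptions) shows how the coverage of a no-bias 95% confidence interval for a new patient's measurement degrades as fixed percentage bias grows:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def coverage(fixed_bias_pct: float, wcv: float = 0.10, n_trials: int = 20000) -> float:
        """Empirical coverage of a 95% CI constructed under a no-bias assumption.

        wcv: within-subject coefficient of variation (measurement precision).
        fixed_bias_pct: fixed percentage bias of the measurement procedure.
        """
        true_value = 100.0
        sd = wcv * true_value
        measured = true_value * (1 + fixed_bias_pct / 100) + rng.normal(0.0, sd, n_trials)
        lower, upper = measured - 1.96 * sd, measured + 1.96 * sd
        return float(np.mean((lower <= true_value) & (true_value <= upper)))

    for bias in (0, 5, 12, 20):
        print(f"fixed bias {bias:>2}%: empirical coverage = {coverage(bias):.3f}")
    ```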

  7. [Quantitative estimation of vegetation cover and management factor in USLE and RUSLE models by using remote sensing data: a review].

    PubMed

    Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie

    2012-06-01

    Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at the regional scale. A rational estimation of the vegetation cover and management factor, among the most important parameters in USLE or RUSLE, is particularly important for the accurate prediction of soil erosion. The traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly extract the vegetation cover and management factor at macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for the estimation of the vegetation cover and management factor over broad geographic areas. This paper summarizes the research findings on the quantitative estimation of the vegetation cover and management factor using remote sensing data and analyzes the advantages and disadvantages of the various methods, with the aim of providing a reference for further research and for quantitative estimation of the vegetation cover and management factor at large scales.
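
    Among the remote-sensing shortcuts that reviews of this kind survey, one frequently cited relation maps NDVI directly to the cover-management (C) factor. The exponential form and parameter values below follow the relation commonly attributed to Van der Knijff et al.; treat them as an illustrative assumption rather than a recommendation from this particular paper.

    ```python
    import numpy as np

    def c_factor_from_ndvi(ndvi: np.ndarray, alpha: float = 2.0, beta: float = 1.0) -> np.ndarray:
        """Cover-management (C) factor from NDVI: C = exp(-alpha * NDVI / (beta - NDVI)).

        The result is clipped to [0, 1], the physically meaningful range of the C factor.
        """
        c = np.exp(-alpha * ndvi / (beta - ndvi))
        return np.clip(c, 0.0, 1.0)

    ndvi = np.array([0.1, 0.3, 0.5, 0.7])   # e.g. per-pixel NDVI from a satellite composite
    print(np.round(c_factor_from_ndvi(ndvi), 3))
    ```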

  8. Quantitative Estimate of the Relation Between Rolling Resistance on Fuel Consumption of Class 8 Tractor Trailers Using Both New and Retreaded Tires (SAE Paper 2014-01-2425)

    EPA Science Inventory

    Road tests of class 8 tractor trailers were conducted by the US Environmental Protection Agency on new and retreaded tires of varying rolling resistance in order to provide estimates of the quantitative relationship between rolling resistance and fuel consumption.

  9. Harnessing quantitative genetics and genomics for understanding and improving complex traits in crops

    USDA-ARS?s Scientific Manuscript database

    Classical quantitative genetics aids crop improvement by providing the means to estimate heritability, genetic correlations, and predicted responses to various selection schemes. Genomics has the potential to aid quantitative genetics and applied crop improvement programs via large-scale, high-thro...

  10. Quantitative Doppler Analysis Using Conventional Color Flow Imaging Acquisitions.

    PubMed

    Karabiyik, Yucel; Ekroll, Ingvild Kinn; Eik-Nes, Sturla H; Lovstakken, Lasse

    2018-05-01

    Interleaved acquisitions used in conventional triplex mode result in a tradeoff between the frame rate and the quality of the velocity estimates. On the other hand, workflow becomes inefficient when the user has to switch between different modes, and measurement variability is increased. This paper investigates the use of the power spectral Capon estimator for quantitative Doppler analysis of data acquired with conventional color flow imaging (CFI) schemes. To preserve the number of samples used for velocity estimation, only spatial averaging was utilized, and clutter rejection was performed after spectral estimation. The resulting velocity spectra were evaluated in terms of spectral width using a recently proposed spectral envelope estimator. The spectral envelopes were also used for Doppler index calculations using in vivo and string phantom acquisitions. In vivo results demonstrated that the Capon estimator can provide spectral estimates with sufficient quality for quantitative analysis using packet-based CFI acquisitions. The calculated Doppler indices were similar to the values calculated using spectrograms estimated on a commercial ultrasound scanner.
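
    For readers unfamiliar with the Capon (minimum-variance) estimator referred to above, a compact sketch of the textbook formulation is given below: the power at each Doppler frequency is P(f) = 1 / (a(f)^H R^{-1} a(f)), with R estimated from sub-windows of the slow-time packet. The paper's specific averaging, clutter-rejection, and envelope steps are not reproduced here.

    ```python
    import numpy as np

    def capon_spectrum(x: np.ndarray, n_freqs: int = 128, subarray: int = 8,
                       diag_load: float = 1e-2) -> np.ndarray:
        """Capon (minimum variance) power spectrum of a complex slow-time ensemble x."""
        # Covariance estimated by averaging outer products over sliding sub-windows.
        segments = np.array([x[i:i + subarray] for i in range(len(x) - subarray + 1)])
        R = segments.T @ segments.conj() / len(segments)
        R += diag_load * np.trace(R).real / subarray * np.eye(subarray)  # diagonal loading
        R_inv = np.linalg.inv(R)

        freqs = np.linspace(-0.5, 0.5, n_freqs, endpoint=False)  # normalized Doppler frequency
        spectrum = np.empty(n_freqs)
        for k, f in enumerate(freqs):
            a = np.exp(2j * np.pi * f * np.arange(subarray))      # steering vector
            spectrum[k] = 1.0 / np.real(a.conj() @ R_inv @ a)
        return spectrum

    # Example: a 16-sample packet with one Doppler component at 0.2 plus noise.
    rng = np.random.default_rng(1)
    n = 16
    x = np.exp(2j * np.pi * 0.2 * np.arange(n)) + 0.1 * (rng.normal(size=n) + 1j * rng.normal(size=n))
    spec = capon_spectrum(x)
    print("peak at normalized frequency", np.linspace(-0.5, 0.5, 128, endpoint=False)[spec.argmax()])
    ```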

  11. Quantifying Uncertainty in Near Surface Electromagnetic Imaging Using Bayesian Methods

    NASA Astrophysics Data System (ADS)

    Blatter, D. B.; Ray, A.; Key, K.

    2017-12-01

    Geoscientists commonly use electromagnetic methods to image the Earth's near surface. Field measurements of EM fields are made (often with the aid of an artificial EM source) and then used to infer near-surface electrical conductivity via a process known as inversion. In geophysics, the standard inversion toolkit is robust and can provide an estimate of the Earth's near-surface conductivity that is both geologically reasonable and compatible with the measured field data. However, standard inverse methods struggle to provide a sense of the uncertainty in the estimate they provide. This is because the task of finding an Earth model that explains the data to within measurement error is non-unique - that is, there are many, many such models; but the standard methods provide only one "answer." An alternative method, known as Bayesian inversion, seeks to explore the full range of Earth model parameters that can adequately explain the measured data, rather than attempting to find a single, "ideal" model. Bayesian inverse methods can therefore provide a quantitative assessment of the uncertainty inherent in trying to infer near-surface conductivity from noisy, measured field data. This study applies a Bayesian inverse method (trans-dimensional Markov chain Monte Carlo) to transient airborne EM data previously collected over Taylor Valley - one of the McMurdo Dry Valleys in Antarctica. Our results confirm the reasonableness of previous estimates (made using standard methods) of near-surface conductivity beneath Taylor Valley. In addition, we demonstrate quantitatively the uncertainty associated with those estimates, showing that Bayesian inverse methods can provide quantitative uncertainty for estimates of near-surface conductivity.

  12. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    PubMed

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
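
    For the blood-alcohol case (approach ii above), the computation can be as simple as the sketch below: pool the laboratory's differences from the participant consensus means over many proficiency rounds and take their spread, together with any systematic offset, as the uncertainty. The numbers are invented and the pooling choice is a simplification of what a laboratory would actually document.

    ```python
    import numpy as np

    # Hypothetical proficiency-test history: this lab's result vs. participant mean (g/100 mL).
    lab_results = np.array([0.081, 0.119, 0.162, 0.078, 0.201, 0.143])
    consensus_means = np.array([0.080, 0.121, 0.160, 0.080, 0.198, 0.145])

    differences = lab_results - consensus_means
    bias = differences.mean()                          # systematic offset from consensus
    u_single = differences.std(ddof=1)                 # spread of the differences
    expanded_u = 2.0 * np.sqrt(bias**2 + u_single**2)  # simple expanded uncertainty (k = 2)

    print(f"bias = {bias:+.4f}, u = {u_single:.4f}, expanded U (k=2) = {expanded_u:.4f} g/100 mL")
    ```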

  13. Inter- and intra-observer agreement of BI-RADS-based subjective visual estimation of amount of fibroglandular breast tissue with magnetic resonance imaging: comparison to automated quantitative assessment.

    PubMed

    Wengert, G J; Helbich, T H; Woitek, R; Kapetas, P; Clauser, P; Baltzer, P A; Vogl, W-D; Weber, M; Meyer-Baese, A; Pinker, Katja

    2016-11-01

    To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. • Subjective FGT estimation with MRI shows moderate intra-/inter-observer agreement in inexperienced readers. • Inter-observer agreement can be improved by practice and experience. • Automated observer-independent quantitative measurements can provide reliable and standardized assessment of FGT with MRI.
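
    Since agreement in this study is summarized with Cohen's kappa, a minimal sketch of that statistic (chance-corrected agreement between two readers assigning categorical scores) may be helpful; the ratings below are invented BI-RADS-style FGT categories.

    ```python
    import numpy as np

    def cohens_kappa(ratings1, ratings2, categories) -> float:
        """Cohen's kappa for two raters scoring the same items into categories."""
        r1, r2 = np.asarray(ratings1), np.asarray(ratings2)
        observed = np.mean(r1 == r2)
        # Chance agreement from each rater's marginal category frequencies.
        expected = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
        return (observed - expected) / (1 - expected)

    reader_a = ["a", "b", "b", "c", "d", "b", "a", "c"]
    reader_b = ["a", "b", "c", "c", "d", "b", "b", "c"]
    print(f"kappa = {cohens_kappa(reader_a, reader_b, 'abcd'):.2f}")
    ```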

  14. Relationship and Variation of qPCR and Culturable Enterococci Estimates in Ambient Surface Waters Are Predictable

    EPA Science Inventory

    The quantitative polymerase chain reaction (qPCR) method provides rapid estimates of fecal indicator bacteria densities that have been indicated to be useful in the assessment of water quality. Primarily because this method provides faster results than standard culture-based meth...

  15. QFASAR: Quantitative fatty acid signature analysis with R

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2017-01-01

    Knowledge of predator diets provides essential insights into their ecology, yet diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) is a popular method of estimating diet composition that continues to be investigated and extended. However, software to implement QFASA has only recently become publicly available. I summarize a new R package, qfasar, for diet estimation using QFASA methods. The package also provides functionality to evaluate and potentially improve the performance of a library of prey signature data, compute goodness-of-fit diagnostics, and support simulation-based research. Several procedures in the package have not previously been published. qfasar makes traditional and recently published QFASA diet estimation methods accessible to ecologists for the first time. Use of the package is illustrated with signature data from Chukchi Sea polar bears and potential prey species.

  16. Comparison of blood flow models and acquisitions for quantitative myocardial perfusion estimation from dynamic CT

    NASA Astrophysics Data System (ADS)

    Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.

    2014-04-01

    Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml/(min·g); cardiac output = 3, 5, 8 L/min). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability, and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by 47.5% on average, and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This suggests that there is no particular advantage among the quantitative estimation methods, nor any advantage to performing dose reduction via tube current reduction rather than via temporal sampling reduction. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.

  17. Improving statistical inference on pathogen densities estimated by quantitative molecular methods: malaria gametocytaemia as a case study.

    PubMed

    Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S

    2015-01-16

    Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques, are compared; a traditional method and a mixed model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.

  18. Radar QPE for hydrological design: Intensity-Duration-Frequency curves

    NASA Astrophysics Data System (ADS)

    Marra, Francesco; Morin, Efrat

    2015-04-01

    Intensity-duration-frequency (IDF) curves are widely used in flood risk management since they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. They are estimated by analyzing the extreme values of rainfall records, usually based on rain gauge data. This point-based approach raises two issues: first, hydrological design applications generally need IDF information for the entire catchment rather than for a point; second, the representativeness of point measurements decreases with the distance from the measurement location, especially in regions characterized by steep climatological gradients. Weather radar, providing high-resolution distributed rainfall estimates over wide areas, has the potential to overcome these issues. Two objections usually restrain this approach: (i) the short length of data records and (ii) the reliability of quantitative precipitation estimation (QPE) of the extremes. This work explores the potential use of weather radar estimates for the identification of IDF curves by means of a long radar archive and a combined physical and quantitative adjustment of radar estimates. The Shacham weather radar, located in the eastern Mediterranean area (Tel Aviv, Israel), has archived data since 1990, providing rainfall estimates over 23 years for a region characterized by strong climatological gradients. Radar QPE is obtained by correcting the effects of pointing errors, ground echoes, beam blockage, attenuation, and vertical variations of reflectivity. Quantitative accuracy is then ensured with a range-dependent bias adjustment technique, and the reliability of radar QPE is assessed by comparison with gauge measurements. IDF curves are derived from the radar data using the annual extremes method and compared with gauge-based curves. Results from 14 study cases will be presented, focusing on the effects of record length and QPE accuracy, exploring the potential application of radar IDF curves for ungauged locations, and providing insights on the use of radar QPE for hydrological design studies.
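
    As background on the "annual extremes" step mentioned above, a minimal sketch: fit an extreme-value distribution to the annual maximum intensities for a single duration and read off the intensity at a chosen return period, giving one point of an IDF curve. A Gumbel fit is used here purely for brevity; it is an assumption, not necessarily the distribution adopted in the study, and the data are invented.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical annual maximum 1-hour intensities (mm/h) from radar QPE at one location.
    annual_max = np.array([22.0, 35.5, 18.2, 41.0, 27.3, 30.8, 25.1, 48.6, 33.0, 29.4,
                           38.7, 21.5, 26.9, 44.2, 31.6, 24.8, 36.1, 28.3, 40.5, 23.7])

    loc, scale = stats.gumbel_r.fit(annual_max)

    def intensity_for_return_period(T_years: float) -> float:
        """Intensity whose annual exceedance probability is 1/T (one IDF point)."""
        return float(stats.gumbel_r.ppf(1.0 - 1.0 / T_years, loc=loc, scale=scale))

    for T in (2, 10, 50):
        print(f"{T:>2}-year, 1-hour intensity: {intensity_for_return_period(T):.1f} mm/h")
    ```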

  19. Quantitative Compactness Estimates for Hamilton-Jacobi Equations

    NASA Astrophysics Data System (ADS)

    Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.

    2016-02-01

    We study quantitative compactness estimates in $W^{1,1}_{\mathrm{loc}}$ for the map $S_t$, $t > 0$, that is associated with the given initial data $u_0 \in \mathrm{Lip}(\mathbb{R}^N)$ for the corresponding solution $S_t u_0$ of a Hamilton-Jacobi equation $u_t + H(\nabla_x u) = 0$, $t \ge 0$, $x \in \mathbb{R}^N$, with a uniformly convex Hamiltonian $H = H(p)$. We provide upper and lower estimates of order $1/\varepsilon^N$ on the Kolmogorov $\varepsilon$-entropy in $W^{1,1}$ of the image through the map $S_t$ of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws. XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.
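
    Restated schematically in display form (a paraphrase of the abstract; $c_1, c_2$ denote positive constants and $\mathcal{C}$ a set of bounded, compactly supported initial data):

    \[
      c_1\,\varepsilon^{-N} \;\le\; \mathcal{H}_{\varepsilon}\!\left( S_T(\mathcal{C}) \mid W^{1,1} \right) \;\le\; c_2\,\varepsilon^{-N},
    \]

    where $\mathcal{H}_{\varepsilon}$ is the Kolmogorov $\varepsilon$-entropy and $S_T$ the solution operator at time $T > 0$ of $u_t + H(\nabla_x u) = 0$ with uniformly convex $H$.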

  20. Reliability and precision of pellet-group counts for estimating landscape-level deer density

    Treesearch

    David S. deCalesta

    2013-01-01

    This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...

  1. 78 FR 53336 - List of Fisheries for 2013

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-29

    ... provided on the LOF are solely used for descriptive purposes and will not be used in determining future... this information to determine whether the fishery can be classified on the LOF based on quantitative... does not have a quantitative estimate of the number of mortalities and serious injuries of pantropical...

  2. Genomic Quantitative Genetics to Study Evolution in the Wild.

    PubMed

    Gienapp, Phillip; Fior, Simone; Guillaume, Frédéric; Lasky, Jesse R; Sork, Victoria L; Csilléry, Katalin

    2017-12-01

    Quantitative genetic theory provides a means of estimating the evolutionary potential of natural populations. However, this approach was previously feasible only in systems where the genetic relatedness between individuals could be inferred from pedigrees or experimental crosses. The genomic revolution opened up the possibility of obtaining the realized proportion of the genome shared among individuals in natural populations of virtually any species, which promises more accurate estimates of quantitative genetic parameters. Such a 'genomic' quantitative genetics approach relies on fewer assumptions, offers greater methodological flexibility, and is thus expected to greatly enhance our understanding of evolution in natural populations, for example, in the context of adaptation to environmental change, eco-evolutionary dynamics, and biodiversity conservation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. qF-SSOP: real-time optical property corrected fluorescence imaging

    PubMed Central

    Valdes, Pablo A.; Angelo, Joseph P.; Choi, Hak Soo; Gioux, Sylvain

    2017-01-01

    Fluorescence imaging is well suited to provide image guidance during resections in oncologic and vascular surgery. However, the distorting effects of tissue optical properties on the emitted fluorescence are poorly compensated for on even the most advanced fluorescence image guidance systems, leading to subjective and inaccurate estimates of tissue fluorophore concentrations. Here we present a novel fluorescence imaging technique that performs real-time (i.e., video rate) optical property corrected fluorescence imaging. We perform full field of view simultaneous imaging of tissue optical properties using Single Snapshot of Optical Properties (SSOP) and fluorescence detection. The estimated optical properties are used to correct the emitted fluorescence with a quantitative fluorescence model to provide quantitative fluorescence-Single Snapshot of Optical Properties (qF-SSOP) images with less than 5% error. The technique is rigorous, fast, and quantitative, enabling ease of integration into the surgical workflow with the potential to improve molecular guidance intraoperatively. PMID:28856038

  4. Overview of T.E.S.T. (Toxicity Estimation Software Tool)

    EPA Science Inventory

    This talk provides an overview of T.E.S.T. (Toxicity Estimation Software Tool). T.E.S.T. predicts toxicity values and physical properties using a variety of different QSAR (quantitative structure activity relationship) approaches including hierarchical clustering, group contribut...

  5. Simulating realistic predator signatures in quantitative fatty acid signature analysis

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.

    2015-01-01

    Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.

  6. Validation of Passive Sampling Devices for Monitoring of Munitions Constituents in Underwater Environments

    DTIC Science & Technology

    2017-06-30

    ...Research and Development Program [SERDP] project #ER-2542) into the canister would provide enhancement of the quantitative estimation of the TWA... Advantages and limitations compared to other sampling techniques...

  7. Physiological frailty index (PFI): quantitative in-life estimate of individual biological age in mice.

    PubMed

    Antoch, Marina P; Wrobel, Michelle; Kuropatwinski, Karen K; Gitlin, Ilya; Leonova, Katerina I; Toshkov, Ilia; Gleiberman, Anatoli S; Hutson, Alan D; Chernova, Olga B; Gudkov, Andrei V

    2017-03-19

    The development of healthspan-extending pharmaceuticals requires quantitative estimation of age-related progressive physiological decline. In humans, individual health status can be quantitatively assessed by means of a frailty index (FI), a parameter which reflects the scale of accumulation of age-related deficits. However, adaptation of this methodology to animal models is a challenging task, since it includes multiple subjective parameters. Here we report the development of a quantitative, non-invasive procedure to estimate the biological age of an individual animal by creating a physiological frailty index (PFI). We demonstrated the dynamics of the PFI increase during chronological aging of male and female NIH Swiss mice. We also demonstrated accelerated growth of the PFI in animals placed on a high-fat diet, reflecting the acceleration of aging by obesity, and provide a tool for its quantitative assessment. Additionally, we showed that the PFI could reveal the anti-aging effect of the mTOR inhibitor rapatar (a bioavailable formulation of rapamycin) prior to registration of its effects on longevity. The PFI revealed substantial sex-related differences in normal chronological aging and in the efficacy of detrimental (high-fat diet) or beneficial (rapatar) aging-modulatory factors. Together, these data introduce the PFI as a reliable, non-invasive, quantitative tool suitable for testing potential anti-aging pharmaceuticals in pre-clinical studies.

  8. Techniques and methods for estimating abundance of larval and metamorphosed sea lampreys in Great Lakes tributaries, 1995 to 2001

    USGS Publications Warehouse

    Slade, Jeffrey W.; Adams, Jean V.; Christie, Gavin C.; Cuddy, Douglas W.; Fodale, Michael F.; Heinrich, John W.; Quinlan, Henry R.; Weise, Jerry G.; Weisser, John W.; Young, Robert J.

    2003-01-01

    Before 1995, Great Lakes streams were selected for lampricide treatment based primarily on qualitative measures of the relative abundance of larval sea lampreys, Petromyzon marinus. New integrated pest management approaches required standardized quantitative measures of sea lamprey. This paper evaluates historical larval assessment techniques and data and describes how new standardized methods for estimating abundance of larval and metamorphosed sea lampreys were developed and implemented. These new methods have been used to estimate larval and metamorphosed sea lamprey abundance in about 100 Great Lakes streams annually and to rank them for lampricide treatment since 1995. Implementation of these methods has provided a quantitative means of selecting streams for treatment based on treatment cost and estimated production of metamorphosed sea lampreys, provided managers with a tool to estimate potential recruitment of sea lampreys to the Great Lakes and the ability to measure the potential consequences of not treating streams, resulting in a more justifiable allocation of resources. The empirical data produced can also be used to simulate the impacts of various control scenarios.

  9. Quantitative software models for the estimation of cost, size, and defects

    NASA Technical Reports Server (NTRS)

    Hihn, J.; Bright, L.; Decker, B.; Lum, K.; Mikulski, C.; Powell, J.

    2002-01-01

    The presentation will provide a brief overview of the SQI measurement program as well as describe each of these models and how they are currently used to support JPL project, task, and software managers in estimating and planning future software systems and subsystems.

  10. Misconceptions of Astronomical Distances

    ERIC Educational Resources Information Center

    Miller, Brian W.; Brewer, William F.

    2010-01-01

    Previous empirical studies using multiple-choice procedures have suggested that there are misconceptions about the scale of astronomical distances. The present study provides a quantitative estimate of the nature of this misconception among US university students by asking them, in an open-ended response format, to make estimates of the distances…

  11. Polymer on Top: Current Limits and Future Perspectives of Quantitatively Evaluating Surface Grafting.

    PubMed

    Michalek, Lukas; Barner, Leonie; Barner-Kowollik, Christopher

    2018-03-07

    Well-defined polymer strands covalently tethered onto solid substrates determine the properties of the resulting functional interface. Herein, the current approaches to determining quantitative grafting densities are assessed. Based on a brief introduction to the key theories describing polymer brush regimes, a user's guide is provided for estimating maximum chain coverage and, importantly, for examining the most frequently employed approaches for determining grafting densities, i.e., dry thickness measurements, gravimetric assessment, and swelling experiments. An estimation of the reliability of these determination methods is provided by carefully evaluating their assumptions and assessing the stability of the underpinning equations. A practical guide for comparatively and quantitatively evaluating the reliability of a given approach is thus provided, enabling the field to critically judge experimentally determined grafting densities and to avoid the reporting of grafting densities that fall outside the physically realistic parameter space. The assessment concludes with a perspective on the development of advanced approaches for determining grafting density, in particular single-chain methodologies. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
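
    Of the three approaches named above, the dry-thickness route is the simplest to write down: with dry film thickness h, bulk polymer density rho, and number-average molar mass M_n, the grafting density follows from sigma = h * rho * N_A / M_n. The sketch below applies this textbook relation; the input values are invented for illustration.

    ```python
    from scipy import constants

    def grafting_density(dry_thickness_nm: float, density_g_cm3: float, mn_g_mol: float) -> float:
        """Grafting density (chains/nm^2) from dry thickness: sigma = h * rho * N_A / M_n."""
        h_cm = dry_thickness_nm * 1e-7                       # nm -> cm
        chains_per_cm2 = h_cm * density_g_cm3 * constants.Avogadro / mn_g_mol
        return chains_per_cm2 * 1e-14                        # cm^-2 -> nm^-2

    # Illustrative values: 10 nm dry polystyrene layer, rho ~ 1.05 g/cm^3, Mn = 50 kg/mol.
    print(f"sigma = {grafting_density(10.0, 1.05, 50_000):.2f} chains/nm^2")
    ```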

  12. Conventional liquid chromatography/triple quadrupole mass spectrometer-based metabolite identification and semi-quantitative estimation approach in the investigation of dabigatran etexilate in vitro metabolism

    PubMed Central

    Hu, Zhe-Yi; Parker, Robert B.; Herring, Vanessa L.; Laizure, S. Casey

    2012-01-01

    Dabigatran etexilate (DABE) is an oral prodrug that is rapidly converted by esterases to dabigatran (DAB), a direct inhibitor of thrombin. To elucidate the esterase-mediated metabolic pathway of DABE, a high-performance liquid chromatography/mass spectrometry (LC-MS/MS)-based metabolite identification and semi-quantitative estimation approach was developed. To overcome the poor full-scan sensitivity of conventional triple quadrupole mass spectrometry, precursor-product ion pairs were predicted to search for the potential in vitro metabolites. The detected metabolites were confirmed by the product ion scan. A dilution method was introduced to evaluate the matrix effects of tentatively identified metabolites without chemical standards. Quantitative information on detected metabolites was obtained using 'metabolite standards' generated from incubation samples that contain a high concentration of metabolite, in combination with a correction factor for mass spectrometry response. Two in vitro metabolites of DABE (M1 and M2) were identified and quantified by the semi-quantitative estimation approach. It is noteworthy that CES1 converts DABE to M1, while CES2 mediates the conversion of DABE to M2. M1 (or M2) was further metabolized to DAB by CES2 (or CES1). The approach presented here provides a solution to a bioanalytical need for fast identification and semi-quantitative estimation of CES metabolites in preclinical samples. PMID:23239178

  13. MONITORING ECOSYSTEMS FROM SPACE: THE GLOBAL FIDUCIALS PROGRAM

    EPA Science Inventory

    Images from satellites provide valuable insights to changes in land-cover and ecosystems. Long- term monitoring of ecosystem change using historical satellite imagery can provide quantitative measures of ecological processes and allows for estimation of future ecosystem condition...

  14. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China*

    PubMed Central

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-01-01

    Ecological compensation is becoming one of the key, multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among all the counties or districts. This model fills the gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors. PMID:19353749

  15. A new quantitative model of ecological compensation based on ecosystem capital in Zhejiang Province, China.

    PubMed

    Jin, Yan; Huang, Jing-feng; Peng, Dai-liang

    2009-04-01

    Ecological compensation is becoming one of the key, multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among all the counties or districts. This model fills the gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors.

  16. 77 FR 4169 - Endangered and Threatened Species: Final Rule To Revise the Critical Habitat Designation for the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-26

    ..., we have decided it is not feasible to provide meaningful quantitative estimates of the incremental... potential economic impacts to oil spill response. This revision (i.e., replacing quantitative costs with a... and separate PCE in this final designation. As more research is completed, and we learn more of the...

  17. Repeatability Assessment by ISO 11843-7 in Quantitative HPLC for Herbal Medicines.

    PubMed

    Chen, Liangmian; Kotani, Akira; Hakamata, Hideki; Tsutsumi, Risa; Hayashi, Yuzuru; Wang, Zhimin; Kusu, Fumiyo

    2015-01-01

    We have proposed an assessment method to estimate the measurement relative standard deviation (RSD) of chromatographic peaks in quantitative HPLC for herbal medicines by the methodology of ISO 11843 Part 7 (ISO 11843-7:2012), which provides detection limits stochastically. In quantitative HPLC with UV detection (HPLC-UV) of Scutellaria Radix for the determination of baicalin, the measurement RSD of baicalin obtained stochastically by ISO 11843-7:2012 was within the 95% confidence interval of the RSD obtained statistically by repetitive measurements (n = 6). Thus, our findings show that the method is applicable for estimating the repeatability of HPLC-UV for determining baicalin without repeated measurements. In addition, the allowable limit of the "System repeatability" in "Liquid Chromatography" regulated in a pharmacopoeia can be obtained by the present assessment method. Moreover, the present assessment method was also successfully applied to estimate the measurement RSDs of quantitative three-channel liquid chromatography with electrochemical detection (LC-3ECD) of Chrysanthemi Flos for determining caffeoylquinic acids and flavonoids. With the present repeatability assessment method, a reliable measurement RSD was obtained stochastically, and the experimental time was remarkably reduced.

  18. A model for the characterization of the spatial properties in vestibular neurons

    NASA Technical Reports Server (NTRS)

    Angelaki, D. E.; Bush, G. A.; Perachio, A. A.

    1992-01-01

    Quantitative study of the static and dynamic response properties of some otolith-sensitive neurons has been difficult in the past partly because their responses to different linear acceleration vectors exhibited no "null" plane and a dependence of phase on stimulus orientation. The theoretical formulation of the response ellipse provides a quantitative way to estimate the spatio-temporal properties of such neurons. Its semi-major axis gives the direction of the polarization vector (i.e., direction of maximal sensitivity) and it estimates the neuronal response for stimulation along that direction. In addition, the semi-minor axis of the ellipse provides an estimate of the neuron's maximal sensitivity in the "null" plane. In this paper, extracellular recordings from otolith-sensitive vestibular nuclei neurons in decerebrate rats were used to demonstrate the practical application of the method. The experimentally observed gain and phase dependence on the orientation angle of the acceleration vector in a head-horizontal plane was described and satisfactorily fit by the response ellipse model. In addition, the model satisfactorily fits neuronal responses in three-dimensions and unequivocally demonstrates that the response ellipse formulation is the general approach to describe quantitatively the spatial properties of vestibular neurons.

  19. Estimation of the number of fluorescent end-members for quantitative analysis of multispectral FLIM data.

    PubMed

    Gutierrez-Navarro, Omar; Campos-Delgado, Daniel U; Arce-Santana, Edgar R; Maitland, Kristen C; Cheng, Shuna; Jabbour, Joey; Malik, Bilal; Cuenca, Rodrigo; Jo, Javier A

    2014-05-19

    Multispectral fluorescence lifetime imaging (m-FLIM) can potentially allow identifying the endogenous fluorophores present in biological tissue. Quantitative description of such data requires estimating the number of components in the sample, their characteristic fluorescent decays, and their relative contributions or abundances. Unfortunately, this inverse problem usually requires prior knowledge about the data, which is seldom available in biomedical applications. This work presents a new methodology to estimate the number of potential endogenous fluorophores present in biological tissue samples from time-domain m-FLIM data. Furthermore, a completely blind linear unmixing algorithm is proposed. The method was validated using both synthetic and experimental m-FLIM data. The experimental m-FLIM data include in-vivo measurements from healthy and cancerous hamster cheek-pouch epithelial tissue, and ex-vivo measurements from human coronary atherosclerotic plaques. The analysis of m-FLIM data from in-vivo hamster oral mucosa distinguished healthy tissue from precancerous lesions, based on the relative concentration of their characteristic fluorophores. The algorithm also provided a better description of atherosclerotic plaques in terms of their endogenous fluorophores. These results demonstrate the potential of this methodology to provide a quantitative description of tissue biochemical composition.
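
    Once end-member decays are known, the abundance-estimation step reduces to constrained linear unmixing. The paper's contribution is doing this blindly and choosing the number of end-members, which the toy sketch below does not attempt; non-negative least squares is used here merely as a stand-in constrained solver, and the decay profiles are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(2)

    # Synthetic end-member decay profiles (columns) sampled on a common time axis.
    t = np.linspace(0.0, 10.0, 200)
    endmembers = np.column_stack([np.exp(-t / tau) for tau in (0.5, 2.0, 6.0)])

    # Synthetic measured decay: a mixture with known abundances plus noise.
    true_abundances = np.array([0.2, 0.5, 0.3])
    measured = endmembers @ true_abundances + 0.01 * rng.normal(size=t.size)

    # Non-negative least squares, then normalization to obtain relative abundances.
    raw, _ = nnls(endmembers, measured)
    abundances = raw / raw.sum()
    print(np.round(abundances, 3))
    ```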

  20. Measuring landscape esthetics: the scenic beauty estimation method

    Treesearch

    Terry C. Daniel; Ron S. Boster

    1976-01-01

    The Scenic Beauty Estimation Method (SBE) provides quantitative measures of esthetic preferences for alternative wildland management systems. Extensive experimentation and testing with user, interest, and professional groups validated the method. SBE shows promise as an efficient and objective means for assessing the scenic beauty of public forests and wildlands, and...

  1. Calibration-free assays on standard real-time PCR devices

    PubMed Central

    Debski, Pawel R.; Gewartowski, Kamil; Bajer, Seweryn; Garstecki, Piotr

    2017-01-01

    Quantitative Polymerase Chain Reaction (qPCR) is one of the central techniques in molecular biology and an important tool in medical diagnostics. Despite being a gold standard, qPCR techniques depend on reference measurements and are susceptible to large errors caused by even small changes of reaction efficiency or conditions, errors that are typically not marked by decreased precision. Digital PCR (dPCR) technologies should alleviate the need for calibration by providing absolute quantitation using binary (yes/no) signals from partitions, provided that the basic assumption of amplifying a single target molecule into a positive signal is met. Still, access to digital techniques is limited because they require new instruments. We show an analog-digital method that can be executed on standard (real-time) qPCR devices. It benefits from real-time readout, providing calibration-free assessment. The method combines the advantages of qPCR and dPCR and bypasses their drawbacks. The protocols provide for small, simplified partitioning that can be fitted within a standard well-plate format. We demonstrate that, with the use of synergistic assay design, standard qPCR devices are capable of absolute quantitation when normal qPCR protocols fail to provide accurate estimates. We list practical recipes for how to design assays for required parameters and how to analyze signals to estimate concentration. PMID:28327545

  2. Calibration-free assays on standard real-time PCR devices

    NASA Astrophysics Data System (ADS)

    Debski, Pawel R.; Gewartowski, Kamil; Bajer, Seweryn; Garstecki, Piotr

    2017-03-01

    Quantitative Polymerase Chain Reaction (qPCR) is one of the central techniques in molecular biology and an important tool in medical diagnostics. Despite being a gold standard, qPCR techniques depend on reference measurements and are susceptible to large errors caused by even small changes of reaction efficiency or conditions, errors that are typically not marked by decreased precision. Digital PCR (dPCR) technologies should alleviate the need for calibration by providing absolute quantitation using binary (yes/no) signals from partitions, provided that the basic assumption of amplifying a single target molecule into a positive signal is met. Still, access to digital techniques is limited because they require new instruments. We show an analog-digital method that can be executed on standard (real-time) qPCR devices. It benefits from real-time readout, providing calibration-free assessment. The method combines the advantages of qPCR and dPCR and bypasses their drawbacks. The protocols provide for small, simplified partitioning that can be fitted within a standard well-plate format. We demonstrate that, with the use of synergistic assay design, standard qPCR devices are capable of absolute quantitation when normal qPCR protocols fail to provide accurate estimates. We list practical recipes for how to design assays for required parameters and how to analyze signals to estimate concentration.
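
    As background on how binary partition signals yield absolute quantitation in dPCR (the idea the analog-digital method above builds on, not this paper's specific protocol), the standard Poisson correction is sufficient; the partition counts and volume below are invented.

    ```python
    import math

    def mean_copies_per_partition(n_positive: int, n_total: int) -> float:
        """Poisson-corrected mean target copies per partition from a binary dPCR readout."""
        fraction_negative = 1.0 - n_positive / n_total
        return -math.log(fraction_negative)

    # Example: 4,200 positive partitions out of 20,000, each of 1 nL.
    lam = mean_copies_per_partition(4200, 20000)
    concentration_per_ul = lam / 1e-3   # copies per microliter, assuming 1 nL partitions
    print(f"lambda = {lam:.3f} copies/partition, ~{concentration_per_ul:.0f} copies/uL")
    ```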

  3. Manned Mars mission radiation environment and radiobiology

    NASA Technical Reports Server (NTRS)

    Nachtwey, D. S.

    1986-01-01

    Potential radiation hazards to crew members on manned Mars missions are discussed. The paper deals briefly with radiation sources and environments likely to be encountered during various phases of such missions, providing quantitative estimates of these environments. Also provided are quantitative data and discussion of the implications of such radiation for the human body. Various protective measures are suggested. A recent re-evaluation of allowable dose limits by the National Council on Radiation Protection is discussed, and potential implications of that activity are assessed.

  4. Comparison of Dynamic Contrast Enhanced MRI and Quantitative SPECT in a Rat Glioma Model

    PubMed Central

    Skinner, Jack T.; Yankeelov, Thomas E.; Peterson, Todd E.; Does, Mark D.

    2012-01-01

    Pharmacokinetic modeling of dynamic contrast enhanced (DCE)-MRI data provides measures of the extracellular volume fraction (ve) and the volume transfer constant (Ktrans) in a given tissue. These parameter estimates may be biased, however, by confounding issues such as contrast agent and tissue water dynamics, or assumptions of vascularization and perfusion made by the commonly used model. In contrast to MRI, radiotracer imaging with SPECT is insensitive to water dynamics. A quantitative dual-isotope SPECT technique was developed to obtain an estimate of ve in a rat glioma model for comparison to the corresponding estimates obtained using DCE-MRI with a vascular input function (VIF) and reference region model (RR). Both DCE-MRI methods produced consistently larger estimates of ve in comparison to the SPECT estimates, and several experimental sources were postulated to contribute to these differences. PMID:22991315

  5. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography

    PubMed Central

    Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.

    2017-01-01

    Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883

  6. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

    Two algorithms compute the error covariance of the difference between optimal estimates of the state of a discrete linear system, where the estimates are based on data acquired during overlapping or disjoint intervals. This provides a quantitative measure of the mutual consistency or inconsistency of the state estimates. The relative-error-covariance concept is applied to determine the degree of correlation between trajectories calculated from two overlapping sets of measurements and to construct a real-time test of the consistency of state estimates based upon recently acquired data.
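
    In one common formulation (a sketch consistent with the description above, not necessarily the exact algorithm of the report), if two estimates of the same state have error covariances P1 and P2 and cross-covariance P12, the relative error covariance of their difference is

```latex
P_{\Delta} \;=\; \operatorname{Cov}\!\left(\hat{x}_1 - \hat{x}_2\right)
          \;=\; P_1 + P_2 - P_{12} - P_{12}^{\mathsf{T}}.
```

    A difference between the two estimates that is large relative to P_Delta then signals inconsistency.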

  7. Motion compensation using origin ensembles in awake small animal positron emission tomography

    NASA Astrophysics Data System (ADS)

    Gillam, John E.; Angelis, Georgios I.; Kyme, Andre Z.; Meikle, Steven R.

    2017-02-01

    In emission tomographic imaging, the stochastic origin ensembles algorithm provides unique information regarding the detected counts given the measured data. Precision in both voxel and region-wise parameters may be determined for a single data set based on the posterior distribution of the count density allowing uncertainty estimates to be allocated to quantitative measures. Uncertainty estimates are of particular importance in awake animal neurological and behavioral studies for which head motion, unique for each acquired data set, perturbs the measured data. Motion compensation can be conducted when rigid head pose is measured during the scan. However, errors in pose measurements used for compensation can degrade the data and hence quantitative outcomes. In this investigation motion compensation and detector resolution models were incorporated into the basic origin ensembles algorithm and an efficient approach to computation was developed. The approach was validated against maximum-likelihood expectation-maximisation and tested using simulated data. The resultant algorithm was then used to analyse quantitative uncertainty in regional activity estimates arising from changes in pose measurement precision. Finally, the posterior covariance acquired from a single data set was used to describe correlations between regions of interest providing information about pose measurement precision that may be useful in system analysis and design. The investigation demonstrates the use of origin ensembles as a powerful framework for evaluating statistical uncertainty of voxel and regional estimates. While in this investigation rigid motion was considered in the context of awake animal PET, the extension to arbitrary motion may provide clinical utility where respiratory or cardiac motion perturbs the measured data.

  8. A prioritization of generic safety issues. Supplement 19, Revision insertion instructions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    1995-11-01

    The report presents the safety priority ranking for generic safety issues related to nuclear power plants. The purpose of these rankings is to assist in the timely and efficient allocation of NRC resources for the resolution of those safety issues that have a significant potential for reducing risk. The safety priority rankings are HIGH, MEDIUM, LOW, and DROP, and have been assigned on the basis of risk significance estimates, the ratio of risk to costs and other impacts estimated to result if resolution of the safety issues were implemented, and the consideration of uncertainties and other quantitative or qualitative factors. To the extent practical, estimates are quantitative. This document provides revisions and amendments to the report.

  9. New service interface for River Forecasting Center derived quantitative precipitation estimates

    USGS Publications Warehouse

    Blodgett, David L.

    2013-01-01

    For more than a decade, the National Weather Service (NWS) River Forecast Centers (RFCs) have been estimating spatially distributed rainfall by applying quality-control procedures to radar-indicated rainfall estimates in the eastern United States and other best practices in the western United States to produce a national Quantitative Precipitation Estimate (QPE) (National Weather Service, 2013). The availability of archives of QPE information for analytical purposes has been limited to manual requests for access to raw binary file formats that are difficult for scientists who are not in the climatic sciences to work with. The NWS provided the QPE archives to the U.S. Geological Survey (USGS), and the contents of the real-time feed from the RFCs are being saved by the USGS for incorporation into the archives. The USGS has applied time-series aggregation and added latitude-longitude coordinate variables to publish the RFC QPE data. Web services provide users with direct (index-based) data access, rendered visualizations of the data, and resampled raster representations of the source data in common geographic information formats.

  10. Economic valuation of landslide damage in hilly regions: a case study from Flanders, Belgium.

    PubMed

    Vranken, Liesbet; Van Turnhout, Pieter; Van Den Eeckhaut, Miet; Vandekerckhove, Liesbeth; Poesen, Jean

    2013-03-01

    Several regions around the globe are at risk of incurring damage from landslides, but only a few studies have concentrated on a quantitative estimate of the overall damage caused by landslides at a regional scale. This study therefore starts with a quantitative economic assessment of the direct and indirect damage caused by landslides in a 2,910 km² study area located west of Brussels, a low-relief region susceptible to landslides. Based on focus interviews as well as on semi-structured interviews with homeowners, civil servants and the owners and providers of lifelines such as electricity and sewage, a quantitative damage assessment is provided. For private properties (houses, forest and pasture land) we estimate the real estate and production value losses for different damage scenarios, while for public infrastructure the costs of measures to repair and prevent landslide-induced damage are estimated. In addition, the increase in amenity value of forests and grasslands due to the occurrence of landslides is also calculated. The study illustrates that a minority of land (only 2.3%) within the study area is used for dwellings, roads and railway lines, but that these land use types are responsible for the vast majority of the economic damage due to the occurrence of landslides. The annual cost of direct damage due to landsliding amounts to 688,148 €/year, of which 550,740 €/year is for direct damage to houses, while the annual indirect damage amounts to 3,020,049 €/year, of which 2,007,375 €/year is for indirect damage to real estate. Next, the study illustrates that the increase of the amenity value of forests and grasslands outweighs the production value loss. As such, the study provides not only quantitative input data for the estimation of future risks, but also important information for government officials, as it clearly indicates the costs associated with certain land use types in landslide areas. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. A clustering approach to segmenting users of internet-based risk calculators.

    PubMed

    Harle, C A; Downs, J S; Padman, R

    2011-01-01

    Risk calculators are widely available Internet applications that deliver quantitative health risk estimates to consumers. Although these tools are known to have varying effects on risk perceptions, little is known about who will be more likely to accept objective risk estimates. To identify clusters of online health consumers that help explain variation in individual improvement in risk perceptions from web-based quantitative disease risk information. A secondary analysis was performed on data collected in a field experiment that measured people's pre-diabetes risk perceptions before and after visiting a realistic health promotion website that provided quantitative risk information. K-means clustering was performed on numerous candidate variable sets, and the different segmentations were evaluated based on between-cluster variation in risk perception improvement. Variation in responses to risk information was best explained by clustering on pre-intervention absolute pre-diabetes risk perceptions and an objective estimate of personal risk. Members of a high-risk overestimator cluster showed large improvements in their risk perceptions, but clusters of both moderate-risk and high-risk underestimators were much more muted in improving their optimistically biased perceptions. Cluster analysis provided a unique approach for segmenting health consumers and predicting their acceptance of quantitative disease risk information. These clusters suggest that health consumers were very responsive to good news, but tended not to incorporate bad news into their self-perceptions much. These findings help to quantify variation among online health consumers and may inform the targeted marketing of and improvements to risk communication tools on the Internet.
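
    A minimal sketch of the clustering step in Python, assuming two illustrative features per user (pre-intervention perceived risk and an objective risk estimate); the feature values and the choice of three clusters are hypothetical and are not taken from the study.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per user,
# columns = [pre-intervention perceived risk, objective risk estimate]
X = np.array([[0.60, 0.20],
              [0.10, 0.15],
              [0.70, 0.55],
              [0.25, 0.50],
              [0.65, 0.25],
              [0.15, 0.45]])

X_std = StandardScaler().fit_transform(X)   # put both features on a comparable scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)
print(labels)   # cluster label per user, e.g. separating over- from under-estimators
```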

  12. Generalized likelihood ratios for quantitative diagnostic test scores.

    PubMed

    Tandberg, D; Deely, J J; O'Malley, A J

    1997-11-01

    The reduction of quantitative diagnostic test scores to the dichotomous case is a wasteful and unnecessary simplification in the era of high-speed computing. Physicians could make better use of the information embedded in quantitative test results if modern generalized curve estimation techniques were applied to the likelihood functions of Bayes' theorem. Hand calculations could be completely avoided and computed graphical summaries provided instead. Graphs showing posttest probability of disease as a function of pretest probability with confidence intervals (POD plots) would enhance acceptance of these techniques if they were immediately available at the computer terminal when test results were retrieved. Such constructs would also provide immediate feedback to physicians when a valueless test had been ordered.
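
    The Bayes' theorem step referred to above reduces to the odds form shown in this sketch (the generalized curve estimation of the likelihood functions themselves is a separate problem and is not shown); the numbers are purely illustrative.

```python
def posttest_probability(pretest_p, likelihood_ratio):
    """Convert pre-test probability to post-test probability via Bayes' theorem in odds form."""
    prior_odds = pretest_p / (1.0 - pretest_p)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Example: pre-test probability 0.30, likelihood ratio 4 for the observed quantitative score
print(round(posttest_probability(0.30, 4.0), 3))   # ~0.632
```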

  13. Interstate waste transport -- Emotions, energy, and environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elcock, D.

    1993-12-31

    This report applies quantitative analysis to the debate of waste transport and disposal. Moving from emotions and politics back to numbers, this report estimates potential energy, employment and environmental impacts associated with disposing a ton of municipal solid waste under three different disposal scenarios that reflect interstate and intrastate options. The results help provide a less emotional, more quantitative look at interstate waste transport restrictions.

  14. Interstate waste transport -- Emotions, energy, and environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elcock, D.

    1993-01-01

    This report applies quantitative analysis to the debate of waste transport and disposal. Moving from emotions and politics back to numbers, this report estimates potential energy, employment and environmental impacts associated with disposing a ton of municipal solid waste under three different disposal scenarios that reflect interstate and intrastate options. The results help provide a less emotional, more quantitative look at interstate waste transport restrictions.

  15. Nonparametric modeling of longitudinal covariance structure in functional mapping of quantitative trait loci.

    PubMed

    Yap, John Stephen; Fan, Jianqing; Wu, Rongling

    2009-12-01

    Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L2 penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
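
    The modified Cholesky device cited above can be stated compactly as follows (a generic statement of the decomposition, not the authors' full penalized-likelihood machinery): for a covariance matrix Sigma of the repeated measurements there exist a unit lower-triangular matrix T and a diagonal matrix D such that

```latex
T \,\Sigma\, T^{\mathsf{T}} = D
\qquad\Longleftrightarrow\qquad
\Sigma^{-1} = T^{\mathsf{T}} D^{-1} T .
```

    Here the below-diagonal entries of T are (negatives of) the regression coefficients of each response on its predecessors and D contains the innovation variances; the L2 penalty described above is placed on those regression coefficients.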

  16. Development of an agricultural job-exposure matrix for British Columbia, Canada.

    PubMed

    Wood, David; Astrakianakis, George; Lang, Barbara; Le, Nhu; Bert, Joel

    2002-09-01

    Farmers in British Columbia (BC), Canada, have been shown to have unexplained elevated proportional mortality rates for several cancers. Because agricultural exposures have never been documented systematically in BC, a quantitative agricultural job-exposure matrix (JEM) was developed containing exposure assessments from 1950 to 1998. This JEM was developed to document historical exposures and to facilitate future epidemiological studies. Available information regarding BC farming practices was compiled and checklists of potential exposures were produced for each crop. Exposures identified included chemical, biological, and physical agents. Interviews with farmers and agricultural experts were conducted using the checklists as a starting point. This allowed the creation of an initial or 'potential' JEM based on three axes: exposure agent, 'type of work' and time. The 'type of work' axis was determined by combining several variables: region, crop, job title and task. This allowed for a complete description of exposures. Exposure assessments were made quantitatively, where data allowed, or by a dichotomous variable (exposed/unexposed). Quantitative calculations were divided into re-entry and application scenarios. 'Re-entry' exposures were quantified using a standard exposure model with some modification, while application exposure estimates were derived using data from the North American Pesticide Handlers Exposure Database (PHED). As expected, exposures differed between crops and job titles both quantitatively and qualitatively. Of the 290 agents included in the exposure axis, 180 were pesticides. Over 3000 estimates of exposure were made; 50% of these were quantitative. Each quantitative estimate was at the daily absorbed dose level. Exposure estimates were then rated as high, medium, or low based on comparing them with their respective oral chemical reference dose (RfD) or Acceptable Daily Intake (ADI). This data was mainly obtained from the US Environmental Protection Agency (EPA) Integrated Risk Information System database. Of the quantitative estimates, 74% were rated as low (< 100%) and only 10% were rated as high (>500%). The JEM resulting from this study fills a void concerning exposures for BC farmers and farm workers. While only limited validation of assessments was possible, this JEM can serve as a benchmark for future studies. Preliminary analysis at the BC Cancer Agency (BCCA) using the JEM with prostate cancer records from a large cancer and occupation study/survey has already shown promising results. Development of this JEM provides a useful model for developing historical quantitative exposure estimates where very little documented information is available.

  17. Linkage disequilibrium interval mapping of quantitative trait loci.

    PubMed

    Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte

    2006-03-16

    For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates.

  18. Estimating the Potential Toxicity of Chemicals Associated with Hydraulic Fracturing Operations Using Quantitative Structure-Activity Relationship Modeling.

    PubMed

    Yost, Erin E; Stanek, John; DeWoskin, Robert S; Burgoon, Lyle D

    2016-07-19

    The United States Environmental Protection Agency (EPA) identified 1173 chemicals associated with hydraulic fracturing fluids, flowback, or produced water, of which 1026 (87%) lack chronic oral toxicity values for human health assessments. To facilitate the ranking and prioritization of chemicals that lack toxicity values, it may be useful to employ toxicity estimates from quantitative structure-activity relationship (QSAR) models. Here we describe an approach for applying the results of a QSAR model from the TOPKAT program suite, which provides estimates of the rat chronic oral lowest-observed-adverse-effect level (LOAEL). Of the 1173 chemicals, TOPKAT was able to generate LOAEL estimates for 515 (44%). To address the uncertainty associated with these estimates, we assigned qualitative confidence scores (high, medium, or low) to each TOPKAT LOAEL estimate, and found 481 to be high-confidence. For 48 chemicals that had both a high-confidence TOPKAT LOAEL estimate and a chronic oral reference dose from EPA's Integrated Risk Information System (IRIS) database, Spearman rank correlation identified 68% agreement between the two values (permutation p-value = 1 × 10⁻¹¹). These results provide support for the use of TOPKAT LOAEL estimates in identifying and prioritizing potentially hazardous chemicals. High-confidence TOPKAT LOAEL estimates were available for 389 of 1026 hydraulic fracturing-related chemicals that lack chronic oral RfVs and OSFs from EPA-identified sources, including a subset of chemicals that are frequently used in hydraulic fracturing fluids.

  19. FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.

    PubMed

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is a constant claim for better productivity with high quality at low cost. The contribution of this work is the development of a fused smart-sensor, based on FPGA to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier, and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain three times better accuracy when compared with the accuracy obtained from current and vibration signals, individually used.

  20. Rapid Quantitation of Ascorbic and Folic Acids in SRM 3280 Multivitamin/Multielement Tablets using Flow-Injection Tandem Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhandari, Deepak; Kertesz, Vilmos; Van Berkel, Gary J

    RATIONALE: Ascorbic acid (AA) and folic acid (FA) are water-soluble vitamins and are usually fortified in food and dietary supplements. For the safety of human health, proper intake of these vitamins is recommended. Improvement in the analysis time required for the quantitative determination of these vitamins in food and nutritional formulations is desired. METHODS: A simple and fast (~5 min) in-tube sample preparation was performed, independently for FA and AA, by mixing extraction solvent with a powdered sample aliquot followed by agitation, centrifugation, and filtration to recover an extract for analysis. Quantitative detection was achieved by flow-injection (1 µL injection volume) electrospray ionization tandem mass spectrometry (ESI-MS/MS) in negative ion mode using the method of standard addition. RESULTS: The method of standard addition was employed for the quantitative estimation of each vitamin in a sample extract. At least 2 spiked and 1 non-spiked sample extract were injected in triplicate for each quantitative analysis. Given an injection-to-injection interval of approximately 2 min, about 18 min was required to complete the quantitative estimation of each vitamin. The concentration values obtained for the respective vitamins in the standard reference material (SRM) 3280 using this approach were within the statistical range of the certified values provided in the NIST Certificate of Analysis. The estimated limits of detection for FA and AA were 13 and 5.9 ng/g, respectively. CONCLUSIONS: Flow-injection ESI-MS/MS was successfully applied for the rapid quantitation of FA and AA in SRM 3280 multivitamin/multielement tablets.
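
    A minimal sketch of the method-of-standard-addition calculation used for quantitation, assuming a linear instrument response over the spiking range; the spike levels and signals are illustrative, not data from the study.

```python
import numpy as np

# Illustrative data: signal measured for the unspiked extract and two spiked extracts
spike_added = np.array([0.0, 50.0, 100.0])        # amount of analyte added (ng/g)
signal      = np.array([1200.0, 2150.0, 3120.0])  # instrument response (arbitrary units)

slope, intercept = np.polyfit(spike_added, signal, 1)
concentration = intercept / slope   # magnitude of the extrapolated x-intercept = original concentration
print(f"Estimated concentration ~ {concentration:.1f} ng/g")
```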

  1. Bound Pool Fractions Complement Diffusion Measures to Describe White Matter Micro and Macrostructure

    PubMed Central

    Stikov, Nikola; Perry, Lee M.; Mezer, Aviv; Rykhlevskaia, Elena; Wandell, Brian A.; Pauly, John M.; Dougherty, Robert F.

    2010-01-01

    Diffusion imaging and bound pool fraction (BPF) mapping are two quantitative magnetic resonance imaging techniques that measure microstructural features of the white matter of the brain. Diffusion imaging provides a quantitative measure of the diffusivity of water in tissue. BPF mapping is a quantitative magnetization transfer (qMT) technique that estimates the proportion of exchanging protons bound to macromolecules, such as those found in myelin, and is thus a more direct measure of myelin content than diffusion. In this work, we combine BPF estimates of macromolecular content with measurements of diffusivity within human white matter tracts. Within the white matter, the correlation between BPFs and diffusivity measures such as fractional anisotropy and radial diffusivity was modest, suggesting that diffusion tensor imaging and bound pool fractions are complementary techniques. We found that several major tracts have high BPF, suggesting a higher density of myelin in these tracts. We interpret these results in the context of a quantitative tissue model. PMID:20828622

  2. Effects of decompression on operator performance.

    DOT National Transportation Integrated Search

    1966-04-01

    The study was performed to provide more quantitative estimates of degradation of pilot performance following decompression and the extent to which a decompression with mask donning interrupts the task of piloting. The experiments utilized a Scow comp...

  3. Method comparison for forest soil carbon and nitrogen estimates in the Delaware River basin

    Treesearch

    B. Xu; Yude Pan; A.H. Johnson; A.F. Plante

    2016-01-01

    The accuracy of forest soil C and N estimates is hampered by forest soils that are rocky, inaccessible, and spatially heterogeneous. A composite coring technique is the standard method used in Forest Inventory and Analysis, but its accuracy has been questioned. Quantitative soil pits provide direct measurement of rock content and soil mass from a larger, more...

  4. Quantitative tradeoffs between spatial, temporal, and thermometric resolution of nonresonant Raman thermometry for dynamic experiments.

    PubMed

    McGrane, Shawn D; Moore, David S; Goodwin, Peter M; Dattelbaum, Dana M

    2014-01-01

    The ratio of Stokes to anti-Stokes nonresonant spontaneous Raman can provide an in situ thermometer that is noncontact, independent of any material specific parameters or calibrations, can be multiplexed spatially with line imaging, and can be time resolved for dynamic measurements. However, spontaneous Raman cross sections are very small, and thermometric measurements are often limited by the amount of laser energy that can be applied without damaging the sample or changing its temperature appreciably. In this paper, we quantitatively detail the tradeoff space between spatial, temporal, and thermometric accuracy measurable with spontaneous Raman. Theoretical estimates are pinned to experimental measurements to form realistic expectations of the resolution tradeoffs appropriate to various experiments. We consider the effects of signal to noise, collection efficiency, laser heating, pulsed laser ablation, and blackbody emission as limiting factors, provide formulae to help choose optimal conditions and provide estimates relevant to planning experiments along with concrete examples for single-shot measurements.
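
    The thermometric relation underlying the ratio measurement can be sketched as follows for the ideal nonresonant case (instrument spectral response and the other corrections the authors discuss are omitted); the example laser line, mode frequency, and measured ratio are illustrative.

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e10    # speed of light in cm/s, so wavenumbers in cm^-1 work directly
kB = 1.380649e-23    # Boltzmann constant, J/K

def raman_temperature(ratio_as_s, nu_mode_cm, nu_laser_cm):
    """Temperature from the anti-Stokes/Stokes intensity ratio (ideal nonresonant case).

    ratio_as_s  : measured I_antiStokes / I_Stokes
    nu_mode_cm  : Raman shift of the mode (cm^-1)
    nu_laser_cm : laser wavenumber (cm^-1)
    """
    freq_factor = ((nu_laser_cm + nu_mode_cm) / (nu_laser_cm - nu_mode_cm)) ** 4
    return h * c * nu_mode_cm / (kB * np.log(freq_factor / ratio_as_s))

# Example: 532 nm laser (~18797 cm^-1), 1000 cm^-1 mode, measured ratio 0.010
print(round(raman_temperature(0.010, 1000.0, 18797.0), 1))   # roughly room temperature, in K
```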

  5. ESTIMATING WELFARE IN INSURANCE MARKETS USING VARIATION IN PRICES*

    PubMed Central

    Einav, Liran; Finkelstein, Amy; Cullen, Mark R.

    2009-01-01

    We provide a graphical illustration of how standard consumer and producer theory can be used to quantify the welfare loss associated with inefficient pricing in insurance markets with selection. We then show how this welfare loss can be estimated empirically using identifying variation in the price of insurance. Such variation, together with quantity data, allows us to estimate the demand for insurance. The same variation, together with cost data, allows us to estimate how insurers' costs vary as market participants endogenously respond to price. The slope of this estimated cost curve provides a direct test for both the existence and nature of selection, and the combination of demand and cost curves can be used to estimate welfare. We illustrate our approach by applying it to data on employer-provided health insurance from one specific company. We detect adverse selection but estimate that the quantitative welfare implications associated with inefficient pricing in our particular application are small, in both absolute and relative terms. PMID:21218182

  6. Generalized PSF modeling for optimized quantitation in PET imaging.

    PubMed

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modelled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF modeling does not offer optimized PET quantitation, and that PSF overestimation may provide enhanced SUV quantitation. Furthermore, generalized PSF modeling may provide a valuable approach for quantitative tasks such as treatment-response assessment and prognostication.

  7. The benefits of improved technologies in agricultural aviation

    NASA Technical Reports Server (NTRS)

    Lietzke, K.; Abram, P.; Braen, C.; Givens, S.; Hazelrigg, G. A., Jr.; Fish, R.; Clyne, F.; Sand, F.

    1977-01-01

    The results are presented for a study of the economic benefits attributed to a variety of potential technological improvements in agricultural aviation. Part 1 gives a general description of the ag-air industry and discusses the information used in the data base to estimate the potential benefits from technological improvements. Part 2 presents the benefit estimates and provides a quantitative basis for the estimates in each area study. Part 3 is a bibliography of references relating to this study.

  8. Store turnover as a predictor of food and beverage provider turnover and associated dietary intake estimates in very remote Indigenous communities.

    PubMed

    Wycherley, Thomas; Ferguson, Megan; O'Dea, Kerin; McMahon, Emma; Liberato, Selma; Brimblecombe, Julie

    2016-12-01

    To determine how very-remote Indigenous community (RIC) food and beverage (F&B) turnover quantities and associated dietary intake estimates derived from stores alone compare with values derived from all community F&B providers. F&B turnover quantity and associated dietary intake estimates (energy, micro/macronutrients and major contributing food types) were derived from 12 months of transaction data of all F&B providers in three RICs (NT, Australia). F&B turnover quantities and dietary intake estimates from only stores (plus only the primary store in multiple-store communities) were expressed as a proportion of complete F&B provider turnover values. Food types and macronutrient distribution (%E) estimates were quantitatively compared. Combined stores F&B turnover accounted for the majority of F&B quantity (98.1%) and absolute dietary intake estimates (energy [97.8%], macronutrients [≥96.7%] and micronutrients [≥83.8%]). Macronutrient distribution estimates from combined stores and only the primary store closely aligned with complete provider estimates (≤0.9% absolute). Food types were similar using combined stores, primary store or complete provider turnover. Evaluating combined stores F&B turnover represents an efficient method to estimate total F&B turnover quantity and associated dietary intake in RICs. In multiple-store communities, evaluating only primary store F&B turnover provides an efficient estimate of macronutrient distribution and major food types. © 2016 Public Health Association of Australia.

  9. Estimating unknown parameters in haemophilia using expert judgement elicitation.

    PubMed

    Fischer, K; Lewandowski, D; Janssen, M P

    2013-09-01

    The increasing attention to healthcare costs and treatment efficiency has led to an increasing demand for quantitative data concerning patient and treatment characteristics in haemophilia. However, most of these data are difficult to obtain. The aim of this study was to use expert judgement elicitation (EJE) to estimate currently unavailable key parameters for treatment models in severe haemophilia A. Using a formal expert elicitation procedure, 19 international experts provided information on (i) natural bleeding frequency according to age and onset of bleeding, (ii) treatment of bleeds, (iii) time needed to control bleeding after starting secondary prophylaxis, (iv) dose requirements for secondary prophylaxis according to onset of bleeding, and (v) life-expectancy. For each parameter experts provided their quantitative estimates (median, P10, P90), which were combined using a graphical method. In addition, information was obtained concerning key decision parameters of haemophilia treatment. There was most agreement between experts regarding bleeding frequencies for patients treated on demand with an average onset of joint bleeding (1.7 years): median 12 joint bleeds per year (95% confidence interval 0.9-36) for patients ≤ 18, and 11 (0.8-61) for adult patients. Less agreement was observed concerning estimated effective dose for secondary prophylaxis in adults: median 2000 IU every other day. The majority (63%) of experts expected that a single minor joint bleed could cause irreversible damage, and would accept up to three minor joint bleeds or one trauma related joint bleed annually on prophylaxis. Expert judgement elicitation allowed structured capturing of quantitative expert estimates. It generated novel data to be used in computer modelling, clinical care, and trial design. © 2013 John Wiley & Sons Ltd.

  10. Developing Daily Quantitative Damage Estimates From Geospatial Layers To Support Post Event Recovery

    NASA Astrophysics Data System (ADS)

    Woods, B. K.; Wei, L. H.; Connor, T. C.

    2014-12-01

    With the growth of natural hazard data available in near real time, it is increasingly feasible to deliver estimates of damage caused by natural disasters. These estimates can be used in a disaster management setting or by commercial entities to optimize the deployment of resources and/or routing of goods and materials. This work outlines an end-to-end, modular process to generate estimates of damage caused by severe weather. The processing stream consists of five generic components: 1) Hazard modules that provide quantitative data layers for each peril. 2) Standardized methods to map the hazard data to an exposure layer based on atomic geospatial blocks. 3) Peril-specific damage functions that compute damage metrics at the atomic geospatial block level. 4) Standardized data aggregators, which map damage to user-specific geometries. 5) Data dissemination modules, which provide resulting damage estimates in a variety of output forms. This presentation provides a description of this generic tool set, and an illustrated example using HWRF-based hazard data for Hurricane Arthur (2014). In this example, the Python-based real-time processing ingests GRIB2 output from the HWRF numerical model, dynamically downscales it in conjunction with a land cover database using a multiprocessing pool, and a just-in-time compiler (JIT). The resulting wind fields are contoured, and ingested into a PostGIS database using OGR. Finally, the damage estimates are calculated at the atomic block level and aggregated to user-defined regions using PostgreSQL queries to construct application-specific tabular and graphics output.
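
    A minimal sketch, in Python, of steps 2-4 of the processing stream above (mapping hazard values onto atomic exposure blocks, applying a peril-specific damage function, and aggregating to user regions); the block data and damage function here are purely illustrative and are not the HWRF-based workflow itself.

```python
import numpy as np

# Illustrative atomic geospatial blocks: (block_id, region, exposed_value_usd, peak_wind_m_s)
blocks = [
    ("b001", "county_A", 2.0e6, 42.0),
    ("b002", "county_A", 5.0e5, 55.0),
    ("b003", "county_B", 1.2e6, 30.0),
]

def wind_damage_fraction(wind_m_s, threshold=25.0, scale=40.0):
    """Illustrative peril-specific damage function: fraction of value lost vs. peak wind."""
    return float(np.clip((wind_m_s - threshold) / scale, 0.0, 1.0))

# Aggregate block-level damage to user-defined regions
regional_damage = {}
for _, region, value, wind in blocks:
    regional_damage[region] = regional_damage.get(region, 0.0) + value * wind_damage_fraction(wind)

print(regional_damage)   # e.g. {'county_A': ..., 'county_B': ...}
```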

  11. A statistical framework for protein quantitation in bottom-up MS-based proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas

    2009-08-15

    Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and confidence measures. Challenges include the presence of low-quality or incorrectly identified peptides and widespread, informative, missing data. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model for protein abundance in terms of peptide peak intensities, applicable to both label-based and label-free quantitation experiments. The model allows for both random and censoring missingness mechanisms and provides naturally for protein-level estimates and confidence measures. The model is also used to derive automated filtering and imputation routines. Three LC-MS datasets are used to illustrate the methods. Availability: The software has been made available in the open-source proteomics platform DAnTE (Polpitiya et al. (2008)) (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu

  12. A quantitative flood risk analysis methodology for urban areas with integration of social research data

    NASA Astrophysics Data System (ADS)

    Escuder-Bueno, I.; Castillo-Rodríguez, J. T.; Zechner, S.; Jöbstl, C.; Perales-Momparler, S.; Petaccia, G.

    2012-09-01

    Risk analysis has become a top priority for authorities and stakeholders in many European countries, with the aim of reducing flooding risk, considering the population's needs and improving risk awareness. Within this context, two methodological pieces have been developed in the period 2009-2011 within the SUFRI project (Sustainable Strategies of Urban Flood Risk Management with non-structural measures to cope with the residual risk, 2nd ERA-Net CRUE Funding Initiative). First, the "SUFRI Methodology for pluvial and river flooding risk assessment in urban areas to inform decision-making" provides a comprehensive and quantitative tool for flood risk analysis. Second, the "Methodology for investigation of risk awareness of the population concerned" presents the basis to estimate current risk from a social perspective and identify tendencies in the way floods are understood by citizens. Outcomes of both methods are integrated in this paper with the aim of informing decision making on non-structural protection measures. The results of two case studies are shown to illustrate practical applications of this developed approach. The main advantage of applying the methodology herein presented consists in providing a quantitative estimation of flooding risk before and after investing in non-structural risk mitigation measures. It can be of great interest for decision makers as it provides rational and solid information.

  13. Incorporating weather impacts in traffic estimation and prediction systems (TREPS)

    DOT National Transportation Integrated Search

    2009-09-01

    This document provides quantitative benefits of using Intelligent Transportation Systems in highway construction and maintenance work zones. The technical report covers case study sites in the District of Columbia, Texas, Michigan, Arkansas, and Nort...

  14. FPGA-Based Fused Smart-Sensor for Tool-Wear Area Quantitative Estimation in CNC Machine Inserts

    PubMed Central

    Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto

    2010-01-01

    Manufacturing processes are of great relevance nowadays, when there is a constant claim for better productivity with high quality at low cost. The contribution of this work is the development of a fused smart-sensor, based on FPGA to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier, and a 3-axis accelerometer. Results from experimentation show that the fusion of both parameters makes it possible to obtain three times better accuracy when compared with the accuracy obtained from current and vibration signals, individually used. PMID:22319304

  15. Quantitative analysis of low-density SNP data for parentage assignment and estimation of family contributions to pooled samples.

    PubMed

    Henshall, John M; Dierens, Leanne; Sellars, Melony J

    2014-09-02

    While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are sufficiently accurate to provide useful information for a breeding program. Treating genotypes as quantitative values is an alternative to perturbing genotypes using an assumed error distribution, but can produce very different results. An understanding of the distribution of the error is required for SNP genotyping platforms.

  16. Methodologies for the quantitative estimation of toxicant dose to cigarette smokers using physical, chemical and bioanalytical data.

    PubMed

    St Charles, Frank Kelley; McAughey, John; Shepperd, Christopher J

    2013-06-01

    Methodologies have been developed, described and demonstrated that convert mouth exposure estimates of cigarette smoke constituents to dose by accounting for smoke spilled from the mouth prior to inhalation (mouth-spill (MS)) and the respiratory retention (RR) during the inhalation cycle. The methodologies are applicable to just about any chemical compound in cigarette smoke that can be measured analytically and can be used with ambulatory population studies. Conversion of exposure to dose improves the relevancy for risk assessment paradigms. Except for urinary nicotine plus metabolites, biomarkers generally do not provide quantitative exposure or dose estimates. In addition, many smoke constituents have no reliable biomarkers. We describe methods to estimate the RR of chemical compounds in smoke based on their vapor pressure (VP) and to estimate the MS for a given subject. Data from two clinical studies were used to demonstrate dose estimation for 13 compounds, of which only 3 have urinary biomarkers. Compounds with VP > 10⁻⁵ Pa generally have RRs of 88% or greater, which do not vary appreciably with inhalation volume (IV). Compounds with VP < 10⁻⁷ Pa generally have RRs dependent on IV and lung exposure time. For MS, mean subject values from both studies were slightly greater than 30%. For constituents with urinary biomarkers, correlations with the calculated dose were significantly improved over correlations with mouth exposure. Of toxicological importance is that the dose correlations provide an estimate of the metabolic conversion of a constituent to its respective biomarker.
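
    Schematically, the conversion described above takes the form below, where MS is the mouth-spill fraction and RR the respiratory retention fraction; this is a simplified summary consistent with the description, not the authors' exact equations.

```latex
\text{dose} \;=\; \text{mouth exposure} \times (1 - \text{MS}) \times \text{RR}
```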

  17. Comparison of quantitative myocardial perfusion imaging CT to fluorescent microsphere-based flow from high-resolution cryo-images

    NASA Astrophysics Data System (ADS)

    Eck, Brendan L.; Fahmi, Rachid; Levi, Jacob; Fares, Anas; Wu, Hao; Li, Yuemeng; Vembar, Mani; Dhanantwari, Amar; Bezerra, Hiram G.; Wilson, David L.

    2016-03-01

    Myocardial perfusion imaging using CT (MPI-CT) has the potential to provide quantitative measures of myocardial blood flow (MBF), which can aid the diagnosis of coronary artery disease. We evaluated the quantitative accuracy of MPI-CT in a porcine model of balloon-induced LAD coronary artery ischemia guided by fractional flow reserve (FFR). We quantified MBF at baseline (FFR=1.0) and under moderate ischemia (FFR=0.7) using MPI-CT and compared to fluorescent microsphere-based MBF from high-resolution cryo-images. Dynamic, contrast-enhanced CT images were obtained using a spectral detector CT (Philips Healthcare). Projection-based mono-energetic images were reconstructed and processed to obtain MBF. Three MBF quantification approaches were evaluated: singular value decomposition (SVD) with fixed Tikhonov regularization (ThSVD), SVD with regularization determined by the L-Curve criterion (LSVD), and Johnson-Wilson parameter estimation (JW). The three approaches over-estimated MBF compared to cryo-images. JW produced the most accurate MBF, with average error 33.3 ± 19.2 mL/min/100 g, whereas LSVD and ThSVD had greater over-estimation, 59.5 ± 28.3 mL/min/100 g and 78.3 ± 25.6 mL/min/100 g, respectively. Relative blood flow as assessed by a flow ratio of LAD-to-remote myocardium was strongly correlated between JW and cryo-imaging, with R² = 0.97, compared to R² = 0.88 and 0.78 for LSVD and ThSVD, respectively. We assessed tissue impulse response functions (IRFs) from each approach for sources of error. While JW was constrained to physiologic solutions, both LSVD and ThSVD produced IRFs with non-physiologic properties due to noise. The L-curve provided noise-adaptive regularization but did not eliminate non-physiologic IRF properties or optimize for MBF accuracy. These findings suggest that model-based MPI-CT approaches may be more appropriate for quantitative MBF estimation and that cryo-imaging can support the development of MPI-CT by providing spatial distributions of MBF.
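
    A minimal sketch of the Tikhonov-regularized SVD deconvolution that the ThSVD/LSVD approaches are built on, assuming an arterial input function `aif` and a tissue enhancement curve `tissue` sampled at interval `dt`; the fixed regularization constant is illustrative, and the L-curve selection step is not shown.

```python
import numpy as np

def mbf_svd_tikhonov(aif, tissue, dt, lam=0.1):
    """Estimate myocardial blood flow by Tikhonov-regularized SVD deconvolution.

    The tissue curve is modeled as tissue = A @ R, where A is the convolution
    matrix built from the arterial input function and R is the flow-scaled
    impulse response; MBF is taken as max(R).
    """
    aif = np.asarray(aif, dtype=float)
    tissue = np.asarray(tissue, dtype=float)
    n = len(aif)
    # Lower-triangular (Toeplitz) convolution matrix of the AIF
    A = np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)]) * dt
    U, s, Vt = np.linalg.svd(A)
    alpha = lam * s[0]             # regularization relative to the largest singular value
    filt = s / (s**2 + alpha**2)   # Tikhonov filter factors (damp small singular values, don't truncate)
    R = Vt.T @ (filt * (U.T @ tissue))
    return R.max()
```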

  18. Primary production in the Delta: Then and now

    USGS Publications Warehouse

    Cloern, James E.; Robinson, April; Richey, Amy; Grenier, Letitia; Grossinger, Robin; Boyer, Katharyn E.; Burau, Jon; Canuel, Elizabeth A.; DeGeorge, John F.; Drexler, Judith Z.; Enright, Chris; Howe, Emily R.; Kneib, Ronald; Mueller-Solger, Anke; Naiman, Robert J.; Pinckney, James L.; Safran, Samuel M.; Schoellhamer, David H.; Simenstad, Charles A.

    2016-01-01

    To evaluate the role of restoration in the recovery of the Delta ecosystem, we need to have clear targets and performance measures that directly assess ecosystem function. Primary production is a crucial ecosystem process, which directly limits the quality and quantity of food available for secondary consumers such as invertebrates and fish. The Delta has a low rate of primary production, but it is unclear whether this was always the case. Recent analyses from the Historical Ecology Team and Delta Landscapes Project provide quantitative comparisons of the areal extent of 14 habitat types in the modern Delta versus the historical Delta (pre-1850). Here we describe an approach for using these metrics of land use change to: (1) produce the first quantitative estimates of how Delta primary production and the relative contributions from five different producer groups have been altered by large-scale drainage and conversion to agriculture; (2) convert these production estimates into a common currency so the contributions of each producer group reflect their food quality and efficiency of transfer to consumers; and (3) use simple models to discover how tidal exchange between marshes and open water influences primary production and its consumption. Application of this approach could inform Delta management in two ways. First, it would provide a quantitative estimate of how large-scale conversion to agriculture has altered the Delta's capacity to produce food for native biota. Second, it would provide restoration practitioners with a new approach—based on ecosystem function—to evaluate the success of restoration projects and gauge the trajectory of ecological recovery in the Delta region.

  19. Advanced risk assessment of the effects of graphite fibers on electronic and electric equipment, phase 1. [simulating vulnerability to airports and communities from fibers released during aircraft fires

    NASA Technical Reports Server (NTRS)

    Pocinki, L. S.; Kaplan, L. D.; Cornell, M. E.; Greenstone, R.

    1979-01-01

    A model was developed to generate quantitative estimates of the risk associated with the release of graphite fibers during fires involving commercial aircraft constructed with graphite fiber composite materials. The model was used to estimate the risk associated with accidents at several U.S. airports. These results were then combined to provide an estimate of the total risk to the nation.

  20. Objective evaluation of reconstruction methods for quantitative SPECT imaging in the absence of ground truth.

    PubMed

    Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C

    2015-04-13

    Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berryman, J. G.

    While the well-known Voigt and Reuss (VR) bounds, and the Voigt-Reuss-Hill (VRH) elastic constant estimators for random polycrystals are all straightforwardly calculated once the elastic constants of anisotropic crystals are known, the Hashin-Shtrikman (HS) bounds and related self-consistent (SC) estimators for the same constants are, by comparison, more difficult to compute. Recent work has shown how to simplify (to some extent) these harder to compute HS bounds and SC estimators. An overview and analysis of a subsampling of these results is presented here with the main point being to show whether or not this extra work (i.e., in calculating both the HS bounds and the SC estimates) does provide added value since, in particular, the VRH estimators often do not fall within the HS bounds, while the SC estimators (for good reasons) have always been found to do so. The quantitative differences between the SC and the VRH estimators in the eight cases considered are often quite small however, being on the order of ±1%. These quantitative results hold true even though these polycrystal Voigt-Reuss-Hill estimators more typically (but not always) fall outside the Hashin-Shtrikman bounds, while the self-consistent estimators always fall inside (or on the boundaries of) these same bounds.
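
    A minimal sketch of the "straightforward" part referred to above: the Voigt, Reuss, and Voigt-Reuss-Hill estimates of the polycrystal bulk modulus from a single-crystal stiffness matrix. The Hashin-Shtrikman bounds and self-consistent estimators are considerably more involved and are not shown; the stiffness values in the example are illustrative, not tied to any specific material.

```python
import numpy as np

def vrh_bulk_modulus(C):
    """Voigt, Reuss, and Voigt-Reuss-Hill estimates of the polycrystal bulk modulus.

    C : 6x6 single-crystal stiffness matrix in Voigt notation (GPa).
    """
    S = np.linalg.inv(C)                                # compliance matrix
    K_voigt = (C[0, 0] + C[1, 1] + C[2, 2]
               + 2.0 * (C[0, 1] + C[0, 2] + C[1, 2])) / 9.0
    K_reuss = 1.0 / (S[0, 0] + S[1, 1] + S[2, 2]
                     + 2.0 * (S[0, 1] + S[0, 2] + S[1, 2]))
    return K_voigt, K_reuss, 0.5 * (K_voigt + K_reuss)  # Hill estimate = arithmetic mean

# Illustrative cubic crystal (GPa)
C = np.zeros((6, 6))
C[0, 0] = C[1, 1] = C[2, 2] = 165.0
C[0, 1] = C[0, 2] = C[1, 2] = C[1, 0] = C[2, 0] = C[2, 1] = 64.0
C[3, 3] = C[4, 4] = C[5, 5] = 80.0
print(vrh_bulk_modulus(C))
```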

  2. Fourier phase in Fourier-domain optical coherence tomography.

    PubMed

    Uttam, Shikhar; Liu, Yang

    2015-12-01

    Phase of an electromagnetic wave propagating through a sample-of-interest is well understood in the context of quantitative phase imaging in transmission-mode microscopy. In the past decade, Fourier-domain optical coherence tomography has been used to extend quantitative phase imaging to the reflection-mode. Unlike transmission-mode electromagnetic phase, however, the origin and characteristics of reflection-mode Fourier phase are poorly understood, especially in samples with a slowly varying refractive index. In this paper, the general theory of Fourier phase from first principles is presented, and it is shown that Fourier phase is a joint estimate of subresolution offset and mean spatial frequency of the coherence-gated sample refractive index. It is also shown that both spectral-domain phase microscopy and depth-resolved spatial-domain low-coherence quantitative phase microscopy are special cases of this general theory. Analytical expressions are provided for both, and simulations are presented to explain and support the theoretical results. These results are further used to show how Fourier phase allows the estimation of an axial mean spatial frequency profile of the sample, along with depth-resolved characterization of localized optical density change and sample heterogeneity. Finally, a Fourier phase-based explanation of Doppler optical coherence tomography is also provided.

  3. A test for selection employing quantitative trait locus and mutation accumulation data.

    PubMed

    Rice, Daniel P; Townsend, Jeffrey P

    2012-04-01

    Evolutionary biologists attribute much of the phenotypic diversity observed in nature to the action of natural selection. However, for many phenotypic traits, especially quantitative phenotypic traits, it has been challenging to test for the historical action of selection. An important challenge for biologists studying quantitative traits, therefore, is to distinguish between traits that have evolved under the influence of strong selection and those that have evolved neutrally. Most existing tests for selection employ molecular data, but selection also leaves a mark on the genetic architecture underlying a trait. In particular, the distribution of quantitative trait locus (QTL) effect sizes and the distribution of mutational effects together provide information regarding the history of selection. Despite the increasing availability of QTL and mutation accumulation data, such data have not yet been effectively exploited for this purpose. We present a model of the evolution of QTL and employ it to formulate a test for historical selection. To provide a baseline for neutral evolution of the trait, we estimate the distribution of mutational effects from mutation accumulation experiments. We then apply a maximum-likelihood-based method of inference to estimate the range of selection strengths under which such a distribution of mutations could generate the observed QTL. Our test thus represents the first integration of population genetic theory and QTL data to measure the historical influence of selection.

  4. Modeling regional-scale wildland fire emissions with the wildland fire emissions information system

    Treesearch

    Nancy H.F. French; Donald McKenzie; Tyler Erickson; Benjamin Koziol; Michael Billmire; K. Endsley; Naomi K.Y. Scheinerman; Liza Jenkins; Mary E. Miller; Roger Ottmar; Susan Prichard

    2014-01-01

    As carbon modeling tools become more comprehensive, spatial data are needed to improve quantitative maps of carbon emissions from fire. The Wildland Fire Emissions Information System (WFEIS) provides mapped estimates of carbon emissions from historical forest fires in the United States through a web browser. WFEIS improves access to data and provides a consistent...

  5. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    PubMed Central

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in the form of qualitative if–then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB™-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  6. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    PubMed

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

    Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB™-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/

  7. Global, long-term surface reflectance records from Landsat

    USDA-ARS?s Scientific Manuscript database

    Global, long-term monitoring of changes in Earth’s land surface requires quantitative comparisons of satellite images acquired under widely varying atmospheric conditions. Although physically based estimates of surface reflectance (SR) ultimately provide the most accurate representation of Earth’s s...

  8. Estimating the number of animals in wildlife populations

    USGS Publications Warehouse

    Lancia, R.A.; Kendall, W.L.; Pollock, K.H.; Nichols, J.D.; Braun, Clait E.

    2005-01-01

    INTRODUCTION In 1938, Howard M. Wight devoted 9 pages, which was an entire chapter in the first wildlife management techniques manual, to what he termed 'census' methods. As books and chapters such as this attest, the volume of literature on this subject has grown tremendously. Abundance estimation remains an active area of biometrical research, as reflected in the many differences between this chapter and the similar contribution in the previous manual. Our intent in this chapter is to present an overview of the basic and most widely used population estimation techniques and to provide an entree to the relevant literature. Several possible approaches could be taken in writing a chapter dealing with population estimation. For example, we could provide a detailed treatment focusing on statistical models and on derivation of estimators based on these models. Although a chapter using this approach might provide a valuable reference for quantitative biologists and biometricians, it would be of limited use to many field biologists and wildlife managers. Another approach would be to focus on details of actually applying different population estimation techniques. This approach would include both field application (e.g., how to set out a trapping grid or conduct an aerial survey) and detailed instructions on how to use the resulting data with appropriate estimation equations. We are reluctant to attempt such an approach, however, because of the tremendous diversity of real-world field situations defined by factors such as the animal being studied, habitat, available resources, and because of our resultant inability to provide detailed instructions for all possible cases. We believe it is more useful to provide the reader with the conceptual basis underlying estimation methods. Thus, we have tried to provide intuitive explanations for how basic methods work. In doing so, we present relevant estimation equations for many methods and provide citations of more detailed treatments covering both statistical considerations and field applications. We have chosen to present methods that are representative of classes of estimators, rather than address every available method. Our hope is that this chapter will provide the reader with enough background to make an informed decision about what general method(s) will likely perform well in any particular field situation. Readers with a more quantitative background may then be able to consult detailed references and tailor the selected method to suit their particular needs. Less quantitative readers should consult a biometrician, preferably one with experience in wildlife studies, for this 'tailoring,' with the hope they will be able to do so with a basic understanding of the general method, thereby permitting useful interaction and discussion with the biometrician. SUMMARY Estimating the abundance or density of animals in wild populations is not a trivial matter. Virtually all techniques involve the basic problem of estimating the probability of seeing, capturing, or otherwise detecting animals during some type of survey and, in many cases, sampling concerns as well. In the case of indices, the detection probability is assumed to be constant (but unknown). We caution against use of indices unless this assumption can be verified for the comparison(s) of interest. In the case of population estimation, many methods have been developed over the years to estimate the probability of detection associated with various kinds of count statistics. 
Techniques range from complete counts, where sampling concerns often dominate, to incomplete counts where detection probabilities are also important. Some examples of the latter are multiple observers, removal methods, and capture-recapture. Before embarking on a survey to estimate the size of a population, one must understand clearly what information is needed and for what purpose the information will be used. The key to derivin

  9. Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.

    PubMed

    Omer, Travis; Intes, Xavier; Hahn, Juergen

    2015-01-01

    Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), as validated by in silico and in vivo experiments. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
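
    The flavor of D-optimal time-point selection can be sketched as follows. The code below uses a hypothetical bi-exponential FLIM-FRET decay with a quenched fraction and two lifetimes (the model, nominal parameter values, and greedy search are illustrative assumptions, not the authors' implementation) and greedily picks the time points that maximize the determinant of the Fisher-information-like matrix built from the model sensitivities.

```python
import numpy as np

# Hypothetical bi-exponential decay: quenched fraction A with lifetimes tau_q, tau_u (ns).
def jacobian(t, A, tau_q, tau_u):
    """Sensitivities of the decay w.r.t. (A, tau_q, tau_u) at nominal parameter values."""
    dA     = np.exp(-t / tau_q) - np.exp(-t / tau_u)
    dtau_q = A * t / tau_q**2 * np.exp(-t / tau_q)
    dtau_u = (1 - A) * t / tau_u**2 * np.exp(-t / tau_u)
    return np.stack([dA, dtau_q, dtau_u], axis=1)

t_full = np.linspace(0.1, 9.0, 90)                # full 90-point temporal sampling
J_full = jacobian(t_full, A=0.4, tau_q=0.6, tau_u=2.5)

# Greedy D-optimal subset: repeatedly add the time point that most increases
# det(J^T J); a tiny ridge keeps the determinant defined for small designs.
selected = []
for _ in range(10):                               # reduced design of 10 points
    best, best_det = None, -np.inf
    for i in range(len(t_full)):
        if i in selected:
            continue
        J = J_full[selected + [i]]
        d = np.linalg.det(J.T @ J + 1e-12 * np.eye(3))
        if d > best_det:
            best, best_det = i, d
    selected.append(best)

print("selected time points (ns):", np.round(t_full[sorted(selected)], 2))
```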

  10. Quantitative diet reconstruction of a Neolithic population using a Bayesian mixing model (FRUITS): The case study of Ostorf (Germany).

    PubMed

    Fernandes, Ricardo; Grootes, Pieter; Nadeau, Marie-Josée; Nehlich, Olaf

    2015-07-14

    The island cemetery site of Ostorf (Germany) consists of individual human graves containing Funnel Beaker ceramics dating to the Early or Middle Neolithic. However, previous isotope and radiocarbon analysis demonstrated that the Ostorf individuals had a diet rich in freshwater fish. The present study was undertaken to quantitatively reconstruct the diet of the Ostorf population and establish if dietary habits are consistent with the traditional characterization of a Neolithic diet. Quantitative diet reconstruction was achieved through a novel approach consisting of the use of the Bayesian mixing model Food Reconstruction Using Isotopic Transferred Signals (FRUITS) to model isotope measurements from multiple dietary proxies (δ13C-collagen, δ15N-collagen, δ13C-bioapatite, δ34S-methionine, 14C-collagen). The accuracy of model estimates was verified by comparing the agreement between observed and estimated human dietary radiocarbon reservoir effects. Quantitative diet reconstruction estimates confirm that the Ostorf individuals had a high protein intake due to the consumption of fish and terrestrial animal products. However, FRUITS estimates also show that plant foods represented a significant source of calories. Observed and estimated human dietary radiocarbon reservoir effects are in good agreement provided that the aquatic reservoir effect at Lake Ostorf is taken as reference. The Ostorf population apparently adopted elements associated with a Neolithic culture but adapted to available local food resources and implemented a subsistence strategy that involved a large proportion of fish and terrestrial meat consumption. This case study exemplifies the diversity of subsistence strategies followed during the Neolithic. Am J Phys Anthropol, 2015. © 2015 Wiley Periodicals, Inc.
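
    FRUITS itself is a Bayesian mixing model; as a much simpler illustration of the underlying idea, the sketch below solves a toy two-isotope, three-source mixing problem by constrained least squares. All source signatures and consumer values are invented, fractionation is folded into the signatures, and the sum-to-one constraint is imposed with a weighted extra row, so this is only a conceptual stand-in for the full model.

```python
import numpy as np
from scipy.optimize import nnls

# Invented source signatures (rows: d13C, d15N, per mil), already shifted by an
# assumed diet-to-collagen offset; columns: plants, terrestrial meat, fish.
sources = np.array([[-24.0, -20.5, -19.0],
                    [  4.0,   8.5,  12.0]])
consumer = np.array([-20.9, 8.6])               # measured collagen values (invented)

# Non-negative source fractions summing to one: augment the least-squares system
# with a heavily weighted sum-to-one row and solve with NNLS.
w = 100.0
A = np.vstack([sources, w * np.ones(3)])
b = np.concatenate([consumer, [w]])
fractions, _ = nnls(A, b)

for name, f in zip(["plants", "terrestrial meat", "fish"], fractions):
    print(f"{name:17s} {f:.2f}")
```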

  11. Assessment of the Accountability of Night Vision Devices Provided to the Security Forces of Iraq

    DTIC Science & Technology

    2009-03-17

    ...data in this project. The qualitative data consisted of individual interviews, direct observation, and written documents. Quantitative data...

  12. Evaluation of TRMM Ground-Validation Radar-Rain Errors Using Rain Gauge Measurements

    NASA Technical Reports Server (NTRS)

    Wang, Jianxin; Wolff, David B.

    2009-01-01

    Ground-validation (GV) radar-rain products are often utilized for validation of the Tropical Rainfall Measuring Mission (TRMM) space-based rain estimates, and hence, quantitative evaluation of the GV radar-rain product error characteristics is vital. This study uses quality-controlled gauge data to compare with TRMM GV radar rain rates in an effort to provide such error characteristics. The results show that significant differences of concurrent radar-gauge rain rates exist at various time scales ranging from 5 min to 1 day, despite lower overall long-term bias. However, the differences between the radar area-averaged rain rates and gauge point rain rates cannot be explained as due to radar error only. The error variance separation method is adapted to partition the variance of radar-gauge differences into the gauge area-point error variance and radar rain estimation error variance. The results provide relatively reliable quantitative uncertainty evaluation of TRMM GV radar rain estimates at various time scales, and are helpful to better understand the differences between measured radar and gauge rain rates. It is envisaged that this study will contribute to better utilization of GV radar rain products to validate versatile space-based rain estimates from TRMM, as well as the proposed Global Precipitation Measurement, and other satellites.
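
    The error variance separation idea can be sketched with synthetic data: if the gauge's area-point sampling error is independent of the radar's estimation error, the variance of radar-gauge differences decomposes into the sum of the two error variances, so the radar error variance can be recovered by subtracting an independent estimate of the area-point variance. The sketch below is a synthetic demonstration under that independence assumption, not the paper's implementation (which estimates the area-point variance from gauge statistics).

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Synthetic "true" area-averaged rain rate and two independent error sources.
true_areal = rng.gamma(shape=2.0, scale=2.0, size=n)    # mm/h (invented)
radar_err  = rng.normal(0.0, 1.2, size=n)               # radar estimation error
point_err  = rng.normal(0.0, 0.8, size=n)               # gauge area-point error

radar = true_areal + radar_err        # radar estimate of the areal rain rate
gauge = true_areal + point_err        # point gauge "sampling" of the areal rain rate

# Error variance separation (assuming independent errors):
# Var(radar - gauge) = Var(radar_err) + Var(point_err)
var_diff = np.var(radar - gauge, ddof=1)
var_point = np.var(point_err, ddof=1)   # in practice estimated from gauge data, not known
var_radar_est = var_diff - var_point

print(f"true radar error variance      {np.var(radar_err, ddof=1):.2f}")
print(f"separated radar error variance {var_radar_est:.2f}")
```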

  13. Extending the Precipitation Map Offshore Using Daily and 3-Hourly Combined Precipitation Estimates

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Adler, Robert F.; Bolvin, David T.; Curtis, Scott; Einaudi, Franco (Technical Monitor)

    2001-01-01

    One of the difficulties in studying landfalling extratropical cyclones along the Pacific Coast is the lack of antecedent data over the ocean, including precipitation. Recent research on combining various satellite-based precipitation estimates opens the possibility of realistic precipitation estimates on a global 1 deg. x 1 deg. latitude-longitude grid at the daily or even 3-hourly interval. The goal in this work is to provide quantitative precipitation estimates that correctly represent the precipitation-related variables in the hydrological cycle: surface accumulations (fresh-water flux into oceans), frequency and duration statistics, net latent heating, etc.

  14. A Method to Measure and Estimate Normalized Contrast in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents further development in normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (or pixel intensity) contrast and normalized temperature contrast are provided. Methods of converting image contrast to temperature contrast and vice versa are provided. Normalized contrast processing in flash thermography is useful in quantitative analysis of flash thermography data including flaw characterization and comparison of experimental results with simulation. Computation of normalized temperature contrast involves use of a flash thermography data acquisition set-up with high reflectivity foil and high emissivity tape such that the foil, tape and test object are imaged simultaneously. Methods of assessing other quantitative parameters such as emissivity of the object, afterglow heat flux, reflection temperature change and surface temperature during flash thermography are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating afterglow heat-flux and reflection temperature evolution in flash thermography simulation are also discussed.
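
    As a rough illustration of contrast processing of this kind, the sketch below computes one simple normalized pixel-intensity contrast for a synthetic flash-thermography sequence: the post-flash rise at a test pixel divided by the rise at a defect-free reference region, both measured from the pre-flash image. This particular definition and the synthetic cooling curves are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

# Synthetic post-flash cooling curves (arbitrary units): intensity decays roughly
# as t**-0.5 over a sound region, with slightly slower cooling over a shallow flaw.
t = np.linspace(0.05, 5.0, 200)                       # time after flash (s)
i_ref  = 100.0 * t**-0.5 + 20.0                       # defect-free reference region
i_test = 100.0 * t**-0.5 + 20.0 + 8.0 * np.exp(-t)    # pixel above a flaw (invented)
i_pre  = 20.0                                         # pre-flash (cold) image level

# One simple normalized intensity contrast: test-pixel rise relative to the
# reference-region rise, both taken from the pre-flash level.
contrast = (i_test - i_pre) / (i_ref - i_pre)

print(f"peak normalized contrast {contrast.max():.3f} "
      f"at t = {t[np.argmax(contrast)]:.2f} s")
```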

  15. Methodologies for the quantitative estimation of toxicant dose to cigarette smokers using physical, chemical and bioanalytical data

    PubMed Central

    McAughey, John; Shepperd, Christopher J.

    2013-01-01

    Methodologies have been developed, described and demonstrated that convert mouth exposure estimates of cigarette smoke constituents to dose by accounting for smoke spilled from the mouth prior to inhalation (mouth-spill (MS)) and the respiratory retention (RR) during the inhalation cycle. The methodologies are applicable to just about any chemical compound in cigarette smoke that can be measured analytically and can be used with ambulatory population studies. Conversion of exposure to dose improves the relevancy for risk assessment paradigms. Except for urinary nicotine plus metabolites, biomarkers generally do not provide quantitative exposure or dose estimates. In addition, many smoke constituents have no reliable biomarkers. We describe methods to estimate the RR of chemical compounds in smoke based on their vapor pressure (VP) and to estimate the MS for a given subject. Data from two clinical studies were used to demonstrate dose estimation for 13 compounds, of which only 3 have urinary biomarkers. Compounds with VP > 10−5 Pa generally have RRs of 88% or greater, which do not vary appreciably with inhalation volume (IV). Compounds with VP < 10−7 Pa generally have RRs dependent on IV and lung exposure time. For MS, mean subject values from both studies were slightly greater than 30%. For constituents with urinary biomarkers, correlations with the calculated dose were significantly improved over correlations with mouth exposure. Of toxicological importance is that the dose correlations provide an estimate of the metabolic conversion of a constituent to its respective biomarker. PMID:23742081
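
    A minimal numeric sketch of the exposure-to-dose conversion described above: mouth-level exposure is reduced by the fraction spilled from the mouth and by incomplete respiratory retention. The multiplicative form and all numbers below are assumptions chosen to be consistent with the abstract (mouth spill near 30%, retention near 88% for a high-vapor-pressure constituent), not values taken from the paper.

```python
def dose_from_mouth_exposure(mouth_exposure_ug, mouth_spill, respiratory_retention):
    """Convert mouth-level exposure to inhaled dose.

    Assumed multiplicative correction: the spilled fraction never enters the
    respiratory tract, and only the retained fraction of what is inhaled
    counts toward dose.
    """
    inhaled = mouth_exposure_ug * (1.0 - mouth_spill)
    return inhaled * respiratory_retention

# Illustrative values: 500 ug mouth exposure, ~30% mouth spill, ~88% retention.
print(f"estimated dose: {dose_from_mouth_exposure(500.0, 0.30, 0.88):.0f} ug")
```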

  16. Quantifying heterogeneity in human tumours using MRI and PET.

    PubMed

    Asselin, Marie-Claude; O'Connor, James P B; Boellaard, Ronald; Thacker, Neil A; Jackson, Alan

    2012-03-01

    Most tumours, even those of the same histological type and grade, demonstrate considerable biological heterogeneity. Variations in genomic subtype, growth factor expression and local microenvironmental factors can result in regional variations within individual tumours. For example, localised variations in tumour cell proliferation, cell death, metabolic activity and vascular structure will be accompanied by variations in oxygenation status, pH and drug delivery that may directly affect therapeutic response. Documenting and quantifying regional heterogeneity within the tumour requires histological or imaging techniques. There is increasing evidence that quantitative imaging biomarkers can be used in vivo to provide important, reproducible and repeatable estimates of tumoural heterogeneity. In this article we review the imaging methods available to provide appropriate biomarkers of tumour structure and function. We also discuss the significant technical issues involved in the quantitative estimation of heterogeneity and the range of descriptive metrics that can be derived. Finally, we have reviewed the existing clinical evidence that heterogeneity metrics provide additional useful information in drug discovery and development and in clinical practice. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Neuroergonomics: Quantitative Modeling of Individual, Shared, and Team Neurodynamic Information.

    PubMed

    Stevens, Ronald H; Galloway, Trysha L; Willemsen-Dunlap, Ann

    2018-06-01

    The aim of this study was to use the same quantitative measure and scale to directly compare the neurodynamic information/organizations of individual team members with those of the team. Team processes are difficult to separate from those of individual team members due to the lack of quantitative measures that can be applied to both process sets. Second-by-second symbolic representations were created of each team member's electroencephalographic power, and quantitative estimates of their neurodynamic organizations were calculated from the Shannon entropy of the symbolic data streams. The information in the neurodynamic data streams of health care (n = 24), submarine navigation (n = 12), and high school problem-solving (n = 13) dyads was separated into the information of each team member, the information shared by team members, and the overall team information. Most of the team information was the sum of each individual's neurodynamic information. The remaining team information was shared among the team members. This shared information averaged ~15% of the individual information, with momentary levels of 1% to 80%. Continuous quantitative estimates can be made from the shared, individual, and team neurodynamic information about the contributions of different team members to the overall neurodynamic organization of a team and the neurodynamic interdependencies among the team members. Information models provide a generalizable quantitative method for separating a team's neurodynamic organization into that of individual team members and that shared among team members.
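
    A compact sketch of this information bookkeeping: given second-by-second symbolic streams for two team members, the Shannon entropies of the individual streams give the individual information, their mutual information gives the shared information, and the entropy of the joint symbols gives a team-level quantity. The alphabet and random data below are placeholders; the partitioning used in the study may differ in detail.

```python
import numpy as np
from collections import Counter

def entropy(symbols):
    """Shannon entropy (bits) of a discrete symbol stream."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# Placeholder symbolic EEG-power streams for two team members (alphabet of 4 symbols),
# made partially dependent so that some information is shared.
a = rng.integers(0, 4, size=600)
b = np.where(rng.random(600) < 0.3, a, rng.integers(0, 4, size=600))

h_a, h_b = entropy(a), entropy(b)
h_joint = entropy(list(zip(a, b)))          # joint (team-level) information
shared = h_a + h_b - h_joint                # mutual information between members

print(f"member A: {h_a:.2f} bits, member B: {h_b:.2f} bits")
print(f"joint: {h_joint:.2f} bits, shared: {shared:.2f} bits "
      f"(~{100 * shared / ((h_a + h_b) / 2):.0f}% of the individual information)")
```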

  18. The Economic Impact of Jefferson College on the Community and the State, FY 2002.

    ERIC Educational Resources Information Center

    Jefferson Coll., Hillsboro, MO.

    The purpose of this study is to provide an estimation of the ways in which Jefferson College impacts and stimulates the economy of Jefferson County and the state of Missouri as a whole. It provides quantitative information for use by the Board of Trustees and the Administrative Cabinet in institutional planning endeavors. It is also a useful…

  19. Missed and Delayed Diagnosis of Dementia in Primary Care: Prevalence and Contributing Factors

    PubMed Central

    Bradford, Andrea; Kunik, Mark E.; Schulz, Paul; Williams, Susan P.; Singh, Hardeep

    2009-01-01

    Dementia is a growing public health problem for which early detection may be beneficial. Currently, the diagnosis of dementia in primary care is dependent mostly on clinical suspicion based on patient symptoms or caregivers’ concerns and is prone to be missed or delayed. We conducted a systematic review of the literature to ascertain the prevalence and contributing factors for missed and delayed dementia diagnoses in primary care. Prevalence of missed and delayed diagnosis was estimated by abstracting quantitative data from studies of diagnostic sensitivity among primary care providers. Possible predictors and contributory factors were determined from the text of quantitative and qualitative studies of patient-, caregiver-, provider-, and system-related barriers. Overall estimates of diagnostic sensitivity varied among studies and appeared to be in part a function of dementia severity, degree of patient impairment, dementia subtype, and frequency of patient-provider contact. Major contributory factors included problems with attitudes and patient-provider communication, educational deficits, and system resource constraints. The true prevalence of missed and delayed diagnoses of dementia is unknown but appears to be high. Until the case for dementia screening becomes more compelling, efforts to promote timely detection should focus on removing barriers to diagnosis. PMID:19568149

  20. A Statistical Framework for Protein Quantitation in Bottom-Up MS-Based Proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas

    2009-08-15

    Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and associated confidence measures. Challenges include the presence of low quality or incorrectly identified peptides and informative missingness. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model that carefully accounts for informative missingness in peak intensities and allows unbiased, model-based, protein-level estimation and inference. The model is applicable to both label-based and label-free quantitation experiments. We also provide automated, model-based, algorithms for filtering of proteins and peptides as well as imputation of missing values. Two LC/MS datasets are used to illustrate the methods. In simulation studies, our methods are shown to achieve substantially more discoveries than standard alternatives. Availability: The software has been made available in the open-source proteomics platform DAnTE (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu Supplementary information: Supplementary data are available at Bioinformatics online.

  1. The influence of the design matrix on treatment effect estimates in the quantitative analyses of single-subject experimental design research.

    PubMed

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim

    2014-09-01

    The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but still a lot of questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, the multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data. © The Author(s) 2014.
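
    To make the design-matrix issue concrete, the sketch below builds one common parameterization for a single case in a multiple-baseline design: an intercept, a time trend, a phase (treatment) dummy, and a phase-by-time interaction, so that the dummy's coefficient is the immediate level change and the interaction is the change in slope. This is only one of several codings discussed in this literature, shown with invented data; it is not presented as the authors' preferred specification.

```python
import numpy as np

# Invented outcome series for one case: 6 baseline and 8 treatment sessions.
y = np.array([3, 4, 3, 5, 4, 4, 7, 8, 8, 9, 10, 9, 11, 11], dtype=float)
n_base = 6
session = np.arange(len(y), dtype=float)
phase = (session >= n_base).astype(float)            # 0 = baseline, 1 = treatment
time_in_phase = np.where(phase == 1, session - n_base, 0.0)

# Design matrix: intercept, baseline time trend, level change, slope change.
X = np.column_stack([
    np.ones_like(session),    # intercept (baseline level)
    session,                  # time trend carried over from baseline
    phase,                    # immediate level shift at treatment onset
    time_in_phase,            # additional slope during treatment
])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
labels = ["baseline level", "baseline slope", "level change", "slope change"]
for name, b in zip(labels, beta):
    print(f"{name:15s} {b:6.2f}")
```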

  2. On the reconciliation of missing heritability for genome-wide association studies

    PubMed Central

    Chen, Guo-Bo

    2016-01-01

    The definition of heritability has been unique and clear, but its estimation and estimates vary across studies. Linear mixed model (LMM) and Haseman–Elston (HE) regression analyses are commonly used for estimating heritability from genome-wide association data. This study provides an analytical resolution that can be used to reconcile the differences between LMM and HE in the estimation of heritability given the genetic architecture, which is responsible for these differences. The genetic architecture was classified into three forms via thought experiments: (i) a coupling genetic architecture, in which the quantitative trait loci (QTLs) in linkage disequilibrium (LD) had a positive covariance; (ii) a repulsion genetic architecture, in which the QTLs in LD had a negative covariance; and (iii) a neutral genetic architecture, in which the covariances of the QTLs in LD summed to zero. The neutral genetic architecture is so far the most widely assumed, whereas the coupling and repulsion genetic architectures have not been well investigated. For a quantitative trait under the coupling genetic architecture, HE overestimated the heritability and LMM underestimated it; under the repulsion genetic architecture, HE underestimated and LMM overestimated the heritability. The two methods gave identical results under the neutral genetic architecture. A general analytical result for the statistic estimated under HE is given regardless of genetic architecture. In contrast, the performance of LMM remained elusive: it further depended on the ratio between the sample size and the number of markers, although LMM converged to HE with increased sample size. PMID:27436266
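
    As a concrete reference point for the HE side of this comparison, the sketch below implements a basic Haseman–Elston cross-product regression on genotypes simulated with independent causal effects (closest in spirit to the neutral architecture): phenotype cross-products for pairs of individuals are regressed on their genomic relatedness, and the slope estimates the heritability. The sample size, marker count, and simulation details are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, h2_true = 800, 2000, 0.5

# Simulate standardized genotypes and independent per-marker effects.
maf = rng.uniform(0.05, 0.5, m)
G = rng.binomial(2, maf, size=(n, m)).astype(float)
Z = (G - G.mean(0)) / G.std(0)                      # column-standardized genotypes
beta = rng.normal(0.0, np.sqrt(h2_true / m), m)     # causal effects
y = Z @ beta + rng.normal(0.0, np.sqrt(1 - h2_true), n)
y = (y - y.mean()) / y.std()

A = Z @ Z.T / m                                     # genomic relationship matrix

# Haseman-Elston cross-product regression over distinct pairs (i < j):
# slope of y_i * y_j on A_ij estimates the heritability.
iu = np.triu_indices(n, k=1)
cross_products = np.outer(y, y)[iu]
relatedness = A[iu]
slope = np.polyfit(relatedness, cross_products, 1)[0]

print(f"HE regression heritability estimate: {slope:.2f} (simulated {h2_true})")
```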

  3. An assessment of the risk of foreign animal disease introduction into the United States of America through garbage from Alaskan cruise ships.

    PubMed

    McElvaine, M D; McDowell, R M; Fite, R W; Miller, L

    1993-12-01

    The United States Department of Agriculture, Animal and Plant Health Inspection Service (USDA-APHIS) has been exploring methods of quantitative risk assessment to support decision-making, provide risk management options and identify research needs. With current changes in world trade, regulatory decisions must have a scientific basis which is transparent, consistent, documentable and defensible. These quantitative risk assessment methods are described in an accompanying paper in this issue. In the present article, the authors provide an illustration by presenting an application of these methods. Prior to proposing changes in regulations, USDA officials requested an assessment of the risk of introduction of foreign animal disease to the United States of America through garbage from Alaskan cruise ships. The risk assessment team used a combination of quantitative and qualitative methods to evaluate this question. Quantitative risk assessment methods were used to estimate the amount of materials of foreign origin being sent to Alaskan landfills. This application of quantitative risk assessment illustrates the flexibility of the methods in addressing specific questions. By applying these methods, specific areas were identified where more scientific information and research were needed. Even with limited information, the risk assessment provided APHIS management with a scientific basis for a regulatory decision.

  4. Estimation of polydispersity in aggregating red blood cells by quantitative ultrasound backscatter analysis.

    PubMed

    de Monchy, Romain; Rouyer, Julien; Destrempes, François; Chayer, Boris; Cloutier, Guy; Franceschini, Emilie

    2018-04-01

    Quantitative ultrasound techniques based on the backscatter coefficient (BSC) have been commonly used to characterize red blood cell (RBC) aggregation. Specifically, a scattering model is fitted to measured BSC and estimated parameters can provide a meaningful description of the RBC aggregates' structure (i.e., aggregate size and compactness). In most cases, scattering models assumed monodisperse RBC aggregates. This study proposes the Effective Medium Theory combined with the polydisperse Structure Factor Model (EMTSFM) to incorporate the polydispersity of aggregate size. From the measured BSC, this model allows estimating three structural parameters: the mean radius of the aggregate size distribution, the width of the distribution, and the compactness of the aggregates. Two successive experiments were conducted: a first experiment on blood sheared in a Couette flow device coupled with an ultrasonic probe, and a second experiment, on the same blood sample, sheared in a plane-plane rheometer coupled to a light microscope. Results demonstrated that the polydisperse EMTSFM provided the best fit to the BSC data when compared to the classical monodisperse models for the higher levels of aggregation at hematocrits between 10% and 40%. Fitting the polydisperse model yielded aggregate size distributions that were consistent with direct light microscope observations at low hematocrits.

  5. Importance of Geodetically Controlled Topography to Constrain Rates of Volcanism and Internal Magma Plumbing Systems

    NASA Astrophysics Data System (ADS)

    Glaze, L. S.; Baloga, S. M.; Garvin, J. B.; Quick, L. C.

    2014-05-01

    Lava flows and flow fields on Venus lack sufficient topographic data for any type of quantitative modeling to estimate eruption rates and durations. Such modeling can constrain rates of resurfacing and provide insights into magma plumbing systems.

  6. Ocean Heat Content Reveals Secrets of Fish Migrations

    PubMed Central

    Luo, Jiangang; Ault, Jerald S.; Shay, Lynn K.; Hoolihan, John P.; Prince, Eric D.; Brown, Craig A.; Rooker, Jay R.

    2015-01-01

    For centuries, the mechanisms surrounding spatially complex animal migrations have intrigued scientists and the public. We present a new methodology using ocean heat content (OHC), a habitat metric that is normally a fundamental part of hurricane intensity forecasting, to estimate movements and migration of satellite-tagged marine fishes. Previous satellite-tagging research of fishes using archival depth, temperature and light data for geolocations has been too coarse to resolve detailed ocean habitat utilization. We combined tag data with OHC estimated from ocean circulation and transport models in an optimization framework that substantially improved geolocation accuracy over SST-based tracks. The OHC-based movement track provided the first quantitative evidence that many of the tagged highly migratory fishes displayed affinities for ocean fronts and eddies. The OHC method provides a new quantitative tool for studying dynamic use of ocean habitats, migration processes and responses to environmental changes by fishes, and further, improves ocean animal tracking and extends satellite-based animal tracking data for other potential physical, ecological, and fisheries applications. PMID:26484541

  7. Software analysis handbook: Software complexity analysis and software reliability estimation and prediction

    NASA Technical Reports Server (NTRS)

    Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron

    1994-01-01

    This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.

  8. The effects of dominance, regular inbreeding and sampling design on Q(ST), an estimator of population differentiation for quantitative traits.

    PubMed

    Goudet, Jérôme; Büchi, Lucie

    2006-02-01

    To test whether quantitative traits are under directional or homogenizing selection, it is common practice to compare population differentiation estimates at molecular markers (F(ST)) and quantitative traits (Q(ST)). If the trait is neutral and its determinism is additive, then theory predicts that Q(ST) = F(ST), while Q(ST) > F(ST) is predicted under directional selection for different local optima, and Q(ST) < F(ST) is predicted under homogenizing selection. However, nonadditive effects can alter these predictions. Here, we investigate the influence of dominance on the relation between Q(ST) and F(ST) for neutral traits. Using analytical results and computer simulations, we show that dominance generally deflates Q(ST) relative to F(ST). Under inbreeding, the effect of dominance vanishes, and we show that for selfing species, a better estimate of Q(ST) is obtained from selfed families than from half-sib families. We also compare several sampling designs and find that it is always best to sample many populations (>20) with few families (five) rather than few populations with many families. Provided that estimates of Q(ST) are derived from individuals originating from many populations, we conclude that the pattern Q(ST) > F(ST), and hence the inference of directional selection for different local optima, is robust to the effect of nonadditive gene actions.
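
    For readers wanting the basic quantity behind this comparison, the sketch below computes Q_ST from between- and within-population additive genetic variance components using the standard expression Q_ST = σ²_B / (σ²_B + 2σ²_W) and compares it to a marker-based F_ST. The variance components and F_ST value are invented numbers; estimating the components from half-sib or selfed families, as the abstract discusses, is the hard part in practice.

```python
def q_st(var_between, var_within_additive):
    """Quantitative-trait differentiation:
    Q_ST = sigma2_B / (sigma2_B + 2 * sigma2_W),
    with sigma2_W the within-population additive genetic variance."""
    return var_between / (var_between + 2.0 * var_within_additive)

# Invented variance components and marker differentiation for illustration.
sigma2_between = 1.0
sigma2_within = 3.2
fst = 0.11

qst = q_st(sigma2_between, sigma2_within)
signal = "directional" if qst > fst else "homogenizing or neutral"
print(f"Q_ST = {qst:.3f} vs F_ST = {fst:.3f} -> {signal} signal (naive reading)")
```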

  9. Review of Quantitative Ultrasound: Envelope Statistics and Backscatter Coefficient Imaging and Contributions to Diagnostic Ultrasound.

    PubMed

    Oelze, Michael L; Mamou, Jonathan

    2016-02-01

    Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation, and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years, QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient (BSC), estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter (ESD) and the effective acoustic concentration (EAC) of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue. Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and preclinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy.
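
    Envelope statistics of the kind mentioned above are often summarized with a Nakagami model; the sketch below fits the Nakagami shape and scale parameters to a simulated echo envelope by the method of moments. The Nakagami parameterization is one common choice for illustration here, not necessarily the specific estimator reviewed in the article.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated backscattered envelope: Rayleigh-distributed amplitudes correspond to
# fully developed speckle (many diffuse scatterers), i.e. Nakagami shape m ~ 1.
envelope = rng.rayleigh(scale=1.0, size=20000)

# Method-of-moments Nakagami fit on the intensity x = envelope**2:
#   omega = E[x]  (scale),   m = E[x]**2 / Var(x)  (shape).
intensity = envelope**2
omega = intensity.mean()
m = intensity.mean()**2 / intensity.var()

print(f"Nakagami fit: m = {m:.2f} (about 1 expected for Rayleigh speckle), omega = {omega:.2f}")
```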

  10. Review of quantitative ultrasound: envelope statistics and backscatter coefficient imaging and contributions to diagnostic ultrasound

    PubMed Central

    Oelze, Michael L.; Mamou, Jonathan

    2017-01-01

    Conventional medical imaging technologies, including ultrasound, have continued to improve over the years. For example, in oncology, medical imaging is characterized by high sensitivity, i.e., the ability to detect anomalous tissue features, but the ability to classify these tissue features from images often lacks specificity. As a result, a large number of biopsies of tissues with suspicious image findings are performed each year with a vast majority of these biopsies resulting in a negative finding. To improve specificity of cancer imaging, quantitative imaging techniques can play an important role. Conventional ultrasound B-mode imaging is mainly qualitative in nature. However, quantitative ultrasound (QUS) imaging can provide specific numbers related to tissue features that can increase the specificity of image findings leading to improvements in diagnostic ultrasound. QUS imaging techniques can encompass a wide variety of techniques including spectral-based parameterization, elastography, shear wave imaging, flow estimation and envelope statistics. Currently, spectral-based parameterization and envelope statistics are not available on most conventional clinical ultrasound machines. However, in recent years QUS techniques involving spectral-based parameterization and envelope statistics have demonstrated success in many applications, providing additional diagnostic capabilities. Spectral-based techniques include the estimation of the backscatter coefficient, estimation of attenuation, and estimation of scatterer properties such as the correlation length associated with an effective scatterer diameter and the effective acoustic concentration of scatterers. Envelope statistics include the estimation of the number density of scatterers and quantification of coherent to incoherent signals produced from the tissue. Challenges for clinical application include correctly accounting for attenuation effects and transmission losses and implementation of QUS on clinical devices. Successful clinical and pre-clinical applications demonstrating the ability of QUS to improve medical diagnostics include characterization of the myocardium during the cardiac cycle, cancer detection, classification of solid tumors and lymph nodes, detection and quantification of fatty liver disease, and monitoring and assessment of therapy. PMID:26761606

  11. Merging Radar Quantitative Precipitation Estimates (QPEs) from the High-resolution NEXRAD Reanalysis over CONUS with Rain-gauge Observations

    NASA Astrophysics Data System (ADS)

    Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Nickl, E.; Seo, D. J.; Kim, B.; Zhang, J.; Qi, Y.

    2015-12-01

    The processing of radar-only precipitation via the reanalysis from the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation (NMQ/Q2) system based on the WSR-88D Next-generation Radar (Nexrad) network over the Continental United States (CONUS) is completed for the period from 2002 to 2011. While this constitutes a unique opportunity to study precipitation processes at higher resolution than conventionally possible (1-km, 5-min), the long-term radar-only product needs to be merged with in-situ information in order to be suitable for hydrological, meteorological and climatological applications. The radar-gauge merging is performed by using rain gauge information at daily (Global Historical Climatology Network-Daily: GHCN-D), hourly (Hydrometeorological Automated Data System: HADS), and 5-min (Automated Surface Observing Systems: ASOS; Climate Reference Network: CRN) resolution. The challenges related to incorporating differing resolution and quality networks to generate long-term large-scale gridded estimates of precipitation are enormous. In that perspective, we are implementing techniques for merging the rain gauge datasets and the radar-only estimates such as Inverse Distance Weighting (IDW), Simple Kriging (SK), Ordinary Kriging (OK), and Conditional Bias-Penalized Kriging (CBPK). An evaluation of the different radar-gauge merging techniques is presented and we provide an estimate of uncertainty for the gridded estimates. In addition, comparisons with a suite of lower resolution QPEs derived from ground based radar measurements (Stage IV) are provided in order to give a detailed picture of the improvements and remaining challenges.
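
    Among the merging schemes listed above, inverse distance weighting is the simplest; the sketch below applies an IDW-interpolated multiplicative gauge/radar bias field to adjust a radar grid toward the gauges. The grid size, gauge locations, and power parameter are arbitrary assumptions, and the kriging variants used in the study are not shown.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic radar QPE grid (mm) with a uniform 20% overestimation bias.
ny, nx = 50, 50
truth = 5.0 + 2.0 * rng.random((ny, nx))
radar = 1.2 * truth

# A handful of gauges sample the truth at known grid cells.
gy = rng.integers(0, ny, 15)
gx = rng.integers(0, nx, 15)
gauge_ratio = truth[gy, gx] / radar[gy, gx]          # gauge/radar bias factors

# Inverse-distance-weighted interpolation of the bias factors to every cell
# (power-2 IDW: weights are 1/d**2, and d2 below is already the squared distance).
yy, xx = np.mgrid[0:ny, 0:nx]
d2 = (yy[..., None] - gy)**2 + (xx[..., None] - gx)**2
weights = 1.0 / np.maximum(d2, 1e-6)
bias_field = (weights * gauge_ratio).sum(-1) / weights.sum(-1)

merged = radar * bias_field
print(f"mean abs error: radar-only {np.abs(radar - truth).mean():.2f} mm, "
      f"merged {np.abs(merged - truth).mean():.2f} mm")
```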

  12. Fourier phase in Fourier-domain optical coherence tomography

    PubMed Central

    Uttam, Shikhar; Liu, Yang

    2015-01-01

    Phase of an electromagnetic wave propagating through a sample-of-interest is well understood in the context of quantitative phase imaging in transmission-mode microscopy. In the past decade, Fourier-domain optical coherence tomography has been used to extend quantitative phase imaging to the reflection-mode. Unlike transmission-mode electromagnetic phase, however, the origin and characteristics of reflection-mode Fourier phase are poorly understood, especially in samples with a slowly varying refractive index. In this paper, the general theory of Fourier phase from first principles is presented, and it is shown that Fourier phase is a joint estimate of subresolution offset and mean spatial frequency of the coherence-gated sample refractive index. It is also shown that both spectral-domain phase microscopy and depth-resolved spatial-domain low-coherence quantitative phase microscopy are special cases of this general theory. Analytical expressions are provided for both, and simulations are presented to explain and support the theoretical results. These results are further used to show how Fourier phase allows the estimation of an axial mean spatial frequency profile of the sample, along with depth-resolved characterization of localized optical density change and sample heterogeneity. Finally, a Fourier phase-based explanation of Doppler optical coherence tomography is also provided. PMID:26831383

  13. Swath width study. A simulation assessment of costs and benefits of a sensor system for agricultural application

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Satellites provide an excellent platform from which to observe crops on the scale and frequency required to provide accurate crop production estimates on a worldwide basis. Multispectral imaging sensors aboard these platforms are capable of providing data from which to derive acreage and production estimates. The issue of sensor swath width was examined. The quantitative trade studies necessary to resolve the combined issue of sensor swath width, number of platforms, and their orbits were generated and are included. Problems with different swath width sensors were analyzed and an assessment of system trade-offs of swath width versus number of satellites was made for achieving Global Crop Production Forecasting.

  14. Quantitative imaging of aggregated emulsions.

    PubMed

    Penfold, Robert; Watson, Andrew D; Mackie, Alan R; Hibberd, David J

    2006-02-28

    Noise reduction, restoration, and segmentation methods are developed for the quantitative structural analysis in three dimensions of aggregated oil-in-water emulsion systems imaged by fluorescence confocal laser scanning microscopy. Mindful of typical industrial formulations, the methods are demonstrated for concentrated (30% volume fraction) and polydisperse emulsions. Following a regularized deconvolution step using an analytic optical transfer function and appropriate binary thresholding, novel application of the Euclidean distance map provides effective discrimination of closely clustered emulsion droplets with size variation over at least 1 order of magnitude. The a priori assumption of spherical nonintersecting objects provides crucial information to combat the ill-posed inverse problem presented by locating individual particles. Position coordinates and size estimates are recovered with sufficient precision to permit quantitative study of static geometrical features. In particular, aggregate morphology is characterized by a novel void distribution measure based on the generalized Apollonius problem. This is also compared with conventional Voronoi/Delauney analysis.
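
    The distance-map step described above can be sketched with standard tools: a Euclidean distance transform of the binary (thresholded) image followed by a marker-based watershed separates touching, roughly spherical droplets. The sketch below uses a 2-D synthetic image of two overlapping discs and scipy/scikit-image routines as stand-ins for the authors' 3-D confocal pipeline.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Synthetic binary image: two overlapping discs standing in for clustered droplets.
yy, xx = np.mgrid[0:120, 0:120]
binary = ((yy - 55)**2 + (xx - 45)**2 < 28**2) | ((yy - 65)**2 + (xx - 80)**2 < 22**2)

# Euclidean distance map: ridges at droplet centres even where the discs touch.
distance = ndi.distance_transform_edt(binary)

# Local maxima of the distance map seed a watershed that splits the cluster.
peak_idx = peak_local_max(distance, labels=binary.astype(int), min_distance=10)
markers = np.zeros_like(distance, dtype=int)
markers[tuple(peak_idx.T)] = np.arange(1, len(peak_idx) + 1)
labels = watershed(-distance, markers, mask=binary)

for lab in range(1, labels.max() + 1):
    area = (labels == lab).sum()
    radius = np.sqrt(area / np.pi)            # equivalent-circle radius estimate
    print(f"droplet {lab}: area {area} px, radius ~ {radius:.1f} px")
```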

  15. Body Composition.

    ERIC Educational Resources Information Center

    Mayhew, Jerry L.

    1981-01-01

    Body composition refers to the types and amounts of tissues which make up the body. The most acceptable method for assessing body composition is underwater weighing. A subcutaneous skinfold provides a quantitative measurement of fat below the skin. The skinfold technique permits a valid estimate of the body's total fat content. (JN)

  16. Experimental Design for Parameter Estimation of Gene Regulatory Networks

    PubMed Central

    Timmer, Jens

    2012-01-01

    Systems biology aims for building quantitative models to address unresolved issues in molecular biology. In order to describe the behavior of biological cells adequately, gene regulatory networks (GRNs) are intensively investigated. As the validity of models built for GRNs depends crucially on the kinetic rates, various methods have been developed to estimate these parameters from experimental data. For this purpose, it is favorable to choose the experimental conditions yielding maximal information. However, existing experimental design principles often rely on unfulfilled mathematical assumptions or become computationally demanding with growing model complexity. To solve this problem, we combined advanced methods for parameter and uncertainty estimation with experimental design considerations. As a showcase, we optimized three simulated GRNs in one of the challenges from the Dialogue for Reverse Engineering Assessment and Methods (DREAM). This article presents our approach, which was awarded the best performing procedure at the DREAM6 Estimation of Model Parameters challenge. For fast and reliable parameter estimation, local deterministic optimization of the likelihood was applied. We analyzed identifiability and precision of the estimates by calculating the profile likelihood. Furthermore, the profiles provided a way to uncover a selection of most informative experiments, from which the optimal one was chosen using additional criteria at every step of the design process. In conclusion, we provide a strategy for optimal experimental design and show its successful application on three highly nonlinear dynamic models. Although presented in the context of the GRNs to be inferred for the DREAM6 challenge, the approach is generic and applicable to most types of quantitative models in systems biology and other disciplines. PMID:22815723

  17. Applying petrophysical models to radar travel time and electrical resistivity tomograms: Resolution-dependent limitations

    USGS Publications Warehouse

    Day-Lewis, F. D.; Singha, K.; Binley, A.M.

    2005-01-01

    Geophysical imaging has traditionally provided qualitative information about geologic structure; however, there is increasing interest in using petrophysical models to convert tomograms to quantitative estimates of hydrogeologic, mechanical, or geochemical parameters of interest (e.g., permeability, porosity, water content, and salinity). Unfortunately, petrophysical estimation based on tomograms is complicated by limited and variable image resolution, which depends on (1) measurement physics (e.g., electrical conduction or electromagnetic wave propagation), (2) parameterization and regularization, (3) measurement error, and (4) spatial variability. We present a framework to predict how core-scale relations between geophysical properties and hydrologic parameters are altered by the inversion, which produces smoothly varying pixel-scale estimates. We refer to this loss of information as "correlation loss." Our approach upscales the core-scale relation to the pixel scale using the model resolution matrix from the inversion, random field averaging, and spatial statistics of the geophysical property. Synthetic examples evaluate the utility of radar travel time tomography (RTT) and electrical-resistivity tomography (ERT) for estimating water content. This work provides (1) a framework to assess tomograms for geologic parameter estimation and (2) insights into the different patterns of correlation loss for ERT and RTT. Whereas ERT generally performs better near boreholes, RTT performs better in the interwell region. Application of petrophysical models to the tomograms in our examples would yield misleading estimates of water content. Although the examples presented illustrate the problem of correlation loss in the context of near-surface geophysical imaging, our results have clear implications for quantitative analysis of tomograms for diverse geoscience applications. Copyright 2005 by the American Geophysical Union.
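
    The "correlation loss" idea can be illustrated with a toy 1-D example: the pixel-scale tomogram is modeled as the true geophysical field seen through a model resolution matrix (here simply a row-normalized Gaussian smoother, an assumption standing in for the matrix returned by a real inversion), and the correlation between water content and the geophysical property is compared at the core scale and at the tomogram scale. The random field and the petrophysical relation below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 400

# Spatially correlated water-content profile (toy random field via moving average).
theta = 0.20 + 0.05 * np.convolve(rng.normal(size=n + 10), np.ones(10) / 10, "valid")[:n]

# Core-scale petrophysical relation (invented): radar velocity drops with water content.
velocity = 0.12 - 0.15 * theta + rng.normal(0.0, 0.0005, n)   # m/ns, small core-scale scatter

# "Inversion": pixel-scale estimates are the true field seen through a resolution
# matrix, modeled here as a Gaussian smoothing operator with row sums of one.
x = np.arange(n)
width = 20.0
R = np.exp(-0.5 * ((x[:, None] - x[None, :]) / width)**2)
R /= R.sum(axis=1, keepdims=True)
velocity_tomo = R @ velocity

core_r = np.corrcoef(theta, velocity)[0, 1]
pixel_r = np.corrcoef(theta, velocity_tomo)[0, 1]
print(f"theta vs velocity correlation: core scale {core_r:.2f}, tomogram scale {pixel_r:.2f}")
```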

  18. Markov chain Monte Carlo linkage analysis: effect of bin width on the probability of linkage.

    PubMed

    Slager, S L; Juo, S H; Durner, M; Hodge, S E

    2001-01-01

    We analyzed part of the Genetic Analysis Workshop (GAW) 12 simulated data using Monte Carlo Markov chain (MCMC) methods that are implemented in the computer program Loki. The MCMC method reports the "probability of linkage" (PL) across the chromosomal regions of interest. The point of maximum PL can then be taken as a "location estimate" for the location of the quantitative trait locus (QTL). However, Loki does not provide a formal statistical test of linkage. In this paper, we explore how the bin width used in the calculations affects the max PL and the location estimate. We analyzed age at onset (AO) and quantitative trait number 5, Q5, from 26 replicates of the general simulated data in one region where we knew a major gene, MG5, is located. For each trait, we found the max PL and the corresponding location estimate, using four different bin widths. We found that bin width, as expected, does affect the max PL and the location estimate, and we recommend that users of Loki explore how their results vary with different bin widths.

  19. [Reflection of estimating postmortem interval in forensic entomology and the Daubert standard].

    PubMed

    Xie, Dan; Peng, Yu-Long; Guo, Ya-Dong; Cai, Ji-Feng

    2013-08-01

    Estimating the postmortem interval (PMI) is a central and difficult task in forensic practice, and forensic entomology plays an indispensable role in it. The theories and technologies of forensic entomology have grown increasingly rich in recent years, but many problems remain in both research and practice. With the adoption of the Daubert standard, greater demands are placed on the reliability and accuracy of PMI estimation by forensic entomology. This review summarizes the application of the Daubert standard to several aspects of ecology, quantitative genetics, population genetics, molecular biology, and microbiology in the practice of forensic entomology. It builds a bridge between basic research and forensic practice, with the aim of providing higher accuracy for PMI estimation by forensic entomology.

  20. Broad-spectrum monitoring strategies for predicting occult precipitation contribution to water balance in a coastal watershed in California: Ground-truthing, areal monitoring and isotopic analysis of fog in the San Francisco Bay region

    NASA Astrophysics Data System (ADS)

    Koohafkan, M.; Thompson, S. E.; Leonardson, R.; Dufour, A.

    2013-12-01

    We showcase a fog monitoring study designed to quantitatively estimate the contribution of summer fog events to the water balance of a coastal watershed managed by the San Francisco Public Utilities Commission. Two decades of research now clearly show that fog and occult precipitation can be major contributors to the water balance of watersheds worldwide. Monitoring, understanding and predicting occult precipitation is therefore as hydrologically compelling as forecasting precipitation or evaporation, particularly in the face of climate variability. We combine ground-based monitoring and collection strategies with remote sensing technologies, time-lapse imagery, and isotope analysis to trace the 'signature' of fog in physical and ecological processes. Spatial coverage and duration of fog events in the watershed are monitored using time-lapse cameras and leaf wetness sensors strategically positioned to provide estimates of the fog bank extent and cloud base elevation, and this fine-scale data is used to estimate transpiration suppression by fog and is examined in the context of regional climate through the use of satellite imagery. Soil moisture sensors, throughfall collectors and advective fog collectors deployed throughout the watershed provide quantitative estimates of fog drip contribution to soil moisture and plants. Fog incidence records and streamflow monitoring provide daily estimates of fog contribution to streamflow. Isotope analysis of soil water, fog drip, stream water and vegetation samples are used to probe for evidence of direct root and leaf uptake of fog drip by plants. Using this diversity of fog monitoring methods, we develop an empirical framework for the inclusion of fog processes in water balance models.

  1. Development of four self-report measures of job stressors and strain: Interpersonal Conflict at Work Scale, Organizational Constraints Scale, Quantitative Workload Inventory, and Physical Symptoms Inventory.

    PubMed

    Spector, P E; Jex, S M

    1998-10-01

    Despite the widespread use of self-report measures of both job-related stressors and strains, relatively few carefully developed scales for which validity data exist are available. In this article, we discuss 3 job stressor scales (Interpersonal Conflict at Work Scale, Organizational Constraints Scale, and Quantitative Workload Inventory) and 1 job strain scale (Physical Symptoms Inventory). Using meta-analysis, we combined the results of 18 studies to provide estimates of relations between our scales and other variables. Data showed moderate convergent validity for the 3 job stressor scales, suggesting some objectivity to these self-reports. Norms for each scale are provided.
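
    The meta-analytic combination step described above can be illustrated with the standard Fisher z approach for pooling correlations; the per-study correlations and sample sizes in this sketch are hypothetical, not the values reported by the authors.

      # Minimal sketch of pooling correlations across studies via Fisher's z
      # (a standard meta-analytic step; r and n below are hypothetical).
      import numpy as np

      r = np.array([0.32, 0.28, 0.41, 0.25, 0.35])   # per-study correlations
      n = np.array([120, 210, 95, 300, 150])         # per-study sample sizes

      z = np.arctanh(r)                  # Fisher z-transform
      w = n - 3                          # inverse-variance weights for z
      z_bar = np.sum(w * z) / np.sum(w)
      se = 1.0 / np.sqrt(np.sum(w))
      r_bar = np.tanh(z_bar)             # back-transform to a pooled correlation
      ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])
      print(f"pooled r = {r_bar:.3f}, 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")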

  2. The Mapping Model: A Cognitive Theory of Quantitative Estimation

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2008-01-01

    How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…

  3. Database for Simulation of Electron Spectra for Surface Analysis (SESSA)

    National Institute of Standards and Technology Data Gateway

    SRD 100 Database for Simulation of Electron Spectra for Surface Analysis (SESSA) (PC database for purchase). This database has been designed to facilitate quantitative interpretation of Auger-electron and X-ray photoelectron spectra and to improve the accuracy of quantitation in routine analysis. The database contains all physical data needed to perform quantitative interpretation of an electron spectrum for a thin-film specimen of given composition. A simulation module provides an estimate of peak intensities as well as the energy and angular distributions of the emitted electron flux.

  4. A Model for the Estimation of Hepatic Insulin Extraction After a Meal.

    PubMed

    Piccinini, Francesca; Dalla Man, Chiara; Vella, Adrian; Cobelli, Claudio

    2016-09-01

    Quantitative assessment of hepatic insulin extraction (HE) after an oral glucose challenge, e.g., a meal, is important to understand the regulation of carbohydrate metabolism. The aim of the current study is to develop a model for estimating HE. Nine different models, of increasing complexity, were tested on data of 204 normal subjects, who underwent a mixed meal tolerance test, with frequent measurement of plasma glucose, insulin, and C-peptide concentrations. All these models included a two-compartment model of C-peptide kinetics, an insulin secretion model, a compartmental model of insulin kinetics (with number of compartments ranging from one to three), and different HE descriptions, depending on plasma glucose and insulin. Model performances were compared on the basis of data fit, precision of parameter estimates, and parsimony criteria. The three-compartment model of insulin kinetics, coupled with HE depending on glucose concentration, showed the best fit and a good ability to precisely estimate the parameters. In addition, the model calculates basal and total indices of HE (HEb and HEtot, respectively), and provides an index of HE sensitivity to glucose (SGHE). A new physiologically based HE model has been developed, which allows an improved quantitative description of glucose regulation. The use of the new model provides an in-depth description of insulin kinetics, thus enabling a better understanding of a given subject's metabolic state.

  5. Kinetics of Poliovirus Shedding following Oral Vaccination as Measured by Quantitative Reverse Transcription-PCR versus Culture

    PubMed Central

    Begum, Sharmin; Uddin, Md Jashim; Platts-Mills, James A.; Liu, Jie; Kirkpatrick, Beth D.; Chowdhury, Anwarul H.; Jamil, Khondoker M.; Haque, Rashidul; Petri, William A.; Houpt, Eric R.

    2014-01-01

    Amid polio eradication efforts, detection of oral polio vaccine (OPV) virus in stool samples can provide information about rates of mucosal immunity and allow estimation of the poliovirus reservoir. We developed a multiplex one-step quantitative reverse transcription-PCR (qRT-PCR) assay for detection of OPV Sabin strains 1, 2, and 3 directly in stool samples with an external control to normalize samples for viral quantity and compared its performance with that of viral culture. We applied the assay to samples from infants in Dhaka, Bangladesh, after the administration of trivalent OPV (tOPV) at weeks 14 and 52 of life (on days 0 [pre-OPV], +4, +11, +18, and +25 relative to vaccination). When 1,350 stool samples were tested, the sensitivity and specificity of the quantitative PCR (qPCR) assay were 89 and 91% compared with culture. A quantitative relationship between culture+/qPCR+ and culture−/qPCR+ stool samples was observed. The kinetics of shedding revealed by qPCR and culture were similar. qPCR quantitative cutoffs based on the day +11 or +18 stool samples could be used to identify the culture-positive shedders, as well as the long-duration or high-frequency shedders. Interestingly, qPCR revealed that a small minority (7%) of infants contributed the vast majority (93 to 100%) of the total estimated viral excretion across all subtypes at each time point. This qPCR assay for OPV can simply and quantitatively detect all three Sabin strains directly in stool samples to approximate shedding both qualitatively and quantitatively. PMID:25378579
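
    For readers unfamiliar with the sensitivity/specificity comparison against culture, the following minimal sketch shows the 2x2-table arithmetic; the counts are illustrative and chosen only to land near the reported 89% and 91%, not the study's actual data.

      # Minimal sketch of sensitivity/specificity of qPCR against culture
      # from a 2x2 table; the counts below are hypothetical.
      def sens_spec(tp, fn, tn, fp):
          sensitivity = tp / (tp + fn)   # qPCR-positive among culture-positive
          specificity = tn / (tn + fp)   # qPCR-negative among culture-negative
          return sensitivity, specificity

      se, sp = sens_spec(tp=445, fn=55, tn=773, fp=77)
      print(f"sensitivity = {se:.0%}, specificity = {sp:.0%}")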

  6. Apollo Video Photogrammetry Estimation Of Plume Impingement Effects

    NASA Technical Reports Server (NTRS)

    Immer, Christopher; Lane, John; Metzger, Philip T.; Clements, Sandra

    2008-01-01

    The Constellation Project's planned return to the moon requires numerous landings at the same site. Since the top few centimeters are loosely packed regolith, plume impingement from the lander ejects the granular material at high velocities. Much work is needed to understand the physics of plume impingement during landing in order to protect hardware surrounding the landing sites. While mostly qualitative in nature, the Apollo Lunar Module landing videos can provide a wealth of quantitative information using modern photogrammetry techniques. The authors have used the digitized videos to quantify plume impingement effects of the landing exhaust on the lunar surface. The dust ejection angle from the plume is estimated at 1-3 degrees. The lofted particle density is estimated at 10(exp 8)-10(exp 13) particles per cubic meter. Additionally, evidence for ejection of large 10-15 cm sized objects and a dependence of ejection angle on thrust are presented. Further work is ongoing to continue quantitative analysis of the landing videos.

  7. Optical contrast and refractive index of natural van der Waals heterostructure nanosheets of franckeite

    PubMed Central

    Gant, Patricia; Ghasemi, Foad; Maeso, David; Munuera, Carmen; López-Elvira, Elena; Frisenda, Riccardo; De Lara, David Pérez; Rubio-Bollinger, Gabino; Garcia-Hernandez, Mar

    2017-01-01

    We study mechanically exfoliated nanosheets of franckeite by quantitative optical microscopy. The analysis of transmission-mode and epi-illumination-mode optical microscopy images provides a rapid method to estimate the thickness of the exfoliated flakes at first glance. A quantitative analysis of the optical contrast spectra by means of micro-reflectance allows one to determine the refractive index of franckeite over a broad range of the visible spectrum through a fit of the acquired spectra to a model based on the Fresnel law. PMID:29181292
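
    The Fresnel-law fit mentioned above can be sketched with a normal-incidence air/film/substrate reflectance model of the kind typically fitted to micro-reflectance contrast spectra; the flake thickness, complex refractive index, and substrate index below are illustrative assumptions, not measured values for franckeite.

      # Minimal sketch of a normal-incidence Fresnel model for the optical
      # contrast of a thin flake on a substrate. All optical constants and the
      # thickness are assumed values for illustration only.
      import numpy as np

      def reflectance_film(n_film, d_nm, wavelength_nm, n_sub):
          """Reflectance of air / film / semi-infinite substrate at normal incidence."""
          n0 = 1.0
          r01 = (n0 - n_film) / (n0 + n_film)
          r12 = (n_film - n_sub) / (n_film + n_sub)
          beta = 2.0 * np.pi * n_film * d_nm / wavelength_nm
          r = (r01 + r12 * np.exp(2j * beta)) / (1.0 + r01 * r12 * np.exp(2j * beta))
          return np.abs(r) ** 2

      def reflectance_bare(n_sub):
          return np.abs((1.0 - n_sub) / (1.0 + n_sub)) ** 2

      wl = np.linspace(450, 750, 7)       # visible wavelengths (nm)
      n_flake = 3.0 + 0.4j                # assumed complex refractive index
      R_f = reflectance_film(n_flake, d_nm=20.0, wavelength_nm=wl, n_sub=1.46)
      R_s = reflectance_bare(1.46)
      contrast = (R_f - R_s) / (R_f + R_s)   # optical contrast vs bare substrate
      print(np.round(contrast, 3))

    In a fit such as the one described in the record, the complex refractive index (and, if unknown, the thickness) would be treated as free parameters adjusted to match the measured contrast spectra.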

  8. Identifying critical life stage transitions for biological control of long-lived perennial Vincetoxicum species

    USDA-ARS?s Scientific Manuscript database

    Demographic matrix modeling of invasive plant populations can be a powerful tool to identify key life stage transitions for targeted disruption in order to cause population decline. This approach can provide quantitative estimates of reductions in select vital rates needed to reduce population growt...
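
    A minimal sketch of the matrix-model idea follows: the dominant eigenvalue of a stage-structured projection matrix gives the population growth rate, and scaling down a targeted transition shows how much disruption is needed to push growth below replacement. The matrix entries are hypothetical, not estimates for Vincetoxicum.

      # Minimal sketch: growth rate (dominant eigenvalue) of a stage-structured
      # projection matrix, before and after disrupting one transition.
      import numpy as np

      A = np.array([[0.10, 0.00, 4.50],   # seedling, juvenile, adult stages
                    [0.20, 0.35, 0.00],
                    [0.00, 0.30, 0.90]])

      def growth_rate(M):
          return np.max(np.real(np.linalg.eigvals(M)))

      print(f"lambda (baseline)  = {growth_rate(A):.3f}")
      B = A.copy()
      B[2, 1] *= 0.4                      # disrupt juvenile-to-adult transition by 60%
      print(f"lambda (disrupted) = {growth_rate(B):.3f}")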

  9. Media Violence, Antisocial Behavior, and the Social Consequences of Small Effects.

    ERIC Educational Resources Information Center

    Rosenthal, Robert

    1986-01-01

    Discusses research on media violence and antisocial behavior. Provides quantitative estimates for predicting: (1) adult antisocial behavior from childhood antisocial behavior; (2) current antisocial behavior from current exposure to media violence; (3) subsequent antisocial behavior from earlier exposure to media violence; and (4) how social…

  10. Direct Regularized Estimation of Retinal Vascular Oxygen Tension Based on an Experimental Model

    PubMed Central

    Yildirim, Isa; Ansari, Rashid; Yetik, I. Samil; Shahidi, Mahnaz

    2014-01-01

    Phosphorescence lifetime imaging is commonly used to generate oxygen tension maps of retinal blood vessels by the classical least squares (LS) estimation method. A spatial regularization method was later proposed and provided improved results. However, both methods obtain oxygen tension values from the estimates of intermediate variables, and do not yield an optimum estimate of oxygen tension values, due to their nonlinear dependence on the ratio of intermediate variables. In this paper, we provide an improved solution by devising a regularized direct least squares (RDLS) method that exploits available knowledge in studies that provide models of oxygen tension in retinal arteries and veins, unlike the earlier regularized LS approach where knowledge about intermediate variables is limited. The performance of the proposed RDLS method is evaluated by investigating and comparing the bias, variance, oxygen tension maps, 1-D profiles of arterial oxygen tension, and mean absolute error with those of earlier methods, and its superior performance both quantitatively and qualitatively is demonstrated. PMID:23732915

  11. Extended Kalman Filter framework for forecasting shoreline evolution

    USGS Publications Warehouse

    Long, Joseph; Plant, Nathaniel G.

    2012-01-01

    A shoreline change model incorporating both long- and short-term evolution is integrated into a data assimilation framework that uses sparse observations to generate an updated forecast of shoreline position and to estimate unobserved geophysical variables and model parameters. Application of the assimilation algorithm provides quantitative statistical estimates of combined model-data forecast uncertainty which is crucial for developing hazard vulnerability assessments, evaluation of prediction skill, and identifying future data collection needs. Significant attention is given to the estimation of four non-observable parameter values and separating two scales of shoreline evolution using only one observable morphological quantity (i.e. shoreline position).
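
    The assimilation idea can be sketched with a generic linear Kalman filter that tracks shoreline position and a slow trend, updates with sparse noisy observations, and carries forecast uncertainty; this is not the authors' shoreline-change model, and all values are illustrative.

      # Minimal sketch of sparse-data assimilation with a linear Kalman filter.
      # State is [shoreline position, trend]; only position is observed.
      import numpy as np

      dt = 1.0                                   # time step (e.g., months)
      F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
      H = np.array([[1.0, 0.0]])                 # observation operator
      Q = np.diag([0.05, 0.001])                 # process noise covariance
      R = np.array([[4.0]])                      # observation noise variance

      x = np.array([[0.0], [0.1]])               # initial state
      P = np.diag([10.0, 1.0])                   # initial uncertainty

      obs = {5: 1.2, 12: 2.5, 20: 3.1}           # sparse observations (step -> position)
      for k in range(1, 25):
          x = F @ x                              # forecast step
          P = F @ P @ F.T + Q
          if k in obs:                           # update only when data exist
              y = np.array([[obs[k]]]) - H @ x
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)
              x = x + K @ y
              P = (np.eye(2) - K @ H) @ P
      print("forecast position:", float(x[0, 0]), "+/-", float(np.sqrt(P[0, 0])))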

  12. Science advancements key to increasing management value of life stage monitoring networks for endangered Sacramento River winter-run Chinook salmon in California

    USGS Publications Warehouse

    Johnson, Rachel C.; Windell, Sean; Brandes, Patricia L.; Conrad, J. Louise; Ferguson, John; Goertler, Pascale A. L.; Harvey, Brett N.; Heublein, Joseph; Isreal, Joshua A.; Kratville, Daniel W.; Kirsch, Joseph E.; Perry, Russell W.; Pisciotto, Joseph; Poytress, William R.; Reece, Kevin; Swart, Brycen G.

    2017-01-01

    A robust monitoring network that provides quantitative information about the status of imperiled species at key life stages and geographic locations over time is fundamental for sustainable management of fisheries resources. For anadromous species, management actions in one geographic domain can substantially affect abundance of subsequent life stages that span broad geographic regions. Quantitative metrics (e.g., abundance, movement, survival, life history diversity, and condition) at multiple life stages are needed to inform how management actions (e.g., hatcheries, harvest, hydrology, and habitat restoration) influence salmon population dynamics. The existing monitoring network for endangered Sacramento River winterrun Chinook Salmon (SRWRC, Oncorhynchus tshawytscha) in California’s Central Valley was compared to conceptual models developed for each life stage and geographic region of the life cycle to identify relevant SRWRC metrics. We concluded that the current monitoring network was insufficient to diagnose when (life stage) and where (geographic domain) chronic or episodic reductions in SRWRC cohorts occur, precluding within- and among-year comparisons. The strongest quantitative data exist in the Upper Sacramento River, where abundance estimates are generated for adult spawners and emigrating juveniles. However, once SRWRC leave the upper river, our knowledge of their identity, abundance, and condition diminishes, despite the juvenile monitoring enterprise. We identified six system-wide recommended actions to strengthen the value of data generated from the existing monitoring network to assess resource management actions: (1) incorporate genetic run identification; (2) develop juvenile abundance estimates; (3) collect data for life history diversity metrics at multiple life stages; (4) expand and enhance real-time fish survival and movement monitoring; (5) collect fish condition data; and (6) provide timely public access to monitoring data in open data formats. To illustrate how updated technologies can enhance the existing monitoring to provide quantitative data on SRWRC, we provide examples of how each recommendation can address specific management issues.

  13. Effects of personalized colorectal cancer risk information on laypersons' interest in colorectal cancer screening: The importance of individual differences.

    PubMed

    Han, Paul K J; Duarte, Christine W; Daggett, Susannah; Siewers, Andrea; Killam, Bill; Smith, Kahsi A; Freedman, Andrew N

    2015-10-01

    To evaluate how personalized quantitative colorectal cancer (CRC) risk information affects laypersons' interest in CRC screening, and to explore factors influencing these effects. An online pre-post experiment was conducted in which a convenience sample (N=578) of laypersons, aged >50, were provided quantitative personalized estimates of lifetime CRC risk, calculated by the National Cancer Institute Colorectal Cancer Risk Assessment Tool (CCRAT). Self-reported interest in CRC screening was measured immediately before and after CCRAT use; sociodemographic characteristics and prior CRC screening history were also assessed. Multivariable analyses assessed participants' change in interest in screening, and subgroup differences in this change. Personalized CRC risk information had no overall effect on CRC screening interest, but significant subgroup differences were observed. Change in screening interest was greater among individuals with recent screening (p=.015), higher model-estimated cancer risk (p=.0002), and lower baseline interest (p<.0001), with individuals at highest baseline interest demonstrating negative (not neutral) change in interest. Effects of quantitative personalized CRC risk information on laypersons' interest in CRC screening differ among individuals depending on prior screening history, estimated cancer risk, and baseline screening interest. Personalized cancer risk information has personalized effects-increasing and decreasing screening interest in different individuals. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  14. Quantitative Oxygenation Venography from MRI Phase

    PubMed Central

    Fan, Audrey P.; Bilgic, Berkin; Gagnon, Louis; Witzel, Thomas; Bhat, Himanshu; Rosen, Bruce R.; Adalsteinsson, Elfar

    2014-01-01

    Purpose To demonstrate acquisition and processing methods for quantitative oxygenation venograms that map in vivo oxygen saturation (SvO2) along cerebral venous vasculature. Methods Regularized quantitative susceptibility mapping (QSM) is used to reconstruct susceptibility values and estimate SvO2 in veins. QSM reconstructions with ℓ1 and ℓ2 regularization are compared in numerical simulations of vessel structures with known magnetic susceptibility. Dual-echo, flow-compensated phase images are collected in three healthy volunteers to create QSM images. Bright veins in the susceptibility maps are vectorized and used to form a three-dimensional vascular mesh, or venogram, along which to display SvO2 values from QSM. Results Quantitative oxygenation venograms that map SvO2 along brain vessels of arbitrary orientation and geometry are shown in vivo. SvO2 values in major cerebral veins lie within the normal physiological range reported by 15O positron emission tomography. SvO2 from QSM is consistent with previous MR susceptometry methods for vessel segments oriented parallel to the main magnetic field. In vessel simulations, ℓ1 regularization results in less than 10% SvO2 absolute error across all vessel tilt orientations and provides more accurate SvO2 estimation than ℓ2 regularization. Conclusion The proposed analysis of susceptibility images enables reliable mapping of quantitative SvO2 along venograms and may facilitate clinical use of venous oxygenation imaging. PMID:24006229
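
    The conversion from a venous susceptibility value to SvO2 can be sketched with the standard relation delta_chi = delta_chi_do * Hct * (1 - SvO2); the susceptibility value, hematocrit, and constant below are illustrative assumptions rather than values from the study.

      # Minimal sketch: turn a vein-minus-tissue susceptibility (ppm, SI) from
      # QSM into SvO2. All input values are illustrative assumptions.
      def svo2_from_susceptibility(delta_chi_ppm, hct=0.42, delta_chi_do_ppm=3.39):
          """delta_chi_do_ppm: susceptibility of fully deoxygenated blood (ppm, SI)."""
          return 1.0 - delta_chi_ppm / (delta_chi_do_ppm * hct)

      print(f"SvO2 = {svo2_from_susceptibility(0.45):.2f}")   # ~0.68 for this example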

  15. A NOVEL TECHNIQUE FOR QUANTITATIVE ESTIMATION OF UPTAKE OF DIESEL EXHAUST PARTICLES BY LUNG CELLS

    EPA Science Inventory

    While airborne particulates like diesel exhaust particulates (DEP) exert significant toxicological effects on lungs, quantitative estimation of accumulation of DEP inside lung cells has not been reported due to a lack of an accurate and quantitative technique for this purpose. I...

  16. A method for energy window optimization for quantitative tasks that includes the effects of model-mismatch on bias: application to Y-90 bremsstrahlung SPECT imaging.

    PubMed

    Rong, Xing; Du, Yong; Frey, Eric C

    2012-06-21

    Quantitative Yttrium-90 ((90)Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging has shown great potential to provide reliable estimates of (90)Y activity distribution for targeted radionuclide therapy dosimetry applications. One factor that potentially affects the reliability of the activity estimates is the choice of the acquisition energy window. In contrast to imaging conventional gamma photon emitters where the acquisition energy windows are usually placed around photopeaks, there has been great variation in the choice of the acquisition energy window for (90)Y imaging due to the continuous and broad energy distribution of the bremsstrahlung photons. In quantitative imaging of conventional gamma photon emitters, previous methods for optimizing the acquisition energy window assumed unbiased estimators and used the variance in the estimates as a figure of merit (FOM). However, for situations, such as (90)Y imaging, where there are errors in the modeling of the image formation process used in the reconstruction, there will be bias in the activity estimates. In (90)Y bremsstrahlung imaging this will be especially important due to the high levels of scatter, multiple scatter, and collimator septal penetration and scatter. Variance alone is therefore not a complete measure of the reliability of the estimates, and hence not a complete FOM. To address this, we first aimed to develop a new method to optimize the energy window that accounts for both the bias due to model-mismatch and the variance of the activity estimates. We applied this method to optimize the acquisition energy window for quantitative (90)Y bremsstrahlung SPECT imaging in microsphere brachytherapy. Since absorbed dose is defined as the absorbed energy from the radiation per unit mass of tissue, in this new method we proposed a mass-weighted root mean squared error of the volume of interest (VOI) activity estimates as the FOM. To calculate this FOM, two analytical expressions were derived for calculating the bias due to model-mismatch and the variance of the VOI activity estimates, respectively. To obtain the optimal acquisition energy window for general situations of interest in clinical (90)Y microsphere imaging, we generated phantoms with multiple tumors of various sizes and various tumor-to-normal activity concentration ratios using a digital phantom that realistically simulates human anatomy, simulated (90)Y microsphere imaging with a clinical SPECT system and typical imaging parameters using a previously validated Monte Carlo simulation code, and used a previously proposed method for modeling the image degrading effects in quantitative SPECT reconstruction. The obtained optimal acquisition energy window was 100-160 keV. The values of the proposed FOM were much larger than the FOM taking into account only the variance of the activity estimates, thus demonstrating in our experiment that the bias of the activity estimates due to model-mismatch was a more important factor than the variance in terms of limiting the reliability of activity estimates.
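
    The bias-plus-variance figure of merit described above can be sketched as a weighted root mean squared error over volumes of interest; the per-VOI masses, biases, and standard deviations below are made up, and the inverse-mass weighting is one illustrative, dosimetry-oriented choice of weights.

      # Minimal sketch of a weighted RMSE figure of merit that combines bias
      # (e.g., from model-mismatch) and variance of VOI activity estimates.
      # All numbers and the weighting scheme are illustrative assumptions.
      import numpy as np

      mass = np.array([1800.0, 25.0, 8.0, 3.0])   # VOI masses (g): liver + tumors
      bias = np.array([0.04, 0.09, 0.15, 0.22])   # fractional bias per VOI
      std = np.array([0.02, 0.05, 0.10, 0.18])    # fractional std dev per VOI

      w = 1.0 / mass
      w = w / w.sum()                             # normalized inverse-mass weights
      fom = np.sqrt(np.sum(w * (bias**2 + std**2)))   # weighted RMSE
      print(f"FOM = {fom:.3f}")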

  17. The Global Precipitation Mission

    NASA Technical Reports Server (NTRS)

    Braun, Scott; Kummerow, Christian

    2000-01-01

    The Global Precipitation Mission (GPM), expected to begin around 2006, is a follow-up to the Tropical Rainfall Measuring Mission (TRMM). Unlike TRMM, which primarily samples the tropics, GPM will sample both the tropics and mid-latitudes. The primary, or core, satellite will be a single, enhanced TRMM satellite that can quantify the 3-D spatial distributions of precipitation and its associated latent heat release. The core satellite will be complemented by a constellation of very small and inexpensive drones with passive microwave instruments that will sample the rainfall with sufficient frequency to be not only of climate interest, but also have local, short-term impacts by providing global rainfall coverage at approx. 3 h intervals. The data is expected to have substantial impact upon quantitative precipitation estimation/forecasting and data assimilation into global and mesoscale numerical models. Based upon previous studies of rainfall data assimilation, GPM is expected to lead to significant improvements in forecasts of extratropical and tropical cyclones. For example, GPM rainfall data can provide improved initialization of frontal systems over the Pacific and Atlantic Oceans. The purpose of this talk is to provide information about GPM to the USWRP (U.S. Weather Research Program) community and to discuss impacts on quantitative precipitation estimation/forecasting and data assimilation.

  18. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Technical Reports Server (NTRS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-01-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability would provide a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and user-provided fault trees and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface. We also describe, using an example, the stepwise process the interface follows.

  19. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Astrophysics Data System (ADS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-10-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability would provide a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and user-provided fault trees and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface. We also describe, using an example, the stepwise process the interface follows.

  20. Development of quantitative exposure data for a pooled exposure-response analysis of 10 silica cohorts.

    PubMed

    Mannetje, Andrea 't; Steenland, Kyle; Checkoway, Harvey; Koskela, Riitta-Sisko; Koponen, Matti; Attfield, Michael; Chen, Jingqiong; Hnizdo, Eva; DeKlerk, Nicholas; Dosemeci, Mustafa

    2002-08-01

    Comprehensive quantitative silica exposure estimates over time, measured in the same units across a number of cohorts, would make possible a pooled exposure-response analysis for lung cancer. Such an analysis would help clarify the continuing controversy regarding whether silica causes lung cancer. Existing quantitative exposure data for 10 silica-exposed cohorts were retrieved from the original investigators. Occupation- and time-specific exposure estimates were either adopted/adapted or developed for each cohort, and converted to milligram per cubic meter (mg/m(3)) respirable crystalline silica. Quantitative exposure assignments were typically based on a large number (thousands) of raw measurements, or otherwise consisted of exposure estimates by experts (for two cohorts). Median exposure level of the cohorts ranged between 0.04 and 0.59 mg/m(3) respirable crystalline silica. Exposure estimates were partially validated via their successful prediction of silicosis in these cohorts. Existing data were successfully adopted or modified to create comparable quantitative exposure estimates over time for 10 silica-exposed cohorts, permitting a pooled exposure-response analysis. The difficulties encountered in deriving common exposure estimates across cohorts are discussed. Copyright 2002 Wiley-Liss, Inc.

  1. Quantitative mapping of rainfall rates over the oceans utilizing Nimbus-5 ESMR data

    NASA Technical Reports Server (NTRS)

    Rao, M. S. V.; Abbott, W. V.

    1976-01-01

    The electrically scanning microwave radiometer (ESMR) data from the Nimbus 5 satellite was used to deduce estimates of precipitation amount over the oceans. An atlas of the global oceanic rainfall was prepared and the global rainfall maps analyzed and related to available ground truth information as well as to large scale processes in the atmosphere. It was concluded that the ESMR system provides the most reliable and direct approach yet known for the estimation of rainfall over sparsely documented, wide oceanic regions.

  2. The linearized multistage model and the future of quantitative risk assessment.

    PubMed

    Crump, K S

    1996-10-01

    The linearized multistage (LMS) model has for over 15 years been the default dose-response model used by the U.S. Environmental Protection Agency (USEPA) and other federal and state regulatory agencies in the United States for calculating quantitative estimates of low-dose carcinogenic risks from animal data. The LMS model is in essence a flexible statistical model that can describe both linear and non-linear dose-response patterns, and that produces an upper confidence bound on the linear low-dose slope of the dose-response curve. Unlike its namesake, the Armitage-Doll multistage model, the parameters of the LMS do not correspond to actual physiological phenomena. Thus the LMS is 'biological' only to the extent that the true biological dose response is linear at low dose and that low-dose slope is reflected in the experimental data. If the true dose response is non-linear the LMS upper bound may overestimate the true risk by many orders of magnitude. However, competing low-dose extrapolation models, including those derived from 'biologically-based models' that are capable of incorporating additional biological information, have not shown evidence to date of being able to produce quantitative estimates of low-dose risks that are any more accurate than those obtained from the LMS model. Further, even if these attempts were successful, the extent to which more accurate estimates of low-dose risks in a test animal species would translate into improved estimates of human risk is questionable. Thus, it does not appear possible at present to develop a quantitative approach that would be generally applicable and that would offer significant improvements upon the crude bounding estimates of the type provided by the LMS model. Draft USEPA guidelines for cancer risk assessment incorporate an approach similar to the LMS for carcinogens having a linear mode of action. However, under these guidelines quantitative estimates of low-dose risks would not be developed for carcinogens having a non-linear mode of action; instead dose-response modelling would be used in the experimental range to calculate an LED10* (a statistical lower bound on the dose corresponding to a 10% increase in risk), and safety factors would be applied to the LED10* to determine acceptable exposure levels for humans. This approach is very similar to the one presently used by USEPA for non-carcinogens. Rather than using one approach for carcinogens believed to have a linear mode of action and a different approach for all other health effects, it is suggested herein that it would be more appropriate to use an approach conceptually similar to the 'LED10*-safety factor' approach for all health effects, and not to routinely develop quantitative risk estimates from animal data.
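
    For readers unfamiliar with the multistage form that underlies the LMS model, the following sketch evaluates P(d) = 1 - exp(-(q0 + q1*d + q2*d^2)) and the corresponding extra risk over background; the coefficients are illustrative, not fitted values, and the LMS upper-bound calculation itself is not shown.

      # Minimal sketch of the multistage dose-response form and extra risk.
      # The q coefficients below are hypothetical.
      import numpy as np

      def p_tumor(d, q):
          """Multistage probability of response at dose d, q = (q0, q1, q2, ...)."""
          return 1.0 - np.exp(-np.polyval(q[::-1], d))   # q0 + q1*d + q2*d^2 + ...

      def extra_risk(d, q):
          p0 = p_tumor(0.0, q)
          return (p_tumor(d, q) - p0) / (1.0 - p0)

      q = (0.02, 0.15, 0.30)             # hypothetical q0, q1, q2
      for d in (0.001, 0.01, 0.1, 1.0):
          print(f"dose {d:6.3f}: extra risk = {extra_risk(d, q):.2e}")

    At low doses the extra risk is approximately q1*d, which is the linear low-dose behavior the LMS bound is built around.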

  3. Spacecraft Complexity Subfactors and Implications on Future Cost Growth

    NASA Technical Reports Server (NTRS)

    Leising, Charles J.; Wessen, Randii; Ellyin, Ray; Rosenberg, Leigh; Leising, Adam

    2013-01-01

    During the last ten years the Jet Propulsion Laboratory has used a set of cost-risk subfactors to independently estimate the magnitude of development risks that may not be covered in the high level cost models employed during early concept development. Within the last several years the Laboratory has also developed a scale of Concept Maturity Levels with associated criteria to quantitatively assess a concept's maturity. This latter effort has been helpful in determining whether a concept is mature enough for accurate costing but it does not provide any quantitative estimate of cost risk. Unfortunately today's missions are significantly more complex than when the original cost-risk subfactors were first formulated. Risks associated with complex missions are not being adequately evaluated and future cost growth is being underestimated. The risk subfactor process needed to be updated.

  4. Providing Quantitative Information and a Nudge to Undergo Stool Testing in a Colorectal Cancer Screening Decision Aid: A Randomized Clinical Trial.

    PubMed

    Schwartz, Peter H; Perkins, Susan M; Schmidt, Karen K; Muriello, Paul F; Althouse, Sandra; Rawl, Susan M

    2017-08-01

    Guidelines recommend that patient decision aids should provide quantitative information about probabilities of potential outcomes, but the impact of this information is unknown. Behavioral economics suggests that patients confused by quantitative information could benefit from a "nudge" towards one option. We conducted a pilot randomized trial to estimate the effect sizes of presenting quantitative information and a nudge. Primary care patients (n = 213) eligible for colorectal cancer screening viewed basic screening information and were randomized to view (a) quantitative information (quantitative module), (b) a nudge towards stool testing with the fecal immunochemical test (FIT) (nudge module), (c) neither a nor b, or (d) both a and b. Outcome measures were perceived colorectal cancer risk, screening intent, preferred test, and decision conflict, measured before and after viewing the decision aid, and screening behavior at 6 months. Patients viewing the quantitative module were more likely to be screened than those who did not (P = 0.012). Patients viewing the nudge module had a greater increase in perceived colorectal cancer risk than those who did not (P = 0.041). Those viewing the quantitative module had a smaller increase in perceived risk than those who did not (P = 0.046), and the effect was moderated by numeracy. Among patients with high numeracy who did not view the nudge module, those who viewed the quantitative module had a greater increase in intent to undergo FIT (P = 0.028) than did those who did not. The limitations of this study were the limited sample size and single healthcare system. Adding quantitative information to a decision aid increased uptake of colorectal cancer screening, while adding a nudge to undergo FIT did not increase uptake. Further research on quantitative information in decision aids is warranted.

  5. Global estimates of shark catches using trade records from commercial markets.

    PubMed

    Clarke, Shelley C; McAllister, Murdoch K; Milner-Gulland, E J; Kirkwood, G P; Michielsens, Catherine G J; Agnew, David J; Pikitch, Ellen K; Nakano, Hideki; Shivji, Mahmood S

    2006-10-01

    Despite growing concerns about overexploitation of sharks, lack of accurate, species-specific harvest data often hampers quantitative stock assessment. In such cases, trade studies can provide insights into exploitation unavailable from traditional monitoring. We applied Bayesian statistical methods to trade data in combination with genetic identification to estimate, by species, the annual number of globally traded shark fins, the most commercially valuable product from a group of species often unrecorded in harvest statistics. Our results provide the first fishery-independent estimate of the scale of shark catches worldwide and indicate that shark biomass in the fin trade is three to four times higher than shark catch figures reported in the only global database. Comparison of our estimates to approximated stock assessment reference points for one of the most commonly traded species, blue shark, suggests that current trade volumes in numbers of sharks are close to or possibly exceeding the maximum sustainable yield levels.

  6. Airborne electromagnetic mapping of the base of aquifer in areas of western Nebraska

    USGS Publications Warehouse

    Abraham, Jared D.; Cannia, James C.; Bedrosian, Paul A.; Johnson, Michaela R.; Ball, Lyndsay B.; Sibray, Steven S.

    2012-01-01

    Airborne geophysical surveys of selected areas of the North and South Platte River valleys of Nebraska, including Lodgepole Creek valley, collected data to map aquifers and bedrock topography and thus improve the understanding of groundwater/surface-water relationships to be used in water-management decisions. Frequency-domain helicopter electromagnetic surveys, using a unique survey flight-line design, collected resistivity data that can be related to lithologic information for refinement of groundwater model inputs. To make the geophysical data useful to multidimensional groundwater models, numerical inversion converted measured data into a depth-dependent subsurface resistivity model. The inverted resistivity model, along with sensitivity analyses and test-hole information, is used to identify hydrogeologic features such as bedrock highs and paleochannels, to improve estimates of groundwater storage. The two- and three-dimensional interpretations provide the groundwater modeler with a high-resolution hydrogeologic framework and a quantitative estimate of framework uncertainty. The new hydrogeologic frameworks improve understanding of the flow-path orientation by refining the location of paleochannels and associated base-of-aquifer highs. These interpretations provide resource managers with high-resolution hydrogeologic frameworks and quantitative estimates of framework uncertainty. The improved base-of-aquifer configuration represents the hydrogeology at a level of detail not achievable with previously available data.

  7. LipidQC: Method Validation Tool for Visual Comparison to SRM 1950 Using NIST Interlaboratory Comparison Exercise Lipid Consensus Mean Estimate Values.

    PubMed

    Ulmer, Candice Z; Ragland, Jared M; Koelmel, Jeremy P; Heckert, Alan; Jones, Christina M; Garrett, Timothy J; Yost, Richard A; Bowden, John A

    2017-12-19

    As advances in analytical separation techniques, mass spectrometry instrumentation, and data processing platforms continue to spur growth in the lipidomics field, more structurally unique lipid species are detected and annotated. The lipidomics community is in need of benchmark reference values to assess the validity of various lipidomics workflows in providing accurate quantitative measurements across the diverse lipidome. LipidQC addresses the harmonization challenge in lipid quantitation by providing a semiautomated process, independent of analytical platform, for visual comparison of experimental results of National Institute of Standards and Technology Standard Reference Material (SRM) 1950, "Metabolites in Frozen Human Plasma", against benchmark consensus mean concentrations derived from the NIST Lipidomics Interlaboratory Comparison Exercise.

  8. Estimates of Radiation Effects on Cancer Risks in the Mayak Worker, Techa River and Atomic Bomb Survivor Studies.

    PubMed

    Preston, Dale L; Sokolnikov, Mikhail E; Krestinina, Lyudmila Yu; Stram, Daniel O

    2017-04-01

    For almost 50 y, the Life Span Study cohort of atomic bomb survivor studies has been the primary source of the quantitative estimates of cancer and non-cancer risks that form the basis of international radiation protection standards. However, the long-term follow-up and extensive individual dose reconstruction for the Russian Mayak worker cohort (MWC) and Techa River cohort (TRC) are providing quantitative information about radiation effects on cancer risks that complements the atomic bomb survivor-based risk estimates. The MWC, which includes ~26 000 men and women who began working at Mayak between 1948 and 1982, is the primary source for estimates of the effects of plutonium on cancer risks and also provides information on the effects of low-dose rate external gamma exposures. The TRC consists of ~30 000 men and women of all ages who received low-dose-rate, low-dose exposures as a consequence of Mayak's release of radioactive material into the Techa River. The TRC data are of interest because the exposures are broadly similar to those experienced by populations exposed as a consequence of nuclear accidents such as Chernobyl. In this presentation, we describe the strengths and limitations of these three cohorts, outline and compare recent solid cancer and leukemia risk estimates, and discuss why information from the Mayak and Techa River studies might play a role in the development and refinement of the radiation risk estimates that form the basis for radiation protection standards. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  9. Solubility advantage of amorphous pharmaceuticals: II. Application of quantitative thermodynamic relationships for prediction of solubility enhancement in structurally diverse insoluble pharmaceuticals.

    PubMed

    Murdande, Sharad B; Pikal, Michael J; Shanker, Ravi M; Bogner, Robin H

    2010-12-01

    The aim was to quantitatively assess the solubility advantage of amorphous forms of nine insoluble drugs with a wide range of physico-chemical properties utilizing a previously reported thermodynamic approach. Thermal properties of amorphous and crystalline forms of drugs were measured using modulated differential scanning calorimetry. Equilibrium moisture sorption uptake by amorphous drugs was measured by a gravimetric moisture sorption analyzer, and ionization constants were determined from the pH-solubility profiles. Solubilities of crystalline and amorphous forms of drugs were measured in de-ionized water at 25°C. Polarized microscopy was used to provide qualitative information about the crystallization of amorphous drug in solution during solubility measurement. For three of the nine compounds, the estimated solubility based on thermodynamic considerations was within two-fold of the experimental measurement. For one compound, the estimated solubility enhancement was lower than the experimental value, likely due to extensive ionization in solution and hence its sensitivity to error in pKa measurement. For the remaining five compounds, estimated solubility was about 4- to 53-fold higher than experimental results. In all cases where the theoretical solubility estimates were significantly higher, it was observed that the amorphous drug crystallized rapidly during the experimental determination of solubility, thus preventing an accurate experimental assessment of solubility advantage. It has been demonstrated that the theoretical approach does provide an accurate estimate of the maximum solubility enhancement by an amorphous drug relative to its crystalline form for structurally diverse insoluble drugs when recrystallization during dissolution is minimal.
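
    The core of the thermodynamic estimate can be sketched with the Hoffman approximation for the crystalline-to-amorphous free-energy difference, ignoring the moisture and ionization corrections the authors also apply; the heat of fusion and melting temperature below are illustrative, not values from the paper.

      # Minimal sketch of the amorphous-to-crystalline solubility ratio from the
      # Hoffman approximation. Thermal properties are illustrative assumptions.
      import math

      def solubility_ratio(delta_hf_j_per_mol, tm_k, t_k=298.15):
          R = 8.314                                              # J / (mol K)
          dG = delta_hf_j_per_mol * (tm_k - t_k) * t_k / tm_k**2  # Hoffman equation
          return math.exp(dG / (R * t_k))

      print(f"estimated ratio = {solubility_ratio(28_000, 450.0):.1f}")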

  10. Methodological factors affecting joint moments estimation in clinical gait analysis: a systematic review.

    PubMed

    Camomilla, Valentina; Cereatti, Andrea; Cutti, Andrea Giovanni; Fantozzi, Silvia; Stagni, Rita; Vannozzi, Giuseppe

    2017-08-18

    Quantitative gait analysis can provide a description of joint kinematics and dynamics, and it is recognized as a clinically useful tool for functional assessment, diagnosis and intervention planning. Clinically interpretable parameters are estimated from quantitative measures (i.e. ground reaction forces, skin marker trajectories, etc.) through biomechanical modelling. In particular, the estimation of joint moments during motion is grounded on several modelling assumptions: (1) body segmental and joint kinematics is derived from the trajectories of markers and by modelling the human body as a kinematic chain; (2) joint resultant (net) loads are, usually, derived from force plate measurements through a model of segmental dynamics. Therefore, both measurement errors and modelling assumptions can affect the results, to an extent that also depends on the characteristics of the motor task analysed (i.e. gait speed). Errors affecting the trajectories of joint centres, the orientation of joint functional axes, the joint angular velocities, the accuracy of inertial parameters and force measurements (concurring to the definition of the dynamic model), can weigh differently in the estimation of clinically interpretable joint moments. Numerous studies addressed all these methodological aspects separately, but a critical analysis of how these aspects may affect the clinical interpretation of joint dynamics is still missing. This article aims at filling this gap through a systematic review of the literature, conducted on Web of Science, Scopus and PubMed. The final objective is hence to provide clear take-home messages to guide laboratories in the estimation of joint moments for the clinical practice.

  11. Construction of Genetically Engineered Streptococcus gordonii Strains to Provide Control in QPCR Assays for Assessing Microbiological Quality in Recreational Water.

    EPA Science Inventory

    Quantitative PCR (QPCR) methods for beach monitoring by estimating abundance of Enterococcus spp. in recreational waters use internal, positive controls which address only the amplification of target DNA. In this study two internal, positive controls were developed to control for...

  12. Identifying influences on model uncertainty: an application using a forest carbon budget model

    Treesearch

    James E. Smith; Linda S. Heath

    2001-01-01

    Uncertainty is an important consideration for both developers and users of environmental simulation models. Establishing quantitative estimates of uncertainty for deterministic models can be difficult when the underlying bases for such information are scarce. We demonstrate an application of probabilistic uncertainty analysis that provides for refinements in...

  13. 77 FR 66468 - Agency Forms Undergoing Paperwork Reduction Act Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-05

    ... groups, group discussions, and surveys. Depending on the specific purpose, data collection methods may be conducted either in-person, by telephone, on paper, or online. Data may be collected in quantitative and/or..., including screenshots of web-based surveys, in the statement provided to OMB. DGMQ estimates that 18,720...

  14. Factors influencing stream fish recovery following a large-scale disturbance

    Treesearch

    William E. Ensign; Angermeier Leftwich; C. Andrew Dolloff

    1997-01-01

    The authors examined fish distribution and abundance in erosional habitat units in South Fork Roanoke River, VA, following a fish kill by using a reachwide sampling approach for 3 species and a representative-reach sampling approach for 10 species. Qualitative (presence-absence) and quantitative (relative abundance) estimates of distribution and abundance provided...

  15. Estimation of the genome sizes of the chigger mites Leptotrombidium pallidum and Leptotrombidium scutellare based on quantitative PCR and k-mer analysis

    PubMed Central

    2014-01-01

    Background Leptotrombidium pallidum and Leptotrombidium scutellare are the major vector mites for Orientia tsutsugamushi, the causative agent of scrub typhus. Before these organisms can be subjected to whole-genome sequencing, it is necessary to estimate their genome sizes to obtain basic information for establishing the strategies that should be used for genome sequencing and assembly. Method The genome sizes of L. pallidum and L. scutellare were estimated by a method based on quantitative real-time PCR. In addition, a k-mer analysis of the whole-genome sequences obtained through Illumina sequencing was conducted to verify the mutual compatibility and reliability of the results. Results The genome sizes estimated using qPCR were 191 ± 7 Mb for L. pallidum and 262 ± 13 Mb for L. scutellare. The k-mer analysis-based genome lengths were estimated to be 175 Mb for L. pallidum and 286 Mb for L. scutellare. The estimates from these two independent methods were mutually complementary and within a similar range to those of other Acariform mites. Conclusions The estimation method based on qPCR appears to be a useful alternative when the standard methods, such as flow cytometry, are impractical. The relatively small estimated genome sizes should facilitate whole-genome analysis, which could contribute to our understanding of Arachnida genome evolution and provide key information for scrub typhus prevention and mite vector competence. PMID:24947244
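
    The k-mer based estimate mentioned above follows a standard recipe: divide the total number of counted k-mers by the depth of the main peak of the k-mer depth histogram. The histogram in this sketch is synthetic and the resulting size is purely illustrative.

      # Minimal sketch of a k-mer based genome-size estimate from a synthetic
      # k-mer depth histogram (main peak plus low-depth error k-mers).
      import numpy as np

      depth = np.arange(1, 101)                                   # k-mer depth bins
      counts = (1e7 * np.exp(-0.5 * ((depth - 38) / 6.0) ** 2)    # main peak near 38x
                + 5e7 * np.exp(-depth / 2.0))                     # error k-mers at low depth
      usable = depth >= 10                                        # drop low-depth error k-mers
      peak_depth = depth[usable][np.argmax(counts[usable])]
      total_kmers = np.sum(depth[usable] * counts[usable])
      genome_size = total_kmers / peak_depth
      print(f"peak depth = {peak_depth}x, genome size ~ {genome_size / 1e6:.0f} Mb")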

  16. TOXNET: Toxicology Data Network

    MedlinePlus

    ... 4. Supporting Data for Carcinogenicity; II.B. Quantitative Estimate of Carcinogenic Risk from Oral Exposure; ... of Confidence (Carcinogenicity, Oral Exposure); II.C. Quantitative Estimate of Carcinogenic Risk from Inhalation Exposure ...

  17. A collimator optimization method for quantitative imaging: application to Y-90 bremsstrahlung SPECT.

    PubMed

    Rong, Xing; Frey, Eric C

    2013-08-01

    Post-therapy quantitative 90Y bremsstrahlung single photon emission computed tomography (SPECT) has shown great potential to provide reliable activity estimates, which are essential for dose verification. Typically 90Y imaging is performed with high- or medium-energy collimators. However, the energy spectrum of 90Y bremsstrahlung photons is substantially different from what is typical for these collimators. In addition, dosimetry requires quantitative images, and collimators are not typically optimized for such tasks. Optimizing a collimator for 90Y imaging is both novel and potentially important. Conventional optimization methods are not appropriate for 90Y bremsstrahlung photons, which have a continuous and broad energy distribution. In this work, the authors developed a parallel-hole collimator optimization method for quantitative tasks that is particularly applicable to radionuclides with complex emission energy spectra. The authors applied the proposed method to develop an optimal collimator for quantitative 90Y bremsstrahlung SPECT in the context of microsphere radioembolization. To account for the effects of the collimator on both the bias and the variance of the activity estimates, the authors used the root mean squared error (RMSE) of the volume of interest activity estimates as the figure of merit (FOM). In the FOM, the bias due to the null space of the image formation process was taken into account. The RMSE was weighted by the inverse mass to reflect the application to dosimetry; for a different application, more relevant weighting could easily be adopted. The authors proposed a parameterization for the collimator that facilitates the incorporation of the important factors (geometric sensitivity, geometric resolution, and septal penetration fraction) determining collimator performance, while keeping the number of free parameters describing the collimator small (i.e., two parameters). To make the optimization results for quantitative 90Y bremsstrahlung SPECT more general, the authors simulated multiple tumors of various sizes in the liver. The authors realistically simulated human anatomy using a digital phantom and the image formation process using a previously validated and computationally efficient method for modeling the image-degrading effects including object scatter, attenuation, and the full collimator-detector response (CDR). The scatter kernels and CDR function tables used in the modeling method were generated using a previously validated Monte Carlo simulation code. The hole length, hole diameter, and septal thickness of the obtained optimal collimator were 84, 3.5, and 1.4 mm, respectively. Compared to a commercial high-energy general-purpose collimator, the optimal collimator improved the resolution and FOM by 27% and 18%, respectively. The proposed collimator optimization method may be useful for improving quantitative SPECT imaging for radionuclides with complex energy spectra. The obtained optimal collimator provided a substantial improvement in quantitative performance for the microsphere radioembolization task considered.

  18. Biomarkers and Surrogate Endpoints in Uveitis: The Impact of Quantitative Imaging.

    PubMed

    Denniston, Alastair K; Keane, Pearse A; Srivastava, Sunil K

    2017-05-01

    Uveitis is a major cause of sight loss across the world. The reliable assessment of intraocular inflammation in uveitis ('disease activity') is essential in order to score disease severity and response to treatment. In this review, we describe how 'quantitative imaging', the approach of using automated analysis and measurement algorithms across both standard and emerging imaging modalities, can develop objective instrument-based measures of disease activity. This is a narrative review based on searches of the current world literature using terms related to quantitative imaging techniques in uveitis, supplemented by clinical trial registry data, and expert knowledge of surrogate endpoints and outcome measures in ophthalmology. Current measures of disease activity are largely based on subjective clinical estimation, and are relatively insensitive, with poor discrimination and reliability. The development of quantitative imaging in uveitis is most established in the use of optical coherence tomographic (OCT) measurement of central macular thickness (CMT) to measure severity of macular edema (ME). The transformative effect of CMT in clinical assessment of patients with ME provides a paradigm for the development and impact of other forms of quantitative imaging. Quantitative imaging approaches are now being developed and validated for other key inflammatory parameters such as anterior chamber cells, vitreous haze, retinovascular leakage, and chorioretinal infiltrates. As new forms of quantitative imaging in uveitis are proposed, the uveitis community will need to evaluate these tools against the current subjective clinical estimates and reach a new consensus for how disease activity in uveitis should be measured. The development, validation, and adoption of sensitive and discriminatory measures of disease activity is an unmet need that has the potential to transform both drug development and routine clinical care for the patient with uveitis.

  19. A quantitative estimate of schema abnormality in socially anxious and non-anxious individuals.

    PubMed

    Wenzel, Amy; Brendle, Jennifer R; Kerr, Patrick L; Purath, Donna; Ferraro, F Richard

    2007-01-01

    Although cognitive theories of anxiety suggest that anxious individuals are characterized by abnormal threat-relevant schemas, few empirical studies have estimated the nature of these cognitive structures using quantitative methods that lend themselves to inferential statistical analysis. In the present study, socially anxious (n = 55) and non-anxious (n = 62) participants completed 3 Q-Sort tasks to assess their knowledge of events that commonly occur in social or evaluative scenarios. Participants either sorted events according to how commonly they personally believe the events occur (i.e. "self" condition), or to how commonly they estimate that most people believe they occur (i.e. "other" condition). Participants' individual Q-Sorts were correlated with mean sorts obtained from a normative sample to obtain an estimate of schema abnormality, with lower correlations representing greater levels of abnormality. Relative to non-anxious participants, socially anxious participants' sorts were less strongly associated with sorts of the normative sample, particularly in the "self" condition, although secondary analyses suggest that some significant results might be explained, in part, by depression and experience with the scenarios. These results provide empirical support for the theoretical notion that threat-relevant self-schemas of anxious individuals are characterized by some degree of abnormality.
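
    The abnormality index described above is simply the correlation between an individual's Q-Sort and the mean sort of the normative sample; the sketch below uses hypothetical sort values.

      # Minimal sketch of the Q-Sort "schema abnormality" index: correlate one
      # participant's sort with the normative mean sort (lower r = more abnormal).
      # The sort values are hypothetical.
      import numpy as np

      normative_mean = np.array([4.2, 3.8, 2.1, 4.9, 1.5, 3.3, 2.7, 4.0])  # mean category per event
      participant = np.array([4, 2, 3, 5, 1, 4, 2, 3])                     # one participant's sort

      r = np.corrcoef(participant, normative_mean)[0, 1]
      print(f"schema-normality index r = {r:.2f}  (lower = more abnormal)")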

  20. Joint reconstruction of activity and attenuation in Time-of-Flight PET: A Quantitative Analysis.

    PubMed

    Rezaei, Ahmadreza; Deroose, Christophe M; Vahle, Thomas; Boada, Fernando; Nuyts, Johan

    2018-03-01

    Joint activity and attenuation reconstruction methods from time of flight (TOF) positron emission tomography (PET) data provide an effective solution to attenuation correction when no (or incomplete/inaccurate) information on the attenuation is available. One of the main barriers limiting their use in clinical practice is the lack of validation of these methods on a relatively large patient database. In this contribution, we aim at validating the activity reconstructions of the maximum likelihood activity reconstruction and attenuation registration (MLRR) algorithm on a whole-body patient data set. Furthermore, a partial validation (since the scale problem of the algorithm is avoided for now) of the maximum likelihood activity and attenuation reconstruction (MLAA) algorithm is also provided. We present a quantitative comparison of the joint reconstructions to the current clinical gold-standard maximum likelihood expectation maximization (MLEM) reconstruction with CT-based attenuation correction. Methods: The whole-body TOF-PET emission data of each patient data set is processed as a whole to reconstruct an activity volume covering all the acquired bed positions, which helps to reduce the problem of a scale per bed position in MLAA to a global scale for the entire activity volume. Three reconstruction algorithms are used: MLEM, MLRR and MLAA. A maximum likelihood (ML) scaling of the single scatter simulation (SSS) estimate to the emission data is used for scatter correction. The reconstruction results are then analyzed in different regions of interest. Results: The joint reconstructions of the whole-body patient data set provide better quantification in case of PET and CT misalignments caused by patient and organ motion. Our quantitative analysis shows differences of -4.2% (±2.3%) for MLRR and -7.5% (±4.6%) for MLAA relative to MLEM, averaged over all regions of interest. Conclusion: Joint activity and attenuation estimation methods provide a useful means to estimate the tracer distribution in cases where CT-based attenuation images are subject to misalignments or are not available. With an accurate estimate of the scatter contribution in the emission measurements, the joint TOF-PET reconstructions are within clinically acceptable accuracy. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  1. Exposure assessment of tetrafluoroethylene and ammonium perfluorooctanoate 1951-2002.

    PubMed

    Sleeuwenhoek, Anne; Cherrie, John W

    2012-03-01

    To develop a method to reconstruct exposure to tetrafluoroethylene (TFE) and ammonium perfluorooctanoate (APFO) in plants producing polytetrafluoroethylene (PTFE) in the absence of suitable objective measurements. These data were used to inform an epidemiological study being carried out to investigate possible risks in workers employed in the manufacture of PTFE and to study trends in exposure over time. For each plant, detailed descriptions of all occupational titles, including tasks and changes over time, were obtained during semi-structured interviews with key plant personnel. A semi-quantitative assessment method was used to assess inhalation exposure to TFE and inhalation plus dermal exposure to APFO. Temporal trends in exposure to TFE and APFO were investigated. In each plant the highest exposures for both TFE and APFO occurred in the polymerisation area. Due to the introduction of control measures, increasing process automation and other improvements, exposures generally decreased over time. In the polymerisation area, the annual decline in exposure to TFE varied by plant from 3.8 to 5.7% and for APFO from 2.2 to 5.5%. A simple method for assessing exposure was developed which used detailed process information and job descriptions to estimate average annual TFE and APFO exposure on an arbitrary semi-quantitative scale. These semi-quantitative estimates are sufficient to identify relative differences in exposure for the epidemiological study and should good data become available, they could be used to provide quantitative estimates for all plants across the whole period of operation. This journal is © The Royal Society of Chemistry 2012

  2. FERRET data analysis code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples.
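
    As an illustration of the kind of least-squares combination FERRET performs, the sketch below merges three correlated determinations of a single quantity with a generalized least-squares estimator. The measurement values and covariance matrix are invented for the example and are not taken from the code's documentation.

```python
# Hedged sketch: generalized least-squares combination of correlated
# measurements of one quantity, in the spirit of a FERRET-style evaluation.
# All numbers are made up for illustration.
import numpy as np

# Three related determinations of the same quantity (e.g. a cross section).
y = np.array([10.2, 9.8, 10.5])

# Covariance matrix encoding uncertainties and correlations between them.
V = np.array([
    [0.09, 0.03, 0.00],
    [0.03, 0.16, 0.00],
    [0.00, 0.00, 0.25],
])

# Design matrix: every observation measures the same underlying parameter.
A = np.ones((3, 1))

Vinv = np.linalg.inv(V)
cov_x = np.linalg.inv(A.T @ Vinv @ A)      # parameter covariance
x_hat = cov_x @ A.T @ Vinv @ y             # GLS estimate

print(f"combined value = {x_hat[0]:.3f} +/- {np.sqrt(cov_x[0, 0]):.3f}")
```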

  3. Quantitative, spectrally-resolved intraoperative fluorescence imaging

    PubMed Central

    Valdés, Pablo A.; Leblond, Frederic; Jacobs, Valerie L.; Wilson, Brian C.; Paulsen, Keith D.; Roberts, David W.

    2012-01-01

    Intraoperative visual fluorescence imaging (vFI) has emerged as a promising aid to surgical guidance, but does not fully exploit the potential of the fluorescent agents that are currently available. Here, we introduce a quantitative fluorescence imaging (qFI) approach that converts spectrally-resolved data into images of absolute fluorophore concentration pixel-by-pixel across the surgical field of view (FOV). The resulting estimates are linear, accurate, and precise relative to true values, and spectral decomposition of multiple fluorophores is also achieved. Experiments with protoporphyrin IX in a glioma rodent model demonstrate in vivo quantitative and spectrally-resolved fluorescence imaging of infiltrating tumor margins for the first time. Moreover, we present images from human surgery which detect residual tumor not evident with state-of-the-art vFI. The wide-field qFI technique has broad implications for intraoperative surgical guidance because it provides near real-time quantitative assessment of multiple fluorescent biomarkers across the operative field. PMID:23152935
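
    A minimal sketch of the spectral-decomposition step described above: a measured emission spectrum is unmixed into non-negative contributions of known fluorophore basis spectra via non-negative least squares. The wavelength grid, basis shapes (including the nominal protoporphyrin IX peak) and concentrations are synthetic placeholders, not the authors' calibration data.

```python
# Hedged sketch: linear spectral unmixing of a measured spectrum into
# non-negative fluorophore contributions. Basis spectra are synthetic.
import numpy as np
from scipy.optimize import nnls

wavelengths = np.linspace(600, 750, 151)   # nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Unit-concentration basis spectra: two fluorophores plus autofluorescence.
basis = np.column_stack([
    gaussian(635, 10),   # e.g. protoporphyrin IX main peak (illustrative)
    gaussian(705, 15),   # second fluorophore / photoproduct
    gaussian(660, 60),   # broad tissue autofluorescence
])

true_concentrations = np.array([2.0, 0.5, 1.0])
noise = 0.01 * np.random.default_rng(0).normal(size=wavelengths.size)
measured = basis @ true_concentrations + noise

# Solve min ||basis @ c - measured|| subject to c >= 0.
concentrations, residual = nnls(basis, measured)
print("recovered concentrations:", np.round(concentrations, 3))
```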

  4. Biphasic dose responses in biology, toxicology and medicine: accounting for their generalizability and quantitative features.

    PubMed

    Calabrese, Edward J

    2013-11-01

    The most common quantitative feature of the hormetic-biphasic dose response is its modest stimulatory response which at maximum is only 30-60% greater than control values, an observation that is consistently independent of biological model, level of organization (i.e., cell, organ or individual), endpoint measured, chemical/physical agent studied, or mechanism. This quantitative feature suggests an underlying "upstream" mechanism common across biological systems, therefore basic and general. Hormetic dose response relationships represent an estimate of the peak performance of integrative biological processes that are allometrically based. Hormetic responses reflect either direct stimulatory responses or overcompensation responses to damage induced by relatively low doses of chemical or physical agents. The integration of the hormetic dose response within an allometric framework provides, for the first time, an explanation for both the generality and the quantitative features of the hormetic dose response. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Quantification of bone marrow fat content using iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL): reproducibility, site variation and correlation with age and menopause.

    PubMed

    Aoki, Takatoshi; Yamaguchi, Shinpei; Kinoshita, Shunsuke; Hayashida, Yoshiko; Korogi, Yukunori

    2016-09-01

    To determine the reproducibility of the quantitative chemical shift-based water-fat separation method with a multiecho gradient echo sequence [iterative decomposition of water and fat with echo asymmetry and least-squares estimation quantitation sequence (IDEAL-IQ)] for assessing bone marrow fat fraction (FF); to evaluate variation of FF at different bone sites; and to investigate its association with age and menopause. Thirty-one consecutive females who underwent pelvic iterative decomposition of water and fat with echo asymmetry and least-squares estimation at 3-T MRI were included in this study. Quantitative FFs at four bone sites were analyzed using IDEAL-IQ. The coefficients of variation (CV) at each site were evaluated from 10 repeated measurements to assess reproducibility. Correlations between FF and age were evaluated at each site, and the FFs of pre- and post-menopausal groups were compared. The CV in the quantification of marrow FF ranged from 0.69% to 1.70%. A statistically significant correlation was established between FF and age in the lumbar vertebral body, ilium and intertrochanteric region of the femur (p < 0.001). The average FF of post-menopausal females was significantly higher than that of pre-menopausal females at these sites (p < 0.05). In the greater trochanter of the femur, there was no significant correlation between FF and age. In vivo IDEAL-IQ would provide reliable quantification of bone marrow fat. IDEAL-IQ is simple to perform in a short time and may be practical for providing information on bone quality in clinical settings.

  6. The use of simulation and multiple environmental tracers to quantify groundwater flow in a shallow aquifer

    USGS Publications Warehouse

    Reilly, Thomas E.; Plummer, Niel; Phillips, Patrick J.; Busenberg, Eurybiades

    1994-01-01

    Measurements of the concentrations of chlorofluorocarbons (CFCs), tritium, and other environmental tracers can be used to calculate recharge ages of shallow groundwater and estimate rates of groundwater movement. Numerical simulation also provides quantitative estimates of flow rates, flow paths, and mixing properties of the groundwater system. The environmental tracer techniques and the hydraulic analyses each contribute to the understanding and quantification of the flow of shallow groundwater. However, when combined, the two methods provide feedback that improves the quantification of the flow system and provides insight into the processes that are the most uncertain. A case study near Locust Grove, Maryland, is used to investigate the utility of combining groundwater age dating, based on CFCs and tritium, and hydraulic analyses using numerical simulation techniques. The results of the feedback between an advective transport model and the estimates of groundwater ages determined by the CFCs improve a quantitative description of the system by refining the system conceptualization and estimating system parameters. The plausible system developed with this feedback between the advective flow model and the CFC ages is further tested using a solute transport simulation to reproduce the observed tritium distribution in the groundwater. The solute transport simulation corroborates the plausible system developed and also indicates that, for the system under investigation with the data obtained from 0.9-m-long (3-foot-long) well screens, the hydrodynamic dispersion is negligible. Together the two methods enable a coherent explanation of the flow paths and rates of movement while indicating weaknesses in the understanding of the system that will require future data collection and conceptual refinement of the groundwater system.

  7. Growth characteristics and Otolith analysis on Age-0 American Shad

    USGS Publications Warehouse

    Sauter, Sally T.; Wetzel, Lisa A.

    2011-01-01

    Otolith microstructure analysis provides useful information on the growth history of fish (Campana and Jones 1992, Bang and Gronkjaer 2005). Microstructure analysis can be used to construct the size-at-age growth trajectory of fish, determine daily growth rates, and estimate hatch date and other ecologically important life history events (Campana and Jones 1992, Tonkin et al. 2008). This kind of information can be incorporated into bioenergetics modeling, providing necessary data for estimating prey consumption, and guiding the development of empirically-based modeling scenarios for hypothesis testing. For example, age-0 American shad co-occur with emigrating juvenile fall Chinook salmon originating from Hanford Reach and the Snake River in the lower Columbia River reservoirs during the summer and early fall. The diet of age-0 American shad appears to overlap with that of juvenile fall Chinook salmon (Chapter 1, this report), but juvenile fall Chinook salmon are also known to feed on age-0 American shad in the reservoirs (USGS unpublished data). Abundant, energy-dense age-0 American shad may provide juvenile fall Chinook salmon opportunities for rapid growth during the time period when large numbers of age-0 American shad are available. Otolith analysis of hatch dates and the growth curve of age-0 American shad could be used to identify when eggs, larvae, and juveniles of specific size classes are temporally available as food for fall Chinook salmon in the lower Columbia River reservoirs. This kind of temporally and spatially explicit life history information is important to include in bioenergetics modeling scenarios. Quantitative estimates of prey consumption could be used with spatially-explicit estimates of prey abundance to construct a quantitative assessment of the age-0 American shad impact on a reservoir food web.

  8. Should fatty acid signature proportions sum to 1 for diet estimation?

    USGS Publications Warehouse

    Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.

    2016-01-01

    Knowledge of predator diets, including how diets might change through time or differ among predators, provides essential insights into their ecology. Diet estimation therefore remains an active area of research within quantitative ecology. Quantitative fatty acid signature analysis (QFASA) is an increasingly common method of diet estimation. QFASA is based on a data library of prey signatures, which are vectors of proportions summarizing the fatty acid composition of lipids, and diet is estimated as the mixture of prey signatures that most closely approximates a predator’s signature. Diets are typically estimated using proportions from a subset of all fatty acids that are known to be solely or largely influenced by diet. Given the subset of fatty acids selected, the current practice is to scale their proportions to sum to 1.0. However, scaling signature proportions has the potential to distort the structural relationships within a prey library and between predators and prey. To investigate that possibility, we compared the practice of scaling proportions with two alternatives and found that the traditional scaling can meaningfully bias diet estimators under some conditions. Two aspects of the prey types that contributed to a predator’s diet influenced the magnitude of the bias: the degree to which the sums of unscaled proportions differed among prey types and the identifiability of prey types within the prey library. We caution investigators against the routine scaling of signature proportions in QFASA.
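
    The core QFASA estimation step described above can be sketched as a constrained least-squares problem: find the diet proportions (non-negative, summing to 1) whose mixture of prey signatures best matches the predator signature. The toy signatures and squared-error distance below are illustrative simplifications; published QFASA uses calibration coefficients and a Kullback-Leibler-type distance.

```python
# Hedged sketch of the core QFASA idea: estimate diet proportions as the
# mixture of prey fatty acid signatures that best matches a predator signature.
import numpy as np
from scipy.optimize import minimize

# Mean fatty acid signatures (rows: prey types, columns: fatty acids).
prey = np.array([
    [0.40, 0.30, 0.20, 0.10],
    [0.10, 0.20, 0.30, 0.40],
    [0.25, 0.25, 0.25, 0.25],
])
predator = np.array([0.28, 0.26, 0.24, 0.22])

def objective(p):
    # Squared distance between the mixed prey signature and the predator's.
    return np.sum((p @ prey - predator) ** 2)

k = prey.shape[0]
result = minimize(
    objective,
    x0=np.full(k, 1.0 / k),                                   # start from an even diet
    bounds=[(0.0, 1.0)] * k,                                  # proportions in [0, 1]
    constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],  # sum to 1
)
print("estimated diet proportions:", np.round(result.x, 3))
```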

  9. Mechanistic and quantitative insight into cell surface targeted molecular imaging agent design.

    PubMed

    Zhang, Liang; Bhatnagar, Sumit; Deschenes, Emily; Thurber, Greg M

    2016-05-05

    Molecular imaging agent design involves simultaneously optimizing multiple probe properties. While several desired characteristics are straightforward, including high affinity and low non-specific background signal, in practice there are quantitative trade-offs between these properties. These include plasma clearance, where fast clearance lowers background signal but can reduce target uptake, and binding, where high affinity compounds sometimes suffer from lower stability or increased non-specific interactions. Further complicating probe development, many of the optimal parameters vary depending on both target tissue and imaging agent properties, making empirical approaches or previous experience difficult to translate. Here, we focus on low molecular weight compounds targeting extracellular receptors, which have some of the highest contrast values for imaging agents. We use a mechanistic approach to provide a quantitative framework for weighing trade-offs between molecules. Our results show that specific target uptake is well-described by quantitative simulations for a variety of targeting agents, whereas non-specific background signal is more difficult to predict. Two in vitro experimental methods for estimating background signal in vivo are compared - non-specific cellular uptake and plasma protein binding. Together, these data provide a quantitative method to guide probe design and focus animal work for more cost-effective and time-efficient development of molecular imaging agents.

  10. Accounting for Limited Detection Efficiency and Localization Precision in Cluster Analysis in Single Molecule Localization Microscopy

    PubMed Central

    Shivanandan, Arun; Unnikrishnan, Jayakrishnan; Radenovic, Aleksandra

    2015-01-01

    Single Molecule Localization Microscopy techniques like PhotoActivated Localization Microscopy, with their sub-diffraction limit spatial resolution, have been popularly used to characterize the spatial organization of membrane proteins, by means of quantitative cluster analysis. However, such quantitative studies remain challenged by the techniques’ inherent sources of errors such as a limited detection efficiency of less than 60%, due to incomplete photo-conversion, and a limited localization precision in the range of 10 – 30nm, varying across the detected molecules, mainly depending on the number of photons collected from each. We provide analytical methods to estimate the effect of these errors in cluster analysis and to correct for them. These methods, based on the Ripley’s L(r) – r or Pair Correlation Function popularly used by the community, can facilitate potentially breakthrough results in quantitative biology by providing a more accurate and precise quantification of protein spatial organization. PMID:25794150
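
    For orientation, the sketch below computes the Ripley's K-based L(r) − r statistic for a simulated 2-D point pattern, without edge correction and without the detection-efficiency and localization-precision corrections the paper derives; it only illustrates the base statistic those corrections apply to.

```python
# Hedged sketch: Ripley's K and the L(r) - r statistic for a 2-D point pattern,
# ignoring edge correction for brevity. Points are simulated, not SMLM data.
import numpy as np

rng = np.random.default_rng(1)
area_side = 1000.0                                   # nm, square region
points = rng.uniform(0, area_side, size=(500, 2))    # simulated localizations

def ripley_L_minus_r(points, radii, area):
    n = len(points)
    density = n / area
    # Pairwise distances; the diagonal is excluded by setting it to infinity.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    # K(r): average number of neighbours within r, normalized by density.
    K = np.array([(d < r).sum() / (n * density) for r in radii])
    L = np.sqrt(K / np.pi)
    return L - radii          # ~0 for complete spatial randomness

radii = np.linspace(10, 200, 20)
print(np.round(ripley_L_minus_r(points, radii, area_side ** 2), 2))
```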

  11. On sweat analysis for quantitative estimation of dehydration during physical exercise.

    PubMed

    Ring, Matthias; Lohmueller, Clemens; Rauh, Manfred; Eskofier, Bjoern M

    2015-08-01

    Quantitative estimation of water loss during physical exercise is of importance because dehydration can impair both muscular strength and aerobic endurance. A physiological indicator for deficit of total body water (TBW) might be the concentration of electrolytes in sweat. It has been shown that concentrations differ after physical exercise depending on whether water loss was replaced by fluid intake or not. However, to the best of our knowledge, this fact has not been examined for its potential to quantitatively estimate TBW loss. Therefore, we conducted a study in which sweat samples were collected continuously during two hours of physical exercise without fluid intake. A statistical analysis of these sweat samples revealed significant correlations between chloride concentration in sweat and TBW loss (r = 0.41, p < 0.01), and between sweat osmolality and TBW loss (r = 0.43, p < 0.01). A quantitative estimation of TBW loss resulted in a mean absolute error of 0.49 l per estimation. Although the precision has to be improved for practical applications, the present results suggest that TBW loss estimation could be realized using sweat samples.
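
    A minimal sketch of the kind of quantitative estimation described above: fit a linear model of TBW loss on sweat chloride concentration and report the mean absolute error of the fit. The data points are invented placeholders, not the study's measurements.

```python
# Hedged sketch: calibrating a simple linear predictor of total body water
# (TBW) loss from sweat chloride concentration. Synthetic example values only.
import numpy as np

chloride = np.array([22.0, 30.0, 35.0, 41.0, 48.0, 55.0, 60.0])  # mmol/L
tbw_loss = np.array([0.6, 0.9, 1.1, 1.4, 1.6, 2.0, 2.1])         # litres

slope, intercept = np.polyfit(chloride, tbw_loss, deg=1)          # least-squares line
predicted = slope * chloride + intercept
mae = np.mean(np.abs(predicted - tbw_loss))

print(f"TBW loss ~ {slope:.3f} * [Cl-] + {intercept:.3f},  MAE = {mae:.2f} L")
```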

  12. Predicting Loss-of-Control Boundaries Toward a Piloting Aid

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan; Stepanyan, Vahram; Krishnakumar, Kalmanje

    2012-01-01

    This work presents an approach to predicting loss-of-control with the goal of providing the pilot a decision aid focused on maintaining the pilot's control action within predicted loss-of-control boundaries. The predictive architecture combines quantitative loss-of-control boundaries, a data-based predictive control boundary estimation algorithm and an adaptive prediction method to estimate Markov model parameters in real-time. The data-based loss-of-control boundary estimation algorithm estimates the boundary of a safe set of control inputs that will keep the aircraft within the loss-of-control boundaries for a specified time horizon. The adaptive prediction model generates estimates of the system Markov Parameters, which are used by the data-based loss-of-control boundary estimation algorithm. The combined algorithm is applied to a nonlinear generic transport aircraft to illustrate the features of the architecture.

  13. Quantitative health impact assessment of transport policies: two simulations related to speed limit reduction and traffic re-allocation in the Netherlands.

    PubMed

    Schram-Bijkerk, D; van Kempen, E; Knol, A B; Kruize, H; Staatsen, B; van Kamp, I

    2009-10-01

    Few quantitative health impact assessments (HIAs) of transport policies have been published so far and there is a lack of a common methodology for such assessments. To evaluate the usability of existing HIA methodology to quantify health effects of transport policies at the local level. The health impact of two simulated but realistic transport interventions - speed limit reduction and traffic re-allocation - was quantified by selecting traffic-related exposures and health endpoints, modelling of population exposure, selecting exposure-effect relations and estimating the number of local traffic-related cases and disease burden, expressed in disability-adjusted life-years (DALYs), before and after the intervention. Exposure information was difficult to retrieve because of the local scale of the interventions, and exposure-effect relations for subgroups and combined effects were missing. Given uncertainty in the outcomes originating from this kind of missing information, simulated changes in population health from the two local traffic interventions were estimated to be small (<5%), except for the estimated reduction in DALYs from fewer traffic accidents (60%) due to speed limit reduction. Quantitative HIA of transport policies at a local scale is possible, provided that data on exposures, the exposed population and their baseline health status are available. The interpretation of the HIA information should be carried out in the context of the quality of input data and the assumptions and uncertainties of the analysis.

  14. Household physical activity and cancer risk: a systematic review and dose-response meta-analysis of epidemiological studies

    PubMed Central

    Shi, Yun; Li, Tingting; Wang, Ying; Zhou, Lingling; Qin, Qin; Yin, Jieyun; Wei, Sheng; Liu, Li; Nie, Shaofa

    2015-01-01

    Previous epidemiological studies have reported conflicting results on the association between household physical activity and cancer risk. We conducted a meta-analysis to investigate the relationship between household physical activity and cancer risk quantitatively, especially in a dose-response manner. PubMed, Embase, Web of Science and the Cochrane Library were searched for cohort or case-control studies that examined the association between household physical activity and cancer risks. Random-effects models were used to estimate the summary relative risks (RRs), and nonlinear or linear dose-response meta-analyses were performed to estimate the trend from the correlated log RR estimates across levels of household physical activity. In total, 30 studies including 41 comparisons met the inclusion criteria. Total cancer risk was reduced by 16% among people with the highest household physical activity compared to those with the lowest (RR = 0.84, 95% CI = 0.76–0.93). The dose-response analyses indicated an inverse linear association between household physical activity and cancer risk. The relative risk was 0.98 (95% CI = 0.97–1.00) per additional 10 MET-hours/week and 0.99 (95% CI = 0.98–0.99) per 1 hour/week increase. These findings provide quantitative evidence that household physical activity is associated with decreased cancer risk in a dose-response manner. PMID:26443426
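
    The pooling step behind summary RRs of this kind can be sketched with a DerSimonian-Laird random-effects model. The study-level RRs and standard errors below are invented for illustration and do not reproduce the meta-analysis data.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of study-level
# log relative risks. Example inputs are invented.
import numpy as np

rr = np.array([0.80, 0.92, 0.75, 1.05, 0.88])        # study relative risks
se_log_rr = np.array([0.10, 0.08, 0.15, 0.12, 0.09]) # SE of log(RR)

y = np.log(rr)
v = se_log_rr ** 2
w = 1.0 / v                                          # fixed-effect weights

# Between-study variance (DerSimonian-Laird estimator).
y_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - y_fixed) ** 2)
df = len(y) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_star = 1.0 / (v + tau2)                            # random-effects weights
pooled = np.sum(w_star * y) / np.sum(w_star)
se_pooled = np.sqrt(1.0 / np.sum(w_star))
ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])

print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```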

  15. The Relative Performance of High Resolution Quantitative Precipitation Estimates in the Russian River Basin

    NASA Astrophysics Data System (ADS)

    Bytheway, J. L.; Biswas, S.; Cifelli, R.; Hughes, M.

    2017-12-01

    The Russian River carves a 110 mile path through Mendocino and Sonoma counties in western California, providing water for thousands of residents and acres of agriculture as well as a home for several species of endangered fish. The Russian River basin receives almost all of its precipitation during the October through March wet season, and the systems bringing this precipitation are often impacted by atmospheric river events as well as the complex topography of the region. This study will examine the performance of several high resolution (hourly, < 5km) estimates of precipitation from observational products and forecasts over the 2015-2016 and 2016-2017 wet seasons. Comparisons of event total rainfall as well as hourly rainfall will be performed using 1) rain gauges operated by the National Oceanic and Atmospheric Administration (NOAA) Physical Sciences Division (PSD), 2) products from the Multi-Radar/Multi-Sensor (MRMS) QPE dataset, and 3) quantitative precipitation forecasts from the High Resolution Rapid Refresh (HRRR) model at 1, 3, 6, and 12 hour lead times. Further attention will be given to cases or locations representing large disparities between the estimates.

  16. Quantitative estimation of cholinesterase-specific drug metabolism of carbamate inhibitors provided by the analysis of the area under the inhibition-time curve.

    PubMed

    Zhou, Huimin; Xiao, Qiaoling; Tan, Wen; Zhan, Yiyi; Pistolozzi, Marco

    2017-09-10

    Several molecules containing carbamate groups are metabolized by cholinesterases. This metabolism includes a time-dependent catalytic step which temporary inhibits the enzymes. In this paper we demonstrate that the analysis of the area under the inhibition versus time curve (AUIC) can be used to obtain a quantitative estimation of the amount of carbamate metabolized by the enzyme. (R)-bambuterol monocarbamate and plasma butyrylcholinesterase were used as model carbamate-cholinesterase system. The inhibition of different concentrations of the enzyme was monitored for 5h upon incubation with different concentrations of carbamate and the resulting AUICs were analyzed. The amount of carbamate metabolized could be estimated with <15% accuracy (RE%) and ≤23% precision (RSD%). Since the knowledge of the inhibition kinetics is not required for the analysis, this approach could be used to determine the amount of drug metabolized by cholinesterases in a selected compartment in which the cholinesterase is confined (e.g. in vitro solutions, tissues or body fluids), either in vitro or in vivo. Copyright © 2017 Elsevier B.V. All rights reserved.
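
    A minimal sketch of the AUIC computation itself: numerically integrate an inhibition-versus-time curve with the trapezoidal rule. The time course is invented, and the relation between AUIC and the amount of carbamate metabolized assumed in the closing comment would come from calibration, as the paper describes.

```python
# Hedged sketch: area under an inhibition-versus-time curve (AUIC) by the
# trapezoidal rule. The time course values are invented.
import numpy as np

time_h = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 5.0])
inhibition = np.array([0.00, 0.35, 0.55, 0.60, 0.50, 0.38, 0.25])  # fraction of enzyme inhibited

# Trapezoidal integration of inhibition over time.
auic = np.sum(np.diff(time_h) * (inhibition[:-1] + inhibition[1:]) / 2.0)
print(f"AUIC = {auic:.2f} h")

# In the paper's framework the metabolized amount is related to the AUIC for a
# given enzyme concentration; the conversion factor would be obtained from
# calibration experiments (an assumption of this sketch, not a reproduced result).
```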

  17. Improved quantitative analysis of spectra using a new method of obtaining derivative spectra based on a singular perturbation technique.

    PubMed

    Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan

    2015-06-01

    Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise due to non-ideal instrument and sample properties. In this study, to improve the quantitative analysis of near-infrared spectra, the derivatives of noisy raw spectral data must be estimated with high accuracy. A new spectral estimator based on a singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and a stability analysis of the estimator is given. Theoretical analysis and simulation experiments confirm that the derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated using the analysis of beer spectra. The derivative spectra of the beer and marzipan datasets are used to build calibration models using partial least squares (PLS) modeling. The results show that PLS based on the new estimator achieves better performance than the Savitzky-Golay algorithm and can serve as an alternative choice for quantitative analytical applications.
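
    The SPSE itself is not reproduced here, but the baseline it is compared against can be sketched: a Savitzky-Golay first-derivative estimate of a noisy synthetic spectrum, using scipy's standard filter.

```python
# Hedged sketch: first-derivative spectrum from noisy data with a
# Savitzky-Golay filter (the comparison baseline named above). Synthetic data.
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(0, 10, 500)                       # arbitrary wavelength axis
dx = x[1] - x[0]
spectrum = np.exp(-(x - 4.0) ** 2) + 0.6 * np.exp(-(x - 6.5) ** 2 / 0.5)
noisy = spectrum + 0.01 * np.random.default_rng(0).normal(size=x.size)

# First derivative via a 2nd-order polynomial fit over a 21-point window.
d1 = savgol_filter(noisy, window_length=21, polyorder=2, deriv=1, delta=dx)

true_d1 = np.gradient(spectrum, dx)
print("RMS error of SG derivative:", np.sqrt(np.mean((d1 - true_d1) ** 2)))
```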

  18. Can We Really Trust Anyone Who Profits from Ranking Higher Education Institutions, or How Would One Evaluate Institutional Quality?

    ERIC Educational Resources Information Center

    Micceri, Ted

    2005-01-01

    Although numerous quality ratings exist in today's media-centric environment (Money Magazine, and U.S. News and World Report, etc.), it is quite difficult to provide any reasonably meaningful estimates of institutional quality, either qualitative or quantitative. Global ratings of university "quality" abound, despite the fact that there…

  19. Managing Technical and Cost Uncertainties During Product Development in a Simulation-Based Design Environment

    NASA Technical Reports Server (NTRS)

    Karandikar, Harsh M.

    1997-01-01

    An approach for objective and quantitative technical and cost risk analysis during product development, which is applicable from the earliest stages, is discussed. The approach is supported by a software tool called the Analytical System for Uncertainty and Risk Estimation (ASURE). Details of ASURE, the underlying concepts and its application history, are provided.

  20. Conservation assessment of the Yazoo Darter (Etheostoma raneyi)

    Treesearch

    Ken A. Sterling; Melvin L. Warren, Jr.; Gayle Henderson

    2013-01-01

    We summarized all known historical and contemporary data on the geographic distribution of Etheostoma raneyi (Yazoo Darter), a range-restricted endemic in the Little Tallahatchie and Yocona rivers (upper Yazoo River basin), MS. We identified federal and state land ownership in relation to the darter’s distribution and provided quantitative estimates of abundance of the...

  1. Pest management in Douglas-fir seed orchards: a microcomputer decision method

    Treesearch

    James B. Hoy; Michael I. Haverty

    1988-01-01

    The computer program described provides a Douglas-fir seed orchard manager (user) with a quantitative method for making insect pest management decisions on a desk-top computer. The decision system uses site-specific information such as estimates of seed crop size, insect attack rates, insecticide efficacy and application costs, weather, and crop value. At sites where...

  2. Pathway models for analysing and managing the introduction of alien plant pests—an overview and categorization

    Treesearch

    J.C. Douma; M. Pautasso; R.C. Venette; C. Robinet; L. Hemerik; M.C.M. Mourits; J. Schans; W. van der Werf

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and effectiveness of management options....

  3. Estimating the Effect of Leaders on Public Sector Productivity: The Case of School Principals. Working Paper 66

    ERIC Educational Resources Information Center

    Branch, Gregory F.; Hanushek, Eric A.; Rivkin, Steven G.

    2012-01-01

    Although much has been written about the importance of leadership in the determination of organizational success, there is little quantitative evidence due to the difficulty of separating the impact of leaders from other organizational components--particularly in the public sector. Schools provide an especially rich environment for studying the…

  4. Synthesis, quantitative structure-property relationship study of novel fluorescence active 2-pyrazolines and application.

    PubMed

    Girgis, Adel S; Basta, Altaf H; El-Saied, Houssni; Mohamed, Mohamed A; Bedair, Ahmad H; Salim, Ahmad S

    2018-03-01

    A variety of fluorescence-active fluorinated pyrazolines 13-33 was synthesized in good yields through cyclocondensation reaction of propenones 1-9 with aryl hydrazines 10-12. Some of the synthesized compounds provided promising fluorescence properties with quantum yield (Φ) higher than that of quinine sulfate (standard reference). Quantitative structure-property relationship studies were undertaken supporting the exhibited fluorescence properties and estimating the parameters governing properties. Five synthesized fluorescence-active pyrazolines (13, 15, 18, 19 and 23) with variable Φ were selected for treating two types of paper sheets (Fabriano and Bible paper). These investigated fluorescence compounds, especially compounds 19 and 23, provide improvements in strength properties of paper sheets. Based on the observed performance they can be used as markers in security documents.

  5. Synthesis, quantitative structure–property relationship study of novel fluorescence active 2-pyrazolines and application

    PubMed Central

    Girgis, Adel S.; El-Saied, Houssni; Mohamed, Mohamed A.; Bedair, Ahmad H.; Salim, Ahmad S.

    2018-01-01

    A variety of fluorescence-active fluorinated pyrazolines 13–33 was synthesized in good yields through cyclocondensation reaction of propenones 1–9 with aryl hydrazines 10–12. Some of the synthesized compounds provided promising fluorescence properties with quantum yield (Φ) higher than that of quinine sulfate (standard reference). Quantitative structure–property relationship studies were undertaken supporting the exhibited fluorescence properties and estimating the parameters governing properties. Five synthesized fluorescence-active pyrazolines (13, 15, 18, 19 and 23) with variable Φ were selected for treating two types of paper sheets (Fabriano and Bible paper). These investigated fluorescence compounds, especially compounds 19 and 23, provide improvements in strength properties of paper sheets. Based on the observed performance they can be used as markers in security documents. PMID:29657796

  6. Synthesis, quantitative structure-property relationship study of novel fluorescence active 2-pyrazolines and application

    NASA Astrophysics Data System (ADS)

    Girgis, Adel S.; Basta, Altaf H.; El-Saied, Houssni; Mohamed, Mohamed A.; Bedair, Ahmad H.; Salim, Ahmad S.

    2018-03-01

    A variety of fluorescence-active fluorinated pyrazolines 13-33 was synthesized in good yields through cyclocondensation reaction of propenones 1-9 with aryl hydrazines 10-12. Some of the synthesized compounds provided promising fluorescence properties with quantum yield (Φ) higher than that of quinine sulfate (standard reference). Quantitative structure-property relationship studies were undertaken supporting the exhibited fluorescence properties and estimating the parameters governing properties. Five synthesized fluorescence-active pyrazolines (13, 15, 18, 19 and 23) with variable Φ were selected for treating two types of paper sheets (Fabriano and Bible paper). These investigated fluorescence compounds, especially compounds 19 and 23, provide improvements in strength properties of paper sheets. Based on the observed performance they can be used as markers in security documents.

  7. EvolQG - An R package for evolutionary quantitative genetics

    PubMed Central

    Melo, Diogo; Garcia, Guilherme; Hubbe, Alex; Assis, Ana Paula; Marroig, Gabriel

    2016-01-01

    We present an open source package for performing evolutionary quantitative genetics analyses in the R environment for statistical computing. Evolutionary theory shows that evolution depends critically on the available variation in a given population. When dealing with many quantitative traits this variation is expressed in the form of a covariance matrix, particularly the additive genetic covariance matrix or sometimes the phenotypic matrix, when the genetic matrix is unavailable and there is evidence the phenotypic matrix is sufficiently similar to the genetic matrix. Given this mathematical representation of available variation, the EvolQG package provides functions for calculation of relevant evolutionary statistics; estimation of sampling error; corrections for this error; matrix comparison via correlations, distances and matrix decomposition; analysis of modularity patterns; and functions for testing evolutionary hypotheses on taxa diversification. PMID:27785352

  8. Estimation of Forest Fuel Load from Radar Remote Sensing

    NASA Technical Reports Server (NTRS)

    Saatchi, Sassan; Despain, Don G.; Halligan, Kerry; Crabtree, Robert

    2007-01-01

    Understanding fire behavior characteristics and planning for fire management require maps showing the distribution of wildfire fuel loads at medium to fine spatial resolution across large landscapes. Radar sensors from airborne or spaceborne platforms have the potential of providing quantitative information about the forest structure and biomass components that can be readily translated to meaningful fuel load estimates for fire management. In this paper, we used multifrequency polarimetric synthetic aperture radar imagery acquired over a large area of the Yellowstone National Park (YNP) by the AIRSAR sensor, to estimate the distribution of forest biomass and canopy fuel loads. Semi-empirical algorithms were developed to estimate crown and stem biomass and three major fuel load parameters, canopy fuel weight, canopy bulk density, and foliage moisture content. These estimates, when compared directly to measurements made at plot and stand levels, provided more than 70% accuracy, and when partitioned into fuel load classes, provided more than 85% accuracy. Specifically, the radar-generated fuel parameters were in good agreement with the field-based fuel measurements, resulting in coefficients of determination of R² = 0.85 for the canopy fuel weight, R² = 0.84 for canopy bulk density and R² = 0.78 for the foliage biomass.

  9. Lewis and Clark National Historical Park Elk Monitoring Program Annual Report 2010

    USGS Publications Warehouse

    Cole, Carla; Griffin, Paul; Jenkins, Kurt

    2012-01-01

    Data from FY09, FY10, and FY11 will be useful in the formal analyses of trend. Those three years of data will contribute to the preparation of a four-year analysis and report after only one more year. Quantitative estimates of relative use by elk throughout the Fort Clatsop unit will be provided in the four-year report in 2012. Those estimates will account for detection bias, which comes from an incomplete count of elk pellets that were present in the subplots at the time of survey.

  10. A General Model for Estimating Macroevolutionary Landscapes.

    PubMed

    Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef

    2018-03-01

    The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
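
    A hedged sketch of the process the FPK model describes: a trait diffusing under a deterministic force derived from a two-peaked potential, simulated with an Euler-Maruyama scheme. The potential, diffusion rate and step sizes are arbitrary illustrative choices, not the authors' R implementation, which fits the Fokker-Planck equation to comparative data directly.

```python
# Hedged sketch: a quantitative trait evolving over a two-peaked landscape,
# i.e. a stochastic process whose density obeys the Fokker-Planck
# (Kolmogorov forward) equation that the FPK model fits. Synthetic settings.
import numpy as np

def potential_gradient(x):
    # V(x) = (x**2 - 1)**2 defines two adaptive peaks (minima) at x = -1 and x = +1.
    return 4.0 * x * (x ** 2 - 1.0)

rng = np.random.default_rng(42)
sigma2 = 0.5          # diffusion parameter (random component of evolution)
dt = 1e-3
n_steps = 100_000

x = np.empty(n_steps)
x[0] = 0.0
for t in range(1, n_steps):
    drift = -potential_gradient(x[t - 1])                        # deterministic force
    x[t] = x[t - 1] + drift * dt + np.sqrt(sigma2 * dt) * rng.normal()

# The long-run distribution of x approximates the stationary solution of the
# Fokker-Planck equation, proportional to exp(-2 V(x) / sigma2).
print("fraction of time near each peak:",
      np.mean(x < 0).round(3), "(x = -1) vs", np.mean(x > 0).round(3), "(x = +1)")
```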

  11. Dynamic whole body PET parametric imaging: II. Task-oriented statistical estimation

    PubMed Central

    Karakatsanis, Nicolas A.; Lodge, Martin A.; Zhou, Y.; Wahl, Richard L.; Rahmim, Arman

    2013-01-01

    In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15–20cm) of a single bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical FDG patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection. PMID:24080994
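
    The baseline Patlak ordinary least squares step referred to above can be sketched for a single voxel: regress CT/Cp on the normalized integral of the plasma input to obtain the uptake rate Ki (slope) and distribution volume V (intercept). The input function and tissue curve below are synthetic, not patient data.

```python
# Hedged sketch: single-voxel Patlak analysis by ordinary least squares.
# Plasma input and tissue curve are synthetic examples.
import numpy as np

t = np.array([5, 10, 15, 20, 30, 40, 50, 60], dtype=float)   # minutes
cp = 100.0 * np.exp(-0.08 * t) + 5.0                          # plasma activity
Ki_true, V_true = 0.02, 0.6
cp_int = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
ct = Ki_true * cp_int + V_true * cp                           # tissue activity (ideal Patlak)

# Patlak coordinates: y = CT/Cp vs x = integral(Cp)/Cp; slope = Ki, intercept = V.
x = cp_int / cp
y = ct / cp

# OLS fit over the late, linear portion of the plot (here: from the 4th frame).
A = np.column_stack([x[3:], np.ones(len(x) - 3)])
(slope, intercept), *_ = np.linalg.lstsq(A, y[3:], rcond=None)
print(f"Ki = {slope:.4f} /min, V = {intercept:.3f}")
```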

  12. Dynamic whole-body PET parametric imaging: II. Task-oriented statistical estimation.

    PubMed

    Karakatsanis, Nicolas A; Lodge, Martin A; Zhou, Y; Wahl, Richard L; Rahmim, Arman

    2013-10-21

    In the context of oncology, dynamic PET imaging coupled with standard graphical linear analysis has been previously employed to enable quantitative estimation of tracer kinetic parameters of physiological interest at the voxel level, thus, enabling quantitative PET parametric imaging. However, dynamic PET acquisition protocols have been confined to the limited axial field-of-view (~15-20 cm) of a single-bed position and have not been translated to the whole-body clinical imaging domain. On the contrary, standardized uptake value (SUV) PET imaging, considered as the routine approach in clinical oncology, commonly involves multi-bed acquisitions, but is performed statically, thus not allowing for dynamic tracking of the tracer distribution. Here, we pursue a transition to dynamic whole-body PET parametric imaging, by presenting, within a unified framework, clinically feasible multi-bed dynamic PET acquisition protocols and parametric imaging methods. In a companion study, we presented a novel clinically feasible dynamic (4D) multi-bed PET acquisition protocol as well as the concept of whole-body PET parametric imaging employing Patlak ordinary least squares (OLS) regression to estimate the quantitative parameters of tracer uptake rate Ki and total blood distribution volume V. In the present study, we propose an advanced hybrid linear regression framework, driven by Patlak kinetic voxel correlations, to achieve superior trade-off between contrast-to-noise ratio (CNR) and mean squared error (MSE) than provided by OLS for the final Ki parametric images, enabling task-based performance optimization. Overall, whether the observer's task is to detect a tumor or quantitatively assess treatment response, the proposed statistical estimation framework can be adapted to satisfy the specific task performance criteria, by adjusting the Patlak correlation-coefficient (WR) reference value. The multi-bed dynamic acquisition protocol, as optimized in the preceding companion study, was employed along with extensive Monte Carlo simulations and an initial clinical (18)F-deoxyglucose patient dataset to validate and demonstrate the potential of the proposed statistical estimation methods. Both simulated and clinical results suggest that hybrid regression in the context of whole-body Patlak Ki imaging considerably reduces MSE without compromising high CNR. Alternatively, for a given CNR, hybrid regression enables larger reductions than OLS in the number of dynamic frames per bed, allowing for even shorter acquisitions of ~30 min, thus further contributing to the clinical adoption of the proposed framework. Compared to the SUV approach, whole-body parametric imaging can provide better tumor quantification, and can act as a complement to SUV, for the task of tumor detection.

  13. Animal reintroductions: an innovative assessment of survival

    USGS Publications Warehouse

    Muths, Erin L.; Bailey, Larissa L.; Watry, Mary Kay

    2014-01-01

    Quantitative evaluations of reintroductions are infrequent and assessments of milestones reached before a project is completed, or abandoned due to lack of funding, are rare. However, such assessments, which are promoted in adaptive management frameworks, are critical. Quantification can provide defensible estimates of biological success, such as the number of survivors from a released cohort, with associated cost per animal. It is unlikely that the global issues of endangered wildlife and population declines will abate, therefore, assurance colonies and reintroductions are likely to become more common. If such endeavors are to be successful biologically or achieve adequate funding, implementation must be more rigorous and accountable. We use a novel application of a multistate, robust design capture-recapture model to estimate survival of reintroduced tadpoles through metamorphosis (i.e., the number of individuals emerging from the pond) and thereby provide a quantitative measure of effort and success for an "in progress" reintroduction of toads. Our data also suggest that tadpoles released at later developmental stages have an increased probability of survival and that eggs laid in the wild hatched at higher rates than eggs laid by captive toads. We illustrate how an interim assessment can identify problems, highlight successes, and provide information for use in adjusting the effort or implementing a Decision-Theoretic adaptive management strategy.

  14. Repeat 24-hour recalls and locally developed food composition databases: a feasible method to estimate dietary adequacy in a multi-site preconception maternal nutrition RCT.

    PubMed

    Lander, Rebecca L; Hambidge, K Michael; Krebs, Nancy F; Westcott, Jamie E; Garces, Ana; Figueroa, Lester; Tejeda, Gabriela; Lokangaka, Adrien; Diba, Tshilenge S; Somannavar, Manjunath S; Honnayya, Ranjitha; Ali, Sumera A; Khan, Umber S; McClure, Elizabeth M; Thorsten, Vanessa R; Stolka, Kristen B

    2017-01-01

    Background: Our aim was to utilize a feasible quantitative methodology to estimate the dietary adequacy of >900 first-trimester pregnant women in poor rural areas of the Democratic Republic of the Congo, Guatemala, India and Pakistan. This paper outlines the dietary methods used. Methods: Local nutritionists were trained at the sites by the lead study nutritionist and received ongoing mentoring throughout the study. Training topics focused on the standardized conduct of repeat multiple-pass 24-hr dietary recalls, including interview techniques, estimation of portion sizes, and construction of a unique site-specific food composition database (FCDB). Each FCDB was based on 13 food groups and included values for moisture, energy, 20 nutrients (i.e. macro- and micronutrients), and phytate (an anti-nutrient). Nutrient values for individual foods or beverages were taken from recently developed FAO-supported regional food composition tables or the USDA national nutrient database. Appropriate adjustments for differences in moisture and application of nutrient retention and yield factors after cooking were applied, as needed. Generic recipes for mixed dishes consumed by the study population were compiled at each site, followed by calculation of a median recipe per 100 g. Each recipe's nutrient values were included in the FCDB. Final site FCDB checks were planned according to FAO/INFOODS guidelines. Discussion: This dietary strategy provides the opportunity to assess estimated mean group usual energy and nutrient intakes and estimated prevalence of the population 'at risk' of inadequate intakes in first-trimester pregnant women living in four low- and middle-income countries. While challenges and limitations exist, this methodology demonstrates the practical application of a quantitative dietary strategy for a large international multi-site nutrition trial, providing within- and between-site comparisons. Moreover, it provides an excellent opportunity for local capacity building and each site FCDB can be easily modified for additional research activities conducted in other populations living in the same area.

  15. Smile line assessment comparing quantitative measurement and visual estimation.

    PubMed

    Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie

    2011-02-01

    Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
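
    The reliability statistic used above, Cohen's kappa, can be computed directly from two raters' categorical judgments. The example ratings below are invented and use a 3-grade scale only to illustrate the computation.

```python
# Hedged sketch: Cohen's kappa for agreement between two raters assigning
# teeth to a 3-grade smile-line category (low / average / high). Toy ratings.
import numpy as np

rater1 = np.array(["low", "avg", "high", "avg", "avg", "high", "low", "avg", "high", "avg"])
rater2 = np.array(["low", "avg", "high", "high", "avg", "high", "low", "low", "high", "avg"])

categories = ["low", "avg", "high"]

# Observed agreement.
p_o = np.mean(rater1 == rater2)

# Expected agreement under independent marginal rating distributions.
p_e = sum(np.mean(rater1 == c) * np.mean(rater2 == c) for c in categories)

kappa = (p_o - p_e) / (1.0 - p_e)
print(f"observed = {p_o:.2f}, expected = {p_e:.2f}, kappa = {kappa:.2f}")
```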

  16. Intraoperative perception and estimates on extent of resection during awake glioma surgery: overcoming the learning curve.

    PubMed

    Lau, Darryl; Hervey-Jumper, Shawn L; Han, Seunggu J; Berger, Mitchel S

    2018-05-01

    OBJECTIVE There is ample evidence that extent of resection (EOR) is associated with improved outcomes for glioma surgery. However, it is often difficult to accurately estimate EOR intraoperatively, and surgeon accuracy has yet to be reviewed. In this study, the authors quantitatively assessed the accuracy of intraoperative perception of EOR during awake craniotomy for tumor resection. METHODS A single-surgeon experience of performing awake craniotomies for tumor resection over a 17-year period was examined. Retrospective review of operative reports for quantitative estimation of EOR was recorded. Definitive EOR was based on postoperative MRI. Analysis of accuracy of EOR estimation was examined both as a general outcome (gross-total resection [GTR] or subtotal resection [STR]), and quantitatively (5% within EOR on postoperative MRI). Patient demographics, tumor characteristics, and surgeon experience were examined. The effects of accuracy on motor and language outcomes were assessed. RESULTS A total of 451 patients were included in the study. Overall accuracy of intraoperative perception of whether GTR or STR was achieved was 79.6%, and overall accuracy of quantitative perception of resection (within 5% of postoperative MRI) was 81.4%. There was a significant difference (p = 0.049) in accuracy for gross perception over the 17-year period, with improvement over the later years: 1997-2000 (72.6%), 2001-2004 (78.5%), 2005-2008 (80.7%), and 2009-2013 (84.4%). Similarly, there was a significant improvement (p = 0.015) in accuracy of quantitative perception of EOR over the 17-year period: 1997-2000 (72.2%), 2001-2004 (69.8%), 2005-2008 (84.8%), and 2009-2013 (93.4%). This improvement in accuracy is demonstrated by the significantly higher odds of correctly estimating quantitative EOR in the later years of the series on multivariate logistic regression. Insular tumors were associated with the highest accuracy of gross perception (89.3%; p = 0.034), but lowest accuracy of quantitative perception (61.1% correct; p < 0.001) compared with tumors in other locations. Even after adjusting for surgeon experience, this particular trend for insular tumors remained true. The absence of 1p19q co-deletion was associated with higher quantitative perception accuracy (96.9% vs 81.5%; p = 0.051). Tumor grade, recurrence, diagnosis, and isocitrate dehydrogenase-1 (IDH-1) status were not associated with accurate perception of EOR. Overall, new neurological deficits occurred in 8.4% of cases, and 42.1% of those new neurological deficits persisted after the 3-month follow-up. Correct quantitative perception was associated with lower postoperative motor deficits (2.4%) compared with incorrect perceptions (8.0%; p = 0.029). There were no detectable differences in language outcomes based on perception of EOR. CONCLUSIONS The findings from this study suggest that there is a learning curve associated with the ability to accurately assess intraoperative EOR during glioma surgery, and it may take more than a decade to be truly proficient. Understanding the factors associated with this ability to accurately assess EOR will provide safer surgeries while maximizing tumor resection.

  17. Bayesian assessment of overtriage and undertriage at a level I trauma centre.

    PubMed

    DiDomenico, Paul B; Pietzsch, Jan B; Paté-Cornell, M Elisabeth

    2008-07-13

    We analysed the trauma triage system at a specific level I trauma centre to assess rates of over- and undertriage and to support recommendations for system improvements. The triage process is designed to estimate the severity of patient injury and allocate resources accordingly, with potential errors of overestimation (overtriage) consuming excess resources and underestimation (undertriage) potentially leading to medical errors. We first modelled the overall trauma system using risk analysis methods to understand interdependencies among the actions of the participants. We interviewed six experienced trauma surgeons to obtain their expert opinion of the over- and undertriage rates occurring in the trauma centre. We then assessed actual over- and undertriage rates in a random sample of 86 trauma cases collected over a six-week period at the same centre. We employed Bayesian analysis to quantitatively combine the data with the prior probabilities derived from expert opinion in order to obtain posterior distributions. The results were estimates of overtriage and undertriage in 16.1% and 4.9% of patients, respectively. This Bayesian approach, which provides a quantitative assessment of the error rates using both case data and expert opinion, provides a rational means of obtaining a best estimate of the system's performance. The overall approach that we describe in this paper can be employed more widely to analyse complex health care delivery systems, with the objective of reducing errors, patient risk and excess costs.
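
    In the same spirit as the analysis described above, a conjugate Beta-Binomial sketch shows how an expert-elicited prior can be combined with audited case counts to yield a posterior overtriage rate. The prior parameters and the count of overtriaged cases are invented; only the sample size of 86 is taken from the abstract.

```python
# Hedged sketch: Beta-Binomial updating of an overtriage rate from an
# expert-elicited prior and observed case counts. Illustrative numbers only.
from scipy import stats

# Expert opinion encoded as a Beta prior (roughly 20% overtriage, modest weight).
alpha_prior, beta_prior = 4.0, 16.0

# Audit sample: overtriaged cases out of reviewed trauma cases (count invented).
overtriaged, n_cases = 14, 86

alpha_post = alpha_prior + overtriaged
beta_post = beta_prior + (n_cases - overtriaged)
posterior = stats.beta(alpha_post, beta_post)

mean = posterior.mean()
lo, hi = posterior.ppf([0.025, 0.975])
print(f"posterior overtriage rate: {mean:.3f} (95% credible interval {lo:.3f}-{hi:.3f})")
```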

  18. A method to validate quantitative high-frequency power doppler ultrasound with fluorescence in vivo video microscopy.

    PubMed

    Pinter, Stephen Z; Kim, Dae-Ro; Hague, M Nicole; Chambers, Ann F; MacDonald, Ian C; Lacefield, James C

    2014-08-01

    Flow quantification with high-frequency (>20 MHz) power Doppler ultrasound can be performed objectively using the wall-filter selection curve (WFSC) method to select the cutoff velocity that yields a best-estimate color pixel density (CPD). An in vivo video microscopy system (IVVM) is combined with high-frequency power Doppler ultrasound to provide a method for validation of CPD measurements based on WFSCs in mouse testicular vessels. The ultrasound and IVVM systems are instrumented so that the mouse remains on the same imaging platform when switching between the two modalities. In vivo video microscopy provides gold-standard measurements of vascular diameter to validate power Doppler CPD estimates. Measurements in four image planes from three mice exhibit wide variation in the optimal cutoff velocity and indicate that a predetermined cutoff velocity setting can introduce significant errors in studies intended to quantify vascularity. Consistent with previously published flow-phantom data, in vivo WFSCs exhibited three characteristic regions and detectable plateaus. Selection of a cutoff velocity at the right end of the plateau yielded a CPD close to the gold-standard vascular volume fraction estimated using IVVM. An investigator can implement the WFSC method to help adapt cutoff velocity to current blood flow conditions and thereby improve the accuracy of power Doppler for quantitative microvascular imaging. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  19. Psychosocial Work Stressors, Work Fatigue, and Musculoskeletal Disorders: Comparison between Emergency and Critical Care Nurses in Brunei Public Hospitals.

    PubMed

    Abdul Rahman, Hanif; Abdul-Mumin, Khadizah; Naing, Lin

    2017-03-01

    Little evidence exists on the exposure of nurses in the South-East Asian region to psychosocial work stressors, work-related fatigue, and musculoskeletal disorders, and research on this subject is almost nonexistent in Brunei. The main aim of our study was to provide a comprehensive exploration of these variables and to estimate exposure among emergency (ER) and critical care (CC) nurses in Brunei. The study also aimed to compare whether the experiences of ER nurses differ from those of CC nurses. This cross-sectional study was implemented in the ER and CC departments across Brunei public hospitals from February to April 2016 using the Copenhagen Psychosocial Questionnaire II, the Occupational Fatigue Exhaustion Recovery scale, and the Cornell Musculoskeletal Discomfort Questionnaire. In total, 201 ER and CC nurses (82.0% response rate) participated in the study. Quantitative demands on CC nurses were significantly higher than on ER nurses. Even so, ER nurses were 4.0 times more likely [95% confidence interval: (2.21, 7.35)] to experience threats of violence, and 2.8 times more likely [95% confidence interval: (1.50, 5.29)] to experience chronic fatigue. The results revealed that nurses experienced high quantitative demands, work pace, stress, and burnout. A high prevalence of chronic and persistent fatigue, threats of violence and bullying, and musculoskeletal pain at the neck, shoulder, upper and lower back, and foot was also reported. This study provides good estimates of the exposure rate of psychosocial work stressors, work-related fatigue, and musculoskeletal disorders among nurses in Brunei. It provides important initial insight for nursing management and policymakers to make informed decisions on current and future planning to give nurses a conducive work environment. Copyright © 2017. Published by Elsevier B.V.

  20. Quantitative image fusion in infrared radiometry

    NASA Astrophysics Data System (ADS)

    Romm, Iliya; Cukurel, Beni

    2018-05-01

    Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.

  1. A quantitative approach to assessing the efficacy of occupant protection programs: A case study from Montana.

    PubMed

    Manlove, Kezia; Stanley, Laura; Peck, Alyssa

    2015-10-01

    Quantitative evaluation of vehicle occupant protection programs is critical for ensuring efficient government resource allocation, but few methods exist for conducting evaluation across multiple programs simultaneously. Here we present an analysis of occupant protection program efficacy in the state of Montana. This approach relies on seat belt compliance rates as measured by the National Occupant Protection Usage Survey (NOPUS). A hierarchical logistic regression model is used to estimate the impacts of four Montana Department of Transportation (MDT)-funded occupant protection programs used in the state of Montana, following adjustment for a suite of potential confounders. Activity from two programs, Buckle Up coalitions and media campaigns, is associated with increased seat belt use in Montana, whereas the impact of another program, Selective Traffic Enforcement, is potentially masked by other program activity. A final program, Driver's Education, is not associated with any shift in seat belt use. This method allows for a preliminary quantitative estimation of program impacts without requiring states to obtain any new seat belt use data, and provides states with a means for carefully planning future program allocation and investigation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. 75 FR 35990 - Endangered and Threatened Wildlife and Plants; Listing the Flying Earwig Hawaiian Damselfly and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-24

    ... this location in 2008. No quantitative estimate of the size of this remaining population is available... observed in 1998. No quantitative estimates of the size of the extant populations are available. Howarth...

  3. Whole-body PET parametric imaging employing direct 4D nested reconstruction and a generalized non-linear Patlak model

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Rahmim, Arman

    2014-03-01

    Graphical analysis is employed in the research setting to provide quantitative estimation of PET tracer kinetics from dynamic images acquired at a single bed position. Recently, we proposed a multi-bed dynamic acquisition framework enabling clinically feasible whole-body parametric PET imaging by employing post-reconstruction parameter estimation. In addition, by incorporating linear Patlak modeling within the system matrix, we enabled direct 4D reconstruction in order to effectively circumvent noise amplification in dynamic whole-body imaging. However, direct 4D Patlak reconstruction exhibits relatively slow convergence due to the presence of non-sparse spatial correlations in temporal kinetic analysis. In addition, the standard Patlak model does not account for reversible uptake, thus underestimating the influx rate Ki. We have developed a novel whole-body PET parametric reconstruction framework in the STIR platform, a widely employed open-source reconstruction toolkit, a) enabling accelerated convergence of direct 4D multi-bed reconstruction, by employing a nested algorithm to decouple the temporal parameter estimation from the spatial image update process, and b) enhancing the quantitative performance, particularly in regions with reversible uptake, by pursuing a non-linear generalized Patlak 4D nested reconstruction algorithm. A set of published kinetic parameters and the XCAT phantom were employed for the simulation of dynamic multi-bed acquisitions. Quantitative analysis of the Ki images demonstrated considerable acceleration in the convergence of the nested 4D whole-body Patlak algorithm. In addition, our simulated and patient whole-body data in the post-reconstruction domain indicated the quantitative benefits of our extended generalized Patlak 4D nested reconstruction for tumor diagnosis and treatment response monitoring.
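
    For reference, the standard linear Patlak model and a generalized form accounting for reversible uptake can be written as below, with C_T the tissue activity, C_p the plasma input function, K_i the net influx rate, V a blood/distribution volume term, and k_loss the efflux constant; the notation is assumed here rather than taken from the paper.

    ```latex
    % Standard (linear) Patlak model
    \[ \frac{C_T(t)}{C_p(t)} = K_i \, \frac{\int_0^t C_p(\tau)\,d\tau}{C_p(t)} + V \]
    % Generalized (non-linear) Patlak model with reversible uptake
    \[ C_T(t) = K_i \int_0^t C_p(\tau)\, e^{-k_{\mathrm{loss}} (t-\tau)}\,d\tau + V\, C_p(t) \]
    ```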

  4. Quantitative morphology of the vascularisation of organs: A stereological approach illustrated using the cardiac circulation.

    PubMed

    Mühlfeld, Christian

    2014-01-01

    The vasculature of the heart is able to adapt to various physiological and pathological stimuli and its failure to do so is well-reflected by the great impact of ischaemic heart disease on personal morbidity and mortality and on the health care systems of industrial countries. Studies on physiological or genetic interventions as well as therapeutic angiogenesis rely on quantitative data to characterize the effects in a statistically robust way. The gold standard for obtaining quantitative morphological data is design-based stereology which allows the estimation of volume, surface area, length and number of blood vessels as well as their thickness, diameter or wall composition. Unfortunately, the use of stereological methods for this purpose is still rare. One of the reasons for this is the fact that the transfer of the theoretical foundations into laboratory practice requires a remarkable amount of considerations before touching the first piece of tissue. These considerations, however, are often based on already acquired experience and are usually not dealt with in stereological review articles. The present article therefore delineates the procedures for estimating the most important characteristics of the cardiac vasculature and highlights potential problems and their practical solutions. Worked examples are used to illustrate the methods and provide examples of the calculations. Hopefully, the considerations and examples contained herein will provide researchers in this field with the necessary equipment to add stereological methods to their study designs. Copyright © 2012 Elsevier GmbH. All rights reserved.

  5. Analysis and Management of Animal Populations: Modeling, Estimation and Decision Making

    USGS Publications Warehouse

    Williams, B.K.; Nichols, J.D.; Conroy, M.J.

    2002-01-01

    This book deals with the processes involved in making informed decisions about the management of animal populations. It covers the modeling of population responses to management actions, the estimation of quantities needed in the modeling effort, and the application of these estimates and models to the development of sound management decisions. The book synthesizes and integrates in a single volume the methods associated with these themes, as they apply to ecological assessment and conservation of animal populations. KEY FEATURES: integrates population modeling, parameter estimation and decision-theoretic approaches to management in a single, cohesive framework; provides authoritative, state-of-the-art descriptions of quantitative approaches to modeling, estimation and decision-making; emphasizes the role of mathematical modeling in the conduct of science and management; utilizes a unifying biological context, consistent mathematical notation, and numerous biological examples.

  6. Weak Value Amplification is Suboptimal for Estimation and Detection

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-01-01

    We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.

  7. An information measure for class discrimination. [in remote sensing of crop observation

    NASA Technical Reports Server (NTRS)

    Shen, S. S.; Badhwar, G. D.

    1986-01-01

    This article describes a separability measure for class discrimination. This measure is based on the Fisher information measure for estimating the mixing proportion of two classes. The Fisher information measure not only provides a means to assess quantitatively the information content in the features for separating classes, but also gives the lower bound for the variance of any unbiased estimate of the mixing proportion based on observations of the features. Unlike most commonly used separability measures, this measure is not dependent on the form of the probability distribution of the features and does not imply a specific estimation procedure. This is important because the probability distribution function that describes the data for a given class does not have simple analytic forms, such as a Gaussian. Results of applying this measure to compare the information content provided by three Landsat-derived feature vectors for the purpose of separating small grains from other crops are presented.
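
    For a two-class mixture with component densities f_1 and f_2 and mixing proportion π, the Fisher information for π and the resulting lower bound on the variance of any unbiased estimate take the standard form below; this is a textbook restatement consistent with the description above, not an equation quoted from the article.

    ```latex
    \[ p(x \mid \pi) = \pi f_1(x) + (1-\pi) f_2(x), \qquad
       I(\pi) = \int \frac{\left[f_1(x) - f_2(x)\right]^2}{p(x \mid \pi)}\,dx, \qquad
       \operatorname{Var}(\hat{\pi}) \ge \frac{1}{n\, I(\pi)} \]
    ```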

  8. Oceanic Fluxes of Mass, Heat and Freshwater: A Global Estimate and Perspective

    NASA Technical Reports Server (NTRS)

    MacDonald, Alison Marguerite

    1995-01-01

    Data from fifteen globally distributed, modern, high resolution, hydrographic oceanic transects are combined in an inverse calculation using large scale box models. The models provide estimates of the global meridional heat and freshwater budgets and are used to examine the sensitivity of the global circulation, both inter and intra-basin exchange rates, to a variety of external constraints provided by estimates of Ekman, boundary current and throughflow transports. A solution is found which is consistent with both the model physics and the global data set, despite a twenty five year time span and a lack of seasonal consistency among the data. The overall pattern of the global circulation suggested by the models is similar to that proposed in previously published local studies and regional reviews. However, significant qualitative and quantitative differences exist. These differences are due both to the model definition and to the global nature of the data set.

  9. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.

    2017-01-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119

  10. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    PubMed

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
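
    For context, the conventional MR fingerprinting reconstruction that the authors analyze matches each voxel's measured signal evolution to a Bloch-simulated dictionary by maximum correlation. A minimal sketch of that matching step, on synthetic data and with placeholder parameter values, is shown below; it illustrates the conventional baseline, not the proposed maximum likelihood algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic dictionary: n_atoms fingerprints of length n_frames, each atom
    # associated with a (T1, T2) pair (placeholder values, not Bloch-simulated).
    n_frames, n_atoms = 500, 1000
    dictionary = rng.standard_normal((n_atoms, n_frames))
    t1_t2 = rng.uniform([300.0, 20.0], [2000.0, 300.0], size=(n_atoms, 2))

    # Normalize atoms so that matching reduces to maximizing inner products.
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

    # A noisy "measured" voxel fingerprint generated from a known atom.
    true_idx = 123
    voxel = dictionary[true_idx] + 0.1 * rng.standard_normal(n_frames)

    # Dictionary matching: pick the atom with the largest absolute correlation.
    best = int(np.argmax(np.abs(dictionary @ voxel)))
    print("matched atom:", best, "estimated (T1, T2):", t1_t2[best])
    ```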

  11. Statistical, economic and other tools for assessing natural aggregate

    USGS Publications Warehouse

    Bliss, J.D.; Moyle, P.R.; Bolm, K.S.

    2003-01-01

    Quantitative aggregate resource assessment provides resource estimates useful for explorationists, land managers and those who make decisions about land allocation, which may have long-term implications concerning cost and the availability of aggregate resources. Aggregate assessment needs to be systematic and consistent, yet flexible enough to allow updating without invalidating other parts of the assessment. Evaluators need to use standard or consistent aggregate classification and statistical distributions or, in other words, models with geological, geotechnical and economic variables or interrelationships between these variables. These models can be used with subjective estimates, if needed, to estimate how much aggregate may be present in a region or country using distributions generated by Monte Carlo computer simulations.

  12. Security Events and Vulnerability Data for Cybersecurity Risk Estimation.

    PubMed

    Allodi, Luca; Massacci, Fabio

    2017-08-01

    Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in Finance). This article presents a model and methodology to leverage the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools that make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system, and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology by using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach. © 2017 Society for Risk Analysis.

  13. Confidence estimation for quantitative photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena

    2018-02-01

    Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.

  14. Differential reliability : probabilistic engineering applied to wood members in bending-tension

    Treesearch

    Stanley K. Suddarth; Frank E. Woeste; William L. Galligan

    1978-01-01

    Reliability analysis is a mathematical technique for appraising the design and materials of engineered structures to provide a quantitative estimate of probability of failure. Two or more cases which are similar in all respects but one may be analyzed by this method; the contrast between the probabilities of failure for these cases allows strong analytical focus on the...

  15. Historical fire and vegetation dynamics in dry forests of the interior Pacific Northwest, USA, and relationships to northern spotted owl (Strix occidentalis caurina) habitat conservation

    Treesearch

    Rebecca S.H. Kennedy; Michael C. Wimberly

    2009-01-01

    Regional conservation planning frequently relies on general assumptions about historical disturbance regimes to inform decisions about landscape restoration, reserve allocations, and landscape management. Spatially explicit simulations of landscape dynamics provide quantitative estimates of landscape structure and allow for the testing of alternative scenarios. We used...

  16. The Economic Impact of Jefferson College on the Community and State--FY1997.

    ERIC Educational Resources Information Center

    Jefferson Coll., Hillsboro, MO.

    This document provides an estimation of the ways in which Jefferson College (Missouri) impacts the economy of Jefferson County and the state as a whole. It offers quantitative information and acts as a reference for the Board of Trustees, administrators, faculty, and staff regarding the economic significance of the college to the area it serves.…

  17. Current capabilities and limitations of the stable isotope technologies and applied mathematical equations in determining whole body vitamin A status

    USDA-ARS?s Scientific Manuscript database

    Vitamin A (VA) stable isotope dilution methodology provides a quantitative estimate of total body VA stores and is the best method currently available for assessing VA status in adults and children. The methodology has also been used to test the efficacy of VA interventions in a number of low-incom...

  18. Estimating the Effect of Leaders on Public Sector Productivity: The Case of School Principals. NBER Working Paper No. 17803

    ERIC Educational Resources Information Center

    Branch, Gregory F.; Hanushek, Eric A.; Rivkin, Steven G.

    2012-01-01

    Although much has been written about the importance of leadership in the determination of organizational success, there is little quantitative evidence due to the difficulty of separating the impact of leaders from other organizational components--particularly in the public sector. Schools provide an especially rich environment for studying the…

  19. The application of remote sensing to the development and formulation of hydrologic planning models: Executive summary

    NASA Technical Reports Server (NTRS)

    Castruccio, P. A.; Loats, H. L., Jr.; Fowler, T. R.

    1977-01-01

    Methods for the reduction of remotely sensed data and its application in hydrologic land use assessment, surface water inventory, and soil property studies are presented. LANDSAT data is used to provide quantitative parameters and coefficients to construct watershed transfer functions for a hydrologic planning model aimed at estimating peak outflow from rainfall inputs.

  20. Spontaneous polyploidization in cucumber.

    PubMed

    Ramírez-Madera, Axel O; Miller, Nathan D; Spalding, Edgar P; Weng, Yiqun; Havey, Michael J

    2017-07-01

    This is the first quantitative estimation of spontaneous polyploidy in cucumber and we detected 2.2% polyploids in a greenhouse study. We provide evidence that polyploidization is consistent with endoreduplication and is an on-going process during plant growth. Cucumber occasionally produces polyploid plants, which are problematic for growers because these plants produce misshaped fruits with non-viable seeds. In this study, we undertook the first quantitative study to estimate the relative frequency of spontaneous polyploids in cucumber. Seeds of recombinant inbred lines were produced in different environments, plants were grown in the field and greenhouse, and flow cytometry was used to establish ploidies. From 1422 greenhouse-grown plants, the overall relative frequency of spontaneous polyploidy was 2.2%. Plants possessed nuclei of different ploidies in the same leaves (mosaic) and on different parts of the same plant (chimeric). Our results provide evidence of endoreduplication and polysomaty in cucumber, and that it is an on-going and dynamic process. There was a significant effect (p = 0.018) of seed production environment on the occurrence of polyploid plants. Seed and seedling traits were not accurate predictors of eventual polyploids, and we recommend that cucumber producers rogue plants based on stature and leaf serration to remove potential polyploids.

  1. Little cigars, big cigars: omissions and commissions of harm and harm reduction information on the Internet.

    PubMed

    Dollar, Katherine M; Mix, Jacqueline M; Kozlowski, Lynn T

    2008-05-01

    We conducted a comparative analysis of "harm," "harm reduction," and "little cigar" information about cigars on 10 major English-language health Web sites. The sites were from governmental and nongovernmental organizations based in seven different countries and included "harm" and "harm reduction" information, discussions of little cigars, quantitative estimates of health risks, and qualifying behavioral characteristics (inhalation, number per day). Of the 10 Web sites, 7 offered statements explicitly indicating that cigars may be safer than cigarettes. None of the Web sites reviewed described that little cigars are likely as dangerous as cigarettes. Some Web sites provided quantitative estimates of health risks and extensive discussions of qualifying factors. Reading grade levels were higher than desirable. Extensive and complex information on the reduced risks of cigars compared with cigarettes is available on Web sites affiliated with prominent health organizations. Yet these sites fail to warn consumers that popular cigarette-like little cigars and cigarillos are likely to be just as dangerous as cigarettes, even for those who have never smoked cigarettes. Improvement of these Web sites is urgently needed to provide the public with high-quality health information.

  2. Connecting the Kinetics and Energy Landscape of tRNA Translocation on the Ribosome

    PubMed Central

    Whitford, Paul C.; Blanchard, Scott C.; Cate, Jamie H. D.; Sanbonmatsu, Karissa Y.

    2013-01-01

    Functional rearrangements in biomolecular assemblies result from diffusion across an underlying energy landscape. While bulk kinetic measurements rely on discrete state-like approximations to the energy landscape, single-molecule methods can project the free energy onto specific coordinates. With measures of the diffusion, one may establish a quantitative bridge between state-like kinetic measurements and the continuous energy landscape. We used an all-atom molecular dynamics simulation of the 70S ribosome (2.1 million atoms; 1.3 microseconds) to provide this bridge for specific conformational events associated with the process of tRNA translocation. Starting from a pre-translocation configuration, we identified sets of residues that collectively undergo rotary rearrangements implicated in ribosome function. Estimates of the diffusion coefficients along these collective coordinates for translocation were then used to interconvert between experimental rates and measures of the energy landscape. This analysis, in conjunction with previously reported experimental rates of translocation, provides an upper-bound estimate of the free-energy barriers associated with translocation. While this analysis was performed for a particular kinetic scheme of translocation, the quantitative framework is general and may be applied to energetic and kinetic descriptions that include any number of intermediates and transition states. PMID:23555233

  3. Connecting the kinetics and energy landscape of tRNA translocation on the ribosome.

    PubMed

    Whitford, Paul C; Blanchard, Scott C; Cate, Jamie H D; Sanbonmatsu, Karissa Y

    2013-01-01

    Functional rearrangements in biomolecular assemblies result from diffusion across an underlying energy landscape. While bulk kinetic measurements rely on discrete state-like approximations to the energy landscape, single-molecule methods can project the free energy onto specific coordinates. With measures of the diffusion, one may establish a quantitative bridge between state-like kinetic measurements and the continuous energy landscape. We used an all-atom molecular dynamics simulation of the 70S ribosome (2.1 million atoms; 1.3 microseconds) to provide this bridge for specific conformational events associated with the process of tRNA translocation. Starting from a pre-translocation configuration, we identified sets of residues that collectively undergo rotary rearrangements implicated in ribosome function. Estimates of the diffusion coefficients along these collective coordinates for translocation were then used to interconvert between experimental rates and measures of the energy landscape. This analysis, in conjunction with previously reported experimental rates of translocation, provides an upper-bound estimate of the free-energy barriers associated with translocation. While this analysis was performed for a particular kinetic scheme of translocation, the quantitative framework is general and may be applied to energetic and kinetic descriptions that include any number of intermediates and transition states.

  4. Comparison of task-based exposure metrics for an epidemiologic study of isocyanate inhalation exposures among autobody shop workers.

    PubMed

    Woskie, Susan R; Bello, Dhimiter; Gore, Rebecca J; Stowe, Meredith H; Eisen, Ellen A; Liu, Youcheng; Sparer, Judy A; Redlich, Carrie A; Cullen, Mark R

    2008-09-01

    Because many occupational epidemiologic studies use exposure surrogates rather than quantitative exposure metrics, the UMass Lowell and Yale study of autobody shop workers provided an opportunity to evaluate the relative utility of surrogates and quantitative exposure metrics in an exposure-response analysis of cross-week change in respiratory function. A task-based exposure assessment was used to develop several metrics of inhalation exposure to isocyanates. The metrics included three surrogates (job title, counts of spray painting events during the day, and counts of spray and bystander exposure events) and a quantitative exposure metric that incorporated exposure determinant models based on task sampling and a personal workplace protection factor for respirator use, combined with a daily task checklist. The result of the quantitative exposure algorithm was an estimate of the daily time-weighted average respirator-corrected total NCO exposure (microg/m(3)). In general, these four metrics were found to be variable in agreement using measures such as weighted kappa and Spearman correlation. A logistic model for a 10% drop in FEV(1) from Monday morning to Thursday morning was used to evaluate the utility of each exposure metric. The quantitative exposure metric was the most favorable, producing the best model fit, as well as the greatest strength and magnitude of association. This finding supports the reports of others that reducing exposure misclassification can improve risk estimates that otherwise would be biased toward the null. Although detailed and quantitative exposure assessment can be more time consuming and costly, it can improve exposure-disease evaluations and is more useful for risk assessment purposes. The task-based exposure modeling method successfully produced estimates of daily time-weighted average exposures in the complex and changing autobody shop work environment. The ambient TWA exposures of all of the office workers and technicians and 57% of the painters were found to be below the current U.K. Health and Safety Executive occupational exposure limit (OEL) for total NCO of 20 microg/m(3). When respirator use was incorporated, all personal daily exposures were below the U.K. OEL.
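
    A minimal sketch of the kind of respirator-corrected time-weighted-average calculation described above, driven by a daily task checklist; the task list, concentrations, and workplace protection factor are hypothetical placeholders, not the study's determinant models.

    ```python
    # Hypothetical daily task checklist: (task, minutes, modeled NCO
    # concentration in ug/m3, respirator worn during the task?).
    tasks = [
        ("spray painting", 45, 180.0, True),
        ("bystander near spray", 30, 40.0, False),
        ("sanding and prep", 120, 5.0, False),
        ("office and other", 285, 0.5, False),
    ]

    PROTECTION_FACTOR = 10.0   # assumed workplace protection factor
    SHIFT_MINUTES = 480.0      # 8-hour shift

    def corrected(concentration, respirator_worn):
        """Apply the protection factor only when a respirator is worn."""
        return concentration / PROTECTION_FACTOR if respirator_worn else concentration

    # Respirator-corrected daily time-weighted average exposure (ug/m3).
    twa = sum(minutes * corrected(conc, resp)
              for _, minutes, conc, resp in tasks) / SHIFT_MINUTES
    print(f"respirator-corrected TWA: {twa:.1f} ug/m3")
    ```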

  5. Incremental Value of Three-Dimensional Transesophageal Echocardiography over the Two-Dimensional Technique in the Assessment of a Thrombus in Transit through a Patent Foramen Ovale.

    PubMed

    Thind, Munveer; Ahmed, Mustafa I; Gok, Gulay; Joson, Marisa; Elsayed, Mahmoud; Tuck, Benjamin C; Townsley, Matthew M; Klas, Berthold; McGiffin, David C; Nanda, Navin C

    2015-05-01

    We report a case of a right atrial thrombus traversing a patent foramen ovale into the left atrium, where three-dimensional transesophageal echocardiography provided considerable incremental value over two-dimensional transesophageal echocardiography in its assessment. As well as allowing us to better spatially characterize the thrombus, three-dimensional transesophageal echocardiography provided a more quantitative assessment through estimation of total thrombus burden. © 2015, Wiley Periodicals, Inc.

  6. Computational design of a Diels-Alderase from a thermophilic esterase: the importance of dynamics

    NASA Astrophysics Data System (ADS)

    Linder, Mats; Johansson, Adam Johannes; Olsson, Tjelvar S. G.; Liebeschuetz, John; Brinck, Tore

    2012-09-01

    A novel computational Diels-Alderase design, based on a relatively rare form of carboxylesterase from Geobacillus stearothermophilus, is presented and theoretically evaluated. The structure was found by mining the PDB for a suitable oxyanion hole-containing structure, followed by a combinatorial approach to find suitable substrates and rational mutations. Four lead designs were selected and thoroughly modeled to obtain realistic estimates of substrate binding and prearrangement. Molecular dynamics simulations and DFT calculations were used to optimize and estimate binding affinity and activation energies. A large quantum chemical model was used to capture the salient interactions in the crucial transition state (TS). Our quantitative estimation of kinetic parameters was validated against four experimentally characterized Diels-Alderases with good results. The final designs in this work are predicted to have rate enhancements of ≈10^3-10^6 and high predicted proficiencies. This work emphasizes the importance of considering protein dynamics in the design approach, and provides a quantitative estimate of how the TS stabilization observed in most de novo and redesigned enzymes is decreased compared with a minimal, 'ideal' model. The presented design is highly interesting for further optimization and applications since it is based on a thermophilic enzyme (Topt = 70 °C).

  7. Skeletal Correlates for Body Mass Estimation in Modern and Fossil Flying Birds

    PubMed Central

    Field, Daniel J.; Lynner, Colton; Brown, Christian; Darroch, Simon A. F.

    2013-01-01

    Scaling relationships between skeletal dimensions and body mass in extant birds are often used to estimate body mass in fossil crown-group birds, as well as in stem-group avialans. However, useful statistical measurements for constraining the precision and accuracy of fossil mass estimates are rarely provided, which prevents the quantification of robust upper and lower bound body mass estimates for fossils. Here, we generate thirteen body mass correlations and associated measures of statistical robustness using a sample of 863 extant flying birds. By providing robust body mass regressions with upper- and lower-bound prediction intervals for individual skeletal elements, we address the longstanding problem of body mass estimation for highly fragmentary fossil birds. We demonstrate that the most precise proxy for estimating body mass in the overall dataset, measured both as the coefficient of determination of ordinary least squares regression and as percent prediction error, is the maximum diameter of the coracoid's humeral articulation facet (the glenoid). We further demonstrate that this result is consistent among the majority of investigated avian orders (10 out of 18). As a result, we suggest that, in the majority of cases, this proxy may provide the most accurate estimates of body mass for volant fossil birds. Additionally, by presenting statistical measurements of body mass prediction error for thirteen different body mass regressions, this study provides a much-needed quantitative framework for the accurate estimation of body mass and associated ecological correlates in fossil birds. The application of these regressions will enhance the precision and robustness of many mass-based inferences in future paleornithological studies. PMID:24312392
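
    A minimal sketch of a log-log ordinary least squares body-mass regression with a prediction interval, in the spirit of the regressions described above; the data are synthetic placeholders and the fitted coefficients are not the study's values.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    # Synthetic data: glenoid diameter (mm) and body mass (g), both on log10 scales.
    log_diam = rng.uniform(np.log10(3.0), np.log10(30.0), size=200)
    log_mass = 0.5 + 2.3 * log_diam + rng.normal(0.0, 0.08, size=200)

    X = sm.add_constant(log_diam)
    fit = sm.OLS(log_mass, X).fit()

    # Predicted mass and 95% prediction interval for a hypothetical fossil
    # specimen with a 12 mm glenoid diameter.
    new_X = sm.add_constant(np.array([np.log10(12.0)]), has_constant="add")
    pred = fit.get_prediction(new_X).summary_frame(alpha=0.05)
    print(10 ** pred[["mean", "obs_ci_lower", "obs_ci_upper"]])
    ```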

  8. Quantitative PCR for HTLV-1 provirus in adult T-cell leukemia/lymphoma using paraffin tumor sections.

    PubMed

    Kato, Junki; Masaki, Ayako; Fujii, Keiichiro; Takino, Hisashi; Murase, Takayuki; Yonekura, Kentaro; Utsunomiya, Atae; Ishida, Takashi; Iida, Shinsuke; Inagaki, Hiroshi

    2016-11-01

    Detection of HTLV-1 provirus using paraffin tumor sections may assist the diagnosis of adult T-cell leukemia/lymphoma (ATLL). For the detection, non-quantitative PCR assay has been reported, but its usefulness and limitations remain unclear. To our knowledge, quantitative PCR assay using paraffin tumor sections has not been reported. Using paraffin sections from ATLLs and non-ATLL T-cell lymphomas, we first performed non-quantitative PCR for HTLV-1 provirus. Next, we determined tumor ratios and carried out quantitative PCR to obtain provirus copy numbers. The results were analyzed with a simple regression model and a novel criterion, cut-off using 95 % rejection limits. Our quantitative PCR assay showed an excellent association between tumor ratios and the copy numbers (r = 0.89, P < 0.0001). The 95 % rejection limits provided a statistical basis for the range for the determination of HTLV-1 involvement. Its application suggested that results of non-quantitative PCR assay should be interpreted very carefully and that our quantitative PCR assay is useful to estimate the status of HTLV-1 involvement in the tumor cases. In conclusion, our quantitative PCR assay using paraffin tumor sections may be useful for the screening of ATLL cases, especially in HTLV-1 non-endemic areas where easy access to serological testing for HTLV-1 infection is limited. © 2016 Japanese Society of Pathology and John Wiley & Sons Australia, Ltd.

  9. Quantitative Homogenization in Nonlinear Elasticity for Small Loads

    NASA Astrophysics Data System (ADS)

    Neukamm, Stefan; Schäffner, Mathias

    2018-04-01

    We study quantitative periodic homogenization of integral functionals in the context of nonlinear elasticity. Under suitable assumptions on the energy densities (in particular frame indifference; minimality, non-degeneracy and smoothness at the identity; p ≥ d growth from below; and regularity of the microstructure), we show that in a neighborhood of the set of rotations, the multi-cell homogenization formula of non-convex homogenization reduces to a single-cell formula. The latter can be expressed with the help of correctors. We prove that the homogenized integrand admits a quadratic Taylor expansion in an open neighborhood of the rotations - a result that can be interpreted as the fact that homogenization and linearization commute close to the rotations. Moreover, for small applied loads, we provide an estimate on the homogenization error in terms of a quantitative two-scale expansion.

  10. Comparison of Maximum Likelihood Estimation Approach and Regression Approach in Detecting Quantitative Trait Loci Using RAPD Markers

    Treesearch

    Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine

    1999-01-01

    Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...

  11. Boundary methods for mode estimation

    NASA Astrophysics Data System (ADS)

    Pierson, William E., Jr.; Ulug, Batuhan; Ahalt, Stanley C.

    1999-08-01

    This paper investigates the use of Boundary Methods (BMs), a collection of tools used for distribution analysis, as a method for estimating the number of modes associated with a given data set. Model order information of this type is required by several pattern recognition applications. The BM technique provides a novel approach to this parameter estimation problem and is comparable in terms of both accuracy and computation to other popular mode estimation techniques currently found in the literature and in automatic target recognition applications. This paper explains the methodology used in the BM approach to mode estimation. It also briefly reviews other common mode estimation techniques and describes the empirical investigation used to explore the relationship of the BM technique to other mode estimation techniques. Specifically, the accuracy and computational efficiency of the BM technique are compared quantitatively to a mixture-of-Gaussians (MOG) approach and a k-means approach to model order estimation. The stopping criterion for the MOG and k-means techniques is the Akaike Information Criterion (AIC).
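
    A minimal sketch of the mixture-of-Gaussians baseline with the AIC as the stopping criterion, as referenced above; the data are synthetic and scikit-learn's GaussianMixture is used purely for illustration.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)

    # Synthetic one-dimensional data drawn from three modes.
    data = np.concatenate([
        rng.normal(-5.0, 1.0, 300),
        rng.normal(0.0, 0.8, 300),
        rng.normal(6.0, 1.2, 300),
    ]).reshape(-1, 1)

    # Fit MOG models of increasing order and keep the one minimizing the AIC.
    aic_scores = {}
    for k in range(1, 8):
        model = GaussianMixture(n_components=k, random_state=0).fit(data)
        aic_scores[k] = model.aic(data)

    best_k = min(aic_scores, key=aic_scores.get)
    print("AIC-selected number of modes:", best_k)
    ```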

  12. Quantitative PCR estimates Angiostrongylus cantonensis (rat lungworm) infection levels in semi-slugs (Parmarion martensi)

    PubMed Central

    Jarvi, Susan I.; Farias, Margaret E.M.; Howe, Kay; Jacquier, Steven; Hollingsworth, Robert; Pitt, William

    2013-01-01

    The life cycle of the nematode Angiostrongylus cantonensis involves rats as the definitive host and slugs and snails as intermediate hosts. Humans can become infected upon ingestion of intermediate or paratenic (passive carrier) hosts containing stage L3 A. cantonensis larvae. Here, we report a quantitative PCR (qPCR) assay that provides a reliable, relative measure of parasite load in intermediate hosts. Quantification of the levels of infection of intermediate hosts is critical for determining A. cantonensis intensity on the Island of Hawaii. The identification of high intensity infection ‘hotspots’ will allow for more effective targeted rat and slug control measures. qPCR appears more efficient and sensitive than microscopy and provides a new tool for quantification of larvae from intermediate hosts, and potentially from other sources as well. PMID:22902292

  13. Estimating exposures in the asphalt industry for an international epidemiological cohort study of cancer risk.

    PubMed

    Burstyn, Igor; Boffetta, Paolo; Kauppinen, Timo; Heikkilä, Pirjo; Svane, Ole; Partanen, Timo; Stücker, Isabelle; Frentzel-Beyme, Rainer; Ahrens, Wolfgang; Merzenich, Hiltrud; Heederik, Dick; Hooiveld, Mariëtte; Langård, Sverre; Randem, Britt G; Järvholm, Bengt; Bergdahl, Ingvar; Shaham, Judith; Ribak, Joseph; Kromhout, Hans

    2003-01-01

    An exposure matrix (EM) for known and suspected carcinogens was required for a multicenter international cohort study of cancer risk and bitumen among asphalt workers. Production characteristics in companies enrolled in the study were ascertained through use of a company questionnaire (CQ). Exposures to coal tar, bitumen fume, organic vapor, polycyclic aromatic hydrocarbons, diesel fume, silica, and asbestos were assessed semi-quantitatively using information from CQs, expert judgment, and statistical models. Exposures of road paving workers to bitumen fume, organic vapor, and benzo(a)pyrene were estimated quantitatively by applying regression models, based on monitoring data, to exposure scenarios identified by the CQs. Exposure estimates were derived for 217 companies enrolled in the cohort, plus the Swedish asphalt paving industry in general. Most companies were engaged in road paving and asphalt mixing, but some also participated in general construction and roofing. Coal tar use was most common in Denmark and The Netherlands, but the practice is now obsolete. Quantitative estimates of exposure to bitumen fume, organic vapor, and benzo(a)pyrene for pavers and semi-quantitative estimates of exposure to these agents among all subjects were strongly correlated. Semi-quantitative estimates of exposure to bitumen fume and to coal tar were only moderately correlated. The EM captured a non-monotonic historical decrease in exposure to all agents assessed except silica and diesel exhaust. We produced a data-driven EM using methodology that can be adapted for other multicenter studies. Copyright 2003 Wiley-Liss, Inc.

  14. Mathematics of quantitative kinetic PCR and the application of standard curves.

    PubMed

    Rutledge, R G; Côté, C

    2003-08-15

    Fluorescent monitoring of DNA amplification is the basis of real-time PCR, from which target DNA concentration can be determined from the fractional cycle at which a threshold amount of amplicon DNA is produced. Absolute quantification can be achieved using a standard curve constructed by amplifying known amounts of target DNA. In this study, the mathematics of quantitative PCR are examined in detail, from which several fundamental aspects of the threshold method and the application of standard curves are illustrated. The construction of five replicate standard curves for two pairs of nested primers was used to examine the reproducibility and degree of quantitative variation using SYBR Green I fluorescence. Based upon this analysis, the application of a single, well-constructed standard curve could provide an estimated precision of ±6-21%, depending on the number of cycles required to reach threshold. A simplified method for absolute quantification is also proposed, in which quantitative scale is determined by DNA mass at threshold.
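
    For reference, the exponential amplification and standard-curve relations underlying the threshold method can be written as below, with N_0 the initial target amount, E the amplification efficiency, and C_t the threshold cycle; the notation is assumed here. The slope m of a plot of C_t against log10 N_0 is -1/log10(1+E), so the efficiency can be recovered from a fitted standard curve as E = 10^(-1/m) - 1.

    ```latex
    \[ N_{C_t} = N_0\,(1+E)^{C_t}
       \quad\Longrightarrow\quad
       C_t = -\,\frac{\log_{10} N_0}{\log_{10}(1+E)} + \frac{\log_{10} N_{C_t}}{\log_{10}(1+E)} \]
    ```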

  15. Low rank magnetic resonance fingerprinting.

    PubMed

    Mazor, Gal; Weizman, Lior; Tal, Assaf; Eldar, Yonina C

    2016-08-01

    Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI using randomized acquisition. Extraction of physical quantitative tissue values is performed off-line, based on acquisition with varying parameters and a dictionary generated according to the Bloch equations. MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore a high under-sampling ratio in the sampling domain (k-space) is required. This under-sampling causes spatial artifacts that hamper the ability to accurately estimate the quantitative tissue values. In this work, we introduce a new approach for quantitative MRI using MRF, called Low Rank MRF. We exploit the low rank property of the temporal domain, on top of the well-known sparsity of the MRF signal in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low rank projection using the singular value decomposition. Experiments on real MRI data demonstrate superior results compared to a conventional implementation of compressed sensing for MRF at a 15% sampling ratio.
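
    A minimal sketch of the iteration described above, a gradient step on data consistency followed by a rank-r projection via truncated SVD; the forward operator here is a simple sampling mask rather than an actual MRF acquisition model, and all dimensions are placeholder values.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy image series: n_vox x n_frames matrix that is approximately low rank.
    n_vox, n_frames, rank = 400, 200, 5
    X_true = rng.standard_normal((n_vox, rank)) @ rng.standard_normal((rank, n_frames))

    # Placeholder acquisition: random sampling mask plus noise.
    mask = rng.random((n_vox, n_frames)) < 0.3
    y = mask * (X_true + 0.05 * rng.standard_normal(X_true.shape))

    def project_rank(M, r):
        """Project M onto matrices of rank at most r via truncated SVD."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r]

    X = np.zeros_like(y)
    for _ in range(50):
        X = X - (mask * X - y)        # gradient step on ||mask*X - y||^2 / 2
        X = project_rank(X, rank)     # low rank projection in the temporal domain

    print("relative error:", np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
    ```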

  16. Low-Cost Evaluation of EO-1 Hyperion and ALI for Detection and Biophysical Characterization of Forest Logging in Amazonia (NCC5-481)

    NASA Technical Reports Server (NTRS)

    Asner, Gregory P.; Keller, Michael M.; Silva, Jose Natalino; Zweede, Johan C.; Pereira, Rodrigo, Jr.

    2002-01-01

    Major uncertainties exist regarding the rate and intensity of logging in tropical forests worldwide: these uncertainties severely limit economic, ecological, and biogeochemical analyses of these regions. Recent sawmill surveys in the Amazon region of Brazil show that the area logged is nearly equal to total area deforested annually, but conversion of survey data to forest area, forest structural damage, and biomass estimates requires multiple assumptions about logging practices. Remote sensing could provide an independent means to monitor logging activity and to estimate the biophysical consequences of this land use. Previous studies have demonstrated that the detection of logging in Amazon forests is difficult and no studies have developed either the quantitative physical basis or remote sensing approaches needed to estimate the effects of various logging regimes on forest structure. A major reason for these limitations has been a lack of sufficient, well-calibrated optical satellite data, which, in turn, has impeded the development and use of physically-based, quantitative approaches for detection and structural characterization of forest logging regimes. We propose to use data from the EO-1 Hyperion imaging spectrometer to greatly increase our ability to estimate the presence and structural attributes of selective logging in the Amazon Basin. Our approach is based on four "biogeophysical indicators" not yet derived simultaneously from any satellite sensor: 1) green canopy leaf area index; 2) degree of shadowing; 3) presence of exposed soil; and 4) non-photosynthetic vegetation material. Airborne, field and modeling studies have shown that the optical reflectance continuum (400-2500 nm) contains sufficient information to derive estimates of each of these indicators. Our ongoing studies in the eastern Amazon basin also suggest that these four indicators are sensitive to logging intensity. Satellite-based estimates of these indicators should provide a means to quantify both the presence and degree of structural disturbance caused by various logging regimes. Our quantitative assessment of Hyperion hyperspectral and ALI multi-spectral data for the detection and structural characterization of selective logging in Amazonia will benefit from data collected through an ongoing project run by the Tropical Forest Foundation, within which we have developed a study of the canopy and landscape biophysics of conventional and reduced-impact logging. We will add to our base of forest structural information in concert with an EO-1 overpass. Using a photon transport model inversion technique that accounts for non-linear mixing of the four biogeophysical indicators, we will estimate these parameters across a gradient of selective logging intensity provided by conventional and reduced-impact logging sites. We will also compare our physically-based approach to both conventional (e.g., NDVI) and novel (e.g., SWIR-channel) vegetation indices as well as to linear mixture modeling methods. We will cross-compare these approaches using Hyperion and ALI imagers to determine the strengths and limitations of these two sensors for applications of forest biophysics. This effort will yield the first physically-based, quantitative analysis of the detection and intensity of selective logging in Amazonia, comparing hyperspectral and improved multi-spectral approaches as well as inverse modeling, linear mixture modeling, and vegetation index techniques.

  17. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process.

    PubMed

    Haines, Aaron M; Zak, Matthew; Hammond, Katie; Scott, J Michael; Goble, Dale D; Rachlow, Janet L

    2013-08-13

    United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide a basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size, and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate, and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty, and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest calculating minimum detectable differences to improve confidence when delisting endangered animals, and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data.

  18. Estimation of diastolic intraventricular pressure gradients by Doppler M-mode echocardiography

    NASA Technical Reports Server (NTRS)

    Greenberg, N. L.; Vandervoort, P. M.; Firstenberg, M. S.; Garcia, M. J.; Thomas, J. D.

    2001-01-01

    Previous studies have shown that small intraventricular pressure gradients (IVPG) are important for efficient filling of the left ventricle (LV) and as a sensitive marker for ischemia. Unfortunately, there has previously been no way of measuring these noninvasively, severely limiting their research and clinical utility. Color Doppler M-mode (CMM) echocardiography provides a spatiotemporal velocity distribution along the inflow tract throughout diastole, which we hypothesized would allow direct estimation of IVPG by using the Euler equation. Digital CMM images, obtained simultaneously with intracardiac pressure waveforms in six dogs, were processed by numerical differentiation for the Euler equation, then integrated to estimate IVPG and the total (left atrial to left ventricular apex) pressure drop. CMM-derived estimates agreed well with invasive measurements (IVPG: y = 0.87x + 0.22, r = 0.96, P < 0.001, standard error of the estimate = 0.35 mmHg). Quantitative processing of CMM data allows accurate estimation of IVPG and tracking of changes induced by beta-adrenergic stimulation. This novel approach provides unique information on LV filling dynamics in an entirely noninvasive way that has previously not been available for assessment of diastolic filling and function.
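
    For reference, the one-dimensional Euler relation applied along the color M-mode scanline, and its integration to the total left atrium-to-apex pressure difference, can be written as below, with ρ the blood density, v(s,t) the CMM velocity field, and s the position along the inflow tract; viscous terms are neglected and the notation is assumed here.

    ```latex
    \[ \frac{\partial P}{\partial s} = -\rho \left( \frac{\partial v}{\partial t} + v\,\frac{\partial v}{\partial s} \right),
       \qquad
       \Delta P(t) = -\rho \int_{\mathrm{LA}}^{\mathrm{apex}} \left( \frac{\partial v}{\partial t} + v\,\frac{\partial v}{\partial s} \right) ds \]
    ```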

  19. Creating an anthropomorphic digital MR phantom—an extensible tool for comparing and evaluating quantitative imaging algorithms

    NASA Astrophysics Data System (ADS)

    Bosca, Ryan J.; Jackson, Edward F.

    2016-01-01

    Assessing and mitigating the various sources of bias and variance associated with image quantification algorithms is essential to the use of such algorithms in clinical research and practice. Assessment is usually accomplished with grid-based digital reference objects (DRO) or, more recently, digital anthropomorphic phantoms based on normal human anatomy. Publicly available digital anthropomorphic phantoms can provide a basis for generating realistic model-based DROs that incorporate the heterogeneity commonly found in pathology. Using a publicly available vascular input function (VIF) and digital anthropomorphic phantom of a normal human brain, a methodology was developed to generate a DRO based on the general kinetic model (GKM) that represented realistic and heterogeneously enhancing pathology. GKM parameters were estimated from a deidentified clinical dynamic contrast-enhanced (DCE) MRI exam. This clinical imaging volume was co-registered with a discrete tissue model, and model parameters estimated from clinical images were used to synthesize a DCE-MRI exam that consisted of normal brain tissues and a heterogeneously enhancing brain tumor. An example application of spatial smoothing was used to illustrate potential applications in assessing quantitative imaging algorithms. A voxel-wise Bland-Altman analysis demonstrated negligible differences between the parameters estimated with and without spatial smoothing (using a small radius Gaussian kernel). In this work, we reported an extensible methodology for generating model-based anthropomorphic DROs containing normal and pathological tissue that can be used to assess quantitative imaging algorithms.
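
    For reference, one common form of the general kinetic model used to synthesize the enhancing tissue is the extended Tofts formulation below, with K^trans the transfer constant, k_ep the efflux rate constant, v_p the plasma volume fraction, and C_p the vascular input function; the v_p term is dropped in the basic model, and the notation is assumed here rather than taken from the paper.

    ```latex
    \[ C_t(t) = K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}\,(t-\tau)}\,d\tau + v_p\, C_p(t) \]
    ```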

  20. Shear-induced aggregation dynamics in a polymer microrod suspension

    NASA Astrophysics Data System (ADS)

    Kumar, Pramukta S.

    A non-Brownian suspension of micron-scale rods is found to exhibit reversible shear-driven formation of disordered aggregates resulting in dramatic viscosity enhancement at low shear rates. Aggregate formation is imaged at low magnification using a combined rheometer and fluorescence microscope system. The size and structure of these aggregates are found to depend on shear rate and concentration, with larger aggregates present at lower shear rates and higher concentrations. Quantitative measurements of the early-stage aggregation process are modeled by collision-driven growth of porous structures, which shows that aggregate density increases with shear rate. A Krieger-Dougherty type constitutive relation and steady-state viscosity measurements are used to estimate the intrinsic viscosity of complex structures developed under shear. Higher magnification images are collected and used to validate the aggregate size versus density relationship, as well as to obtain particle flow fields via PIV. The flow fields provide a tantalizing view of fluctuations involved in the aggregation process. Interaction strength is estimated via contact force measurements and JKR theory and found to be extremely strong in comparison to shear forces present in the system, estimated using hydrodynamic arguments. All of the results are then combined to produce a consistent conceptual model of aggregation in the system that features testable consequences. These results represent a direct, quantitative, experimental study of aggregation and viscosity enhancement in a rod suspension, and demonstrate a strategy for inferring inaccessible microscopic geometric properties of a dynamic system through the combination of quantitative imaging and rheology.
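
    For readers unfamiliar with the Krieger-Dougherty type constitutive relation mentioned above, the following minimal Python sketch shows how it maps an effective volume fraction of structures to a suspension viscosity; the parameter values are illustrative assumptions only.

    def krieger_dougherty(phi, eta0, phi_max, eta_intrinsic):
        """Krieger-Dougherty suspension viscosity (sketch).
        phi           : effective volume fraction of particles or aggregates
        eta0          : solvent viscosity [Pa s]
        phi_max       : maximum packing fraction of the structures
        eta_intrinsic : intrinsic viscosity (2.5 for spheres, larger for rods/aggregates)
        """
        return eta0 * (1.0 - phi / phi_max) ** (-eta_intrinsic * phi_max)

    # Porous aggregates occupy a larger effective volume fraction than the bare rods,
    # so the same rod loading can yield a much higher suspension viscosity.
    print(krieger_dougherty(phi=0.25, eta0=1e-3, phi_max=0.45, eta_intrinsic=4.0))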

  1. A probabilistic method for computing quantitative risk indexes from medical injuries compensation claims.

    PubMed

    Dalle Carbonare, S; Folli, F; Patrini, E; Giudici, P; Bellazzi, R

    2013-01-01

    The increasing demand for health care services and the complexity of health care delivery require Health Care Organizations (HCOs) to approach clinical risk management through proper methods and tools. An important aspect of risk management is to exploit the analysis of medical injuries compensation claims in order to reduce adverse events and, at the same time, to optimize the costs of health insurance policies. This work provides a probabilistic method to estimate the risk level of an HCO by computing quantitative risk indexes from medical injury compensation claims. Our method is based on the estimate of a loss probability distribution from compensation claims data through parametric and non-parametric modeling and Monte Carlo simulations. The loss distribution can be estimated both on the whole dataset and, thanks to the application of a Bayesian hierarchical model, on stratified data. The approach allows the risk structure of the HCO to be assessed quantitatively by analyzing the loss distribution and deriving its expected value and percentiles. We applied the proposed method to 206 cases of injuries with compensation requests collected from 1999 to the first semester of 2007 by the HCO of Lodi, in the Northern part of Italy. We computed the risk indexes taking into account the different clinical departments and the different hospitals involved. The approach proved useful for understanding the HCO risk structure in terms of frequency, severity, expected and unexpected loss related to adverse events.
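
    The general idea of deriving an expected value and percentiles from a simulated loss distribution can be sketched as follows. The frequency and severity distributions chosen here (Poisson counts, lognormal severities) and all parameter values are assumptions for illustration, not the models fitted in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def annual_loss_distribution(lam, mu, sigma, n_sim=100_000):
        """Monte Carlo aggregate-loss distribution for one year (illustrative sketch):
        claim counts ~ Poisson(lam), claim severities ~ lognormal(mu, sigma)."""
        counts = rng.poisson(lam, size=n_sim)
        return np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

    losses = annual_loss_distribution(lam=25, mu=9.0, sigma=1.2)
    expected_loss = losses.mean()           # expected annual loss
    var_99 = np.percentile(losses, 99)      # 99th percentile (unexpected loss)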

  2. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods

    PubMed Central

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-01-01

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation. PMID:26982626

  3. A no-gold-standard technique for objective assessment of quantitative nuclear-medicine imaging methods.

    PubMed

    Jha, Abhinav K; Caffo, Brian; Frey, Eric C

    2016-04-07

    The objective optimization and evaluation of nuclear-medicine quantitative imaging methods using patient data is highly desirable but often hindered by the lack of a gold standard. Previously, a regression-without-truth (RWT) approach has been proposed for evaluating quantitative imaging methods in the absence of a gold standard, but this approach implicitly assumes that bounds on the distribution of true values are known. Several quantitative imaging methods in nuclear-medicine imaging measure parameters where these bounds are not known, such as the activity concentration in an organ or the volume of a tumor. We extended upon the RWT approach to develop a no-gold-standard (NGS) technique for objectively evaluating such quantitative nuclear-medicine imaging methods with patient data in the absence of any ground truth. Using the parameters estimated with the NGS technique, a figure of merit, the noise-to-slope ratio (NSR), can be computed, which can rank the methods on the basis of precision. An issue with NGS evaluation techniques is the requirement of a large number of patient studies. To reduce this requirement, the proposed method explored the use of multiple quantitative measurements from the same patient, such as the activity concentration values from different organs in the same patient. The proposed technique was evaluated using rigorous numerical experiments and using data from realistic simulation studies. The numerical experiments demonstrated that the NSR was estimated accurately using the proposed NGS technique when the bounds on the distribution of true values were not precisely known, thus serving as a very reliable metric for ranking the methods on the basis of precision. In the realistic simulation study, the NGS technique was used to rank reconstruction methods for quantitative single-photon emission computed tomography (SPECT) based on their performance on the task of estimating the mean activity concentration within a known volume of interest. Results showed that the proposed technique provided accurate ranking of the reconstruction methods for 97.5% of the 50 noise realizations. Further, the technique was robust to the choice of evaluated reconstruction methods. The simulation study pointed to possible violations of the assumptions made in the NGS technique under clinical scenarios. However, numerical experiments indicated that the NGS technique was robust in ranking methods even when there was some degree of such violation.

  4. A methodology for estimating risks associated with landslides of contaminated soil into rivers.

    PubMed

    Göransson, Gunnel; Norrman, Jenny; Larson, Magnus; Alén, Claes; Rosén, Lars

    2014-02-15

    Urban areas adjacent to surface water are exposed to soil movements such as erosion and slope failures (landslides). A landslide is a potential mechanism for mobilisation and spreading of pollutants. This mechanism is in general not included in environmental risk assessments for contaminated sites, and the consequences associated with contamination in the soil are typically not considered in landslide risk assessments. This study suggests a methodology to estimate the environmental risks associated with landslides in contaminated sites adjacent to rivers. The methodology is probabilistic and allows for datasets with large uncertainties and the use of expert judgements, providing quantitative estimates of probabilities for defined failures. The approach is illustrated by a case study along the river Göta Älv, Sweden, where failures are defined and probabilities for those failures are estimated. Failures are defined from a pollution perspective and in terms of exceeding environmental quality standards (EQSs) and acceptable contaminant loads. Models are then suggested to estimate probabilities of these failures. A landslide analysis is carried out to assess landslide probabilities based on data from a recent landslide risk classification study along the river Göta Älv. The suggested methodology is meant to be a supplement to either landslide risk assessment (LRA) or environmental risk assessment (ERA), providing quantitative estimates of the risks associated with landslides in contaminated sites. The proposed methodology can also act as a basis for communication and discussion, thereby contributing to intersectoral management solutions. From the case study it was found that the defined failures are governed primarily by the probability of a landslide occurring. The overall probabilities for failure are low; however, if a landslide occurs, the probabilities of exceeding EQS are high and the probability of having at least a 10% increase in the contamination load within one year is also high. Copyright © 2013 Elsevier B.V. All rights reserved.
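
    The structure of the failure definition, in which the overall probability is dominated by the probability of a landslide occurring, can be illustrated with a very simple sketch that chains an uncertain landslide probability with the conditional probability of exceeding an EQS. The beta distributions and their parameters are invented placeholders, not the expert-judgement inputs of the case study.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical uncertain inputs, sampled to propagate uncertainty
    p_landslide = rng.beta(2, 200, size=100_000)       # low annual landslide probability
    p_eqs_given_slide = rng.beta(8, 2, size=100_000)   # high probability of exceeding EQS given a slide

    p_failure = p_landslide * p_eqs_given_slide        # P(exceed EQS in a year)
    print(p_failure.mean(), np.percentile(p_failure, 95))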

  5. Dispersal kernel estimation: A comparison of empirical and modelled particle dispersion in a coastal marine system

    NASA Astrophysics Data System (ADS)

    Hrycik, Janelle M.; Chassé, Joël; Ruddick, Barry R.; Taggart, Christopher T.

    2013-11-01

    Early life-stage dispersal influences recruitment and is of significance in explaining the distribution and connectivity of marine species. Motivations for quantifying dispersal range from biodiversity conservation to the design of marine reserves and the mitigation of species invasions. Here we compare estimates of real particle dispersion in a coastal marine environment with similar estimates provided by hydrodynamic modelling. We do so by using a system of magnetically attractive particles (MAPs) and a magnetic-collector array that provides measures of Lagrangian dispersion based on the time-integration of MAPs dispersing through the array. MAPs released as a point source in a coastal marine location dispersed through the collector array over a 5-7 d period. A virtual release and observed (real-time) environmental conditions were used in a high-resolution three-dimensional hydrodynamic model to estimate the dispersal of virtual particles (VPs). The number of MAPs captured throughout the collector array and the number of VPs that passed through each corresponding model location were enumerated and compared. Although VP dispersal reflected several aspects of the observed MAP dispersal, the comparisons demonstrated model sensitivity to the small-scale (random-walk) particle diffusivity parameter (Kp). The one-dimensional dispersal kernel for the MAPs had an e-folding scale estimate in the range of 5.19-11.44 km, while those from the model simulations were comparable at 1.89-6.52 km, and also demonstrated sensitivity to Kp. Variations among comparisons are related to the value of Kp used in modelling and are postulated to be related to MAP losses from the water column and (or) shear dispersion acting on the MAPs; a process that is constrained in the model. Our demonstration indicates a promising new way of 1) quantitatively and empirically estimating the dispersal kernel in aquatic systems, and 2) quantitatively assessing and (or) improving regional hydrodynamic models.
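
    A one-dimensional dispersal kernel with an e-folding scale, as reported above, can be estimated from collector counts with a simple log-linear fit. The sketch below is a generic illustration of that idea; the transect distances and counts are hypothetical and are not the MAP data from the study.

    import numpy as np

    def efolding_scale(distance_km, counts):
        """Estimate the e-folding scale L of an exponential kernel K(x) ~ exp(-x/L)
        by regressing log(counts) on distance (illustrative sketch)."""
        mask = counts > 0
        slope, _ = np.polyfit(distance_km[mask], np.log(counts[mask]), 1)
        return -1.0 / slope

    # Hypothetical collector transect: counts decaying with distance from the release point
    x = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
    n = np.array([500, 420, 300, 220, 160, 110])
    print(efolding_scale(x, n))      # e-folding scale in km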

  6. A quantitative model of optimal data selection in Wason's selection task.

    PubMed

    Hattori, Masasi

    2002-10-01

    The optimal data selection model proposed by Oaksford and Chater (1994) successfully formalized Wason's selection task (Wason, 1966). The model, however, involved some questionable assumptions and was also not sufficient as a model of the task because it could not provide quantitative predictions of the card selection frequencies. In this paper, the model was revised to provide quantitative fits to the data. The model can predict the selection frequencies of cards based on a selection tendency function (STF), or conversely, it enables the estimation of subjective probabilities from data. Past experimental data were first re-analysed based on the model. In Experiment 1, the superiority of the revised model was shown. However, when the relationship between antecedent and consequent was forced to deviate from the biconditional form, the model was not supported. In Experiment 2, it was shown that sufficient emphasis on probabilistic information can affect participants' performance. A detailed experimental method to sort participants by probabilistic strategies was introduced. Here, the model was supported by a subgroup of participants who used the probabilistic strategy. Finally, the results were discussed from the viewpoint of adaptive rationality.

  7. A preliminary report on the magnetic measurements of samples 72275 and 72255. [direction and magnitude of remanent magnetization

    NASA Technical Reports Server (NTRS)

    Banerjee, S. K.

    1974-01-01

    The direction and magnitude of natural remanent magnetization of five approximately 3-g subsamples of 72275 and 72255 and the high field saturation magnetization, coercive force, and isothermal remanent magnetization of a 100-mg chip from each of these samples were studied. Given an understanding of the magnetization processes, group 1 experiments provide information about the absolute direction of the ancient magnetizing field and a qualitative estimate of its size (paleointensity). The group 2 experiments yield a quantitative estimate of the iron content and a qualitative idea of the grain sizes.

  8. Uncertainty Estimation Cheat Sheet for Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Britton, Paul; Al Hassan, Mohammad; Ring, Robert

    2017-01-01

    Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This paper will explain common measures of centrality and dispersion and, with examples, will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.
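
    As a small worked example of estimating measures of centrality and dispersion and propagating them to a result, the sketch below samples two hypothetical uncertain inputs and summarizes the propagated output. The input distributions and the product form of the result are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical uncertain inputs to a risk result R = A * B
    A = rng.lognormal(mean=np.log(1e-4), sigma=0.5, size=100_000)   # failure rate
    B = rng.normal(loc=3.0, scale=0.3, size=100_000)                # exposure factor
    R = A * B                                                       # propagate by sampling

    print("mean   :", R.mean())                 # measure of centrality
    print("median :", np.median(R))
    print("std    :", R.std(ddof=1))            # measures of dispersion
    print("5th-95th percentile:", np.percentile(R, [5, 95]))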

  9. Estimating Hydrologic Fluxes, Crop Water Use, and Agricultural Land Area in China using Data Assimilation

    NASA Astrophysics Data System (ADS)

    Smith, Tiziana; McLaughlin, Dennis B.; Hoisungwan, Piyatida

    2016-04-01

    Crop production has significantly altered the terrestrial environment by changing land use and by altering the water cycle through both co-opted rainfall and surface water withdrawals. As the world's population continues to grow and individual diets become more resource-intensive, the demand for food - and the land and water necessary to produce it - will continue to increase. High-resolution quantitative data about water availability, water use, and agricultural land use are needed to develop sustainable water and agricultural planning and policies. However, existing data covering large areas with high resolution are susceptible to errors and can be physically inconsistent. China is an example of a large area where food demand is expected to increase and a lack of data clouds the resource management dialogue. Some assert that China will have insufficient land and water resources to feed itself, posing a threat to global food security if it seeks to increase food imports. Others believe resources are plentiful. Without quantitative data, it is difficult to discern if these concerns are realistic or overly dramatized. This research presents a quantitative approach using data assimilation techniques to characterize hydrologic fluxes, crop water use (defined as crop evapotranspiration), and agricultural land use at 0.5 by 0.5 degree resolution and applies the methodology in China using data from around the year 2000. The approach uses the principles of water balance and of crop water requirements to assimilate existing data with a least-squares estimation technique, producing new estimates of water and land use variables that are physically consistent while minimizing differences from measured data. We argue that this technique for estimating water fluxes and agricultural land use can provide a useful basis for resource management modeling and policy, both in China and around the world.
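
    The core assimilation idea, adjusting observed quantities as little as possible while enforcing a balance constraint, can be sketched as a constrained least-squares problem solved through its KKT system. This is a generic illustration under assumed toy numbers (a single grid cell with a water-balance constraint P = ET + Q), not the study's full model or data.

    import numpy as np

    def constrained_least_squares(x_obs, w, A, b):
        """Adjust observations x_obs to satisfy linear constraints A x = b while
        minimizing sum(w * (x - x_obs)^2), via the KKT system (illustrative sketch)."""
        n, m = len(x_obs), A.shape[0]
        W = np.diag(2.0 * w)
        kkt = np.block([[W, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([2.0 * w * x_obs, b])
        return np.linalg.solve(kkt, rhs)[:n]            # adjusted, consistent estimates

    # Toy grid cell: precipitation P, evapotranspiration ET, runoff Q with P = ET + Q
    x_obs = np.array([600.0, 380.0, 250.0])             # mm/yr, inconsistent by 30 mm
    w = 1.0 / np.array([30.0, 40.0, 20.0]) ** 2         # weights = 1 / error variance
    A = np.array([[1.0, -1.0, -1.0]])                   # water-balance constraint
    b = np.array([0.0])
    print(constrained_least_squares(x_obs, w, A, b))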

  10. In vivo estimation of target registration errors during augmented reality laparoscopic surgery.

    PubMed

    Thompson, Stephen; Schneider, Crispin; Bosi, Michele; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J

    2018-06-01

    Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.

  11. Quantitative Viral Community DNA Analysis Reveals the Dominance of Single-Stranded DNA Viruses in Offshore Upper Bathyal Sediment from Tohoku, Japan

    PubMed Central

    Yoshida, Mitsuhiro; Mochizuki, Tomohiro; Urayama, Syun-Ichi; Yoshida-Takashima, Yukari; Nishi, Shinro; Hirai, Miho; Nomaki, Hidetaka; Takaki, Yoshihiro; Nunoura, Takuro; Takai, Ken

    2018-01-01

    Previous studies on marine environmental virology have primarily focused on double-stranded DNA (dsDNA) viruses; however, it has recently been suggested that single-stranded DNA (ssDNA) viruses are more abundant in marine ecosystems. In this study, we performed a quantitative viral community DNA analysis to estimate the relative abundance and composition of both ssDNA and dsDNA viruses in offshore upper bathyal sediment from Tohoku, Japan (water depth = 500 m). The estimated dsDNA viral abundance ranged from 3 × 10⁶ to 5 × 10⁶ genome copies per cm³ sediment, showing values similar to the range of fluorescence-based direct virus counts. In contrast, the estimated ssDNA viral abundance ranged from 1 × 10⁸ to 3 × 10⁹ genome copies per cm³ sediment, indicating that the ssDNA viral populations represent 96.3–99.8% of the benthic total DNA viral assemblages. In the ssDNA viral metagenome, most of the identified viral sequences were associated with ssDNA viral families such as Circoviridae and Microviridae. Principal component analysis of the ssDNA viral sequence components from the sedimentary ssDNA viral metagenomic libraries found that the viral communities from different depths at the study site all exhibited profiles similar to those of deep-sea sediments at other reference sites. Our results suggested that deep-sea benthic ssDNA viruses have been significantly underestimated by conventional direct virus counts and that their contributions to deep-sea benthic microbial mortality and geochemical cycles should be further addressed by such a new quantitative approach. PMID:29467725

  12. Short Course Introduction to Quantitative Mineral Resource Assessments

    USGS Publications Warehouse

    Singer, Donald A.

    2007-01-01

    This is an abbreviated text supplementing the content of three sets of slides used in a short course that has been presented by the author at several workshops. The slides should be viewed in the order of (1) Introduction and models, (2) Delineation and estimation, and (3) Combining estimates and summary. References cited in the slides are listed at the end of this text. The purpose of the three-part form of mineral resource assessments discussed in the accompanying slides is to make unbiased quantitative assessments in a format needed in decision-support systems so that consequences of alternative courses of action can be examined. The three-part form of mineral resource assessments was developed to assist policy makers in evaluating the consequences of alternative courses of action with respect to land use and mineral-resource development. The audience for three-part assessments is a governmental or industrial policy maker, a manager of exploration, a planner of regional development, or similar decision-maker. Some of the tools and models presented here will be useful for selection of exploration sites, but that is a side benefit, not the goal. To provide unbiased information, we recommend the three-part form of mineral resource assessments, where general locations of undiscovered deposits are delineated from a deposit type's geologic setting, frequency distributions of tonnages and grades of well-explored deposits serve as models of grades and tonnages of undiscovered deposits, and numbers of undiscovered deposits are estimated probabilistically by type. The internally consistent descriptive, grade and tonnage, deposit density, and economic models used in the design of the three-part form of assessments reduce the chances of biased estimates of the undiscovered resources. What and why quantitative resource assessments: The kind of assessment recommended here is founded in decision analysis in order to provide a framework for making decisions concerning mineral resources under conditions of uncertainty. This means that we start by asking what kinds of questions the decision maker is trying to resolve and what forms of information would aid in resolving those questions. Some applications of mineral resource assessments: to plan and guide exploration programs, to assist in land use planning, to plan the location of infrastructure, to estimate mineral endowment, and to identify deposits that present special environmental challenges. Why not just rank prospects or areas? Because there is a need for financial analysis, for comparison with other land uses, for comparison with distant tracts of land, for knowing how uncertain the estimates are, and for considering the economic and environmental consequences of possible development. Our goal is to provide unbiased information useful to decision-makers.

  13. Quantitative Assessment of Cancer Risk from Exposure to Diesel Engine Emissions

    EPA Science Inventory

    Quantitative estimates of lung cancer risk from exposure to diesel engine emissions were developed using data from three chronic bioassays with Fischer 344 rats. Human target organ dose was estimated with the aid of a comprehensive dosimetry model. This model accounted for rat-hum...

  14. A QUANTITATIVE APPROACH FOR ESTIMATING EXPOSURE TO PESTICIDES IN THE AGRICULTURAL HEALTH STUDY

    EPA Science Inventory

    We developed a quantitative method to estimate chemical-specific pesticide exposures in a large prospective cohort study of over 58,000 pesticide applicators in North Carolina and Iowa. An enrollment questionnaire was administered to applicators to collect basic time- and inten...

  15. Quantification of Microbial Phenotypes

    PubMed Central

    Martínez, Verónica S.; Krömer, Jens O.

    2016-01-01

    Metabolite profiling technologies have improved to generate close to quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and discuss the current challenges in generating fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and describe the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
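
    The estimation of Gibbs energy under physiological conditions mentioned above follows the textbook relation ΔG = ΔG°' + RT ln(Q), where Q is the mass-action ratio built from measured metabolite concentrations. The sketch below illustrates this with invented concentrations; it is not the paper's workflow or data.

    import numpy as np

    R = 8.314e-3   # gas constant, kJ/(mol K)

    def gibbs_energy(delta_g0_prime, product_conc, substrate_conc, T=310.15):
        """Reaction Gibbs energy at given metabolite concentrations (mol/L):
        dG = dG0' + R*T*ln(Q), with Q = prod(products) / prod(substrates)."""
        Q = np.prod(product_conc) / np.prod(substrate_conc)
        return delta_g0_prime + R * T * np.log(Q)

    # Example: a reaction with dG0' = +5 kJ/mol can still run forward if the
    # product/substrate ratio keeps the mass-action ratio low enough.
    print(gibbs_energy(5.0, product_conc=[1e-4], substrate_conc=[2e-3]))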

  16. Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory

    NASA Astrophysics Data System (ADS)

    Deyi, Feng; Ichikawa, M.

    1989-11-01

    In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards in different terms can be estimated. This paper mainly deals with medium-short-term precursors observed in Japan and China.

  17. Statistical inference with quantum measurements: methodologies for nitrogen vacancy centers in diamond

    NASA Astrophysics Data System (ADS)

    Hincks, Ian; Granade, Christopher; Cory, David G.

    2018-01-01

    The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
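
    To illustrate the kind of inference described above in its simplest form, the sketch below computes a maximum-likelihood estimate of the spin-state population from summed photon counts, assuming Poisson statistics and known bright-state, dark-state, and background rates. This is a deliberately simplified stand-in for the paper's full model (which treats the three Poisson rates as unknowns and also derives Bayesian estimators); all numbers are hypothetical.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import poisson

    def mle_spin_population(counts, n_shots, r0, r1, dark):
        """MLE of the population p of one spin state from total photon counts.
        Expected counts: n_shots * (p*r1 + (1-p)*r0) + dark, with Poisson noise.
        r0, r1 : assumed mean photons per shot for the two spin states
        dark   : expected dark counts over the whole acquisition
        """
        def neg_log_lik(p):
            lam = n_shots * (p * r1 + (1 - p) * r0) + dark
            return -poisson.logpmf(counts, lam)
        return minimize_scalar(neg_log_lik, bounds=(0.0, 1.0), method="bounded").x

    # Hypothetical readout: 50,000 shots, 0.03 vs 0.02 photons/shot, 100 dark counts
    print(mle_spin_population(counts=1350, n_shots=50_000, r0=0.02, r1=0.03, dark=100))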

  18. Estimation of forest fuel load from radar remote sensing

    USGS Publications Warehouse

    Saatchi, S.; Halligan, K.; Despain, Don G.; Crabtree, R.L.

    2007-01-01

    Understanding fire behavior characteristics and planning for fire management require maps showing the distribution of wildfire fuel loads at medium to fine spatial resolution across large landscapes. Radar sensors from airborne or spaceborne platforms have the potential of providing quantitative information about the forest structure and biomass components that can be readily translated to meaningful fuel load estimates for fire management. In this paper, we used multifrequency polarimetric synthetic aperture radar (SAR) imagery acquired over a large area of the Yellowstone National Park by the Airborne SAR sensor to estimate the distribution of forest biomass and canopy fuel loads. Semiempirical algorithms were developed to estimate crown and stem biomass and three major fuel load parameters, namely: 1) canopy fuel weight; 2) canopy bulk density; and 3) foliage moisture content. These estimates, when compared directly to measurements made at plot and stand levels, provided more than 70% accuracy and, when partitioned into fuel load classes, provided more than 85% accuracy. Specifically, the radar-generated fuel parameters were in good agreement with the field-based fuel measurements, resulting in coefficients of determination of R2 = 0.85 for the canopy fuel weight, R2 = 0.84 for canopy bulk density, and R2 = 0.78 for the foliage biomass. © 2007 IEEE.

  19. APPLICATION OF RADIOISOTOPES TO THE QUANTITATIVE CHROMATOGRAPHY OF FATTY ACIDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Budzynski, A.Z.; Zubrzycki, Z.J.; Campbell, I.G.

    1959-10-31

    The paper reports work done on the use of 131I, 65Zn, 90Sr, 95Zr, and 144Ce for the quantitative estimation of fatty acids on paper chromatograms, and for determination of the degree of unsaturation of components of resolved fatty acid mixtures. 131I is used to iodinate unsaturated fatty acids, and the amount of such acids is determined from the radiochromatogram. The degree of unsaturation of fatty acids is determined by estimation of the specific activity of spots. The other isotopes have been examined from the point of view of their suitability for estimation of total amounts of fatty acids by formation of insoluble radioactive soaps held on the chromatogram. In particular, work is reported on the quantitative estimation of saturated fatty acids by measurement of the activity of their insoluble soaps with radioactive metals. Various quantitative relationships are described between the amount of fatty acid in a spot and such parameters as radiometrically estimated spot length, width, maximum intensity, and integrated spot activity. A convenient detection apparatus for taking radiochromatograms is also described. In conjunction with conventional chromatographic methods for resolving fatty acids, the method permits the estimation of the composition of fatty acid mixtures obtained from biological material. (auth)

  20. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for a widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduce a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of nonlinear dependency of phase retardation and birefringence to the SNR was not overcome. So the birefringence obtained by PS-OCT was still not accurate for a quantitative imaging. The nonlinear effect of SNR to phase retardation and birefringence measurement was previously formulated in detail for a Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator and quantitative birefringence imaging was demonstrated [2]. However, this first version of estimator had a theoretical shortcoming. It did not take into account the stochastic nature of SNR of OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and posterior eye segment as well as in skin imaging. The new estimator shows superior performance and also shows clearer image contrast.

  1. The Quantitative-MFG Test: A Linear Mixed Effect Model to Detect Maternal-Offspring Gene Interactions.

    PubMed

    Clark, Michelle M; Blangero, John; Dyer, Thomas D; Sobel, Eric M; Sinsheimer, Janet S

    2016-01-01

    Maternal-offspring gene interactions, aka maternal-fetal genotype (MFG) incompatibilities, are neglected in complex diseases and quantitative trait studies. They are implicated in birth to adult onset diseases but there are limited ways to investigate their influence on quantitative traits. We present the quantitative-MFG (QMFG) test, a linear mixed model where maternal and offspring genotypes are fixed effects and residual correlations between family members are random effects. The QMFG handles families of any size, common or general scenarios of MFG incompatibility, and additional covariates. We develop likelihood ratio tests (LRTs) and rapid score tests and show they provide correct inference. In addition, the LRT's alternative model provides unbiased parameter estimates. We show that testing the association of SNPs by fitting a standard model, which only considers the offspring genotypes, has very low power or can lead to incorrect conclusions. We also show that offspring genetic effects are missed if the MFG modeling assumptions are too restrictive. With genome-wide association study data from the San Antonio Family Heart Study, we demonstrate that the QMFG score test is an effective and rapid screening tool. The QMFG test therefore has important potential to identify pathways of complex diseases for which the genetic etiology remains to be discovered. © 2015 John Wiley & Sons Ltd/University College London.

  2. Spatio-temporal models of mental processes from fMRI.

    PubMed

    Janoos, Firdaus; Machiraju, Raghu; Singh, Shantanu; Morocz, Istvan Ákos

    2011-07-15

    Understanding the highly complex, spatially distributed and temporally organized phenomena entailed by mental processes using functional MRI is an important research problem in cognitive and clinical neuroscience. Conventional analysis methods focus on the spatial dimension of the data discarding the information about brain function contained in the temporal dimension. This paper presents a fully spatio-temporal multivariate analysis method using a state-space model (SSM) for brain function that yields not only spatial maps of activity but also its temporal structure along with spatially varying estimates of the hemodynamic response. Efficient algorithms for estimating the parameters along with quantitative validations are given. A novel low-dimensional feature-space for representing the data, based on a formal definition of functional similarity, is derived. Quantitative validation of the model and the estimation algorithms is provided with a simulation study. Using a real fMRI study for mental arithmetic, the ability of this neurophysiologically inspired model to represent the spatio-temporal information corresponding to mental processes is demonstrated. Moreover, by comparing the models across multiple subjects, natural patterns in mental processes organized according to different mental abilities are revealed. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Comparison Study of MS-HRM and Pyrosequencing Techniques for Quantification of APC and CDKN2A Gene Methylation

    PubMed Central

    Migheli, Francesca; Stoccoro, Andrea; Coppedè, Fabio; Wan Omar, Wan Adnan; Failli, Alessandra; Consolini, Rita; Seccia, Massimo; Spisni, Roberto; Miccoli, Paolo; Mathers, John C.; Migliore, Lucia

    2013-01-01

    There is increasing interest in the development of cost-effective techniques for the quantification of DNA methylation biomarkers. We analyzed 90 samples of surgically resected colorectal cancer tissues for APC and CDKN2A promoter methylation using methylation sensitive-high resolution melting (MS-HRM) and pyrosequencing. MS-HRM is a less expensive technique compared with pyrosequencing but is usually more limited because it gives a range of methylation estimates rather than a single value. Here, we developed a method for deriving single estimates, rather than a range, of methylation using MS-HRM and compared the values obtained in this way with those obtained using the gold standard quantitative method of pyrosequencing. We derived an interpolation curve using standards of known methylated/unmethylated ratio (0%, 12.5%, 25%, 50%, 75%, and 100% of methylation) to obtain the best estimate of the extent of methylation for each of our samples. We observed similar profiles of methylation and a high correlation coefficient between the two techniques. Overall, our new approach allows MS-HRM to be used as a quantitative assay which provides results which are comparable with those obtained by pyrosequencing. PMID:23326336
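
    The calibration idea described above, interpolating an unknown sample's MS-HRM signal on a curve built from standards of known methylation, can be sketched in a few lines. The standard signal values below are hypothetical placeholders, not the study's measurements.

    import numpy as np

    # Known-methylation standards and their (hypothetical) MS-HRM readouts
    standard_methylation = np.array([0.0, 12.5, 25.0, 50.0, 75.0, 100.0])   # percent
    standard_signal = np.array([0.02, 0.11, 0.22, 0.46, 0.71, 0.98])

    def estimate_methylation(sample_signal):
        """Single methylation estimate by linear interpolation on the standards curve
        (a sketch of the calibration idea, not the authors' exact procedure)."""
        return np.interp(sample_signal, standard_signal, standard_methylation)

    print(estimate_methylation(0.35))   # a methylation percentage between 25 and 50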

  4. A pre-edge analysis of Mn K-edge XANES spectra to help determine the speciation of manganese in minerals and glasses

    NASA Astrophysics Data System (ADS)

    Chalmin, E.; Farges, F.; Brown, G. E.

    2009-01-01

    High-resolution manganese K-edge X-ray absorption near edge structure spectra were collected on a set of 40 Mn-bearing minerals. The pre-edge feature information (position, area) was investigated to extract as much quantitative valence and symmetry information as possible for manganese in various “test” and “unknown” minerals and glasses. The samples present a range of manganese symmetry environments (tetrahedral, square planar, octahedral, and cubic) and valences (II to VII). The extraction of the pre-edge information is based on previous multiple scattering and multiplet calculations for model compounds. Using the method described in this study, a robust estimation of the manganese valence could be obtained from the pre-edge region at the 5% accuracy level. This method, applied to 20 “test” compounds (such as hausmannite and rancieite) and to 15 “unknown” compounds (such as axinite and birnessite), provides a quantitative estimate of the average valence of manganese in complex minerals and silicate glasses.

  5. Estimating malaria transmission from humans to mosquitoes in a noisy landscape

    PubMed Central

    Reiner, Robert C.; Guerra, Carlos; Donnelly, Martin J.; Bousema, Teun; Drakeley, Chris; Smith, David L.

    2015-01-01

    A basic quantitative understanding of malaria transmission requires measuring the probability a mosquito becomes infected after feeding on a human. Parasite prevalence in mosquitoes is highly age-dependent, and the unknown age-structure of fluctuating mosquito populations impedes estimation. Here, we simulate mosquito infection dynamics, where mosquito recruitment is modelled seasonally with fractional Brownian noise, and we develop methods for estimating mosquito infection rates. We find that noise introduces bias, but the magnitude of the bias depends on the ‘colour' of the noise. Some of these problems can be overcome by increasing the sampling frequency, but estimates of transmission rates (and estimated reductions in transmission) are most accurate and precise if they combine parity, oocyst rates and sporozoite rates. These studies provide a basis for evaluating the adequacy of various entomological sampling procedures for measuring malaria parasite transmission from humans to mosquitoes and for evaluating the direct transmission-blocking effects of a vaccine. PMID:26400195

  6. Properties of young massive clusters obtained with different massive-star evolutionary models

    NASA Astrophysics Data System (ADS)

    Wofford, Aida; Charlot, Stéphane

    We undertake a comprehensive comparative test of seven widely-used spectral synthesis models using multi-band HST photometry of a sample of eight YMCs in two galaxies. We provide a first quantitative estimate of the accuracies and uncertainties of new models, show the good progress models have made in fitting high-quality observations, and highlight the need for further comprehensive comparative tests.

  7. Carbon and nutrient contents in soils from the Kings River Experimental Watersheds, Sierra Nevada Mountains, California

    Treesearch

    D.W. Johnson; C.T. Hunsaker; D.W. Glass; B.M. Rau; B.A. Roath

    2011-01-01

    Soil C and nutrient contents were estimated for eight watersheds in two sites (one high elevation, Bull, and one low elevation, Providence) in the Kings River Experimental Watersheds in the western Sierra Nevada Mountains of California. Eighty-seven quantitative pits were dug to measure soil bulk density and total rock content, while three replicate surface samples...

  8. Mapping migratory flyways in Asia using dynamic Brownian bridge movement models

    USGS Publications Warehouse

    Palm, E.C.; Newman, S.H.; Prosser, Diann J.; Xiao, Xiangming; Luo, Ze; Batbayar, Nyambayar; Balachandran, Sivananinthaperumal; Takekawa, John Y.

    2015-01-01

    The dynamic Brownian bridge movement model improves our understanding of flyways by estimating relative use of regions in the flyway while providing detailed, quantitative information on migration timing and population connectivity including uncertainty between locations. This model effectively quantifies the relative importance of different migration corridors and stopover sites and may help prioritize specific areas in flyways for conservation of waterbird populations.

  9. Monte-Carlo-based phase retardation estimator for polarization sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki

    2011-08-01

    A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives us erroneous estimations of phase retardation, and it degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise property in phase retardation is investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator. This process provides a significantly better estimation of phase retardation than a standard mean estimator. This method is validated both by numerical simulations and experiments. The application of this method to in vitro and in vivo biological samples is also demonstrated.

  10. Arterial Spin Labeling - Fast Imaging with Steady-State Free Precession (ASL-FISP): A Rapid and Quantitative Perfusion Technique for High Field MRI

    PubMed Central

    Gao, Ying; Goodnough, Candida L.; Erokwu, Bernadette O.; Farr, George W.; Darrah, Rebecca; Lu, Lan; Dell, Katherine M.; Yu, Xin; Flask, Chris A.

    2014-01-01

    Arterial Spin Labeling (ASL) is a valuable non-contrast perfusion MRI technique with numerous clinical applications. Many previous ASL MRI studies have utilized either Echo-Planar Imaging (EPI) or True Fast Imaging with Steady-State Free Precession (True FISP) readouts that are prone to off-resonance artifacts on high field MRI scanners. We have developed a rapid ASL-FISP MRI acquisition for high field preclinical MRI scanners providing perfusion-weighted images with little or no artifacts in less than 2 seconds. In this initial implementation, a FAIR (Flow-Sensitive Alternating Inversion Recovery) ASL preparation was combined with a rapid, centrically-encoded FISP readout. Validation studies on healthy C57/BL6 mice provided consistent estimation of in vivo mouse brain perfusion at 7 T and 9.4 T (249±38 ml/min/100g and 241±17 ml/min/100g, respectively). The utility of this method was further demonstrated in detecting significant perfusion deficits in a C57/BL6 mouse model of ischemic stroke. Reasonable kidney perfusion estimates were also obtained for a healthy C57/BL6 mouse exhibiting differential perfusion in the renal cortex and medulla. Overall, the ASL-FISP technique provides a rapid and quantitative in vivo assessment of tissue perfusion for high field MRI scanners with minimal image artifacts. PMID:24891124

  11. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…

  12. Development and characterization of a dynamic lesion phantom for the quantitative evaluation of dynamic contrast-enhanced MRI.

    PubMed

    Freed, Melanie; de Zwart, Jacco A; Hariharan, Prasanna; Myers, Matthew R; Badano, Aldo

    2011-10-01

    To develop a dynamic lesion phantom that is capable of producing physiological kinetic curves representative of those seen in human dynamic contrast-enhanced MRI (DCE-MRI) data. The objective of this phantom is to provide a platform for the quantitative comparison of DCE-MRI protocols to aid in the standardization and optimization of breast DCE-MRI. The dynamic lesion consists of a hollow, plastic mold with inlet and outlet tubes to allow flow of a contrast agent solution through the lesion over time. Border shape of the lesion can be controlled using the lesion mold production method. The configuration of the inlet and outlet tubes was determined using fluid transfer simulations. The total fluid flow rate was determined using x-ray images of the lesion for four different flow rates (0.25, 0.5, 1.0, and 1.5 ml/s) to evaluate the resultant kinetic curve shape and homogeneity of the contrast agent distribution in the dynamic lesion. High spatial and temporal resolution x-ray measurements were used to estimate the true kinetic curve behavior in the dynamic lesion for benign and malignant example curves. DCE-MRI example data were acquired of the dynamic phantom using a clinical protocol. The optimal inlet and outlet tube configuration for the lesion molds was two inlet tubes separated by 30° and a single outlet tube directly between the two inlet tubes. X-ray measurements indicated that 1.0 ml/s was an appropriate total fluid flow rate and provided truth for comparison with MRI data of kinetic curves representative of benign and malignant lesions. DCE-MRI data demonstrated the ability of the phantom to produce realistic kinetic curves. The authors have constructed a dynamic lesion phantom, demonstrated its ability to produce physiological kinetic curves, and provided estimations of its true kinetic curve behavior. This lesion phantom provides a tool for the quantitative evaluation of DCE-MRI protocols, which may lead to improved discrimination of breast cancer lesions.

  13. Quantitative endoscopy: initial accuracy measurements.

    PubMed

    Truitt, T O; Adelman, R A; Kelly, D H; Willging, J P

    2000-02-01

    The geometric optics of an endoscope can be used to determine the absolute size of an object in an endoscopic field without knowing the actual distance from the object. This study explores the accuracy of a technique that estimates absolute object size from endoscopic images. Quantitative endoscopy involves calibrating a rigid endoscope to produce size estimates from 2 images taken with a known traveled distance between the images. The heights of 12 samples, ranging in size from 0.78 to 11.80 mm, were estimated with this calibrated endoscope. Backup distances of 5 mm and 10 mm were used for comparison. The mean percent error for all estimated measurements when compared with the actual object sizes was 1.12%. The mean errors for 5-mm and 10-mm backup distances were 0.76% and 1.65%, respectively. The mean errors for objects <2 mm and ≥2 mm were 0.94% and 1.18%, respectively. Quantitative endoscopy estimates endoscopic image size to within 5% of the actual object size. This method remains promising for quantitatively evaluating object size from endoscopic images. It does not require knowledge of the absolute distance of the endoscope from the object; it requires only the distance traveled by the endoscope between images.
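
    The two-image geometry behind this technique can be sketched with a simple pinhole relation: if image height scales as h = c·H/z, then two images separated by a known backup distance determine both the working distance and the absolute object height. The calibration constant and pixel values below are hypothetical illustrations, not the calibration or data from the study.

    def object_height(h1_px, h2_px, backup_mm, c):
        """Absolute object height from two endoscopic images with a known backup
        distance (sketch of the geometric idea).

        Pinhole assumption: image height [px] = c * H / z, so
            h1 = c*H/z,  h2 = c*H/(z + backup)  =>  z = backup*h2/(h1 - h2),  H = h1*z/c
        h1_px, h2_px : image heights before and after backing up [pixels]
        backup_mm    : distance the endoscope was withdrawn [mm]
        c            : calibration constant relating pixel height to H/z
        """
        z = backup_mm * h2_px / (h1_px - h2_px)    # distance at the first image [mm]
        return h1_px * z / c                       # absolute object height [mm]

    # Hypothetical case: the object shrinks from 120 px to 80 px after a 10 mm backup
    print(object_height(h1_px=120, h2_px=80, backup_mm=10.0, c=400.0))   # -> 6.0 mm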

  14. New EVSE Analytical Tools/Models: Electric Vehicle Infrastructure Projection Tool (EVI-Pro)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Eric W; Rames, Clement L; Muratori, Matteo

    This presentation addresses the fundamental question of how much charging infrastructure is needed in the United States to support PEVs. It complements ongoing EVSE initiatives by providing a comprehensive analysis of national PEV charging infrastructure requirements. The result is a quantitative estimate for a U.S. network of non-residential (public and workplace) EVSE that would be needed to support broader PEV adoption. The analysis provides guidance to public and private stakeholders who are seeking to provide nationwide charging coverage, improve the EVSE business case by maximizing station utilization, and promote effective use of private/public infrastructure investments.

  15. Development and evaluation of a model-based downscatter compensation method for quantitative I-131 SPECT

    PubMed Central

    Song, Na; Du, Yong; He, Bin; Frey, Eric C.

    2011-01-01

    Purpose: The radionuclide 131I has found widespread use in targeted radionuclide therapy (TRT), partly due to the fact that it emits photons that can be imaged to perform treatment planning or posttherapy dose verification as well as beta rays that are suitable for therapy. In both the treatment planning and dose verification applications, it is necessary to estimate the activity distribution in organs or tumors at several time points. In vivo estimates of the 131I activity distribution at each time point can be obtained from quantitative single-photon emission computed tomography (QSPECT) images and organ activity estimates can be obtained either from QSPECT images or quantification of planar projection data. However, in addition to the photon used for imaging, 131I decay results in emission of a number of other higher-energy photons with significant abundances. These higher-energy photons can scatter in the body, collimator, or detector and be counted in the 364 keV photopeak energy window, resulting in reduced image contrast and degraded quantitative accuracy; these photons are referred to as downscatter. The goal of this study was to develop and evaluate a model-based downscatter compensation method specifically designed for the compensation of high-energy photons emitted by 131I and detected in the imaging energy window. Methods: In the evaluation study, we used a Monte Carlo simulation (MCS) code that had previously been validated for other radionuclides. Thus, in preparation for the evaluation study, we first validated the code for 131I imaging simulation by comparison with experimental data. Next, we assessed the accuracy of the downscatter model by comparing downscatter estimates with MCS results. Finally, we combined the downscatter model with iterative reconstruction-based compensation for attenuation (A) and scatter (S) and the full (D) collimator-detector response of the 364 keV photons to form a comprehensive compensation method. We evaluated this combined method in terms of quantitative accuracy using the realistic 3D NCAT phantom and an activity distribution obtained from patient studies. We compared the accuracy of organ activity estimates in images reconstructed with and without addition of downscatter compensation from projections with and without downscatter contamination. Results: We observed that the proposed method provided substantial improvements in accuracy compared to no downscatter compensation and had accuracies comparable to reconstructions from projections without downscatter contamination. Conclusions: The results demonstrate that the proposed model-based downscatter compensation method is effective and may have a role in quantitative 131I imaging. PMID:21815394

  16. Benchmark dose analysis via nonparametric regression modeling

    PubMed Central

    Piegorsch, Walter W.; Xiong, Hui; Bhattacharya, Rabi N.; Lin, Lizhen

    2013-01-01

    Estimation of benchmark doses (BMDs) in quantitative risk assessment traditionally is based upon parametric dose-response modeling. It is a well-known concern, however, that if the chosen parametric model is uncertain and/or misspecified, inaccurate and possibly unsafe low-dose inferences can result. We describe a nonparametric approach for estimating BMDs with quantal-response data based on an isotonic regression method, and also study use of corresponding, nonparametric, bootstrap-based confidence limits for the BMD. We explore the confidence limits’ small-sample properties via a simulation study, and illustrate the calculations with an example from cancer risk assessment. It is seen that this nonparametric approach can provide a useful alternative for BMD estimation when faced with the problem of parametric model uncertainty. PMID:23683057
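
    A minimal sketch of an isotonic-regression BMD calculation with a bootstrap lower limit, in the spirit of the approach above. The quantal-response data, the benchmark response (10% extra risk), and the dose grid are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    # Hypothetical quantal-response data: dose, number tested, number responding.
    dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
    n    = np.array([50, 50, 50, 50, 50, 50])
    resp = np.array([ 2,  3,  6, 12, 24, 40])

    def bmd_isotonic(dose, n, resp, bmr=0.10):
        """BMD for extra risk `bmr` from a monotone (isotonic) fit to observed proportions."""
        iso = IsotonicRegression(increasing=True, out_of_bounds="clip")
        p = iso.fit_transform(dose, resp / n, sample_weight=n)
        p0 = p[0]
        target = p0 + bmr * (1.0 - p0)            # extra-risk definition of the benchmark response
        grid = np.linspace(dose.min(), dose.max(), 2001)
        hit = np.nonzero(iso.predict(grid) >= target)[0]
        return grid[hit[0]] if hit.size else np.nan

    bmd = bmd_isotonic(dose, n, resp)

    # Nonparametric bootstrap lower confidence limit (BMDL).
    rng = np.random.default_rng(1)
    boot = []
    for _ in range(500):
        r_b = rng.binomial(n, resp / n)            # resample responses at each dose level
        boot.append(bmd_isotonic(dose, n, r_b))
    bmdl = np.nanpercentile(boot, 5)               # one-sided 95% lower limit
    print(f"BMD ~ {bmd:.2f}, BMDL ~ {bmdl:.2f}")
    ```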

  17. Monitoring Crop Yield in USA Using a Satellite-Based Climate-Variability Impact Index

    NASA Technical Reports Server (NTRS)

    Zhang, Ping; Anderson, Bruce; Tan, Bin; Barlow, Mathew; Myneni, Ranga

    2011-01-01

    A quantitative index is applied to monitor crop growth and predict agricultural yield in continental USA. The Climate-Variability Impact Index (CVII), defined as the monthly contribution to overall anomalies in growth during a given year, is derived from 1-km MODIS Leaf Area Index. The growing-season integrated CVII can provide an estimate of the fractional change in overall growth during a given year. In turn, these estimates can provide fine-scale and aggregated information on yield for various crops. Trained from historical records of crop production, a statistical model is used to predict crop yield during the growing season based upon the strong positive relationship between crop yield and the CVII. By examining the model prediction as a function of time, it is possible to determine when the in-season predictive capability plateaus and which months provide the greatest predictive capacity.

  18. A meta-analysis of prospective studies of coffee consumption and mortality for all causes, cancers and cardiovascular diseases.

    PubMed

    Malerba, Stefano; Turati, Federica; Galeone, Carlotta; Pelucchi, Claudio; Verga, Federica; La Vecchia, Carlo; Tavani, Alessandra

    2013-07-01

    Several prospective studies considered the relation between coffee consumption and mortality. Most studies, however, were underpowered to detect an association, since they included relatively few deaths. To obtain quantitative overall estimates, we combined all published data from prospective studies on the relation of coffee with mortality for all causes, all cancers, cardiovascular disease (CVD), coronary/ischemic heart disease (CHD/IHD) and stroke. A bibliography search, updated to January 2013, was carried out in PubMed and Embase to identify prospective observational studies providing quantitative estimates on mortality from all causes, cancer, CVD, CHD/IHD or stroke in relation to coffee consumption. A systematic review and meta-analysis was conducted to estimate overall relative risks (RR) and 95 % confidence intervals (CI) using random-effects models. The pooled RRs of all cause mortality for the study-specific highest versus low (≤1 cup/day) coffee drinking categories were 0.88 (95 % CI 0.84-0.93) based on all the 23 studies, and 0.87 (95 % CI 0.82-0.93) for the 19 smoking adjusting studies. The combined RRs for CVD mortality were 0.89 (95 % CI 0.77-1.02, 17 smoking adjusting studies) for the highest versus low drinking and 0.98 (95 % CI 0.95-1.00, 16 studies) for the increment of 1 cup/day. Compared with low drinking, the RRs for the highest consumption of coffee were 0.95 (95 % CI 0.78-1.15, 12 smoking adjusting studies) for CHD/IHD, 0.95 (95 % CI 0.70-1.29, 6 studies) for stroke, and 1.03 (95 % CI 0.97-1.10, 10 studies) for all cancers. This meta-analysis provides quantitative evidence that coffee intake is inversely related to all cause and, probably, CVD mortality.
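
    A short sketch of the random-effects pooling step used to combine study-specific relative risks; a DerSimonian-Laird form is shown here as one common choice. The study RRs and confidence intervals below are invented for illustration and are not the meta-analysis data.

    ```python
    import numpy as np

    def pooled_rr_random_effects(rr, ci_low, ci_high):
        """DerSimonian-Laird random-effects pooling of study relative risks."""
        y = np.log(rr)
        se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)   # SE of log RR from the CI width
        w = 1.0 / se**2                                         # fixed-effect weights
        y_fe = np.sum(w * y) / np.sum(w)
        Q = np.sum(w * (y - y_fe) ** 2)                         # Cochran's Q
        df = len(y) - 1
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (Q - df) / c)                           # between-study variance
        w_re = 1.0 / (se**2 + tau2)
        y_re = np.sum(w_re * y) / np.sum(w_re)
        se_re = np.sqrt(1.0 / np.sum(w_re))
        return np.exp(y_re), np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)

    # Hypothetical study-level estimates (not the paper's data).
    rr = np.array([0.85, 0.92, 0.80, 0.95, 0.88])
    lo = np.array([0.75, 0.80, 0.65, 0.85, 0.78])
    hi = np.array([0.96, 1.06, 0.98, 1.06, 0.99])
    print(pooled_rr_random_effects(rr, lo, hi))    # pooled RR with 95% CI
    ```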

  19. Current methods and advances in bone densitometry

    NASA Technical Reports Server (NTRS)

    Guglielmi, G.; Gluer, C. C.; Majumdar, S.; Blunt, B. A.; Genant, H. K.

    1995-01-01

    Bone mass is the primary, although not the only, determinant of fracture. Over the past few years a number of noninvasive techniques have been developed to more sensitively quantitate bone mass. These include single and dual photon absorptiometry (SPA and DPA), single and dual X-ray absorptiometry (SXA and DXA) and quantitative computed tomography (QCT). While differing in anatomic sites measured and in their estimates of precision, accuracy, and fracture discrimination, all of these methods provide clinically useful measurements of skeletal status. It is the intent of this review to discuss the pros and cons of these techniques and to present the new applications of ultrasound (US) and magnetic resonance (MRI) in the detection and management of osteoporosis.

  20. Comparative analysis of quantitative methodologies for Vibrionaceae biofilms.

    PubMed

    Chavez-Dozal, Alba A; Nourabadi, Neda; Erken, Martina; McDougald, Diane; Nishiguchi, Michele K

    2016-11-01

    Multiple symbiotic and free-living Vibrio spp. grow as a form of microbial community known as a biofilm. In the laboratory, methods to quantify Vibrio biofilm mass include crystal violet staining, direct colony-forming unit (CFU) counting, dry biofilm cell mass measurement, and observation of development of wrinkled colonies. Another approach for bacterial biofilms also involves the use of tetrazolium (XTT) assays (used widely in studies of fungi) that are an appropriate measure of metabolic activity and vitality of cells within the biofilm matrix. This study systematically tested five techniques, among which the XTT assay and wrinkled colony measurement provided the most reproducible, accurate, and efficient methods for the quantitative estimation of Vibrionaceae biofilms.

  1. Advanced probabilistic methods for quantifying the effects of various uncertainties in structural response

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.

    1988-01-01

    The effects of actual variations, also called uncertainties, in geometry and material properties on the structural response of a space shuttle main engine turbopump blade are evaluated. A normal distribution was assumed to represent the uncertainties statistically. Uncertainties were assumed to be totally random, partially correlated, and fully correlated. The magnitudes of these uncertainties were represented in terms of mean and variance. Blade responses, recorded in terms of displacements, natural frequencies, and maximum stress, were evaluated and plotted in the form of probabilistic distributions under combined uncertainties. These distributions provide an estimate of the range of magnitudes of the response and the probability of occurrence of a given response. Most importantly, these distributions provide the information needed to estimate quantitatively the risk in a structural design.
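
    A brief sketch of Monte Carlo propagation of normally distributed input uncertainties to a structural response. A simple cantilever-beam bending frequency stands in for the turbopump-blade model, the distributions are assumed values, and the inputs are sampled independently (the uncorrelated case).

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Normally distributed inputs (assumed means and scatters, for illustration only).
    E   = rng.normal(200e9, 0.05 * 200e9, n)    # Young's modulus [Pa]
    rho = rng.normal(7800.0, 0.02 * 7800.0, n)  # density [kg/m^3]
    L   = rng.normal(0.10, 0.002, n)            # length [m]
    t   = rng.normal(0.004, 0.0001, n)          # thickness [m]

    # First bending frequency of a rectangular-section cantilever:
    # f1 = (1.875^2 / 2*pi) * sqrt(E*I / (rho*A*L^4)), with I/A = t^2/12.
    f1 = (1.875**2 / (2 * np.pi)) * np.sqrt(E * t**2 / (12 * rho * L**4))

    print(f"mean = {f1.mean():.1f} Hz, std = {f1.std():.1f} Hz")
    print("5th/95th percentiles:", np.percentile(f1, [5, 95]))
    ```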

  2. Integrated computational model of the bioenergetics of isolated lung mitochondria

    PubMed Central

    Zhang, Xiao; Jacobs, Elizabeth R.; Camara, Amadou K. S.; Clough, Anne V.

    2018-01-01

    Integrated computational modeling provides a mechanistic and quantitative framework for describing lung mitochondrial bioenergetics. Thus, the objective of this study was to develop and validate a thermodynamically-constrained integrated computational model of the bioenergetics of isolated lung mitochondria. The model incorporates the major biochemical reactions and transport processes in lung mitochondria. A general framework was developed to model those biochemical reactions and transport processes. Intrinsic model parameters such as binding constants were estimated using previously published isolated enzymes and transporters kinetic data. Extrinsic model parameters such as maximal reaction and transport velocities were estimated by fitting the integrated bioenergetics model to published and new tricarboxylic acid cycle and respirometry data measured in isolated rat lung mitochondria. The integrated model was then validated by assessing its ability to predict experimental data not used for the estimation of the extrinsic model parameters. For example, the model was able to predict reasonably well the substrate and temperature dependency of mitochondrial oxygen consumption, kinetics of NADH redox status, and the kinetics of mitochondrial accumulation of the cationic dye rhodamine 123, driven by mitochondrial membrane potential, under different respiratory states. The latter required the coupling of the integrated bioenergetics model to a pharmacokinetic model for the mitochondrial uptake of rhodamine 123 from buffer. The integrated bioenergetics model provides a mechanistic and quantitative framework for 1) integrating experimental data from isolated lung mitochondria under diverse experimental conditions, and 2) assessing the impact of a change in one or more mitochondrial processes on overall lung mitochondrial bioenergetics. In addition, the model provides important insights into the bioenergetics and respiration of lung mitochondria and how they differ from those of mitochondria from other organs. To the best of our knowledge, this model is the first for the bioenergetics of isolated lung mitochondria. PMID:29889855

  3. Integrated computational model of the bioenergetics of isolated lung mitochondria.

    PubMed

    Zhang, Xiao; Dash, Ranjan K; Jacobs, Elizabeth R; Camara, Amadou K S; Clough, Anne V; Audi, Said H

    2018-01-01

    Integrated computational modeling provides a mechanistic and quantitative framework for describing lung mitochondrial bioenergetics. Thus, the objective of this study was to develop and validate a thermodynamically-constrained integrated computational model of the bioenergetics of isolated lung mitochondria. The model incorporates the major biochemical reactions and transport processes in lung mitochondria. A general framework was developed to model those biochemical reactions and transport processes. Intrinsic model parameters such as binding constants were estimated using previously published isolated enzymes and transporters kinetic data. Extrinsic model parameters such as maximal reaction and transport velocities were estimated by fitting the integrated bioenergetics model to published and new tricarboxylic acid cycle and respirometry data measured in isolated rat lung mitochondria. The integrated model was then validated by assessing its ability to predict experimental data not used for the estimation of the extrinsic model parameters. For example, the model was able to predict reasonably well the substrate and temperature dependency of mitochondrial oxygen consumption, kinetics of NADH redox status, and the kinetics of mitochondrial accumulation of the cationic dye rhodamine 123, driven by mitochondrial membrane potential, under different respiratory states. The latter required the coupling of the integrated bioenergetics model to a pharmacokinetic model for the mitochondrial uptake of rhodamine 123 from buffer. The integrated bioenergetics model provides a mechanistic and quantitative framework for 1) integrating experimental data from isolated lung mitochondria under diverse experimental conditions, and 2) assessing the impact of a change in one or more mitochondrial processes on overall lung mitochondrial bioenergetics. In addition, the model provides important insights into the bioenergetics and respiration of lung mitochondria and how they differ from those of mitochondria from other organs. To the best of our knowledge, this model is the first for the bioenergetics of isolated lung mitochondria.

  4. Frameshifted prion proteins as pathological agents: quantitative considerations.

    PubMed

    Wills, Peter R

    2013-05-21

    A quantitatively consistent explanation for the titres of infectivity found in a variety of prion-containing preparations is provided on the basis that the ætiological agents of transmissible spongiform encephalopathy comprise a very small population fraction of prion protein (PrP) variants, which contain frameshifted elements in their N-terminal octapeptide-repeat regions. A mechanism for the replication of frameshifted prions is described and calculations are performed to obtain estimates of the concentration of these PrP variants in normal and infected brain, as well as their enrichment in products of protein misfolding cyclic amplification. These calculations resolve the lack of proper quantitative correlation between measures of infectivity and the presence of conformationally-altered, protease-resistant variants of PrP. Experiments, which could confirm or eventually exclude the role of frameshifted variants in the ætiology of prion disease, are suggested. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Analytical scanning evanescent microwave microscope and control stage

    DOEpatents

    Xiang, Xiao-Dong; Gao, Chen; Duewer, Fred; Yang, Hai Tao; Lu, Yalin

    2013-01-22

    A scanning evanescent microwave microscope (SEMM) that uses near-field evanescent electromagnetic waves to probe sample properties is disclosed. The SEMM is capable of high resolution imaging and quantitative measurements of the electrical properties of the sample. The SEMM has the ability to map dielectric constant, loss tangent, conductivity, electrical impedance, and other electrical parameters of materials. Such properties are then used to provide distance control over a wide range, from microns to nanometers, over dielectric and conductive samples for a scanned evanescent microwave probe, which enables quantitative non-contact and submicron spatial resolution topographic and electrical impedance profiling of dielectric, nonlinear dielectric and conductive materials. The invention also allows quantitative estimation of microwave impedance using signals obtained by the scanned evanescent microwave probe and quasistatic approximation modeling. The SEMM can be used to measure electrical properties of both dielectric and electrically conducting materials.

  6. Analytical scanning evanescent microwave microscope and control stage

    DOEpatents

    Xiang, Xiao-Dong; Gao, Chen; Duewer, Fred; Yang, Hai Tao; Lu, Yalin

    2009-06-23

    A scanning evanescent microwave microscope (SEMM) that uses near-field evanescent electromagnetic waves to probe sample properties is disclosed. The SEMM is capable of high resolution imaging and quantitative measurements of the electrical properties of the sample. The SEMM has the ability to map dielectric constant, loss tangent, conductivity, electrical impedance, and other electrical parameters of materials. Such properties are then used to provide distance control over a wide range, from microns to nanometers, over dielectric and conductive samples for a scanned evanescent microwave probe, which enables quantitative non-contact and submicron spatial resolution topographic and electrical impedance profiling of dielectric, nonlinear dielectric and conductive materials. The invention also allows quantitative estimation of microwave impedance using signals obtained by the scanned evanescent microwave probe and quasistatic approximation modeling. The SEMM can be used to measure electrical properties of both dielectric and electrically conducting materials.

  7. Measuring iron in the brain using quantitative susceptibility mapping and X-ray fluorescence imaging

    PubMed Central

    Zheng, Weili; Nichol, Helen; Liu, Saifeng; Cheng, Yu-Chung N.; Haacke, E. Mark

    2013-01-01

    Measuring iron content in the brain has important implications for a number of neurodegenerative diseases. Quantitative susceptibility mapping (QSM), derived from magnetic resonance images, has been used to measure total iron content in vivo and in post mortem brain. In this paper, we show how magnetic susceptibility from QSM correlates with total iron content measured by X-ray fluorescence (XRF) imaging and by inductively coupled plasma mass spectrometry (ICPMS). The relationship between susceptibility and ferritin iron was estimated at 1.10 ± 0.08 ppb susceptibility per μg iron/g wet tissue, similar to that of iron in fixed (frozen/thawed) cadaveric brain and previously published data from unfixed brains. We conclude that magnetic susceptibility can provide a direct and reliable quantitative measurement of iron content and that it can be used clinically at least in regions with high iron content. PMID:23591072

  8. Dramatic Differences in Gut Bacterial Densities Correlate with Diet and Habitat in Rainforest Ants.

    PubMed

    Sanders, Jon G; Lukasik, Piotr; Frederickson, Megan E; Russell, Jacob A; Koga, Ryuichi; Knight, Rob; Pierce, Naomi E

    2017-10-01

    Abundance is a key parameter in microbial ecology, and important to estimates of potential metabolite flux, impacts of dispersal, and sensitivity of samples to technical biases such as laboratory contamination. However, modern amplicon-based sequencing techniques by themselves typically provide no information about the absolute abundance of microbes. Here, we use fluorescence microscopy and quantitative polymerase chain reaction as independent estimates of microbial abundance to test the hypothesis that microbial symbionts have enabled ants to dominate tropical rainforest canopies by facilitating herbivorous diets, and compare these methods to microbial diversity profiles from 16S rRNA amplicon sequencing. Through a systematic survey of ants from a lowland tropical forest, we show that the density of gut microbiota varies across several orders of magnitude among ant lineages, with median individuals from many genera only marginally above detection limits. Supporting the hypothesis that microbial symbiosis is important to dominance in the canopy, we find that the abundance of gut bacteria is positively correlated with stable isotope proxies of herbivory among canopy-dwelling ants, but not among ground-dwelling ants. Notably, these broad findings are much more evident in the quantitative data than in the 16S rRNA sequencing data. Our results provide quantitative context to the potential role of bacteria in facilitating the ants' dominance of the tropical rainforest canopy, and have broad implications for the interpretation of sequence-based surveys of microbial diversity. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved. For permissions please email: journals.permissions@oup.com.

  9. Uncertainty of quantitative microbiological methods of pharmaceutical analysis.

    PubMed

    Gunar, O V; Sakhno, N G

    2015-12-30

    The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. Analysis of the most important sources of variability in quantitative microbiological methods demonstrated no effect of culture media or plate-count technique on the estimated microbial count, whereas the effects of other factors (type of microorganism, pharmaceutical product, and individual reading and interpretation errors) were highly significant. The most appropriate method of statistical analysis of such data was ANOVA, which enabled not only the effects of individual factors but also their interactions to be estimated. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, which is appropriate for traditional plate-count methods. Copyright © 2015 Elsevier B.V. All rights reserved.
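
    A small sketch of combining independent relative uncertainty components in quadrature to obtain a combined relative uncertainty; the component values are assumptions chosen only to illustrate a result below the 35% figure quoted above.

    ```python
    import numpy as np

    # Illustrative relative uncertainty components (assumed values, not the study's estimates).
    components = {
        "type of microorganism": 0.20,
        "pharmaceutical product": 0.15,
        "reading/interpretation": 0.18,
    }

    # Combine independent components in quadrature (root sum of squares).
    u_combined = np.sqrt(sum(u**2 for u in components.values()))
    print(f"combined relative uncertainty ~ {u_combined:.0%}")
    ```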

  10. Comparative Application of PLS and PCR Methods to Simultaneous Quantitative Estimation and Simultaneous Dissolution Test of Zidovudine - Lamivudine Tablets.

    PubMed

    Üstündağ, Özgür; Dinç, Erdal; Özdemir, Nurten; Tilkan, M Günseli

    2015-01-01

    In the development strategies of new drug products and generic drug products, the simultaneous in-vitro dissolution behavior of oral dosage formulations is the most important indicator for the quantitative estimation of the efficiency and biopharmaceutical characteristics of drug substances. This drives scientists in the field to develop more powerful analytical methods that give more reliable, precise and accurate results in the quantitative analysis and dissolution testing of drug formulations. In this context, two chemometric tools, partial least squares (PLS) and principal component regression (PCR), were developed for the simultaneous quantitative estimation and dissolution testing of zidovudine (ZID) and lamivudine (LAM) in a tablet dosage form. The results obtained in this study strongly encourage their use for quality control, routine analysis and dissolution testing of marketed tablets containing the ZID and LAM drugs.
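
    A compact sketch of PLS and PCR calibration applied to synthetic two-component mixture spectra (stand-ins for the ZID-LAM system), using scikit-learn; the pure-component spectra, concentrations, and noise levels are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    wl = np.linspace(220, 320, 101)                       # wavelengths [nm]
    s1 = np.exp(-0.5 * ((wl - 265) / 12) ** 2)            # synthetic pure-component spectrum 1
    s2 = np.exp(-0.5 * ((wl - 280) / 15) ** 2)            # synthetic pure-component spectrum 2
    C = rng.uniform(0.1, 1.0, (40, 2))                    # training concentrations
    X = C @ np.vstack([s1, s2]) + rng.normal(0, 0.005, (40, len(wl)))

    pls = PLSRegression(n_components=2).fit(X, C)
    pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, C)

    x_new = np.array([0.3, 0.7]) @ np.vstack([s1, s2])    # "unknown" mixture spectrum
    print("PLS estimate:", pls.predict(x_new[None, :]).ravel())
    print("PCR estimate:", pcr.predict(x_new[None, :]).ravel())
    ```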

  11. Semi-quantitative estimation of cellular SiO2 nanoparticles using flow cytometry combined with X-ray fluorescence measurements.

    PubMed

    Choi, Seo Yeon; Yang, Nuri; Jeon, Soo Kyung; Yoon, Tae Hyun

    2014-09-01

    In this study, we demonstrated the feasibility of a semi-quantitative approach for estimating cellular SiO2 nanoparticles (NPs), based on flow cytometry measurements of their normalized side scattering intensity. To improve our understanding of the quantitative aspects of cell-nanoparticle interactions, flow cytometry, transmission electron microscopy, and X-ray fluorescence experiments were carefully performed on HeLa cells exposed to SiO2 NPs with different core diameters, hydrodynamic sizes, and surface charges. Based on the observed relationships among the experimental data, a semi-quantitative method for estimating cellular SiO2 NPs from their normalized side scattering intensity and core diameters was proposed, which can be applied for the determination of cellular SiO2 NPs within their size-dependent linear ranges. © 2014 International Society for Advancement of Cytometry.

  12. The role of lung imaging in pulmonary embolism

    PubMed Central

    Mishkin, Fred S.; Johnson, Philip M.

    1973-01-01

    The advantages of lung scanning in suspected pulmonary embolism are its diagnostic sensitivity, simplicity and safety. The ability to delineate regional pulmonary ischaemia, to quantitate its extent and to follow its response to therapy provides valuable clinical data available by no other simple means. The negative scan effectively excludes pulmonary embolism but, although certain of its features favour the diagnosis of embolism, the positive scan inherently lacks specificity and requires angiographic confirmation when embolectomy, caval plication or infusion of a thrombolytic agent are contemplated. The addition of simple ventilation imaging techniques with radioxenon overcomes this limitation by providing accurate analog estimation or digital quantitation of regional ventilation: perfusion (V/Q) ratios fundamental to understanding the pathophysiologic consequences of embolism and other diseases of the lung. PMID:4602128

  13. Quantitating T cell cross-reactivity for unrelated peptide antigens.

    PubMed

    Ishizuka, Jeffrey; Grebe, Kristie; Shenderov, Eugene; Peters, Bjoern; Chen, Qiongyu; Peng, Yanchun; Wang, Lili; Dong, Tao; Pasquetto, Valerie; Oseroff, Carla; Sidney, John; Hickman, Heather; Cerundolo, Vincenzo; Sette, Alessandro; Bennink, Jack R; McMichael, Andrew; Yewdell, Jonathan W

    2009-10-01

    Quantitating the frequency of T cell cross-reactivity to unrelated peptides is essential to understanding T cell responses in infectious and autoimmune diseases. Here we used 15 mouse or human CD8+ T cell clones (11 antiviral, 4 anti-self) in conjunction with a large library of defined synthetic peptides to examine nearly 30,000 TCR-peptide MHC class I interactions for cross-reactions. We identified a single cross-reaction consisting of an anti-self TCR recognizing a poxvirus peptide at relatively low sensitivity. We failed to identify any cross-reactions between the synthetic peptides in the panel and polyclonal CD8+ T cells raised to viral or alloantigens. These findings provide the best estimate to date of the frequency of T cell cross-reactivity to unrelated peptides (approximately 1/30,000), explaining why cross-reactions between unrelated pathogens are infrequently encountered and providing a critical parameter for understanding the scope of self-tolerance.

  14. Quantitating T Cell Cross-Reactivity for Unrelated Peptide Antigens

    PubMed Central

    Ishizuka, Jeffrey; Grebe, Kristie; Shenderov, Eugene; Peters, Bjoern; Chen, Qiongyu; Peng, YanChun; Wang, Lili; Dong, Tao; Pasquetto, Valerie; Oseroff, Carla; Sidney, John; Hickman, Heather; Cerundolo, Vincenzo; Sette, Alessandro; Bennink, Jack R.; McMichael, Andrew; Yewdell, Jonathan W.

    2009-01-01

    Quantitating the frequency of T cell cross-reactivity to unrelated peptides is essential to understanding T cell responses in infectious and autoimmune diseases. Here we used 15 mouse or human CD8+ T cell clones (11 antiviral, 4 anti-self) in conjunction with a large library of defined synthetic peptides to examine nearly 30,000 TCR-peptide MHC class I interactions for cross-reactions. We identified a single cross-reaction consisting of an anti-self TCR recognizing a poxvirus peptide at relatively low sensitivity. We failed to identify any cross-reactions between the synthetic peptides in the panel and polyclonal CD8+ T cells raised to viral or alloantigens. These findings provide the best estimate to date of the frequency of T cell cross-reactivity to unrelated peptides (∼1/30,000), explaining why cross-reactions between unrelated pathogens are infrequently encountered and providing a critical parameter for understanding the scope of self-tolerance. PMID:19734234

  15. Dual-core mass-balance approach for evaluating mercury and 210Pb atmospheric fallout and focusing to lakes

    USGS Publications Warehouse

    Van Metre, P.C.; Fuller, C.C.

    2009-01-01

    Determining atmospheric deposition rates of mercury and other contaminants using lake sediment cores requires a quantitative understanding of sediment focusing. Here we present a novel approach that solves mass-balance equations for two cores algebraically to estimate contaminant contributions to sediment from direct atmospheric fallout and from watershed and in-lake focusing. The model is applied to excess 210Pb and Hg in cores from Hobbs Lake, a high-altitude lake in Wyoming. Model results for excess 210Pb are consistent with estimates of fallout and focusing factors computed using excess 210Pb burdens in lake cores and soil cores from the watershed, and model results for Hg fallout are consistent with fallout estimated using the soil-core-based 210Pb focusing factors. The lake cores indicate small increases in mercury deposition beginning in the late 1800s and large increases after 1940, with the maximum at the tops of the cores of 16-20 μg/m² per year. These results suggest that global Hg emissions and possibly regional emissions in the western United States are affecting the north-central Rocky Mountains. Hg fallout estimates are generally consistent with fallout reported from an ice core from the nearby Upper Fremont Glacier, but with several notable differences. The model might not work for lakes with complex geometries and multiple sediment inputs, but for lakes with simple geometries, like Hobbs, it can provide a quantitative approach for evaluating sediment focusing and estimating contaminant fallout.

  16. Improved shear wave group velocity estimation method based on spatiotemporal peak and thresholding motion search

    PubMed Central

    Amador, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F.; Urban, Matthew W.

    2017-01-01

    Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index (BMI), ultrasound scanners, scanning protocols, ultrasound image quality, etc. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this study, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time (spatiotemporal peak, STP); the second method applies an amplitude filter (spatiotemporal thresholding, STTH) to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared to TTP in phantom. Moreover, in a cohort of 14 healthy subjects, STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared to conventional TTP. PMID:28092532

  17. Improved Shear Wave Group Velocity Estimation Method Based on Spatiotemporal Peak and Thresholding Motion Search.

    PubMed

    Amador Carrascal, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F; Urban, Matthew W

    2017-04-01

    Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index, ultrasound scanners, scanning protocols, and ultrasound image quality. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this paper, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time [spatiotemporal peak (STP)]; the second method applies an amplitude filter [spatiotemporal thresholding (STTH)] to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared with TTP in phantom. Moreover, in a cohort of 14 healthy subjects, STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared with conventional TTP.
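
    A minimal sketch of time-to-peak shear wave group speed estimation with an amplitude threshold, in the spirit of the STTH variant described above; for simplicity it uses per-position time-to-peak rather than the full spatiotemporal search, and the simulated wave speed, geometry, and threshold are assumptions, not the authors' processing chain.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    c_true = 2.0                                       # shear wave group speed [m/s] (assumed)
    x = np.linspace(0.002, 0.012, 21)                  # lateral positions [m]
    t = np.linspace(0, 0.02, 400)                      # time axis [s]
    arrival = x / c_true
    motion = np.exp(-0.5 * ((t[None, :] - arrival[:, None]) / 0.0008) ** 2)
    motion *= np.linspace(1.0, 0.3, x.size)[:, None]   # amplitude decay with distance
    motion += rng.normal(0, 0.05, motion.shape)        # measurement noise

    ttp = t[np.argmax(motion, axis=1)]                 # time of peak motion at each position
    peak_amp = motion.max(axis=1)
    keep = peak_amp > 0.4 * peak_amp.max()             # amplitude threshold (STTH-like filter)

    speed = np.polyfit(ttp[keep], x[keep], 1)[0]       # dx/dt from a linear fit
    print(f"estimated group speed: {speed:.2f} m/s (true {c_true} m/s)")
    ```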

  18. Assessment of the risk of introducing foot-and-mouth disease into Panama via a ferry operating between Cartagena, Colombia and Colon, Panama.

    PubMed

    White, W R; Crom, R L; Walker, K D

    1996-07-23

    It should be emphasized that the proposed ferry hazard categorizations do not represent absolute risks for introducing FMD into Panama, but instead provide a systematic method for comparing and estimating risks in the absence of quantitative data. A hazard rating of high may not necessarily represent a high quantitative risk for the introduction of FMD, but is high when compared to other scenarios. A low hazard rating may estimate a low quantitative risk of importing FMD, but economic consequences of a potential outbreak should also be considered. When further data become available, a more complete assessment of the risks of the Crucero Express compared to airplanes, cargo boats, and small boats can be performed. At present, the risk of the Crucero Express is at least as low as the other transport modes described above. Since vehicles are not presently allowed transport from Colombia to Panama, they present no risk to Panama, but with proper cleaning and disinfection procedures, vehicles can be permitted with low risk. However, the Crucero Express can carry 125 vehicles, and thorough cleaning and disinfection of this many cars will require modern and efficient facilities not yet present at either port.

  19. Comparison of detection limits in environmental analysis--is it possible? An approach on quality assurance in the lower working range by verification.

    PubMed

    Geiss, S; Einax, J W

    2001-07-01

    Detection limit, reporting limit and limit of quantitation are analytical parameters which describe the power of analytical methods. These parameters are used internally for quality assurance and externally for competitive purposes, especially in the case of trace analysis in environmental compartments. The wide variety of possibilities for computing or obtaining these measures in the literature and in legislative rules makes any comparison difficult. Additionally, a host of terms have been used within the analytical community to describe detection and quantitation capabilities. Without trying to impose an order on this variety of terms, this paper aims to provide a practical proposal for answering the analysts' main questions concerning the quality measures above. These main questions and the related parameters are explained and graphically demonstrated. Estimation and verification of these parameters are the two steps needed to obtain realistic measures. A rule for practical verification is given in a table from which the analyst can read what to measure, what to estimate and which criteria have to be fulfilled. In this manner, the verified parameters (detection limit, reporting limit and limit of quantitation) become comparable, and the analyst is responsible for the unambiguity and reliability of these measures.

  20. A feeling for the numbers in biology

    PubMed Central

    Phillips, Rob; Milo, Ron

    2009-01-01

    Although the quantitative description of biological systems has been going on for centuries, recent advances in the measurement of phenomena ranging from metabolism to gene expression to signal transduction have resulted in a new emphasis on biological numeracy. This article describes the confluence of two different approaches to biological numbers. First, an impressive array of quantitative measurements make it possible to develop intuition about biological numbers ranging from how many gigatons of atmospheric carbon are fixed every year in the process of photosynthesis to the number of membrane transporters needed to provide sugars to rapidly dividing Escherichia coli cells. As a result of the vast array of such quantitative data, the BioNumbers web site has recently been developed as a repository for biology by the numbers. Second, a complementary and powerful tradition of numerical estimates familiar from the physical sciences and canonized in the so-called “Fermi problems” calls for efforts to estimate key biological quantities on the basis of a few foundational facts and simple ideas from physics and chemistry. In this article, we describe these two approaches and illustrate their synergism in several particularly appealing case studies. These case studies reveal the impact that an emphasis on numbers can have on important biological questions. PMID:20018695

  1. The hormesis database: the occurrence of hormetic dose responses in the toxicological literature.

    PubMed

    Calabrese, Edward J; Blain, Robyn B

    2011-10-01

    In 2005 we published an assessment of dose responses that satisfied a priori evaluative criteria for inclusion within the relational retrieval hormesis database (Calabrese and Blain, 2005). The database included information on study characteristics (e.g., biological model, gender, age and other relevant aspects, number of doses, dose distribution/range, quantitative features of the dose response, temporal features/repeat measures, and physical/chemical properties of the agents). The 2005 article covered information for about 5000 dose responses; the present article has been expanded to cover approximately 9000 dose responses. This assessment extends and strengthens the conclusion of the 2005 paper that the hormesis concept is broadly generalizable, being independent of biological model, endpoint measured and chemical class/physical agent. It also confirmed the definable quantitative features of hormetic dose responses in which the strong majority of dose responses display maximum stimulation less than twice that of the control group and a stimulatory width that is within approximately 10-20-fold of the estimated toxicological or pharmacological threshold. The remarkable consistency of the quantitative features of the hormetic dose response suggests that hormesis may provide an estimate of biological plasticity that is broadly generalized across plant, microbial and animal (invertebrate and vertebrate) models. Copyright © 2011 Elsevier Inc. All rights reserved.

  2. Method for accurate quantitation of background tissue optical properties in the presence of emission from a strong fluorescence marker

    NASA Astrophysics Data System (ADS)

    Bravo, Jaime; Davis, Scott C.; Roberts, David W.; Paulsen, Keith D.; Kanick, Stephen C.

    2015-03-01

    Quantification of targeted fluorescence markers during neurosurgery has the potential to improve and standardize surgical distinction between normal and cancerous tissues. However, quantitative analysis of marker fluorescence is complicated by tissue background absorption and scattering properties. Correction algorithms that transform raw fluorescence intensity into quantitative units, independent of absorption and scattering, require a paired measurement of localized white light reflectance to provide estimates of the optical properties. This study focuses on the unique problem of developing a spectral analysis algorithm to extract tissue absorption and scattering properties from white light spectra that contain contributions from both elastically scattered photons and fluorescence emission from a strong fluorophore (i.e. fluorescein). A fiber-optic reflectance device was used to perform measurements in a small set of optical phantoms, constructed with Intralipid (1% lipid), whole blood (1% volume fraction) and fluorescein (0.16-10 μg/mL). Results show that the novel spectral analysis algorithm yields accurate estimates of tissue parameters independent of fluorescein concentration, with relative errors of blood volume fraction, blood oxygenation fraction (BOF), and the reduced scattering coefficient (at 521 nm) of <7%, <1%, and <22%, respectively. These data represent a first step towards quantification of fluorescein in tissue in vivo.

  3. Quantitative microbial risk assessment model for Legionnaires' disease: assessment of human exposures for selected spa outbreaks.

    PubMed

    Armstrong, Thomas W; Haas, Charles N

    2007-08-01

    Evaluation of a quantitative microbial risk assessment (QMRA) model for Legionnaires' disease (LD) required Legionella exposure estimates for several well-documented LD outbreaks. Reports for a whirlpool spa and two natural spring spa outbreaks provided data for the exposure assessment, as well as rates of infection and mortality. Exposure estimates for the whirlpool spa outbreak employed aerosol generation, water composition, exposure duration data, and building ventilation parameters with a two-zone model. Estimates for the natural hot springs outbreaks used bacterial water to air partitioning coefficients and exposure duration information. The air concentration and dose calculations used input parameter distributions with Monte Carlo simulations to estimate exposures as probability distributions. The assessment considered two sets of assumptions about the transfer of Legionella from the water phase to the aerosol emitted from the whirlpool spa. The estimated air concentration near the whirlpool spa was 5 to 18 colony forming units per cubic meter (CFU/m³) and 50 to 180 CFU/m³ for each of the alternate assumptions. The estimated 95th percentile ranges of Legionella dose for workers within 15 m of the whirlpool spa were 0.13-3.4 CFU and 1.3-34.5 CFU, respectively. The modeling for hot springs Spas 1 and 2 resulted in estimated arithmetic mean air concentrations of 360 and 17 CFU/m³, respectively, and 95th percentile ranges for Legionella dose of 28 to 67 CFU and 1.1 to 3.7 CFU, respectively. The Legionella air concentration estimates fall in the range of limited reports on air concentrations of Legionella (0.33 to 190 CFU/m³) near showers, aerated faucets, and baths during filling with Legionella-contaminated water. These measurements may provide some indication that the estimates are of a reasonable magnitude, but they do not clarify the exposure estimates' accuracy, since they were not obtained during LD outbreaks. Further research to improve the data used for the Legionella exposure assessment would strengthen the results. Several of the primary additional data needs include improved data for bacterial water to air partitioning coefficients, better accounting of time-activity-distance patterns and exposure potential in outbreak reports, and data for Legionella-containing aerosol viability decay instead of loss of capability for growth in culture.
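
    A short sketch of the Monte Carlo exposure step in a QMRA of this kind: dose is the product of sampled air concentration, breathing rate, and exposure duration. The distributions below are assumptions for illustration, not the study's inputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000

    # Assumed input distributions (illustrative only).
    conc     = rng.lognormal(mean=np.log(10.0), sigma=0.8, size=n)   # air concentration [CFU/m^3]
    breath   = rng.normal(1.0, 0.2, size=n).clip(0.3, None)          # breathing rate [m^3/h]
    duration = rng.uniform(0.25, 2.0, size=n)                        # time near the source [h]

    dose = conc * breath * duration                                  # inhaled dose [CFU]
    print("median dose: %.1f CFU" % np.median(dose))
    print("2.5th-97.5th percentile: %.1f-%.1f CFU" % tuple(np.percentile(dose, [2.5, 97.5])))
    ```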

  4. Consumer product chemical weight fractions from ingredient lists.

    PubMed

    Isaacs, Kristin K; Phillips, Katherine A; Biryol, Derya; Dionisio, Kathie L; Price, Paul S

    2018-05-01

    Assessing human exposures to chemicals in consumer products requires composition information. However, comprehensive composition data for products in commerce are not generally available. Many consumer products have reported ingredient lists that are constructed using specific guidelines. A probabilistic model was developed to estimate quantitative weight fraction (WF) values that are consistent with the rank of an ingredient in the list, the number of reported ingredients, and labeling rules. The model provides the mean, median, and 95% upper and lower confidence limit WFs for ingredients of any rank in lists of any length. WFs predicted by the model compared favorably with those reported on Material Safety Data Sheets. Predictions for chemicals known to provide specific functions in products were also found to reasonably agree with reported WFs. The model was applied to a selection of publicly available ingredient lists, thereby estimating WFs for 1293 unique ingredients in 1123 products in 81 product categories. Predicted WFs, although less precise than reported values, can be estimated for large numbers of product-chemical combinations and thus provide a useful source of data for high-throughput or screening-level exposure assessments.
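
    A small Monte Carlo sketch of the idea of rank-conditional weight fractions: draw random compositions that sum to one, order them as on a label, and summarize each rank. This is an illustrative stand-in, not the published model or its labeling-rule constraints.

    ```python
    import numpy as np

    def wf_by_rank(n_ingredients, n_draws=200_000, seed=0):
        """Summarize weight fractions by ingredient-list rank for a given list length."""
        rng = np.random.default_rng(seed)
        comps = rng.dirichlet(np.ones(n_ingredients), size=n_draws)   # random compositions summing to 1
        comps = -np.sort(-comps, axis=1)                              # descending order, as on a label
        med = np.median(comps, axis=0)
        lo, hi = np.percentile(comps, [2.5, 97.5], axis=0)
        return med, lo, hi

    med, lo, hi = wf_by_rank(8)
    for r in range(8):
        print(f"rank {r+1}: median {med[r]:.3f}  95% interval [{lo[r]:.3f}, {hi[r]:.3f}]")
    ```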

  5. Variability of Kelvin wave momentum flux from high-resolution radiosonde and radio occultation data

    NASA Astrophysics Data System (ADS)

    Sjoberg, J. P.; Zeng, Z.; Ho, S. P.; Birner, T.; Anthes, R. A.; Johnson, R. H.

    2017-12-01

    Direct measurement of momentum flux from Kelvin waves in the stratosphere remains challenging. Constraining this flux from observations is an important step towards constraining the flux from models. Here we present results from analyses using linear theory to estimate the Kelvin wave amplitudes and momentum fluxes from both high-resolution radiosondes and from radio occultation (RO) data. These radiosonde data are from a contiguous 11-year span of soundings performed at two Department of Energy Atmospheric Radiation Measurement sites, while the RO data span 14 years from multiple satellite missions. Daily time series of the flux from both sources are found to be in quantitative agreement with previous studies. Climatological analyses of these data reveal the expected seasonal cycle and variability associated with the quasi-biennial oscillation. Though both data sets provide measurements on distinct spatial and temporal scales, the estimated flux from each provides insight into separate but complementary aspects of how the Kelvin waves affect the stratosphere. Namely, flux derived from radiosonde sites provides details on the regional Kelvin wave variability, while the flux from RO data provides zonal-mean estimates.

  6. Estimation of whole body fat from appendicular soft tissue from peripheral quantitative computed tomography in adolescent girls

    PubMed Central

    Lee, Vinson R.; Blew, Rob M.; Farr, Josh N.; Tomas, Rita; Lohman, Timothy G.; Going, Scott B.

    2013-01-01

    Objective Assess the utility of peripheral quantitative computed tomography (pQCT) for estimating whole body fat in adolescent girls. Research Methods and Procedures Our sample included 458 girls (aged 10.7 ± 1.1y, mean BMI = 18.5 ± 3.3 kg/m2) who had DXA scans for whole body percent fat (DXA %Fat). Soft tissue analysis of pQCT scans provided thigh and calf subcutaneous percent fat and thigh and calf muscle density (muscle fat content surrogates). Anthropometric variables included weight, height and BMI. Indices of maturity included age and maturity offset. The total sample was split into validation (VS; n = 304) and cross-validation (CS; n = 154) samples. Linear regression was used to develop prediction equations for estimating DXA %Fat from anthropometric variables and pQCT-derived soft tissue components in VS and the best prediction equation was applied to CS. Results Thigh and calf SFA %Fat were positively correlated with DXA %Fat (r = 0.84 to 0.85; p <0.001) and thigh and calf muscle densities were inversely related to DXA %Fat (r = −0.30 to −0.44; p < 0.001). The best equation for estimating %Fat included thigh and calf SFA %Fat and thigh and calf muscle density (adj. R2 = 0.90; SEE = 2.7%). Bland-Altman analysis in CS showed accurate estimates of percent fat (adj. R2 = 0.89; SEE = 2.7%) with no bias. Discussion Peripheral QCT derived indices of adiposity can be used to accurately estimate whole body percent fat in adolescent girls. PMID:25147482

  7. The Effect of Pickling on Blue Borscht Gelatin and Other Interesting Diffusive Phenomena.

    ERIC Educational Resources Information Center

    Davis, Lawrence C.; Chou, Nancy C.

    1998-01-01

    Presents some simple demonstrations that students can construct for themselves in class to learn the difference between diffusion and convection rates. Uses cabbage leaves and gelatin and focuses on diffusion in ungelified media, a quantitative diffusion estimate with hydroxyl ions, and a quantitative diffusion estimate with photons. (DDR)

  8. Estimation of 3-D conduction velocity vector fields from cardiac mapping data.

    PubMed

    Barnette, A R; Bayly, P V; Zhang, S; Walcott, G P; Ideker, R E; Smith, W M

    2000-08-01

    A method to estimate three-dimensional (3-D) conduction velocity vector fields in cardiac tissue is presented. The speed and direction of propagation are found from polynomial "surfaces" fitted to space-time (x, y, z, t) coordinates of cardiac activity. The technique is applied to sinus rhythm and paced rhythm mapped with plunge needles at 396-466 sites in the canine myocardium. The method was validated on simulated 3-D plane and spherical waves. For simulated data, conduction velocities were estimated with an accuracy of 1%-2%. In experimental data, estimates of conduction speeds during paced rhythm were slower than those found during normal sinus rhythm. Vector directions were also found to differ between different types of beats. The technique was able to distinguish between premature ventricular contractions and sinus beats and between sinus and paced beats. The proposed approach to computing velocity vector fields provides an automated, physiological, and quantitative description of local electrical activity in 3-D tissue. This method may provide insight into abnormal conduction associated with fatal ventricular arrhythmias.
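
    A minimal sketch of the velocity-field idea: fit activation times with a polynomial surface and convert the local gradient (slowness) into a conduction velocity vector, v = ∇t / |∇t|². The simulated plane wave and the quadratic fit below are assumptions for illustration, not the paper's implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    v_true = np.array([0.5, 0.2, 0.1])                  # plane-wave velocity [m/s] (assumed)
    pts = rng.uniform(-0.005, 0.005, (200, 3))          # electrode positions [m]
    t_act = pts @ (v_true / np.dot(v_true, v_true))     # plane-wave activation times
    t_act += rng.normal(0, 1e-4, t_act.shape)           # timing noise

    x, y, z = pts.T
    A = np.column_stack([np.ones_like(x), x, y, z, x*y, x*z, y*z, x**2, y**2, z**2])
    coef, *_ = np.linalg.lstsq(A, t_act, rcond=None)    # quadratic activation-time surface

    def velocity_at(p, c):
        px, py, pz = p
        g = np.array([c[1] + c[4]*py + c[5]*pz + 2*c[7]*px,     # gradient of the fitted surface
                      c[2] + c[4]*px + c[6]*pz + 2*c[8]*py,
                      c[3] + c[5]*px + c[6]*py + 2*c[9]*pz])
        return g / np.dot(g, g)                                  # slowness vector -> velocity vector

    print("estimated velocity at origin:", velocity_at(np.zeros(3), coef))
    print("true velocity:", v_true)
    ```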

  9. An assessment of optical properties of dissolved organic material as quantitative source indicators in the Santa Ana River basin, Southern California

    USGS Publications Warehouse

    Bergamaschi, Brian A.; Kalve, Erica; Guenther, Larry; Mendez, Gregory O.; Belitz, Kenneth

    2005-01-01

    The ability to rapidly, reliably, and inexpensively characterize sources of dissolved organic material (DOM) in watersheds would allow water management agencies to more quickly identify problems in water sources, and to more efficiently allocate water resources by, for example, permitting real-time identification of high-quality water suitable for ground-water recharge, or poor-quality water in need of mitigation. This study examined the feasibility of using easily measurable intrinsic optical properties, namely absorbance and fluorescence spectra, as quantitative indicators of DOM sources and, thus, as predictors of water quality. The study focused on the Santa Ana River Basin, in southern California, USA, which comprises an area of dense urban development and an area of intense dairy production. Base flow in the Santa Ana Basin is primarily tertiary treated wastewater discharge. Available hydrologic data indicate that urban and agricultural runoff degrades water quality during storm events by introducing pathogens, nutrients, and other contaminants, including significant amounts of DOM. These conditions provide the basis for evaluating the use of DOM optical properties as a tracer of DOM from different sources. Sample spectra representing four principal DOM sources were identified among all samples collected in 1999 on the basis of basin hydrology, and the distribution of spectral variability within all the sample data. A linear mixing model provided quantitative estimates of relative endmember contribution to sample spectra for monthly, storm, and diurnal samples. The spectral properties of the four sources (endmembers), Pristine Water, Wastewater, Urban Water, and Dairy Water, accounted for 94 percent of the variability in optical properties observed in the study, suggesting that all important DOM sources were represented. The scale and distribution of the residual spectra, the portion not explained by the endmembers, suggested that the endmember spectra selected did not adequately represent Urban Water base flow. However, model assignments of sources generally agreed well with those expected, based on sampling location and hydrology. The results suggest that with a fuller characterization of the endmember spectra, analysis of optical properties will provide rapid quantitative estimates of the relative contribution of DOM sources in the Santa Ana Basin.
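
    A brief sketch of a linear mixing model solved by non-negative least squares: a sample spectrum is expressed as a non-negative combination of endmember spectra. The synthetic endmembers and fractions below are assumptions for illustration, not the Santa Ana data or the study's exact solver.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    n_wl = 120
    # Columns stand in for four endmembers (e.g., Pristine, Wastewater, Urban, Dairy); synthetic spectra.
    endmembers = np.abs(rng.normal(1.0, 0.3, (n_wl, 4)))
    true_frac = np.array([0.1, 0.6, 0.2, 0.1])
    sample = endmembers @ true_frac + rng.normal(0, 0.01, n_wl)   # observed sample spectrum

    frac, resid = nnls(endmembers, sample)        # non-negative least-squares unmixing
    frac_rel = frac / frac.sum()                  # relative endmember contributions
    print("estimated fractions:", np.round(frac_rel, 3), " residual norm:", round(resid, 4))
    ```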

  10. Evaluating a multi-criteria model for hazard assessment in urban design. The Porto Marghera case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luria, Paolo; Aspinall, Peter A

    2003-08-01

    The aim of this paper is to describe a new approach to major industrial hazard assessment, which has been recently studied by the authors in conjunction with the Italian Environmental Protection Agency ('ARPAV'). The real opportunity for developing a different approach arose from the need of the Italian EPA to provide the Venice Port Authority with an appropriate estimation of major industrial hazards in Porto Marghera, an industrial estate near Venice (Italy). However, the standard model, the quantitative risk analysis (QRA), only provided a list of individual quantitative risk values, related to single locations. The experimental model is based on a multi-criteria approach--the Analytic Hierarchy Process--which introduces the use of expert opinions, complementary skills and expertise from different disciplines in conjunction with quantitative traditional analysis. This permitted the generation of quantitative data on risk assessment from a series of qualitative assessments, on the present situation and on three other future scenarios, and use of this information as indirect quantitative measures, which could be aggregated for obtaining the global risk rate. This approach is in line with the main concepts proposed by the last European directive on Major Hazard Accidents, which recommends increasing the participation of operators, taking the other players into account and, moreover, paying more attention to the concepts of 'urban control', 'subjective risk' (risk perception) and intangible factors (factors not directly quantifiable).
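
    A small sketch of the Analytic Hierarchy Process step that turns expert pairwise judgments into quantitative weights via the principal eigenvector, with Saaty's consistency ratio; the comparison matrix is invented for illustration and is not taken from the Porto Marghera study.

    ```python
    import numpy as np

    # Invented pairwise comparison matrix for three criteria (Saaty 1-9 scale).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                              # priority weights from the principal eigenvector

    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index
    print("weights:", np.round(w, 3), " consistency ratio:", round(ci / ri, 3))
    ```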

  11. Repeat 24-hour recalls and locally developed food composition databases: a feasible method to estimate dietary adequacy in a multi-site preconception maternal nutrition RCT

    PubMed Central

    Lander, Rebecca L.; Hambidge, K. Michael; Krebs, Nancy F.; Westcott, Jamie E.; Garces, Ana; Figueroa, Lester; Tejeda, Gabriela; Lokangaka, Adrien; Diba, Tshilenge S.; Somannavar, Manjunath S.; Honnayya, Ranjitha; Ali, Sumera A.; Khan, Umber S.; McClure, Elizabeth M.; Thorsten, Vanessa R.; Stolka, Kristen B.

    2017-01-01

    ABSTRACT Background: Our aim was to utilize a feasible quantitative methodology to estimate the dietary adequacy of >900 first-trimester pregnant women in poor rural areas of the Democratic Republic of the Congo, Guatemala, India and Pakistan. This paper outlines the dietary methods used. Methods: Local nutritionists were trained at the sites by the lead study nutritionist and received ongoing mentoring throughout the study. Training topics focused on the standardized conduct of repeat multiple-pass 24-hr dietary recalls, including interview techniques, estimation of portion sizes, and construction of a unique site-specific food composition database (FCDB). Each FCDB was based on 13 food groups and included values for moisture, energy, 20 nutrients (i.e. macro- and micronutrients), and phytate (an anti-nutrient). Nutrient values for individual foods or beverages were taken from recently developed FAO-supported regional food composition tables or the USDA national nutrient database. Appropriate adjustments for differences in moisture and application of nutrient retention and yield factors after cooking were applied, as needed. Generic recipes for mixed dishes consumed by the study population were compiled at each site, followed by calculation of a median recipe per 100 g. Each recipe’s nutrient values were included in the FCDB. Final site FCDB checks were planned according to FAO/INFOODS guidelines. Discussion: This dietary strategy provides the opportunity to assess estimated mean group usual energy and nutrient intakes and estimated prevalence of the population ‘at risk’ of inadequate intakes in first-trimester pregnant women living in four low- and middle-income countries. While challenges and limitations exist, this methodology demonstrates the practical application of a quantitative dietary strategy for a large international multi-site nutrition trial, providing within- and between-site comparisons. Moreover, it provides an excellent opportunity for local capacity building and each site FCDB can be easily modified for additional research activities conducted in other populations living in the same area. PMID:28469549

  12. Data analysis in emission tomography using emission-count posteriors

    NASA Astrophysics Data System (ADS)

    Sitek, Arkadiusz

    2012-11-01

    A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.

  13. THE EVOLUTION OF SOLAR FLUX FROM 0.1 nm TO 160 {mu}m: QUANTITATIVE ESTIMATES FOR PLANETARY STUDIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Claire, Mark W.; Sheets, John; Meadows, Victoria S.

    2012-09-20

    Understanding changes in the solar flux over geologic time is vital for understanding the evolution of planetary atmospheres because it affects atmospheric escape and chemistry, as well as climate. We describe a numerical parameterization for wavelength-dependent changes to the non-attenuated solar flux appropriate for most times and places in the solar system. We combine data from the Sun and solar analogs to estimate enhanced UV and X-ray fluxes for the young Sun and use standard solar models to estimate changing visible and infrared fluxes. The parameterization, a series of multipliers relative to the modern top of the atmosphere flux at Earth, is valid from 0.1 nm through the infrared, and from 0.6 Gyr through 6.7 Gyr, and is extended from the solar zero-age main sequence to 8.0 Gyr subject to additional uncertainties. The parameterization is applied to a representative modern day flux, providing quantitative estimates of the wavelength dependence of solar flux for paleodates relevant to the evolution of atmospheres in the solar system (or around other G-type stars). We validate the code by Monte Carlo analysis of uncertainties in stellar age and flux, and with comparisons to the solar proxies κ¹ Cet and EK Dra. The model is applied to the computation of photolysis rates on the Archean Earth.
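
    To illustrate how such a table of multipliers would be applied to a modern reference spectrum, the sketch below scales an illustrative flux array with wavelength-dependent factors. The UV/X-ray exponents, band boundaries, and flux values are placeholders rather than the published parameterization; only the visible/IR scaling follows the standard Gough (1981) luminosity relation.

    ```python
    import numpy as np

    def flux_multiplier(wavelength_nm, age_gyr, modern_age_gyr=4.56):
        """Hypothetical multiplier relative to the modern flux at a given wavelength and age."""
        rel_age = age_gyr / modern_age_gyr
        if wavelength_nm < 150.0:        # X-ray / far-UV: strong enhancement for a young Sun (placeholder exponent)
            return rel_age ** -1.2
        elif wavelength_nm < 300.0:      # near-UV: milder enhancement (placeholder exponent)
            return rel_age ** -0.5
        else:                            # visible/IR: faint young Sun, Gough (1981) luminosity relation
            return 1.0 / (1.0 + 0.4 * (1.0 - rel_age))

    wavelengths = np.array([0.5, 100.0, 250.0, 550.0, 1600.0])    # nm
    modern_flux = np.array([1e-4, 5e-3, 0.3, 1.8, 0.25])          # W m-2 nm-1, illustrative values only
    age = 2.5                                                     # Gyr after solar formation
    scaled = modern_flux * np.array([flux_multiplier(w, age) for w in wavelengths])
    print(scaled)
    ```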

  14. Development of software and modification of Q-FISH protocol for estimation of individual telomere length in immunopathology.

    PubMed

    Barkovskaya, M Sh; Bogomolov, A G; Knauer, N Yu; Rubtsov, N B; Kozlov, V A

    2017-04-01

    Telomere length is an important indicator of proliferative cell history and potential. Decreasing telomere length in the cells of the immune system can indicate immune aging in immune-mediated and chronic inflammatory diseases. Quantitative fluorescent in situ hybridization (Q-FISH) of a labeled (C3TA2)3 peptide nucleic acid probe onto fixed metaphase cells followed by digital image microscopy allows the evaluation of telomere length in the arms of individual chromosomes. Computer-assisted analysis of microscopic images can provide quantitative information on the number of telomeric repeats in individual telomeres. We developed new software to estimate telomere length. The MeTeLen software contains new options that can be used to solve some Q-FISH and microscopy problems, including correction of irregular light effects and elimination of background fluorescence. The identification and description of chromosomes and chromosome regions are essential to the Q-FISH technique. To improve the quality of cytogenetic analysis after Q-FISH, we optimized the temperature and time of DNA denaturation to get better DAPI-banding of metaphase chromosomes. MeTeLen was tested by comparing telomere length estimations for sister chromatids, background fluorescence estimations, and correction of nonuniform light effects. The application of the developed software for analysis of telomere length in patients with rheumatoid arthritis was demonstrated.

  15. Real-Time PCR Quantification Using A Variable Reaction Efficiency Model

    PubMed Central

    Platts, Adrian E.; Johnson, Graham D.; Linnemann, Amelia K.; Krawetz, Stephen A.

    2008-01-01

    Quantitative real-time PCR remains a cornerstone technique in gene expression analysis and sequence characterization. Despite the importance of the approach to experimental biology, the confident assignment of reaction efficiency to the early cycles of real-time PCR reactions remains problematic. Considerable noise may be generated where few cycles in the amplification are available to estimate peak efficiency. An alternate approach that uses data from beyond the log-linear amplification phase is explored with the aim of reducing noise and adding confidence to efficiency estimates. PCR reaction efficiency is regressed to estimate the per-cycle profile of an asymptotically departed peak efficiency, even when this is not closely approximated in the measurable cycles. The process can be repeated over replicates to develop a robust estimate of peak reaction efficiency. This leads to an estimate of the maximum reaction efficiency that may be considered primer-design specific. Using a series of biological scenarios, we demonstrate that this approach can provide an accurate estimate of initial template concentration. PMID:18570886
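
    A simplified sketch of the underlying idea (not the authors' exact model): per-cycle efficiency computed from background-subtracted fluorescence declines as product accumulates, and regressing efficiency against fluorescence and extrapolating back to zero product recovers the peak efficiency that applied in the unmeasurable early cycles. The amplification curve below is simulated.

    ```python
    import numpy as np

    # Simulate a reaction in which efficiency falls linearly with accumulated product
    E_peak_true, K = 0.90, 2.5
    F = [1e-4]
    for _ in range(40):
        E = E_peak_true * (1.0 - F[-1] / K)   # efficiency decays as fluorescence approaches the plateau K
        F.append(F[-1] * (1.0 + E))
    F = np.array(F)

    # Per-cycle efficiency from the measurable (post-detection-threshold) part of the curve
    detectable = F[:-1] > 0.01
    eff = F[1:] / F[:-1] - 1.0
    slope, intercept = np.polyfit(F[:-1][detectable], eff[detectable], 1)

    # The intercept is the extrapolated efficiency at zero accumulated product, i.e. the peak efficiency
    print(f"Peak efficiency regressed from late cycles: {intercept:.2f} (true value {E_peak_true})")
    ```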

  16. Evaluation of spatial filtering on the accuracy of wheat area estimate

    NASA Technical Reports Server (NTRS)

    Dejesusparada, N. (Principal Investigator); Moreira, M. A.; Chen, S. C.; Delima, A. M.

    1982-01-01

    A 3 x 3 pixel spatial filter for postclassification was used for wheat classification to evaluate the effects of this procedure on the accuracy of area estimation using LANDSAT digital data obtained from a single pass. Quantitative analyses were carried out in five test sites (approx 40 sq km each) and t tests showed that filtering with threshold values significantly decreased errors of commission and omission. In area estimation, filtering reduced the overestimate from 4.5% to 2.7%, and the root-mean-square error decreased from 126.18 ha to 107.02 ha. Extrapolating the same automatic classification procedure with postclassification spatial filtering to the whole study area reduced the overestimate from 10.9% to 9.7%. It is concluded that when single-pass LANDSAT data are used for crop identification and area estimation, the postclassification procedure using a spatial filter provides a more accurate area estimate by reducing classification errors.
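
    A minimal sketch of a 3 x 3 majority ("mode") post-classification filter of the kind evaluated here: each pixel is replaced by the most frequent class label in its neighbourhood before area totals are computed. The classified map is random toy data and the pixel area is an approximate MSS value, not the study's figures.

    ```python
    import numpy as np
    from scipy.ndimage import generic_filter

    def majority(values):
        # most frequent class label within the 3x3 footprint
        return np.bincount(values.astype(int)).argmax()

    rng = np.random.default_rng(0)
    classified = rng.integers(0, 2, size=(20, 20))        # 0 = other, 1 = wheat (toy map)
    filtered = generic_filter(classified, majority, size=3, mode="nearest")

    pixel_area_ha = 0.45   # approximate LANDSAT MSS pixel area in hectares (assumption)
    print("Wheat area before filtering:", classified.sum() * pixel_area_ha, "ha")
    print("Wheat area after filtering: ", filtered.sum() * pixel_area_ha, "ha")
    ```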

  17. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three-dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass-specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. 18F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations were compared to those predicted from the expired air and venous blood samples. The glucose analog 18F-3-deoxy-3-fluoro-D-glucose (3-FDG) was used for quantitating the membrane transport rate of glucose. The measured data indicated that the phosphorylation rate of 3-FDG was low enough to allow accurate estimation of the transport rate using a two-compartment model.
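
    A sketch of the kind of two-compartment (blood-tissue) model described, dCt/dt = K1*Ca(t) - k2*Ct(t): a tissue time-activity curve is simulated from an assumed arterial input function and K1 and k2 are recovered by nonlinear least squares. The input function shape and parameter values are illustrative only, not the thesis protocol.

    ```python
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import curve_fit

    t = np.linspace(0, 10, 60)                  # minutes
    Ca = 5.0 * t * np.exp(-1.5 * t)             # illustrative arterial input function

    def tissue_curve(t, K1, k2):
        # Solve dCt/dt = K1*Ca(t) - k2*Ct(t) for the tissue concentration
        Ca_interp = lambda tau: np.interp(tau, t, Ca)
        dCt = lambda Ct, tau: K1 * Ca_interp(tau) - k2 * Ct
        return odeint(dCt, 0.0, t).ravel()

    true_K1, true_k2 = 0.8, 0.4                 # mL/min/g and 1/min, illustrative
    rng = np.random.default_rng(1)
    measured = tissue_curve(t, true_K1, true_k2) + rng.normal(0, 0.01, t.size)

    (K1_est, k2_est), _ = curve_fit(tissue_curve, t, measured, p0=[0.5, 0.5])
    print(f"K1 = {K1_est:.2f}, k2 = {k2_est:.2f}, partition coefficient = {K1_est / k2_est:.2f}")
    ```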

  18. Dual Nozzle Aerodynamic and Cooling Analysis Study.

    DTIC Science & Technology

    1981-02-27

    program and to the aerodynamic model computer program. This procedure was used to define two secondary nozzle contours for the baseline configuration...both the dual-throat and dual-expander concepts. Advanced analytical techniques were utilized to provide quantitative estimates of the bleed flow...preliminary heat transfer analysis of both concepts, and (5) engineering analysis of data from the NASA/MSFC hot-fire testing of a dual-throat

  19. Development of an SRM method for absolute quantitation of MYDGF/C19orf10 protein.

    PubMed

    Dwivedi, Ravi C; Krokhin, Oleg V; El-Gabalawy, Hani S; Wilkins, John A

    2016-06-01

    To develop an MS-based selected reaction monitoring (SRM) assay for quantitation of myeloid-derived growth factor (MYDGF), formerly chromosome 19 open reading frame (C19orf10). Candidate reporter peptides were identified in digests of recombinant MYDGF. Isotopically labeled forms of these reporter peptides were employed as internal standards for assay development. Two reference peptides, SYLYFQTFFK and GAEIEYAMAYSK, were selected, with respective LOQs of 42 and 380 attomoles per injection. When the assay was applied to human serum and synovial fluid, sensitivity was reduced and quantitation was not achievable. However, the partial depletion of albumin and immunoglobulin from synovial fluids provided estimates of 300-650 femtomoles per injection (0.7-1.6 nanomolar (nM) fluid concentrations) in three of the six samples analyzed. A validated sensitive assay for the quantitation of MYDGF in biological fluids was developed. However, the endogenous levels of MYDGF in such fluids are at or below the current levels of quantitation. The levels of MYDGF are lower than those previously reported using an ELISA. The current results suggest that additional steps may be required to remove high abundance proteins or to enrich MYDGF for SRM-based quantitation. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Quantitative genetic versions of Hamilton's rule with empirical applications

    PubMed Central

    McGlothlin, Joel W.; Wolf, Jason B.; Brodie, Edmund D.; Moore, Allen J.

    2014-01-01

    Hamilton's theory of inclusive fitness revolutionized our understanding of the evolution of social interactions. Surprisingly, an incorporation of Hamilton's perspective into the quantitative genetic theory of phenotypic evolution has been slow, despite the popularity of quantitative genetics in evolutionary studies. Here, we discuss several versions of Hamilton's rule for social evolution from a quantitative genetic perspective, emphasizing its utility in empirical applications. Although evolutionary quantitative genetics offers methods to measure each of the critical parameters of Hamilton's rule, empirical work has lagged behind theory. In particular, we lack studies of selection on altruistic traits in the wild. Fitness costs and benefits of altruism can be estimated using a simple extension of phenotypic selection analysis that incorporates the traits of social interactants. We also discuss the importance of considering the genetic influence of the social environment, or indirect genetic effects (IGEs), in the context of Hamilton's rule. Research in social evolution has generated an extensive body of empirical work focusing—with good reason—almost solely on relatedness. We argue that quantifying the roles of social and non-social components of selection and IGEs, in addition to relatedness, is now timely and should provide unique additional insights into social evolution. PMID:24686930

  1. Methods to estimate the transfer of contaminants into recycling products - A case study from Austria.

    PubMed

    Knapp, Julika; Allesch, Astrid; Müller, Wolfgang; Bockreis, Anke

    2017-11-01

    Recycling of waste materials is desirable to reduce the consumption of limited primary resources, but also includes the risk of recycling unwanted, hazardous substances. In Austria, the legal framework demands secondary products must not present a higher risk than comparable products derived from primary resources. However, the act provides no definition on how to assess this risk potential. This paper describes the development of different quantitative and qualitative methods to estimate the transfer of contaminants in recycling processes. The quantitative methods comprise the comparison of concentrations of harmful substances in recycling products to corresponding primary products and to existing limit values. The developed evaluation matrix, which considers further aspects, allows for the assessment of the qualitative risk potential. The results show that, depending on the assessed waste fraction, particular contaminants can be critical. Their concentrations were higher than in comparable primary materials and did not comply with existing limit values. On the other hand, the results show that a long-term, well-established quality control system can assure compliance with the limit values. The results of the qualitative assessment obtained with the evaluation matrix support the results of the quantitative assessment. Therefore, the evaluation matrix can be suitable to quickly screen waste streams used for recycling to estimate their potential environmental and health risks. To prevent the transfer of contaminants into product cycles, improved data of relevant substances in secondary resources are necessary. In addition, regulations for material recycling are required to assure adequate quality control measures, including limit values. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Improved dose-volume histogram estimates for radiopharmaceutical therapy by optimizing quantitative SPECT reconstruction parameters

    NASA Astrophysics Data System (ADS)

    Cheng, Lishui; Hobbs, Robert F.; Segars, Paul W.; Sgouros, George; Frey, Eric C.

    2013-06-01

    In radiopharmaceutical therapy, an understanding of the dose distribution in normal and target tissues is important for optimizing treatment. Three-dimensional (3D) dosimetry takes into account patient anatomy and the nonuniform uptake of radiopharmaceuticals in tissues. Dose-volume histograms (DVHs) provide a useful summary representation of the 3D dose distribution and have been widely used for external beam treatment planning. Reliable 3D dosimetry requires an accurate 3D radioactivity distribution as the input. However, activity distribution estimates from SPECT are corrupted by noise and partial volume effects (PVEs). In this work, we systematically investigated OS-EM based quantitative SPECT (QSPECT) image reconstruction in terms of its effect on DVHs estimates. A modified 3D NURBS-based Cardiac-Torso (NCAT) phantom that incorporated a non-uniform kidney model and clinically realistic organ activities and biokinetics was used. Projections were generated using a Monte Carlo (MC) simulation; noise effects were studied using 50 noise realizations with clinical count levels. Activity images were reconstructed using QSPECT with compensation for attenuation, scatter and collimator-detector response (CDR). Dose rate distributions were estimated by convolution of the activity image with a voxel S kernel. Cumulative DVHs were calculated from the phantom and QSPECT images and compared both qualitatively and quantitatively. We found that noise, PVEs, and ringing artifacts due to CDR compensation all degraded histogram estimates. Low-pass filtering and early termination of the iterative process were needed to reduce the effects of noise and ringing artifacts on DVHs, but resulted in increased degradations due to PVEs. Large objects with few features, such as the liver, had more accurate histogram estimates and required fewer iterations and more smoothing for optimal results. Smaller objects with fine details, such as the kidneys, required more iterations and less smoothing at early time points post-radiopharmaceutical administration but more smoothing and fewer iterations at later time points when the total organ activity was lower. The results of this study demonstrate the importance of using optimal reconstruction and regularization parameters. Optimal results were obtained with different parameters at each time point, but using a single set of parameters for all time points produced near-optimal dose-volume histograms.
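
    A minimal sketch of how a cumulative dose-volume histogram is tabulated from a 3D dose map and an organ mask, the summary representation referred to above. The dose array and mask below are synthetic, not QSPECT-derived data.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    dose = rng.gamma(shape=4.0, scale=0.5, size=(32, 32, 32))   # Gy, synthetic dose map
    organ_mask = np.zeros_like(dose, dtype=bool)
    organ_mask[8:24, 8:24, 8:24] = True                         # synthetic organ volume

    organ_doses = dose[organ_mask]
    dose_bins = np.linspace(0, organ_doses.max(), 200)
    # Cumulative DVH: fraction of the organ volume receiving at least each dose level
    cum_dvh = np.array([(organ_doses >= d).mean() for d in dose_bins])

    d_mean = organ_doses.mean()
    d90 = np.percentile(organ_doses, 10)                 # dose received by at least 90% of the volume
    v2 = cum_dvh[np.searchsorted(dose_bins, 2.0)]        # fractional volume receiving >= 2 Gy
    print(f"Mean dose {d_mean:.2f} Gy, D90 {d90:.2f} Gy, V2Gy {v2:.2%}")
    ```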

  3. Comparison of the radiological and chemical toxicity of lead

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beitel, G.A.; Mott, S.

    1995-03-01

    This report estimates the worst-case radiological dose to an individual from ingested lead containing picocurie levels of radionuclides and then compares the calculated radiological health effects to the chemical toxic effects from that same lead. This comparison provides an estimate of the consequences of inadvertently recycling, in the commercial market, lead containing nominally undetectable concentrations of radionuclides. Quantitative expressions for the radiological and chemical toxicities of lead are based on concentrations of lead in the blood stream. The result shows that the chemical toxicity of lead is a greater health hazard, by orders of magnitude, than any probable companion radiation dose.

  4. Uncertainty Estimation Cheat Sheet for Probabilistic Risk Assessment

    NASA Technical Reports Server (NTRS)

    Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.

    2017-01-01

    "Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly," Source Uncertain Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This paper will explain common measures of centrality and dispersion; and-with examples-will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.

  5. Extreme precipitation depths for Texas, excluding the Trans-Pecos region

    USGS Publications Warehouse

    Lanning-Rush, Jennifer; Asquith, William H.; Slade, Raymond M.

    1998-01-01

    Storm durations of 1, 2, 3, 4, 5, and 6 days were investigated for this report. The extreme precipitation depth for a particular area is estimated from an “extreme precipitation curve” (an upper limit or envelope curve developed from graphs of extreme precipitation depths for each climatic region). The extreme precipitation curves were determined using precipitation depth-duration information from a subset (24 “extreme” storms) of 213 “notable” storms documented throughout Texas. The extreme precipitation curves can be used to estimate extreme precipitation depth for a particular area. The extreme precipitation depth represents a limiting depth, which can provide useful comparative information for more quantitative analyses.

  6. Monitoring vegetation conditions from LANDSAT for use in range management

    NASA Technical Reports Server (NTRS)

    Haas, R. H.; Deering, D. W.; Rouse, J. W., Jr.; Schell, J. A.

    1975-01-01

    A summary of the LANDSAT Great Plains Corridor projects and the principal results are presented. Emphasis is given to the use of satellite acquired phenological data for range management and agri-business activities. A convenient method of reducing LANDSAT MSS data to provide quantitative estimates of green biomass on rangelands in the Great Plains is explained. Suggestions for the use of this approach for evaluating range feed conditions are presented. A LANDSAT Follow-on project has been initiated which will employ the green biomass estimation method in a quasi-operational monitoring of range readiness and range feed conditions on a regional scale.
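
    As a hedged sketch of the kind of band combination involved (a normalized difference of near-infrared and red MSS radiance converted to a biomass estimate), the example below uses hypothetical digital counts and hypothetical regression coefficients; it is not the project's calibration.

    ```python
    import numpy as np

    red = np.array([22.0, 18.0, 30.0, 25.0])   # MSS red-band digital counts (hypothetical)
    nir = np.array([48.0, 55.0, 35.0, 60.0])   # MSS near-infrared digital counts (hypothetical)

    nd = (nir - red) / (nir + red)              # normalized difference vegetation index
    a, b = 1500.0, 120.0                        # hypothetical regression coefficients (kg/ha)
    green_biomass_kg_ha = a * nd + b            # simple linear conversion to green biomass
    print(np.round(green_biomass_kg_ha, 1))
    ```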

  7. Lognormal Uncertainty Estimation for Failure Rates

    NASA Technical Reports Server (NTRS)

    Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.

    2017-01-01

    "Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly," Source Uncertain. Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This presentation will explain common measures of centrality and dispersion; and-with examples-will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.

  8. Estimations of BCR-ABL/ABL transcripts by quantitative PCR in chronic myeloid leukaemia after allogeneic bone marrow transplantation and donor lymphocyte infusion.

    PubMed

    Otazú, Ivone B; Tavares, Rita de Cassia B; Hassan, Rocío; Zalcberg, Ilana; Tabak, Daniel G; Seuánez, Héctor N

    2002-02-01

    Serial assays of qualitative (multiplex and nested) and quantitative PCR were carried out for detecting and estimating the level of BCR-ABL transcripts in 39 CML patients following bone marrow transplantation. Seven of these patients, who received donor lymphocyte infusions (DLIs) following relapse, were also monitored. Quantitative estimates of BCR-ABL transcripts were obtained by co-amplification with a competitor sequence. Estimates of ABL transcripts were used as an internal control, and the BCR-ABL/ABL ratio was thus estimated for evaluating the kinetics of residual clones. Twenty-four patients were followed shortly after BMT; two of these patients were in cytogenetic relapse coexisting with very high BCR-ABL levels, while the other 22 were in clinical, haematologic and cytogenetic remission 2-42 months after BMT. In this latter group, seven patients showed a favourable clinical-haematological progression in association with molecular remission, while in 14 patients quantitative PCR assays indicated molecular relapse that was not associated with an early cytogenetic-haematologic relapse. BCR-ABL/ABL levels could not be correlated with the presence of GVHD in the 24 patients after BMT. In all seven patients treated with DLI, high levels of transcripts were detected at least 4 months before the appearance of clinical haematological relapse. Following DLI, five of these patients showed transcript levels decreasing by 2 to 5 logs between 4 and 12 months. Of eight other patients studied long after BMT, five showed molecular relapse up to 117 months post-BMT and only one showed cytogenetic relapse. Our findings indicated that quantitative estimates of BCR-ABL transcripts were valuable for monitoring minimal residual disease in each patient.

  9. Trichloroethylene and Cancer: Systematic and Quantitative Review of Epidemiologic Evidence for Identifying Hazards

    PubMed Central

    Scott, Cheryl Siegel; Jinot, Jennifer

    2011-01-01

    We conducted a meta-analysis focusing on studies with high potential for trichloroethylene (TCE) exposure to provide quantitative evaluations of the evidence for associations between TCE exposure and kidney, liver, and non-Hodgkin lymphoma (NHL) cancers. A systematic review documenting essential design features, exposure assessment approaches, statistical analyses, and potential sources of confounding and bias identified twenty-four cohort and case-control studies on TCE and the three cancers of interest with high potential for exposure, including five recently published case-control studies of kidney cancer or NHL. Fixed- and random-effects models were fitted to the data on overall exposure and on the highest exposure group. Sensitivity analyses examined the influence of individual studies and of alternative risk estimate selections. For overall TCE exposure and kidney cancer, the summary relative risk (RRm) estimate from the random effects model was 1.27 (95% CI: 1.13, 1.43), with a higher RRm for the highest exposure groups (1.58, 95% CI: 1.28, 1.96). The RRm estimates were not overly sensitive to alternative risk estimate selections or to removal of an individual study. There was no apparent heterogeneity or publication bias. For NHL, RRm estimates for overall exposure and for the highest exposure group, respectively, were 1.23 (95% CI: 1.07, 1.42) and 1.43 (95% CI: 1.13, 1.82) and, for liver cancer, 1.29 (95% CI: 1.07, 1.56) and 1.28 (95% CI: 0.93, 1.77). Our findings provide strong support for a causal association between TCE exposure and kidney cancer. The support is strong but less robust for NHL, where issues of study heterogeneity, potential publication bias, and weaker exposure-response results contribute uncertainty, and more limited for liver cancer, where only cohort studies with small numbers of cases were available. PMID:22163205
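
    The sketch below illustrates a DerSimonian-Laird random-effects summary of log relative risks, the kind of RRm reported above. The study-level RRs and confidence intervals are invented for illustration, and the estimator shown is a standard one rather than necessarily the exact fitting procedure used in the paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    rr = np.array([1.1, 1.4, 1.2, 1.6, 0.9])      # per-study relative risks (made up)
    ci_lo = np.array([0.8, 1.0, 0.9, 1.1, 0.6])
    ci_hi = np.array([1.5, 2.0, 1.6, 2.3, 1.4])

    y = np.log(rr)                                            # effects on the log scale
    se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * norm.ppf(0.975))
    w = 1.0 / se ** 2                                         # fixed-effect weights

    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)                        # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)                   # DerSimonian-Laird between-study variance

    w_re = 1.0 / (se ** 2 + tau2)                             # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    lo, hi = np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)
    print(f"RRm = {np.exp(y_re):.2f} (95% CI {lo:.2f}, {hi:.2f}), tau^2 = {tau2:.3f}")
    ```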

  10. Audiovisual quality estimation of mobile phone video cameras with interpretation-based quality approach

    NASA Astrophysics Data System (ADS)

    Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte

    2007-01-01

    We present an effective method for comparing subjective audiovisual quality and the features related to the quality changes of different video cameras. Both quantitative estimation of overall quality and qualitative description of critical quality features are achieved by the method. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. Twenty-six observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of cameras' visual video quality than to the features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimations with audiovisual material as well. The IBQ approach is especially valuable when the induced quality changes are multidimensional.

  11. 77 FR 70727 - Endangered and Threatened Wildlife and Plants; 90-Day Finding on a Petition to List the African...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-27

    ... indirectly through changes in regional climate; and (b) Quantitative research on the relationship of food...). Population Estimates The most quantitative estimate of the historic size of the African lion population... research conducted by Chardonnet et al., three subpopulations were described as consisting of 18 groups...

  12. Principles, performance, and applications of spectral reconstitution (SR) in quantitative analysis of oils by Fourier transform infrared spectroscopy (FT-IR).

    PubMed

    García-González, Diego L; Sedman, Jacqueline; van de Voort, Frederik R

    2013-04-01

    Spectral reconstitution (SR) is a dilution technique developed to facilitate the rapid, automated, and quantitative analysis of viscous oil samples by Fourier transform infrared spectroscopy (FT-IR). This technique involves determining the dilution factor through measurement of an absorption band of a suitable spectral marker added to the diluent, and then spectrally removing the diluent from the sample and multiplying the resulting spectrum to compensate for the effect of dilution on the band intensities. The facsimile spectrum of the neat oil thus obtained can then be qualitatively or quantitatively analyzed for the parameter(s) of interest. The quantitative performance of the SR technique was examined with two transition-metal carbonyl complexes as spectral markers, chromium hexacarbonyl and methylcyclopentadienyl manganese tricarbonyl. The estimation of the volume fraction (VF) of the diluent in a model system, consisting of canola oil diluted to various extents with odorless mineral spirits, served as the basis for assessment of these markers. The relationship between the VF estimates and the true volume fraction (VF(t)) was found to be strongly dependent on the dilution ratio and also depended, to a lesser extent, on the spectral resolution. These dependences are attributable to the effect of changes in matrix polarity on the bandwidth of the ν(CO) marker bands. Excellent VF(t) estimates were obtained by making a polarity correction devised with a variance-spectrum-delineated correction equation. In the absence of such a correction, SR was shown to introduce only a minor and constant bias, provided that polarity differences among all the diluted samples analyzed were minimal. This bias can be built into the calibration of a quantitative FT-IR analytical method by subjecting appropriate calibration standards to the same SR procedure as the samples to be analyzed. The primary purpose of the SR technique is to simplify preparation of diluted samples such that only approximate proportions need to be adhered to, rather than using exact weights or volumes, the marker accounting for minor variations. Additional applications discussed include the use of the SR technique in extraction-based, quantitative, automated FT-IR methods for the determination of moisture, acid number, and base number in lubricating oils, as well as of moisture content in edible oils.
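
    The arithmetic at the core of the SR technique can be sketched as follows: estimate the diluent volume fraction from the marker band, subtract the diluent spectrum in that proportion, and rescale the remainder to a facsimile of the neat-oil spectrum. The spectra below are synthetic Gaussian bands under an ideal-mixing assumption, not real FT-IR data, and the band positions are illustrative.

    ```python
    import numpy as np

    wavenumbers = np.linspace(4000, 650, 1200)
    oil_neat = np.exp(-((wavenumbers - 1745) / 20.0) ** 2)        # synthetic ester band of the neat oil
    diluent = 0.3 * np.exp(-((wavenumbers - 2050) / 15.0) ** 2)   # marker nu(CO) band added to the diluent

    vf_true = 0.4                                                 # true diluent volume fraction
    mixture = (1 - vf_true) * oil_neat + vf_true * diluent        # ideal-mixing assumption

    # Dilution level from the marker band height in the mixture vs. the pure (marked) diluent
    marker = np.argmin(np.abs(wavenumbers - 2050))
    vf_est = mixture[marker] / diluent[marker]

    # Spectrally remove the diluent and rescale to reconstitute the neat-oil spectrum
    facsimile = (mixture - vf_est * diluent) / (1.0 - vf_est)
    print(f"Estimated diluent volume fraction: {vf_est:.2f}")
    print(f"Max reconstruction error: {np.abs(facsimile - oil_neat).max():.2e}")
    ```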

  13. Estimation of sample size and testing power (part 5).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference tests for quantitative and qualitative data with the single-group design, the paired design, or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference tests for quantitative and qualitative data with the above three designs, their realization based on the formulas and the POWER procedure of SAS software, and elaborated them with examples, which will benefit researchers in implementing the repetition principle.
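
    As a worked illustration of the kind of formula involved, the sketch below computes the per-group sample size and the corresponding testing power for a two-group difference test on a quantitative outcome under normal-approximation assumptions. It is a generic textbook relation, not necessarily one of the exact formulas presented in the article.

    ```python
    import math
    from scipy.stats import norm

    alpha, power = 0.05, 0.80
    sigma, delta = 10.0, 5.0    # common SD and minimal detectable difference (illustrative)

    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    # n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2
    n_per_group = 2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2
    print(f"n per group = {math.ceil(n_per_group)}")     # about 63 per group here

    # Testing power for a given n follows from inverting the same relation
    n = 50
    power_at_n = norm.cdf(math.sqrt(n * delta ** 2 / (2 * sigma ** 2)) - z_a)
    print(f"power with n={n} per group: {power_at_n:.2f}")
    ```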

  14. A new mean estimator using auxiliary variables for randomized response models

    NASA Astrophysics Data System (ADS)

    Ozgul, Nilgun; Cingi, Hulya

    2013-10-01

    Randomized response models (RRMs) are commonly used in surveys dealing with sensitive questions such as abortion, alcoholism, sexual orientation, drug taking, annual income, and tax evasion, to ensure interviewee anonymity and to reduce nonresponse rates and biased responses. Starting from the pioneering work of Warner [7], many versions of the RRM have been developed that can deal with quantitative responses. In this study, a new mean estimator is suggested for RRMs with quantitative responses. The mean square error is derived, and a simulation study is performed to show the efficiency of the proposed estimator relative to other existing estimators in RRM.

  15. An optimized framework for quantitative magnetization transfer imaging of the cervical spinal cord in vivo

    PubMed Central

    Grussu, Francesco; Ianus, Andrada; Schneider, Torben; Prados, Ferran; Fairney, James; Ourselin, Sebastien; Alexander, Daniel C.; Cercignani, Mara; Gandini Wheeler‐Kingshott, Claudia A.M.; Samson, Rebecca S.

    2017-01-01

    Purpose To develop a framework to fully characterize quantitative magnetization transfer indices in the human cervical cord in vivo within a clinically feasible time. Methods A dedicated spinal cord imaging protocol for quantitative magnetization transfer was developed using a reduced field-of-view approach with echo planar imaging (EPI) readout. Sequence parameters were optimized based on the Cramer-Rao lower bound. Quantitative model parameters (i.e., bound pool fraction, free and bound pool transverse relaxation times [T2F, T2B], and forward exchange rate [kFB]) were estimated by implementing a numerical model capable of dealing with the novelties of the sequence adopted. The framework was tested on five healthy subjects. Results Cramer-Rao lower bound minimization produces optimal sampling schemes without requiring the establishment of a steady-state MT effect. The proposed framework allows quantitative voxel-wise estimation of model parameters at the resolution typically used for spinal cord imaging (i.e. 0.75 × 0.75 × 5 mm3), with a protocol duration of ∼35 min. Quantitative magnetization transfer parametric maps agree with literature values. Whole-cord mean values are: bound pool fraction = 0.11(±0.01), T2F = 46.5(±1.6) ms, T2B = 11.0(±0.2) µs, and kFB = 1.95(±0.06) Hz. Protocol optimization has a beneficial effect on reproducibility, especially for T2B and kFB. Conclusion The framework developed enables robust characterization of spinal cord microstructure in vivo using qMT. Magn Reson Med 79:2576–2588, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. PMID:28921614

  16. Simulating the Refractive Index Structure Constant ({C}_{n}^{2}) in the Surface Layer at Antarctica with a Mesoscale Model

    NASA Astrophysics Data System (ADS)

    Qing, Chun; Wu, Xiaoqing; Li, Xuebin; Tian, Qiguo; Liu, Dong; Rao, Ruizhong; Zhu, Wenyue

    2018-01-01

    In this paper, we introduce an approach wherein the Weather Research and Forecasting (WRF) model is coupled with the bulk aerodynamic method to estimate the surface layer refractive index structure constant (Cn2) above Taishan Station in Antarctica. First, we use the measured meteorological parameters to estimate Cn2 using the bulk aerodynamic method, and second, we use the WRF model output parameters to estimate Cn2 using the bulk aerodynamic method. Finally, the corresponding Cn2 values from the micro-thermometer are compared with the Cn2 values estimated using the WRF model coupled with the bulk aerodynamic method. We analyzed the statistical operators, namely the bias, root mean square error (RMSE), bias-corrected RMSE (σ), and correlation coefficient (Rxy), in a 20-day data set to assess how this approach performs. In addition, we employ contingency tables to investigate the estimation quality of this approach, which provides complementary key information with respect to the bias, RMSE, σ, and Rxy. The quantitative results are encouraging and permit us to confirm the fine performance of this approach. The main conclusions of this study tell us that this approach provides a positive impact on optimizing the observing time in astronomical applications and provides complementary key information for potential astronomical sites.
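
    A short sketch of the four statistical operators named above, computed on synthetic stand-ins for measured and model-estimated Cn2. Taking the scores on log10 values (because Cn2 spans several orders of magnitude) is an assumption of this illustration, not a statement about the paper's processing.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    measured = 10 ** rng.normal(-14.5, 0.4, 480)               # synthetic Cn2 values (m^-2/3)
    estimated = measured * 10 ** rng.normal(0.05, 0.2, 480)    # synthetic model estimates

    x, y = np.log10(measured), np.log10(estimated)
    bias = np.mean(y - x)
    rmse = np.sqrt(np.mean((y - x) ** 2))
    sigma = np.sqrt(rmse ** 2 - bias ** 2)                     # bias-corrected RMSE
    r_xy = np.corrcoef(x, y)[0, 1]
    print(f"bias={bias:.3f}, RMSE={rmse:.3f}, sigma={sigma:.3f}, Rxy={r_xy:.3f}")
    ```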

  17. Iterative optimization method for design of quantitative magnetization transfer imaging experiments.

    PubMed

    Levesque, Ives R; Sled, John G; Pike, G Bruce

    2011-09-01

    Quantitative magnetization transfer imaging (QMTI) using spoiled gradient echo sequences with pulsed off-resonance saturation can be a time-consuming technique. A method is presented for selection of an optimum experimental design for quantitative magnetization transfer imaging based on the iterative reduction of a discrete sampling of the Z-spectrum. The applicability of the technique is demonstrated for human brain white matter imaging at 1.5 T and 3 T, and optimal designs are produced to target specific model parameters. The optimal number of measurements and the signal-to-noise ratio required for stable parameter estimation are also investigated. In vivo imaging results demonstrate that this optimal design approach substantially improves parameter map quality. The iterative method presented here provides an advantage over free form optimal design methods, in that pragmatic design constraints are readily incorporated. In particular, the presented method avoids clustering and repeated measures in the final experimental design, an attractive feature for the purpose of magnetization transfer model validation. The iterative optimal design technique is general and can be applied to any method of quantitative magnetization transfer imaging. Copyright © 2011 Wiley-Liss, Inc.

  18. A Contemporary Assessment of Lateral Fluxes of Organic Carbon in Inland Waters of the USA and Delivery to Coastal Waters

    NASA Astrophysics Data System (ADS)

    Boyer, E. W.; Alexander, R. B.; Smith, R. A.; Shih, J.; Schwarz, G. E.

    2010-12-01

    Organic carbon (OC) is a critical water quality characteristic in surface waters, as it is an important component of the energy balance and food chains in freshwater and estuarine aquatic ecosystems, is significant in the mobilization and transport of contaminants along flow paths, and is associated with the formation of known carcinogens in drinking water supplies. The importance of OC dynamics on water quality has been recognized, but challenges remain in quantitatively addressing processes controlling OC fluxes over broad spatial scales in a hydrological context. Here, we: 1) quantified lateral OC fluxes in rivers, streams, and reservoirs across the nation; 2) partitioned how much organic carbon that is stored in lakes, rivers and streams comes from allochthonous sources (produced in the terrestrial landscape) versus autochthonous sources (produced in-stream by primary production); and 3) estimated the delivery of dissolved and total forms of organic carbon to coastal estuaries and embayments. To accomplish this, we developed national-scale models of organic carbon in U.S. surface waters using the spatially referenced regression on watersheds (SPARROW) technique. This approach uses mechanistic formulations, imposes mass balance constraints, and provides a formal parameter estimation structure to statistically estimate sources and fate of OC in terrestrial and aquatic ecosystems. We make use of a GIS based framework to describe sources of organic matter and characteristics of the landscape that affect its fate and transport, from spatial databases providing characterizations of climate, land cover, primary productivity, topography, soils, geology, and water routing. We calibrated and evaluated the model with statistical estimates of organic carbon loads that were observed at 1,125 monitoring stations across the nation. Our results illustrate spatial patterns and magnitudes OC loadings in rivers and reservoirs, highlighting hot spots and suggesting origins of the OC to each location. Further, our results yield quantitative estimates of aquatic OC fluxes for large water regions and for the nation, providing a refined estimate of the role of surface water fluxes of OC in relationship to regional and national carbon budgets. Finally, we are using our simulations to explore the potential role of climate and other changes in the terrestrial environment on OC fluxes in aquatic systems.

  19. Identification and uncertainty estimation of vertical reflectivity profiles using a Lagrangian approach to support quantitative precipitation measurements by weather radar

    NASA Astrophysics Data System (ADS)

    Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.

    2013-09-01

    This paper presents a novel approach to estimate the vertical profile of reflectivity (VPR) from volumetric weather radar data using both a traditional Eulerian and a newly proposed Lagrangian implementation. For this latter implementation, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise linear VPR is estimated for precipitation classified either as stratiform or as neither stratiform nor convective. As a second aspect of this paper, a novel approach is presented which is able to account for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform, and neither stratiform nor convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analyses of the impact of VPR uncertainty show that this aspect accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.

  20. Bayesian data assimilation provides rapid decision support for vector-borne diseases

    PubMed Central

    Jewell, Chris P.; Brown, Richard G.

    2015-01-01

    Predicting the spread of vector-borne diseases in response to incursions requires knowledge of both host and vector demographics in advance of an outbreak. Although host population data are typically available, for novel disease introductions there is a high chance of the pathogen using a vector for which data are unavailable. This presents a barrier to estimating the parameters of dynamical models representing host–vector–pathogen interaction, and hence limits their ability to provide quantitative risk forecasts. The Theileria orientalis (Ikeda) outbreak in New Zealand cattle demonstrates this problem: even though the vector has received extensive laboratory study, a high degree of uncertainty persists over its national demographic distribution. Addressing this, we develop a Bayesian data assimilation approach whereby indirect observations of vector activity inform a seasonal spatio-temporal risk surface within a stochastic epidemic model. We provide quantitative predictions for the future spread of the epidemic, quantifying uncertainty in the model parameters, case infection times and the disease status of undetected infections. Importantly, we demonstrate how our model learns sequentially as the epidemic unfolds and provide evidence for changing epidemic dynamics through time. Our approach therefore provides a significant advance in rapid decision support for novel vector-borne disease outbreaks. PMID:26136225

  1. A meta-analysis of factors affecting trust in human-robot interaction.

    PubMed

    Hancock, Peter A; Billings, Deborah R; Schaefer, Kristin E; Chen, Jessie Y C; de Visser, Ewart J; Parasuraman, Raja

    2011-10-01

    We evaluate and quantify the effects of human, robot, and environmental factors on perceived trust in human-robot interaction (HRI). To date, reviews of trust in HRI have been qualitative or descriptive. Our quantitative review provides a fundamental empirical foundation to advance both theory and practice. Meta-analytic methods were applied to the available literature on trust and HRI. A total of 29 empirical studies were collected, of which 10 met the selection criteria for correlational analysis and 11 for experimental analysis. These studies provided 69 correlational and 47 experimental effect sizes. The overall correlational effect size for trust was r = +0.26, with an experimental effect size of d = +0.71. The effects of human, robot, and environmental characteristics were examined, with particular attention to the robot dimensions of performance and attribute-based factors. The robot performance and attributes were the largest contributors to the development of trust in HRI. Environmental factors played only a moderate role. Factors related to the robot itself, specifically, its performance, had the greatest current association with trust, and environmental factors were moderately associated. There was little evidence for effects of human-related factors. The findings provide quantitative estimates of human, robot, and environmental factors influencing HRI trust. Specifically, the current summary provides effect size estimates that are useful in establishing design and training guidelines with reference to robot-related factors of HRI trust. Furthermore, results indicate that improper trust calibration may be mitigated by the manipulation of robot design. However, many future research needs are identified.

  2. Unbiased estimation of oceanic mean rainfall from satellite borne radiometer measurements

    NASA Technical Reports Server (NTRS)

    Mittal, M. C.

    1981-01-01

    The statistical properties of the radar-derived rainfall obtained during the GARP Atlantic Tropical Experiment (GATE) are used to derive quantitative estimates of the spatial and temporal sampling errors associated with estimating rainfall from brightness temperature measurements such as would be obtained from a satellite-borne microwave radiometer employing a practical-size antenna aperture. A basis is provided for a method of correcting the so-called beam-filling problem, i.e., the effect of nonuniformity of rainfall over the radiometer beamwidth. The method presented employs the statistical properties of the observations themselves without need for physical assumptions beyond those associated with the radiative transfer model. The simulation results presented offer a validation of the estimated accuracy that can be achieved, and the graphs included permit evaluation of the effect of the antenna resolution on both the temporal and spatial sampling errors.

  3. Wide-field spectrally resolved quantitative fluorescence imaging system: toward neurosurgical guidance in glioma resection

    NASA Astrophysics Data System (ADS)

    Xie, Yijing; Thom, Maria; Ebner, Michael; Wykes, Victoria; Desjardins, Adrien; Miserocchi, Anna; Ourselin, Sebastien; McEvoy, Andrew W.; Vercauteren, Tom

    2017-11-01

    In high-grade glioma surgery, tumor resection is often guided by intraoperative fluorescence imaging. 5-aminolevulinic acid-induced protoporphyrin IX (PpIX) provides fluorescent contrast between normal brain tissue and glioma tissue, thus achieving improved tumor delineation and prolonged patient survival compared with conventional white-light-guided resection. However, commercially available fluorescence imaging systems rely solely on visual assessment of fluorescence patterns by the surgeon, which makes the resection more subjective than necessary. We developed a wide-field spectrally resolved fluorescence imaging system utilizing a Generation II scientific CMOS camera and an improved computational model for the precise reconstruction of the PpIX concentration map. In our model, the tissue's optical properties and illumination geometry, which distort the fluorescent emission spectra, are considered. We demonstrate that the CMOS-based system can detect low PpIX concentration at short camera exposure times, while providing high-pixel resolution wide-field images. We show that total variation regularization improves the contrast-to-noise ratio of the reconstructed quantitative concentration map by approximately twofold. Quantitative comparison between the estimated PpIX concentration and tumor histopathology was also investigated to further evaluate the system.

  4. Detecting Autophagy and Autophagy Flux in Chronic Myeloid Leukemia Cells Using a Cyto-ID Fluorescence Spectrophotometric Assay.

    PubMed

    Guo, Sujuan; Pridham, Kevin J; Sheng, Zhi

    2016-01-01

    Autophagy is a catabolic process whereby cellular components are degraded to fuel cells for longer survival during stress. Hence, autophagy plays a vital role in determining cell fate and is central to homeostasis and to the pathogenesis of many human diseases, including chronic myeloid leukemia (CML). It has been well established that autophagy is important for leukemogenesis as well as for drug resistance in CML. Thus, autophagy is an intriguing therapeutic target. However, current approaches that detect autophagy lack reliability and often fail to provide quantitative measurements. To overcome this hurdle and facilitate the development of autophagy-related therapies, we have recently developed an autophagy assay termed the Cyto-ID fluorescence spectrophotometric assay. This method uses a cationic fluorescence dye, Cyto-ID, which specifically labels autophagic compartments and is detected by a spectrophotometer to permit a large-scale and quantitative analysis. As such, it allows rapid, reliable, and quantitative detection of autophagy and estimation of autophagy flux. In this chapter, we further provide technical details of this method and step-by-step protocols for measuring autophagy or autophagy flux in CML cell lines as well as primary hematopoietic cells.

  5. The interrupted power law and the size of shadow banking.

    PubMed

    Fiaschi, Davide; Kondor, Imre; Marsili, Matteo; Volpati, Valerio

    2014-01-01

    Using public data (Forbes Global 2000) we show that the asset sizes for the largest global firms follow a Pareto distribution in an intermediate range, that is "interrupted" by a sharp cut-off in its upper tail, where it is totally dominated by financial firms. This flattening of the distribution contrasts with a large body of empirical literature which finds a Pareto distribution for firm sizes both across countries and over time. Pareto distributions are generally traced back to a mechanism of proportional random growth, based on a regime of constant returns to scale. This makes our findings of an "interrupted" Pareto distribution all the more puzzling, because we provide evidence that financial firms in our sample should operate in such a regime. We claim that the missing mass from the upper tail of the asset size distribution is a consequence of shadow banking activity and that it provides an (upper) estimate of the size of the shadow banking system. This estimate, which we propose as a shadow banking index, compares well with estimates of the Financial Stability Board until 2009, but it shows a sharper rise in shadow banking activity after 2010. Finally, we propose a proportional random growth model that reproduces the observed distribution, thereby providing a quantitative estimate of the intensity of shadow banking activity.
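
    A rough sketch of the missing-mass idea on synthetic data: fit a Pareto exponent over an intermediate range of asset sizes, extrapolate that tail above a high threshold, and take the shortfall of observed assets relative to the extrapolation as the index. The sample, cut-off, and fitting range are invented, and the bounded-range Hill fit used here is only approximate, not the authors' estimation procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    alpha_true, x_min = 1.1, 1.0
    assets = x_min * (1 - rng.random(2000)) ** (-1.0 / alpha_true)   # Pareto sample (bn USD, synthetic)
    assets = np.minimum(assets, 300.0)            # sharp cut-off "interrupting" the upper tail

    # Maximum-likelihood (Hill-type) estimate of the exponent over an intermediate range
    fit_lo = 2.0
    fit = assets[(assets >= fit_lo) & (assets <= 100.0)]
    alpha_hat = fit.size / np.log(fit / fit_lo).sum()

    # Pareto-extrapolated vs. observed total assets above a high threshold
    threshold = 100.0
    n_above_lo = np.sum(assets >= fit_lo)
    n_tail_expected = n_above_lo * (threshold / fit_lo) ** (-alpha_hat)
    mean_above = threshold * alpha_hat / (alpha_hat - 1.0)   # mean of a Pareto tail (requires alpha > 1)
    missing_mass = n_tail_expected * mean_above - assets[assets > threshold].sum()
    print(f"alpha = {alpha_hat:.2f}, missing-mass index = {missing_mass:.0f} bn")
    ```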

  6. HEALTH AND ENVIRONMENTAL EFFECTS DOCUMENT ...

    EPA Pesticide Factsheets

    Health and Environmental Effects Documents (HEEDS) are prepared for the Office of Solid Waste and Emergency Response (OSWER). This document series is intended to support listings under the Resource Conservation and Recovery Act (RCRA) as well as to provide health-related limits and goals for emergency and remedial actions under the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA). Both published literature and information obtained from Agency Program Office files are evaluated as they pertain to potential human health, aquatic life and environmental effects of hazardous waste constituents. Several quantitative estimates are presented provided sufficient data are available. For systemic toxicants, these include Reference Doses (RfDs) for chronic and subchronic exposures for both the inhalation and oral exposures. In the case of suspected carcinogens, RfDs may not be estimated. Instead, a carcinogenic potency factor, or q1*, is provided. These potency estimates are derived for both oral and inhalation exposures where possible. In addition, unit risk estimates for air and drinking water are presented based on inhalation and oral data, respectively. Reportable quantities (RQs) based on both chronic toxicity and carcinogenicity are derived. The RQ is used to determine the quantity of a hazardous substance for which notification is required in the event of a release as specified under CERCLA.

  7. Assessment of Scheduling and Plan Execution of Apollo 14 Lunar Surface Operations

    NASA Technical Reports Server (NTRS)

    Marquez, Jessica J.

    2010-01-01

    Although over forty years have passed since the first landing on the Moon, there is not yet a comprehensive, quantitative assessment of Apollo extravehicular activities (EVAs). Quantitatively evaluating lunar EVAs will provide a better understanding of the challenges involved with surface operations. This first evaluation of a surface EVA centers on comparing the planned and the as-run timeline, specifically collecting data on discrepancies between durations that were estimated versus executed. Differences were summarized by task categories in order to gain insight into the types of surface operation activities that were most challenging. One Apollo 14 EVA was assessed utilizing the described methodology. Selected metrics and task categorizations were effective, and limitations to this process were identified.

  8. Nonequilibrium fluctuations in metaphase spindles: polarized light microscopy, image registration, and correlation functions

    NASA Astrophysics Data System (ADS)

    Brugués, Jan; Needleman, Daniel J.

    2010-02-01

    Metaphase spindles are highly dynamic, nonequilibrium, steady-state structures. We study the internal fluctuations of spindles by computing spatio-temporal correlation functions of movies obtained from quantitative polarized light microscopy. These correlation functions are only physically meaningful if corrections are made for the net motion of the spindle. We describe our image registration algorithm in detail and we explore its robustness. Finally, we discuss the expression used for the estimation of the correlation function in terms of the nematic order of the microtubules which make up the spindle. Ultimately, studying the form of these correlation functions will provide a quantitative test of the validity of coarse-grained models of spindle structure inspired from liquid crystal physics.

  9. Multiparametric Quantitative Ultrasound Imaging in Assessment of Chronic Kidney Disease.

    PubMed

    Gao, Jing; Perlman, Alan; Kalache, Safa; Berman, Nathaniel; Seshan, Surya; Salvatore, Steven; Smith, Lindsey; Wehrli, Natasha; Waldron, Levi; Kodali, Hanish; Chevalier, James

    2017-11-01

    To evaluate the value of multiparametric quantitative ultrasound imaging in assessing chronic kidney disease (CKD) using kidney biopsy pathologic findings as reference standards. We prospectively measured multiparametric quantitative ultrasound markers with grayscale, spectral Doppler, and acoustic radiation force impulse imaging in 25 patients with CKD before kidney biopsy and 10 healthy volunteers. Based on all pathologic (glomerulosclerosis, interstitial fibrosis/tubular atrophy, arteriosclerosis, and edema) scores, the patients with CKD were classified into mild (no grade 3 and <2 of grade 2) and moderate to severe (at least 2 of grade 2 or 1 of grade 3) CKD groups. Multiparametric quantitative ultrasound parameters included kidney length, cortical thickness, pixel intensity, parenchymal shear wave velocity, intrarenal artery peak systolic velocity (PSV), end-diastolic velocity (EDV), and resistive index. We tested the difference in quantitative ultrasound parameters among mild CKD, moderate to severe CKD, and healthy controls using analysis of variance, analyzed correlations of quantitative ultrasound parameters with pathologic scores and the estimated glomerular filtration rate (GFR) using Pearson correlation coefficients, and examined the diagnostic performance of quantitative ultrasound parameters in determining moderate CKD and an estimated GFR of less than 60 mL/min/1.73 m 2 using receiver operating characteristic curve analysis. There were significant differences in cortical thickness, pixel intensity, PSV, and EDV among the 3 groups (all P < .01). Among quantitative ultrasound parameters, the top areas under the receiver operating characteristic curves for PSV and EDV were 0.88 and 0.97, respectively, for determining pathologic moderate to severe CKD, and 0.76 and 0.86 for estimated GFR of less than 60 mL/min/1.73 m 2 . Moderate to good correlations were found for PSV, EDV, and pixel intensity with pathologic scores and estimated GFR. The PSV, EDV, and pixel intensity are valuable in determining moderate to severe CKD. The value of shear wave velocity in assessing CKD needs further investigation. © 2017 by the American Institute of Ultrasound in Medicine.

  10. On the Agreement between Manual and Automated Methods for Single-Trial Detection and Estimation of Features from Event-Related Potentials

    PubMed Central

    Biurrun Manresa, José A.; Arguissain, Federico G.; Medina Redondo, David E.; Mørch, Carsten D.; Andersen, Ole K.

    2015-01-01

    The agreement between humans and algorithms on whether an event-related potential (ERP) is present or not and the level of variation in the estimated values of its relevant features are largely unknown. Thus, the aim of this study was to determine the categorical and quantitative agreement between manual and automated methods for single-trial detection and estimation of ERP features. To this end, ERPs were elicited in sixteen healthy volunteers using electrical stimulation at graded intensities below and above the nociceptive withdrawal reflex threshold. Presence/absence of an ERP peak (categorical outcome) and its amplitude and latency (quantitative outcome) in each single trial were evaluated independently by two human observers and two automated algorithms taken from existing literature. Categorical agreement was assessed using percentage positive and negative agreement and Cohen’s κ, whereas quantitative agreement was evaluated using Bland-Altman analysis and the coefficient of variation. Typical values for the categorical agreement between manual and automated methods were derived, as well as reference values for the average and maximum differences that can be expected if one method is used instead of the others. Results showed that the human observers presented the highest categorical and quantitative agreement, and that there were large differences among methods in the detection and estimation of quantitative features. In conclusion, substantial care should be taken in the selection of the detection/estimation approach, since factors like stimulation intensity and expected number of trials with/without response can play a significant role in the outcome of a study. PMID:26258532
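
    As a rough illustration of the agreement metrics named above, the sketch below computes Cohen's κ for the categorical outcome and Bland-Altman bias and limits of agreement for a quantitative feature. The decision vectors and amplitudes are made up, and this is not the study's analysis code.

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two binary (0/1) raters on the same trials."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                   # observed agreement
    pe = np.mean(a) * np.mean(b) + (1 - np.mean(a)) * (1 - np.mean(b))
    return (po - pe) / (1 - pe)                            # chance-corrected

def bland_altman(x, y):
    """Bias and 95% limits of agreement between two quantitative methods."""
    diff = np.asarray(x) - np.asarray(y)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical single-trial decisions (1 = ERP present) and amplitudes (uV)
human = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1])
algo  = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1])
amp_h = np.array([12.1, 9.8, 0.0, 14.3, 0.0, 11.0, 13.2, 0.0, 0.0, 10.5])
amp_a = amp_h + np.random.default_rng(1).normal(0, 1.5, size=amp_h.size)

print("kappa:", round(cohens_kappa(human, algo), 2))
print("Bland-Altman (bias, lower, upper):", bland_altman(amp_h, amp_a))
```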

  11. Risk assessment of carcinogens in food.

    PubMed

    Barlow, Susan; Schlatter, Josef

    2010-03-01

    Approaches for the risk assessment of carcinogens in food have evolved as scientific knowledge has advanced. Early methods allowed little more than hazard identification and an indication of carcinogenic potency. Evaluation of the modes of action of carcinogens and their broad division into genotoxic and epigenetic (non-genotoxic, non-DNA reactive) carcinogens have played an increasing role in determining the approach followed and provide possibilities for more detailed risk characterisation, including provision of quantitative estimates of risk. Reliance on experimental animal data for the majority of risk assessments and the fact that human exposures to dietary carcinogens are often orders of magnitude below doses used in experimental studies has provided a fertile ground for discussion and diverging views on the most appropriate way to offer risk assessment advice. Approaches used by national and international bodies differ, with some offering numerical estimates of potential risks to human health, while others express considerable reservations about the validity of quantitative approaches requiring extrapolation of dose-response data below the observed range and instead offer qualitative advice. Recognising that qualitative advice alone does not provide risk managers with information on which to prioritise the need for risk management actions, a "margin of exposure" approach for substances that are both genotoxic and carcinogenic has been developed, which is now being used by the World Health Organization and the European Food Safety Authority. This review describes the evolution of risk assessment advice on carcinogens and discusses examples of ways in which carcinogens in food have been assessed in Europe.
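
    The margin of exposure (MOE) referred to above is simply the ratio of a reference point on the animal dose-response curve (typically a BMDL10) to the estimated human dietary exposure; EFSA has indicated that an MOE of 10,000 or more is generally considered of low health concern for substances that are both genotoxic and carcinogenic. A minimal sketch with purely illustrative numbers (not taken from the review) follows.

```python
def margin_of_exposure(bmdl10_mg_per_kg_day, exposure_mg_per_kg_day):
    """MOE = reference point (e.g. BMDL10) / estimated human exposure."""
    return bmdl10_mg_per_kg_day / exposure_mg_per_kg_day

# Illustrative values only: a BMDL10 of 0.3 mg/kg bw/day and an estimated
# dietary exposure of 1e-4 mg/kg bw/day.
moe = margin_of_exposure(0.3, 1e-4)
print(f"MOE = {moe:,.0f}")   # >= 10,000 is commonly treated as low concern
```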

  12. Evaluation of Daily Extreme Precipitation Derived From Long-term Global Satellite Quantitative Precipitation Estimates (QPEs)

    NASA Astrophysics Data System (ADS)

    Prat, O. P.; Nelson, B. R.; Nickl, E.; Ferraro, R. R.

    2017-12-01

    This study evaluates the ability of different satellite-based precipitation products to capture daily precipitation extremes over the entire globe. The satellite products considered are the datasets belonging to the Reference Environmental Data Records (REDRs) program (PERSIANN-CDR, GPCP, CMORPH, AMSU-A,B, Hydrologic bundle). Those products provide long-term global records of daily adjusted Quantitative Precipitation Estimates (QPEs), ranging from a 20-year (CMORPH-CDR) to a 35-year (PERSIANN-CDR, GPCP) record of daily adjusted global precipitation. The AMSU-A,B, Hydro-bundle is an 11-year record of daily rain rate over land and ocean, snow cover and surface temperature over land, and sea ice concentration, cloud liquid water, and total precipitable water over ocean, among others. The aim of this work is to evaluate the ability of the different satellite QPE products to capture daily precipitation extremes. This evaluation will also include comparison with in-situ data sets at the daily scale from the Global Historical Climatology Network (GHCN-Daily), the Global Precipitation Climatology Centre (GPCC) gridded full data daily product, and the US Climate Reference Network (USCRN). In addition, while the products mentioned above only provide QPEs, the AMSU-A,B hydro-bundle provides additional hydrological information (precipitable water, cloud liquid water, snow cover, sea ice concentration). We will also present an analysis of those additional variables available from global satellite measurements and their relevance and complementarity in the context of long-term hydrological and climate studies.

  13. Quantitative analysis of vascular parameters for micro-CT imaging of vascular networks with multi-resolution.

    PubMed

    Zhao, Fengjun; Liang, Jimin; Chen, Xueli; Liu, Junting; Chen, Dongmei; Yang, Xiang; Tian, Jie

    2016-03-01

    Previous studies showed that all of the vascular parameters, both morphological and topological, were affected by changes in imaging resolution. However, neither the sensitivity of the vascular parameters at multiple resolutions nor the distinguishability of vascular parameters between different data groups has been analyzed. In this paper, we proposed a quantitative analysis method of vascular parameters for vascular networks imaged at multiple resolutions, by analyzing the sensitivity of vascular parameters at multiple resolutions and estimating the distinguishability of vascular parameters between different data groups. Combining the sensitivity and distinguishability, we designed a hybrid formulation to estimate the integrated performance of vascular parameters in a multi-resolution framework. Among the vascular parameters, degree of anisotropy and junction degree were two insensitive parameters that were nearly unaffected by resolution degradation; vascular area, connectivity density, vascular length, vascular junction and segment number were five parameters that could better distinguish the vascular networks from different groups and remained consistent with the ground truth. Vascular area, connectivity density, vascular length and segment number not only were insensitive to resolution but could also better distinguish vascular networks from different groups, which provides guidance for the quantification of vascular networks in multi-resolution frameworks.

  14. Properties of an entropy-based signal receiver with an application to ultrasonic molecular imaging.

    PubMed

    Hughes, M S; McCarthy, J E; Marsh, J N; Arbeit, J M; Neumann, R G; Fuhrhop, R W; Wallace, K D; Znidersic, D R; Maurizi, B N; Baldwin, S L; Lanza, G M; Wickline, S A

    2007-06-01

    Qualitative and quantitative properties of the finite part, H(f), of the Shannon entropy of a continuous waveform f(t) in the continuum limit are derived in order to illuminate its use for waveform characterization. Simple upper and lower bounds on H(f), based on features of f(t), are defined. Quantitative criteria for a priori estimation of the average-case variation of H(f) and log E(f), where E(f) is the signal energy of f(t) are also derived. These provide relative sensitivity estimates that could be used to prospectively choose optimal imaging strategies in real-time ultrasonic imaging machines, where system bandwidth is often pushed to its limits. To demonstrate the utility of these sensitivity relations for this application, a study designed to assess the feasibility of identification of angiogenic neovasculature targeted with perfluorocarbon nanoparticles that specifically bind to alpha(v)beta3-integrin expression in tumors was performed. The outcome of this study agrees with the prospective sensitivity estimates that were used for the two receivers. Moreover, these data demonstrate the ability of entropy-based signal receivers when used in conjunction with targeted nanoparticles to elucidate the presence of alpha(v)beta3 integrins in primordial neovasculature, particularly in acoustically unfavorable environments.

  15. Adversity magnifies the importance of social information in decision-making.

    PubMed

    Pérez-Escudero, Alfonso; de Polavieja, Gonzalo G

    2017-11-01

    Decision-making theories explain animal behaviour, including human behaviour, as a response to estimations about the environment. In the case of collective behaviour, they have given quantitative predictions of how animals follow the majority option. However, they have so far failed to explain that in some species and contexts social cohesion increases when conditions become more adverse (i.e. individuals choose the majority option with higher probability when the estimated quality of all available options decreases). We have found that this failure is due to modelling simplifications that aided analysis, like low levels of stochasticity or the assumption that only one choice is the correct one. We provide a more general but simple geometric framework to describe optimal or suboptimal decisions in collectives that gives insight into three different mechanisms behind this effect. The three mechanisms have in common that the private information acts as a gain factor to social information: a decrease in the privately estimated quality of all available options increases the impact of social information, even when social information itself remains unchanged. This increase in the importance of social information makes it more likely that agents will follow the majority option. We show that these results quantitatively explain collective behaviour in fish and experiments of social influence in humans. © 2017 The Authors.

  16. Development of combination tapered fiber-optic biosensor dip probe for quantitative estimation of interleukin-6 in serum samples

    NASA Astrophysics Data System (ADS)

    Wang, Chun Wei; Manne, Upender; Reddy, Vishnu B.; Oelschlager, Denise K.; Katkoori, Venkat R.; Grizzle, William E.; Kapoor, Rakesh

    2010-11-01

    A combination tapered fiber-optic biosensor (CTFOB) dip probe for rapid and cost-effective quantification of proteins in serum samples has been developed. This device relies on diode laser excitation and a charge-coupled device spectrometer and functions on a technique of sandwich immunoassay. As a proof of principle, this technique was applied in the quantitative estimation of interleukin-6 (IL-6). The probes detected IL-6 at picomolar levels in serum samples obtained from a patient with lupus, an autoimmune disease, and a patient with lymphoma. The estimated concentration of IL-6 in the lupus sample was 5.9 ± 0.6 pM, and in the lymphoma sample, it was below the detection limit. These concentrations were verified by a procedure involving bead-based xMAP technology. A similar trend in the concentrations was observed. The specificity of the CTFOB dip probes was assessed by analysis with receiver operating characteristics. This analysis suggests that the dip probes can detect 5-pM or higher concentrations of IL-6 in these samples with specificities of 100%. The results provide information for guiding further studies in the utilization of these probes to quantify other analytes in body fluids with high specificity and sensitivity.

  17. Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity

    NASA Astrophysics Data System (ADS)

    Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J.

    2008-08-01

    A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with that obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle.
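
    A minimal sketch of the kind of dispersion fit described above, assuming the Kelvin-Voigt (Voigt) shear wave speed relation commonly used in shear wave elastography and made-up dispersion data; the paper's exact viscoelastic model and fitting details may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

RHO = 1000.0  # assumed tissue density, kg/m^3

def voigt_speed(freq_hz, mu, eta):
    """Kelvin-Voigt shear wave speed (m/s) at frequency freq_hz.

    mu: shear modulus (Pa), eta: shear viscosity (Pa*s).
    """
    w = 2 * np.pi * freq_hz
    root = np.sqrt(mu ** 2 + (w * eta) ** 2)
    return np.sqrt(2 * (mu ** 2 + (w * eta) ** 2) / (RHO * (mu + root)))

# Made-up dispersive speed measurements (frequency in Hz, speed in m/s)
freqs = np.array([100, 150, 200, 250, 300, 350, 400], dtype=float)
speeds = voigt_speed(freqs, 12e3, 6.0)                       # "true" values
speeds += np.random.default_rng(2).normal(0, 0.05, speeds.size)  # noise

# Nonlinear least-squares fit for frequency-independent mu and eta
(mu_hat, eta_hat), _ = curve_fit(voigt_speed, freqs, speeds, p0=(5e3, 2.0))
print(f"shear modulus ~ {mu_hat / 1e3:.1f} kPa, viscosity ~ {eta_hat:.1f} Pa*s")
```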

  18. 3D-quantitative structure-activity relationship studies on benzothiadiazepine hydroxamates as inhibitors of tumor necrosis factor-alpha converting enzyme.

    PubMed

    Murumkar, Prashant R; Giridhar, Rajani; Yadav, Mange Ram

    2008-04-01

    A set of 29 benzothiadiazepine hydroxamates having selective tumor necrosis factor-alpha converting enzyme inhibitory activity was used to compare the quality and predictive power of 3D-quantitative structure-activity relationship, comparative molecular field analysis, and comparative molecular similarity indices models for the atom-based, centroid/atom-based, database, and docked conformer-based alignments. Removal of two outliers from the initial training set of molecules improved the predictivity of models. Among the 3D-quantitative structure-activity relationship models developed using the above four alignments, the database alignment provided the optimal predictive comparative molecular field analysis model for the training set with cross-validated r² (q²) = 0.510, non-cross-validated r² = 0.972, standard error of estimates (s) = 0.098, and F = 215.44 and the optimal comparative molecular similarity indices model with cross-validated r² (q²) = 0.556, non-cross-validated r² = 0.946, standard error of estimates (s) = 0.163, and F = 99.785. These models also showed the best test set prediction for six compounds with predictive r² values of 0.460 and 0.535, respectively. The contour maps obtained from 3D-quantitative structure-activity relationship studies were appraised for activity trends for the molecules analyzed. The comparative molecular similarity indices models exhibited good external predictivity as compared with that of comparative molecular field analysis models. The data generated from the present study helped us to further design and report some novel and potent tumor necrosis factor-alpha converting enzyme inhibitors.

  19. An assessment of the reliability of quantitative genetics estimates in study systems with high rate of extra-pair reproduction and low recruitment.

    PubMed

    Bourret, A; Garant, D

    2017-03-01

    Quantitative genetics approaches, and particularly animal models, are widely used to assess the genetic (co)variance of key fitness related traits and infer adaptive potential of wild populations. Despite the importance of precision and accuracy of genetic variance estimates and their potential sensitivity to various ecological and population specific factors, their reliability is rarely tested explicitly. Here, we used simulations and empirical data collected from an 11-year study on tree swallow (Tachycineta bicolor), a species showing a high rate of extra-pair paternity and a low recruitment rate, to assess the importance of identity errors, structure and size of the pedigree on quantitative genetic estimates in our dataset. Our simulations revealed an important lack of precision in heritability and genetic-correlation estimates for most traits, a low power to detect significant effects and important identifiability problems. We also observed a large bias in heritability estimates when using the social pedigree instead of the genetic one (deflated heritabilities) or when not accounting for an important cause of resemblance among individuals (for example, permanent environment or brood effect) in model parameterizations for some traits (inflated heritabilities). We discuss the causes underlying the low reliability observed here and why they are also likely to occur in other study systems. Altogether, our results re-emphasize the difficulties of generalizing quantitative genetic estimates reliably from one study system to another and the importance of reporting simulation analyses to evaluate these important issues.

  20. Multivariate Qst–Fst Comparisons: A Neutrality Test for the Evolution of the G Matrix in Structured Populations

    PubMed Central

    Martin, Guillaume; Chapuis, Elodie; Goudet, Jérôme

    2008-01-01

    Neutrality tests in quantitative genetics provide a statistical framework for the detection of selection on polygenic traits in wild populations. However, the existing method based on comparisons of divergence at neutral markers and quantitative traits (Qst–Fst) suffers from several limitations that hinder a clear interpretation of the results with typical empirical designs. In this article, we propose a multivariate extension of this neutrality test based on empirical estimates of the among-populations (D) and within-populations (G) covariance matrices by MANOVA. A simple pattern is expected under neutrality: D = 2Fst/(1 − Fst)G, so that neutrality implies both proportionality of the two matrices and a specific value of the proportionality coefficient. This pattern is tested using Flury's framework for matrix comparison [common principal-component (CPC) analysis], a well-known tool in G matrix evolution studies. We show the importance of using a Bartlett adjustment of the test for the small sample sizes typically found in empirical studies. We propose a dual test: (i) that the proportionality coefficient is not different from its neutral expectation [2Fst/(1 − Fst)] and (ii) that the MANOVA estimates of mean square matrices between and among populations are proportional. These two tests combined provide a more stringent test for neutrality than the classic Qst–Fst comparison and avoid several statistical problems. Extensive simulations of realistic empirical designs suggest that these tests correctly detect the expected pattern under neutrality and have enough power to efficiently detect mild to strong selection (homogeneous, heterogeneous, or mixed) when it is occurring on a set of traits. This method also provides a rigorous and quantitative framework for disentangling the effects of different selection regimes and of drift on the evolution of the G matrix. We discuss practical requirements for the proper application of our test in empirical studies and potential extensions. PMID:18245845
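
    A minimal sketch of the neutral expectation used by the test, with a hypothetical two-trait G matrix and a neutral-marker Fst (illustrative values only):

```python
import numpy as np

def neutral_expected_D(G, fst):
    """Among-population covariance expected under neutrality: D = 2*Fst/(1-Fst) * G."""
    return 2.0 * fst / (1.0 - fst) * np.asarray(G)

# Hypothetical within-population G matrix (2 traits) and marker-based Fst
G = np.array([[1.0, 0.3],
              [0.3, 0.8]])
fst = 0.2

rho = 2.0 * fst / (1.0 - fst)     # proportionality coefficient tested by the method
print("expected proportionality coefficient:", rho)          # 0.5 here
print("expected D under neutrality:\n", neutral_expected_D(G, fst))
```

    The dual test then asks whether an observed D matrix is proportional to G (via the CPC framework) and whether the fitted proportionality coefficient matches this neutral value.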

  1. Quantitative Guidance for Stove Usage and Performance to Achieve Health and Environmental Targets

    PubMed Central

    Chiang, Ranyee A.

    2015-01-01

    Background Displacing the use of polluting and inefficient cookstoves in developing countries is necessary to achieve the potential health and environmental benefits sought through clean cooking solutions. Yet little quantitative context has been provided on how much displacement of traditional technologies is needed to achieve targets for household air pollutant concentrations or fuel savings. Objectives This paper provides instructive guidance on the usage of cooking technologies required to achieve health and environmental improvements. Methods We evaluated different scenarios of displacement of traditional stoves with use of higher performing technologies. The air quality and fuel consumption impacts were estimated for these scenarios using a single-zone box model of indoor air quality and ratios of thermal efficiency. Results Stove performance and usage should be considered together, as lower performing stoves can result in similar or greater benefits than a higher performing stove if the lower performing stove has considerably higher displacement of the baseline stove. Based on the indoor air quality model, there are multiple performance–usage scenarios for achieving modest indoor air quality improvements. To meet World Health Organization guidance levels, however, three-stone fire and basic charcoal stove usage must be nearly eliminated to achieve the particulate matter target (< 1–3 hr/week), and substantially limited to meet the carbon monoxide guideline (< 7–9 hr/week). Conclusions Moderate health gains may be achieved with various performance–usage scenarios. The greatest benefits are estimated to be achieved by near-complete displacement of traditional stoves with clean technologies, emphasizing the need to shift in the long term to near exclusive use of clean fuels and stoves. The performance–usage scenarios are also provided as a tool to guide technology selection and prioritize behavior change opportunities to maximize impact. Citation Johnson MA, Chiang RA. 2015. Quantitative guidance for stove usage and performance to achieve health and environmental targets. Environ Health Perspect 123:820–826; http://dx.doi.org/10.1289/ehp.1408681 PMID:25816219
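
    As a rough illustration of how stove performance and usage hours combine in a single-zone box model, the sketch below computes steady-state kitchen concentrations and a usage-weighted 24-h mean; the emission rates, room volume, and air exchange rate are illustrative assumptions, not the parameterization used in the paper.

```python
def box_model_steady_state(emission_mg_per_hr, volume_m3, ach, background=0.0):
    """Steady-state indoor pollutant concentration (mg/m^3) in a single zone.

    Mass balance: V*dC/dt = E - ACH*V*(C - background); at steady state
    C = background + E / (ACH * V).
    """
    return background + emission_mg_per_hr / (ach * volume_m3)

# Illustrative numbers only: a 30 m^3 kitchen with 15 air changes per hour.
vol, ach = 30.0, 15.0
pm_traditional = box_model_steady_state(600.0, vol, ach)   # polluting stove
pm_improved    = box_model_steady_state(60.0,  vol, ach)   # cleaner stove

# 24-h mean scales roughly with the hours each stove is used per day
for hrs_traditional in (4.0, 1.0, 0.25):
    hrs_improved = 4.0 - hrs_traditional      # cooking time displaced to the clean stove
    avg_24h = (pm_traditional * hrs_traditional + pm_improved * hrs_improved) / 24.0
    print(f"{hrs_traditional:.2f} h/day traditional -> 24-h mean ~ {avg_24h * 1000:.0f} ug/m^3")
```

    Even with these made-up values, the calculation reproduces the qualitative point of the abstract: near-complete displacement of the traditional stove is needed before the 24-h mean approaches guideline-level concentrations.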

  2. The nest-concealment hypothesis: New insights from a comparative analysis

    USGS Publications Warehouse

    Borgmann, Kathi L.; Conway, Courtney J.

    2015-01-01

    Selection of a breeding site is critical for many animals, especially for birds whose offspring are stationary during development. Thus, birds are often assumed to prefer concealed nest sites. However, 74% of studies (n = 106) that have evaluated this relationship for open-cup nesting songbirds in North America failed to support the nest-concealment hypothesis. We conducted a comparative analysis to identify factors that contribute to variation in the ability of researchers to find support for the nest-concealment hypothesis. We found that some of the discrepancy among studies can be explained by interspecific differences in morphological and extrinsic factors that affect nest predation. Moreover, methods that investigators used to estimate concealment affected whether studies found support for the nest-concealment hypothesis; 33% of the studies that used quantitative estimates found support for the nest-concealment hypothesis whereas only 10% of the studies that used qualitative estimates found support. The timing of measurements also explained some of the ambiguity; studies that provided little information regarding the timing of their foliage density estimates were less likely to support the nest-concealment hypothesis. Species with more conspicuous male plumage were less likely to support the nest-concealment hypothesis when we analyzed studies that used visual estimates, whereas species with more conspicuous female plumage were more likely to support the nest-concealment hypothesis when we analyzed studies that used quantitative measures. Our results demonstrate that support for the nest-concealment hypothesis has been equivocal, but that some of the ambiguity can be explained by morphological traits and methods used to measure concealment.

  3. Estimating Benzene Exposure Level over Time and by Industry Type through a Review of Literature on Korea.

    PubMed

    Park, Donguk; Choi, Sangjun; Ha, Kwonchul; Jung, Hyejung; Yoon, Chungsik; Koh, Dong-Hee; Ryu, Seunghun; Kim, Soogeun; Kang, Dongmug; Yoo, Kyemook

    2015-09-01

    The major purpose of this study is to construct a retrospective exposure assessment for benzene through a review of literature on Korea. Airborne benzene measurements reported in 34 articles were reviewed. A total of 15,729 individual measurements were compiled. Weighted arithmetic means [AM(w)] and their variance calculated across studies were summarized according to 5-year period intervals (prior to the 1970s through the 2010s) and industry type. Industries were classified according to Korea Standard Industrial Classification (KSIC) using information provided in the literature. We estimated quantitative retrospective exposure to benzene for each cell in the matrix through a combination of time and KSIC. Analysis of the AM(w) indicated reductions in exposure levels over time, regardless of industry, with mean levels prior to the 1980-1984 period of 50.4 ppm (n = 2,289), which dropped to 2.8 ppm (n = 305) in the 1990-1994 period, and to 0.1 ppm (n = 294) in the 1995-1999 period. There has been no improvement since the 2000s, when the AM(w) of 4.3 ppm (n = 6,211) for the 2005-2009 period and 4.5 ppm (n = 3,358) for the 2010-2013 period were estimated. A comparison by industry found no consistent patterns in the measurement results. Our estimated benzene measurements can be used to determine not only the possibility of retrospective exposure to benzene, but also to estimate the level of quantitative or semiquantitative retrospective exposure to benzene.
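
    A minimal sketch of a sample-size-weighted arithmetic mean across studies, the kind of pooling implied by AM(w) above; the weighting scheme (study sample sizes) and the study values are assumptions for illustration, not the paper's actual data.

```python
import numpy as np

def pooled_weighted_mean(study_means_ppm, study_n):
    """Sample-size-weighted arithmetic mean across studies."""
    means = np.asarray(study_means_ppm, dtype=float)
    n = np.asarray(study_n, dtype=float)
    return float(np.sum(means * n) / np.sum(n))

# Hypothetical studies contributing to one period/industry cell of the matrix
study_means = [48.0, 62.5, 35.0]   # reported mean benzene levels, ppm
study_n     = [1200, 700, 400]     # number of measurements per study
print(f"AM(w) = {pooled_weighted_mean(study_means, study_n):.1f} ppm")
```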

  4. Estimating malaria transmission from humans to mosquitoes in a noisy landscape.

    PubMed

    Reiner, Robert C; Guerra, Carlos; Donnelly, Martin J; Bousema, Teun; Drakeley, Chris; Smith, David L

    2015-10-06

    A basic quantitative understanding of malaria transmission requires measuring the probability a mosquito becomes infected after feeding on a human. Parasite prevalence in mosquitoes is highly age-dependent, and the unknown age-structure of fluctuating mosquito populations impedes estimation. Here, we simulate mosquito infection dynamics, where mosquito recruitment is modelled seasonally with fractional Brownian noise, and we develop methods for estimating mosquito infection rates. We find that noise introduces bias, but the magnitude of the bias depends on the 'colour' of the noise. Some of these problems can be overcome by increasing the sampling frequency, but estimates of transmission rates (and estimated reductions in transmission) are most accurate and precise if they combine parity, oocyst rates and sporozoite rates. These studies provide a basis for evaluating the adequacy of various entomological sampling procedures for measuring malaria parasite transmission from humans to mosquitoes and for evaluating the direct transmission-blocking effects of a vaccine. © 2015 The Authors.

  5. Correcting power and p-value calculations for bias in diffusion tensor imaging.

    PubMed

    Lauzon, Carolyn B; Landman, Bennett A

    2013-07-01

    Diffusion tensor imaging (DTI) provides quantitative parametric maps sensitive to tissue microarchitecture (e.g., fractional anisotropy, FA). These maps are estimated through computational processes and subject to random distortions including variance and bias. Traditional statistical procedures commonly used for study planning (including power analyses and p-value/alpha-rate thresholds) specifically model variability, but neglect potential impacts of bias. Herein, we quantitatively investigate the impacts of bias in DTI on hypothesis test properties (power and alpha-rate) using a two-sided hypothesis testing framework. We present a theoretical evaluation of the effect of bias on hypothesis test properties, evaluate the bias estimation technique SIMEX for DTI hypothesis testing using simulated data, and evaluate the impacts of bias on spatially varying power and alpha rates in an empirical study of 21 subjects. Bias is shown to inflate alpha rates, distort the power curve, and cause significant power loss even in empirical settings where the expected difference in bias between groups is zero. These adverse effects can be attenuated by properly accounting for bias in the calculation of power and p-values. Copyright © 2013 Elsevier Inc. All rights reserved.
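
    A minimal sketch of how a fixed bias shifts the non-centrality of a two-sided z-test and thereby distorts the realized alpha rate and power; this illustrates only the elementary idea, not the SIMEX correction, and the standard error, effect size, and bias values are illustrative.

```python
from scipy.stats import norm

def rejection_probability(shift, se, alpha=0.05):
    """P(two-sided z-test rejects) when the statistic's mean is shift/se.

    `shift` is the true group difference PLUS any between-group difference
    in bias; when the true difference is 0 this is the realized alpha rate,
    otherwise it is the power.
    """
    z = norm.ppf(1 - alpha / 2)
    nc = shift / se                      # non-centrality of the test statistic
    return norm.cdf(-z - nc) + norm.cdf(-z + nc)

se = 0.01   # standard error of the FA group difference (illustrative)
print("nominal alpha, no bias       :", round(rejection_probability(0.0, se), 3))
print("alpha inflated by bias diff  :", round(rejection_probability(0.015, se), 3))
print("power, effect 0.03, biased   :", round(rejection_probability(0.03 + 0.015, se), 3))
print("power, effect 0.03, debiased :", round(rejection_probability(0.03, se), 3))
```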

  6. Risk appreciation for living kidney donors: another new subspecialty?

    PubMed

    Steiner, Robert W

    2004-05-01

    Quantitative estimates of the risk of end stage renal disease (ESRD) for living donors would seem essential to defensible donor selection practices, as the 'safe/unsafe' model for donor selection is not viable. All kidney donors take risk, and four fundamental, qualitative criteria should instead be used to decide when donor rejection is justified. These criteria are lack of donor education about transplantation, donor irrationality, lack of free and voluntary donation, and/or that donor acceptance would unavoidably threaten the public trust or the integrity of the center's selection procedures. Such a data-based selection policy, with explicit documentation of unbiased and comprehensive donor education, will help neutralize the center's self interest in a more defensible way than by rejecting 'complicated' kidney donors out of hand, and in a more practical way than by the creation of center-independent donor counselors or waiting for donor registries to come to fruition. Living kidney donors with isolated medical abnormalities comprise a sizable subset of at risk donors for whom center acceptance practices vary markedly. This population provides a paradigm opportunity for quantitative risk estimation and counseling.

  7. Data quality at the Singapore Cancer Registry: An overview of comparability, completeness, validity and timeliness.

    PubMed

    Fung, Janice Wing Mei; Lim, Sandra Bee Lay; Zheng, Huili; Ho, William Ying Tat; Lee, Bee Guat; Chow, Khuan Yew; Lee, Hin Peng

    2016-08-01

    To provide a comprehensive evaluation of the quality of the data at the Singapore Cancer Registry (SCR). Quantitative and semi-quantitative methods were used to assess the comparability, completeness, accuracy and timeliness of data for the period of 1968-2013, with focus on the period 2008-2012. The SCR coding and classification systems follow international standards. The overall completeness was estimated at 98.1% using the flow method and 97.5% using the capture-recapture method, for the period of 2008-2012. For the same period, 91.9% of the cases were morphologically verified (site-specific range: 40.4-100%) with 1.1% death-certificate-only (DCO) cases. Under-reporting in 2011 and 2012 attributable to the timeliness of publication was estimated at 0.03% and 0.51%, respectively. This review shows that the processes in place at the SCR yield data that are internationally comparable, relatively complete, valid, and timely, allowing for greater confidence in the use of quality data in the areas of cancer prevention, treatment and control. Copyright © 2016 Elsevier Ltd. All rights reserved.
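
    A minimal sketch of a generic two-source capture-recapture (Chapman) estimate of completeness; the counts are hypothetical, and the SCR's actual flow-method and capture-recapture implementations are more involved.

```python
def chapman_estimate(n_source1, n_source2, n_both):
    """Chapman two-source capture-recapture estimate of the true case count."""
    return (n_source1 + 1) * (n_source2 + 1) / (n_both + 1) - 1

# Hypothetical counts: cases notified by two partially overlapping sources
n1, n2, n_overlap = 9500, 8700, 8400
n_total_est = chapman_estimate(n1, n2, n_overlap)
n_registered = 9700                       # cases actually held by the registry
print(f"estimated total cases ~ {n_total_est:.0f}")
print(f"completeness ~ {100 * n_registered / n_total_est:.1f}%")
```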

  8. Quantifying Organic Matter in Surface Waters of the United States and Delivery to the Coastal Zone

    NASA Astrophysics Data System (ADS)

    Boyer, E. W.; Alexander, R. B.; Smith, R. A.; Shih, J.

    2012-12-01

    Organic carbon (OC) is a critical water quality characteristic in surface waters. It is an important component of the energy balance and food chains in freshwater and estuarine aquatic ecosystems, is significant in the mobilization and transport of contaminants along flow paths, and is associated with the formation of known carcinogens in drinking water supplies. The importance of OC dynamics for water quality has been recognized, but challenges remain in quantitatively addressing processes controlling OC fluxes over broad spatial scales in a hydrological context, and considering upstream-downstream linkages along flow paths. Here, we: 1) quantified lateral OC fluxes in rivers, streams, and reservoirs across the nation from headwaters to the coasts; 2) partitioned how much of the organic carbon stored in lakes, rivers and streams comes from allochthonous sources (produced in the terrestrial landscape) versus autochthonous sources (produced in-stream by primary production); 3) estimated the delivery of dissolved and total forms of organic carbon to coastal estuaries and embayments; and 4) considered seasonal factors affecting the temporal variation in OC responses. To accomplish this, we developed national-scale models of organic carbon in U.S. surface waters using the spatially referenced regression on watersheds (SPARROW) technique. The modeling approach uses mechanistic formulations, imposes mass balance constraints, and provides a formal parameter estimation structure to statistically estimate sources and fate of OC in terrestrial and aquatic ecosystems. We calibrated and evaluated the model with statistical estimates of OC loads that were observed at a network of monitoring stations across the nation, and further explored factors controlling seasonal dynamics of OC based on these long term monitoring data. Our results illustrate spatial patterns and magnitudes of OC loadings in rivers, highlighting hot spots and suggesting the origins of the OC delivered to each location. Further, our results yield quantitative estimates of aquatic OC fluxes for large water regions and for the nation, providing a refined estimate of the role of surface water fluxes of OC in relationship to regional and national carbon budgets. Finally, we are using our simulations to explore the role of OC in relation to other nutrients in contributing to acidification and eutrophication of coastal waters.

  9. Dual respiratory and cardiac motion estimation in PET imaging: Methods design and quantitative evaluation.

    PubMed

    Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W

    2018-04-01

    The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory gated only and cardiac gated only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation; the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVF using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets of two more different noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 4 and 3 showed comparable results and achieved RMSE lower by up to 35% than that in Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be the best option for accurate estimation of dual R&C motion in clinical situations. © 2018 American Association of Physicists in Medicine.

  10. A non-parametric automatic blending methodology to estimate rainfall fields from rain gauge and radar data

    NASA Astrophysics Data System (ADS)

    Velasco-Forero, Carlos A.; Sempere-Torres, Daniel; Cassiraga, Eduardo F.; Jaime Gómez-Hernández, J.

    2009-07-01

    Quantitative estimation of rainfall fields has been a crucial objective from early studies of the hydrological applications of weather radar. Previous studies have suggested that flow estimations are improved when radar and rain gauge data are combined to estimate input rainfall fields. This paper reports new research carried out in this field. Classical approaches for the selection and fitting of a theoretical correlogram (or semivariogram) model (needed to apply geostatistical estimators) are avoided in this study. Instead, a non-parametric technique based on FFT is used to obtain two-dimensional positive-definite correlograms directly from radar observations, dealing with both the natural anisotropy and the temporal variation of the spatial structure of the rainfall in the estimated fields. Because these correlation maps can be automatically obtained at each time step of a given rainfall event, this technique might easily be used in operational (real-time) applications. This paper describes the development of the non-parametric estimator exploiting the advantages of FFT for the automatic computation of correlograms and provides examples of its application on a case study using six rainfall events. This methodology is applied to three different alternatives to incorporate the radar information (as a secondary variable), and a comparison of performances is provided. In particular, their ability to reproduce in estimated rainfall fields (i) the rain gauge observations (in a cross-validation analysis) and (ii) the spatial patterns of radar fields are analyzed. Results seem to indicate that the methodology of kriging with external drift [KED], in combination with the technique of automatically computing 2-D spatial correlograms, provides merged rainfall fields with good agreement with rain gauges and with the most accurate approach to the spatial tendencies observed in the radar rainfall fields, when compared with other alternatives analyzed.
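
    A minimal sketch of obtaining a valid (positive semi-definite) two-dimensional correlogram directly from a gridded field with the FFT, as opposed to fitting a parametric semivariogram model; the toy field and function name are assumptions for illustration and do not reproduce the paper's full workflow.

```python
import numpy as np

def empirical_correlogram_2d(field):
    """Empirical 2-D correlogram of a gridded field via the FFT.

    Computes the (circular) spatial autocorrelation of the mean-subtracted
    field using the Wiener-Khinchin theorem.  Because its spectrum (the
    periodogram) is non-negative, the result is a valid, positive
    semi-definite correlation map with no model fitting required.
    """
    fluct = field - field.mean()
    spectrum = np.fft.fft2(fluct)
    corr = np.fft.ifft2(np.abs(spectrum) ** 2).real
    corr /= corr[0, 0]                    # normalize so lag (0, 0) == 1
    return np.fft.fftshift(corr)          # put the zero lag at the array center

# Toy anisotropic field: white noise smoothed by an elongated kernel
rng = np.random.default_rng(3)
noise = rng.normal(size=(128, 128))
kernel = np.outer(np.hanning(31), np.hanning(7))     # stretched along one axis
field = np.real(np.fft.ifft2(np.fft.fft2(noise) *
                             np.fft.fft2(kernel, s=noise.shape)))
rho = empirical_correlogram_2d(field)
print(rho.shape, rho[64, 64])             # the central (zero-lag) value is 1
```

    Recomputing such a correlogram at every time step is what lets the anisotropy and spatial structure of the rainfall field vary through the event, as described above.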

  11. Identification and Quantitation of Flavanols and Proanthocyanidins in Foods: How Good are the Datas?

    PubMed Central

    Kelm, Mark A.; Hammerstone, John F.; Schmitz, Harold H.

    2005-01-01

    Evidence suggesting that dietary polyphenols, flavanols, and proanthocyanidins in particular offer significant cardiovascular health benefits is rapidly increasing. Accordingly, reliable and accurate methods are needed to provide qualitative and quantitative food composition data necessary for high quality epidemiological and clinical research. Measurements for flavonoids and proanthocyanidins have employed a range of analytical techniques, with various colorimetric assays still being popular for estimating total polyphenolic content in foods and other biological samples despite advances made with more sophisticated analyses. More crudely, estimations of polyphenol content as well as antioxidant activity are also reported with values relating to radical scavenging activity. High-performance liquid chromatography (HPLC) is the method of choice for quantitative analysis of individual polyphenols such as flavanols and proanthocyanidins. Qualitative information regarding proanthocyanidin structure has been determined by chemical methods such as thiolysis and by HPLC-mass spectrometry (MS) techniques at present. The lack of appropriate standards is the single most important factor that limits the aforementioned analyses. However, with ever expanding research in the arena of flavanols, proanthocyanidins, and health and the importance of their future inclusion in food composition databases, the need for standards becomes more critical. At present, sufficiently well-characterized standard material is available for selective flavanols and proanthocyanidins, and construction of at least a limited food composition database is feasible. PMID:15712597

  12. A novel environmental DNA approach to quantify the cryptic invasion of non-native genotypes.

    PubMed

    Uchii, Kimiko; Doi, Hideyuki; Minamoto, Toshifumi

    2016-03-01

    The invasion of non-native species that are closely related to native species can lead to competitive elimination of the native species and/or genomic extinction through hybridization. Such invasions often become serious before they are detected, posing unprecedented threats to biodiversity. A Japanese native strain of common carp (Cyprinus carpio) has become endangered owing to the invasion of non-native strains introduced from the Eurasian continent. Here, we propose a rapid environmental DNA-based approach to quantitatively monitor the invasion of non-native genotypes. Using this system, we developed a method to quantify the relative proportion of native and non-native DNA based on a single-nucleotide polymorphism using cycling probe technology in real-time PCR. The efficiency of this method was confirmed in aquarium experiments, where the quantified proportion of native and non-native DNA in the water was well correlated to the biomass ratio of native and non-native genotypes. This method provided quantitative estimates for the proportion of native and non-native DNA in natural rivers and reservoirs, which allowed us to estimate the degree of invasion of non-native genotypes without catching and analysing individual fish. Our approach would dramatically facilitate the process of quantitatively monitoring the invasion of non-native conspecifics in aquatic ecosystems, thus revealing a promising method for risk assessment and management in biodiversity conservation. © 2015 John Wiley & Sons Ltd.

  13. A quantitative risk assessment for the safety of carcase storage systems for scrapie infected farms.

    PubMed

    Adkin, A; Jones, D L; Eckford, R L; Edwards-Jones, G; Williams, A P

    2014-10-01

    To determine the risk associated with the use of carcase storage vessels on a scrapie infected farm. A stochastic quantitative risk assessment was developed to determine the rate of accumulation and fate of scrapie in a novel low-input storage system. For an example farm infected with classical scrapie, a mean of 10^3.6 Ovine Oral ID50s was estimated to accumulate annually. Research indicates that the degradation of any prions present may range from insignificant to a magnitude of one or two logs over several months of storage. For infected farms, the likely partitioning of remaining prion into the sludge phase would necessitate the safe operation and removal of resulting materials from these systems. If complete mixing could be assumed, on average, the concentrations of infectivity are estimated to be slightly lower than that measured in placenta from infected sheep at lambing. This is the first quantitative assessment of the scrapie risk associated with fallen stock on farm and provides guidance to policy makers on the safety of one type of storage system and the relative risk when compared to other materials present on an infected farm. © 2014 Crown Copyright. Journal of Applied Microbiology © 2014 Society for Applied Microbiology This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.

  14. Bone volume fraction and structural parameters for estimation of mechanical stiffness and failure load of human cancellous bone samples; in-vitro comparison of ultrasound transit time spectroscopy and X-ray μCT.

    PubMed

    Alomari, Ali Hamed; Wille, Marie-Luise; Langton, Christian M

    2018-02-01

    Conventional mechanical testing is the 'gold standard' for assessing the stiffness (N mm⁻¹) and strength (MPa) of bone, although it is not applicable in-vivo since it is inherently invasive and destructive. The mechanical integrity of a bone is determined by its quantity and quality; being related primarily to bone density and structure respectively. Several non-destructive, non-invasive, in-vivo techniques have been developed and clinically implemented to estimate bone density, both areal (dual-energy X-ray absorptiometry (DXA)) and volumetric (quantitative computed tomography (QCT)). Quantitative ultrasound (QUS) parameters of velocity and attenuation are dependent upon both bone quantity and bone quality, although it has not been possible to date to transpose one particular QUS parameter into separate estimates of quantity and quality. It has recently been shown that ultrasound transit time spectroscopy (UTTS) may provide an accurate estimate of bone density and hence quantity. We hypothesised that UTTS also has the potential to provide an estimate of bone structure and hence quality. In this in-vitro study, 16 human femoral bone samples were tested utilising three techniques; UTTS, micro computed tomography (μCT), and mechanical testing. UTTS was utilised to estimate bone volume fraction (BV/TV) and two novel structural parameters, inter-quartile range of the derived transit time (UTTS-IQR) and the transit time of maximum proportion of sonic-rays (TTMP). μCT was utilised to derive BV/TV along with several bone structure parameters. A destructive mechanical test was utilised to measure the stiffness and strength (failure load) of the bone samples. BV/TV was calculated from the derived transit time spectrum (TTS); the correlation coefficient (R²) with μCT-BV/TV was 0.885. For predicting mechanical stiffness and strength, BV/TV derived by both μCT and UTTS provided the strongest correlation with mechanical stiffness (R² = 0.567 and 0.618 respectively) and mechanical strength (R² = 0.747 and 0.736 respectively). When respective structural parameters were incorporated to BV/TV, multiple regression analysis indicated that none of the μCT histomorphometric parameters could improve the prediction of mechanical stiffness and strength, while for UTTS, adding TTMP to BV/TV increased the prediction of mechanical stiffness to R² = 0.711 and strength to R² = 0.827. It is therefore envisaged that UTTS may have the ability to estimate BV/TV along with providing an improved prediction of osteoporotic fracture risk, within routine clinical practice in the future. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Atmospheric Effects of Subsonic Aircraft: Interim Assessment Report of the Advanced Subsonic Technology Program

    NASA Technical Reports Server (NTRS)

    Friedl, Randall R. (Editor)

    1997-01-01

    This first interim assessment of the subsonic assessment (SASS) project attempts to summarize concisely the status of our knowledge concerning the impacts of present and future subsonic aircraft fleets. It also highlights the major areas of scientific uncertainty, through review of existing data bases and model-based sensitivity studies. In view of the need for substantial improvements in both model formulations and experimental databases, this interim assessment cannot provide confident numerical predictions of aviation impacts. However, a number of quantitative estimates are presented, which provide some guidance to policy makers.

  16. Generalized paired-agent kinetic model for in vivo quantification of cancer cell-surface receptors under receptor saturation conditions

    NASA Astrophysics Data System (ADS)

    Sadeghipour, N.; Davis, S. C.; Tichauer, K. M.

    2017-01-01

    New precision medicine drugs oftentimes act through binding to specific cell-surface cancer receptors, and thus their efficacy is highly dependent on the availability of those receptors and the receptor concentration per cell. Paired-agent molecular imaging can provide quantitative information on receptor status in vivo, especially in tumor tissue; however, to date, published approaches to paired-agent quantitative imaging require that only 'trace' levels of imaging agent exist compared to receptor concentration. This strict requirement may limit applicability, particularly in drug binding studies, which seek to report on a biological effect in response to saturating receptors with a drug moiety. To extend the regime over which paired-agent imaging may be used, this work presents a generalized simplified reference tissue model (GSRTM) for paired-agent imaging developed to approximate receptor concentration in both non-receptor-saturated and receptor-saturated conditions. Extensive simulation studies show that tumor receptor concentration estimates recovered using the GSRTM are more accurate in receptor-saturation conditions than the standard simple reference tissue model (SRTM) (% error (mean ± sd): GSRTM 0 ± 1 and SRTM 50 ± 1) and match the SRTM accuracy in non-saturated conditions (% error (mean ± sd): GSRTM 5 ± 5 and SRTM 0 ± 5). To further test the approach, GSRTM-estimated receptor concentration was compared to SRTM-estimated values extracted from tumor xenograft in vivo mouse model data. The GSRTM estimates were observed to deviate from the SRTM in tumors with low receptor saturation (which are likely in a saturated regime). Finally, a general 'rule-of-thumb' algorithm is presented to estimate the expected level of receptor saturation that would be achieved in a given tissue provided dose and pharmacokinetic information about the drug or imaging agent being used, and physiological information about the tissue. These studies suggest that the GSRTM is necessary when receptor saturation exceeds 20% and highlight the potential for GSRTM to accurately measure receptor concentrations under saturation conditions, such as might be required during high dose drug studies, or for imaging applications where high concentrations of imaging agent are required to optimize signal-to-noise conditions. This model can also be applied to PET and SPECT imaging studies that tend to suffer from noisier data, but require one less parameter to fit if images are converted to imaging agent concentration (quantitative PET/SPECT).

  17. Reef-associated crustacean fauna: biodiversity estimates using semi-quantitative sampling and DNA barcoding

    NASA Astrophysics Data System (ADS)

    Plaisance, L.; Knowlton, N.; Paulay, G.; Meyer, C.

    2009-12-01

    The cryptofauna associated with coral reefs accounts for a major part of the biodiversity in these ecosystems but has been largely overlooked in biodiversity estimates because the organisms are hard to collect and identify. We combine a semi-quantitative sampling design and a DNA barcoding approach to provide metrics for the diversity of reef-associated crustacean. Twenty-two similar-sized dead heads of Pocillopora were sampled at 10 m depth from five central Pacific Ocean localities (four atolls in the Northern Line Islands and in Moorea, French Polynesia). All crustaceans were removed, and partial cytochrome oxidase subunit I was sequenced from 403 individuals, yielding 135 distinct taxa using a species-level criterion of 5% similarity. Most crustacean species were rare; 44% of the OTUs were represented by a single individual, and an additional 33% were represented by several specimens found only in one of the five localities. The Northern Line Islands and Moorea shared only 11 OTUs. Total numbers estimated by species richness statistics (Chao1 and ACE) suggest at least 90 species of crustaceans in Moorea and 150 in the Northern Line Islands for this habitat type. However, rarefaction curves for each region failed to approach an asymptote, and Chao1 and ACE estimators did not stabilize after sampling eight heads in Moorea, so even these diversity figures are underestimates. Nevertheless, even this modest sampling effort from a very limited habitat resulted in surprisingly high species numbers.
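
    A minimal sketch of the Chao1 richness estimator referred to above (bias-corrected form), applied to a hypothetical OTU abundance table; the abundances are made up and the ACE estimator and rarefaction curves are omitted.

```python
def chao1(otu_abundances):
    """Bias-corrected Chao1 richness estimate from per-OTU abundances.

    S_chao1 = S_obs + f1*(f1 - 1) / (2*(f2 + 1)), where f1 and f2 are the
    numbers of singleton and doubleton OTUs in the sample.
    """
    s_obs = sum(1 for a in otu_abundances if a > 0)
    f1 = sum(1 for a in otu_abundances if a == 1)   # singletons
    f2 = sum(1 for a in otu_abundances if a == 2)   # doubletons
    return s_obs + f1 * (f1 - 1) / (2 * (f2 + 1))

# Hypothetical OTU table for one locality: many rare taxa, a few common ones
abundances = [1] * 40 + [2] * 12 + [3] * 8 + [5] * 6 + [12] * 4 + [30] * 2
print("observed OTUs :", sum(1 for a in abundances if a > 0))
print("Chao1 estimate:", round(chao1(abundances), 1))
```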

  18. Setting population targets for mammals using body mass as a predictor of population persistence.

    PubMed

    Hilbers, Jelle P; Santini, Luca; Visconti, Piero; Schipper, Aafke M; Pinto, Cecilia; Rondinini, Carlo; Huijbregts, Mark A J

    2017-04-01

    Conservation planning and biodiversity assessments need quantitative targets to optimize planning options and assess the adequacy of current species protection. However, targets aiming at persistence require population-specific data, which limit their use in favor of fixed and nonspecific targets, likely leading to unequal distribution of conservation efforts among species. We devised a method to derive equitable population targets; that is, quantitative targets of population size that ensure equal probabilities of persistence across a set of species and that can be easily inferred from species-specific traits. In our method, we used models of population dynamics across a range of life-history traits related to species' body mass to estimate minimum viable population targets. We applied our method to a range of body masses of mammals, from 2 g to 3825 kg. The minimum viable population targets decreased asymptotically with increasing body mass and were on the same order of magnitude as minimum viable population estimates from species- and context-specific studies. Our approach provides a compromise between pragmatic, nonspecific population targets and detailed context-specific estimates of population viability for which only limited data are available. It enables a first estimation of species-specific population targets based on a readily available trait and thus allows setting equitable targets for population persistence in large-scale and multispecies conservation assessments and planning. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.

  19. Numerical simulations of the hard X-ray pulse intensity distribution at the Linac Coherent Light Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pardini, Tom; Aquila, Andrew; Boutet, Sebastien

    Numerical simulations of the current and future pulse intensity distributions at selected locations along the Far Experimental Hall, the hard X-ray section of the Linac Coherent Light Source (LCLS), are provided. Estimates are given for the pulse fluence, energy and size in and out of focus, taking into account effects due to the experimentally measured divergence of the X-ray beam, and measured figure errors of all X-ray optics in the beam path. Out-of-focus results are validated by comparison with experimental data. Previous work is expanded on, providing quantitatively correct predictions of the pulse intensity distribution. Numerical estimates in focus are particularly important given that the latter cannot be measured with direct imaging techniques due to detector damage. Finally, novel numerical estimates of improvements to the pulse intensity distribution expected as part of the on-going upgrade of the LCLS X-ray transport system are provided. As a result, we suggest how the new generation of X-ray optics to be installed would outperform the old one, satisfying the tight requirements imposed by X-ray free-electron laser facilities.

  20. Numerical simulations of the hard X-ray pulse intensity distribution at the Linac Coherent Light Source

    DOE PAGES

    Pardini, Tom; Aquila, Andrew; Boutet, Sebastien; ...

    2017-06-15

    Numerical simulations of the current and future pulse intensity distributions at selected locations along the Far Experimental Hall, the hard X-ray section of the Linac Coherent Light Source (LCLS), are provided. Estimates are given for the pulse fluence, energy and size in and out of focus, taking into account effects due to the experimentally measured divergence of the X-ray beam, and measured figure errors of all X-ray optics in the beam path. Out-of-focus results are validated by comparison with experimental data. Previous work is expanded on, providing quantitatively correct predictions of the pulse intensity distribution. Numerical estimates in focus are particularly important given that the latter cannot be measured with direct imaging techniques due to detector damage. Finally, novel numerical estimates of improvements to the pulse intensity distribution expected as part of the on-going upgrade of the LCLS X-ray transport system are provided. As a result, we suggest how the new generation of X-ray optics to be installed would outperform the old one, satisfying the tight requirements imposed by X-ray free-electron laser facilities.

  1. Weighing trees with lasers: advances, challenges and opportunities

    PubMed Central

    Boni Vicari, M.; Burt, A.; Calders, K.; Lewis, S. L.; Raumonen, P.; Wilkes, P.

    2018-01-01

    Terrestrial laser scanning (TLS) is providing exciting new ways to quantify tree and forest structure, particularly above-ground biomass (AGB). We show how TLS can address some of the key uncertainties and limitations of current approaches to estimating AGB based on empirical allometric scaling equations (ASEs) that underpin all large-scale estimates of AGB. TLS provides extremely detailed non-destructive measurements of tree form independent of tree size and shape. We show examples of three-dimensional (3D) TLS measurements from various tropical and temperate forests and describe how the resulting TLS point clouds can be used to produce quantitative 3D models of branch and trunk size, shape and distribution. These models can drastically improve estimates of AGB, provide new, improved large-scale ASEs, and deliver insights into a range of fundamental tree properties related to structure. Large quantities of detailed measurements of individual 3D tree structure also have the potential to open new and exciting avenues of research in areas where difficulties of measurement have until now prevented statistical approaches to detecting and understanding underlying patterns of scaling, form and function. We discuss these opportunities and some of the challenges that remain to be overcome to enable wider adoption of TLS methods. PMID:29503726

  2. An Innovative Method for Estimating Soil Retention at a ...

    EPA Pesticide Factsheets

    Planning for a sustainable future should include an accounting of services currently provided by ecosystems such as erosion control. Retention of soil improves fertility, increases water retention, and decreases sedimentation in streams and rivers. Landscape patterns that facilitate these services could help reduce costs for flood control, dredging of reservoirs and waterways, while maintaining habitat for fish and other species important to recreational and tourism industries. Landscape scale geospatial data available for the continental United States were leveraged to estimate sediment erosion (RUSLE-based, Renard, et al. 1997) employing recent geospatial techniques of sediment delivery ratio (SDR) estimation (Cavalli, et al. 2013). The approach was designed to derive a quantitative approximation of the ecological services provided by vegetative cover, management practices, and other surface features with respect to protecting soils from the erosion processes of detachment, transport, and deposition. Quantities of soil retained on the landscape and potential erosion for multiple land cover scenarios relative to current (NLCD 2011) conditions were calculated for each calendar month, and summed to yield annual estimations at a 30-meter grid cell. Continental-scale data used included MODIS NDVI data (2000-2014) to estimate monthly USLE C-factors, gridded soil survey geographic (gSSURGO) soils data (annual USLE K factor), PRISM rainfall data (monthly USLE
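    A minimal sketch of this type of calculation, assuming hypothetical per-cell factor values: RUSLE soil loss is A = R * K * LS * C * P, potential erosion is approximated with a bare-soil cover factor, and a sediment delivery ratio is applied to the eroded amount. The numbers and the SDR below are illustrative placeholders, not the study's data.

    ```python
    def rusle_soil_loss(R, K, LS, C, P):
        """RUSLE mean annual soil loss A = R * K * LS * C * P (e.g., t/ha/yr)."""
        return R * K * LS * C * P

    # Hypothetical values for a single 30-m grid cell.
    R, K, LS, P = 120.0, 0.30, 1.8, 1.0   # erosivity, erodibility, slope length/steepness, practice
    C_actual, C_bare = 0.05, 1.0          # NDVI-derived cover factor vs. bare-soil scenario
    sdr = 0.15                            # assumed sediment delivery ratio for this cell

    potential = rusle_soil_loss(R, K, LS, C_bare, P)   # erosion with no protective cover
    actual = rusle_soil_loss(R, K, LS, C_actual, P)    # erosion with current cover
    retained = potential - actual                      # soil retained by vegetation and practices
    delivered = actual * sdr                           # eroded soil expected to reach the stream
    print(f"retained {retained:.1f}, delivered {delivered:.2f} (t/ha/yr)")
    ```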

  3. A robust approach for ECG-based analysis of cardiopulmonary coupling.

    PubMed

    Zheng, Jiewen; Wang, Weidong; Zhang, Zhengbo; Wu, Dalei; Wu, Hao; Peng, Chung-Kang

    2016-07-01

    Deriving a respiratory signal from a surface electrocardiogram (ECG) measurement has the advantage of simultaneously monitoring cardiac and respiratory activities. ECG-based cardiopulmonary coupling (CPC) analysis estimated by heart period variability and ECG-derived respiration (EDR) shows promising applications in the medical field. The aim of this paper is to provide a quantitative analysis of the ECG-based CPC, and further improve its performance. Two conventional strategies were tested to obtain the EDR signal: R-S wave amplitude and area of the QRS complex. An adaptive filter was utilized to extract the common component of inter-beat interval (RRI) and EDR, generating enhanced versions of the EDR signal. CPC is assessed through probing the nonlinear phase interactions between RRI series and respiratory signal. Respiratory oscillations presented in both RRI series and respiratory signals were extracted by ensemble empirical mode decomposition for coupling analysis via phase synchronization index. The results demonstrated that CPC estimated from conventional EDR series exhibits constant and proportional biases, while that estimated from enhanced EDR series is more reliable. Adaptive filtering can improve the accuracy of the ECG-based CPC estimation significantly and achieve robust CPC analysis. The improved ECG-based CPC estimation may provide additional prognostic information for both sleep medicine and autonomic function analysis. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
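    To make the coupling metric concrete, here is a small sketch, under the assumption that the respiratory-band oscillations have already been extracted from the RRI and EDR series (synthetic sinusoids stand in for the EEMD modes): a 1:1 phase synchronization index is computed from the Hilbert phases of the two series. This is a generic implementation, not the authors' code.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def phase_sync_index(x, y):
        """1:1 phase synchronization index gamma = |mean(exp(i*(phi_x - phi_y)))|.
        0 indicates no phase coupling, 1 indicates perfect phase locking."""
        phi_x = np.angle(hilbert(x - np.mean(x)))
        phi_y = np.angle(hilbert(y - np.mean(y)))
        return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))

    # Toy example: two respiratory-band oscillations (0.25 Hz) with a fixed phase lag.
    t = np.linspace(0, 60, 240)                       # 60 s resampled at 4 Hz
    rri_resp = np.sin(2 * np.pi * 0.25 * t)           # oscillation in the RR-interval series
    edr_resp = np.sin(2 * np.pi * 0.25 * t - 0.5)     # EDR oscillation, lagged by 0.5 rad
    print(phase_sync_index(rri_resp, edr_resp))       # close to 1.0
    ```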

  4. 76 FR 13018 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... statistical surveys that yield quantitative results that can be generalized to the population of study. This... information will not be used for quantitative information collections that are designed to yield reliably... generic mechanisms that are designed to yield quantitative results. Total Burden Estimate for the...

  5. A boosted optimal linear learner for retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Poletti, E.; Grisan, E.

    2014-03-01

    Ocular fundus images provide important information about retinal degeneration, which may be related to acute pathologies or to early signs of systemic diseases. An automatic and quantitative assessment of vessel morphological features, such as diameters and tortuosity, can improve clinical diagnosis and evaluation of retinopathy. At variance with available methods, we propose a data-driven approach, in which the system learns a set of optimal discriminative convolution kernels (linear learner). The set is progressively built based on an ADA-boost sample weighting scheme, providing seamless integration between linear learner estimation and classification. In order to capture the vessel appearance changes at different scales, the kernels are estimated on a pyramidal decomposition of the training samples. The set is employed as a rotating bank of matched filters, whose response is used by the boosted linear classifier to provide a classification of each image pixel into the two classes of interest (vessel/background). We tested the approach on fundus images available from the DRIVE dataset. We show that the segmentation performance yields an accuracy of 0.94.

  6. On estimating the economic value of insectivorous bats: Prospects and priorities for biologists

    USGS Publications Warehouse

    Boyles, Justin G.; Sole, Catherine L.; Cryan, Paul M.; McCracken, Gary F.

    2013-01-01

    Bats are among the most economically important nondomesticated mammals in the world. They are well-known pollinators and seed dispersers, but crop pest suppression is probably the most valuable ecosystem service provided by bats. Scientific literature and popular media often include reports of crop pests in the diet of bats and anecdotal or extrapolated estimates of how many insects are eaten by bats. However, quantitative estimates of the ecosystem services provided by bats in agricultural systems are rare, and the few estimates that are available are limited to a single cotton-dominated system in Texas. Despite the tremendous value for conservation and economic security of such information, surprisingly few scientific efforts have been dedicated to quantifying the economic value of bats. Here, we outline the types of information needed to better quantify the value of bats in agricultural ecosystems. Because of the complexity of the ecosystems involved, creative experimental design and innovative new methods will help advance our knowledge in this area. Experiments involving bats in agricultural systems may be needed sooner than later, before population declines associated with white-nose syndrome and wind turbines potentially render them impossible.

  7. Shear wave velocity imaging using transient electrode perturbation: phantom and ex vivo validation.

    PubMed

    DeWall, Ryan J; Varghese, Tomy; Madsen, Ernest L

    2011-03-01

    This paper presents a new shear wave velocity imaging technique to monitor radio-frequency and microwave ablation procedures, coined electrode vibration elastography. A piezoelectric actuator attached to an ablation needle is transiently vibrated to generate shear waves that are tracked at high frame rates. The time-to-peak algorithm is used to reconstruct the shear wave velocity and thereby the shear modulus variations. The feasibility of electrode vibration elastography is demonstrated using finite element models and ultrasound simulations, tissue-mimicking phantoms simulating fully (phantom 1) and partially ablated (phantom 2) regions, and an ex vivo bovine liver ablation experiment. In phantom experiments, good boundary delineation was observed. Shear wave velocity estimates were within 7% of mechanical measurements in phantom 1 and within 17% in phantom 2. Good boundary delineation was also demonstrated in the ex vivo experiment. The shear wave velocity estimates inside the ablated region were higher than mechanical testing estimates, but estimates in the untreated tissue were within 20% of mechanical measurements. A comparison of electrode vibration elastography and electrode displacement elastography showed the complementary information that they can provide. Electrode vibration elastography shows promise as an imaging modality that provides ablation boundary delineation and quantitative information during ablation procedures.
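    A minimal sketch of the time-to-peak reconstruction idea, assuming made-up arrival times: the local shear wave speed is the slope of lateral position versus arrival time, and the shear modulus follows from mu = rho * c^2 under a soft-tissue density assumption.

    ```python
    import numpy as np

    def shear_wave_speed(lateral_pos_mm, time_to_peak_ms):
        """Shear wave speed from time-to-peak arrival times: slope of a linear fit
        of position against arrival time (mm/ms is numerically equal to m/s)."""
        slope, _ = np.polyfit(time_to_peak_ms, lateral_pos_mm, 1)
        return slope

    # Hypothetical arrival times at positions moving away from the vibrating needle.
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])           # mm from the electrode
    ttp = np.array([0.35, 0.70, 1.05, 1.40, 1.75])    # ms
    c = shear_wave_speed(x, ttp)                      # ~2.9 m/s
    rho = 1000.0                                      # kg/m^3, soft-tissue assumption
    mu = rho * c**2                                   # shear modulus, Pa
    print(f"c = {c:.2f} m/s, mu = {mu / 1e3:.1f} kPa")
    ```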

  8. Quantitative assessment of the microbial risk of leafy greens from farm to consumption: preliminary framework, data, and risk estimates.

    PubMed

    Danyluk, Michelle D; Schaffner, Donald W

    2011-05-01

    This project was undertaken to relate what is known about the behavior of Escherichia coli O157:H7 under laboratory conditions and integrate this information to what is known regarding the 2006 E. coli O157:H7 spinach outbreak in the context of a quantitative microbial risk assessment. The risk model explicitly assumes that all contamination arises from exposure in the field. Extracted data, models, and user inputs were entered into an Excel spreadsheet, and the modeling software @RISK was used to perform Monte Carlo simulations. The model predicts that cut leafy greens that are temperature abused will support the growth of E. coli O157:H7, and populations of the organism may increase by as much as 1 log CFU/day under optimal temperature conditions. When the risk model used a starting level of -1 log CFU/g, with 0.1% of incoming servings contaminated, the predicted numbers of cells per serving were within the range of best available estimates of pathogen levels during the outbreak. The model predicts that levels in the field of -1 log CFU/g and 0.1% prevalence could have resulted in an outbreak approximately the size of the 2006 E. coli O157:H7 outbreak. This quantitative microbial risk assessment model represents a preliminary framework that identifies available data and provides initial risk estimates for pathogenic E. coli in leafy greens. Data gaps include retail storage times, correlations between storage time and temperature, determining the importance of E. coli O157:H7 in leafy greens lag time models, and validation of the importance of cross-contamination during the washing process.
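    A toy Monte Carlo sketch in the spirit of the described model (the actual model was an Excel/@RISK spreadsheet): servings start at the stated field level of -1 log CFU/g with 0.1% prevalence, grow by up to 1 log CFU/g per day under temperature abuse, and the per-serving dose is tallied. The storage-time distribution, growth-rate distribution, and serving size are assumptions, not the model's inputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000                                  # Monte Carlo iterations (servings)

    log_conc0 = -1.0                             # initial level in the field, log CFU/g
    prevalence = 0.001                           # 0.1% of incoming servings contaminated
    serving_g = 85.0                             # hypothetical serving size, g

    storage_days = rng.uniform(0.0, 3.0, n)      # assumed duration of temperature abuse, days
    growth_rate = rng.uniform(0.0, 1.0, n)       # log CFU/g per day, temperature-dependent
    contaminated = rng.random(n) < prevalence

    log_conc = log_conc0 + growth_rate * storage_days
    cells_per_serving = np.where(contaminated, 10.0**log_conc * serving_g, 0.0)
    print("mean cells per contaminated serving:",
          round(float(cells_per_serving[contaminated].mean()), 1))
    ```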

  9. SAS program for quantitative stratigraphic correlation by principal components

    USGS Publications Warehouse

    Hohn, M.E.

    1985-01-01

    A SAS program is presented which constructs a composite section of stratigraphic events through principal components analysis. The variables in the analysis are stratigraphic sections and the observational units are range limits of taxa. The program standardizes data in each section, extracts eigenvectors, estimates missing range limits, and computes the composite section from scores of events on the first principal component. Provided is an option of several types of diagnostic plots; these help one to determine conservative range limits or unrealistic estimates of missing values. Inspection of the graphs and eigenvalues allow one to evaluate goodness of fit between the composite and measured data. The program is extended easily to the creation of a rank-order composite. ?? 1985.

  10. Sandstone copper assessment of the Teniz Basin, Kazakhstan: Chapter R in Global mineral resource assessment

    USGS Publications Warehouse

    Cossette, Pamela M.; Bookstrom, Arthur A.; Hayes, Timothy S.; Robinson, Gilpin R.; Wallis, John C.; Zientek, Michael L.

    2014-01-01

    A quantitative mineral resource assessment has been completed that (1) delineates one 49,714 km2 tract permissive for undiscovered, sandstone subtype, sediment-hosted stratabound copper deposits, and (2) provides probabilistic estimates of numbers of undiscovered deposits and probable amounts of copper resource contained in those deposits. The permissive tract delineated in this assessment encompasses no previously known sandstone subtype, sediment-hosted stratabound copper deposits. However, this assessment estimates (with 30 percent probability) that a mean of nine undiscovered sandstone subtype copper deposits may be present in the Teniz Basin and could contain a mean total of 8.9 million metric tons of copper and 7,500 metric tons of silver.

  11. Measurement of lung expansion with computed tomography and comparison with quantitative histology.

    PubMed

    Coxson, H O; Mayo, J R; Behzad, H; Moore, B J; Verburgt, L M; Staples, C A; Paré, P D; Hogg, J C

    1995-11-01

    The total and regional lung volumes were estimated from computed tomography (CT), and the pleural pressure gradient was determined by using the milliliters of gas per gram of tissue estimated from the X-ray attenuation values and the pressure-volume curve of the lung. The data show that CT accurately estimated the volume of the resected lobe but overestimated its weight by 24 +/- 19%. The volume of gas per gram of tissue was less in the gravity-dependent regions due to a pleural pressure gradient of 0.24 +/- 0.08 cmH2O/cm of descent in the thorax. The proportion of tissue to air obtained with CT was similar to that obtained by quantitative histology. We conclude that the CT scan can be used to estimate total and regional lung volumes and that measurements of the proportions of tissue and air within the thorax by CT can be used in conjunction with quantitative histology to evaluate lung structure.
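    The milliliters of gas per gram of tissue can be computed from CT attenuation with the standard linear mixing assumption (air = -1000 HU, tissue/water ≈ 0 HU, tissue density ≈ 1 g/ml). A minimal sketch under those assumptions:

    ```python
    import numpy as np

    def gas_per_gram(hu, voxel_volume_ml=0.001):
        """Partition CT voxels into gas and tissue by linear mixing of air (-1000 HU)
        and tissue (~0 HU), then return ml of gas per gram of tissue."""
        hu = np.clip(hu, -999.0, 0.0)             # avoid pure-air voxels with zero tissue
        air_frac = -hu / 1000.0
        tissue_frac = 1.0 - air_frac
        gas_ml = air_frac * voxel_volume_ml
        tissue_g = tissue_frac * voxel_volume_ml  # 1 g/ml tissue density assumption
        return gas_ml / tissue_g

    print(gas_per_gram(np.array([-850.0, -500.0])))   # ~5.7 and 1.0 ml gas per g tissue
    ```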

  12. Quantitative ESD Guidelines for Charged Spacecraft Derived from the Physics of Discharges

    NASA Technical Reports Server (NTRS)

    Frederickson, A. R.

    1992-01-01

    Quantitative guidelines are proposed for Electrostatic Discharge (ESD) pulse shape on charged spacecraft. The guidelines are based on existing ground test data, and on a physical description of the pulsed discharge process. The guidelines are designed to predict pulse shape for surface charging and internal charging on a wide variety of spacecraft structures. The pulses depend on the area of the sample, its capacitance to ground, and the strength of the electric field in the vacuum adjacent to the charged surface. By knowing the pulse shape, current vs. time, one can determine if nearby circuits are threatened by the pulse. The quantitative guidelines might be used to estimate the level of threat to an existing spacecraft, or to redesign a spacecraft to reduce its pulses to a known safe level. The experiments which provide the data and the physics that allow one to interpret the data will be discussed, culminating in examples of how to predict pulse shape/size. This method has been used, but not confirmed, on several spacecraft.

  13. Shot noise-limited Cramér-Rao bound and algorithmic sensitivity for wavelength shifting interferometry

    NASA Astrophysics Data System (ADS)

    Chen, Shichao; Zhu, Yizheng

    2017-02-01

    Sensitivity is a critical index to measure the temporal fluctuation of the retrieved optical pathlength in a quantitative phase imaging system. However, an accurate and comprehensive analysis for sensitivity evaluation is still lacking in the current literature. In particular, previous theoretical studies of fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, which is a major category of on-axis interferometry techniques in quantitative phase imaging. Based on the derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results can provide important insights into fundamental constraints on system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
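    As a rough illustration of what a shot-noise-limited sensitivity looks like, the sketch below uses the textbook single-measurement limit sigma_phi ≈ 1 / (V * sqrt(N)) for N detected photoelectrons and fringe visibility V, converted to an optical pathlength fluctuation. This is a simplified stand-in, not the Cramér-Rao bound derived in the paper.

    ```python
    import numpy as np

    def shot_noise_phase_sensitivity(n_photons, visibility=1.0, wavelength_nm=532.0):
        """Shot-noise-limited phase noise sigma_phi ~ 1 / (V * sqrt(N)) in radians,
        and the corresponding optical pathlength noise lambda / (2*pi) * sigma_phi."""
        sigma_phi = 1.0 / (visibility * np.sqrt(n_photons))
        sigma_opl_nm = wavelength_nm / (2.0 * np.pi) * sigma_phi
        return sigma_phi, sigma_opl_nm

    # Example: a camera pixel collecting ~50,000 photoelectrons per frame.
    phi, opl = shot_noise_phase_sensitivity(5e4, visibility=0.8)
    print(f"sigma_phi = {phi * 1e3:.2f} mrad, sigma_OPL = {opl * 1e3:.0f} pm")
    ```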

  14. Quantitative analysis of factors that affect oil pipeline network accident based on Bayesian networks: A case study in China

    NASA Astrophysics Data System (ADS)

    Zhang, Chao; Qin, Ting Xin; Huang, Shuai; Wu, Jian Song; Meng, Xin Yan

    2018-06-01

    Some factors can affect the consequences of an oil pipeline accident, and their effects should be analyzed to improve emergency preparation and emergency response. Although there are some qualitative models of risk factors' effects, quantitative analysis models still need to be developed. In this study, we introduce a Bayesian network (BN) model for analyzing risk factors' effects in an oil pipeline accident case that happened in China. The incident evolution diagram is built to identify the risk factors. The BN model is then built based on the deployment rule for factor nodes in the BN and on expert knowledge combined through Dempster-Shafer evidence theory. The probabilities of incident consequences and of risk factors' effects can then be calculated. The most likely consequences given by this model are consistent with the actual case. Meanwhile, the quantitative estimates of risk factors' effects may provide a theoretical basis for taking optimal risk treatment measures in oil pipeline management, which can be used in emergency preparation and emergency response.

  15. Quantitative Analysis of the Cervical Texture by Ultrasound and Correlation with Gestational Age.

    PubMed

    Baños, Núria; Perez-Moreno, Alvaro; Migliorelli, Federico; Triginer, Laura; Cobo, Teresa; Bonet-Carne, Elisenda; Gratacos, Eduard; Palacio, Montse

    2017-01-01

    Quantitative texture analysis has been proposed to extract robust features from the ultrasound image to detect subtle changes in the textures of the images. The aim of this study was to evaluate the feasibility of quantitative cervical texture analysis to assess cervical tissue changes throughout pregnancy. This was a cross-sectional study including singleton pregnancies between 20.0 and 41.6 weeks of gestation from women who delivered at term. Cervical length was measured, and a selected region of interest in the cervix was delineated. A model to predict gestational age based on features extracted from cervical images was developed following three steps: data splitting, feature transformation, and regression model computation. Seven hundred images, 30 per gestational week, were included for analysis. There was a strong correlation between the gestational age at which the images were obtained and the estimated gestational age by quantitative analysis of the cervical texture (R = 0.88). This study provides evidence that quantitative analysis of cervical texture can extract features from cervical ultrasound images which correlate with gestational age. Further research is needed to evaluate its applicability as a biomarker of the risk of spontaneous preterm birth, as well as its role in cervical assessment in other clinical situations in which cervical evaluation might be relevant. © 2016 S. Karger AG, Basel.

  16. Development and characterization of a dynamic lesion phantom for the quantitative evaluation of dynamic contrast-enhanced MRI

    PubMed Central

    Freed, Melanie; de Zwart, Jacco A.; Hariharan, Prasanna; Myers, Matthew R.; Badano, Aldo

    2011-01-01

    Purpose: To develop a dynamic lesion phantom that is capable of producing physiological kinetic curves representative of those seen in human dynamic contrast-enhanced MRI (DCE-MRI) data. The objective of this phantom is to provide a platform for the quantitative comparison of DCE-MRI protocols to aid in the standardization and optimization of breast DCE-MRI. Methods: The dynamic lesion consists of a hollow, plastic mold with inlet and outlet tubes to allow flow of a contrast agent solution through the lesion over time. Border shape of the lesion can be controlled using the lesion mold production method. The configuration of the inlet and outlet tubes was determined using fluid transfer simulations. The total fluid flow rate was determined using x-ray images of the lesion for four different flow rates (0.25, 0.5, 1.0, and 1.5 ml∕s) to evaluate the resultant kinetic curve shape and homogeneity of the contrast agent distribution in the dynamic lesion. High spatial and temporal resolution x-ray measurements were used to estimate the true kinetic curve behavior in the dynamic lesion for benign and malignant example curves. DCE-MRI example data were acquired of the dynamic phantom using a clinical protocol. Results: The optimal inlet and outlet tube configuration for the lesion molds was two inlet molds separated by 30° and a single outlet tube directly between the two inlet tubes. X-ray measurements indicated that 1.0 ml∕s was an appropriate total fluid flow rate and provided truth for comparison with MRI data of kinetic curves representative of benign and malignant lesions. DCE-MRI data demonstrated the ability of the phantom to produce realistic kinetic curves. Conclusions: The authors have constructed a dynamic lesion phantom, demonstrated its ability to produce physiological kinetic curves, and provided estimations of its true kinetic curve behavior. This lesion phantom provides a tool for the quantitative evaluation of DCE-MRI protocols, which may lead to improved discrimination of breast cancer lesions. PMID:21992378

  17. Quantitative estimation of pesticide-likeness for agrochemical discovery.

    PubMed

    Avram, Sorin; Funar-Timofei, Simona; Borota, Ana; Chennamaneni, Sridhar Rao; Manchala, Anil Kumar; Muresan, Sorel

    2014-12-01

    The design of chemical libraries, an early step in agrochemical discovery programs, is frequently addressed by means of qualitative physicochemical and/or topological rule-based methods. The aim of this study is to develop quantitative estimates of herbicide- (QEH), insecticide- (QEI), fungicide- (QEF), and, finally, pesticide-likeness (QEP). In the assessment of these definitions, we relied on the concept of desirability functions. We found a simple function, shared by the three classes of pesticides, parameterized particularly, for six, easy to compute, independent and interpretable, molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds and number of aromatic rings. Subsequently, we describe the scoring of each pesticide class by the corresponding quantitative estimate. In a comparative study, we assessed the performance of the scoring functions using extensive datasets of patented pesticides. The hereby-established quantitative assessment has the ability to rank compounds whether they fail well-established pesticide-likeness rules or not, and offers an efficient way to prioritize (class-specific) pesticides. These findings are valuable for the efficient estimation of pesticide-likeness of vast chemical libraries in the field of agrochemical discovery. Graphical Abstract: Quantitative models for pesticide-likeness were derived using the concept of desirability functions parameterized for six, easy to compute, independent and interpretable, molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds and number of aromatic rings.
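    A minimal sketch of the desirability-function idea: each of the six properties receives a desirability in (0, 1] and the quantitative estimate is their geometric mean, in the style of QED-like scores. The desirability shape and parameter values below are placeholders, not the published QEH/QEI/QEF/QEP parameterizations.

    ```python
    import math

    # Hypothetical (center, width) desirability parameters for each property.
    PARAMS = {
        "mw":   (330.0, 120.0),   # molecular weight
        "logp": (3.0,   1.5),
        "hba":  (3.0,   2.0),     # hydrogen bond acceptors
        "hbd":  (1.0,   1.0),     # hydrogen bond donors
        "rotb": (4.0,   3.0),     # rotatable bonds
        "arom": (2.0,   1.0),     # aromatic rings
    }

    def desirability(value, center, width):
        """Simple Gaussian desirability in (0, 1]."""
        return math.exp(-((value - center) / width) ** 2)

    def pesticide_likeness(props):
        """Geometric mean of the six per-property desirabilities."""
        d = [desirability(props[k], *PARAMS[k]) for k in PARAMS]
        return math.exp(sum(math.log(x) for x in d) / len(d))

    print(round(pesticide_likeness({"mw": 310, "logp": 2.8, "hba": 3,
                                    "hbd": 1, "rotb": 5, "arom": 2}), 3))
    ```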

  18. Quantitative methods for estimating the anisotropy of the strength properties and the phase composition of Mg-Al alloys

    NASA Astrophysics Data System (ADS)

    Betsofen, S. Ya.; Kolobov, Yu. R.; Volkova, E. F.; Bozhko, S. A.; Voskresenskaya, I. I.

    2015-04-01

    Quantitative methods have been developed to estimate the anisotropy of the strength properties and to determine the phase composition of Mg-Al alloys. The efficiency of the methods is confirmed for MA5 alloy subjected to severe plastic deformation. It is shown that the Taylor factors calculated for basal slip averaged over all orientations of a polycrystalline aggregate with allowance for texture can be used for a quantitative estimation of the contribution of the texture of semifinished magnesium alloy products to the anisotropy of their strength properties. A technique of determining the composition of a solid solution and the intermetallic phase Al12Mg17 content is developed using the measurement of the lattice parameters of the solid solution and the known dependence of these lattice parameters on the composition.

  19. Single-exposure quantitative phase imaging in color-coded LED microscopy.

    PubMed

    Lee, Wonchan; Jung, Daeseong; Ryu, Suho; Joo, Chulmin

    2017-04-03

    We demonstrate single-shot quantitative phase imaging (QPI) in a platform of color-coded LED microscopy (cLEDscope). The light source in a conventional microscope is replaced by a circular LED pattern that is trisected into subregions with equal area, assigned to red, green, and blue colors. Image acquisition with a color image sensor and subsequent computation based on weak object transfer functions allow for the QPI of a transparent specimen. We also provide a correction method for color-leakage, which may be encountered in implementing our method with consumer-grade LEDs and image sensors. Most commercially available LEDs and image sensors do not provide spectrally isolated emissions and pixel responses, generating significant error in phase estimation in our method. We describe the correction scheme for this color-leakage issue, and demonstrate improved phase measurement accuracy. The computational model and single-exposure QPI capability of our method are presented by showing images of calibrated phase samples and cellular specimens.

  20. Hydrologic applications of weather radar

    NASA Astrophysics Data System (ADS)

    Seo, Dong-Jun; Habib, Emad; Andrieu, Hervé; Morin, Efrat

    2015-12-01

    By providing high-resolution quantitative precipitation information (QPI), weather radars have revolutionized hydrology in the last two decades. With the aid of GIS technology, radar-based quantitative precipitation estimates (QPE) have enabled routine high-resolution hydrologic modeling in many parts of the world. Given the ever-increasing need for higher-resolution hydrologic and water resources information for a wide range of applications, one may expect that the use of weather radar will only grow. Despite the tremendous progress, a number of significant scientific, technological and engineering challenges remain to realize its potential. New challenges are also emerging as new areas of applications are discovered, explored and pursued. The purpose of this special issue is to provide the readership with some of the latest advances, lessons learned, experiences gained, and science issues and challenges related to hydrologic applications of weather radar. The special issue features 20 contributions on various topics which reflect the increasing diversity as well as the areas of focus in radar hydrology today. The contributions may be grouped as follows:

  1. Transmission of Bacterial Zoonotic Pathogens between Pets and Humans: The Role of Pet Food.

    PubMed

    Lambertini, Elisabetta; Buchanan, Robert L; Narrod, Clare; Pradhan, Abani K

    2016-01-01

    Recent Salmonella outbreaks associated with dry pet food and treats raised the level of concern for these products as a vehicle of pathogen exposure for both pets and their owners. The need to characterize the microbiological and risk profiles of this class of products is currently not supported by sufficient specific data. This systematic review summarizes existing data on the main variables needed to support an ingredients-to-consumer quantitative risk model to (1) describe the microbial ecology of bacterial pathogens in the dry pet food production chain, (2) estimate pet exposure to pathogens through dry food consumption, and (3) assess human exposure and illness incidence due to contact with pet food and pets in the household. Risk models populated with the data here summarized will provide a tool to quantitatively address the emerging public health concerns associated with pet food and the effectiveness of mitigation measures. Results of such models can provide a basis for improvements in production processes, risk communication to consumers, and regulatory action.

  2. Classical conditioning through auditory stimuli in Drosophila: methods and models

    PubMed Central

    Menda, Gil; Bar, Haim Y.; Arthur, Ben J.; Rivlin, Patricia K.; Wyttenbach, Robert A.; Strawderman, Robert L.; Hoy, Ronald R.

    2011-01-01

    The role of sound in Drosophila melanogaster courtship, along with its perception via the antennae, is well established, as is the ability of this fly to learn in classical conditioning protocols. Here, we demonstrate that a neutral acoustic stimulus paired with a sucrose reward can be used to condition the proboscis-extension reflex, part of normal feeding behavior. This appetitive conditioning produces results comparable to those obtained with chemical stimuli in aversive conditioning protocols. We applied a logistic model with general estimating equations to predict the dynamics of learning, which successfully predicts the outcome of training and provides a quantitative estimate of the rate of learning. Use of acoustic stimuli with appetitive conditioning provides both an alternative to models most commonly used in studies of learning and memory in Drosophila and a means of testing hearing in both sexes, independently of courtship responsiveness. PMID:21832129

  3. Overview of Brain Microdialysis

    PubMed Central

    Chefer, Vladimir I.; Thompson, Alexis C.; Zapata, Agustin; Shippenberg, Toni S.

    2010-01-01

    The technique of microdialysis enables sampling and collecting of small-molecular-weight substances from the interstitial space. It is a widely used method in neuroscience and is one of the few techniques available that permits quantification of neurotransmitters, peptides, and hormones in the behaving animal. More recently, it has been used in tissue preparations for quantification of neurotransmitter release. This unit provides a brief review of the history of microdialysis and its general application in the neurosciences. The authors review the theoretical principles underlying the microdialysis process, methods available for estimating extracellular concentration from dialysis samples (i.e., relative recovery), the various factors that affect the estimate of in vivo relative recovery, and the importance of determining in vivo relative recovery to data interpretation. Several areas of special note, including impact of tissue trauma on the interpretation of microdialysis results, are discussed. Step-by-step instructions for the planning and execution of conventional and quantitative microdialysis experiments are provided. PMID:19340812

  4. Quantitative Aging Pattern in Mouse Urine Vapor as Measured by Gas-Liquid Chromatography

    NASA Technical Reports Server (NTRS)

    Robinson, Arthur B.; Dirren, Henri; Sheets, Alan; Miquel, Jaime; Lundgren, Paul R.

    1975-01-01

    We have discovered a quantitative aging pattern in mouse urine vapor. The diagnostic power of the pattern has been found to be high. We hope that this pattern will eventually allow quantitative estimates of physiological age and some insight into the biochemistry of aging.

  5. Food Consumption and Handling Survey for Quantitative Microbiological Consumer Phase Risk Assessments.

    PubMed

    Chardon, Jurgen; Swart, Arno

    2016-07-01

    In the consumer phase of a typical quantitative microbiological risk assessment (QMRA), mathematical equations identify data gaps. To acquire useful data we designed a food consumption and food handling survey (2,226 respondents) for QMRA applications that is especially aimed at obtaining quantitative data. For a broad spectrum of food products, the survey covered the following topics: processing status at retail, consumer storage, preparation, and consumption. Questions were designed to facilitate distribution fitting. In the statistical analysis, special attention was given to the selection of the most adequate distribution to describe the data. Bootstrap procedures were used to describe uncertainty. The final result was a coherent quantitative consumer phase food survey and parameter estimates for food handling and consumption practices in The Netherlands, including variation over individuals and uncertainty estimates.
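    A small sketch of the fit-then-bootstrap workflow on synthetic storage-time responses: a candidate distribution is fitted to the data, and a nonparametric bootstrap describes parameter uncertainty. The gamma distribution and the sample values are assumptions for illustration, not the survey's actual selections.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Hypothetical survey responses: reported refrigerator storage times in days.
    storage_days = rng.gamma(shape=2.0, scale=1.5, size=500)

    # Fit a gamma distribution (location fixed at 0) to describe variation over individuals.
    shape, loc, scale = stats.gamma.fit(storage_days, floc=0)

    # Nonparametric bootstrap to describe parameter uncertainty.
    boot = []
    for _ in range(200):
        resample = rng.choice(storage_days, size=storage_days.size, replace=True)
        s, _, sc = stats.gamma.fit(resample, floc=0)
        boot.append((s, sc))
    lo, hi = np.percentile(np.array(boot), [2.5, 97.5], axis=0)
    print(f"shape {shape:.2f} (95% CI {lo[0]:.2f}-{hi[0]:.2f}), "
          f"scale {scale:.2f} (95% CI {lo[1]:.2f}-{hi[1]:.2f})")
    ```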

  6. Exercise and insulin resistance in youth: a meta-analysis.

    PubMed

    Fedewa, Michael V; Gist, Nicholas H; Evans, Ellen M; Dishman, Rod K

    2014-01-01

    The prevalence of obesity and diabetes is increasing among children, adolescents, and adults. Although estimates of the efficacy of exercise training on fasting insulin and insulin resistance have been provided for adults, similar estimates have not been provided for youth. This systematic review and meta-analysis provides a quantitative estimate of the effectiveness of exercise training on fasting insulin and insulin resistance in children and adolescents. Potential sources were limited to peer-reviewed articles published before June 25, 2013, and gathered from the PubMed, SPORTDiscus, Physical Education Index, and Web of Science online databases. Analysis was limited to randomized controlled trials by using combinations of the terms adolescent, child, pediatric, youth, exercise training, physical activity, diabetes, insulin, randomized trial, and randomized controlled trial. The authors assessed 546 sources, of which 4.4% (24 studies) were eligible for inclusion. Thirty-two effects were used to estimate the effect of exercise training on fasting insulin, with 15 effects measuring the effect on insulin resistance. Estimated effects were independently calculated by multiple authors, and conflicts were resolved before calculating the overall effect. Based on the cumulative results from these studies, a small to moderate effect was found for exercise training on fasting insulin and improving insulin resistance in youth (Hedges' d effect size = 0.48 [95% confidence interval: 0.22-0.74], P < .001 and 0.31 [95% confidence interval: 0.06-0.56], P < .05, respectively). These results support the use of exercise training in the prevention and treatment of type 2 diabetes.
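    The reported effects are standardized mean differences; below is a minimal sketch of computing a Hedges-corrected effect size from the summary statistics of a single hypothetical trial (the numbers are invented, not taken from the included studies).

    ```python
    import math

    def hedges_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
        """Standardized mean difference with Hedges' small-sample correction."""
        sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
        d = (mean_c - mean_t) / sp                   # positive = lower value in the exercise group
        j = 1.0 - 3.0 / (4.0 * (n_t + n_c) - 9.0)    # small-sample correction factor
        return j * d

    # Hypothetical trial: fasting insulin (uIU/ml) after exercise training vs. control.
    print(round(hedges_d(mean_t=12.0, sd_t=4.0, n_t=30,
                         mean_c=14.0, sd_c=4.5, n_c=30), 2))   # ~0.46
    ```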

  7. Rainfall Estimation and Performance Characterization Using an X-band Dual-Polarization Radar in the San Francisco Bay Area

    NASA Astrophysics Data System (ADS)

    Cifelli, R.; Chen, H.; Chandra, C. V.

    2016-12-01

    The San Francisco Bay area is home to over 5 million people. In February 2016, the area also hosted the NFL Super Bowl, bringing additional people and focusing national attention to the region. Based on the El Nino forecast, public officials expressed concern for heavy rainfall and flooding with the potential for threats to public safety, costly flood damage to infrastructure, negative impacts to water quality (e.g., combined sewer overflows) and major disruptions in transportation. Mitigation of the negative impacts listed above requires accurate precipitation monitoring (quantitative precipitation estimation-QPE) and prediction (including radar nowcasting). The proximity to terrain and maritime conditions as well as the siting of existing NEXRAD radars are all challenges in providing accurate, short-term near surface rainfall estimates in the Bay area urban region. As part of a collaborative effort between the National Oceanic and Atmospheric Administration (NOAA) Earth System Research Laboratory, Colorado State University (CSU), and Santa Clara Valley Water District (SCVWD), an X-band dual-polarization radar was deployed in Santa Clara Valley in February of 2016 to provide support for the National Weather Service during the Super Bowl and NOAA's El Nino Rapid Response field campaign. This high-resolution radar was deployed on the roof of one of the buildings at the Penitencia Water Treatment Plant. The main goal was to provide detailed precipitation information for use in weather forecasting and to assist the water district in its ability to predict rainfall and streamflow with real-time rainfall data over Santa Clara County, especially during a potentially large El Nino year. The following figure shows the radar's coverage map, as well as sample reflectivity observations on March 06, 2016, at 00:04 UTC. This paper presents results from a pilot study from February 2016 to May 2016 demonstrating the use of X-band weather radar for quantitative precipitation estimation (QPE) in the Bay Area. The radar rainfall products are evaluated with rain gauge observations collected by SCVWD. The comparison with gages shows the excellent performance of X-band radar for rainfall monitoring in the Bay Area.
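    A minimal sketch of a radar-versus-gauge comparison: reflectivity is converted to rain rate through a power-law Z-R relation and the scan-by-scan rates are accumulated over an hour at a gauge location. The Z-R coefficients, reflectivity values, and gauge total are illustrative assumptions, not values from the pilot study (dual-polarization QPE typically also uses differential phase and differential reflectivity).

    ```python
    import numpy as np

    def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
        """Rain rate R (mm/h) from reflectivity via Z = a * R**b, with Z in linear units.
        a and b are assumed Marshall-Palmer-style coefficients, not the study's."""
        z = 10.0 ** (np.asarray(dbz, dtype=float) / 10.0)
        return (z / a) ** (1.0 / b)

    # One hour of 5-minute scans over a gauge location (hypothetical values).
    dbz_scans = [28, 33, 36, 40, 38, 35, 30, 25, 20, 18, 15, 10]
    rates = rain_rate_from_dbz(dbz_scans)             # mm/h for each 5-min scan
    radar_accum = np.sum(rates) * (5.0 / 60.0)        # hourly accumulation, mm
    gauge_accum = 3.5                                 # hypothetical gauge reading, mm
    print(f"radar {radar_accum:.1f} mm vs gauge {gauge_accum:.1f} mm, "
          f"bias {radar_accum - gauge_accum:+.1f} mm")
    ```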

  8. Linearization improves the repeatability of quantitative dynamic contrast-enhanced MRI.

    PubMed

    Jones, Kyle M; Pagel, Mark D; Cárdenas-Rodríguez, Julio

    2018-04-01

    The purpose of this study was to compare the repeatabilities of the linear and nonlinear Tofts and reference region models (RRM) for dynamic contrast-enhanced MRI (DCE-MRI). Simulated and experimental DCE-MRI data from 12 rats with a flank tumor of C6 glioma acquired over three consecutive days were analyzed using four quantitative and semi-quantitative DCE-MRI metrics. The quantitative methods used were: 1) linear Tofts model (LTM), 2) non-linear Tofts model (NTM), 3) linear RRM (LRRM), and 4) non-linear RRM (NRRM). The following semi-quantitative metrics were used: 1) maximum enhancement ratio (MER), 2) time to peak (TTP), 3) initial area under the curve (iauc64), and 4) slope. LTM and NTM were used to estimate Ktrans, while LRRM and NRRM were used to estimate Ktrans relative to muscle (RKtrans). Repeatability was assessed by calculating the within-subject coefficient of variation (wSCV) and the percent intra-subject variation (iSV) determined with the Gage R&R analysis. The iSV for RKtrans using LRRM was two-fold lower compared to NRRM at all simulated and experimental conditions. A similar trend was observed for the Tofts model, where LTM was at least 50% more repeatable than the NTM under all experimental and simulated conditions. The semi-quantitative metrics iauc64 and MER were as repeatable as Ktrans and RKtrans estimated by LTM and LRRM, respectively. The iSV for iauc64 and MER were significantly lower than the iSV for slope and TTP. In simulations and experimental results, linearization improves the repeatability of quantitative DCE-MRI by at least 30%, making it as repeatable as semi-quantitative metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
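    A generic sketch of what a linearized reference-region fit looks like: the tissue curve is regressed on the reference (muscle) curve and on the running integrals of both curves, and the first coefficient approximates the relative Ktrans. This is a simplified stand-in under a uniform-sampling assumption, not the exact LRRM/LTM formulations compared in the paper.

    ```python
    import numpy as np

    def linear_rrm_rel_ktrans(t, ct_tissue, ct_ref):
        """Linearized reference-region fit: regress the tissue curve on the reference
        curve and the running integrals of both curves; the first coefficient
        approximates Ktrans(tissue) / Ktrans(reference)."""
        dt = t[1] - t[0]                              # uniform sampling assumed
        int_ref = np.cumsum(ct_ref) * dt
        int_tis = np.cumsum(ct_tissue) * dt
        X = np.column_stack([ct_ref, int_ref, -int_tis])
        coeffs, *_ = np.linalg.lstsq(X, ct_tissue, rcond=None)
        return coeffs[0]

    # Synthetic single-compartment-style curves, for illustration only.
    t = np.linspace(0, 5, 100)                        # minutes
    ct_ref = 0.2 * (1.0 - np.exp(-1.5 * t))           # muscle-like reference curve
    ct_tis = 0.5 * (1.0 - np.exp(-1.0 * t))           # tumor-like tissue curve
    print(round(linear_rrm_rel_ktrans(t, ct_tis, ct_ref), 2))
    ```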

  9. Quantitative Estimates of the Social Benefits of Learning, 1: Crime. Wider Benefits of Learning Research Report.

    ERIC Educational Resources Information Center

    Feinstein, Leon

    The cost benefits of lifelong learning in the United Kingdom were estimated, based on quantitative evidence. Between 1975-1996, 43 police force areas in England and Wales were studied to determine the effect of wages on crime. It was found that a 10 percent rise in the average pay of those on low pay reduces the overall area property crime rate by…

  10. Fire frequency in the Interior Columbia River Basin: Building regional models from fire history data

    USGS Publications Warehouse

    McKenzie, D.; Peterson, D.L.; Agee, James K.

    2000-01-01

    Fire frequency affects vegetation composition and successional pathways; thus it is essential to understand fire regimes in order to manage natural resources at broad spatial scales. Fire history data are lacking for many regions for which fire management decisions are being made, so models are needed to estimate past fire frequency where local data are not yet available. We developed multiple regression models and tree-based (classification and regression tree, or CART) models to predict fire return intervals across the interior Columbia River basin at 1-km resolution, using georeferenced fire history, potential vegetation, cover type, and precipitation databases. The models combined semiqualitative methods and rigorous statistics. The fire history data are of uneven quality; some estimates are based on only one tree, and many are not cross-dated. Therefore, we weighted the models based on data quality and performed a sensitivity analysis of the effects on the models of estimation errors that are due to lack of cross-dating. The regression models predict fire return intervals from 1 to 375 yr for forested areas, whereas the tree-based models predict a range of 8 to 150 yr. Both types of models predict latitudinal and elevational gradients of increasing fire return intervals. Examination of regional-scale output suggests that, although the tree-based models explain more of the variation in the original data, the regression models are less likely to produce extrapolation errors. Thus, the models serve complementary purposes in elucidating the relationships among fire frequency, the predictor variables, and spatial scale. The models can provide local managers with quantitative information and provide data to initialize coarse-scale fire-effects models, although predictions for individual sites should be treated with caution because of the varying quality and uneven spatial coverage of the fire history database. The models also demonstrate the integration of qualitative and quantitative methods when requisite data for fully quantitative models are unavailable. They can be tested by comparing new, independent fire history reconstructions against their predictions and can be continually updated, as better fire history data become available.

  11. An open tool for input function estimation and quantification of dynamic PET FDG brain scans.

    PubMed

    Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro

    2016-08-01

    Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. The main contribution of this article is the development of an open-source, free to use tool that encapsulates several well-known methods for the estimation of the input function and the quantification of dynamic PET FDG studies. Some alternative strategies are also proposed and implemented in the tool for the segmentation of blood pools and parameter estimation. The tool was tested on phantoms with encouraging results that suggest that even bloodless estimators could provide a viable alternative to blood sampling for quantification using graphical analysis. The open tool is a promising opportunity for collaboration among investigators and further validation on real studies.
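    To make the quantification step concrete, the sketch below applies Patlak graphical analysis, a common approach for irreversible tracers such as FDG: after an equilibration time, Ct/Cp plotted against the time-normalized integral of the plasma input is linear with slope equal to the influx constant Ki. The synthetic input function and uptake curve are assumptions; this is not the tool's implementation.

    ```python
    import numpy as np

    def patlak_ki(t, ct, cp, t_star=10.0):
        """Patlak graphical analysis: for t >= t_star (min), Ct/Cp is linear in
        (integral of Cp)/Cp with slope Ki (the net influx constant)."""
        dt = np.gradient(t)
        int_cp = np.cumsum(cp * dt)
        mask = t >= t_star
        x = int_cp[mask] / cp[mask]
        y = ct[mask] / cp[mask]
        ki, _intercept = np.polyfit(x, y, 1)
        return ki

    # Synthetic example with a known influx constant of 0.03 /min.
    t = np.linspace(0.5, 60, 120)                          # minutes
    cp = 10.0 * np.exp(-0.1 * t) + 1.0                     # assumed plasma input function
    ct = 0.03 * np.cumsum(cp * np.gradient(t)) + 0.4 * cp  # irreversible uptake + vascular term
    print(round(patlak_ki(t, ct, cp), 3))                  # ~0.03
    ```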

  12. Detection and Estimation of 2-D Distributions of Greenhouse Gas Source Concentrations and Emissions over Complex Urban Environments and Industrial Sites

    NASA Astrophysics Data System (ADS)

    Zaccheo, T. S.; Pernini, T.; Dobler, J. T.; Blume, N.; Braun, M.

    2017-12-01

    This work highlights the use of the greenhouse-gas laser imaging tomography experiment (GreenLITE™) data in conjunction with a sparse tomography approach to identify and quantify both urban and industrial sources of CO2 and CH4. The GreenLITE™ system provides a user-defined set of time-sequenced intersecting chords or integrated column measurements at a fixed height through a quasi-horizontal plane of interest. This plane, with unobstructed views along the lines of sight, may range from complex industrial facilities to a small city scale or urban sector. The continuous time phased absorption measurements are converted to column concentrations and combined with a plume based model to estimate the 2-D distribution of gas concentration over extended areas ranging from 0.04-25 km2. Finally, these 2-D maps of concentration are combined with ancillary meteorological and atmospheric data to identify potential emission sources and provide first order estimates of their associated fluxes. In this presentation, we will provide a brief overview of the systems and results from both controlled release experiments and a long-term system deployment in Paris, FR. These results provide a quantitative assessment of the system's ability to detect and estimate CO2 and CH4 sources, and demonstrate its ability to perform long-term autonomous monitoring and quantification of either persistent or sporadic emissions that may have both health and safety as well as environmental impacts.

  13. Block matching and Wiener filtering approach to optical turbulence mitigation and its application to simulated and real imagery with quantitative error analysis

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; Rucci, Michael A.; Dapore, Alexander J.; Karch, Barry K.

    2017-07-01

    We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
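    A minimal sketch of the final restoration step: the registered-and-averaged frame is deconvolved with a frequency-domain Wiener filter given a PSF estimate and a noise-to-signal ratio. A Gaussian kernel stands in here for the paper's parametric, registration-dependent PSF model, and the input image is a random placeholder.

    ```python
    import numpy as np

    def gaussian_psf(shape, sigma):
        """Assumed Gaussian blur kernel standing in for the residual-blur PSF."""
        y, x = np.indices(shape)
        cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
        psf = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
        return psf / psf.sum()

    def wiener_deconvolve(avg_image, psf, nsr=0.01):
        """Frequency-domain Wiener filter: X = conj(H) / (|H|^2 + NSR) * Y."""
        pad = np.zeros_like(avg_image)                 # embed PSF in a full-size array
        ph, pw = psf.shape
        oy, ox = pad.shape[0] // 2 - ph // 2, pad.shape[1] // 2 - pw // 2
        pad[oy:oy + ph, ox:ox + pw] = psf
        H = np.fft.fft2(np.fft.ifftshift(pad))         # PSF peak moved to the origin (no shift)
        Y = np.fft.fft2(avg_image)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)
        return np.real(np.fft.ifft2(W * Y))

    # Usage on a placeholder for the registered-and-averaged frame.
    avg = np.random.rand(256, 256)
    restored = wiener_deconvolve(avg, gaussian_psf((31, 31), sigma=2.5), nsr=0.02)
    ```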

  14. Temporal lobe epilepsy: quantitative MR volumetry in detection of hippocampal atrophy.

    PubMed

    Farid, Nikdokht; Girard, Holly M; Kemmotsu, Nobuko; Smith, Michael E; Magda, Sebastian W; Lim, Wei Y; Lee, Roland R; McDonald, Carrie R

    2012-08-01

    To determine the ability of fully automated volumetric magnetic resonance (MR) imaging to depict hippocampal atrophy (HA) and to help correctly lateralize the seizure focus in patients with temporal lobe epilepsy (TLE). This study was conducted with institutional review board approval and in compliance with HIPAA regulations. Volumetric MR imaging data were analyzed for 34 patients with TLE and 116 control subjects. Structural volumes were calculated by using U.S. Food and Drug Administration-cleared software for automated quantitative MR imaging analysis (NeuroQuant). Results of quantitative MR imaging were compared with visual detection of atrophy, and, when available, with histologic specimens. Receiver operating characteristic analyses were performed to determine the optimal sensitivity and specificity of quantitative MR imaging for detecting HA and asymmetry. A linear classifier with cross validation was used to estimate the ability of quantitative MR imaging to help lateralize the seizure focus. Quantitative MR imaging-derived hippocampal asymmetries discriminated patients with TLE from control subjects with high sensitivity (86.7%-89.5%) and specificity (92.2%-94.1%). When a linear classifier was used to discriminate left versus right TLE, hippocampal asymmetry achieved 94% classification accuracy. Volumetric asymmetries of other subcortical structures did not improve classification. Compared with invasive video electroencephalographic recordings, lateralization accuracy was 88% with quantitative MR imaging and 85% with visual inspection of volumetric MR imaging studies but only 76% with visual inspection of clinical MR imaging studies. Quantitative MR imaging can depict the presence and laterality of HA in TLE with accuracy rates that may exceed those achieved with visual inspection of clinical MR imaging studies. Thus, quantitative MR imaging may enhance standard visual analysis, providing a useful and viable means for translating volumetric analysis into clinical practice.

  15. Optimizing the Terzaghi Estimator of the 3D Distribution of Rock Fracture Orientations

    NASA Astrophysics Data System (ADS)

    Tang, Huiming; Huang, Lei; Juang, C. Hsein; Zhang, Junrong

    2017-08-01

    Orientation statistics are prone to bias when surveyed with the scanline mapping technique in which the observed probabilities differ, depending on the intersection angle between the fracture and the scanline. This bias leads to 1D frequency statistical data that are poorly representative of the 3D distribution. A widely accessible estimator named after Terzaghi was developed to estimate 3D frequencies from 1D biased observations, but the estimation accuracy is limited for fractures at narrow intersection angles to scanlines (termed the blind zone). Although numerous works have concentrated on accuracy with respect to the blind zone, accuracy outside the blind zone has rarely been studied. This work contributes to the limited investigations of accuracy outside the blind zone through a qualitative assessment that deploys a mathematical derivation of the Terzaghi equation in conjunction with a quantitative evaluation that uses fractures simulation and verification of natural fractures. The results show that the estimator does not provide a precise estimate of 3D distributions and that the estimation accuracy is correlated with the grid size adopted by the estimator. To explore the potential for improving accuracy, the particular grid size producing maximum accuracy is identified from 168 combinations of grid sizes and two other parameters. The results demonstrate that the 2° × 2° grid size provides maximum accuracy for the estimator in most cases when applied outside the blind zone. However, if the global sample density exceeds 0.5°⁻², then maximum accuracy occurs at a grid size of 1° × 1°.
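    The bias correction at issue weights each observed fracture by the inverse of the cosine of the angle between the scanline and the fracture pole, with a cap near the blind zone; the weighted poles are then binned on an orientation grid of the chosen size. A small sketch of the weighting step, with made-up orientations, is shown below.

    ```python
    import numpy as np

    def terzaghi_weights(pole_trend, pole_plunge, line_trend, line_plunge, max_weight=10.0):
        """Terzaghi correction for scanline orientation bias: weight each fracture by
        1 / |cos(theta)|, where theta is the angle between the scanline direction and
        the fracture pole; weights are capped to limit blow-up near the blind zone."""
        def unit_vector(trend_deg, plunge_deg):
            tr, pl = np.radians(trend_deg), np.radians(plunge_deg)
            return np.array([np.cos(pl) * np.sin(tr), np.cos(pl) * np.cos(tr), -np.sin(pl)])

        line = unit_vector(line_trend, line_plunge)
        poles = np.column_stack([unit_vector(t, p) for t, p in zip(pole_trend, pole_plunge)]).T
        cos_theta = np.abs(poles @ line)
        return np.minimum(1.0 / np.maximum(cos_theta, 1e-6), max_weight)

    # Hypothetical fracture poles measured along a horizontal scanline trending 090.
    w = terzaghi_weights(pole_trend=np.array([100.0, 160.0, 10.0]),
                         pole_plunge=np.array([5.0, 20.0, 70.0]),
                         line_trend=90.0, line_plunge=0.0)
    print(w.round(2))   # the third pole lies near the blind zone and hits the cap
    ```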

  16. A novel power spectrum calculation method using phase-compensation and weighted averaging for the estimation of ultrasound attenuation.

    PubMed

    Heo, Seo Weon; Kim, Hyungsuk

    2010-05-01

    An estimation of ultrasound attenuation in soft tissues is critical in quantitative ultrasound analysis since it is not only related to the estimations of other ultrasound parameters, such as speed of sound, integrated scatterers, or scatterer size, but also provides pathological information about the scanned tissue. However, estimation performances of ultrasound attenuation are intimately tied to the accurate extraction of spectral information from the backscattered radiofrequency (RF) signals. In this paper, we propose two novel techniques for calculating a block power spectrum from the backscattered ultrasound signals. These are based on the phase-compensation of each RF segment using the normalized cross-correlation to minimize estimation errors due to phase variations, and the weighted averaging technique to maximize the signal-to-noise ratio (SNR). The simulation results with uniform numerical phantoms demonstrate that the proposed method estimates local attenuation coefficients within 1.57% of the actual values while the conventional methods estimate those within 2.96%. The proposed method is especially effective when dealing with signals reflected from deeper depths where the SNR level is lower or when the gated window contains a small number of signal samples. Experimental results, obtained at 5 MHz with a one-dimensional 128-element array on tissue-mimicking phantoms, also show that the proposed method provides better estimation results (within 3.04% of the actual value) with smaller estimation variances compared to the conventional methods (within 5.93%) for all cases considered. Copyright 2009 Elsevier B.V. All rights reserved.
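    A simplified sketch of the two ideas in the title: each gated RF segment is aligned to a reference segment using the lag of the cross-correlation (phase compensation), and the per-segment periodograms are then combined with SNR-proportional weights. The alignment, windowing, and weighting choices below are generic stand-ins, not the authors' exact estimator.

    ```python
    import numpy as np

    def phase_compensated_spectrum(segments, weights=None):
        """Average the power spectra of overlapping RF segments after aligning each
        segment to the first one by the lag that maximizes the cross-correlation."""
        ref = segments[0]
        n = len(ref)
        aligned = []
        for seg in segments:
            xc = np.correlate(seg - seg.mean(), ref - ref.mean(), mode="full")
            lag = int(np.argmax(xc)) - (n - 1)
            aligned.append(np.roll(seg, -lag))
        aligned = np.array(aligned)
        spectra = np.abs(np.fft.rfft(aligned * np.hanning(n), axis=1)) ** 2
        if weights is None:
            weights = aligned.var(axis=1)             # crude SNR-proportional weights
        weights = np.asarray(weights, dtype=float)
        weights /= weights.sum()
        return np.sum(spectra * weights[:, None], axis=0)

    # Toy usage: three copies of the same backscattered pulse with small shifts and noise.
    t = np.arange(256)
    base = np.sin(2 * np.pi * 0.1 * t) * np.exp(-((t - 128) / 40.0) ** 2)
    segs = [np.roll(base, s) + 0.05 * np.random.randn(256) for s in (0, 3, -2)]
    block_spectrum = phase_compensated_spectrum(segs)
    ```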

  17. NEXRAD quantitative precipitation estimates, data acquisition, and processing for the DuPage County, Illinois, streamflow-simulation modeling system

    USGS Publications Warehouse

    Ortel, Terry W.; Spies, Ryan R.

    2015-11-19

    Next-Generation Radar (NEXRAD) has become an integral component in the estimation of precipitation (Kitzmiller and others, 2013). The high spatial and temporal resolution of NEXRAD has revolutionized the ability to estimate precipitation across vast regions, which is especially beneficial in areas without a dense rain-gage network. With the improved precipitation estimates, hydrologic models can produce reliable streamflow forecasts for areas across the United States. NEXRAD data from the National Weather Service (NWS) has been an invaluable tool used by the U.S. Geological Survey (USGS) for numerous projects and studies; NEXRAD data processing techniques similar to those discussed in this Fact Sheet have been developed within the USGS, including the NWS Quantitative Precipitation Estimates archive developed by Blodgett (2013).

  18. Impact of format and content of visual display of data on comprehension, choice and preference: a systematic review.

    PubMed

    Hildon, Zoe; Allwood, Dominique; Black, Nick

    2012-02-01

    Background: displays comparing the performance of healthcare providers are largely based on commonsense. Objective: to review the literature on the impact of compositional format and content of quantitative data displays on people's comprehension, choice and preference. Data sources: Ovid databases, expert recommendations and snowballing techniques. Study selection: evaluations of the impact of different formats (bar charts, tables and pictographs) and content (ordering, explanatory visual cues, etc.) of quantitative data displays meeting defined quality criteria. Data extraction: type of decision; decision-making domains; audiences; formats; content; methodology; findings. Results: most of the 30 studies used quantitative (n = 26) methods with patients or public groups (n = 28) rather than with professionals (n = 2). Bar charts were the most frequent format, followed by pictographs and tables. As regards format, tables and pictographs appeared better understood than bar charts despite the latter being preferred. Although accessible to less numerate and older populations, pictographs tended to lead to more risk avoidance. Tables appeared accessible to all. Aspects of content enhancing the impact of data displays included giving visual explanatory cues and contextual information while still attempting simplicity ('less is more'); ordering data; consistency. Icons rather than numbers were more user-friendly but could lead to over-estimation of risk. Uncertainty was not widely understood, nor well represented. Conclusions: though heterogeneous and limited in scope, there is sufficient research evidence to inform the presentation of quantitative data that compares the performance of healthcare providers. The impact of new formats, such as funnel plots, needs to be evaluated.

  19. How social information can improve estimation accuracy in human groups.

    PubMed

    Jayles, Bertrand; Kim, Hye-Rin; Escobedo, Ramón; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy

    2017-11-21

    In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects' sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. Copyright © 2017 the Author(s). Published by PNAS.
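
    The Cauchy-like shape of the log-estimates is easy to check on raw data. A minimal sketch; the numbers below are toy data, not the study's measurements:

    ```python
    import numpy as np
    from scipy import stats

    estimates = np.array([1200, 800, 5000, 950, 2000, 700, 10000, 1500])  # toy answers
    log_est = np.log(estimates)

    loc, scale = stats.cauchy.fit(log_est)   # location ~ the group's typical log-estimate
    mu, sigma = stats.norm.fit(log_est)

    # Compare the two fits by total log-likelihood: heavy tails favour the Cauchy.
    ll_cauchy = stats.cauchy.logpdf(log_est, loc, scale).sum()
    ll_normal = stats.norm.logpdf(log_est, mu, sigma).sum()
    print(f"Cauchy: loc={loc:.2f}, scale={scale:.2f}, logL={ll_cauchy:.1f}")
    print(f"Normal: mu={mu:.2f}, sigma={sigma:.2f}, logL={ll_normal:.1f}")
    ```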

  20. How social information can improve estimation accuracy in human groups

    PubMed Central

    Jayles, Bertrand; Kim, Hye-rin; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy

    2017-01-01

    In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects’ sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. PMID:29118142

  1. qPIPSA: Relating enzymatic kinetic parameters and interaction fields

    PubMed Central

    Gabdoulline, Razif R; Stein, Matthias; Wade, Rebecca C

    2007-01-01

    Background The simulation of metabolic networks in quantitative systems biology requires the assignment of enzymatic kinetic parameters. Experimentally determined values are often not available and therefore computational methods to estimate these parameters are needed. It is possible to use the three-dimensional structure of an enzyme to perform simulations of a reaction and derive kinetic parameters. However, this is computationally demanding and requires detailed knowledge of the enzyme mechanism. We have therefore sought to develop a general, simple and computationally efficient procedure to relate protein structural information to enzymatic kinetic parameters that allows consistency between the kinetic and structural information to be checked and estimation of kinetic constants for structurally and mechanistically similar enzymes. Results We describe qPIPSA: quantitative Protein Interaction Property Similarity Analysis. In this analysis, molecular interaction fields, for example, electrostatic potentials, are computed from the enzyme structures. Differences in molecular interaction fields between enzymes are then related to the ratios of their kinetic parameters. This procedure can be used to estimate unknown kinetic parameters when enzyme structural information is available and kinetic parameters have been measured for related enzymes or were obtained under different conditions. The detailed interaction of the enzyme with substrate or cofactors is not modeled and is assumed to be similar for all the proteins compared. The protein structure modeling protocol employed ensures that differences between models reflect genuine differences between the protein sequences, rather than random fluctuations in protein structure. Conclusion Provided that the experimental conditions and the protein structural models refer to the same protein state or conformation, correlations between interaction fields and kinetic parameters can be established for sets of related enzymes. Outliers may arise due to variation in the importance of different contributions to the kinetic parameters, such as protein stability and conformational changes. The qPIPSA approach can assist in the validation as well as estimation of kinetic parameters, and provide insights into enzyme mechanism. PMID:17919319

  2. Eliciting improved quantitative judgements using the IDEA protocol: A case study in natural resource management.

    PubMed

    Hemming, Victoria; Walshe, Terry V; Hanea, Anca M; Fidler, Fiona; Burgman, Mark A

    2018-01-01

    Natural resource management uses expert judgement to estimate facts that inform important decisions. Unfortunately, expert judgement is often derived by informal and largely untested protocols, despite evidence that the quality of judgements can be improved with structured approaches. We attribute the lack of uptake of structured protocols to the dearth of illustrative examples that demonstrate how they can be applied within pressing time and resource constraints, while also improving judgements. In this paper, we demonstrate how the IDEA protocol for structured expert elicitation may be deployed to overcome operational challenges while improving the quality of judgements. The protocol was applied to the estimation of 14 future abiotic and biotic events on the Great Barrier Reef, Australia. Seventy-six participants with varying levels of expertise related to the Great Barrier Reef were recruited and allocated randomly to eight groups. Each participant provided their judgements using the four-step question format of the IDEA protocol ('Investigate', 'Discuss', 'Estimate', 'Aggregate') through remote elicitation. When the events were realised, the participant judgements were scored in terms of accuracy, calibration and informativeness. The results demonstrate that the IDEA protocol provides a practical, cost-effective, and repeatable approach to the elicitation of quantitative estimates and uncertainty via remote elicitation. We emphasise that i) the aggregation of diverse individual judgements into pooled group judgments almost always outperformed individuals, and ii) use of a modified Delphi approach helped to remove linguistic ambiguity, and further improved individual and group judgements. Importantly, the protocol encourages review, critical appraisal and replication, each of which is required if judgements are to be used in place of data in a scientific context. The results add to the growing body of literature that demonstrates the merit of using structured elicitation protocols. We urge decision-makers and analysts to use insights and examples to improve the evidence base of expert judgement in natural resource management.

  3. An Extensive Unified Thermo-Electric Module Characterization Method

    PubMed Central

    Attivissimo, Filippo; Guarnieri Calò Carducci, Carlo; Lanzolla, Anna Maria Lucia; Spadavecchia, Maurizio

    2016-01-01

    Thermo-Electric Modules (TEMs) are increasingly used in power generation as a valid alternative to batteries, providing autonomy to sensor nodes or entire Wireless Sensor Networks, especially in energy harvesting applications. Manufacturers often provide some essential parameters under specified conditions, such as the maximum temperature difference between the surfaces of the TEM or the maximum heat absorption, but in many cases a TEM-based system is operated under the best conditions only for a fraction of the time; thus, when dynamic working conditions occur, estimating the performance of TEMs is crucial to determine their actual efficiency. The focus of this work is on using a novel procedure to estimate the parameters of both the electrical and thermal equivalent models and to investigate their relationship with the operating temperature and the temperature gradient. The novelty of the method consists in the use of a simple test configuration to stimulate the modules and simultaneously acquire electrical and thermal data, so that all parameters are obtained in a single test. Two different current profiles are proposed as possible stimuli, whose use depends on the available test instrumentation, and their relative performance is compared both quantitatively and qualitatively in terms of standard deviation and estimation uncertainty. The obtained results, besides agreeing with both the technical literature and a further estimation method based on module specifications, also provide the designer with a detailed description of the module behavior, useful for simulating its performance in different scenarios. PMID:27983575

  4. An Extensive Unified Thermo-Electric Module Characterization Method.

    PubMed

    Attivissimo, Filippo; Guarnieri Calò Carducci, Carlo; Lanzolla, Anna Maria Lucia; Spadavecchia, Maurizio

    2016-12-13

    Thermo-Electric Modules (TEMs) are increasingly used in power generation as a valid alternative to batteries, providing autonomy to sensor nodes or entire Wireless Sensor Networks, especially in energy harvesting applications. Manufacturers often provide some essential parameters under specified conditions, such as the maximum temperature difference between the surfaces of the TEM or the maximum heat absorption, but in many cases a TEM-based system is operated under the best conditions only for a fraction of the time; thus, when dynamic working conditions occur, estimating the performance of TEMs is crucial to determine their actual efficiency. The focus of this work is on using a novel procedure to estimate the parameters of both the electrical and thermal equivalent models and to investigate their relationship with the operating temperature and the temperature gradient. The novelty of the method consists in the use of a simple test configuration to stimulate the modules and simultaneously acquire electrical and thermal data, so that all parameters are obtained in a single test. Two different current profiles are proposed as possible stimuli, whose use depends on the available test instrumentation, and their relative performance is compared both quantitatively and qualitatively in terms of standard deviation and estimation uncertainty. The obtained results, besides agreeing with both the technical literature and a further estimation method based on module specifications, also provide the designer with a detailed description of the module behavior, useful for simulating its performance in different scenarios.

  5. Quantitative modelling in cognitive ergonomics: predicting signals passed at danger.

    PubMed

    Moray, Neville; Groeger, John; Stanton, Neville

    2017-02-01

    This paper shows how to combine field observations, experimental data and mathematical modelling to produce quantitative explanations and predictions of complex events in human-machine interaction. As an example, we consider a major railway accident. In 1999, a commuter train passed a red signal near Ladbroke Grove, UK, into the path of an express. We use the Public Inquiry Report, 'black box' data, and accident and engineering reports to construct a case history of the accident. We show how to combine field data with mathematical modelling to estimate the probability that the driver observed and identified the state of the signals, and checked their status. Our methodology can explain the SPAD ('Signal Passed At Danger'), generate recommendations about signal design and placement and provide quantitative guidance for the design of safer railway systems' speed limits and the location of signals. Practitioner Summary: Detailed ergonomic analysis of railway signals and rail infrastructure reveals problems of signal identification at this location. A record of driver eye movements measures attention, from which a quantitative model for signal placement and permitted speeds can be derived. The paper is an example of how to combine field data, basic research and mathematical modelling to solve ergonomic design problems.

  6. An approach to standardization of urine sediment analysis via suggestion of a common manual protocol.

    PubMed

    Ko, Dae-Hyun; Ji, Misuk; Kim, Sollip; Cho, Eun-Jung; Lee, Woochang; Yun, Yeo-Min; Chun, Sail; Min, Won-Ki

    2016-01-01

    The results of urine sediment analysis have been reported semiquantitatively. However, as recent guidelines recommend quantitative reporting of urine sediment, and with the development of automated urine sediment analyzers, there is an increasing need for quantitative analysis of urine sediment. Here, we developed a protocol for urine sediment analysis and quantified the results. Based on questionnaires, various reports, guidelines, and experimental results, we developed a protocol for urine sediment analysis. The results of this new protocol were compared with those obtained with a standardized chamber and an automated sediment analyzer. Reference intervals were also estimated using the new protocol. We developed a protocol with centrifugation at 400 g for 5 min, with an average concentration factor of 30. The correlations between the quantitative results of urine sediment analysis, the standardized chamber, and the automated sediment analyzer were generally good. The conversion factor derived from the new protocol showed a better fit with the results of manual counting than the default conversion factor in the automated sediment analyzer. We developed a protocol for manual urine sediment analysis to quantitatively report the results. This protocol may provide a means for standardization of urine sediment analysis.

  7. General Methods for Evolutionary Quantitative Genetic Inference from Generalized Mixed Models.

    PubMed

    de Villemereuil, Pierre; Schielzeth, Holger; Nakagawa, Shinichi; Morrissey, Michael

    2016-11-01

    Methods for inference and interpretation of evolutionary quantitative genetic parameters, and for prediction of the response to selection, are best developed for traits with normal distributions. Many traits of evolutionary interest, including many life history and behavioral traits, have inherently nonnormal distributions. The generalized linear mixed model (GLMM) framework has become a widely used tool for estimating quantitative genetic parameters for nonnormal traits. However, whereas GLMMs provide inference on a statistically convenient latent scale, it is often desirable to express quantitative genetic parameters on the scale upon which traits are measured. The parameters of fitted GLMMs, despite being on a latent scale, fully determine all quantities of potential interest on the scale on which traits are expressed. We provide expressions for deriving each of such quantities, including population means, phenotypic (co)variances, variance components including additive genetic (co)variances, and parameters such as heritability. We demonstrate that fixed effects have a strong impact on those parameters and show how to deal with this by averaging or integrating over fixed effects. The expressions require integration of quantities determined by the link function, over distributions of latent values. In general cases, the required integrals must be solved numerically, but efficient methods are available and we provide an implementation in an R package, QGglmm. We show that known formulas for quantities such as heritability of traits with binomial and Poisson distributions are special cases of our expressions. Additionally, we show how fitted GLMM can be incorporated into existing methods for predicting evolutionary trajectories. We demonstrate the accuracy of the resulting method for evolutionary prediction by simulation and apply our approach to data from a wild pedigreed vertebrate population. Copyright © 2016 de Villemereuil et al.

  8. Observer variation in the assessment of root canal curvature.

    PubMed

    Faraj, S; Boutsioukis, C

    2017-02-01

    To evaluate the inter- and intra-observer agreement between training/trained endodontists regarding the ex vivo classification of root canal curvature into three categories and its measurement using three quantitative methods. Periapical radiographs of seven extracted human posterior teeth with varying degrees of curvature were exposed ex vivo. Twenty training/trained endodontists were asked to classify the root canal curvature into three categories (<10°, 10-30°, >30°), to measure the curvature using three quantitative methods (Schneider, Weine, Pruett) and to draw angles of 10° or 30°, as a control experiment. The procedure was repeated after six weeks. Inter- and intra-observer agreement was evaluated by the intraclass correlation coefficient and weighted kappa. The inter-observer agreement on the visual classification of root canal curvature was substantial (ICC = 0.65, P < 0.018), but a trend towards underestimation of the angle was evident. Participants modified their classifications both within and between the two sessions. Median angles drawn as a control experiment were not significantly different from the target values (P > 0.10), but the results of individual participants varied. When quantitative methods were used, the inter- and intra-observer agreement on the angle measurements was considerably better (ICC = 0.76-0.82, P < 0.001) than on the radius measurements (ICC = 0.16-0.19, P > 0.895). Visual estimation of root canal curvature was not reliable. The use of computer-based quantitative methods is recommended. The measurement of radius of curvature was more subjective than angle measurement. Endodontic Associations need to provide specific guidelines on how to estimate root canal curvature in case difficulty assessment forms. © 2015 International Endodontic Journal. Published by John Wiley & Sons Ltd.

  9. Pentacam Scheimpflug quantitative imaging of the crystalline lens and intraocular lens.

    PubMed

    Rosales, Patricia; Marcos, Susana

    2009-05-01

    To implement geometrical and optical distortion correction methods for anterior segment Scheimpflug images obtained with a commercially available system (Pentacam, Oculus Optikgeräte GmbH). Ray tracing algorithms were implemented to obtain corrected ocular surface geometry from the original images captured by the Pentacam's CCD camera. As details of the optical layout were not fully provided by the manufacturer, an iterative procedure (based on imaging of calibrated spheres) was developed to estimate the camera lens specifications. The correction procedure was tested on Scheimpflug images of a physical water cell model eye (with polymethylmethacrylate cornea and a commercial IOL of known dimensions) and of a normal human eye previously measured with a corrected optical and geometrical distortion Scheimpflug camera (Topcon SL-45 [Topcon Medical Systems Inc] from the Vrije University, Amsterdam, Holland). Uncorrected Scheimpflug images show flatter surfaces and thinner lenses than in reality. The application of geometrical and optical distortion correction algorithms improves the accuracy of the estimated anterior lens radii of curvature by 30% to 40% and of the estimated posterior lens by 50% to 100%. The average error in the retrieved radii was 0.37 and 0.46 mm for the anterior and posterior lens radii of curvature, respectively, and 0.048 mm for lens thickness. The Pentacam Scheimpflug system can be used to obtain quantitative information on the geometry of the crystalline lens, provided that geometrical and optical distortion correction algorithms are applied, within the accuracy of state-of-the art phakometry and biometry. The techniques could improve with exact knowledge of the technical specifications of the instrument, improved edge detection algorithms, consideration of aspheric and non-rotationally symmetrical surfaces, and introduction of a crystalline gradient index.

  10. Heritable Variation for Sex Ratio under Environmental Sex Determination in the Common Snapping Turtle (Chelydra Serpentina)

    PubMed Central

    Janzen, F. J.

    1992-01-01

    The magnitude of quantitative genetic variation for primary sex ratio was measured in families extracted from a natural population of the common snapping turtle (Chelydra serpentina), which possesses temperature-dependent sex determination (TSD). Eggs were incubated at three temperatures that produced mixed sex ratios. This experimental design provided estimates of the heritability of sex ratio in multiple environments and a test of the hypothesis that genotype X environment (G X E) interactions may be maintaining genetic variation for sex ratio in this population of C. serpentina. Substantial quantitative genetic variation for primary sex ratio was detected in all experimental treatments. These results in conjunction with the occurrence of TSD in this species provide support for three critical assumptions of Fisher's theory for the microevolution of sex ratio. There were statistically significant effects of family and incubation temperature on sex ratio, but no significant interaction was observed. Estimates of the genetic correlations of sex ratio across environments were highly positive and essentially indistinguishable from +1. These latter two findings suggest that G X E interaction is not the mechanism maintaining genetic variation for sex ratio in this system. Finally, although substantial heritable variation exists for primary sex ratio of C. serpentina under constant temperatures, estimates of the effective heritability of primary sex ratio in nature are approximately an order of magnitude smaller. Small effective heritability and a long generation time in C. serpentina imply that evolution of sex ratios would be slow even in response to strong selection by, among other potential agents, any rapid and/or substantial shifts in local temperatures, including those produced by changes in the global climate. PMID:1592234

  11. Spatiotemporal Evolution of Ebola Virus Disease at Sub-National Level during the 2014 West Africa Epidemic: Model Scrutiny and Data Meagreness.

    PubMed

    Santermans, Eva; Robesyn, Emmanuel; Ganyani, Tapiwa; Sudre, Bertrand; Faes, Christel; Quinten, Chantal; Van Bortel, Wim; Haber, Tom; Kovac, Thomas; Van Reeth, Frank; Testa, Marco; Hens, Niel; Plachouras, Diamantis

    2016-01-01

    The Ebola outbreak in West Africa has infected at least 27,443 individuals and killed 11,207, based on data until 24 June, 2015, released by the World Health Organization (WHO). This outbreak has been characterised by extensive geographic spread across the affected countries Guinea, Liberia and Sierra Leone, and by localized hotspots within these countries. The rapid recognition and quantitative assessment of localised areas of higher transmission can inform the optimal deployment of public health resources. A variety of mathematical models have been used to estimate the evolution of this epidemic, and some have pointed out the importance of the spatial heterogeneity apparent from incidence maps. However, little is known about the district-level transmission. Given that many response decisions are taken at sub-national level, the current study aimed to investigate the spatial heterogeneity by using a different modelling framework, built on publicly available data at district level. Furthermore, we assessed whether this model could quantify the effect of intervention measures and provide predictions at a local level to guide public health action. We used a two-stage modelling approach: a) a flexible spatiotemporal growth model across all affected districts and b) a deterministic SEIR compartmental model per district whenever deemed appropriate. Our estimates show substantial differences in the evolution of the outbreak in the various regions of Guinea, Liberia and Sierra Leone, illustrating the importance of monitoring the outbreak at district level. We also provide an estimate of the time-dependent district-specific effective reproduction number, as a quantitative measure to compare transmission between different districts and give input for informed decisions on control measures and resource allocation. Prediction and assessing the impact of control measures proved to be difficult without more accurate data. In conclusion, this study provides us a useful tool at district level for public health, and illustrates the importance of collecting and sharing data.
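
    The district-level second stage is a deterministic SEIR compartmental model. A minimal sketch of such a model is given below; the rates are illustrative Ebola-like values, not the parameters estimated in the study.

    ```python
    import numpy as np
    from scipy.integrate import odeint

    def seir(y, t, beta, sigma, gamma):
        """Standard SEIR ordinary differential equations."""
        S, E, I, R = y
        N = S + E + I + R
        dS = -beta * S * I / N
        dE = beta * S * I / N - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        return dS, dE, dI, dR

    N = 500_000                                  # illustrative district population
    y0 = (N - 1, 0, 1, 0)                        # one initial infectious case
    t = np.linspace(0, 365, 366)                 # one year, daily resolution
    beta, sigma, gamma = 0.25, 1 / 9.4, 1 / 7.4  # illustrative rates per day

    S, E, I, R = odeint(seir, y0, t, args=(beta, sigma, gamma)).T
    print("basic reproduction number R0 =", beta / gamma)
    print("epidemic peak (infectious):", int(I.max()), "on day", int(t[I.argmax()]))
    ```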

  12. Bayesian data assimilation provides rapid decision support for vector-borne diseases.

    PubMed

    Jewell, Chris P; Brown, Richard G

    2015-07-06

    Predicting the spread of vector-borne diseases in response to incursions requires knowledge of both host and vector demographics in advance of an outbreak. Although host population data are typically available, for novel disease introductions there is a high chance of the pathogen using a vector for which data are unavailable. This presents a barrier to estimating the parameters of dynamical models representing host-vector-pathogen interaction, and hence limits their ability to provide quantitative risk forecasts. The Theileria orientalis (Ikeda) outbreak in New Zealand cattle demonstrates this problem: even though the vector has received extensive laboratory study, a high degree of uncertainty persists over its national demographic distribution. Addressing this, we develop a Bayesian data assimilation approach whereby indirect observations of vector activity inform a seasonal spatio-temporal risk surface within a stochastic epidemic model. We provide quantitative predictions for the future spread of the epidemic, quantifying uncertainty in the model parameters, case infection times and the disease status of undetected infections. Importantly, we demonstrate how our model learns sequentially as the epidemic unfolds and provide evidence for changing epidemic dynamics through time. Our approach therefore provides a significant advance in rapid decision support for novel vector-borne disease outbreaks. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  13. Assessing the risk posed by natural hazards to infrastructures

    NASA Astrophysics Data System (ADS)

    Eidsvig, Unni; Kristensen, Krister; Vidar Vangelsten, Bjørn

    2015-04-01

    The modern society is increasingly dependent on infrastructures to maintain its function, and disruption in one of the infrastructure systems may have severe consequences. The Norwegian municipalities have, according to legislation, a duty to carry out a risk and vulnerability analysis and plan and prepare for emergencies in a short- and long term perspective. Vulnerability analysis of the infrastructures and their interdependencies is an important part of this analysis. This paper proposes a model for assessing the risk posed by natural hazards to infrastructures. The model prescribes a three level analysis with increasing level of detail, moving from qualitative to quantitative analysis. This paper focuses on the second level, which consists of a semi-quantitative analysis. The purpose of this analysis is to perform a screening of the scenarios of natural hazards threatening the infrastructures identified in the level 1 analysis and investigate the need for further analyses, i.e. level 3 quantitative analyses. The proposed level 2 analysis considers the frequency of the natural hazard, different aspects of vulnerability including the physical vulnerability of the infrastructure itself and the societal dependency on the infrastructure. An indicator-based approach is applied, ranking the indicators on a relative scale. The proposed indicators characterize the robustness of the infrastructure, the importance of the infrastructure as well as interdependencies between society and infrastructure affecting the potential for cascading effects. Each indicator is ranked on a 1-5 scale based on pre-defined ranking criteria. The aggregated risk estimate is a combination of the semi-quantitative vulnerability indicators, as well as quantitative estimates of the frequency of the natural hazard and the number of users of the infrastructure. Case studies for two Norwegian municipalities are presented, where risk to primary road, water supply and power network threatened by storm and landslide is assessed. The application examples show that the proposed model provides a useful tool for screening of undesirable events, with the ultimate goal to reduce the societal vulnerability.

  14. Risk assessment of carcinogens in food

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barlow, Susan, E-mail: suebarlow@mistral.co.u; Schlatter, Josef; Federal Office of Public Health, Consumer Protection Directorate, Stauffacherstrasse 101, CH-8004 Zuerich

    2010-03-01

    Approaches for the risk assessment of carcinogens in food have evolved as scientific knowledge has advanced. Early methods allowed little more than hazard identification and an indication of carcinogenic potency. Evaluation of the modes of action of carcinogens and their broad division into genotoxic and epigenetic (non-genotoxic, non-DNA reactive) carcinogens have played an increasing role in determining the approach followed and provide possibilities for more detailed risk characterisation, including provision of quantitative estimates of risk. Reliance on experimental animal data for the majority of risk assessments and the fact that human exposures to dietary carcinogens are often orders of magnitude below doses used in experimental studies has provided a fertile ground for discussion and diverging views on the most appropriate way to offer risk assessment advice. Approaches used by national and international bodies differ, with some offering numerical estimates of potential risks to human health, while others express considerable reservations about the validity of quantitative approaches requiring extrapolation of dose-response data below the observed range and instead offer qualitative advice. Recognising that qualitative advice alone does not provide risk managers with information on which to prioritise the need for risk management actions, a 'margin of exposure' approach for substances that are both genotoxic and carcinogenic has been developed, which is now being used by the World Health Organization and the European Food Safety Authority. This review describes the evolution of risk assessment advice on carcinogens and discusses examples of ways in which carcinogens in food have been assessed in Europe.
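
    The margin of exposure (MOE) mentioned above is conventionally the ratio of a reference point on the animal dose-response curve (often the BMDL10, the lower confidence bound on the dose giving a 10% extra tumour incidence) to the estimated human exposure; EFSA has indicated that an MOE of 10,000 or more, based on a BMDL10, would be of low concern from a public health point of view. A minimal sketch with illustrative numbers:

    ```python
    # Margin-of-exposure screening for a genotoxic carcinogen in food (illustrative values).
    bmdl10_mg_per_kg_bw_day = 0.1        # reference point from an animal study
    exposure_mg_per_kg_bw_day = 2.5e-6   # estimated dietary exposure

    moe = bmdl10_mg_per_kg_bw_day / exposure_mg_per_kg_bw_day
    print(f"MOE = {moe:,.0f}")

    # An MOE >= 10,000 (when based on a BMDL10) is generally treated as a low priority
    # for risk management; smaller values flag a potential concern.
    print("low concern" if moe >= 10_000 else "potential concern")
    ```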

  15. Integrating Genomic Analysis with the Genetic Basis of Gene Expression: Preliminary Evidence of the Identification of Causal Genes for Cardiovascular and Metabolic Traits Related to Nutrition in Mexicans123

    PubMed Central

    Bastarrachea, Raúl A.; Gallegos-Cabriales, Esther C.; Nava-González, Edna J.; Haack, Karin; Voruganti, V. Saroja; Charlesworth, Jac; Laviada-Molina, Hugo A.; Veloz-Garza, Rosa A.; Cardenas-Villarreal, Velia Margarita; Valdovinos-Chavez, Salvador B.; Gomez-Aguilar, Patricia; Meléndez, Guillermo; López-Alvarenga, Juan Carlos; Göring, Harald H. H.; Cole, Shelley A.; Blangero, John; Comuzzie, Anthony G.; Kent, Jack W.

    2012-01-01

    Whole-transcriptome expression profiling provides novel phenotypes for analysis of complex traits. Gene expression measurements reflect quantitative variation in transcript-specific messenger RNA levels and represent phenotypes lying close to the action of genes. Understanding the genetic basis of gene expression will provide insight into the processes that connect genotype to clinically significant traits representing a central tenet of system biology. Synchronous in vivo expression profiles of lymphocytes, muscle, and subcutaneous fat were obtained from healthy Mexican men. Most genes were expressed at detectable levels in multiple tissues, and RNA levels were correlated between tissue types. A subset of transcripts with high reliability of expression across tissues (estimated by intraclass correlation coefficients) was enriched for cis-regulated genes, suggesting that proximal sequence variants may influence expression similarly in different cellular environments. This integrative global gene expression profiling approach is proving extremely useful for identifying genes and pathways that contribute to complex clinical traits. Clearly, the coincidence of clinical trait quantitative trait loci and expression quantitative trait loci can help in the prioritization of positional candidate genes. Such data will be crucial for the formal integration of positional and transcriptomic information characterized as genetical genomics. PMID:22797999

  16. Quantitative analysis of visible surface defect risk in tablets during film coating using terahertz pulsed imaging.

    PubMed

    Niwa, Masahiro; Hiraishi, Yasuhiro

    2014-01-30

    Tablets are the most common form of solid oral dosage produced by pharmaceutical industries. There are several challenges to successful and consistent tablet manufacturing. One well-known quality issue is visible surface defects, which generally occur due to insufficient physical strength, causing breakage or abrasion during processing, packaging, or shipping. Techniques that allow quantitative evaluation of surface strength and the risk of surface defect would greatly aid in quality control. Here terahertz pulsed imaging (TPI) was employed to evaluate the surface properties of core tablets with visible surface defects of varying severity after film coating. Other analytical methods, such as tensile strength measurements, friability testing, and scanning electron microscopy (SEM), were used to validate TPI results. Tensile strength and friability provided no information on visible surface defect risk, whereas the TPI-derived unique parameter terahertz electric field peak strength (TEFPS) provided spatial distribution of surface density/roughness information on core tablets, which helped in estimating tablet abrasion risk prior to film coating and predicting the location of the defects. TPI also revealed the relationship between surface strength and blending condition and is a nondestructive, quantitative approach to aid formulation development and quality control that can reduce visible surface defect risk in tablets. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. The 2006 William Feinberg lecture: shifting the paradigm from stroke to global vascular risk estimation.

    PubMed

    Sacco, Ralph L

    2007-06-01

    By the year 2010, it is estimated that 18.1 million people worldwide will die annually because of cardiovascular diseases and stroke. "Global vascular risk" more broadly includes the multiple overlapping disease silos of stroke, myocardial infarction, peripheral arterial disease, and vascular death. Estimation of global vascular risk requires consideration of a variety of variables including demographics, environmental behaviors, and risk factors. Data from multiple studies suggest continuous linear relationships between the physiological vascular risk modulators of blood pressure, lipids, and blood glucose rather than treating these conditions as categorical risk factors. Constellations of risk factors may be more relevant than individual categorical components. Exciting work with novel risk factors may also have predictive value in estimates of global vascular risk. Advances in imaging have led to the measurement of subclinical conditions such as carotid intima-media thickness and subclinical brain conditions such as white matter hyperintensities and silent infarcts. These subclinical measurements may be intermediate stages in the transition from asymptomatic to symptomatic vascular events, appear to be associated with the fundamental vascular risk factors, and represent opportunities to more precisely quantitate disease progression. The expansion of studies in molecular epidemiology and detection of genetic markers underlying vascular risks also promises to extend our precision of global vascular risk estimation. Global vascular risk estimation will require quantitative methods that bundle these multi-dimensional data into more precise estimates of future risk. The power of genetic information coupled with data on demographics, risk-inducing behaviors, vascular risk modulators, biomarkers, and measures of subclinical conditions should provide the most realistic approximation of an individual's future global vascular risk. The ultimate public health benefit, however, will depend on not only identification of global vascular risk but also the realization that we can modify this risk and prove the prediction models wrong.

  18. Monochloramine Disinfection Kinetics of Nitrosomonas europaea by Propidium Monoazide Quantitative PCR and Live/Dead BacLight Methods▿

    PubMed Central

    Wahman, David G.; Wulfeck-Kleier, Karen A.; Pressman, Jonathan G.

    2009-01-01

    Monochloramine disinfection kinetics were determined for the pure-culture ammonia-oxidizing bacterium Nitrosomonas europaea (ATCC 19718) by two culture-independent methods, namely, Live/Dead BacLight (LD) and propidium monoazide quantitative PCR (PMA-qPCR). Both methods were first verified with mixtures of heat-killed (nonviable) and non-heat-killed (viable) cells before a series of batch disinfection experiments with stationary-phase cultures (batch grown for 7 days) at pH 8.0, 25°C, and 5, 10, and 20 mg Cl2/liter monochloramine. Two data sets were generated based on the viability method used, either (i) LD or (ii) PMA-qPCR. These two data sets were used to estimate kinetic parameters for the delayed Chick-Watson disinfection model through a Bayesian analysis implemented in WinBUGS. This analysis provided parameter estimates of 490 mg Cl2-min/liter for the lag coefficient (b) and 1.6 × 10−3 to 4.0 × 10−3 liter/mg Cl2-min for the Chick-Watson disinfection rate constant (k). While estimates of b were similar for both data sets, the LD data set resulted in a greater k estimate than that obtained with the PMA-qPCR data set, implying that the PMA-qPCR viability measure was more conservative than LD. For N. europaea, the lag phase was not previously reported for culture-independent methods and may have implications for nitrification in drinking water distribution systems. This is the first published application of a PMA-qPCR method for disinfection kinetic model parameter estimation as well as its application to N. europaea or monochloramine. Ultimately, this PMA-qPCR method will allow evaluation of monochloramine disinfection kinetics for mixed-culture bacteria in drinking water distribution systems. PMID:19561179
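
    The delayed Chick-Watson model referenced here assumes no inactivation until the cumulative exposure Ct exceeds the lag coefficient b, after which log-survival declines linearly in Ct at rate k. A minimal sketch using parameter values of the order reported in the abstract; the survival ratio is written on a natural-log scale here, although the fitted parameterization may use log10:

    ```python
    import numpy as np

    def delayed_chick_watson(Ct, b, k):
        """ln(N/N0) as a function of disinfectant exposure Ct.

        Ct : exposure (mg Cl2-min/L)
        b  : lag coefficient (mg Cl2-min/L); no inactivation below this exposure
        k  : inactivation rate constant (L/mg Cl2-min)
        """
        Ct = np.asarray(Ct, dtype=float)
        return np.where(Ct <= b, 0.0, -k * (Ct - b))

    Ct = np.linspace(0, 2000, 5)                      # mg Cl2-min/L
    print(delayed_chick_watson(Ct, b=490, k=2.5e-3))  # b and k of the order reported
    ```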

  19. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors—Air Gap Effect

    PubMed Central

    Bore, Thierry; Wagner, Norman; Delepine Lesoille, Sylvie; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-01-01

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency or time domain sensor response must be characterized. For this purpose, dielectric properties of intact clay rock samples experimentally determined in the frequency range from 1 MHz to 10 GHz were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in the time domain with classical travel-time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel-time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to reach a perfect probe-to-clay-rock coupling. PMID:27096865
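
    The empirical Topp relation used in the travel-time analysis maps an apparent relative permittivity (derived from the two-way travel time along the probe) to volumetric water content. A minimal sketch, assuming the standard Topp et al. (1980) coefficients rather than any clay-rock-specific calibration:

    ```python
    import numpy as np

    C0 = 299_792_458.0   # speed of light in vacuum (m/s)

    def apparent_permittivity(travel_time_s, probe_length_m):
        """Apparent relative permittivity from the two-way travel time along the probe."""
        return (C0 * travel_time_s / (2.0 * probe_length_m)) ** 2

    def topp_water_content(eps_a):
        """Volumetric water content from the empirical Topp et al. (1980) polynomial."""
        return -5.3e-2 + 2.92e-2 * eps_a - 5.5e-4 * eps_a**2 + 4.3e-6 * eps_a**3

    eps = apparent_permittivity(travel_time_s=2.0e-9, probe_length_m=0.15)
    print(f"eps_a = {eps:.1f}, theta_v = {topp_water_content(eps):.3f}")
    ```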

  20. Estimating Benzene Exposure Level over Time and by Industry Type through a Review of Literature on Korea

    PubMed Central

    Park, Donguk; Choi, Sangjun; Ha, Kwonchul; Jung, Hyejung; Yoon, Chungsik; Koh, Dong-Hee; Ryu, Seunghun; Kim, Soogeun; Kang, Dongmug; Yoo, Kyemook

    2015-01-01

    The major purpose of this study is to construct a retrospective exposure assessment for benzene through a review of literature on Korea. Airborne benzene measurements reported in 34 articles were reviewed. A total of 15,729 individual measurements were compiled. Weighted arithmetic means [AM(w)] and their variance calculated across studies were summarized according to 5-year period intervals (prior to the 1970s through the 2010s) and industry type. Industries were classified according to Korea Standard Industrial Classification (KSIC) using information provided in the literature. We estimated quantitative retrospective exposure to benzene for each cell in the matrix through a combination of time and KSIC. Analysis of the AM(w) indicated reductions in exposure levels over time, regardless of industry, with mean levels prior to the 1980–1984 period of 50.4 ppm (n = 2,289), which dropped to 2.8 ppm (n = 305) in the 1990–1994 period, and to 0.1 ppm (n = 294) in the 1995–1999 period. There has been no improvement since the 2000s, when the AM(w) of 4.3 ppm (n = 6,211) for the 2005–2009 period and 4.5 ppm (n = 3,358) for the 2010–2013 period were estimated. A comparison by industry found no consistent patterns in the measurement results. Our estimated benzene measurements can be used to determine not only the possibility of retrospective exposure to benzene, but also to estimate the level of quantitative or semiquantitative retrospective exposure to benzene. PMID:26929825
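
    The pooled AM(w) values can be reproduced with a simple weighted mean across studies. A minimal sketch, assuming weights proportional to the number of measurements in each study (the paper's exact weighting scheme may differ) and using illustrative numbers:

    ```python
    import numpy as np

    # Illustrative per-study summaries for one period/industry cell: mean (ppm) and n.
    study_means = np.array([45.0, 60.5, 38.2])
    study_n = np.array([120, 310, 85])

    am_w = np.average(study_means, weights=study_n)                 # weighted arithmetic mean
    var_w = np.average((study_means - am_w) ** 2, weights=study_n)  # weighted variance
    print(f"AM(w) = {am_w:.1f} ppm, weighted variance = {var_w:.1f} ppm^2")
    ```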

  1. Error Analysis of Clay-Rock Water Content Estimation with Broadband High-Frequency Electromagnetic Sensors--Air Gap Effect.

    PubMed

    Bore, Thierry; Wagner, Norman; Lesoille, Sylvie Delepine; Taillade, Frederic; Six, Gonzague; Daout, Franck; Placko, Dominique

    2016-04-18

    Broadband electromagnetic frequency or time domain sensor techniques present high potential for quantitative water content monitoring in porous media. Prior to in situ application, the impact of the relationship between the broadband electromagnetic properties of the porous material (clay-rock) and the water content on the frequency or time domain sensor response must be characterized. For this purpose, dielectric properties of intact clay rock samples experimentally determined in the frequency range from 1 MHz to 10 GHz were used as input data in 3-D numerical frequency domain finite element field calculations to model the one-port broadband frequency or time domain transfer function for a three-rod sensor embedded in the clay-rock. The sensor response in terms of the reflection factor was analyzed in the time domain with classical travel-time analysis in combination with an empirical model according to the Topp equation, as well as the theoretical Lichtenecker and Rother model (LRM), to estimate the volumetric water content. The mixture equation, considering the appropriate porosity of the investigated material, provides a practical and efficient approach for water content estimation based on classical travel-time analysis with the onset method. The inflection method is not recommended for water content estimation in electrically dispersive and absorptive material. Moreover, the results clearly indicate that effects due to coupling of the sensor to the material cannot be neglected. Coupling problems caused by an air gap lead to dramatic effects on water content estimation, even for submillimeter gaps. Thus, the quantitative determination of the in situ water content requires careful sensor installation in order to reach a perfect probe-to-clay-rock coupling.

  2. Using Dynamic Contrast Enhanced MRI to Quantitatively Characterize Maternal Vascular Organization in the Primate Placenta

    PubMed Central

    Frias, A.E.; Schabel, M.C.; Roberts, V.H.J.; Tudorica, A.; Grigsby, P.L.; Oh, K.Y.; Kroenke, C. D.

    2015-01-01

    Purpose: The maternal microvasculature of the primate placenta is organized into 10-20 perfusion domains that are functionally optimized to facilitate nutrient exchange to support fetal growth. This study describes a dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) method for identifying vascular domains, and quantifying maternal blood flow in them. Methods: A rhesus macaque on the 133rd day of pregnancy (G133, term = 165 days) underwent Doppler ultrasound (US) procedures, DCE-MRI, and Cesarean-section delivery. Serial T1-weighted images acquired throughout intravenous injection of a contrast reagent (CR) bolus were analyzed to obtain CR arrival time maps of the placenta. Results: Watershed segmentation of the arrival time map identified 16 perfusion domains. The number and location of these domains corresponded to anatomical cotyledonary units observed following delivery. Analysis of the CR wave front through each perfusion domain enabled determination of volumetric flow, which ranged from 9.03 to 44.9 mL/sec (25.2 ± 10.3 mL/sec). These estimates are supported by Doppler US results. Conclusions: The DCE-MRI analysis described here provides quantitative estimates of the number of maternal perfusion domains in a primate placenta, and estimates flow within each domain. Anticipated extensions of this technique are to the study of placental function in nonhuman primate models of obstetric complications. PMID:24753177

  3. Estimates of occupational safety and health impacts resulting from large-scale production of major photovoltaic technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Owens, T.; Ungers, L.; Briggs, T.

    1980-08-01

    The purpose of this study is to estimate, both quantitatively and qualitatively, the worker and societal risks attributable to four photovoltaic cell (solar cell) production processes. Quantitative risk values were determined by use of statistics from the California semiconductor industry. The qualitative risk assessment was performed using a variety of both governmental and private sources of data. The occupational health statistics derived from the semiconductor industry were used to predict injury and fatality levels associated with photovoltaic cell manufacturing. The use of these statistics to characterize the two silicon processes described herein is defensible from the standpoint that many of the same process steps and materials are used in both the semiconductor and photovoltaic industries. These health statistics are less applicable to the gallium arsenide and cadmium sulfide manufacturing processes, primarily because of differences in the materials utilized. Although such differences tend to discourage any absolute comparisons among the four photovoltaic cell production processes, certain relative comparisons are warranted. To facilitate a risk comparison of the four processes, the number and severity of process-related chemical hazards were assessed. This qualitative hazard assessment addresses both the relative toxicity and the exposure potential of substances in the workplace. In addition to the worker-related hazards, estimates of process-related emissions and wastes are also provided.

  4. Uncertainty analysis of the radiological characteristics of radioactive waste using a method based on log-normal distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gigase, Yves

    2007-07-01

    Available in abstract form only. Full text of publication follows: The uncertainty on characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package one has to combine these various uncertainties. This paper discusses an approach to this problem, based on the use of the log-normal distribution, which is both elegant and easy to use. It can provide, for example, quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other more complex characteristics such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, in particular in those decision processes where the uncertainty on the amount of activity is considered to be important, such as in probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
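
    A convenient property behind this approach is that the product of independent log-normal quantities (for example, a scaling factor multiplied by a measured key-nuclide activity) is again log-normal, so uncertainty intervals can be combined by adding the log-scale parameters. A minimal sketch with illustrative numbers:

    ```python
    import numpy as np

    def lognormal_params(median, gsd):
        """Return (mu, sigma) of ln(X) given a median and a geometric standard deviation."""
        return np.log(median), np.log(gsd)

    # Illustrative example: activity = scaling factor x measured key-nuclide activity.
    mu_sf, s_sf = lognormal_params(median=0.02, gsd=3.0)      # scaling factor
    mu_key, s_key = lognormal_params(median=5.0e6, gsd=1.5)   # key-nuclide activity (Bq)

    mu = mu_sf + mu_key            # product of log-normals: add the mu's...
    sigma = np.hypot(s_sf, s_key)  # ...and add the sigma^2's
    ci_95 = (np.exp(mu - 1.96 * sigma), np.exp(mu + 1.96 * sigma))
    print(f"median = {np.exp(mu):.3g} Bq, 95% interval = {ci_95[0]:.3g} - {ci_95[1]:.3g} Bq")
    ```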

  5. Toxicity Estimation Software Tool (TEST)

    EPA Science Inventory

    The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...

  6. Historical baselines of coral cover on tropical reefs as estimated by expert opinion

    PubMed Central

    Cheung, William W.L.; Bruno, John F.

    2018-01-01

    Coral reefs are important habitats that represent global marine biodiversity hotspots and provide important benefits to people in many tropical regions. However, coral reefs are becoming increasingly threatened by climate change, overfishing, habitat destruction, and pollution. Historical baselines of coral cover are important to understand how much coral cover has been lost, e.g., to avoid the ‘shifting baseline syndrome’. There are few quantitative observations of coral reef cover prior to the industrial revolution, and therefore baselines of coral reef cover are difficult to estimate. Here, we use expert and ocean-user opinion surveys to estimate baselines of global coral reef cover. The overall mean estimated baseline coral cover was 59% (±19% standard deviation), compared to an average of 58% (±18% standard deviation) estimated by professional scientists. We did not find evidence of the shifting baseline syndrome, whereby respondents who first observed coral reefs more recently report lower estimates of baseline coral cover. These estimates of historical coral reef baseline cover are important for scientists, policy makers, and managers to understand the extent to which coral reefs have become depleted and to set appropriate recovery targets. PMID:29379692

  7. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that Root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating rms error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
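
    The rain-rate dependence of the sampling error can be examined empirically by comparing satellite grid-box averages with a dense surface reference and fitting a power law. A minimal sketch under that assumption; this is a generic diagnostic, not the paper's model-based prediction:

    ```python
    import numpy as np

    def rms_error_vs_rainrate(sat_monthly, ref_monthly, n_bins=8):
        """RMS satellite-minus-reference difference, binned by the reference rain rate."""
        r = ref_monthly.ravel()
        d = (sat_monthly - ref_monthly).ravel()
        ok = r > 0
        edges = np.quantile(r[ok], np.linspace(0, 1, n_bins + 1))
        idx = np.digitize(r[ok], edges[1:-1])
        rate = np.array([r[ok][idx == i].mean() for i in range(n_bins)])
        rms = np.array([np.sqrt(np.mean(d[ok][idx == i] ** 2)) for i in range(n_bins)])
        # Fit rms ~ a * rate**b in log-log space.
        b, log_a = np.polyfit(np.log(rate), np.log(rms), 1)
        return rate, rms, np.exp(log_a), b
    ```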

  8. Historical baselines of coral cover on tropical reefs as estimated by expert opinion.

    PubMed

    Eddy, Tyler D; Cheung, William W L; Bruno, John F

    2018-01-01

    Coral reefs are important habitats that represent global marine biodiversity hotspots and provide important benefits to people in many tropical regions. However, coral reefs are becoming increasingly threatened by climate change, overfishing, habitat destruction, and pollution. Historical baselines of coral cover are important to understand how much coral cover has been lost, e.g., to avoid the 'shifting baseline syndrome'. There are few quantitative observations of coral reef cover prior to the industrial revolution, and therefore baselines of coral reef cover are difficult to estimate. Here, we use expert and ocean-user opinion surveys to estimate baselines of global coral reef cover. The overall mean estimated baseline coral cover was 59% (±19% standard deviation), compared to an average of 58% (±18% standard deviation) estimated by professional scientists. We did not find evidence of the shifting baseline syndrome, whereby respondents who first observed coral reefs more recently report lower estimates of baseline coral cover. These estimates of historical coral reef baseline cover are important for scientists, policy makers, and managers to understand the extent to which coral reefs have become depleted and to set appropriate recovery targets.

  9. Voronovskaja's theorem revisited

    NASA Astrophysics Data System (ADS)

    Tachev, Gancho T.

    2008-07-01

    We present a new quantitative variant of Voronovskaja's theorem for the Bernstein operator. This estimate improves the recent quantitative versions of Voronovskaja's theorem for certain Bernstein-type operators, obtained by H. Gonska, P. Pitul and I. Rasa in 2006.

  10. Demand theory of gene regulation. II. Quantitative application to the lactose and maltose operons of Escherichia coli.

    PubMed Central

    Savageau, M A

    1998-01-01

    Induction of gene expression can be accomplished either by removing a restraining element (negative mode of control) or by providing a stimulatory element (positive mode of control). According to the demand theory of gene regulation, which was first presented in qualitative form in the 1970s, the negative mode will be selected for the control of a gene whose function is in low demand in the organism's natural environment, whereas the positive mode will be selected for the control of a gene whose function is in high demand. This theory has now been further developed in a quantitative form that reveals the importance of two key parameters: cycle time C, which is the average time for a gene to complete an ON/OFF cycle, and demand D, which is the fraction of the cycle time that the gene is ON. Here we estimate nominal values for the relevant mutation rates and growth rates and apply the quantitative demand theory to the lactose and maltose operons of Escherichia coli. The results define regions of the C vs. D plot within which selection for the wild-type regulatory mechanisms is realizable, and these in turn provide the first estimates for the minimum and maximum values of demand that are required for selection of the positive and negative modes of gene control found in these systems. The ratio of mutation rate to selection coefficient is the most relevant determinant of the realizable region for selection, and the most influential parameter is the selection coefficient that reflects the reduction in growth rate when there is superfluous expression of a gene. The quantitative theory predicts the rate and extent of selection for each mode of control. It also predicts three critical values for the cycle time. The predicted maximum value for the cycle time C is consistent with the lifetime of the host. The predicted minimum value for C is consistent with the time for transit through the intestinal tract without colonization. Finally, the theory predicts an optimum value of C that is in agreement with the observed frequency for E. coli colonizing the human intestinal tract. PMID:9691028

  11. A unified material decomposition framework for quantitative dual- and triple-energy CT imaging.

    PubMed

    Zhao, Wei; Vernekohl, Don; Han, Fei; Han, Bin; Peng, Hao; Yang, Yong; Xing, Lei; Min, James K

    2018-04-21

    Many clinical applications depend critically on the accurate differentiation and classification of different types of materials in patient anatomy. This work introduces a unified framework for accurate nonlinear material decomposition and applies it, for the first time, to the concept of triple-energy CT (TECT) for enhanced material differentiation and classification, as well as to dual-energy CT (DECT). We express the polychromatic projection as a linear combination of line integrals of material-selective images. The material decomposition is then turned into a problem of minimizing the least-squares difference between measured and estimated CT projections. The optimization problem is solved iteratively by updating the line integrals. The proposed technique is evaluated using several numerical phantom measurements under different scanning protocols. The triple-energy data acquisition is implemented at the scales of micro-CT and clinical CT imaging with a commercial "TwinBeam" dual-source DECT configuration and a fast kV-switching DECT configuration. Material decomposition and quantitative comparison with a photon counting detector and in the presence of a bow-tie filter are also performed. The proposed method provides quantitative material- and energy-selective images examining realistic configurations for both DECT and TECT measurements. Compared to the polychromatic kV CT images, virtual monochromatic images show superior image quality. For the mouse phantom, quantitative measurements show that the differences between gadodiamide and iodine concentrations obtained using TECT and idealized photon counting CT (PCCT) are smaller than 8 and 1 mg/mL, respectively. TECT outperforms DECT for multicontrast CT imaging and is robust with respect to spectrum estimation. For the thorax phantom, the differences between the concentrations of the contrast map and the corresponding true reference values are smaller than 7 mg/mL for all of the realistic configurations. A unified framework for both DECT and TECT imaging has been established for the accurate extraction of material compositions using currently available commercial DECT configurations. The novel technique is promising to provide an urgently needed solution for several CT-based diagnostic and therapy applications, especially for the diagnosis of cardiovascular and abdominal diseases where multicontrast imaging is involved. © 2018 American Association of Physicists in Medicine.
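
    A minimal sketch of the kind of least-squares material decomposition described above, reduced to two materials and two spectra with made-up spectral weights and attenuation coefficients (the actual implementation, spectra, and materials in the paper differ):

        import numpy as np
        from scipy.optimize import least_squares

        energies = np.array([40.0, 60.0, 80.0, 100.0])           # coarse keV grid, illustrative
        spectra = np.array([[0.5, 0.3, 0.2, 0.0],                 # "low-kV" spectrum weights
                            [0.1, 0.3, 0.4, 0.2]])                # "high-kV" spectrum weights
        mu = np.array([[0.40, 0.25, 0.20, 0.18],                  # material 1 attenuation (made up)
                       [4.00, 1.50, 0.80, 0.50]])                 # material 2 attenuation (made up)

        def forward(A):
            """Polychromatic log-projection for material line integrals A."""
            atten = np.exp(-mu.T @ A)                             # transmission per energy bin
            return -np.log(spectra @ atten)                       # one value per spectrum

        A_true = np.array([20.0, 0.05])                           # "true" line integrals
        p_meas = forward(A_true)

        # Iteratively update the line integrals to minimise the projection mismatch.
        sol = least_squares(lambda A: forward(A) - p_meas, x0=np.array([10.0, 0.0]),
                            bounds=(0.0, np.inf))
        print("recovered line integrals:", sol.x)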

  12. Lithospheric strength of Ganymede: Clues to early thermal profiles from extensional tectonic features

    NASA Technical Reports Server (NTRS)

    Golombek, M. P.; Banerdt, W. B.

    1985-01-01

    While it is generally agreed that the strength of a planet's lithosphere is controlled by a combination of brittle sliding and ductile flow laws, predicting the geometry and initial characteristics of faults due to failure from stresses imposed on the lithospheric strength envelope has not been thoroughly explored. Researchers used lithospheric strength envelopes to analyze the extensional features found on Ganymede. This application provides a quantitative means of estimating early thermal profiles on Ganymede, thereby constraining its early thermal evolution.

  13. Dual nozzle aerodynamic and cooling analysis study

    NASA Technical Reports Server (NTRS)

    Meagher, G. M.

    1981-01-01

    Analytical models to predict performance and operating characteristics of dual nozzle concepts were developed and improved. Aerodynamic models are available to define flow characteristics and bleed requirements for both the dual throat and dual expander concepts. Advanced analytical techniques were utilized to provide quantitative estimates of the bleed flow, boundary layer, and shock effects within dual nozzle engines. Thermal analyses were performed to define cooling requirements for baseline configurations, and special studies of unique dual nozzle cooling problems defined feasible means of achieving adequate cooling.

  14. Providing Additional Support for MNA by Including Quantitative Lines of Evidence for Abiotic Degradation and Co-metabolic Oxidation of Chlorinated Ethylenes

    DTIC Science & Technology

    2017-09-01

    Record fragment (only partially recoverable): the acceptable concentration is site specific and may depend on the travel time of groundwater from a source to a property boundary or sentry well. Figure 4.3.4 (caption): Decline in concentrations of PCE, TCE, and cDCE + t-DCE over time in well.

  15. Electronic structure and microscopic model of CoNb2O6

    NASA Astrophysics Data System (ADS)

    Molla, Kaimujjaman; Rahaman, Badiur

    2018-05-01

    We present first-principles density functional calculations to determine the underlying spin model of CoNb2O6. The first-principles calculations define the main superexchange paths between Co spins in this compound. We discuss the nature of the exchange paths and provide quantitative estimates of the magnetic exchange couplings. A microscopic model based on analysis of the electronic structure of this system places it in the interesting class of weakly coupled, geometrically frustrated isosceles-triangular Ising antiferromagnets.

  16. X-ray vector radiography imaging for biomedical applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potdevin, Guillaume; Malecki, Andreas; Biernath, Thomas

    The non-invasive estimation of fracture risk in osteoporosis remains a challenge in clinical routine and is mainly based on an assessment of bone density by dual-energy X-ray absorptiometry (DXA), although bone micro-architecture is known to play an important role in bone fragility. Here we report on 'X-ray vector radiography' measurements able to provide direct bone microstructure diagnostics on human bone samples, which we compare qualitatively and quantitatively with numerical analysis of high-resolution radiographs.

  17. Effects of radiobiological uncertainty on vehicle and habitat shield design for missions to the moon and Mars

    NASA Technical Reports Server (NTRS)

    Wilson, John W.; Nealy, John E.; Schimmerling, Walter; Cucinotta, Francis A.; Wood, James S.

    1993-01-01

    Some consequences of uncertainties in radiobiological risk due to galactic cosmic ray (GCR) exposure are analyzed for their effect on engineering designs for the first lunar outpost and a mission to explore Mars. This report presents the plausible effect of biological uncertainties, the design changes necessary to reduce the uncertainties to acceptable levels for a safe mission, and an evaluation of the mission redesign cost. Estimates of the amount of shield mass required to compensate for radiobiological uncertainty are given for a simplified vehicle and habitat. The additional amount of shield mass required to provide a safety factor for uncertainty compensation is calculated from the expected response to GCR exposure. The amount of shield mass greatly increases in the estimated range of biological uncertainty, thus, escalating the estimated cost of the mission. The estimates are used as a quantitative example for the cost-effectiveness of research in radiation biophysics and radiation physics.

  18. Corruption costs lives: a cross-country study using an IV approach.

    PubMed

    Lio, Mon-Chi; Lee, Ming-Hsuan

    2016-04-01

    This study quantitatively estimates the effects of corruption on five major health indicators by using recent cross-country panel data covering 119 countries for the period of 2005-2011. The corruption indicators provided by the World Bank and Transparency International are used, and both the two-way fixed effect and the two-stage least squares approaches are employed for our estimation. The estimation results show that, in general, corruption is negatively associated with a country's health outcomes. A lower level of corruption or a better control of corruption in a country can lead to longer life expectancy, a lower infant mortality rate and a lower under-five mortality rate for citizens. However, our estimation finds no significant association between corruption and individual diseases including human immunodeficiency virus prevalence and tuberculosis incidence. The findings suggest that corruption reduction itself is an effective method to promote health. Copyright © 2015 John Wiley & Sons, Ltd.

  19. An optimized framework for quantitative magnetization transfer imaging of the cervical spinal cord in vivo.

    PubMed

    Battiston, Marco; Grussu, Francesco; Ianus, Andrada; Schneider, Torben; Prados, Ferran; Fairney, James; Ourselin, Sebastien; Alexander, Daniel C; Cercignani, Mara; Gandini Wheeler-Kingshott, Claudia A M; Samson, Rebecca S

    2018-05-01

    To develop a framework to fully characterize quantitative magnetization transfer (qMT) indices in the human cervical cord in vivo within a clinically feasible time. A dedicated spinal cord imaging protocol for quantitative magnetization transfer was developed using a reduced field-of-view approach with an echo planar imaging (EPI) readout. Sequence parameters were optimized based on the Cramér-Rao lower bound. Quantitative model parameters (i.e., bound pool fraction, free and bound pool transverse relaxation times [T2F, T2B], and forward exchange rate [kFB]) were estimated by implementing a numerical model capable of dealing with the novelties of the sequence adopted. The framework was tested on five healthy subjects. Cramér-Rao lower bound minimization produces optimal sampling schemes without requiring the establishment of a steady-state MT effect. The proposed framework allows quantitative voxel-wise estimation of model parameters at the resolution typically used for spinal cord imaging (i.e., 0.75 × 0.75 × 5 mm³), with a protocol duration of ∼35 min. Quantitative magnetization transfer parametric maps agree with literature values. Whole-cord mean values are: bound pool fraction = 0.11(±0.01), T2F = 46.5(±1.6) ms, T2B = 11.0(±0.2) µs, and kFB = 1.95(±0.06) Hz. Protocol optimization has a beneficial effect on reproducibility, especially for T2B and kFB. The framework developed enables robust characterization of spinal cord microstructure in vivo using qMT. Magn Reson Med 79:2576-2588, 2018. © 2017 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of the International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

  20. Spectral Feature Analysis for Quantitative Estimation of Cyanobacteria Chlorophyll-A

    NASA Astrophysics Data System (ADS)

    Lin, Yi; Ye, Zhanglin; Zhang, Yugan; Yu, Jie

    2016-06-01

    In recent years, lake eutrophication has caused large cyanobacteria blooms, which not only bring serious ecological damage but also restrict the sustainable development of regional economies. Chlorophyll-a is a very important environmental factor for monitoring water quality, especially lake eutrophication. Remote sensing has been widely used to estimate the concentration of chlorophyll-a with different kinds of vegetation indices and to monitor its distribution in lakes, rivers, or along coastlines. For each vegetation index, the quantitative estimation accuracy may differ between satellites because of discrepancies in spectral resolution and channel centers. The purpose of this paper is to analyze the spectral features of chlorophyll-a with hyperspectral data (651 bands in total) and to use the result to choose the optimal band combination for different satellites. The analysis method developed in this study could be useful for recognizing and monitoring cyanobacteria blooms automatically and accurately. In our experiment, the reflectance (from 350 nm to 1000 nm) of wild cyanobacteria at different concentrations (from 0 to 1362.11 ug/L) and the corresponding chlorophyll-a concentration were measured simultaneously. Two kinds of hyperspectral vegetation indices were applied in this study: the simple ratio (SR) and the narrow-band normalized difference vegetation index (NDVI), each of which can be formed from any two of the 651 narrow bands. Multivariate statistical analysis was then used to construct linear, power, and exponential models. After analyzing the correlation between chlorophyll-a and single-band reflectance, SR, and NDVI respectively, the optimal spectral index for quantitative estimation of cyanobacteria chlorophyll-a, as well as the corresponding central wavelengths and band widths, were extracted. Results show that, under the condition of water disturbance, SR and NDVI are both suitable for quantitative estimation of chlorophyll-a and are more effective than the traditional single-band model; the best regression models for SR and NDVI with chlorophyll-a are linear and power models, respectively. Without water disturbance, the single-band model works best. For the SR index, there are two optimal band combinations, comprising the infrared (700 nm-900 nm) and blue-green (450 nm-550 nm) ranges and the infrared and red (600 nm-650 nm) ranges, respectively, with band widths between 45 nm and 125 nm. For NDVI, the optimal band combination includes the ranges from 750 nm to 900 nm and from 700 nm to 750 nm, with band widths less than 30 nm. For the single-band model, the band center lies between 733 nm and 935 nm, and its width must not exceed the interval in which the band center is located. This study shows that, for SR and NDVI, the band centers and widths are crucial factors for quantitatively estimating chlorophyll-a. For a remote sensor, proper spectral channels could not only improve the accuracy of recognizing cyanobacteria blooms but also reduce the redundancy of hyperspectral data. These results provide a reference for designing suitable spectral channels of customized sensors for low-altitude cyanobacteria bloom monitoring. In other words, this study is also basic research toward a real-time remote sensing monitoring system with high temporal and spatial resolution.
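
    For illustration only (synthetic reflectances and concentrations, not the measured data set), a sketch of forming SR and narrow-band NDVI from two bands and fitting the linear and power regression models mentioned above:

        import numpy as np

        rng = np.random.default_rng(1)

        # Synthetic reflectance at two narrow bands and chlorophyll-a concentrations (ug/L).
        chl = np.linspace(5, 1300, 60)
        r_nir = 0.05 + 2e-4 * chl + rng.normal(0, 0.005, chl.size)   # e.g. ~750 nm band
        r_red = 0.04 - 1e-5 * chl + rng.normal(0, 0.002, chl.size)   # e.g. ~675 nm band
        r_red = np.clip(r_red, 1e-3, None)

        sr = r_nir / r_red
        ndvi = (r_nir - r_red) / (r_nir + r_red)

        # Linear model for SR: chl = a*SR + b
        a, b = np.polyfit(sr, chl, 1)

        # Power model for NDVI: chl = c * NDVI^d (fit in log space; NDVI must be > 0 here)
        mask = ndvi > 0
        d, log_c = np.polyfit(np.log(ndvi[mask]), np.log(chl[mask]), 1)
        print(f"SR linear fit:  chl = {a:.1f}*SR + {b:.1f}")
        print(f"NDVI power fit: chl = {np.exp(log_c):.1f}*NDVI^{d:.2f}")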

  1. CarbonTracker-Lagrange: A Framework for Greenhouse Gas Flux Estimation at Regional to Continental Scales

    NASA Astrophysics Data System (ADS)

    Andrews, A. E.

    2016-12-01

    CarbonTracker-Lagrange (CT-L) is a flexible modeling framework developed to take advantage of newly available atmospheric data for CO2 and other long-lived gases such as CH4 and N2O. The North American atmospheric CO2 measurement network has grown from three sites in 2004 to >100 sites in 2015. The US network includes tall tower, mountaintop, surface, and aircraft sites in the NOAA Global Greenhouse Gas Reference Network along with sites maintained by university, government and private sector researchers. The Canadian network is operated by Environment and Climate Change Canada. This unprecedented dataset can provide spatially and temporally resolved CO2 emissions and uptake flux estimates and quantitative information about drivers of variability, such as drought and temperature. CT-L is a platform for systematic comparison of data assimilation techniques and evaluation of assumed prior, model and observation errors. A novel feature of CT-L is the optimization of boundary values along with surface fluxes, leveraging vertically resolved data available from NOAA's aircraft sampling program. CT-L uses observation footprints (influence functions) from the Weather Research and Forecasting/Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) modeling system to relate atmospheric measurements to upwind fluxes and boundary values. Footprints are pre-computed and the optimization algorithms are efficient, so many variants of the calculation can be performed. Fluxes are adjusted using Bayesian or Geostatistical methods to provide optimal agreement with observations. Satellite measurements of CO2 and CH4 from GOSAT are available starting in July 2009 and from OCO-2 since September 2014. With support from the NASA Carbon Monitoring System, we are developing flux estimation strategies that use remote sensing and in situ data together, including geostatistical inversions using satellite retrievals of solar-induced chlorophyll fluorescence. CT-L enables quantitative investigation of what new measurements would best complement the existing carbon observing system. We are also working to implement multi-species inversions for CO2 flux estimation using CO2 data along with CO, δ13CO2, COS and radiocarbon observations and for CH4 flux estimation using data for various hydrocarbons.

  2. Spatial Distribution of Hydrologic Ecosystem Service Estimates: Comparing Two Models

    NASA Astrophysics Data System (ADS)

    Dennedy-Frank, P. J.; Ghile, Y.; Gorelick, S.; Logsdon, R. A.; Chaubey, I.; Ziv, G.

    2014-12-01

    We compare estimates of the spatial distribution of water quantity provided (annual water yield) from two ecohydrologic models: the widely-used Soil and Water Assessment Tool (SWAT) and the much simpler water models from the Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) toolbox. These two models differ significantly in terms of complexity, timescale of operation, effort, and data required for calibration, and so are often used in different management contexts. We compare two study sites in the US: the Wildcat Creek Watershed (2083 km2) in Indiana, a largely agricultural watershed in a cold aseasonal climate, and the Upper Upatoi Creek Watershed (876 km2) in Georgia, a mostly forested watershed in a temperate aseasonal climate. We evaluate (1) quantitative estimates of water yield to explore how well each model represents this process, and (2) ranked estimates of water yield to indicate how useful the models are for management purposes where other social and financial factors may play significant roles. The SWAT and InVEST models provide very similar estimates of the water yield of individual subbasins in the Wildcat Creek Watershed (Pearson r = 0.92, slope = 0.89), and a similar ranking of the relative water yield of those subbasins (Spearman r = 0.86). However, the two models provide relatively different estimates of the water yield of individual subbasins in the Upper Upatoi Watershed (Pearson r = 0.25, slope = 0.14), and very different ranking of the relative water yield of those subbasins (Spearman r = -0.10). The Upper Upatoi watershed has a significant baseflow contribution due to its sandy, well-drained soils. InVEST's simple seasonality terms, which assume no change in storage over the time of the model run, may not accurately estimate water yield processes when baseflow provides such a strong contribution. Our results suggest that InVEST users take care in situations where storage changes are significant.
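
    A small sketch of the comparison metrics quoted above, computed for hypothetical subbasin water yields; pearsonr and spearmanr from SciPy reproduce the quantitative and rank-based comparisons:

        import numpy as np
        from scipy.stats import pearsonr, spearmanr

        # Hypothetical annual water yield (mm) per subbasin from the two models.
        swat   = np.array([310., 295., 410., 350., 275., 500., 330., 290.])
        invest = np.array([300., 280., 430., 340., 265., 520., 345., 300.])

        r_p, _ = pearsonr(swat, invest)          # agreement of the quantitative estimates
        r_s, _ = spearmanr(swat, invest)         # agreement of the subbasin rankings
        slope = np.polyfit(swat, invest, 1)[0]   # slope of InVEST vs. SWAT
        print(f"Pearson r = {r_p:.2f}, slope = {slope:.2f}, Spearman r = {r_s:.2f}")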

  3. Uncertainty in Population Estimates for Endangered Animals and Improving the Recovery Process

    PubMed Central

    Haines, Aaron M.; Zak, Matthew; Hammond, Katie; Scott, J. Michael; Goble, Dale D.; Rachlow, Janet L.

    2013-01-01

    Simple Summary The objective of our study was to evaluate the mention of uncertainty (i.e., variance) associated with population size estimates within U.S. recovery plans for endangered animals. To do this we reviewed all finalized recovery plans for listed terrestrial vertebrate species. We found that more recent recovery plans reported more estimates of population size and uncertainty. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty. We recommend that updated recovery plans combine uncertainty of population size estimates with a minimum detectable difference to aid in successful recovery. Abstract United States recovery plans contain biological information for a species listed under the Endangered Species Act and specify recovery criteria to provide basis for species recovery. The objective of our study was to evaluate whether recovery plans provide uncertainty (e.g., variance) with estimates of population size. We reviewed all finalized recovery plans for listed terrestrial vertebrate species to record the following data: (1) if a current population size was given, (2) if a measure of uncertainty or variance was associated with current estimates of population size and (3) if population size was stipulated for recovery. We found that 59% of completed recovery plans specified a current population size, 14.5% specified a variance for the current population size estimate and 43% specified population size as a recovery criterion. More recent recovery plans reported more estimates of current population size, uncertainty and population size as a recovery criterion. Also, bird and mammal recovery plans reported more estimates of population size and uncertainty compared to reptiles and amphibians. We suggest the use of calculating minimum detectable differences to improve confidence when delisting endangered animals and we identified incentives for individuals to get involved in recovery planning to improve access to quantitative data. PMID:26479531

  4. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    PubMed

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
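
    A sketch of the basic usage pattern from the HDDM documentation (the file name and the 'stim' condition column are hypothetical; the sampler settings are illustrative):

        import hddm

        # Load response-time data (columns: subj_idx, rt, response, plus any conditions).
        data = hddm.load_csv('mydata.csv')                   # hypothetical file name

        # Hierarchical drift-diffusion model: subject parameters drawn from group priors.
        model = hddm.HDDM(data, depends_on={'v': 'stim'})    # let drift rate vary by condition
        model.sample(2000, burn=200)                         # MCMC sampling of the posterior
        model.print_stats()                                  # posterior summaries incl. uncertainty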

  5. The value of health care information exchange and interoperability.

    PubMed

    Walker, Jan; Pan, Eric; Johnston, Douglas; Adler-Milstein, Julia; Bates, David W; Middleton, Blackford

    2005-01-01

    In this paper we assess the value of electronic health care information exchange and interoperability (HIEI) between providers (hospitals and medical group practices) and independent laboratories, radiology centers, pharmacies, payers, public health departments, and other providers. We have created an HIEI taxonomy and combined published evidence with expert opinion in a cost-benefit model. Fully standardized HIEI could yield a net value of $77.8 billion per year once fully implemented. Nonstandardized HIEI offers smaller positive financial returns. The clinical impact of HIEI for which quantitative estimates cannot yet be made would likely add further value. A compelling business case exists for national implementation of fully standardized HIEI.

  6. Application of a multicompartment dynamical model to multimodal optical imaging for investigating individual cerebrovascular properties

    NASA Astrophysics Data System (ADS)

    Desjardins, Michèle; Gagnon, Louis; Gauthier, Claudine; Hoge, Rick D.; Dehaes, Mathieu; Desjardins-Crépeau, Laurence; Bherer, Louis; Lesage, Frédéric

    2009-02-01

    Biophysical models of hemodynamics provide a tool for quantitative multimodal brain imaging by allowing a deeper understanding of the interplay between neural activity and the blood oxygenation, volume, and flow responses to stimuli. Multicompartment dynamical models that describe the dynamics and interactions of the vascular and metabolic components of evoked hemodynamic responses have been developed in the literature. In this work, multimodal data from near-infrared spectroscopy (NIRS) and diffuse correlation flowmetry (DCF) are used to estimate total baseline hemoglobin concentration (HBT0) in 7 adult subjects. The model estimate is validated, and the partial volume effect investigated, by comparison with time-resolved spectroscopy (TRS) measures of absolute HBT0. Simultaneous NIRS and DCF measurements during hypercapnia are then performed, but are found to be poorly reproducible. The results raise questions about the feasibility of an all-optical, model-based estimation of individual vascular properties.

  7. Tug-of-war lacunarity—A novel approach for estimating lacunarity

    NASA Astrophysics Data System (ADS)

    Reiss, Martin A.; Lemmerer, Birgit; Hanslmeier, Arnold; Ahammer, Helmut

    2016-11-01

    Modern instrumentation provides us with massive repositories of digital images that will likely only grow in the future. It has therefore become increasingly important to automate the analysis of digital images, e.g., with methods from pattern recognition. These methods aim to characterize the visual appearance of captured textures with quantitative measures. Lacunarity is a useful multi-scale measure of texture heterogeneity, but it demands high computational effort. Here we investigate a novel approach based on the tug-of-war algorithm, which estimates lacunarity in a single pass over the image. We computed lacunarity for theoretical and real-world sample images and found that the investigated approach is able to estimate lacunarity with low uncertainty. We conclude that the proposed method combines low computational effort with high accuracy, and that its application may have utility in the analysis of high-resolution images.
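
    A hedged sketch of the underlying idea (not the authors' implementation): lacunarity at scale r can be written as N·ΣM²/(ΣM)² over box masses M, and a tug-of-war (AMS-style) sketch estimates ΣM² in a single pass by accumulating randomly signed sums. Non-overlapping boxes and a precomputed sign table are used below for simplicity; a truly streaming version would hash box indices instead:

        import numpy as np

        rng = np.random.default_rng(2)
        img = rng.random((256, 256))               # placeholder grey-scale image
        r = 8                                      # box size (non-overlapping, for simplicity)

        # Box masses (exact, for reference): sum of pixel values per r x r box.
        boxes = img.reshape(256 // r, r, 256 // r, r).sum(axis=(1, 3)).ravel()
        lac_exact = boxes.size * np.sum(boxes**2) / np.sum(boxes)**2

        # Tug-of-war estimate of sum(M^2) in one pass over the pixels:
        # each box gets a random sign; S = sum_b sign_b*M_b and E[S^2] = sum_b M_b^2.
        n_sketches = 64
        signs = rng.choice([-1.0, 1.0], size=(n_sketches, boxes.size))
        S = np.zeros(n_sketches)
        total = 0.0
        for (i, j), v in np.ndenumerate(img):      # single pass over pixels
            b = (i // r) * (256 // r) + (j // r)   # index of the box containing this pixel
            S += signs[:, b] * v
            total += v
        second_moment = np.mean(S**2)              # average over independent sketches
        lac_sketch = boxes.size * second_moment / total**2

        print(f"exact lacunarity: {lac_exact:.4f}")
        print(f"sketch estimate:  {lac_sketch:.4f}")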

  8. Estimation of invertebrate production from patterns of fish predation in western Lake Superior

    USGS Publications Warehouse

    Johnson, Timothy B.; Mason, Doran M.; Bronte, Charles R.; Kitchell, James F.

    1998-01-01

    We used bioenergetic models for lake herring Coregonus artedi, bloater Coregonus hoyi, and rainbow smelt Osmerus mordax to estimate consumption of zooplankton, Mysis, and Diporeia in western Lake Superior for selected years between 1978 and 1995. Total invertebrate biomass consumed yearly ranged from 2.5 to 38 g/m2 with nearly 40% consumed between August and October in all years. Copepod zooplankton represented the largest proportion of biomass collectively consumed by the three species (81%), although rainbow smelt consumed almost twice as much Mysis as zooplankton. Growth efficiency was highest for rainbow smelt (3.84–16.64%) and lower for the coregonids (1.91–12.26%). In the absence of quantitative secondary production values, we suggest our estimates of predatory demand provide a conservative range of the minimum invertebrate production in western Lake Superior during the past 20 years.

  9. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
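
    A minimal sketch of the general approach (synthetic depth data; the candidate distribution set and the use of AIC below are illustrative choices, not necessarily those of the paper, which also provides its own R code):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        depth = rng.gamma(shape=4.0, scale=0.15, size=200)    # synthetic water depths (m)

        candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "weibull": stats.weibull_min}
        aic = {}
        for name, dist in candidates.items():
            params = dist.fit(depth, floc=0)                  # MLE with location fixed at 0
            loglik = np.sum(dist.logpdf(depth, *params))
            k = len(params) - 1                               # free parameters (loc was fixed)
            aic[name] = 2 * k - 2 * loglik

        best = min(aic, key=aic.get)
        print({k: round(v, 1) for k, v in aic.items()}, "-> selected:", best)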

  10. Direct Estimation of Optical Parameters From Photoacoustic Time Series in Quantitative Photoacoustic Tomography.

    PubMed

    Pulkkinen, Aki; Cox, Ben T; Arridge, Simon R; Goh, Hwan; Kaipio, Jari P; Tarvainen, Tanja

    2016-11-01

    Estimation of the optical absorption and scattering of a target is an inverse problem associated with quantitative photoacoustic tomography. Conventionally, the problem is expressed in two stages. First, images of the initial pressure distribution created by absorption of a light pulse are formed based on acoustic boundary measurements. Then, the optical properties are determined based on these photoacoustic images. The optical stage of the inverse problem can thus suffer from, for example, artefacts caused by the acoustic stage. These could be caused by imperfections in the acoustic measurement setting, an example of which is a limited-view acoustic measurement geometry. In this work, the forward model of quantitative photoacoustic tomography is treated as a coupled acoustic and optical model, and the inverse problem is solved using a Bayesian approach. The spatial distribution of the optical properties of the imaged target is estimated directly from the photoacoustic time series in varying acoustic detection and optical illumination configurations. It is numerically demonstrated that estimation of the optical properties of the imaged target is feasible in a limited-view acoustic detection setting.

  11. Toward quantitative estimation of material properties with dynamic mode atomic force microscopy: a comparative study.

    PubMed

    Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti

    2017-08-11

    In this article, we explore methods that enable estimation of material properties with dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising a flexure probe interacting with the sample, as an equivalent cantilever system, and compares a steady-state-analysis-based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state-based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement, but slower than the recursive technique. The parameters of the equivalent system are used to interpret the storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided for quantitative estimation of material properties.

  12. Benchmarking on the evaluation of major accident-related risk assessment.

    PubMed

    Fabbri, Luciano; Contini, Sergio

    2009-03-15

    This paper summarises the main results of a European project BEQUAR (Benchmarking Exercise in Quantitative Area Risk Assessment in Central and Eastern European Countries). This project is among the first attempts to explore how independent evaluations of the same risk study associated with a certain chemical establishment could differ from each other and the consequent effects on the resulting area risk estimate. The exercise specifically aimed at exploring the manner and degree to which independent experts may disagree on the interpretation of quantitative risk assessments for the same entity. The project first compared the results of a number of independent expert evaluations of a quantitative risk assessment study for the same reference chemical establishment. This effort was then followed by a study of the impact of the different interpretations on the estimate of the overall risk on the area concerned. In order to improve the inter-comparability of the results, this exercise was conducted using a single tool for area risk assessment based on the ARIPAR methodology. The results of this study are expected to contribute to an improved understanding of the inspection criteria and practices used by the different national authorities responsible for the implementation of the Seveso II Directive in their countries. The activity was funded under the Enlargement and Integration Action of the Joint Research Centre (JRC), that aims at providing scientific and technological support for promoting integration of the New Member States and assisting the Candidate Countries on their way towards accession to the European Union.

  13. Investigation of BOLD fMRI Resonance Frequency Shifts and Quantitative Susceptibility Changes at 7 T

    PubMed Central

    Bianciardi, Marta; van Gelderen, Peter; Duyn, Jeff H.

    2013-01-01

    Although blood oxygenation level dependent (BOLD) functional magnetic resonance imaging (fMRI) experiments of brain activity generally rely on the magnitude of the signal, they also provide frequency information that can be derived from the phase of the signal. However, because of confounding effects of instrumental and physiological origin, BOLD related frequency information is difficult to extract and therefore rarely used. Here, we explored the use of high field (7 T) and dedicated signal processing methods to extract frequency information and use it to quantify and interpret blood oxygenation and blood volume changes. We found that optimized preprocessing improves detection of task-evoked and spontaneous changes in phase signals and resonance frequency shifts over large areas of the cortex with sensitivity comparable to that of magnitude signals. Moreover, our results suggest the feasibility of mapping BOLD quantitative susceptibility changes in at least part of the activated area and its largest draining veins. Comparison with magnitude data suggests that the observed susceptibility changes originate from neuronal activity through induced blood volume and oxygenation changes in pial and intracortical veins. Further, from frequency shifts and susceptibility values, we estimated that, relative to baseline, the fractional oxygen saturation in large vessels increased by 0.02–0.05 during stimulation, which is consistent with previously published estimates. Together, these findings demonstrate that valuable information can be derived from fMRI imaging of BOLD frequency shifts and quantitative susceptibility changes. PMID:23897623

  14. High temporal resolution dynamic contrast-enhanced MRI using compressed sensing-combined sequence in quantitative renal perfusion measurement.

    PubMed

    Chen, Bin; Zhao, Kai; Li, Bo; Cai, Wenchao; Wang, Xiaoying; Zhang, Jue; Fang, Jing

    2015-10-01

    To demonstrate the feasibility of improved temporal resolution using a compressed sensing (CS)-combined imaging sequence in dynamic contrast-enhanced MRI (DCE-MRI) of the kidney, and to investigate its quantitative effects on renal perfusion measurements. Ten rabbits were included in the accelerated scans with a CS-combined 3D pulse sequence. To evaluate image quality, the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were compared between the proposed CS strategy and the conventional full-sampling method. Moreover, renal perfusion was estimated using the separable compartmental model in both CS simulation and realistic CS acquisitions. The CS method provided DCE-MRI images with improved temporal resolution and acceptable image contrast, while presenting significantly higher SNR than the fully sampled images (p<.01) at 2-, 3- and 4-X acceleration. In quantitative measurements, renal perfusion results were in good agreement with the fully sampled ones (concordance correlation coefficient = 0.95, 0.91, 0.88) at 2-, 3- and 4-X acceleration in CS simulation. Moreover, in realistic acquisitions, the perfusion estimated by the separable compartmental model exhibited no significant differences (p>.05) between each CS-accelerated acquisition and the full-sampling method. The CS-combined 3D sequence could improve the temporal resolution for DCE-MRI of the kidney while yielding diagnostically acceptable image quality, and it could provide effective measurements of renal perfusion. Copyright © 2015 Elsevier Inc. All rights reserved.
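
    For reference, the concordance correlation coefficient quoted above is Lin's CCC; a small sketch with hypothetical perfusion values:

        import numpy as np

        def concordance_cc(x, y):
            """Lin's concordance correlation coefficient between two measurement series."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            mx, my = x.mean(), y.mean()
            vx, vy = x.var(), y.var()                 # population variances
            cov = ((x - mx) * (y - my)).mean()
            return 2 * cov / (vx + vy + (mx - my) ** 2)

        # Hypothetical perfusion values (mL/min/100 g): fully sampled vs. CS-accelerated.
        full = np.array([420., 390., 455., 401., 377., 433.])
        cs2x = np.array([415., 395., 448., 410., 370., 440.])
        print(f"CCC = {concordance_cc(full, cs2x):.3f}")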

  15. A method for estimating 2D Wrinkle Ridge Strain from application of fault displacement scaling to the Yakima Folds, Washington

    NASA Astrophysics Data System (ADS)

    Mège, Daniel; Reidel, Stephen P.

    The Yakima folds on the central Columbia Plateau are a succession of thrusted anticlines thought to be analogs of planetary wrinkle ridges. They provide a unique opportunity to understand wrinkle ridge structure. Field data and length-displacement scaling are used to demonstrate a method for estimating two-dimensional horizontal contractional strain at wrinkle ridges. Strain is given as a function of ridge length, and depends on other parameters that can be inferred from the Yakima folds and fault population displacement studies. Because ridge length can be readily obtained from orbital imagery, the method can be applied to any wrinkle ridge population, and helps constrain quantitative tectonic models on other planets.

  16. Heavy baryons in the large Nc limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albertus, C.; Ruiz Arriola, Enrique; Fernando, Ishara P.

    It is shown that in the large Nc limit heavy baryon masses can be estimated quantitatively in a 1/Nc expansion using the Hartree approximation. The results are compared with available lattice calculations for different values of the ratio between the square root of the string tension and the heavy quark mass, a ratio that is independent of Nc. Using a potential adjusted to agree with the one obtained in lattice QCD, a variational analysis of the ground-state spin-averaged baryon mass is performed using Gaussian Hartree wave functions. Relativistic corrections through the quark kinetic energy are included. Lastly, the results provide good estimates for the first sub-leading corrections in 1/Nc.

  17. Heavy baryons in the large Nc limit

    DOE PAGES

    Albertus, C.; Ruiz Arriola, Enrique; Fernando, Ishara P.; ...

    2015-09-16

    It is shown that in the large Nc limit heavy baryon masses can be estimated quantitatively in a 1/Nc expansion using the Hartree approximation. The results are compared with available lattice calculations for different values of the ratio between the square root of the string tension and the heavy quark mass, a ratio that is independent of Nc. Using a potential adjusted to agree with the one obtained in lattice QCD, a variational analysis of the ground-state spin-averaged baryon mass is performed using Gaussian Hartree wave functions. Relativistic corrections through the quark kinetic energy are included. Lastly, the results provide good estimates for the first sub-leading corrections in 1/Nc.

  18. Bond energy prediction of Curie temperature of lithium niobate crystals.

    PubMed

    Zhang, Xu; Xue, Dongfeng

    2007-03-15

    A general expression for the Curie temperature (Tc) and spontaneous polarization (Ps) of lithium niobate (LN) crystals is proposed energetically by employing the viewpoint of the bond energy of the constituent chemical bonds within the LN crystallographic frame. The calculated Tc values of various pure and doped LN crystals are in good agreement with reported data. Ps values of these LN crystals can also be quantitatively estimated in this work. It is found that the Li site is a sensitive lattice position that dominates the ferroelectricity of LN crystals. This novel method provides a good understanding of the ferroelectric behavior of LN crystals and may be applicable to the estimation of the ferroelectric behavior of LN-type solids.

  19. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  20. A Web-based Tool to Aid the Identification of Chemicals Potentially Posing a Health Risk through Percutaneous Exposure.

    PubMed

    Gorman Ng, Melanie; Milon, Antoine; Vernez, David; Lavoué, Jérôme

    2016-04-01

    Occupational hygiene practitioners typically assess the risk posed by occupational exposure by comparing exposure measurements to regulatory occupational exposure limits (OELs). In most jurisdictions, OELs are only available for exposure by the inhalation pathway. Skin notations are used to indicate substances for which dermal exposure may lead to health effects. However, these notations are either present or absent and provide no indication of acceptable levels of exposure. Furthermore, the methodology and framework for assigning skin notation differ widely across jurisdictions resulting in inconsistencies in the substances that carry notations. The UPERCUT tool was developed in response to these limitations. It helps occupational health stakeholders to assess the hazard associated with dermal exposure to chemicals. UPERCUT integrates dermal quantitative structure-activity relationships (QSARs) and toxicological data to provide users with a skin hazard index called the dermal hazard ratio (DHR) for the substance and scenario of interest. The DHR is the ratio between the estimated 'received' dose and the 'acceptable' dose. The 'received' dose is estimated using physico-chemical data and information on the exposure scenario provided by the user (body parts exposure and exposure duration), and the 'acceptable' dose is estimated using inhalation OELs and toxicological data. The uncertainty surrounding the DHR is estimated with Monte Carlo simulation. Additional information on the selected substances includes intrinsic skin permeation potential of the substance and the existence of skin notations. UPERCUT is the only available tool that estimates the absorbed dose and compares this to an acceptable dose. In the absence of dermal OELs it provides a systematic and simple approach for screening dermal exposure scenarios for 1686 substances. © The Author 2015. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.
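
    A hedged sketch of the dermal hazard ratio idea described above (the fluxes, exposed area, durations, and OEL-derived acceptable dose below are invented placeholders; UPERCUT's actual dose models and parameters differ):

        import numpy as np

        rng = np.random.default_rng(4)
        n = 10_000

        # Hypothetical inputs for one exposure scenario (all values illustrative).
        flux = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=n)   # dermal flux, mg/cm^2/h
        area_cm2 = 840.0                                            # exposed skin (both hands)
        duration_h = rng.uniform(1.0, 4.0, size=n)                  # exposure duration
        received_mg = flux * area_cm2 * duration_h

        # "Acceptable" systemic dose derived from an inhalation OEL (illustrative numbers):
        # OEL [mg/m^3] x breathing volume [m^3/shift] x inhalation absorption fraction.
        acceptable_mg = 10.0 * 10.0 * 1.0

        dhr = received_mg / acceptable_mg
        print(f"median DHR = {np.median(dhr):.2f}, "
              f"90% interval = [{np.percentile(dhr, 5):.2f}, {np.percentile(dhr, 95):.2f}]")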

  1. Using confidence intervals to evaluate the focus alignment of spectrograph detector arrays.

    PubMed

    Sawyer, Travis W; Hawkins, Kyle S; Damento, Michael

    2017-06-20

    High-resolution spectrographs extract detailed spectral information of a sample and are frequently used in astronomy, laser-induced breakdown spectroscopy, and Raman spectroscopy. These instruments employ dispersive elements such as prisms and diffraction gratings to spatially separate different wavelengths of light, which are then detected by a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) detector array. Precise alignment along the optical axis (focus position) of the detector array is critical to maximize the instrumental resolution; however, traditional approaches of scanning the detector through focus lack a quantitative measure of precision, limiting the repeatability and relying on one's experience. Here we propose a method to evaluate the focus alignment of spectrograph detector arrays by establishing confidence intervals to measure the alignment precision. We show that propagation of uncertainty can be used to estimate the variance in an alignment, thus providing a quantitative and repeatable means to evaluate the precision and confidence of an alignment. We test the approach by aligning the detector array of a prototype miniature echelle spectrograph. The results indicate that the procedure effectively quantifies alignment precision, enabling one to objectively determine when an alignment has reached an acceptable level. This quantitative approach also provides a foundation for further optimization, including automated alignment. Furthermore, the procedure introduced here can be extended to other alignment techniques that rely on numerically fitting data to a model, providing a general framework for evaluating the precision of alignment methods.
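
    A minimal sketch of the proposed idea under the assumption that the through-focus metric is fit with a parabola (the scan data below are hypothetical); the fit covariance is propagated to the vertex position to give a confidence interval on best focus:

        import numpy as np

        # Hypothetical through-focus scan: detector position (mm) vs. line-width metric (px).
        z = np.array([-0.30, -0.20, -0.10, 0.00, 0.10, 0.20, 0.30])
        w = np.array([ 3.10,  2.55,  2.21, 2.05, 2.18, 2.60,  3.05])

        # Fit w(z) = a z^2 + b z + c; best focus is the vertex z0 = -b / (2a).
        (a, b, c), cov = np.polyfit(z, w, 2, cov=True)
        z0 = -b / (2 * a)

        # Propagate the fit covariance to the vertex position:
        # dz0/da = b / (2 a^2), dz0/db = -1 / (2 a), dz0/dc = 0.
        grad = np.array([b / (2 * a**2), -1 / (2 * a), 0.0])
        z0_sigma = np.sqrt(grad @ cov @ grad)
        print(f"best focus {z0:+.4f} mm, 95% CI +/- {1.96 * z0_sigma:.4f} mm")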

  2. Infectious reactivation of cytomegalovirus explaining age- and sex-specific patterns of seroprevalence.

    PubMed

    van Boven, Michiel; van de Kassteele, Jan; Korndewal, Marjolein J; van Dorp, Christiaan H; Kretzschmar, Mirjam; van der Klis, Fiona; de Melker, Hester E; Vossen, Ann C; van Baarle, Debbie

    2017-09-01

    Human cytomegalovirus (CMV) is a herpes virus with poorly understood transmission dynamics. Person-to-person transmission is thought to occur primarily through transfer of saliva or urine, but no quantitative estimates are available for the contribution of different infection routes. Using data from a large population-based serological study (n = 5,179), we provide quantitative estimates of key epidemiological parameters, including the transmissibility of primary infection, reactivation, and re-infection. Mixture models are fitted to age- and sex-specific antibody response data from the Netherlands, showing that the data can be described by a model with three distributions of antibody measurements, i.e. uninfected, infected, and infected with increased antibody concentration. Estimates of seroprevalence increase gradually with age, such that at 80 years 73% (95%CrI: 64%-78%) of females and 62% (95%CrI: 55%-68%) of males are infected, while 57% (95%CrI: 47%-67%) of females and 37% (95%CrI: 28%-46%) of males have increased antibody concentration. Merging the statistical analyses with transmission models, we find that models with infectious reactivation (i.e. reactivation that can lead to the virus being transmitted to a novel host) fit the data significantly better than models without infectious reactivation. Estimated reactivation rates increase from low values in children to 2%-4% per year in women older than 50 years. The results advance a hypothesis in which transmission from adults after infectious reactivation is a key driver of transmission. We discuss the implications for control strategies aimed at reducing CMV infection in vulnerable groups.
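
    A sketch of the three-component mixture idea described above, using synthetic log-antibody concentrations and scikit-learn (the component means, spreads, and proportions are invented):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(5)

        # Synthetic log-antibody concentrations: uninfected, infected, and infected
        # with increased antibody concentration (all parameters illustrative).
        x = np.concatenate([rng.normal(-1.0, 0.4, 400),
                            rng.normal( 1.0, 0.5, 350),
                            rng.normal( 2.2, 0.4, 250)]).reshape(-1, 1)

        gm = GaussianMixture(n_components=3, random_state=0).fit(x)
        order = np.argsort(gm.means_.ravel())
        weights = gm.weights_[order]
        print("component weights (uninfected, infected, infected+boosted):",
              np.round(weights, 2))
        print("seroprevalence estimate:", round(weights[1] + weights[2], 2))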

  3. A novel approach to non-biased systematic random sampling: a stereologic estimate of Purkinje cells in the human cerebellum.

    PubMed

    Agashiwala, Rajiv M; Louis, Elan D; Hof, Patrick R; Perl, Daniel P

    2008-10-21

    Non-biased systematic sampling using the principles of stereology provides accurate quantitative estimates of objects within neuroanatomic structures. However, the basic principles of stereology are not optimally suited for counting objects that selectively exist within a limited but complex and convoluted portion of the sample, such as occurs when counting cerebellar Purkinje cells. In an effort to quantify Purkinje cells in association with certain neurodegenerative disorders, we developed a new method for stereologic sampling of the cerebellar cortex, involving calculating the volume of the cerebellar tissues, identifying and isolating the Purkinje cell layer and using this information to extrapolate non-biased systematic sampling data to estimate the total number of Purkinje cells in the tissues. Using this approach, we counted Purkinje cells in the right cerebella of four human male control specimens, aged 41, 67, 70 and 84 years, and estimated the total Purkinje cell number for the four entire cerebella to be 27.03, 19.74, 20.44 and 22.03 million cells, respectively. The precision of the method is seen when comparing the density of the cells within the tissue: 266,274, 173,166, 167,603 and 183,575 cells/cm3, respectively. Prior literature documents Purkinje cell counts ranging from 14.8 to 30.5 million cells. These data demonstrate the accuracy of our approach. Our novel approach, which offers an improvement over previous methodologies, is of value for quantitative work of this nature. This approach could be applied to morphometric studies of other similarly complex tissues as well.
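
    The extrapolation step is essentially number density times reference volume (N = N_V × V_ref); using the densities and totals quoted in the abstract, the implied tissue volumes are mutually consistent:

        densities = [266274, 173166, 167603, 183575]      # cells/cm^3 (from the abstract)
        totals_million = [27.03, 19.74, 20.44, 22.03]     # total Purkinje cells (millions)

        for d, t in zip(densities, totals_million):
            volume_cm3 = t * 1e6 / d                      # implied reference tissue volume
            print(f"density {d} cells/cm^3 -> volume ~ {volume_cm3:.0f} cm^3")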

  4. A novel approach to non-biased systematic random sampling: A stereologic estimate of Purkinje cells in the human cerebellum

    PubMed Central

    Agashiwala, Rajiv M.; Louis, Elan D.; Hof, Patrick R.; Perl, Daniel P.

    2010-01-01

    Non-biased systematic sampling using the principles of stereology provides accurate quantitative estimates of objects within neuroanatomic structures. However, the basic principles of stereology are not optimally suited for counting objects that selectively exist within a limited but complex and convoluted portion of the sample, such as occurs when counting cerebellar Purkinje cells. In an effort to quantify Purkinje cells in association with certain neurodegenerative disorders, we developed a new method for stereologic sampling of the cerebellar cortex, involving calculating the volume of the cerebellar tissues, identifying and isolating the Purkinje cell layer and using this information to extrapolate non-biased systematic sampling data to estimate the total number of Purkinje cells in the tissues. Using this approach, we counted Purkinje cells in the right cerebella of four human male control specimens, aged 41, 67, 70 and 84 years, and estimated the total Purkinje cell number for the four entire cerebella to be 27.03, 19.74, 20.44 and 22.03 million cells, respectively. The precision of the method is seen when comparing the density of the cells within the tissue: 266,274, 173,166, 167,603 and 183,575 cells/cm3, respectively. Prior literature documents Purkinje cell counts ranging from 14.8 to 30.5 million cells. These data demonstrate the accuracy of our approach. Our novel approach, which offers an improvement over previous methodologies, is of value for quantitative work of this nature. This approach could be applied to morphometric studies of other similarly complex tissues as well. PMID:18725208

  5. Good practices for quantitative bias analysis.

    PubMed

    Lash, Timothy L; Fox, Matthew P; MacLehose, Richard F; Maldonado, George; McCandless, Lawrence C; Greenland, Sander

    2014-12-01

    Quantitative bias analysis serves several objectives in epidemiological research. First, it provides a quantitative estimate of the direction, magnitude and uncertainty arising from systematic errors. Second, the acts of identifying sources of systematic error, writing down models to quantify them, assigning values to the bias parameters and interpreting the results combat the human tendency towards overconfidence in research results, syntheses and critiques and the inferences that rest upon them. Finally, by suggesting aspects that dominate uncertainty in a particular research result or topic area, bias analysis can guide efficient allocation of sparse research resources. The fundamental methods of bias analyses have been known for decades, and there have been calls for more widespread use for nearly as long. There was a time when some believed that bias analyses were rarely undertaken because the methods were not widely known and because automated computing tools were not readily available to implement the methods. These shortcomings have been largely resolved. We must, therefore, contemplate other barriers to implementation. One possibility is that practitioners avoid the analyses because they lack confidence in the practice of bias analysis. The purpose of this paper is therefore to describe what we view as good practices for applying quantitative bias analysis to epidemiological data, directed towards those familiar with the methods. We focus on answering questions often posed to those of us who advocate incorporation of bias analysis methods into teaching and research. These include the following. When is bias analysis practical and productive? How does one select the biases that ought to be addressed? How does one select a method to model biases? How does one assign values to the parameters of a bias model? How does one present and interpret a bias analysis? We hope that our guide to good practices for conducting and presenting bias analyses will encourage more widespread use of bias analysis to estimate the potential magnitude and direction of biases, as well as the uncertainty in estimates potentially influenced by the biases. © The Author 2014; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
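
    As one concrete example of the kind of bias model discussed above (not taken from the paper), a probabilistic bias analysis for non-differential exposure misclassification, with hypothetical counts and priors on sensitivity and specificity:

        import numpy as np

        rng = np.random.default_rng(6)

        # Observed 2x2 table (exposed/unexposed x cases/controls), hypothetical counts.
        a_obs, b_obs = 120, 80        # exposed cases, unexposed cases
        c_obs, d_obs = 200, 300       # exposed controls, unexposed controls
        m_cases, m_controls = a_obs + b_obs, c_obs + d_obs

        # Draw sensitivity/specificity from prior distributions and back-correct counts:
        # observed exposed = Se*true_exposed + (1-Sp)*(total - true_exposed).
        se = rng.uniform(0.80, 0.95, 20_000)
        sp = rng.uniform(0.90, 0.99, 20_000)
        a_true = (a_obs - (1 - sp) * m_cases) / (se + sp - 1)
        c_true = (c_obs - (1 - sp) * m_controls) / (se + sp - 1)
        b_true, d_true = m_cases - a_true, m_controls - c_true

        ok = (a_true > 0) & (b_true > 0) & (c_true > 0) & (d_true > 0)
        or_corr = (a_true * d_true / (b_true * c_true))[ok]
        print(f"conventional OR = {a_obs * d_obs / (b_obs * c_obs):.2f}")
        print(f"bias-adjusted OR, median [2.5%, 97.5%]: {np.median(or_corr):.2f} "
              f"[{np.percentile(or_corr, 2.5):.2f}, {np.percentile(or_corr, 97.5):.2f}]")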

  6. Modeling Cape- and Ridge-Associated Marine Sand Deposits; A Focus on the U.S. Atlantic Continental Shelf

    USGS Publications Warehouse

    Bliss, James D.; Williams, S. Jeffress; Bolm, Karen S.

    2009-01-01

    Cape- and ridge-associated marine sand deposits, which accumulate on storm-dominated continental shelves that are undergoing Holocene marine transgression, are particularly notable in a segment of the U.S. Atlantic Continental Shelf that extends southward from the east tip of Long Island, N.Y., and eastward from Cape May at the south end of the New Jersey shoreline. These sand deposits commonly contain sand suitable for shore protection in the form of beach nourishment. Increasing demand for marine sand raises questions about both short- and long-term potential supply and the sustainability of beach nourishment with the prospects of accelerating sea-level rise and increasing storm activity. To address these important issues, quantitative assessments of the volume of marine sand resources are needed. Currently, the U.S. Geological Survey is undertaking these assessments through its national Marine Aggregates and Resources Program (URL http://woodshole.er.usgs.gov/project-pages/aggregates/). In this chapter, we present a hypothetical example of a quantitative assessment of cape- and ridge-associated marine sand deposits in the study area, using proven tools of mineral-resource assessment. Applying these tools requires new models that summarize essential data on the quantity and quality of these deposits. Two representative types of model are descriptive models, which consist of a narrative that allows for consistent recognition of cape- and ridge-associated marine sand deposits, and quantitative models, which consist of empirical statistical distributions that describe significant deposit characteristics, such as volume and grain-size distribution. Variables of the marine sand deposits considered for quantitative modeling in this study include area, thickness, mean grain size, grain sorting, volume, proportion of sand-dominated facies, and spatial density, of which spatial density is particularly helpful in estimating the number of undiscovered deposits within an assessment area. A Monte Carlo simulation that combines the volume of sand-dominated-facies models with estimates of the hypothetical probable number of undiscovered deposits provides a probabilistic approach to estimating marine sand resources within parts of the U.S. Atlantic Continental Shelf and other comparable marine shelves worldwide.
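
    The probabilistic combination described in the final sentence can be sketched as a simple Monte Carlo loop: draw a number of undiscovered deposits, draw a sand volume for each, and accumulate the total. The distribution families and parameter values below are purely illustrative assumptions, not the calibrated deposit models from the assessment.

      # Illustrative Monte Carlo aggregation of undiscovered marine sand resources:
      # draw a number of undiscovered deposits, then a volume for each, and sum.
      # Distribution choices and parameters are hypothetical placeholders.
      import numpy as np

      rng = np.random.default_rng(0)
      n_trials = 100_000
      totals = np.empty(n_trials)

      for i in range(n_trials):
          n_deposits = rng.poisson(lam=8)   # hypothetical number of undiscovered deposits
          # hypothetical lognormal volume model for the sand-dominated facies, in million m^3
          volumes = rng.lognormal(mean=np.log(20.0), sigma=0.8, size=n_deposits)
          totals[i] = volumes.sum()

      p10, p50, p90 = np.percentile(totals, [10, 50, 90])
      print(f"total sand volume (million m^3): P10={p10:.0f}, P50={p50:.0f}, P90={p90:.0f}")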

  7. Trans-dimensional and hierarchical Bayesian approaches toward rigorous estimation of seismic sources and structures in the Northeast Asia

    NASA Astrophysics Data System (ADS)

    Kim, Seongryong; Tkalčić, Hrvoje; Mustać, Marija; Rhie, Junkee; Ford, Sean

    2016-04-01

    A framework is presented within which we provide rigorous estimates of seismic sources and structures in Northeast Asia. We use Bayesian inversion methods, which enable statistical estimation of models and their uncertainties based on the information in the data. Ambiguities in error statistics and model parameterizations are addressed by hierarchical and trans-dimensional (trans-D) techniques, which can be implemented naturally within the Bayesian inversions. Reliable estimation of model parameters and their uncertainties is therefore possible without arbitrary regularizations and parameterizations. Hierarchical and trans-D inversions are performed to develop a three-dimensional velocity model using ambient noise data. To further improve the model, we perform joint inversions with receiver function data using a newly developed Bayesian method. For the source estimation, a novel moment tensor inversion method is presented and applied to regional waveform data from the North Korean nuclear explosion tests. The combination of the new Bayesian techniques and the structural model, together with meaningful uncertainties for each step of the process, enables more quantitative monitoring and discrimination of seismic events.
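
    The hierarchical treatment of data noise, in which the error level is estimated from the data rather than fixed as a regularization choice, can be illustrated with a toy Metropolis-Hastings sampler on a linear model. This is a generic sketch only, not the authors' trans-dimensional inversion of ambient noise, receiver function or waveform data.

      # Toy hierarchical Bayesian sampler: fit y = m*x + c while also sampling the
      # unknown noise level sigma (a hyperparameter), instead of fixing it a priori.
      # A generic illustration only, unrelated to the actual seismic data sets.
      import numpy as np

      rng = np.random.default_rng(1)
      x = np.linspace(0, 10, 50)
      y = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)    # synthetic data, true sigma = 0.5

      def log_posterior(m, c, sigma):
          if sigma <= 0:
              return -np.inf
          resid = y - (m * x + c)
          # Gaussian likelihood with unknown sigma plus a weak (Jeffreys-like) prior on sigma
          return -x.size * np.log(sigma) - 0.5 * np.sum(resid**2) / sigma**2 - np.log(sigma)

      theta = np.array([1.0, 0.0, 1.0])                 # initial m, c, sigma
      logp = log_posterior(*theta)
      samples = []
      for _ in range(20_000):
          prop = theta + rng.normal(0, [0.05, 0.1, 0.05])
          logp_prop = log_posterior(*prop)
          if np.log(rng.uniform()) < logp_prop - logp:  # Metropolis acceptance rule
              theta, logp = prop, logp_prop
          samples.append(theta.copy())

      m_hat, c_hat, sigma_hat = np.mean(samples[5000:], axis=0)
      print(f"m={m_hat:.2f}, c={c_hat:.2f}, estimated noise sigma={sigma_hat:.2f}")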

  8. Quantitative evaluation of dual-flip-angle T1 mapping on DCE-MRI kinetic parameter estimation in head and neck

    PubMed Central

    Chow, Steven Kwok Keung; Yeung, David Ka Wai; Ahuja, Anil T; King, Ann D

    2012-01-01

    Purpose To quantitatively evaluate the kinetic parameter estimation for head and neck (HN) dynamic contrast-enhanced (DCE) MRI with dual-flip-angle (DFA) T1 mapping. Materials and methods Clinical DCE-MRI datasets of 23 patients with HN tumors were included in this study. T1 maps were generated based on the multiple-flip-angle (MFA) method and different DFA combinations. Tofts model parameter maps of kep, Ktrans and vp based on MFA and DFAs were calculated and compared. Fitted parameters from MFA and DFAs were quantitatively evaluated in primary tumor, salivary gland and muscle. Results T1 mapping deviations by DFAs produced substantial kinetic parameter estimation deviations in head and neck tissues. In particular, the DFA of [2º, 7º] overestimated, while [7º, 12º] and [7º, 15º] underestimated, Ktrans and vp significantly (P<0.01). [2º, 15º] achieved the smallest, but still statistically significant, overestimation of Ktrans and vp in primary tumors: 32.1% and 16.2%, respectively. kep fitting results by DFAs were relatively close to the MFA reference compared to Ktrans and vp. Conclusions T1 deviations induced by DFA could result in significant errors in kinetic parameter estimation, particularly of Ktrans and vp, through Tofts model fitting. The MFA method should be more reliable and robust for accurate quantitative pharmacokinetic analysis in head and neck. PMID:23289084
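
    For reference, the dual-flip-angle T1 estimate under evaluation can be written out from the spoiled gradient-echo signal equation: plotting S/sin(α) against S/tan(α) for the two flip angles gives a line whose slope is exp(-TR/T1). The sketch below uses that relation with illustrative values; it is a generic DESPOT1-style calculation, not the study's fitting code.

      # Dual-flip-angle (DESPOT1-style) T1 estimation from two SPGR signals.
      # S(alpha) = M0 * sin(alpha) * (1 - E1) / (1 - E1*cos(alpha)), with E1 = exp(-TR/T1).
      # Rearranged: S/sin(alpha) = E1 * S/tan(alpha) + M0*(1 - E1), so the slope of the
      # two-point line gives E1 and hence T1. All numbers here are illustrative.
      import numpy as np

      def spgr_signal(m0, t1, tr, alpha):
          e1 = np.exp(-tr / t1)
          return m0 * np.sin(alpha) * (1 - e1) / (1 - e1 * np.cos(alpha))

      def t1_from_two_angles(s1, s2, a1, a2, tr):
          x = np.array([s1 / np.tan(a1), s2 / np.tan(a2)])
          y = np.array([s1 / np.sin(a1), s2 / np.sin(a2)])
          slope = (y[1] - y[0]) / (x[1] - x[0])         # equals E1 for noiseless data
          return -tr / np.log(slope)

      tr = 5e-3                                         # 5 ms repetition time (illustrative)
      a1, a2 = np.deg2rad(2), np.deg2rad(15)            # one of the DFA pairs from the study
      s1 = spgr_signal(1.0, 1.2, tr, a1)                # simulate tissue with T1 = 1.2 s
      s2 = spgr_signal(1.0, 1.2, tr, a2)
      print(f"recovered T1 = {t1_from_two_angles(s1, s2, a1, a2, tr):.3f} s")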

  9. A Calibration-Free Laser-Induced Breakdown Spectroscopy (CF-LIBS) Quantitative Analysis Method Based on the Auto-Selection of an Internal Reference Line and Optimized Estimation of Plasma Temperature.

    PubMed

    Yang, Jianhong; Li, Xiaomeng; Xu, Jinwu; Ma, Xianghong

    2018-01-01

    The quantitative analysis accuracy of calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is severely affected by the self-absorption effect and the estimation of plasma temperature. Herein, a CF-LIBS quantitative analysis method based on the auto-selection of an internal reference line and an optimized estimation of plasma temperature is proposed. The internal reference line of each species is automatically selected from the analytical lines by a programmable procedure using easily accessible parameters. Furthermore, the self-absorption effect of the internal reference line is accounted for during the correction procedure. To improve the analysis accuracy of CF-LIBS, the particle swarm optimization (PSO) algorithm is introduced to estimate the plasma temperature based on the calculation results from the Boltzmann plot. Thereafter, the species concentrations of a sample can be calculated according to the classical CF-LIBS method. A total of 15 certified alloy steel standard samples of known compositions and elemental weight percentages were used in the experiment. Using the proposed method, the average relative errors of the calculated Cr, Ni, and Fe concentrations were 4.40%, 6.81%, and 2.29%, respectively. The quantitative results demonstrate an improvement over the classical CF-LIBS method and show promising potential for in situ and real-time application.
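
    The Boltzmann-plot step that underlies the temperature estimate can be sketched as a linear fit of ln(Iλ/(g_k A_ki)) against the upper-level energy E_k, whose slope is -1/(k_B T). The line parameters below are invented placeholders and the intensities are synthesized to be self-consistent; the PSO refinement and self-absorption correction from the paper are not reproduced here.

      # Boltzmann-plot estimate of plasma temperature from emission line intensities:
      # ln( I * lambda / (g_k * A_ki) ) = -E_k / (k_B * T) + const
      # The line parameters below are invented placeholders for illustration only.
      import numpy as np

      k_B = 8.617e-5                                    # Boltzmann constant, eV/K

      # hypothetical lines of one species: wavelength (nm), upper-level degeneracy g_k,
      # transition probability A_ki (1/s) and upper-level energy E_k (eV)
      lam  = np.array([404.6, 425.4, 438.4, 440.5])
      g_k  = np.array([9, 7, 11, 5])
      A_ki = np.array([1.9e7, 1.2e7, 5.0e7, 2.4e7])
      E_k  = np.array([3.05, 3.40, 4.31, 4.42])

      T_true = 9500.0                                   # assumed plasma temperature (K)
      I = g_k * A_ki / lam * np.exp(-E_k / (k_B * T_true))  # synthetic line intensities (a.u.)

      y = np.log(I * lam / (g_k * A_ki))                # Boltzmann-plot ordinate
      slope, _ = np.polyfit(E_k, y, 1)                  # slope = -1 / (k_B * T)
      print(f"recovered plasma temperature ≈ {-1.0 / (k_B * slope):,.0f} K")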

  10. Bending spring rate investigation of nanopipette for cell injection.

    PubMed

    Shen, Yajing; Zhang, Zhenhai; Fukuda, Toshio

    2015-04-17

    Bending of nanopipette tips during cell penetration is a major cause of cell injection failure. However, the flexural rigidity of nanopipettes is largely unknown because of their irregular structure. In this paper, we report a quantitative method to estimate the flexural rigidity of a nanopipette by investigating its bending spring rate. First, nanopipettes with a tip size of 300 nm are fabricated from various glass tubes by laser pulling followed by focused ion beam (FIB) milling. Then, the bending spring rate of the nanopipettes is investigated inside a scanning electron microscope (SEM). Finally, a yeast cell penetration test is performed on these nanopipettes, which have different bending spring rates. The results show that nanopipettes with a higher bending spring rate have better cell penetration capability, which confirms that the bending spring rate may well reflect the flexural rigidity of a nanopipette. This method provides a quantitative parameter for characterizing the mechanical property of a nanopipette that can potentially be taken as a standard specification in the future. The general method can also be used to evaluate other one-dimensional structures for cell injection, which will greatly benefit basic cell biology research and clinical applications.
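
    The bending spring rate itself reduces to the slope of a lateral force versus tip-deflection curve measured inside the SEM. A minimal sketch of that calculation, with invented measurement values, follows.

      # Bending spring rate of a nanopipette tip: slope of a lateral force-deflection
      # curve measured, e.g., against a calibrated force sensor inside the SEM.
      # The force and deflection values below are invented placeholders.
      import numpy as np

      deflection_um = np.array([0.0, 0.5, 1.0, 1.5, 2.0])      # tip deflection (micrometres)
      force_uN      = np.array([0.0, 0.11, 0.21, 0.33, 0.42])  # lateral force (micronewtons)

      spring_rate, _ = np.polyfit(deflection_um, force_uN, 1)  # linear-fit slope = spring rate
      print(f"bending spring rate ≈ {spring_rate:.2f} µN/µm")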

  11. Bending spring rate investigation of nanopipette for cell injection

    NASA Astrophysics Data System (ADS)

    Shen, Yajing; Zhang, Zhenhai; Fukuda, Toshio

    2015-04-01

    Bending of nanopipette tips during cell penetration is a major cause of cell injection failure. However, the flexural rigidity of nanopipettes is largely unknown because of their irregular structure. In this paper, we report a quantitative method to estimate the flexural rigidity of a nanopipette by investigating its bending spring rate. First, nanopipettes with a tip size of 300 nm are fabricated from various glass tubes by laser pulling followed by focused ion beam (FIB) milling. Then, the bending spring rate of the nanopipettes is investigated inside a scanning electron microscope (SEM). Finally, a yeast cell penetration test is performed on these nanopipettes, which have different bending spring rates. The results show that nanopipettes with a higher bending spring rate have better cell penetration capability, which confirms that the bending spring rate may well reflect the flexural rigidity of a nanopipette. This method provides a quantitative parameter for characterizing the mechanical property of a nanopipette that can potentially be taken as a standard specification in the future. The general method can also be used to evaluate other one-dimensional structures for cell injection, which will greatly benefit basic cell biology research and clinical applications.

  12. Simple, Rapid, and Highly Sensitive Detection of Diphosgene and Triphosgene by Spectrophotometric Methods

    PubMed Central

    Joy, Abraham; Anim-Danso, Emmanuel; Kohn, Joachim

    2009-01-01

    Methods for the detection and estimation of diphosgene and triphosgene are described. These compounds are widely used phosgene precursors which produce an intensely colored purple pentamethine oxonol dye when reacted with 1,3-dimethylbarbituric acid (DBA) and pyridine (or a pyridine derivative). Two quantitative methods are described, based on either UV absorbance or fluorescence of the oxonol dye. Detection limits are ~ 4 µmol/L by UV and <0.4 µmol/L by fluorescence. The third method is a test strip for the simple and rapid detection and semi-quantitative estimation of diphosgene and triphosgene, using a filter paper embedded with dimethylbarbituric acid and poly(4-vinylpyridine). Addition of a test solution to the paper causes a color change from white to light blue at low concentrations and to pink at higher concentrations of triphosgene. The test strip is useful for quick on-site detection of triphosgene and diphosgene in reaction mixtures. The test strip is easy to perform and provides clear signal readouts indicative of the presence of phosgene precursors. The utility of this method was demonstrated by the qualitative determination of residual triphosgene during the production of poly(Bisphenol A carbonate). PMID:19782219
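
    A sketch of the quantitative UV-absorbance step: construct a linear calibration curve from standards of known concentration and invert it for an unknown. The absorbance readings and concentrations below are invented placeholders, not calibration data from the paper.

      # Linear calibration for UV-absorbance quantitation of a phosgene precursor:
      # fit absorbance vs. known standard concentrations, then invert for an unknown.
      # All numbers are invented placeholders for illustration.
      import numpy as np

      conc_umol_L = np.array([0.0, 5.0, 10.0, 20.0, 40.0])    # standard concentrations
      absorbance  = np.array([0.01, 0.12, 0.23, 0.46, 0.90])  # measured A at the dye maximum

      slope, intercept = np.polyfit(conc_umol_L, absorbance, 1)
      unknown_abs = 0.31
      unknown_conc = (unknown_abs - intercept) / slope
      print(f"estimated concentration ≈ {unknown_conc:.1f} µmol/L")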

  13. Cerebral capillary velocimetry based on temporal OCT speckle contrast.

    PubMed

    Choi, Woo June; Li, Yuandong; Qin, Wan; Wang, Ruikang K

    2016-12-01

    We propose a new optical coherence tomography (OCT) based method to measure red blood cell (RBC) velocities of single capillaries in the cortex of rodent brain. This OCT capillary velocimetry exploits quantitative laser speckle contrast analysis to estimate speckle decorrelation rate from the measured temporal OCT speckle signals, which is related to microcirculatory flow velocity. We hypothesize that OCT signal due to sub-surface capillary flow can be treated as the speckle signal in the single scattering regime and thus its time scale of speckle fluctuations can be subjected to single scattering laser speckle contrast analysis to derive characteristic decorrelation time. To validate this hypothesis, OCT measurements are conducted on a single capillary flow phantom operating at preset velocities, in which M-mode B-frames are acquired using a high-speed OCT system. Analysis is then performed on the time-varying OCT signals extracted at the capillary flow, exhibiting a typical inverse relationship between the estimated decorrelation time and absolute RBC velocity, which is then used to deduce the capillary velocities. We apply the method to in vivo measurements of mouse brain, demonstrating that the proposed approach provides additional useful information in the quantitative assessment of capillary hemodynamics, complementary to that of OCT angiography.
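
    A minimal sketch of the speckle-contrast step: compute the temporal contrast K = σ/μ of the OCT intensity at a voxel and invert a single-scattering speckle model for the decorrelation time, which is taken as inversely related to velocity. The model form, window length and synthetic signal are illustrative assumptions rather than the exact processing used in the paper.

      # Temporal speckle contrast K = std/mean of the time-varying OCT intensity at a voxel,
      # converted to a decorrelation time tau_c with one commonly used single-scattering
      # speckle model: K^2 = (tau_c / 2T) * (1 - exp(-2T / tau_c)), where T is the
      # observation window. Velocity is then taken as proportional to 1/tau_c.
      # Parameters and the synthetic signal below are illustrative only.
      import numpy as np
      from scipy.optimize import brentq

      def contrast_model(tau_c, T):
          x = tau_c / (2.0 * T)
          return x * (1.0 - np.exp(-1.0 / x))

      def decorrelation_time(K, T):
          # solve K^2 = contrast_model(tau_c, T) for tau_c
          return brentq(lambda tau: contrast_model(tau, T) - K**2, 1e-7, 10.0 * T)

      # synthetic time-varying intensity at one capillary voxel (arbitrary units)
      rng = np.random.default_rng(2)
      signal = 1.0 + 0.35 * rng.standard_normal(400)

      K = signal.std() / signal.mean()                  # temporal speckle contrast
      T = 400 * 5e-6                                    # observation window: 400 samples at 5 µs
      tau_c = decorrelation_time(K, T)
      print(f"K = {K:.2f}, decorrelation time ≈ {tau_c * 1e6:.1f} µs (velocity ∝ 1/tau_c)")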

  14. Automatic detection and quantitative analysis of cells in the mouse primary motor cortex

    NASA Astrophysics Data System (ADS)

    Meng, Yunlong; He, Yong; Wu, Jingpeng; Chen, Shangbin; Li, Anan; Gong, Hui

    2014-09-01

    Neuronal cells play a very important role in metabolic regulation and mechanism control, so cell number is a fundamental determinant of brain function. By combining suitable cell-labeling approaches with recently proposed three-dimensional optical imaging techniques, whole mouse brain coronal sections can be acquired with 1-μm voxel resolution. We have developed a completely automatic pipeline to perform cell centroid detection and provide three-dimensional quantitative information on cells in the primary motor cortex of the C57BL/6 mouse. It involves four principal steps: i) preprocessing; ii) image binarization; iii) cell centroid extraction and contour segmentation; iv) laminar density estimation. Investigations of the presented method reveal promising detection accuracy in terms of recall and precision, with an average recall rate of 92.1% and an average precision rate of 86.2%. We also analyze the laminar density distribution of cells from the pial surface to the corpus callosum from the output vectorizations of detected cell centroids in the mouse primary motor cortex, and find significant variations in cellular density distribution across layers. This automatic cell centroid detection approach will be beneficial for fast cell counting and accurate density estimation, as time-consuming and error-prone manual identification is avoided.
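
    The four-step pipeline can be sketched with standard scipy tools on a synthetic image: smoothing, a global threshold for binarization, connected-component labelling for centroid extraction, and a simple density calculation. The thresholding rule, parameters and test image are illustrative assumptions, not the settings of the authors' pipeline.

      # Minimal cell-centroid detection sketch on a synthetic image: i) smooth,
      # ii) binarize with a global threshold, iii) label connected components and take
      # their centroids, iv) report a density. Parameters are illustrative only.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(3)
      image = np.zeros((200, 200))
      rows = rng.integers(10, 190, size=40)
      cols = rng.integers(10, 190, size=40)
      image[rows, cols] = 1.0                                  # synthetic "cells"
      image = ndimage.gaussian_filter(image, sigma=2) + 0.01 * rng.random(image.shape)

      smoothed = ndimage.gaussian_filter(image, sigma=1)       # i) preprocessing
      binary = smoothed > smoothed.mean() + 2 * smoothed.std() # ii) binarization
      labels, n_regions = ndimage.label(binary)                # iii) connected components
      centroids = ndimage.center_of_mass(binary, labels, range(1, n_regions + 1))

      pixel_size_mm = 1e-3                                     # 1 µm pixels
      area_mm2 = image.size * pixel_size_mm**2                 # iv) density over the field of view
      print(f"detected {len(centroids)} cells, density ≈ {len(centroids) / area_mm2:.0f} cells/mm^2")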

  15. Inertial Sensor-Based Motion Analysis of Lower Limbs for Rehabilitation Treatments

    PubMed Central

    Sun, Tongyang; Duan, Lihong; Wang, Yulong

    2017-01-01

    Diagnosis of the hemiplegic rehabilitation state by therapists can be biased by their subjective experience, which may deteriorate the rehabilitation effect. To improve this situation, a quantitative evaluation is proposed. Though many motion analysis systems are available, they are too complicated for practical application by therapists. In this paper, a method for detecting the motion of the human lower limbs, including all degrees of freedom (DOFs), via inertial sensors is proposed, which permits analyzing the patient's motion ability. The method is applicable to arbitrary walking directions and tracks of the persons under study, and its results are unbiased compared with the therapists' qualitative estimations. Using a simplified mathematical model of the human body, the rotation angles of each lower-limb joint are calculated from the input signals acquired by the inertial sensors. Finally, the rotation angle versus joint displacement curves are constructed, and estimates of the joint motion angle and motion ability are obtained. Experimental verification of the proposed motion detection and analysis method was performed and proved that it can efficiently detect the differences between the motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state. PMID:29065575
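
    The core joint-angle computation, integrating each segment's gyroscope rate to an orientation and differencing adjacent segments, can be sketched as below. The sampling rate and signals are synthetic, and drift correction and full three-dimensional sensor fusion are deliberately omitted.

      # Minimal sketch of joint-angle estimation from inertial sensors: integrate each
      # segment's gyroscope rate to an orientation angle, then take the difference of
      # adjacent segments as the joint (e.g. knee flexion) angle. Drift correction and
      # full 3-D sensor fusion are omitted; the signals and rates below are synthetic.
      import numpy as np

      fs = 100.0                                        # sample rate, Hz (illustrative)
      t = np.arange(0, 5, 1 / fs)

      thigh_rate = 40 * np.cos(2 * np.pi * 1.0 * t)     # deg/s, synthetic gyro outputs
      shank_rate = 90 * np.cos(2 * np.pi * 1.0 * t + 0.6)

      def integrate_rate(rate, fs):
          """Cumulative trapezoidal integration of angular rate to angle (degrees)."""
          return np.concatenate(([0.0], np.cumsum((rate[1:] + rate[:-1]) / 2.0) / fs))

      knee_angle = integrate_rate(shank_rate, fs) - integrate_rate(thigh_rate, fs)
      print(f"knee flexion range of motion ≈ {knee_angle.max() - knee_angle.min():.1f} deg")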

  16. A spatial Bayesian network model to assess the benefits of early warning for urban flood risk to people

    NASA Astrophysics Data System (ADS)

    Balbi, Stefano; Villa, Ferdinando; Mojtahed, Vahid; Hegetschweiler, Karin Tessa; Giupponi, Carlo

    2016-06-01

    This article presents a novel methodology to assess flood risk to people by integrating people's vulnerability and ability to cushion hazards through coping and adapting. The proposed approach extends traditional risk assessments beyond material damages; complements quantitative and semi-quantitative data with subjective and local knowledge, improving the use of commonly available information; and produces estimates of model uncertainty by providing probability distributions for all of its outputs. Flood risk to people is modeled using a spatially explicit Bayesian network model calibrated on expert opinion. Risk is assessed in terms of (1) likelihood of non-fatal physical injury, (2) likelihood of post-traumatic stress disorder and (3) likelihood of death. The study area covers the lower part of the Sihl valley (Switzerland) including the city of Zurich. The model is used to estimate the effect of improving an existing early warning system, taking into account the reliability, lead time and scope (i.e., coverage of people reached by the warning). Model results indicate that the potential benefits of an improved early warning in terms of avoided human impacts are particularly relevant in case of a major flood event.

  17. Predicting low-temperature free energy landscapes with flat-histogram Monte Carlo methods

    NASA Astrophysics Data System (ADS)

    Mahynski, Nathan A.; Blanco, Marco A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-02-01

    We present a method for predicting the free energy landscape of fluids at low temperatures from flat-histogram grand canonical Monte Carlo simulations performed at higher ones. We illustrate our approach for both pure and multicomponent systems using two different sampling methods as a demonstration. This allows us to predict the thermodynamic behavior of systems which undergo both first order and continuous phase transitions upon cooling using simulations performed only at higher temperatures. After surveying a variety of different systems, we identify a range of temperature differences over which the extrapolation of high temperature simulations tends to quantitatively predict the thermodynamic properties of fluids at lower ones. Beyond this range, extrapolation still provides a reasonably well-informed estimate of the free energy landscape; this prediction then requires less computational effort to refine with an additional simulation at the desired temperature than reconstruction of the surface without any initial estimate. In either case, this method significantly increases the computational efficiency of these flat-histogram methods when investigating thermodynamic properties of fluids over a wide range of temperatures. For example, we demonstrate how a binary fluid phase diagram may be quantitatively predicted for many temperatures using only information obtained from a single supercritical state.

  18. Analysis of Sequence Data Under Multivariate Trait-Dependent Sampling.

    PubMed

    Tao, Ran; Zeng, Donglin; Franceschini, Nora; North, Kari E; Boerwinkle, Eric; Lin, Dan-Yu

    2015-06-01

    High-throughput DNA sequencing allows for the genotyping of common and rare variants for genetic association studies. At the present time and for the foreseeable future, it is not economically feasible to sequence all individuals in a large cohort. A cost-effective strategy is to sequence those individuals with extreme values of a quantitative trait. We consider the design under which the sampling depends on multiple quantitative traits. Under such trait-dependent sampling, standard linear regression analysis can result in bias of parameter estimation, inflation of type I error, and loss of power. We construct a likelihood function that properly reflects the sampling mechanism and utilizes all available data. We implement a computationally efficient EM algorithm and establish the theoretical properties of the resulting maximum likelihood estimators. Our methods can be used to perform separate inference on each trait or simultaneous inference on multiple traits. We pay special attention to gene-level association tests for rare variants. We demonstrate the superiority of the proposed methods over standard linear regression through extensive simulation studies. We provide applications to the Cohorts for Heart and Aging Research in Genomic Epidemiology Targeted Sequencing Study and the National Heart, Lung, and Blood Institute Exome Sequencing Project.

  19. [Medical and ecological assessment of climate effects on urolithiasis morbidity in population of Primorsky territory].

    PubMed

    Koval'chuk, V K

    2004-01-01

    The article presents a medico-ecological estimation of the quantitative relationships between the monsoon climate and primary urolithiasis morbidity in the Primorsky Territory. The climate was quantified using the method of V. I. Rusanov (1973), applied to daily meteorological observations taken at 1 p.m. throughout 1991-1999. Primary urolithiasis morbidity for this period was provided by the regional health department. The data were processed by methods of medical mapping and paired correlation analysis. Mapping revealed that the zones with a high frequency of uncomfortable weather of classes V and VI, which can cause chilblains even at positive air temperatures, coincide with the zones of elevated primary urolithiasis morbidity in children and adults. Correlation analysis confirmed the mapping results and showed significant negative correlations between the frequency of relatively comfortable momentary weather classes II-IV and the morbidity of children and adults, and a positive correlation between the frequency of uncomfortable class VI and adult morbidity. Thus, a high number of days per year with uncomfortable classes of momentary weather at low positive air temperatures may be one of the risk factors for urolithiasis in the population of the Primorsky Territory. Climatic factors should be taken into consideration when planning primary prophylaxis of this disease in the Primorsky Territory.

  20. A mathematical function for the description of nutrient-response curve

    PubMed Central

    Ahmadi, Hamed

    2017-01-01

    Several mathematical equations have been proposed for modeling the nutrient-response curve in animals and humans, justified by goodness of fit and/or biological mechanism. In this paper, a functional form of a generalized quantitative model, based on the Rayleigh distribution principle, is derived for the description of nutrient-response phenomena. The three parameters governing the curve a) have biological interpretations, b) may be used to calculate reliable estimates of nutrient-response relationships, and c) provide the basis for deriving relationships between nutrients and physiological responses. The new function was successfully applied to fit nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison based on simulated data sets were also carried out to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple and flexible model when applied as a quantitative approach to characterizing the nutrient-response curve. This new mathematical way of describing nutrient-response data, with some useful biological interpretations, has the potential to be used as an alternative approach for modeling nutrient-response curves to estimate nutrient efficiency and requirements. PMID:29161271
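
    As an illustration of fitting a Rayleigh-distribution-based form to nutrient-response data, the sketch below uses the saturating three-parameter form y(x) = y0 + a(1 - exp(-x^2/(2b^2))). This functional form and the data points are assumptions for demonstration; the paper's exact equation is not reproduced here.

      # Illustrative fit of a Rayleigh-distribution-based response form to nutrient-response
      # data: y(x) = y0 + a * (1 - exp(-x^2 / (2*b^2))). The functional form and the data
      # points are assumptions for demonstration; the paper's exact equation may differ.
      import numpy as np
      from scipy.optimize import curve_fit

      def rayleigh_response(x, y0, a, b):
          return y0 + a * (1.0 - np.exp(-x**2 / (2.0 * b**2)))

      # hypothetical nutrient intake (x) versus performance response (y)
      x = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.5, 2.0])
      y = np.array([1.50, 1.62, 1.88, 2.10, 2.26, 2.33, 2.38, 2.41, 2.42])

      params, _ = curve_fit(rayleigh_response, x, y, p0=[1.5, 1.0, 0.5])
      y0, a, b = params
      print(f"y0={y0:.2f}, asymptotic gain a={a:.2f}, scale b={b:.2f}")
      # e.g. the intake giving 95% of the asymptotic response:
      x95 = b * np.sqrt(-2.0 * np.log(0.05))
      print(f"intake for 95% of maximal response ≈ {x95:.2f}")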
