The Mapping Model: A Cognitive Theory of Quantitative Estimation
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2008-01-01
How do people make quantitative estimations, such as estimating a car's selling price? Traditionally, linear-regression-type models have been used to answer this question. These models assume that people weight and integrate all information available to estimate a criterion. The authors propose an alternative cognitive theory for quantitative…
Mannetje, Andrea 't; Steenland, Kyle; Checkoway, Harvey; Koskela, Riitta-Sisko; Koponen, Matti; Attfield, Michael; Chen, Jingqiong; Hnizdo, Eva; DeKlerk, Nicholas; Dosemeci, Mustafa
2002-08-01
Comprehensive quantitative silica exposure estimates over time, measured in the same units across a number of cohorts, would make possible a pooled exposure-response analysis for lung cancer. Such an analysis would help clarify the continuing controversy regarding whether silica causes lung cancer. Existing quantitative exposure data for 10 silica-exposed cohorts were retrieved from the original investigators. Occupation- and time-specific exposure estimates were either adopted/adapted or developed for each cohort, and converted to milligrams per cubic meter (mg/m³) respirable crystalline silica. Quantitative exposure assignments were typically based on a large number (thousands) of raw measurements, or otherwise consisted of exposure estimates by experts (for two cohorts). Median exposure level of the cohorts ranged between 0.04 and 0.59 mg/m³ respirable crystalline silica. Exposure estimates were partially validated via their successful prediction of silicosis in these cohorts. Existing data were successfully adopted or modified to create comparable quantitative exposure estimates over time for 10 silica-exposed cohorts, permitting a pooled exposure-response analysis. The difficulties encountered in deriving common exposure estimates across cohorts are discussed. Copyright 2002 Wiley-Liss, Inc.
Security Events and Vulnerability Data for Cybersecurity Risk Estimation.
Allodi, Luca; Massacci, Fabio
2017-08-01
Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices as opposed to quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in Finance). This article presents a model and methodology to leverage the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools that make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system, and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he/she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology by using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach. © 2017 Society for Risk Analysis.
Misconceptions of Astronomical Distances
ERIC Educational Resources Information Center
Miller, Brian W.; Brewer, William F.
2010-01-01
Previous empirical studies using multiple-choice procedures have suggested that there are misconceptions about the scale of astronomical distances. The present study provides a quantitative estimate of the nature of this misconception among US university students by asking them, in an open-ended response format, to make estimates of the distances…
Analysis and Management of Animal Populations: Modeling, Estimation and Decision Making
Williams, B.K.; Nichols, J.D.; Conroy, M.J.
2002-01-01
This book deals with the processes involved in making informed decisions about the management of animal populations. It covers the modeling of population responses to management actions, the estimation of quantities needed in the modeling effort, and the application of these estimates and models to the development of sound management decisions. The book synthesizes and integrates in a single volume the methods associated with these themes, as they apply to ecological assessment and conservation of animal populations. Key features: integrates population modeling, parameter estimation, and decision-theoretic approaches to management in a single, cohesive framework; provides authoritative, state-of-the-art descriptions of quantitative approaches to modeling, estimation, and decision-making; emphasizes the role of mathematical modeling in the conduct of science and management; and utilizes a unifying biological context, consistent mathematical notation, and numerous biological examples.
Uncertainty Estimation Cheat Sheet for Probabilistic Risk Assessment
NASA Technical Reports Server (NTRS)
Britton, Paul; Al Hassan, Mohammad; Ring, Robert
2017-01-01
Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This paper will explain common measures of centrality and dispersion and, with examples, will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.
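The abstract above, and the two related NASA entries further down, discuss common measures of centrality and dispersion for uncertain quantities such as failure rates. A minimal Python sketch of the lognormal parameterization commonly used in probabilistic risk assessment follows, assuming the distribution is specified by a median and an error factor defined as the ratio of the 95th percentile to the median; the function names and example numbers are illustrative only.

```python
import numpy as np

Z95 = 1.645  # standard-normal 95th percentile

def lognormal_from_median_ef(median, error_factor):
    """Convert a (median, error factor) pair into lognormal mu, sigma.
    The error factor is taken as 95th percentile / median, a common PRA convention."""
    mu = np.log(median)
    sigma = np.log(error_factor) / Z95
    return mu, sigma

def summarize(mu, sigma):
    """Common centrality and dispersion measures of the lognormal distribution."""
    mean = np.exp(mu + 0.5 * sigma**2)
    p5, p50, p95 = np.exp(mu + np.array([-Z95, 0.0, Z95]) * sigma)
    return {"mean": mean, "5th": p5, "median": p50, "95th": p95}

# Illustrative failure rate of 1e-5 per hour with an error factor of 10
mu, sigma = lognormal_from_median_ef(median=1e-5, error_factor=10.0)
print(summarize(mu, sigma))
```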
Wu, Chang-Guang; Li, Sheng; Ren, Hua-Dong; Yao, Xiao-Hua; Huang, Zi-Jie
2012-06-01
Soil loss prediction models such as the universal soil loss equation (USLE) and the revised universal soil loss equation (RUSLE) are useful tools for risk assessment of soil erosion and planning of soil conservation at the regional scale. Making a rational estimate of the vegetation cover and management factor, one of the most important parameters in USLE and RUSLE, is particularly important for the accurate prediction of soil erosion. Traditional estimation based on field survey and measurement is time-consuming, laborious, and costly, and cannot rapidly provide the vegetation cover and management factor at the macro-scale. In recent years, the development of remote sensing technology has provided both data and methods for estimating the vegetation cover and management factor over broad geographic areas. This paper summarizes research findings on the quantitative estimation of the vegetation cover and management factor from remote sensing data and analyzes the advantages and disadvantages of various methods, with the aim of providing a reference for further research and quantitative estimation of the vegetation cover and management factor at large scales.
Confidence estimation for quantitative photoacoustic imaging
NASA Astrophysics Data System (ADS)
Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena
2018-02-01
Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.
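To make the voxel-selection step concrete, here is a minimal Python sketch of averaging a quantitative estimate over a region of interest while discarding low-confidence voxels. It assumes per-voxel estimates and confidence-interval widths are already available from the quantification step; all names and thresholds are illustrative, not from the paper.

```python
import numpy as np

def roi_mean_with_confidence(estimates, ci_width, roi_mask, rel_width_max=0.2):
    """Average a quantitative estimate over an ROI, keeping only voxels whose
    relative confidence-interval width is below a threshold."""
    rel_width = ci_width / np.maximum(np.abs(estimates), 1e-12)
    keep = roi_mask & (rel_width < rel_width_max)
    if not keep.any():                     # fall back to the plain ROI mean
        keep = roi_mask
    return estimates[keep].mean(), int(keep.sum())

# Toy usage with synthetic data
rng = np.random.default_rng(0)
est = rng.normal(1.0, 0.1, size=(64, 64))     # e.g., estimated absorption
ci = rng.uniform(0.01, 0.5, size=(64, 64))    # e.g., estimated CI width
roi = np.zeros((64, 64), bool)
roi[20:40, 20:40] = True
mean_val, n_used = roi_mean_with_confidence(est, ci, roi)
```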
Non-structural carbohydrates in woody plants compared among laboratories
Audrey G. Quentin; Elizabeth A. Pinkard; Michael G. Ryan; David T. Tissue; L. Scott Baggett; Henry D. Adams; Pascale Maillard; Jacqueline Marchand; Simon M. Landhausser; Andre Lacointe; Yves Gibon; William R. L. Anderegg; Shinichi Asao; Owen K. Atkin; Marc Bonhomme; Caroline Claye; Pak S. Chow; Anne Clement-Vidal; Noel W. Davies; L. Turin Dickman; Rita Dumbur; David S. Ellsworth; Kristen Falk; Lucía Galiano; Jose M. Grunzweig; Henrik Hartmann; Gunter Hoch; Sharon Hood; Joanna E. Jones; Takayoshi Koike; Iris Kuhlmann; Francisco Lloret; Melchor Maestro; Shawn D. Mansfield; Jordi Martinez-Vilalta; Mickael Maucourt; Nathan G. McDowell; Annick Moing; Bertrand Muller; Sergio G. Nebauer; Ulo Niinemets; Sara Palacio; Frida Piper; Eran Raveh; Andreas Richter; Gaelle Rolland; Teresa Rosas; Brigitte Saint Joanis; Anna Sala; Renee A. Smith; Frank Sterck; Joseph R. Stinziano; Mari Tobias; Faride Unda; Makoto Watanabe; Danielle A. Way; Lasantha K. Weerasinghe; Birgit Wild; Erin Wiley; David R. Woodruff
2016-01-01
Non-structural carbohydrates (NSC) in plant tissue are frequently quantified to make inferences about plant responses to environmental conditions. Laboratories publishing estimates of NSC of woody plants use many different methods to evaluate NSC. We asked whether NSC estimates in the recent literature could be quantitatively compared among studies. We also...
Uncertainty Estimation Cheat Sheet for Probabilistic Risk Assessment
NASA Technical Reports Server (NTRS)
Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.
2017-01-01
"Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly." (Source uncertain.) Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This paper will explain common measures of centrality and dispersion and, with examples, will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.
Lognormal Uncertainty Estimation for Failure Rates
NASA Technical Reports Server (NTRS)
Britton, Paul T.; Al Hassan, Mohammad; Ring, Robert W.
2017-01-01
"Uncertainty analysis itself is uncertain, therefore, you cannot evaluate it exactly." (Source uncertain.) Quantitative results for aerospace engineering problems are influenced by many sources of uncertainty. Uncertainty analysis aims to make a technical contribution to decision-making through the quantification of uncertainties in the relevant variables as well as through the propagation of these uncertainties up to the result. Uncertainty can be thought of as a measure of the 'goodness' of a result and is typically represented as statistical dispersion. This presentation will explain common measures of centrality and dispersion and, with examples, will provide guidelines for how they may be estimated to ensure effective technical contributions to decision-making.
Leaning to the left makes the Eiffel Tower seem smaller: posture-modulated estimation.
Eerland, Anita; Guadalupe, Tulio M; Zwaan, Rolf A
2011-12-01
In two experiments, we investigated whether body posture influences people's estimation of quantities. According to the mental-number-line theory, people mentally represent numbers along a line with smaller numbers on the left and larger numbers on the right. We hypothesized that surreptitiously making people lean to the right or to the left would affect their quantitative estimates. Participants answered estimation questions while standing on a Wii Balance Board. Posture was manipulated within subjects so that participants answered some questions while they leaned slightly to the left, some questions while they leaned slightly to the right, and some questions while they stood upright. Crucially, participants were not aware of this manipulation. Estimates were significantly smaller when participants leaned to the left than when they leaned to the right.
Quantitative method of medication system interface evaluation.
Pingenot, Alleene Anne; Shanteau, James; Pingenot, James D F
2007-01-01
The objective of this study was to develop a quantitative method of evaluating the user interface for medication system software. A detailed task analysis provided a description of user goals and essential activity. A structural fault analysis was used to develop a detailed description of the system interface. Nurses experienced with use of the system under evaluation provided estimates of failure rates for each point in this simplified fault tree. Means of the estimated failure rates provided quantitative data for the fault analysis. The authors note that, although failures of steps in the program were frequent, participants reported numerous methods of working around these failures, so that overall system failure was rare. However, frequent process failure can affect the time required for processing medications, making the system inefficient. This method of interface analysis, called the Software Efficiency Evaluation and Fault Identification Method, provides quantitative information with which prototypes can be compared and problems within an interface identified.
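The study above rolls elicited failure-rate estimates up through a simplified fault tree. A minimal Python sketch of that kind of roll-up is shown below, assuming independent steps combined through an OR gate; the step names and numbers are invented, and the actual tree structure of the method is not reproduced.

```python
import numpy as np

def mean_estimate(rates):
    """Point estimate for one fault-tree leaf: the mean of the elicited failure rates."""
    return float(np.mean(rates))

def or_gate(probabilities):
    """Probability that at least one independent step fails."""
    p = np.asarray(probabilities, dtype=float)
    return 1.0 - np.prod(1.0 - p)

# Elicited per-step failure probabilities from several nurses (illustrative numbers)
step_estimates = {
    "select_patient": [0.02, 0.05, 0.03],
    "scan_medication": [0.10, 0.08, 0.12],
    "document_dose":   [0.04, 0.06, 0.05],
}
leaf_probs = {step: mean_estimate(v) for step, v in step_estimates.items()}
process_failure = or_gate(list(leaf_probs.values()))
```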
Sornborger, Andrew; Broder, Josef; Majumder, Anirban; Srinivasamoorthy, Ganesh; Porter, Erika; Reagin, Sean S; Keith, Charles; Lauderdale, James D
2008-09-01
Ratiometric fluorescent indicators are used for making quantitative measurements of a variety of physiological variables. Their utility is often limited by noise. This is the second in a series of papers describing statistical methods for denoising ratiometric data with the aim of obtaining improved quantitative estimates of variables of interest. Here, we outline a statistical optimization method that is designed for the analysis of ratiometric imaging data in which multiple measurements have been taken of systems responding to the same stimulation protocol. This method takes advantage of correlated information across multiple datasets for objectively detecting and estimating ratiometric signals. We demonstrate our method by showing results of its application on multiple, ratiometric calcium imaging experiments.
Simon, Aaron B.; Griffeth, Valerie E. M.; Wong, Eric C.; Buxton, Richard B.
2013-01-01
Simultaneous implementation of magnetic resonance imaging methods for Arterial Spin Labeling (ASL) and Blood Oxygenation Level Dependent (BOLD) imaging makes it possible to quantitatively measure the changes in cerebral blood flow (CBF) and cerebral oxygen metabolism (CMRO2) that occur in response to neural stimuli. To date, however, the range of neural stimuli amenable to quantitative analysis is limited to those that may be presented in a simple block or event related design such that measurements may be repeated and averaged to improve precision. Here we examined the feasibility of using the relationship between cerebral blood flow and the BOLD signal to improve dynamic estimates of blood flow fluctuations as well as to estimate metabolic-hemodynamic coupling under conditions where a stimulus pattern is unknown. We found that by combining the information contained in simultaneously acquired BOLD and ASL signals through a method we term BOLD Constrained Perfusion (BCP) estimation, we could significantly improve the precision of our estimates of the hemodynamic response to a visual stimulus and, under the conditions of a calibrated BOLD experiment, accurately determine the ratio of the oxygen metabolic response to the hemodynamic response. Importantly we were able to accomplish this without utilizing a priori knowledge of the temporal nature of the neural stimulus, suggesting that BOLD Constrained Perfusion estimation may make it feasible to quantitatively study the cerebral metabolic and hemodynamic responses to more natural stimuli that cannot be easily repeated or averaged. PMID:23382977
Sunderland, John J; Christian, Paul E
2015-01-01
The Clinical Trials Network (CTN) of the Society of Nuclear Medicine and Molecular Imaging (SNMMI) operates a PET/CT phantom imaging program using the CTN's oncology clinical simulator phantom, designed to validate scanners at sites that wish to participate in oncology clinical trials. Since its inception in 2008, the CTN has collected 406 well-characterized phantom datasets from 237 scanners at 170 imaging sites covering the spectrum of commercially available PET/CT systems. The combined and collated phantom data describe a global profile of quantitative performance and variability of PET/CT data used in both clinical practice and clinical trials. Individual sites filled and imaged the CTN oncology PET phantom according to detailed instructions. Standard clinical reconstructions were requested and submitted. The phantom itself contains uniform regions suitable for scanner calibration assessment, lung fields, and 6 hot spheric lesions with diameters ranging from 7 to 20 mm at a 4:1 contrast ratio with primary background. The CTN Phantom Imaging Core evaluated the quality of the phantom fill and imaging and measured background standardized uptake values to assess scanner calibration and maximum standardized uptake values of all 6 lesions to review quantitative performance. Scanner make-and-model-specific measurements were pooled and then subdivided by reconstruction to create scanner-specific quantitative profiles. Different makes and models of scanners predictably demonstrated different quantitative performance profiles including, in some cases, small calibration bias. Differences in site-specific reconstruction parameters increased the quantitative variability among similar scanners, with postreconstruction smoothing filters being the most influential parameter. Quantitative assessment of this intrascanner variability over this large collection of phantom data gives, for the first time, estimates of reconstruction variance introduced into trials from allowing trial sites to use their preferred reconstruction methodologies. Predictably, time-of-flight-enabled scanners exhibited less size-based partial-volume bias than non-time-of-flight scanners. The CTN scanner validation experience over the past 5 y has generated a rich, well-curated phantom dataset from which PET/CT make-and-model and reconstruction-dependent quantitative behaviors were characterized for the purposes of understanding and estimating scanner-based variances in clinical trials. These results should make it possible to identify and recommend make-and-model-specific reconstruction strategies to minimize measurement variability in cancer clinical trials. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
NASA Astrophysics Data System (ADS)
Filatov, I. E.; Uvarin, V. V.; Kuznetsov, D. L.
2018-05-01
The efficiency of removal of volatile organic impurities from air by a pulsed corona discharge is investigated using model mixtures. Based on the method of competing reactions, an approach to estimating the qualitative and quantitative parameters of the employed electrophysical technique is proposed. The concept of a "toluene coefficient," characterizing the reactivity of a component relative to toluene, is introduced. It is proposed that the energy efficiency of the electrophysical method be estimated using the concept of the diversified yield of the removal process. Such an approach makes it possible to determine the energy parameters of impurity removal much more efficiently and can also serve as a criterion for assessing the effectiveness of various methods in which a nonequilibrium plasma is used to clean air of volatile impurities.
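In classical competitive kinetics, the relative rate constant of a compound versus a reference follows from the ratio of their logarithmic conversions. A minimal Python sketch of a "toluene coefficient" computed under that standard assumption follows; the paper's exact definition may differ, and the concentrations used here are illustrative.

```python
import math

def toluene_coefficient(c0_x, c_x, c0_tol, c_tol):
    """Relative reactivity of component X versus toluene from a competing-
    reactions experiment: k_X / k_toluene = ln(C0_X / C_X) / ln(C0_tol / C_tol)."""
    return math.log(c0_x / c_x) / math.log(c0_tol / c_tol)

# Illustrative concentrations (ppm) before and after corona-discharge treatment
k_rel = toluene_coefficient(c0_x=100.0, c_x=55.0, c0_tol=100.0, c_tol=70.0)
print(f"toluene coefficient = {k_rel:.2f}")  # values > 1 mean more reactive than toluene
```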
QFASAR: Quantitative fatty acid signature analysis with R
Bromaghin, Jeffrey F.
2017-01-01
Knowledge of predator diets provides essential insights into their ecology, yet diet estimation is challenging and remains an active area of research. Quantitative fatty acid signature analysis (QFASA) is a popular method of estimating diet composition that continues to be investigated and extended. However, software to implement QFASA has only recently become publicly available. I summarize a new R package, qfasar, for diet estimation using QFASA methods. The package also provides functionality to evaluate and potentially improve the performance of a library of prey signature data, compute goodness-of-fit diagnostics, and support simulation-based research. Several procedures in the package have not previously been published. qfasar makes traditional and recently published QFASA diet estimation methods accessible to ecologists for the first time. Use of the package is illustrated with signature data from Chukchi Sea polar bears and potential prey species.
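qfasar itself is an R package; as a language-consistent illustration of the underlying idea, the Python sketch below estimates diet proportions by constrained least squares between a predator fatty acid signature and a mixture of prey signatures. The real QFASA estimators use other distance measures (e.g., Aitchison or Kullback-Leibler) and calibration coefficients, so this is not the package's API, only a simplified sketch.

```python
import numpy as np
from scipy.optimize import nnls

def estimate_diet(predator_sig, prey_sigs):
    """Least-squares diet proportions: non-negative weights, summing to 1, such that
    the weighted mix of prey signatures approximates the predator signature.

    predator_sig : (n_fatty_acids,) proportions summing to 1
    prey_sigs    : (n_fatty_acids, n_prey) mean prey signatures, columns sum to 1
    """
    penalty = 1e3  # heavily weighted extra row enforcing sum(weights) == 1
    A = np.vstack([prey_sigs, penalty * np.ones((1, prey_sigs.shape[1]))])
    b = np.concatenate([predator_sig, [penalty]])
    w, _ = nnls(A, b)
    return w / w.sum()

prey = np.array([[0.50, 0.10, 0.30],
                 [0.30, 0.60, 0.20],
                 [0.20, 0.30, 0.50]])       # 3 fatty acids x 3 prey species
pred = 0.6 * prey[:, 0] + 0.4 * prey[:, 2]  # predator eating prey 1 and 3
print(estimate_diet(pred, prey))            # approximately [0.6, 0.0, 0.4]
```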
Adversity magnifies the importance of social information in decision-making.
Pérez-Escudero, Alfonso; de Polavieja, Gonzalo G
2017-11-01
Decision-making theories explain animal behaviour, including human behaviour, as a response to estimations about the environment. In the case of collective behaviour, they have given quantitative predictions of how animals follow the majority option. However, they have so far failed to explain that in some species and contexts social cohesion increases when conditions become more adverse (i.e. individuals choose the majority option with higher probability when the estimated quality of all available options decreases). We have found that this failure is due to modelling simplifications that aided analysis, like low levels of stochasticity or the assumption that only one choice is the correct one. We provide a more general but simple geometric framework to describe optimal or suboptimal decisions in collectives that gives insight into three different mechanisms behind this effect. The three mechanisms have in common that the private information acts as a gain factor to social information: a decrease in the privately estimated quality of all available options increases the impact of social information, even when social information itself remains unchanged. This increase in the importance of social information makes it more likely that agents will follow the majority option. We show that these results quantitatively explain collective behaviour in fish and experiments of social influence in humans. © 2017 The Authors.
FPGA-based fused smart-sensor for tool-wear area quantitative estimation in CNC machine inserts.
Trejo-Hernandez, Miguel; Osornio-Rios, Roque Alfredo; de Jesus Romero-Troncoso, Rene; Rodriguez-Donate, Carlos; Dominguez-Gonzalez, Aurelio; Herrera-Ruiz, Gilberto
2010-01-01
Manufacturing processes are of great relevance nowadays, when there is a constant demand for better productivity with high quality at low cost. The contribution of this work is the development of a fused smart-sensor, based on an FPGA, to improve the online quantitative estimation of flank-wear area in CNC machine inserts from the information provided by two primary sensors: the monitoring current output of a servoamplifier and a 3-axis accelerometer. Results from experimentation show that fusing both parameters makes it possible to obtain three times better accuracy than is obtained from the current and vibration signals used individually.
Agapova, Maria; Devine, Emily Beth; Bresnahan, Brian W; Higashi, Mitchell K; Garrison, Louis P
2014-09-01
Health agencies making regulatory marketing-authorization decisions use qualitative and quantitative approaches to assess expected benefits and expected risks associated with medical interventions. There is, however, no universal standard approach that regulatory agencies consistently use to conduct benefit-risk assessment (BRA) for pharmaceuticals or medical devices, including for imaging technologies. Economics, health services research, and health outcomes research use quantitative approaches to elicit preferences of stakeholders, identify priorities, and model health conditions and health intervention effects. Challenges to BRA in medical devices are outlined, highlighting additional barriers in radiology. Three quantitative methods (multi-criteria decision analysis, health outcomes modeling, and stated-choice surveys) are assessed using criteria that are important in balancing benefits and risks of medical devices and imaging technologies. To be useful in regulatory BRA, quantitative methods need to: aggregate multiple benefits and risks, incorporate qualitative considerations, account for uncertainty, and make clear whose preferences/priorities are being used. Each quantitative method performs differently across these criteria and little is known about how BRA estimates and conclusions vary by approach. While no specific quantitative method is likely to be the strongest in all of the important areas, quantitative methods may have a place in BRA of medical devices and radiology. Quantitative BRA approaches have been more widely applied in medicines, with fewer BRAs in devices. Despite substantial differences in characteristics of pharmaceuticals and devices, BRA methods may be as applicable to medical devices and imaging technologies as they are to pharmaceuticals. Further research to guide the development and selection of quantitative BRA methods for medical devices and imaging technologies is needed. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
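Of the three quantitative approaches named, multi-criteria decision analysis is the most direct to illustrate. Below is a minimal weighted-sum Python sketch of a benefit-risk score; the criteria, weights, and scores are entirely hypothetical.

```python
def mcda_score(scores, weights):
    """Weighted-sum MCDA: scores and weights are dicts keyed by criterion.
    Benefits carry positive weights, risks negative; absolute weights sum to 1."""
    assert abs(sum(abs(w) for w in weights.values()) - 1.0) < 1e-9
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical imaging device, criteria scored 0-100
scores  = {"diagnostic_accuracy": 80, "image_quality": 70,
           "radiation_dose": 40, "adverse_events": 20}
weights = {"diagnostic_accuracy": 0.4, "image_quality": 0.2,
           "radiation_dose": -0.25, "adverse_events": -0.15}
print(mcda_score(scores, weights))   # net benefit-risk score; higher is better
```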
A Demonstration on Every Exam.
ERIC Educational Resources Information Center
Julian, Glenn M.
1995-01-01
Argues that inclusion of demonstrations on examinations increases students' ability to observe carefully the physical world around them, translate from observation in terms of models, and make quantitative estimates and physicist-type "back-of-the-envelope" calculations. Presents demonstration ideas covering the following topics:…
Kimko, Holly; Berry, Seth; O'Kelly, Michael; Mehrotra, Nitin; Hutmacher, Matthew; Sethuraman, Venkat
2017-01-01
The application of modeling and simulation (M&S) methods to improve decision-making was discussed during the Trends & Innovations in Clinical Trial Statistics Conference held in Durham, North Carolina, USA on May 1-4, 2016. Uses of both pharmacometric and statistical M&S were presented during the conference, highlighting the diversity of the methods employed by pharmacometricians and statisticians to address a broad range of quantitative issues in drug development. Five presentations are summarized herein, which cover the development strategy of employing M&S to drive decision-making; European initiatives on best practice in M&S; case studies of pharmacokinetic/pharmacodynamics modeling in regulatory decisions; estimation of exposure-response relationships in the presence of confounding; and the utility of estimating the probability of a correct decision for dose selection when prior information is limited. While M&S has been widely used during the last few decades, it is expected to play an essential role as more quantitative assessments are employed in the decision-making process. By integrating M&S as a tool to compile the totality of evidence collected throughout the drug development program, more informed decisions will be made.
Databases Don't Measure Motivation
ERIC Educational Resources Information Center
Yeager, Joseph
2005-01-01
Automated persuasion is the Holy Grail of quantitatively biased data base designers. However, data base histories are, at best, probabilistic estimates of customer behavior and do not make use of more sophisticated qualitative motivational profiling tools. While usually absent from web designer thinking, qualitative motivational profiling can be…
Benefit transfer challenges: Perspectives from U.S. Practitioners
In the field of environmental economics, the use of benefit estimates reported in existing nonmarket valuation studies to calculate willingness to pay for new policy cases has come to be known as “benefit transfer.” To make quantitative statements about the likely effects of pub...
ERIC Educational Resources Information Center
Mayhew, Jerry L.
1981-01-01
Body composition refers to the types and amounts of tissues which make up the body. The most acceptable method for assessing body composition is underwater weighing. A subcutaneous skinfold provides a quantitative measurement of fat below the skin. The skinfold technique permits a valid estimate of the body's total fat content. (JN)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wohlfahrt, Georg; Gu, Lianhong
2015-06-25
different meanings by different communities. We review the history of this term and associated concepts to clarify the terminology and make recommendations about a consistent use of terms in accordance with photosynthetic theory. We show that a widely used eddy covariance CO2 flux partitioning approach yields estimates which are quantitatively closer to the definition of true photosynthesis despite aiming at estimating apparent photosynthesis.
Oxidative DNA damage background estimated by a system model of base excision repair
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sokhansanj, B A; Wilson, III, D M
Human DNA can be damaged by natural metabolism through free radical production. It has been suggested that the equilibrium between innate damage and cellular DNA repair results in an oxidative DNA damage background that potentially contributes to disease and aging. Efforts to quantitatively characterize the human oxidative DNA damage background level based on measuring 8-oxoguanine lesions as a biomarker have led to estimates varying over 3-4 orders of magnitude, depending on the method of measurement. We applied a previously developed and validated quantitative pathway model of human DNA base excision repair, integrating experimentally determined endogenous damage rates and model parameters from multiple sources. Our estimates of at most 100 8-oxoguanine lesions per cell are consistent with the low end of data from biochemical and cell biology experiments, a result robust to model limitations and parameter variation. Our results show the power of quantitative system modeling to interpret composite experimental data and make biologically and physiologically relevant predictions for complex human DNA repair pathway mechanisms and capacity.
NASA Astrophysics Data System (ADS)
Escuder-Bueno, I.; Castillo-Rodríguez, J. T.; Zechner, S.; Jöbstl, C.; Perales-Momparler, S.; Petaccia, G.
2012-09-01
Risk analysis has become a top priority for authorities and stakeholders in many European countries, with the aim of reducing flooding risk, considering the population's needs and improving risk awareness. Within this context, two methodological pieces have been developed in the period 2009-2011 within the SUFRI project (Sustainable Strategies of Urban Flood Risk Management with non-structural measures to cope with the residual risk, 2nd ERA-Net CRUE Funding Initiative). First, the "SUFRI Methodology for pluvial and river flooding risk assessment in urban areas to inform decision-making" provides a comprehensive and quantitative tool for flood risk analysis. Second, the "Methodology for investigation of risk awareness of the population concerned" presents the basis to estimate current risk from a social perspective and identify tendencies in the way floods are understood by citizens. Outcomes of both methods are integrated in this paper with the aim of informing decision making on non-structural protection measures. The results of two case studies are shown to illustrate practical applications of this developed approach. The main advantage of the methodology presented here is that it provides a quantitative estimate of flooding risk before and after investing in non-structural risk mitigation measures. This can be of great interest to decision makers, as it provides rational and solid information.
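One common way to express the kind of quantitative before/after risk estimate described above is expected annual damage, obtained by integrating damage over annual exceedance probability. The Python sketch below uses that generic formulation, not the SUFRI equations themselves, and the return periods and damages are invented for illustration.

```python
import numpy as np

def expected_annual_damage(return_periods, damages):
    """Approximate expected annual damage by trapezoidal integration of damage
    over annual exceedance probability (p = 1 / return period)."""
    p = 1.0 / np.asarray(return_periods, dtype=float)
    d = np.asarray(damages, dtype=float)
    order = np.argsort(p)                  # integrate over increasing probability
    return float(np.trapz(d[order], p[order]))

# Illustrative damages (million EUR) for several flood return periods (years)
T = [10, 50, 100, 500]
damage_before = [2.0, 15.0, 30.0, 80.0]
damage_after  = [1.0, 8.0, 18.0, 55.0]     # e.g., with early warning / preparedness
risk_reduction = (expected_annual_damage(T, damage_before)
                  - expected_annual_damage(T, damage_after))
```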
Modified microplate method for rapid and efficient estimation of siderophore produced by bacteria.
Arora, Naveen Kumar; Verma, Maya
2017-12-01
In this study, siderophore production by various plant-growth-promoting rhizobacteria was quantified by a rapid and efficient method. In total, 23 siderophore-producing bacterial isolates/strains were used to estimate siderophore-producing ability by the standard method (chrome azurol sulphonate assay) as well as a 96-well microplate method. Siderophore production was estimated in percent siderophore units by both methods. The data obtained by the two methods correlated positively with each other, confirming the validity of the microplate method. With the modified microplate method, siderophore production by several bacterial strains can be estimated both qualitatively and quantitatively in one run, saving time and chemicals and making the assay less tedious and cheaper than the method currently in use. The modified microtiter plate method proposed here makes it far easier to screen the plant-growth-promoting character of plant-associated bacteria.
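Both the standard CAS assay and the microplate variant report results in percent siderophore units computed from reference and sample absorbances. A minimal Python sketch of that standard calculation follows; the plate layout and volumes of the modified method are not reproduced, and the absorbances are illustrative.

```python
def percent_siderophore_units(a_reference, a_sample):
    """Percent siderophore units from the CAS assay:
    psu = (Ar - As) / Ar * 100, where Ar is the absorbance of the CAS reference
    (uninoculated medium + CAS reagent) and As that of culture supernatant + CAS reagent."""
    return (a_reference - a_sample) / a_reference * 100.0

# Illustrative 630-nm absorbances read from a 96-well plate
print(percent_siderophore_units(a_reference=0.85, a_sample=0.40))  # about 52.9 psu
```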
Linearization improves the repeatability of quantitative dynamic contrast-enhanced MRI.
Jones, Kyle M; Pagel, Mark D; Cárdenas-Rodríguez, Julio
2018-04-01
The purpose of this study was to compare the repeatabilities of the linear and nonlinear Tofts and reference region models (RRM) for dynamic contrast-enhanced MRI (DCE-MRI). Simulated and experimental DCE-MRI data from 12 rats with a flank tumor of C6 glioma acquired over three consecutive days were analyzed using four quantitative and semi-quantitative DCE-MRI metrics. The quantitative methods used were: 1) linear Tofts model (LTM), 2) non-linear Tofts model (NTM), 3) linear RRM (LRRM), and 4) non-linear RRM (NRRM). The following semi-quantitative metrics were used: 1) maximum enhancement ratio (MER), 2) time to peak (TTP), 3) initial area under the curve (iauc64), and 4) slope. LTM and NTM were used to estimate Ktrans, while LRRM and NRRM were used to estimate Ktrans relative to muscle (RKtrans). Repeatability was assessed by calculating the within-subject coefficient of variation (wSCV) and the percent intra-subject variation (iSV) determined with the Gage R&R analysis. The iSV for RKtrans using LRRM was two-fold lower compared to NRRM at all simulated and experimental conditions. A similar trend was observed for the Tofts model, where LTM was at least 50% more repeatable than the NTM under all experimental and simulated conditions. The semi-quantitative metrics iauc64 and MER were as repeatable as Ktrans and RKtrans estimated by LTM and LRRM, respectively. The iSV for iauc64 and MER were significantly lower than the iSV for slope and TTP. In simulations and experimental results, linearization improves the repeatability of quantitative DCE-MRI by at least 30%, making it as repeatable as semi-quantitative metrics. Copyright © 2017 Elsevier Inc. All rights reserved.
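Repeatability in the entry above is summarized by the within-subject coefficient of variation. The Python sketch below shows one common estimator of wSCV from repeated measurements (the Gage R&R analysis is not reproduced); the Ktrans-like values are illustrative.

```python
import numpy as np

def within_subject_cv(measurements):
    """Within-subject coefficient of variation (%) from a (subjects x replicates)
    array: root-mean within-subject variance divided by the grand mean."""
    m = np.asarray(measurements, dtype=float)
    within_var = m.var(axis=1, ddof=1).mean()   # average per-subject variance
    return 100.0 * np.sqrt(within_var) / m.mean()

# Ktrans-like values for 4 subjects measured on 3 consecutive days (illustrative)
ktrans = [[0.10, 0.12, 0.11],
          [0.20, 0.19, 0.22],
          [0.15, 0.16, 0.14],
          [0.08, 0.09, 0.08]]
print(f"wSCV = {within_subject_cv(ktrans):.1f}%")
```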
Estimating market boundaries for health care facilities and services.
Massey, T K; Blake, F W
1987-09-01
Competition in the health care industry is intensifying. The changing environment is making it necessary for executives to integrate quantitative market identification methods into their strategic planning systems. The authors propose one such method that explicitly recognizes the relative strength of competition in the marketplace and offer two examples of its implementation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhl, D.E.
1976-08-05
During the thirteen-year duration of this contract, the goal has been to develop and apply computer-based analysis of radionuclide scan data so as to make available improved diagnostic information based on a knowledge of localized quantitative estimates of radionuclide concentration. Results are summarized. (CH)
A Method for Qualitative Mapping of Thick Oil Spills Using Imaging Spectroscopy
Clark, Roger N.; Swayze, Gregg A.; Leifer, Ira; Livo, K. Eric; Lundeen, Sarah; Eastwood, Michael; Green, Robert O.; Kokaly, Raymond F.; Hoefen, Todd; Sarture, Charles; McCubbin, Ian; Roberts, Dar; Steele, Denis; Ryan, Thomas; Dominguez, Roseanne; Pearson, Neil; ,
2010-01-01
A method is described to create qualitative images of thick oil in oil spills on water using near-infrared imaging spectroscopy data. The method uses simple 'three-point-band depths' computed for each pixel in an imaging spectrometer image cube using the organic absorption features due to chemical bonds in aliphatic hydrocarbons at 1.2, 1.7, and 2.3 microns. The method is not quantitative because sub-pixel mixing and layering effects are not considered, which are necessary to make a quantitative volume estimate of oil.
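A three-point band depth is computed from the reflectance at the band center and a straight-line continuum through two shoulder wavelengths. A minimal per-pixel Python sketch follows; the wavelengths and reflectances approximate the 1.7-micron aliphatic feature and are chosen only for illustration.

```python
def three_point_band_depth(wl_left, r_left, wl_center, r_center, wl_right, r_right):
    """Band depth D = 1 - R_center / R_continuum, where the continuum is the
    straight line between the two shoulder reflectances, evaluated at the band center."""
    frac = (wl_center - wl_left) / (wl_right - wl_left)
    r_continuum = r_left + frac * (r_right - r_left)
    return 1.0 - r_center / r_continuum

# Illustrative reflectances around the ~1.7 micron C-H absorption in thick oil
depth = three_point_band_depth(1.66, 0.32, 1.72, 0.24, 1.78, 0.33)
print(f"band depth = {depth:.3f}")   # larger depth -> stronger aliphatic absorption
```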
Generalized likelihood ratios for quantitative diagnostic test scores.
Tandberg, D; Deely, J J; O'Malley, A J
1997-11-01
The reduction of quantitative diagnostic test scores to the dichotomous case is a wasteful and unnecessary simplification in the era of high-speed computing. Physicians could make better use of the information embedded in quantitative test results if modern generalized curve estimation techniques were applied to the likelihood functions of Bayes' theorem. Hand calculations could be completely avoided and computed graphical summaries provided instead. Graphs showing posttest probability of disease as a function of pretest probability with confidence intervals (POD plots) would enhance acceptance of these techniques if they were immediately available at the computer terminal when test results were retrieved. Such constructs would also provide immediate feedback to physicians when a valueless test had been ordered.
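The calculation behind the proposed POD plots is Bayes' theorem in odds form, with the likelihood ratio evaluated at the quantitative test score. A minimal Python sketch follows; the score densities in diseased and non-diseased patients are smoothed here with Gaussian kernel density estimates, and the data and pretest probability are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

def posttest_probability(score, pretest_prob, scores_diseased, scores_healthy):
    """Post-test probability of disease for a quantitative test score:
    posterior odds = prior odds * LR(score), LR = f_diseased(score) / f_healthy(score)."""
    f_d = gaussian_kde(scores_diseased)(score)[0]
    f_h = gaussian_kde(scores_healthy)(score)[0]
    prior_odds = pretest_prob / (1.0 - pretest_prob)
    post_odds = prior_odds * (f_d / f_h)
    return post_odds / (1.0 + post_odds)

rng = np.random.default_rng(1)
diseased = rng.normal(8.0, 2.0, 200)      # illustrative biomarker values
healthy  = rng.normal(4.0, 2.0, 200)
print(posttest_probability(7.0, pretest_prob=0.2,
                           scores_diseased=diseased, scores_healthy=healthy))
```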
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim
2014-09-01
The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but still a lot of questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, the multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data. © The Author(s) 2014.
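As a concrete example of the kind of design matrix discussed, the Python sketch below builds one for a single case in a multiple-baseline design, coding an immediate level change and a change in slope at the intervention point. The column definitions are one common parameterization, not necessarily the article's.

```python
import numpy as np

def multiple_baseline_design(n_sessions, intervention_start):
    """Design matrix with columns: intercept, baseline time trend,
    phase dummy (level change), and time since intervention (slope change)."""
    t = np.arange(n_sessions)
    phase = (t >= intervention_start).astype(float)
    time_since = np.where(phase == 1, t - intervention_start, 0.0)
    return np.column_stack([np.ones(n_sessions), t, phase, time_since])

X = multiple_baseline_design(n_sessions=12, intervention_start=6)
# Fitting y = X @ beta by ordinary least squares gives, in beta:
# [baseline level, baseline slope, immediate level change, slope change].
```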
Determining the Strength of an Electromagnet through Damped Oscillations
ERIC Educational Resources Information Center
Thompson, Michael; Leung, Chi Fan
2011-01-01
This article describes a project designed to extend sixth-form pupils looking to further their knowledge and skill base in physics. This project involves a quantitative analysis of the decaying amplitude of a metal plate oscillating in a strong magnetic field; the decay of the amplitude is used to make estimates of the strength of the magnetic…
Pest management in Douglas-fir seed orchards: a microcomputer decision method
James B. Hoy; Michael I. Haverty
1988-01-01
The computer program described provides a Douglas-fir seed orchard manager (user) with a quantitative method for making insect pest management decisions on a desk-top computer. The decision system uses site-specific information such as estimates of seed crop size, insect attack rates, insecticide efficacy and application costs, weather, and crop value. At sites where...
Henshall, John M; Dierens, Leanne; Sellars, Melony J
2014-09-02
While much attention has focused on the development of high-density single nucleotide polymorphism (SNP) assays, the costs of developing and running low-density assays have fallen dramatically. This makes it feasible to develop and apply SNP assays for agricultural species beyond the major livestock species. Although low-cost low-density assays may not have the accuracy of the high-density assays widely used in human and livestock species, we show that when combined with statistical analysis approaches that use quantitative instead of discrete genotypes, their utility may be improved. The data used in this study are from a 63-SNP marker Sequenom® iPLEX Platinum panel for the Black Tiger shrimp, for which high-density SNP assays are not currently available. For quantitative genotypes that could be estimated, in 5% of cases the most likely genotype for an individual at a SNP had a probability of less than 0.99. Matrix formulations of maximum likelihood equations for parentage assignment were developed for the quantitative genotypes and also for discrete genotypes perturbed by an assumed error term. Assignment rates that were based on maximum likelihood with quantitative genotypes were similar to those based on maximum likelihood with perturbed genotypes but, for more than 50% of cases, the two methods resulted in individuals being assigned to different families. Treating genotypes as quantitative values allows the same analysis framework to be used for pooled samples of DNA from multiple individuals. Resulting correlations between allele frequency estimates from pooled DNA and individual samples were consistently greater than 0.90, and as high as 0.97 for some pools. Estimates of family contributions to the pools based on quantitative genotypes in pooled DNA had a correlation of 0.85 with estimates of contributions from DNA-derived pedigree. Even with low numbers of SNPs of variable quality, parentage testing and family assignment from pooled samples are sufficiently accurate to provide useful information for a breeding program. Treating genotypes as quantitative values is an alternative to perturbing genotypes using an assumed error distribution, but can produce very different results. An understanding of the distribution of the error is required for SNP genotyping platforms.
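One element of the approach above, treating genotypes as quantitative values, makes allele-frequency estimation a matter of averaging expected allele dosages. The Python sketch below illustrates that step under the simplifying assumption that each quantitative genotype is an expected B-allele dosage in [0, 2]; the maximum-likelihood parentage equations of the paper are not reproduced.

```python
import numpy as np

def allele_frequency_from_dosages(dosages):
    """Estimate the B-allele frequency at one SNP from quantitative genotypes,
    each expressed as an expected dosage of the B allele in [0, 2]
    (e.g., 2*P(BB) + 1*P(AB) from per-individual genotype probabilities)."""
    d = np.asarray(dosages, dtype=float)
    return d.mean() / 2.0

# Quantitative genotypes for six individuals at one SNP (illustrative)
print(allele_frequency_from_dosages([1.98, 1.02, 0.05, 1.00, 1.95, 0.97]))  # ~0.58
```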
Light scattering application for quantitative estimation of apoptosis
NASA Astrophysics Data System (ADS)
Bilyy, Rostyslav O.; Stoika, Rostyslav S.; Getman, Vasyl B.; Bilyi, Olexander I.
2004-05-01
Estimation of cell proliferation and apoptosis is the focus of many instrumental methods used in modern biomedical science. The present study concerns monitoring of the functional state of cells, specifically the development of their programmed death, or apoptosis. The available methods for this purpose are either very expensive or require time-consuming operations, and their specificity and sensitivity are frequently insufficient to support conclusions that could be used in diagnostics or treatment monitoring. We propose a novel method of apoptosis measurement based on quantitative determination of the cellular functional state, taking into account the cells' physical characteristics. The method uses a patented device, the laser microparticle analyser PRM-6, for analyzing light scattering by microparticles, including cells. It allows quick, quantitative, simple (without complicated preliminary cell processing), and relatively cheap measurement of apoptosis in a cell population. The method was used to study apoptosis in murine leukemia cells of the L1210 line and human lymphoblastic leukemia cells of the K562 line. The results obtained permitted measurement of the cell number in a tested sample, detection and quantitative characterization of the functional state of cells, and, in particular, measurement of the proportion of apoptotic cells in suspension.
Grinstein, Amir; Kodra, Evan; Chen, Stone; Sheldon, Seth; Zik, Ory
2018-01-01
Individuals must have a quantitative understanding of the carbon footprint tied to their everyday choices in order to make efficient, sustainable decisions. We report research on the innumeracy of individuals as it relates to their carbon footprint. In three studies that varied in terms of scale and sample, respondents estimated the quantity of CO2 released when combusting a gallon of gasoline in comparison to several well-known metrics including food calories and travel distance. Consistently, respondents estimated the quantity of CO2 from gasoline compared to other metrics with significantly less accuracy while exhibiting a tendency to underestimate CO2. Such a relative absence of carbon numeracy for even a basic consumption habit may limit the effectiveness of environmental policies and campaigns aimed at changing individual behavior. We discuss several caveats as well as opportunities for policy design that could aid the improvement of people's quantitative understanding of their carbon footprint.
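For reference, the quantity respondents were asked to estimate can be worked out from typical fuel properties. The back-of-the-envelope Python sketch below uses approximate, generic values that are not figures reported in the study.

```python
# Approximate, generic values (not from the study)
GALLON_L = 3.785            # liters per US gallon
DENSITY_KG_PER_L = 0.74     # typical gasoline density
CARBON_MASS_FRACTION = 0.87 # typical carbon content of gasoline by mass
CO2_PER_C = 44.0 / 12.0     # molar-mass ratio of CO2 to carbon

fuel_mass = GALLON_L * DENSITY_KG_PER_L               # roughly 2.8 kg of gasoline
co2_mass = fuel_mass * CARBON_MASS_FRACTION * CO2_PER_C
print(f"~{co2_mass:.1f} kg CO2 per gallon")           # on the order of 9 kg
```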
Gunawardena, Harsha P; O'Brien, Jonathon; Wrobel, John A; Xie, Ling; Davies, Sherri R; Li, Shunqiang; Ellis, Matthew J; Qaqish, Bahjat F; Chen, Xian
2016-02-01
Single quantitative platforms such as label-based or label-free quantitation (LFQ) present compromises in accuracy, precision, protein sequence coverage, and speed of quantifiable proteomic measurements. To maximize the quantitative precision and the number of quantifiable proteins or the quantifiable coverage of tissue proteomes, we have developed a unified approach, termed QuantFusion, that combines the quantitative ratios of all peptides measured by both LFQ and label-based methodologies. Here, we demonstrate the use of QuantFusion in determining the proteins differentially expressed in a pair of patient-derived tumor xenografts (PDXs) representing two major breast cancer (BC) subtypes, basal and luminal. Label-based in-spectra quantitative peptides derived from amino acid-coded tagging (AACT, also known as SILAC) of a non-malignant mammary cell line were uniformly added to each xenograft with a constant predefined ratio, from which Ratio-of-Ratio estimates were obtained for the label-free peptides paired with AACT peptides in each PDX tumor. A mixed model statistical analysis was used to determine global differential protein expression by combining complementary quantifiable peptide ratios measured by LFQ and Ratio-of-Ratios, respectively. With minimum number of replicates required for obtaining the statistically significant ratios, QuantFusion uses the distinct mechanisms to "rescue" the missing data inherent to both LFQ and label-based quantitation. Combined quantifiable peptide data from both quantitative schemes increased the overall number of peptide level measurements and protein level estimates. In our analysis of the PDX tumor proteomes, QuantFusion increased the number of distinct peptide ratios by 65%, representing differentially expressed proteins between the BC subtypes. This quantifiable coverage improvement, in turn, not only increased the number of measurable protein fold-changes by 8% but also increased the average precision of quantitative estimates by 181% so that some BC subtypically expressed proteins were rescued by QuantFusion. Thus, incorporating data from multiple quantitative approaches while accounting for measurement variability at both the peptide and global protein levels make QuantFusion unique for obtaining increased coverage and quantitative precision for tissue proteomes. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
Scatter and veiling glare corrections for quantitative digital subtraction angiography
NASA Astrophysics Data System (ADS)
Ersahin, Atila; Molloi, Sabee Y.; Qian, Yao-Jin
1994-05-01
In order to quantitate anatomical and physiological parameters such as vessel dimensions and volumetric blood flow, it is necessary to make corrections for scatter and veiling glare (SVG), which are the major sources of nonlinearities in videodensitometric digital subtraction angiography (DSA). A convolution filtering technique has been investigated to estimate SVG distribution in DSA images without the need to sample the SVG for each patient. This technique utilizes exposure parameters and image gray levels to estimate SVG intensity by predicting the total thickness for every pixel in the image. At this point, corrections were also made for variation of SVG fraction with beam energy and field size. To test its ability to estimate SVG intensity, the correction technique was applied to images of a Lucite step phantom, anthropomorphic chest phantom, head phantom, and animal models at different thicknesses, projections, and beam energies. The root-mean-square (rms) percentage error of these estimates were obtained by comparison with direct SVG measurements made behind a lead strip. The average rms percentage errors in the SVG estimate for the 25 phantom studies and for the 17 animal studies were 6.22% and 7.96%, respectively. These results indicate that the SVG intensity can be estimated for a wide range of thicknesses, projections, and beam energies.
Saarman, Emily T.; Owens, Brian; Murray, Steven N.; Weisberg, Stephen B.; Field, John C.; Nielsen, Karina J.
2018-01-01
There are numerous reasons to conduct scientific research within protected areas, but research activities may also negatively impact organisms and habitats, and thus conflict with a protected area’s conservation goals. We developed a quantitative ecological decision-support framework that estimates these potential impacts so managers can weigh costs and benefits of proposed research projects and make informed permitting decisions. The framework generates quantitative estimates of the ecological impacts of the project and the cumulative impacts of the proposed project and all other projects in the protected area, and then compares the estimated cumulative impacts of all projects with policy-based acceptable impact thresholds. We use a series of simplified equations (models) to assess the impacts of proposed research to: a) the population of any targeted species, b) the major ecological assemblages that make up the community, and c) the physical habitat that supports protected area biota. These models consider both targeted and incidental impacts to the ecosystem and include consideration of the vulnerability of targeted species, assemblages, and habitats, based on their recovery time and ecological role. We parameterized the models for a wide variety of potential research activities that regularly occur in the study area using a combination of literature review and expert judgment with a precautionary approach to uncertainty. We also conducted sensitivity analyses to examine the relationships between model input parameters and estimated impacts to understand the dominant drivers of the ecological impact estimates. Although the decision-support framework was designed for and adopted by the California Department of Fish and Wildlife for permitting scientific studies in the state-wide network of marine protected areas (MPAs), the framework can readily be adapted for terrestrial and freshwater protected areas. PMID:29920527
Linkage disequilibrium interval mapping of quantitative trait loci.
Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte
2006-03-16
For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates.
Randolph, S E; Craine, N G
1995-11-01
Models of tick-borne diseases must take account of the particular biological features of ticks that contrast with those of insect vectors. A general framework is proposed that identifies the parameters of the transmission dynamics of tick-borne diseases to allow a quantitative assessment of the relative contributions of different host species and alternative transmission routes to the basic reproductive number, R0, of such diseases. Taking the particular case of the transmission of the Lyme borreliosis spirochaete, Borrelia burgdorferi, by Ixodes ticks in Europe, and using the best, albeit still inadequate, estimates of the parameter values and a set of empirical data from Thetford Forest, England, we show that squirrels and the transovarial transmission route make quantitatively very significant contributions to R0. This approach highlights the urgent need for more robust estimates of certain crucial parameter values, particularly the coefficients of transmission between ticks and vertebrates, before we can progress to full models that incorporate seasonality and heterogeneity among host populations for the natural dynamics of transmission of borreliosis and other tick-borne diseases.
Influence of entanglements on glass transition temperature of polystyrene
NASA Astrophysics Data System (ADS)
Ougizawa, Toshiaki; Kinugasa, Yoshinori
2013-03-01
Chain entanglement is an essential feature of polymeric molecules and appears to affect many physical properties, including not only melt viscosity but also the glass transition temperature (Tg). Quantitative estimation of this influence has not been achieved, however, because the entanglement density is considered an intrinsic property of the polymer melt that depends on its chemical structure. Freeze-drying is one of the few known ways to prepare samples with different entanglement densities from dilute solution. In this study, the influence of entanglements on the Tg of polystyrene prepared by the freeze-drying method was estimated quantitatively. The freeze-dried samples showed Tg depression with decreasing concentration of the precursor solution, owing to the lower entanglement density, and the Tg depression saturated when almost no intermolecular entanglements were formed. The molecular weight dependence of the maximum Tg depression is also discussed.
Kodra, Evan; Chen, Stone; Sheldon, Seth; Zik, Ory
2018-01-01
Individuals need a quantitative understanding of the carbon footprint tied to their everyday choices in order to make efficient, sustainable decisions. We report research on the innumeracy of individuals as it relates to their carbon footprint. In three studies that varied in scale and sample, respondents estimated the quantity of CO2 released when combusting a gallon of gasoline in comparison to several well-known metrics, including food calories and travel distance. Consistently, respondents estimated the quantity of CO2 from gasoline with significantly less accuracy than the other metrics, while exhibiting a tendency to underestimate CO2. Such a relative absence of carbon numeracy for even a basic consumption habit may limit the effectiveness of environmental policies and campaigns aimed at changing individual behavior. We discuss several caveats as well as opportunities for policy design that could aid the improvement of people's quantitative understanding of their carbon footprint. PMID:29723206
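As a rough cross-check of the quantity respondents were asked to estimate, a back-of-the-envelope calculation with assumed round numbers for gasoline density and carbon content (not figures from the study) gives roughly 9 kg of CO2 per gallon:

```python
# Back-of-the-envelope estimate of CO2 released by burning one gallon of
# gasoline. The inputs are assumed round numbers, not values from the study.

GALLON_L = 3.785            # litres per US gallon
DENSITY_KG_PER_L = 0.74     # assumed density of gasoline
CARBON_FRACTION = 0.87      # assumed carbon mass fraction of gasoline
CO2_PER_C = 44.0 / 12.0     # kg CO2 produced per kg carbon burned

fuel_mass = GALLON_L * DENSITY_KG_PER_L      # ~2.8 kg gasoline
carbon_mass = fuel_mass * CARBON_FRACTION    # ~2.4 kg carbon
co2_mass = carbon_mass * CO2_PER_C           # ~8.9 kg CO2

print(f"Approx. CO2 per gallon of gasoline: {co2_mass:.1f} kg")
```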
Optimally weighted least-squares steganalysis
NASA Astrophysics Data System (ADS)
Ker, Andrew D.
2007-02-01
Quantitative steganalysis aims to estimate the amount of payload in a stego object, and such estimators seem to arise naturally in steganalysis of Least Significant Bit (LSB) replacement in digital images. However, as with all steganalysis, the estimators are subject to errors, and their magnitude seems heavily dependent on properties of the cover. In very recent work we have given the first derivation of estimation error, for a certain method of steganalysis (the Least-Squares variant of Sample Pairs Analysis) of LSB replacement steganography in digital images. In this paper we make use of our theoretical results to find an improved estimator and detector. We also extend the theoretical analysis to another (more accurate) steganalysis estimator (Triples Analysis) and hence derive an improved version of that estimator too. Experimental results show that the new steganalyzers have improved accuracy, particularly in the difficult case of never-compressed covers.
Statistical, economic and other tools for assessing natural aggregate
Bliss, J.D.; Moyle, P.R.; Bolm, K.S.
2003-01-01
Quantitative aggregate resource assessment provides resource estimates useful for explorationists, land managers and those who make decisions about land allocation, which may have long-term implications concerning cost and the availability of aggregate resources. Aggregate assessment needs to be systematic and consistent, yet flexible enough to allow updating without invalidating other parts of the assessment. Evaluators need to use standard or consistent aggregate classifications and statistical distributions or, in other words, models of the geological, geotechnical and economic variables and of the interrelationships between these variables. These models can be used with subjective estimates, if needed, to estimate how much aggregate may be present in a region or country using distributions generated by Monte Carlo computer simulations.
Dynamical Stochastic Processes of Returns in Financial Markets
NASA Astrophysics Data System (ADS)
Kim, Kyungsik; Kim, Soo Yong; Lim, Gyuchang; Zhou, Junyuan; Yoon, Seung-Min
2006-03-01
We show how the evolution of probability distribution functions of the returns from the tick data of the Korean treasury bond futures (KTB) and the S&P 500 stock index can be described by means of the Fokker-Planck equation. We derive the Fokker-Planck equation from the Kramers-Moyal coefficients estimated directly from the empirical data. By analyzing the statistics of the returns, we present the quantitative deterministic and random influences on both financial time series, for which we can give a simple physical interpretation. Finally, we remark that the diffusion coefficient should be taken into account when constructing a portfolio.
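The abstract does not reproduce the estimator itself; the following is a minimal sketch of the standard conditional-moment approach to the first two Kramers-Moyal coefficients (drift and diffusion), run here on a synthetic Ornstein-Uhlenbeck series rather than market data:

```python
import numpy as np

def kramers_moyal(x, dt, n_bins=30):
    """Estimate the first two Kramers-Moyal coefficients D1 (drift) and
    D2 (diffusion) from a time series x sampled at interval dt, using
    conditional moments of the increments within bins of x."""
    dx = np.diff(x)
    x0 = x[:-1]
    edges = np.linspace(x0.min(), x0.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    d1 = np.full(n_bins, np.nan)
    d2 = np.full(n_bins, np.nan)
    for i in range(n_bins):
        mask = (x0 >= edges[i]) & (x0 < edges[i + 1])
        if mask.sum() > 10:                            # require enough samples per bin
            d1[i] = dx[mask].mean() / dt               # D1 ~ <dx | x> / dt
            d2[i] = (dx[mask] ** 2).mean() / (2 * dt)  # D2 ~ <dx^2 | x> / (2 dt)
    return centers, d1, d2

# Synthetic Ornstein-Uhlenbeck process as a stand-in for return data.
rng = np.random.default_rng(0)
dt, n = 0.01, 200_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t - 1] - 1.0 * x[t - 1] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()

centers, d1, d2 = kramers_moyal(x, dt)
print(d1[:5], d2[:5])   # drift should be roughly -x, diffusion roughly constant
```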
Uncertainty Analysis of Radar and Gauge Rainfall Estimates in the Russian River Basin
NASA Astrophysics Data System (ADS)
Cifelli, R.; Chen, H.; Willie, D.; Reynolds, D.; Campbell, C.; Sukovich, E.
2013-12-01
Radar Quantitative Precipitation Estimation (QPE) has been a very important application of weather radar since it was introduced and made widely available after World War II. Although great progress has been made over the last two decades, it is still a challenging process especially in regions of complex terrain such as the western U.S. It is also extremely difficult to make direct use of radar precipitation data in quantitative hydrologic forecasting models. To improve the understanding of rainfall estimation and distributions in the NOAA Hydrometeorology Testbed in northern California (HMT-West), extensive evaluation of radar and gauge QPE products has been performed using a set of independent rain gauge data. This study focuses on the rainfall evaluation in the Russian River Basin. The statistical properties of the different gridded QPE products will be compared quantitatively. The main emphasis of this study will be on the analysis of uncertainties of the radar and gauge rainfall products that are subject to various sources of error. The spatial variation analysis of the radar estimates is performed by measuring the statistical distribution of the radar base data such as reflectivity and by the comparison with a rain gauge cluster. The application of mean field bias values to the radar rainfall data will also be described. The uncertainty analysis of the gauge rainfall will be focused on the comparison of traditional kriging and conditional bias penalized kriging (Seo 2012) methods. This comparison is performed with the retrospective Multisensor Precipitation Estimator (MPE) system installed at the NOAA Earth System Research Laboratory. The independent gauge set will again be used as the verification tool for the newly generated rainfall products.
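The mean field bias adjustment mentioned above is, in one common convention (not necessarily the one used in this study), the ratio of gauge to collocated radar accumulations, applied multiplicatively to the radar field; a minimal sketch with hypothetical totals:

```python
import numpy as np

def mean_field_bias(gauge, radar, eps=1e-6):
    """Mean field bias as the ratio of summed gauge accumulations to summed
    collocated radar accumulations; multiplying the radar field by this factor
    removes the domain-average bias (one common convention, shown here only
    as an illustration)."""
    gauge = np.asarray(gauge, dtype=float)
    radar = np.asarray(radar, dtype=float)
    return gauge.sum() / max(radar.sum(), eps)

gauge_obs = [12.1, 8.4, 15.0, 3.2]    # hypothetical gauge storm totals (mm)
radar_est = [10.0, 7.1, 12.5, 2.6]    # hypothetical collocated radar estimates (mm)

bias = mean_field_bias(gauge_obs, radar_est)
corrected_field = bias * np.array(radar_est)
print(f"mean field bias = {bias:.2f}")
```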
Post-decision biases reveal a self-consistency principle in perceptual inference.
Luu, Long; Stocker, Alan A
2018-05-15
Making a categorical judgment can systematically bias our subsequent perception of the world. We show that these biases are well explained by a self-consistent Bayesian observer whose perceptual inference process is causally conditioned on the preceding choice. We quantitatively validated the model and its key assumptions with a targeted set of three psychophysical experiments, focusing on a task sequence where subjects first had to make a categorical orientation judgment before estimating the actual orientation of a visual stimulus. Subjects exhibited a high degree of consistency between categorical judgment and estimate, which is difficult to reconcile with alternative models in the face of late, memory related noise. The observed bias patterns resemble the well-known changes in subjective preferences associated with cognitive dissonance, which suggests that the brain's inference processes may be governed by a universal self-consistency constraint that avoids entertaining 'dissonant' interpretations of the evidence. © 2018, Luu et al.
Fernandes, F L; Picanço, M C; Campos, S O; Bastos, C S; Chediak, M; Guedes, R N C; Silva, R S
2011-12-01
The sample procedures currently available for decision-making regarding the control of the coffee berry borer Hypothenemus hampei (Ferrari) (Coleoptera: Curculionidae: Scolytinae) are time-consuming, expensive, and difficult to perform, compromising their adoption. In addition, the damage functions incorporated in such decision levels only consider the quantitative losses, while dismissing the qualitative losses. Traps containing ethanol, methanol, and benzaldehyde may allow cheap and easy decision-making. Our objective was to determine the economic injury level (EIL) for the adults of the coffee berry borer by using attractant-baited traps. We considered both qualitative and quantitative losses caused by the coffee borer in estimating the EILs. These EILs were determined for conventional and organic coffee under high and average plant yield. When the quantitative losses caused by H. hampei were considered alone, the EILs ranged from 7.9 to 23.7% of bored berries for high and average-yield conventional crops, respectively. For high and average-yield organic coffee the EILs varied from 24.4 to 47.6% of bored berries, respectively. When qualitative and quantitative losses caused by the pest were considered together, the EIL was 4.3% of bored berries for both conventional and organic coffee. The EILs for H. hampei associated with the coffee plants in the flowering, pinhead fruit, and ripening fruit stages were 426, 85, and 28 adults per attractive trap, respectively.
Estimating detection and density of the Andean cat in the high Andes
Reppucci, Juan; Gardner, Beth; Lucherini, Mauro
2011-01-01
The Andean cat (Leopardus jacobita) is one of the most endangered, yet least known, felids. Although the Andean cat is considered at risk of extinction, rigorous quantitative population studies are lacking. Because physical observations of the Andean cat are difficult to make in the wild, we used a camera-trapping array to photo-capture individuals. The survey was conducted in northwestern Argentina at an elevation of approximately 4,200 m during October–December 2006 and April–June 2007. In each year we deployed 22 pairs of camera traps, which were strategically placed. To estimate detection probability and density we applied models for spatial capture–recapture using a Bayesian framework. Estimated densities were 0.07 and 0.12 individual/km2 for 2006 and 2007, respectively. Mean baseline detection probability was estimated at 0.07. By comparison, densities of the Pampas cat (Leopardus colocolo), another poorly known felid that shares its habitat with the Andean cat, were estimated at 0.74–0.79 individual/km2 in the same study area for 2006 and 2007, and its detection probability was estimated at 0.02. Despite having greater detectability, the Andean cat is rarer in the study region than the Pampas cat. Properly accounting for the detection probability is important in making reliable estimates of density, a key parameter in conservation and management decisions for any species.
Mansfield, Theodore J; MacDonald Gibson, Jacqueline
2015-01-01
Recently, two quantitative tools have emerged for predicting the health impacts of projects that change population physical activity: the Health Economic Assessment Tool (HEAT) and Dynamic Modeling for Health Impact Assessment (DYNAMO-HIA). HEAT has been used to support health impact assessments of transportation infrastructure projects, but DYNAMO-HIA has not been previously employed for this purpose nor have the two tools been compared. To demonstrate the use of DYNAMO-HIA for supporting health impact assessments of transportation infrastructure projects, we employed the model in three communities (urban, suburban, and rural) in North Carolina. We also compared DYNAMO-HIA and HEAT predictions in the urban community. Using DYNAMO-HIA, we estimated benefit-cost ratios of 20.2 (95% C.I.: 8.7-30.6), 0.6 (0.3-0.9), and 4.7 (2.1-7.1) for the urban, suburban, and rural projects, respectively. For a 40-year time period, the HEAT predictions of deaths avoided by the urban infrastructure project were three times as high as DYNAMO-HIA's predictions due to HEAT's inability to account for changing population health characteristics over time. Quantitative health impact assessment coupled with economic valuation is a powerful tool for integrating health considerations into transportation decision-making. However, to avoid overestimating benefits, such quantitative HIAs should use dynamic, rather than static, approaches.
Time-to-contact estimation of accelerated stimuli is based on first-order information.
Benguigui, Nicolas; Ripoll, Hubert; Broderick, Michael P
2003-12-01
The goal of this study was to test whether 1st-order information, which does not account for acceleration, is used (a) to estimate the time to contact (TTC) of an accelerated stimulus after the occlusion of a final part of its trajectory and (b) to indirectly intercept an accelerated stimulus with a thrown projectile. Both tasks require the production of an action on the basis of predictive information acquired before the arrival of the stimulus at the target and allow the experimenter to make quantitative predictions about the participants' use (or nonuse) of 1st-order information. The results show that participants do not use information about acceleration and that they commit errors that rely quantitatively on 1st-order information even when acceleration is psychophysically detectable. In the indirect interceptive task, action is planned about 200 ms before the initiation of the movement, at which time the 1st-order TTC attains a critical value. ((c) 2003 APA, all rights reserved)
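A short numerical illustration, with arbitrary values, of how a first-order TTC estimate (distance divided by instantaneous velocity) overestimates arrival time for an accelerating stimulus:

```python
import math

def ttc_first_order(distance, velocity):
    """First-order TTC ignores acceleration: tau = d / v."""
    return distance / velocity

def ttc_true(distance, velocity, acceleration):
    """Exact arrival time for constant acceleration: solve d = v*t + a*t^2/2."""
    if abs(acceleration) < 1e-12:
        return distance / velocity
    disc = velocity ** 2 + 2 * acceleration * distance
    return (-velocity + math.sqrt(disc)) / acceleration

# Arbitrary example: stimulus 20 m away, moving at 8 m/s, accelerating at 2 m/s^2.
d, v, a = 20.0, 8.0, 2.0
print(ttc_first_order(d, v))   # 2.50 s: the first-order estimate (too late)
print(ttc_true(d, v, a))       # 2.00 s: actual arrival time
```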
The drift diffusion model as the choice rule in reinforcement learning.
Pedersen, Mads Lund; Frank, Michael J; Biele, Guido
2017-08-01
Current reinforcement-learning models often assume simplified decision processes that do not fully reflect the dynamic complexities of choice processes. Conversely, sequential-sampling models of decision making account for both choice accuracy and response time, but assume that decisions are based on static decision values. To combine these two computational models of decision making and learning, we implemented reinforcement-learning models in which the drift diffusion model describes the choice process, thereby capturing both within- and across-trial dynamics. To exemplify the utility of this approach, we quantitatively fit data from a common reinforcement-learning paradigm using hierarchical Bayesian parameter estimation, and compared model variants to determine whether they could capture the effects of stimulant medication in adult patients with attention-deficit hyperactivity disorder (ADHD). The model with the best relative fit provided a good description of the learning process, choices, and response times. A parameter recovery experiment showed that the hierarchical Bayesian modeling approach enabled accurate estimation of the model parameters. The model approach described here, using simultaneous estimation of reinforcement-learning and drift diffusion model parameters, shows promise for revealing new insights into the cognitive and neural mechanisms of learning and decision making, as well as the alteration of such processes in clinical groups.
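A minimal simulation sketch of the combined model's core idea, with the drift rate proportional to the learned value difference; the learning rate, drift scaling, and reward probabilities are assumed values, and this is not the authors' hierarchical Bayesian fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def ddm_choice(drift, threshold=1.0, noise=1.0, dt=0.001, t0=0.3):
    """Simulate one drift-diffusion decision; returns (choice, response time).
    choice 1 = upper boundary (option A), 0 = lower boundary (option B)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t + t0

alpha, scaling = 0.1, 2.0           # learning rate and drift scaling (assumed)
q = np.array([0.0, 0.0])            # Q-values for options A and B
p_reward = np.array([0.7, 0.3])     # hypothetical reward probabilities

for trial in range(200):
    drift = scaling * (q[0] - q[1])            # drift rate tracks the value difference
    choice, rt = ddm_choice(drift)
    chosen = 0 if choice == 1 else 1
    reward = float(rng.random() < p_reward[chosen])
    q[chosen] += alpha * (reward - q[chosen])  # delta-rule update

print(q)   # learned values should approach the reward probabilities
```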
Chai, Chen; Wong, Yiik Diew; Wang, Xuesong
2017-07-01
This paper proposes a simulation-based approach to estimate the safety impact of driver cognitive failures and driving errors. Fuzzy Logic, which handles linguistic terms and uncertainty, is incorporated with a Cellular Automata model to simulate the decision-making process of right-turn filtering movement at signalized intersections. Simulation experiments are conducted to estimate the relationships of cognitive failures and driving errors with safety performance. Simulation results show that different types of cognitive failures have varied relationships with driving errors and safety performance. For right-turn filtering movement, cognitive failures are more likely to result in driving errors with a denser conflicting traffic stream. Moreover, different driving errors are found to have different safety impacts. The study provides a novel approach to linguistically assess cognitions and replicate the decision-making procedures of the individual driver. Compared to crash analysis, the proposed FCA model allows quantitative estimation of particular cognitive failures, and of the impact of cognitions on driving errors and safety performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
McGarry, Bryony L; Rogers, Harriet J; Knight, Michael J; Jokivarsi, Kimmo T; Sierra, Alejandra; Gröhn, Olli Hj; Kauppinen, Risto A
2016-08-01
Quantitative T2 relaxation magnetic resonance imaging allows estimation of stroke onset time. We aimed to examine the accuracy of quantitative T1 and quantitative T2 relaxation times alone and in combination to provide estimates of stroke onset time in a rat model of permanent focal cerebral ischemia and map the spatial distribution of elevated quantitative T1 and quantitative T2 to assess tissue status. Permanent middle cerebral artery occlusion was induced in Wistar rats. Animals were scanned at 9.4T for quantitative T1, quantitative T2, and Trace of Diffusion Tensor (Dav) up to 4 h post-middle cerebral artery occlusion. Time courses of differentials of quantitative T1 and quantitative T2 in ischemic and non-ischemic contralateral brain tissue (ΔT1, ΔT2) and volumes of tissue with elevated T1 and T2 relaxation times (f1, f2) were determined. TTC staining was used to highlight permanent ischemic damage. ΔT1, ΔT2, f1, f2, and the volume of tissue with both elevated quantitative T1 and quantitative T2 (V(Overlap)) increased with time post-middle cerebral artery occlusion, allowing stroke onset time to be estimated. V(Overlap) provided the most accurate estimate with an uncertainty of ±25 min. At all time-points, regions with elevated relaxation times were smaller than areas with Dav-defined ischemia. Stroke onset time can be determined by quantitative T1 and quantitative T2 relaxation times and tissue volumes. Combining quantitative T1 and quantitative T2 provides the most accurate estimate and potentially identifies irreversibly damaged brain tissue. © 2016 World Stroke Organization.
NASA Astrophysics Data System (ADS)
Brugués, Jan; Needleman, Daniel J.
2010-02-01
Metaphase spindles are highly dynamic, nonequilibrium, steady-state structures. We study the internal fluctuations of spindles by computing spatio-temporal correlation functions of movies obtained from quantitative polarized light microscopy. These correlation functions are only physically meaningful if corrections are made for the net motion of the spindle. We describe our image registration algorithm in detail and we explore its robustness. Finally, we discuss the expression used for the estimation of the correlation function in terms of the nematic order of the microtubules which make up the spindle. Ultimately, studying the form of these correlation functions will provide a quantitative test of the validity of coarse-grained models of spindle structure inspired from liquid crystal physics.
NASA Technical Reports Server (NTRS)
Frouin, Robert
1993-01-01
Current satellite algorithms to estimate photosynthetically available radiation (PAR) at the Earth's surface are reviewed. PAR is deduced either from an insolation estimate or obtained directly from top-of-atmosphere solar radiances. The characteristics of both approaches are contrasted and typical results are presented. The reported inaccuracies, about 10 percent and 6 percent on daily and monthly time scales, respectively, are small enough for the estimates to be useful in modeling oceanic and terrestrial primary productivity. At those time scales variability due to clouds in the ratio of PAR to insolation is reduced, making it possible to deduce PAR directly from insolation climatologies (satellite or other) that are currently available or being produced. Improvements, however, are needed in conditions of broken cloudiness and over ice/snow. If not addressed properly, calibration/validation issues may prevent quantitative use of the PAR estimates in studies of climatic change. The prospects are good for an accurate, long-term climatology of PAR over the globe.
NASA Technical Reports Server (NTRS)
Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.
2010-01-01
Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, in a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to verify compliance with requirements and to highlight design or performance shortcomings for further decision-making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability and maintainability analysis, and present findings and observations based on analysis leading to the Ground Systems Preliminary Design Review milestone.
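A minimal illustration of the kind of steady-state availability roll-up described, using assumed MTBF and MTTR figures rather than program data:

```python
def inherent_availability(mtbf_hours, mttr_hours):
    """Steady-state (inherent) availability: A = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical ground subsystems in series (all values are illustrative only).
subsystems = {"propellant loading": (500.0, 4.0),
              "ground power":       (1200.0, 2.0),
              "comm/telemetry":     (2000.0, 1.0)}

system_availability = 1.0
for name, (mtbf, mttr) in subsystems.items():
    a = inherent_availability(mtbf, mttr)
    system_availability *= a            # series assumption: all subsystems must be up
    print(f"{name}: A = {a:.4f}")

print(f"system (series) availability = {system_availability:.4f}")
```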
Geiss, S; Einax, J W
2001-07-01
Detection limit, reporting limit, and limit of quantitation are analytical parameters that describe the power of analytical methods. These parameters are used internally for quality assurance and externally in competitive contexts, especially in the case of trace analysis in environmental compartments. The wide variety of possibilities for computing or obtaining these measures in the literature and in legislative rules makes any comparison difficult. Additionally, a host of terms has been used within the analytical community to describe detection and quantitation capabilities. Without trying to impose an order on this variety of terms, this paper aims to provide a practical proposal for answering the analyst's main questions concerning the above quality measures. These main questions and the related parameters are explained and demonstrated graphically. Estimation and verification of these parameters are the two steps needed to obtain realistic measures. A rule for practical verification is given in a table from which the analyst can read what to measure, what to estimate, and which criteria have to be fulfilled. Parameters verified in this manner, the detection limit, reporting limit, and limit of quantitation, are then comparable, and the analyst remains responsible for the unambiguity and reliability of these measures.
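As a generic illustration only (the paper specifies its own estimation and verification rules), the widely used calibration-based conventions LOD ≈ 3.3·s/b and LOQ ≈ 10·s/b can be sketched as:

```python
import numpy as np

def detection_limits(conc, signal):
    """Estimate detection limit (LOD) and limit of quantitation (LOQ) from a
    linear calibration, using the residual standard deviation s and the slope b
    with the common 3.3*s/b and 10*s/b conventions (illustrative only, not the
    paper's specific procedure)."""
    conc = np.asarray(conc, float)
    signal = np.asarray(signal, float)
    b, a = np.polyfit(conc, signal, 1)        # slope, intercept
    residuals = signal - (a + b * conc)
    s = residuals.std(ddof=2)                 # residual standard deviation
    return 3.3 * s / b, 10.0 * s / b

conc = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]             # hypothetical standards (µg/L)
signal = [0.02, 0.26, 0.49, 1.03, 1.98, 4.05]     # hypothetical instrument response

lod, loq = detection_limits(conc, signal)
print(f"LOD ~ {lod:.2f} µg/L, LOQ ~ {loq:.2f} µg/L")
```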
A feeling for the numbers in biology
Phillips, Rob; Milo, Ron
2009-01-01
Although the quantitative description of biological systems has been going on for centuries, recent advances in the measurement of phenomena ranging from metabolism to gene expression to signal transduction have resulted in a new emphasis on biological numeracy. This article describes the confluence of two different approaches to biological numbers. First, an impressive array of quantitative measurements make it possible to develop intuition about biological numbers ranging from how many gigatons of atmospheric carbon are fixed every year in the process of photosynthesis to the number of membrane transporters needed to provide sugars to rapidly dividing Escherichia coli cells. As a result of the vast array of such quantitative data, the BioNumbers web site has recently been developed as a repository for biology by the numbers. Second, a complementary and powerful tradition of numerical estimates familiar from the physical sciences and canonized in the so-called “Fermi problems” calls for efforts to estimate key biological quantities on the basis of a few foundational facts and simple ideas from physics and chemistry. In this article, we describe these two approaches and illustrate their synergism in several particularly appealing case studies. These case studies reveal the impact that an emphasis on numbers can have on important biological questions. PMID:20018695
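One of the Fermi-style case studies mentioned, the number of membrane transporters a rapidly dividing E. coli cell needs to import sugar, can be roughed out with assumed round numbers:

```python
# Fermi-style estimate of glucose transporters per rapidly dividing E. coli.
# All inputs are assumed round numbers for illustration, not values from the article.

AVOGADRO = 6.0e23
carbon_per_cell_g = 0.15e-12          # ~0.15 pg carbon per cell (assumed)
carbons_per_glucose = 6
glucose_carbon_molar_mass = 12.0 * carbons_per_glucose   # g of carbon per mol glucose
division_time_s = 2000.0              # roughly a 30-40 min doubling time
transport_rate = 300.0                # molecules per transporter per second (assumed)

glucose_needed = carbon_per_cell_g / glucose_carbon_molar_mass * AVOGADRO  # ~1e9 molecules
flux_needed = glucose_needed / division_time_s                             # molecules per second
transporters = flux_needed / transport_rate

print(f"glucose molecules per division: {glucose_needed:.1e}")
print(f"transporters needed: {transporters:.0f}")   # on the order of a few thousand
```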
Polak, Louisa; Green, Judith
2015-04-01
A large literature informs guidance for GPs about communicating quantitative risk information so as to facilitate shared decision making. However, relatively little has been written about how patients utilise such information in practice. To understand the role of quantitative risk information in patients' accounts of decisions about taking statins. This was a qualitative study, with participants recruited and interviewed in community settings. Semi-structured interviews were conducted with 34 participants aged >50 years, all of whom had been offered statins. Data were analysed thematically, using elements of the constant comparative method. Interviewees drew frequently on numerical test results to explain their decisions about preventive medication. In contrast, they seldom mentioned quantitative risk information, and never offered it as a rationale for action. Test results were spoken of as objects of concern despite an often-explicit absence of understanding, so lack of understanding seems unlikely to explain the non-use of risk estimates. Preventive medication was seen as 'necessary' either to treat test results, or because of personalised, unequivocal advice from a doctor. This study's findings call into question the assumption that people will heed and use numerical risk information once they understand it; these data highlight the need to consider the ways in which different kinds of knowledge are used in practice in everyday contexts. There was little evidence from this study that understanding probabilistic risk information was a necessary or valued condition for making decisions about statin use. © British Journal of General Practice 2015.
Shimansky, Y P
2011-05-01
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
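A small numerical demonstration of the central claim, under an assumed Gaussian belief and a piecewise-linear cost that penalizes underestimation more than overestimation, showing the cost-minimizing estimate shifting away from the maximum-likelihood value:

```python
import numpy as np

rng = np.random.default_rng(2)
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)   # assumed belief about a parameter

def expected_cost(estimate, samples, c_under=4.0, c_over=1.0):
    """Asymmetric piecewise-linear cost: underestimation errors cost c_under per
    unit, overestimation errors cost c_over per unit (assumed values)."""
    err = samples - estimate
    return np.mean(np.where(err > 0, c_under * err, c_over * (-err)))

grid = np.linspace(-2, 2, 401)
costs = [expected_cost(g, samples) for g in grid]
best = grid[int(np.argmin(costs))]

print(f"maximum-likelihood estimate: 0.00, cost-minimizing estimate: {best:.2f}")
# The optimum shifts toward positive values because underestimation is costlier.
```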
Ice nucleation by particles immersed in supercooled cloud droplets.
Murray, B J; O'Sullivan, D; Atkinson, J D; Webb, M E
2012-10-07
The formation of ice particles in the Earth's atmosphere strongly affects the properties of clouds and their impact on climate. Despite the importance of ice formation in determining the properties of clouds, the Intergovernmental Panel on Climate Change (IPCC, 2007) was unable to assess the impact of atmospheric ice formation in their most recent report because our basic knowledge is insufficient. Part of the problem is the paucity of quantitative information on the ability of various atmospheric aerosol species to initiate ice formation. Here we review and assess the existing quantitative knowledge of ice nucleation by particles immersed within supercooled water droplets. We introduce aerosol species which have been identified in the past as potentially important ice nuclei and address their ice-nucleating ability when immersed in a supercooled droplet. We focus on mineral dusts, biological species (pollen, bacteria, fungal spores and plankton), carbonaceous combustion products and volcanic ash. In order to make a quantitative comparison we first introduce several ways of describing ice nucleation and then summarise the existing information according to the time-independent (singular) approximation. Using this approximation in combination with typical atmospheric loadings, we estimate the importance of ice nucleation by different aerosol types. According to these estimates we find that ice nucleation below about -15 °C is dominated by soot and mineral dusts. Above this temperature the only materials known to nucleate ice are biological, with quantitative data for other materials absent from the literature. We conclude with a summary of the challenges our community faces.
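In the time-independent (singular) description referred to above, the frozen fraction of a droplet population is commonly written as f_ice(T) = 1 - exp(-n_s(T)·A), with n_s the ice-nucleation-active site density and A the nucleant surface area per droplet; a sketch with an assumed, illustrative parameterisation:

```python
import numpy as np

def frozen_fraction(T_celsius, surface_area_cm2, a=-0.35, b=-2.0):
    """Singular (time-independent) approximation: fraction of droplets frozen at
    temperature T given nucleant surface area A per droplet, with an assumed
    exponential site-density parameterisation ns(T) = 10**(a*T + b) per cm^2
    (coefficients are illustrative, not a published fit)."""
    ns = 10.0 ** (a * T_celsius + b)     # active site density, cm^-2
    return 1.0 - np.exp(-ns * surface_area_cm2)

temperatures = np.arange(-30, -5, 5)     # supercooled temperatures (deg C)
A = 1e-6                                 # assumed dust surface area per droplet, cm^2

for t in temperatures:
    print(f"T = {t:>4} C, frozen fraction = {frozen_fraction(t, A):.3f}")
```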
White, Paul A; Johnson, George E
2016-05-01
Applied genetic toxicology is undergoing a transition from qualitative hazard identification to quantitative dose-response analysis and risk assessment. To facilitate this change, the Health and Environmental Sciences Institute (HESI) Genetic Toxicology Technical Committee (GTTC) sponsored a workshop held in Lancaster, UK on July 10-11, 2014. The event included invited speakers from several institutions and the content was divided into three themes: (1) Point-of-departure Metrics for Quantitative Dose-Response Analysis in Genetic Toxicology; (2) Measurement and Estimation of Exposures for Better Extrapolation to Humans; and (3) The Use of Quantitative Approaches in Genetic Toxicology for human health risk assessment (HHRA). A host of pertinent issues were discussed relating to the use of in vitro and in vivo dose-response data, the development of methods for in vitro to in vivo extrapolation, and approaches to use in vivo dose-response data to determine human exposure limits for regulatory evaluations and decision-making. This Special Issue, which was inspired by the workshop, contains a series of papers that collectively address topics related to the aforementioned themes. The Issue includes contributions that collectively evaluate, describe and discuss in silico, in vitro, in vivo and statistical approaches that are facilitating the shift from qualitative hazard evaluation to quantitative risk assessment. The use and application of the benchmark dose approach was a central theme in many of the workshop presentations and discussions, and the Special Issue includes several contributions that outline novel applications for the analysis and interpretation of genetic toxicity data. Although the contents of the Special Issue constitute an important step towards the adoption of quantitative methods for regulatory assessment of genetic toxicity, formal acceptance of quantitative methods for HHRA and regulatory decision-making will require consensus regarding the relationships between genetic damage and disease, and the concomitant ability to use genetic toxicity results per se. © Her Majesty the Queen in Right of Canada 2016. Reproduced with the permission of the Minister of Health.
Koloušková, Pavla; Stone, James D.
2017-01-01
Accurate gene expression measurements are essential in studies of both crop and wild plants. Reverse transcription quantitative real-time PCR (RT-qPCR) has become a preferred tool for gene expression estimation. A selection of suitable reference genes for the normalization of transcript levels is an essential prerequisite of accurate RT-qPCR results. We evaluated the expression stability of eight candidate reference genes across roots, leaves, flower buds and pollen of Silene vulgaris (bladder campion), a model plant for the study of gynodioecy. As random priming of cDNA is recommended for the study of organellar transcripts and poly(A) selection is indicated for nuclear transcripts, we estimated gene expression with both random-primed and oligo(dT)-primed cDNA. Accordingly, we determined reference genes that perform well with oligo(dT)- and random-primed cDNA, making it possible to estimate levels of nucleus-derived transcripts in the same cDNA samples as used for organellar transcripts, a key benefit in studies of cyto-nuclear interactions. Gene expression variance was estimated by RefFinder, which integrates four different analytical tools. The SvACT and SvGAPDH genes were the most stable candidates across various organs of S. vulgaris, regardless of whether pollen was included or not. PMID:28817728
Quantitative measurement of protein digestion in simulated gastric fluid.
Herman, Rod A; Korjagin, Valerie A; Schafer, Barry W
2005-04-01
The digestibility of novel proteins in simulated gastric fluid is considered to be an indicator of reduced risk of allergenic potential in food, and estimates of digestibility for transgenic proteins expressed in crops are required for making a human-health risk assessment by regulatory authorities. The estimation of first-order rate constants for digestion under conditions of low substrate concentration was explored for two protein substrates (azocoll and DQ-ovalbumin). Data conformed to first-order kinetics, and half-lives were relatively insensitive to significant variations in both substrate and pepsin concentration when high purity pepsin preparations were used. Estimation of digestion efficiency using densitometric measurements of relative protein concentration based on SDS-PAGE corroborated digestion estimates based on measurements of dye or fluorescence release from the labeled substrates. The suitability of first-order rate constants for estimating the efficiency of the pepsin digestion of novel proteins is discussed. Results further support a kinetic approach as appropriate for comparing the digestibility of proteins in simulated gastric fluid.
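A minimal sketch of the first-order analysis described, fitting a rate constant to a hypothetical digestion time course and converting it to a half-life via t1/2 = ln 2 / k:

```python
import numpy as np

# Hypothetical fraction of substrate remaining during a pepsin digestion
# time course (values are illustrative, not data from the study).
t_min = np.array([0, 1, 2, 5, 10, 20], dtype=float)
remaining = np.array([1.00, 0.78, 0.61, 0.29, 0.085, 0.007])

# First-order kinetics: remaining = exp(-k*t), so ln(remaining) is linear in t.
k = -np.polyfit(t_min, np.log(remaining), 1)[0]
half_life = np.log(2) / k

print(f"k ~ {k:.2f} 1/min, half-life ~ {half_life:.2f} min")
```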
HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.
Wiecki, Thomas V; Sofer, Imri; Frank, Michael J
2013-01-01
The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods like the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/
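A minimal usage sketch following the documented HDDM interface; the file name, column names, and the condition in `depends_on` are hypothetical, and details may differ across package versions:

```python
import hddm

# Load response-time data; HDDM expects columns such as 'rt' (seconds) and
# 'response' (0/1), plus any condition columns ('stim' here is hypothetical).
data = hddm.load_csv('experiment.csv')

# Let drift rate v vary by stimulus condition; other parameters are shared.
model = hddm.HDDM(data, depends_on={'v': 'stim'})

# Draw MCMC samples from the joint posterior over group and subject parameters.
model.sample(2000, burn=200)

model.print_stats()   # posterior summaries for threshold a, drift v, non-decision time t, etc.
```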
Arab, Lenore; Khan, Faraz; Lam, Helen
2013-01-01
A systematic literature review of human studies relating caffeine or caffeine-rich beverages to cognitive decline reveals only 6 studies that have collected and analyzed cognition data in a prospective fashion that enables study of decline across the spectrum of cognition. These 6 studies, in general, evaluate cognitive function using the Mini Mental State Exam and base their beverage data on FFQs. Studies included in our review differed in their source populations, duration of study, and most dramatically in how their analyses were done, disallowing direct quantitative comparisons of their effect estimates. Only one of the studies reported on all 3 exposures, coffee, tea, and caffeine, making comparisons of findings across studies more difficult. However, in general, it can be stated that for all studies of tea and most studies of coffee and caffeine, the estimates of cognitive decline were lower among consumers, although there is a lack of a distinct dose response. Only a few measures showed a quantitative significance and, interestingly, studies indicate a stronger effect among women than men. PMID:23319129
Williams, C R; Johnson, P H; Ball, T S; Ritchie, S A
2013-09-01
New mosquito control strategies centred on modifying populations require knowledge of existing population densities at release sites and an understanding of breeding site ecology. Using a quantitative pupal survey method, we investigated production of the dengue vector Aedes aegypti (L.) (Stegomyia aegypti) (Diptera: Culicidae) in Cairns, Queensland, Australia, and found that garden accoutrements represented the most common container type. Deliberately placed 'sentinel' containers were set at seven houses and sampled for pupae over 10 weeks during the wet season. Pupal production was approximately constant; tyres and buckets represented the most productive container types. Sentinel tyres produced the largest female mosquitoes, but were relatively rare in the field survey. We then used field-collected data to make estimates of per premises population density using three different approaches. Estimates of female Ae. aegypti abundance per premises made using the container-inhabiting mosquito simulation (CIMSiM) model [95% confidence interval (CI) 18.5-29.1 females] concorded reasonably well with estimates obtained using a standing crop calculation based on pupal collections (95% CI 8.8-22.5) and using BG-Sentinel traps and a sampling rate correction factor (95% CI 6.2-35.2). By first describing local Ae. aegypti productivity, we were able to compare three separate population density estimates which provided similar results. We anticipate that this will provide researchers and health officials with several tools with which to make estimates of population densities. © 2012 The Royal Entomological Society.
Response time distributions in rapid chess: a large-scale decision making experiment.
Sigman, Mariano; Etchemendy, Pablo; Slezak, Diego Fernández; Cecchi, Guillermo A
2010-01-01
Rapid chess provides an unparalleled laboratory to understand decision making in a natural environment. In a chess game, players choose consecutively around 40 moves in a finite time budget. The goodness of each choice can be determined quantitatively since current chess algorithms estimate precisely the value of a position. Web-based chess produces vast amounts of data, millions of decisions per day, incommensurable with traditional psychological experiments. We generated a database of response times (RTs) and position value in rapid chess games. We measured robust emergent statistical observables: (1) RT distributions are long-tailed and show qualitatively distinct forms at different stages of the game, (2) RT of successive moves are highly correlated both for intra- and inter-player moves. These findings have theoretical implications since they deny two basic assumptions of sequential decision making algorithms: RTs are not stationary and cannot be generated by a state-function. Our results also have practical implications. First, we characterized the capacity of blunders and score fluctuations to predict a player's strength, which is still an open problem in chess software. Second, we show that the winning likelihood can be reliably estimated from a weighted combination of remaining times and position evaluation.
Airborne electromagnetic mapping of the base of aquifer in areas of western Nebraska
Abraham, Jared D.; Cannia, James C.; Bedrosian, Paul A.; Johnson, Michaela R.; Ball, Lyndsay B.; Sibray, Steven S.
2012-01-01
Airborne geophysical surveys of selected areas of the North and South Platte River valleys of Nebraska, including Lodgepole Creek valley, collected data to map aquifers and bedrock topography and thus improve the understanding of groundwater–surface-water relationships to be used in water-management decisions. Frequency-domain helicopter electromagnetic surveys, using a unique survey flight-line design, collected resistivity data that can be related to lithologic information for refinement of groundwater model inputs. To make the geophysical data useful to multidimensional groundwater models, numerical inversion converted measured data into a depth-dependent subsurface resistivity model. The inverted resistivity model, along with sensitivity analyses and test-hole information, is used to identify hydrogeologic features such as bedrock highs and paleochannels, to improve estimates of groundwater storage. The two- and three-dimensional interpretations provide the groundwater modeler with a high-resolution hydrogeologic framework and a quantitative estimate of framework uncertainty. The new hydrogeologic frameworks improve understanding of the flow-path orientation by refining the location of paleochannels and associated base of aquifer highs. These interpretations provide resource managers high-resolution hydrogeologic frameworks and quantitative estimates of framework uncertainty. The improved base of aquifer configuration represents the hydrogeology at a level of detail not achievable with previously available data.
Estimation of hydrolysis rate constants for carbamates ...
Cheminformatics based tools, such as the Chemical Transformation Simulator under development in EPA’s Office of Research and Development, are being increasingly used to evaluate chemicals for their potential to degrade in the environment or be transformed through metabolism. Hydrolysis represents a major environmental degradation pathway; unfortunately, only a small fraction of hydrolysis rates for about 85,000 chemicals on the Toxic Substances Control Act (TSCA) inventory are in public domain, making it critical to develop in silico approaches to estimate hydrolysis rate constants. In this presentation, we compare three complementary approaches to estimate hydrolysis rates for carbamates, an important chemical class widely used in agriculture as pesticides, herbicides and fungicides. Fragment-based Quantitative Structure Activity Relationships (QSARs) using Hammett-Taft sigma constants are widely published and implemented for relatively simple functional groups such as carboxylic acid esters, phthalate esters, and organophosphate esters, and we extend these to carbamates. We also develop a pKa based model and a quantitative structure property relationship (QSPR) model, and evaluate them against measured rate constants using R square and root mean square (RMS) error. Our work shows that for our relatively small sample size of carbamates, a Hammett-Taft based fragment model performs best, followed by a pKa and a QSPR model. This presentation compares three comp
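A sketch of the fragment/Hammett-Taft style correlation described above, log k = log k0 + ρ·Σσ, fitted by least squares; the substituent-constant sums and rate constants below are placeholders, not measured carbamate data:

```python
import numpy as np

# Hammett-Taft style correlation for hydrolysis rate constants:
#   log10(k) = log10(k0) + rho * sum(sigma)
# where sum(sigma) aggregates substituent constants for each compound.

sigma_sum = np.array([-0.20, 0.00, 0.23, 0.45, 0.78])   # hypothetical sigma sums
log_k_obs = np.array([-5.1, -4.6, -4.1, -3.6, -2.9])    # hypothetical log10 rate constants

rho, log_k0 = np.polyfit(sigma_sum, log_k_obs, 1)
pred = log_k0 + rho * sigma_sum
rmse = np.sqrt(np.mean((pred - log_k_obs) ** 2))

print(f"rho = {rho:.2f}, log k0 = {log_k0:.2f}, RMS error = {rmse:.2f} log units")

# Predict a new compound from its (hypothetical) substituent constants.
print("predicted log k:", log_k0 + rho * 0.35)
```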
Zavřel, Tomáš; Knoop, Henning; Steuer, Ralf; Jones, Patrik R; Červený, Jan; Trtílek, Martin
2016-02-01
The prediction of the world's future energy consumption and global climate change makes it desirable to identify new technologies to replace or augment fossil fuels by environmentally sustainable alternatives. One appealing sustainable energy concept is harvesting solar energy via photosynthesis coupled to conversion of CO2 into chemical feedstock and fuel. In this work, the production of ethylene, the most widely used petrochemical produced exclusively from fossil fuels, in the model cyanobacterium Synechocystis sp. PCC 6803 is studied. A novel instrumentation setup for quantitative monitoring of ethylene production using a combination of flat-panel photobioreactor coupled to a membrane-inlet mass spectrometer is introduced. Carbon partitioning is estimated using a quantitative model of cyanobacterial metabolism. The results show that ethylene is produced under a wide range of light intensities with an optimum at modest irradiances. The results allow production conditions to be optimized in a highly controlled setup. Copyright © 2015 Elsevier Ltd. All rights reserved.
Short Course Introduction to Quantitative Mineral Resource Assessments
Singer, Donald A.
2007-01-01
This is an abbreviated text supplementing the content of three sets of slides used in a short course that has been presented by the author at several workshops. The slides should be viewed in the order of (1) Introduction and models, (2) Delineation and estimation, and (3) Combining estimates and summary. References cited in the slides are listed at the end of this text. The purpose of the three-part form of mineral resource assessments discussed in the accompanying slides is to make unbiased quantitative assessments in a format needed in decision-support systems so that consequences of alternative courses of action can be examined. The three-part form of mineral resource assessments was developed to assist policy makers evaluate the consequences of alternative courses of action with respect to land use and mineral-resource development. The audience for three-part assessments is a governmental or industrial policy maker, a manager of exploration, a planner of regional development, or similar decision-maker. Some of the tools and models presented here will be useful for selection of exploration sites, but that is a side benefit, not the goal. To provide unbiased information, we recommend the three-part form of mineral resource assessments where general locations of undiscovered deposits are delineated from a deposit type's geologic setting, frequency distributions of tonnages and grades of well-explored deposits serve as models of grades and tonnages of undiscovered deposits, and number of undiscovered deposits are estimated probabilistically by type. The internally consistent descriptive, grade and tonnage, deposit density, and economic models used in the design of the three-part form of assessments reduce the chances of biased estimates of the undiscovered resources. What and why quantitative resource assessments: The kind of assessment recommended here is founded in decision analysis in order to provide a framework for making decisions concerning mineral resources under conditions of uncertainty. What this means is that we start with the question of what kinds of questions is the decision maker trying to resolve and what forms of information would aid in resolving these questions. Some applications of mineral resource assessments: To plan and guide exploration programs, to assist in land use planning, to plan the location of infrastructure, to estimate mineral endowment, and to identify deposits that present special environmental challenges. Why not just rank prospects / areas? Need for financial analysis, need for comparison with other land uses, need for comparison with distant tracts of land, need to know how uncertain the estimates are, need for consideration of economic and environmental consequences of possible development. Our goal is to provide unbiased information useful to decision-makers.
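A minimal Monte Carlo sketch of the combination step in a three-part assessment: draw a number of undiscovered deposits, then draw tonnage and grade for each from lognormal models; every probability and distribution parameter below is a placeholder, not a published assessment:

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials = 50_000

# Probabilistic estimate of the number of undiscovered deposits in a tract
# (probabilities and values are placeholders).
n_deposits = rng.choice([0, 1, 2, 5], size=n_trials, p=[0.3, 0.4, 0.2, 0.1])

contained_metal = np.zeros(n_trials)
for i, n in enumerate(n_deposits):
    if n == 0:
        continue
    tonnage = rng.lognormal(mean=np.log(2e6), sigma=1.0, size=n)   # tonnes of ore
    grade = rng.lognormal(mean=np.log(0.8), sigma=0.5, size=n)     # percent metal
    contained_metal[i] = np.sum(tonnage * grade / 100.0)           # tonnes of metal

for q in (0.1, 0.5, 0.9):
    print(f"{int(q * 100)}th percentile contained metal: {np.quantile(contained_metal, q):,.0f} t")
print(f"mean: {contained_metal.mean():,.0f} t")
```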
NASA Astrophysics Data System (ADS)
Drury, Luke O.'C.; Strong, Andrew W.
2017-01-01
We make quantitative estimates of the power supplied to the Galactic cosmic ray population by second-order Fermi acceleration in the interstellar medium, or as it is usually termed in cosmic ray propagation studies, diffusive reacceleration. Using recent results on the local interstellar spectrum, following Voyager 1's crossing of the heliopause, we show that for parameter values, in particular the Alfvén speed, typically used in propagation codes such as GALPROP to fit the B/C ratio, the power contributed by diffusive reacceleration is significant and can be of order 50% of the total Galactic cosmic ray power. The implications for the damping of interstellar turbulence are briefly considered.
Wignall, Jessica A; Muratov, Eugene; Sedykh, Alexander; Guyton, Kathryn Z; Tropsha, Alexander; Rusyn, Ivan; Chiu, Weihsueh A
2018-05-01
Human health assessments synthesize human, animal, and mechanistic data to produce toxicity values that are key inputs to risk-based decision making. Traditional assessments are data-, time-, and resource-intensive, and they cannot be developed for most environmental chemicals owing to a lack of appropriate data. As recommended by the National Research Council, we propose a solution for predicting toxicity values for data-poor chemicals through development of quantitative structure-activity relationship (QSAR) models. We used a comprehensive database of chemicals with existing regulatory toxicity values from U.S. federal and state agencies to develop quantitative QSAR models. We compared QSAR-based model predictions to those based on high-throughput screening (HTS) assays. QSAR models for noncancer threshold-based values and cancer slope factors had cross-validation-based Q² of 0.25-0.45, mean model errors of 0.70-1.11 log10 units, and applicability domains covering >80% of environmental chemicals. Toxicity values predicted from QSAR models developed in this study were more accurate and precise than those based on HTS assays or mean-based predictions. A publicly accessible web interface to make predictions for any chemical of interest is available at http://toxvalue.org. An in silico tool that can predict toxicity values with an uncertainty of an order of magnitude or less can be used to quickly and quantitatively assess risks of environmental chemicals when traditional toxicity data or human health assessments are unavailable. This tool can fill a critical gap in the risk assessment and management of data-poor chemicals. https://doi.org/10.1289/EHP2998.
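A generic sketch of the cross-validated Q² metric reported above, using scikit-learn on placeholder descriptors and toxicity values (the study's models used curated descriptors and a different modeling pipeline):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 20))                                   # placeholder chemical descriptors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.8, size=300)    # placeholder toxicity values (log10)

model = RandomForestRegressor(n_estimators=200, random_state=0)
y_cv = cross_val_predict(model, X, y, cv=5)                      # out-of-fold predictions

q2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
mean_abs_err = np.mean(np.abs(y - y_cv))
print(f"cross-validated Q2 = {q2:.2f}, mean error = {mean_abs_err:.2f} log10 units")
```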
Krajbich, Ian; Rangel, Antonio
2011-08-16
How do we make decisions when confronted with several alternatives (e.g., on a supermarket shelf)? Previous work has shown that accumulator models, such as the drift-diffusion model, can provide accurate descriptions of the psychometric data for binary value-based choices, and that the choice process is guided by visual attention. However, the computational processes used to make choices in more complicated situations involving three or more options are unknown. We propose a model of trinary value-based choice that generalizes what is known about binary choice, and test it using an eye-tracking experiment. We find that the model provides a quantitatively accurate description of the relationship between choice, reaction time, and visual fixation data using the same parameters that were estimated in previous work on binary choice. Our findings suggest that the brain uses similar computational processes to make binary and trinary choices.
A Benefit-Risk Analysis Approach to Capture Regulatory Decision-Making: Multiple Myeloma.
Raju, G K; Gurumurthi, Karthik; Domike, Reuben; Kazandjian, Dickran; Landgren, Ola; Blumenthal, Gideon M; Farrell, Ann; Pazdur, Richard; Woodcock, Janet
2018-01-01
Drug regulators around the world make decisions about drug approvability based on qualitative benefit-risk analysis. In this work, a quantitative benefit-risk analysis approach captures regulatory decision-making about new drugs to treat multiple myeloma (MM). MM assessments have been based on endpoints such as time to progression (TTP), progression-free survival (PFS), and objective response rate (ORR), which differ from a benefit-risk analysis based on overall survival (OS). Twenty-three FDA decisions on MM drugs submitted to FDA between 2003 and 2016 were identified and analyzed. The benefits and risks were quantified relative to comparators (typically the control arm of the clinical trial) to estimate whether the median benefit-risk was positive or negative. A sensitivity analysis was demonstrated using ixazomib to explore the magnitude of uncertainty. FDA approval decision outcomes were consistent and logical using this benefit-risk framework. © 2017 American Society for Clinical Pharmacology and Therapeutics.
NASA Technical Reports Server (NTRS)
Gernand, Jeffrey L.; Gillespie, Amanda M.; Monaghan, Mark W.; Cummings, Nicholas H.
2010-01-01
Success of the Constellation Program's lunar architecture requires successfully launching two vehicles, Ares I/Orion and Ares V/Altair, within a very limited time period. The reliability and maintainability of flight vehicles and ground systems must deliver a high probability of successfully launching the second vehicle in order to avoid wasting the on-orbit asset launched by the first vehicle. The Ground Operations Project determined which ground subsystems had the potential to affect the probability of the second launch and allocated quantitative availability requirements to these subsystems. The Ground Operations Project also developed a methodology to estimate subsystem reliability, availability, and maintainability to ensure that ground subsystems complied with allocated launch availability and maintainability requirements. The verification analysis developed quantitative estimates of subsystem availability based on design documentation, testing results, and other information. Where appropriate, actual performance history was used to calculate failure rates for legacy subsystems or comparative components that will support Constellation. The results of the verification analysis will be used to assess compliance with requirements and to highlight design or performance shortcomings for further decision making. This case study will discuss the subsystem requirements allocation process, describe the ground systems methodology for completing quantitative reliability, availability, and maintainability analysis, and present findings and observations based on analysis leading to the Ground Operations Project Preliminary Design Review milestone.
Using multi-species occupancy models in structured decision making on managed lands
Sauer, John R.; Blank, Peter J.; Zipkin, Elise F.; Fallon, Jane E.; Fallon, Frederick W.
2013-01-01
Land managers must balance the needs of a variety of species when manipulating habitats. Structured decision making provides a systematic means of defining choices and choosing among alternative management options; implementation of a structured decision requires quantitative approaches to predicting consequences of management on the relevant species. Multi-species occupancy models provide a convenient framework for making structured decisions when the management objective is focused on a collection of species. These models use replicate survey data that are often collected on managed lands. Occupancy can be modeled for each species as a function of habitat and other environmental features, and Bayesian methods allow for estimation and prediction of collective responses of groups of species to alternative scenarios of habitat management. We provide an example of this approach using data from breeding bird surveys conducted in 2008 at the Patuxent Research Refuge in Laurel, Maryland, evaluating the effects of eliminating meadow and wetland habitats on scrub-successional and woodland-breeding bird species using summed total occupancy of species as an objective function. Removal of meadows and wetlands decreased value of an objective function based on scrub-successional species by 23.3% (95% CI: 20.3–26.5), but caused only a 2% (0.5, 3.5) increase in value of an objective function based on woodland species, documenting differential effects of elimination of meadows and wetlands on these groups of breeding birds. This approach provides a useful quantitative tool for managers interested in structured decision making.
McElvaine, M D; McDowell, R M; Fite, R W; Miller, L
1993-12-01
The United States Department of Agriculture, Animal and Plant Health Inspection Service (USDA-APHIS) has been exploring methods of quantitative risk assessment to support decision-making, provide risk management options and identify research needs. With current changes in world trade, regulatory decisions must have a scientific basis which is transparent, consistent, documentable and defensible. These quantitative risk assessment methods are described in an accompanying paper in this issue. In the present article, the authors provide an illustration by presenting an application of these methods. Prior to proposing changes in regulations, USDA officials requested an assessment of the risk of introduction of foreign animal disease to the United States of America through garbage from Alaskan cruise ships. The risk assessment team used a combination of quantitative and qualitative methods to evaluate this question. Quantitative risk assessment methods were used to estimate the amount of materials of foreign origin being sent to Alaskan landfills. This application of quantitative risk assessment illustrates the flexibility of the methods in addressing specific questions. By applying these methods, specific areas were identified where more scientific information and research were needed. Even with limited information, the risk assessment provided APHIS management with a scientific basis for a regulatory decision.
Mechanistic and quantitative insight into cell surface targeted molecular imaging agent design.
Zhang, Liang; Bhatnagar, Sumit; Deschenes, Emily; Thurber, Greg M
2016-05-05
Molecular imaging agent design involves simultaneously optimizing multiple probe properties. While several desired characteristics are straightforward, including high affinity and low non-specific background signal, in practice there are quantitative trade-offs between these properties. These include plasma clearance, where fast clearance lowers background signal but can reduce target uptake, and binding, where high affinity compounds sometimes suffer from lower stability or increased non-specific interactions. Further complicating probe development, many of the optimal parameters vary depending on both target tissue and imaging agent properties, making empirical approaches or previous experience difficult to translate. Here, we focus on low molecular weight compounds targeting extracellular receptors, which have some of the highest contrast values for imaging agents. We use a mechanistic approach to provide a quantitative framework for weighing trade-offs between molecules. Our results show that specific target uptake is well-described by quantitative simulations for a variety of targeting agents, whereas non-specific background signal is more difficult to predict. Two in vitro experimental methods for estimating background signal in vivo are compared - non-specific cellular uptake and plasma protein binding. Together, these data provide a quantitative method to guide probe design and focus animal work for more cost-effective and time-efficient development of molecular imaging agents.
Measurement Error and Environmental Epidemiology: A Policy Perspective
Edwards, Jessie K.; Keil, Alexander P.
2017-01-01
Purpose of review: Measurement error threatens public health by producing bias in estimates of the population impact of environmental exposures. Quantitative methods to account for measurement bias can improve public health decision making. Recent findings: We summarize traditional and emerging methods to improve inference under a standard perspective, in which the investigator estimates an exposure response function, and a policy perspective, in which the investigator directly estimates population impact of a proposed intervention. Summary: Under a policy perspective, the analysis must be sensitive to errors in measurement of factors that modify the effect of exposure on outcome, must consider whether policies operate on the true or measured exposures, and may increasingly need to account for potentially dependent measurement error of two or more exposures affected by the same policy or intervention. Incorporating approaches to account for measurement error into such a policy perspective will increase the impact of environmental epidemiology. PMID:28138941
Hattis, Dale; Goble, Robert; Chu, Margaret
2005-01-01
In an earlier report we developed a quantitative likelihood-based analysis of the differences in sensitivity of rodents to mutagenic carcinogens across three life stages (fetal, birth to weaning, and weaning to 60 days) relative to exposures in adult life. Here we draw implications for assessing human risks for full lifetime exposures, taking into account three types of uncertainties in making projections from the rodent data: uncertainty in the central estimates of the life-stage–specific sensitivity factors estimated earlier, uncertainty from chemical-to-chemical differences in life-stage–specific sensitivities for carcinogenesis, and uncertainty in the mapping of rodent life stages to human ages/exposure periods. Among the uncertainties analyzed, the mapping of rodent life stages to human ages/exposure periods is most important quantitatively (a range of several-fold in estimates of the duration of the human equivalent of the highest sensitivity “birth to weaning” period in rodents). The combined effects of these uncertainties are estimated with Monte Carlo analyses. Overall, the estimated population arithmetic mean risk from lifetime exposures at a constant milligrams per kilogram body weight level to a generic mutagenic carcinogen is about 2.8-fold larger than expected from adult-only exposure with 5–95% confidence limits of 1.5- to 6-fold. The mean estimates for the 0- to 2-year and 2- to 15-year periods are about 35–55% larger than the 10- and 3-fold sensitivity factor adjustments recently proposed by the U.S. Environmental Protection Agency. The present results are based on data for only nine chemicals, including five mutagens. Risk inferences will be altered as data become available for other chemicals. PMID:15811844
Zaitlen, Noah; Kraft, Peter; Patterson, Nick; Pasaniuc, Bogdan; Bhatia, Gaurav; Pollack, Samuela; Price, Alkes L.
2013-01-01
Important knowledge about the determinants of complex human phenotypes can be obtained from the estimation of heritability, the fraction of phenotypic variation in a population that is determined by genetic factors. Here, we make use of extensive phenotype data in Iceland, long-range phased genotypes, and a population-wide genealogical database to examine the heritability of 11 quantitative and 12 dichotomous phenotypes in a sample of 38,167 individuals. Most previous estimates of heritability are derived from family-based approaches such as twin studies, which may be biased upwards by epistatic interactions or shared environment. Our estimates of heritability, based on both closely and distantly related pairs of individuals, are significantly lower than those from previous studies. We examine phenotypic correlations across a range of relationships, from siblings to first cousins, and find that the excess phenotypic correlation in these related individuals is predominantly due to shared environment as opposed to dominance or epistasis. We also develop a new method to jointly estimate narrow-sense heritability and the heritability explained by genotyped SNPs. Unlike existing methods, this approach permits the use of information from both closely and distantly related pairs of individuals, thereby reducing the variance of estimates of heritability explained by genotyped SNPs while preventing upward bias. Our results show that common SNPs explain a larger proportion of the heritability than previously thought, with SNPs present on Illumina 300K genotyping arrays explaining more than half of the heritability for the 23 phenotypes examined in this study. Much of the remaining heritability is likely to be due to rare alleles that are not captured by standard genotyping arrays. PMID:23737753
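A minimal illustration of the underlying idea — regressing phenotypic similarity on genetic relatedness — is sketched below. This is a simplified Haseman–Elston-style regression, not the joint estimator developed in the paper, and the function and variable names are hypothetical.

```python
import numpy as np

def he_regression_h2(grm, phenotype):
    """Haseman-Elston-style sketch: regress pairwise products of standardized
    phenotypes on pairwise genetic relatedness; the slope approximates
    narrow-sense heritability under simplifying assumptions."""
    z = (phenotype - phenotype.mean()) / phenotype.std()
    iu = np.triu_indices_from(grm, k=1)          # unique pairs only
    x, y = grm[iu], np.outer(z, z)[iu]
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.dot(xc, yc) / np.dot(xc, xc))
```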
Assessment of Thematic Mapper Band-to-band Registration by the Block Correlation Method
NASA Technical Reports Server (NTRS)
Card, D. H.; Wrigley, R. C.; Mertz, F. C.; Hall, J. R.
1984-01-01
The design of the Thematic Mapper (TM) multispectral radiometer makes it susceptible to band-to-band misregistration. To estimate band-to-band misregistration, a block correlation method is employed. This method is chosen over other possible techniques (band differencing and flickering) because it produces quantitative results. The method correlates rectangular blocks of pixels from one band against blocks centered on identical pixels from a second band. The block pairs are shifted in pixel increments both vertically and horizontally with respect to each other, and the correlation coefficient for each shift position is computed. The displacement corresponding to the maximum correlation is taken as the best estimate of registration error for each block pair. Subpixel shifts are estimated by a bi-quadratic interpolation of the correlation values surrounding the maximum correlation. To obtain statistical summaries for each band combination, post-processing of the block correlation results was performed. The method results in estimates of registration error that are consistent with expectations.
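The block-correlation idea can be sketched in a few lines. The code below is an illustrative reimplementation under assumed block sizes and shift ranges, not the original TM processing code, and a simple parabolic refinement stands in for the bi-quadratic interpolation described above.

```python
import numpy as np

def block_misregistration(band_a, band_b, center, half=32, max_shift=3):
    """Estimate the (row, col) shift of band_b relative to band_a for one block.
    The center must lie at least half + max_shift pixels from the image edge."""
    r, c = center
    a = band_a[r - half:r + half, c - half:c + half].astype(float)
    a = (a - a.mean()) / a.std()

    corr = np.empty((2 * max_shift + 1, 2 * max_shift + 1))
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            b = band_b[r + dr - half:r + dr + half,
                       c + dc - half:c + dc + half].astype(float)
            b = (b - b.mean()) / b.std()
            corr[dr + max_shift, dc + max_shift] = np.mean(a * b)

    i, j = np.unravel_index(np.argmax(corr), corr.shape)

    def subpixel(v):  # parabolic vertex offset around a 3-point correlation peak
        denom = v[0] - 2 * v[1] + v[2]
        return 0.0 if denom == 0 else 0.5 * (v[0] - v[2]) / denom

    dr = (i - max_shift) + (subpixel(corr[i - 1:i + 2, j]) if 0 < i < 2 * max_shift else 0.0)
    dc = (j - max_shift) + (subpixel(corr[i, j - 1:j + 2]) if 0 < j < 2 * max_shift else 0.0)
    return dr, dc
```

Averaging such per-block estimates over many blocks gives the statistical summaries of registration error for each band combination.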
Errors in retarding potential analyzers caused by nonuniformity of the grid-plane potential.
NASA Technical Reports Server (NTRS)
Hanson, W. B.; Frame, D. R.; Midgley, J. E.
1972-01-01
One aspect of the degradation in performance of retarding potential analyzers caused by potential depressions in the retarding grid is quantitatively estimated from laboratory measurements and theoretical calculations. A simple expression is obtained that permits the use of laboratory measurements of grid properties to make first-order corrections to flight data. Systematic positive errors in ion temperature of approximately 16% for the Ogo 4 instrument and 3% for the Ogo 6 instrument are deduced. The effects of the transverse electric fields arising from the grid potential depressions are not treated.
Reducing misfocus-related motion artefacts in laser speckle contrast imaging.
Ringuette, Dene; Sigal, Iliya; Gad, Raanan; Levi, Ofer
2015-01-01
Laser Speckle Contrast Imaging (LSCI) is a flexible, easy-to-implement technique for measuring blood flow speeds in-vivo. In order to obtain reliable quantitative data from LSCI, the object must remain in the focal plane of the imaging system for the duration of the measurement session. However, since LSCI suffers from inherent frame-to-frame noise, it often requires a moving average filter to produce quantitative results. This frame-to-frame noise also makes the implementation of a rapid autofocus system challenging. In this work, we demonstrate an autofocus method and system based on a novel measure of misfocus which serves as an accurate and noise-robust feedback mechanism. This measure of misfocus is shown to enable the localization of best focus with sub-depth-of-field sensitivity, yielding more accurate estimates of blood flow speeds and blood vessel diameters.
García-González, Diego L; Sedman, Jacqueline; van de Voort, Frederik R
2013-04-01
Spectral reconstitution (SR) is a dilution technique developed to facilitate the rapid, automated, and quantitative analysis of viscous oil samples by Fourier transform infrared spectroscopy (FT-IR). This technique involves determining the dilution factor through measurement of an absorption band of a suitable spectral marker added to the diluent, and then spectrally removing the diluent from the sample and multiplying the resulting spectrum to compensate for the effect of dilution on the band intensities. The facsimile spectrum of the neat oil thus obtained can then be qualitatively or quantitatively analyzed for the parameter(s) of interest. The quantitative performance of the SR technique was examined with two transition-metal carbonyl complexes as spectral markers, chromium hexacarbonyl and methylcyclopentadienyl manganese tricarbonyl. The estimation of the volume fraction (VF) of the diluent in a model system, consisting of canola oil diluted to various extents with odorless mineral spirits, served as the basis for assessment of these markers. The relationship between the VF estimates and the true volume fraction (VF(t)) was found to be strongly dependent on the dilution ratio and also depended, to a lesser extent, on the spectral resolution. These dependences are attributable to the effect of changes in matrix polarity on the bandwidth of the ν(CO) marker bands. Excellent VF(t) estimates were obtained by making a polarity correction devised with a variance-spectrum-delineated correction equation. In the absence of such a correction, SR was shown to introduce only a minor and constant bias, provided that polarity differences among all the diluted samples analyzed were minimal. This bias can be built into the calibration of a quantitative FT-IR analytical method by subjecting appropriate calibration standards to the same SR procedure as the samples to be analyzed. The primary purpose of the SR technique is to simplify preparation of diluted samples such that only approximate proportions need to be adhered to, rather than using exact weights or volumes, the marker accounting for minor variations. Additional applications discussed include the use of the SR technique in extraction-based, quantitative, automated FT-IR methods for the determination of moisture, acid number, and base number in lubricating oils, as well as of moisture content in edible oils.
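The arithmetic behind spectral reconstitution is compact. A minimal sketch is shown below, assuming the marker band area scales linearly with the diluent volume fraction (Beer–Lambert) and ignoring the polarity correction discussed above; the function and variable names are hypothetical.

```python
import numpy as np

def reconstitute(mix_spectrum, diluent_spectrum, marker_band_area,
                 marker_area_neat_diluent):
    """Spectral-reconstitution sketch: estimate the diluent volume fraction
    from the marker band area, subtract the diluent contribution, and rescale
    the residual to approximate the spectrum of the neat oil."""
    vf = marker_band_area / marker_area_neat_diluent        # estimated diluent volume fraction
    neat_oil = (mix_spectrum - vf * diluent_spectrum) / (1.0 - vf)
    return vf, neat_oil
```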
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml (min g)⁻¹; cardiac output = 3, 5, 8 L min⁻¹). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5% and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimate error was equal at matched dose reductions for all quantitative methods and range of techniques evaluated. This suggests that there is no particular advantage between quantitative estimation methods nor to performing dose reduction via tube current reduction compared to temporal sampling reduction. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
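As one concrete point of reference, a qualitative slope-based index of the kind mentioned above is commonly computed as the peak upslope of the myocardial time–attenuation curve normalized by the peak arterial enhancement. The sketch below assumes that common formulation rather than the study's exact implementation.

```python
import numpy as np

def slope_mbf_index(t, myocardial_hu, arterial_hu):
    """Qualitative perfusion index: peak tissue upslope divided by peak
    arterial enhancement (arbitrary units unless calibrated)."""
    upslope = np.max(np.gradient(myocardial_hu, t))   # HU per unit time
    return upslope / np.max(arterial_hu)
```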
Are power calculations useful? A multicentre neuroimaging study
Suckling, John; Henty, Julian; Ecker, Christine; Deoni, Sean C; Lombardo, Michael V; Baron-Cohen, Simon; Jezzard, Peter; Barnes, Anna; Chakrabarti, Bhismadev; Ooi, Cinly; Lai, Meng-Chuan; Williams, Steven C; Murphy, Declan GM; Bullmore, Edward
2014-01-01
There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources. PMID:24644267
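For readers unfamiliar with the mechanics, a voxel-level power calculation of the kind referred to here reduces to solving the standard two-sample power equation for sample size or detectable effect size. A minimal sketch with illustrative numbers (not MRC-AIMS values) is:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Sample size per group for an assumed effect size and a strict voxel-wise alpha:
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.001, power=0.8)
# Minimum detectable effect size for a fixed sample size of 40 per group:
min_effect = analysis.solve_power(nobs1=40, alpha=0.001, power=0.8)
print(round(n_per_group), round(min_effect, 2))
```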
Michael J. Firko; Jane Leslie Hayes
1990-01-01
Quantitative genetic studies of resistance can provide estimates of genetic parameters not available with other types of genetic analyses. Three methods are discussed for estimating the amount of additive genetic variation in resistance to individual insecticides and subsequent estimation of heritability (h²) of resistance. Sibling analysis and...
ITALICS: an algorithm for normalization and DNA copy number calling for Affymetrix SNP arrays.
Rigaill, Guillem; Hupé, Philippe; Almeida, Anna; La Rosa, Philippe; Meyniel, Jean-Philippe; Decraene, Charles; Barillot, Emmanuel
2008-03-15
Affymetrix SNP arrays can be used to determine the DNA copy number measurement of 11 000-500 000 SNPs along the genome. Their high density facilitates the precise localization of genomic alterations and makes them a powerful tool for studies of cancers and copy number polymorphism. Like other microarray technologies it is influenced by non-relevant sources of variation, requiring correction. Moreover, the amplitude of variation induced by non-relevant effects is similar or greater than the biologically relevant effect (i.e. true copy number), making it difficult to estimate non-relevant effects accurately without including the biologically relevant effect. We addressed this problem by developing ITALICS, a normalization method that estimates both biological and non-relevant effects in an alternate, iterative manner, accurately eliminating irrelevant effects. We compared our normalization method with other existing and available methods, and found that ITALICS outperformed these methods for several in-house datasets and one public dataset. These results were validated biologically by quantitative PCR. The R package ITALICS (ITerative and Alternative normaLIzation and Copy number calling for affymetrix Snp arrays) has been submitted to Bioconductor.
Infrared thermography for wood density estimation
NASA Astrophysics Data System (ADS)
López, Gamaliel; Basterra, Luis-Alfonso; Acuña, Luis
2018-03-01
Infrared thermography (IRT) is becoming a commonly used technique to non-destructively inspect and evaluate wood structures. Based on the radiation emitted by all objects, this technique enables the remote visualization of the surface temperature without making contact using a thermographic device. The process of transforming radiant energy into temperature depends on many parameters, and interpreting the results is usually complicated. However, some works have analyzed the operation of IRT and expanded its applications, as found in the latest literature. This work analyzes the effect of density on the thermodynamic behavior of timber to be determined by IRT. The cooling of various wood samples has been registered, and a statistical procedure that enables one to quantitatively estimate the density of timber has been designed. This procedure represents a new method to physically characterize this material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gigase, Yves
2007-07-01
Available in abstract form only. Full text of publication follows: The uncertainty on characteristics of radioactive LILW waste packages is difficult to determine and often very large. This results from a lack of knowledge of the constitution of the waste package and of the composition of the radioactive sources inside. To calculate a quantitative estimate of the uncertainty on a characteristic of a waste package, one has to combine these various uncertainties. This paper discusses an approach to this problem, based on the use of the log-normal distribution, which is both elegant and easy to use. It can provide, for example, quantitative estimates of uncertainty intervals that 'make sense'. The purpose is to develop a pragmatic approach that can be integrated into existing characterization methods. In this paper we show how our method can be applied to the scaling factor method. We also explain how it can be used when estimating other more complex characteristics, such as the total uncertainty of a collection of waste packages. This method could have applications in radioactive waste management, more particularly in those decision processes where the uncertainty on the amount of activity is considered important, such as probabilistic risk assessment or the definition of criteria for acceptance or categorization. (author)
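The attraction of the log-normal approach is that multiplicative uncertainty factors combine by adding variances in log space. A minimal sketch of that generic combination rule, with illustrative inputs, is shown below; it is not the paper's specific implementation.

```python
import numpy as np

def combine_lognormal(medians, gsds):
    """Combine multiplicative lognormal uncertainty factors.

    medians: best estimates of each factor; gsds: geometric standard deviations.
    Returns the median of the product and an approximate 95% interval."""
    mu = np.sum(np.log(medians))
    sigma = np.sqrt(np.sum(np.log(gsds) ** 2))   # log-variances add
    median = np.exp(mu)
    return median, (median * np.exp(-1.96 * sigma), median * np.exp(1.96 * sigma))

# e.g. a scaling factor known within a factor of 3 applied to a key-nuclide
# measurement known within about 20% (hypothetical numbers):
print(combine_lognormal([1.0, 1.0], [3.0, 1.2]))
```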
An accessible method for implementing hierarchical models with spatio-temporal abundance data
Ross, Beth E.; Hooten, Melvin B.; Koons, David N.
2012-01-01
A common goal in ecology and wildlife management is to determine the causes of variation in population dynamics over long periods of time and across large spatial scales. Many assumptions must nevertheless be overcome to make appropriate inference about spatio-temporal variation in population dynamics, such as autocorrelation among data points, excess zeros, and observation error in count data. To address these issues, many scientists and statisticians have recommended the use of Bayesian hierarchical models. Unfortunately, hierarchical statistical models remain somewhat difficult to use because of the necessary quantitative background needed to implement them, or because of the computational demands of using Markov Chain Monte Carlo algorithms to estimate parameters. Fortunately, new tools have recently been developed that make it more feasible for wildlife biologists to fit sophisticated hierarchical Bayesian models (i.e., Integrated Nested Laplace Approximation, ‘INLA’). We present a case study using two important game species in North America, the lesser and greater scaup, to demonstrate how INLA can be used to estimate the parameters in a hierarchical model that decouples observation error from process variation, and accounts for unknown sources of excess zeros as well as spatial and temporal dependence in the data. Ultimately, our goal was to make unbiased inference about spatial variation in population trends over time.
Achieving across-laboratory replicability in psychophysical scaling
Ward, Lawrence M.; Baumann, Michael; Moffat, Graeme; Roberts, Larry E.; Mori, Shuji; Rutledge-Taylor, Matthew; West, Robert L.
2015-01-01
It is well known that, although psychophysical scaling produces good qualitative agreement between experiments, precise quantitative agreement between experimental results, such as that routinely achieved in physics or biology, is rarely or never attained. A particularly galling example of this is the fact that power function exponents for the same psychological continuum, measured in different laboratories but ostensibly using the same scaling method, magnitude estimation, can vary by a factor of three. Constrained scaling (CS), in which observers first learn a standardized meaning for a set of numerical responses relative to a standard sensory continuum and then make magnitude judgments of other sensations using the learned response scale, has produced excellent quantitative agreement between individual observers’ psychophysical functions. Theoretically it could do the same for across-laboratory comparisons, although this needs to be tested directly. We compared nine different experiments from four different laboratories as an example of the level of across experiment and across-laboratory agreement achievable using CS. In general, we found across experiment and across-laboratory agreement using CS to be significantly superior to that typically obtained with conventional magnitude estimation techniques, although some of its potential remains to be realized. PMID:26191019
Fetus dose estimation in thyroid cancer post-surgical radioiodine therapy.
Mianji, Fereidoun A; Diba, Jila Karimi; Babakhani, Asad
2015-01-01
Unrecognised pregnancy during radioisotope therapy of thyroid cancer results in embryo/fetus exposures that are difficult to define, particularly when the thyroid gland has already been removed. Sources of this difficulty include uncertainty in data such as the time pregnancy commenced, the amount and distribution of metastasized thyroid cells in the body, the effect of the thyroidectomy on the fetus dose coefficient, etc. Despite all these uncertainties, estimating the order of magnitude of the fetus dose is in most cases sufficient for medical and legal decision-making purposes. A model for adapting the dose coefficients recommended by the well-known methods to the problem of fetus dose assessment in athyrotic patients is proposed. The model defines a correction factor for the problem and ensures that the fetus dose in athyrotic pregnant patients is less than in patients with intact thyroids. A case of a pregnant patient who underwent post-surgical therapy with I-131 is then studied for quantitative comparison of the methods. The results draw a range for the fetus dose in athyrotic patients using the derived factor. This reduces concerns about under- or over-estimation of the embryo/fetus dose and is helpful for personal and/or legal decision-making on abortion. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Dai, Huanping; Micheyl, Christophe
2012-11-01
Psychophysical "reverse-correlation" methods allow researchers to gain insight into the perceptual representations and decision weighting strategies of individual subjects in perceptual tasks. Although these methods have gained momentum, until recently their development was limited to experiments involving only two response categories. Recently, two approaches for estimating decision weights in m-alternative experiments have been put forward. One approach extends the two-category correlation method to m > 2 alternatives; the second uses multinomial logistic regression (MLR). In this article, the relative merits of the two methods are discussed, and the issues of convergence and statistical efficiency of the methods are evaluated quantitatively using Monte Carlo simulations. The results indicate that, for a range of values of the number of trials, the estimated weighting patterns are closer to their asymptotic values for the correlation method than for the MLR method. Moreover, for the MLR method, weight estimates for different stimulus components can exhibit strong correlations, making the analysis and interpretation of measured weighting patterns less straightforward than for the correlation method. These and other advantages of the correlation method, which include computational simplicity and a close relationship to other well-established psychophysical reverse-correlation methods, make it an attractive tool to uncover decision strategies in m-alternative experiments.
A NOVEL TECHNIQUE FOR QUANTITATIVE ESTIMATION OF UPTAKE OF DIESEL EXHAUST PARTICLES BY LUNG CELLS
While airborne particulates like diesel exhaust particulates (DEP) exert significant toxicological effects on lungs, quantitative estimation of accumulation of DEP inside lung cells has not been reported due to a lack of an accurate and quantitative technique for this purpose. I...
de Greef-van der Sandt, I; Newgreen, D; Schaddelee, M; Dorrepaal, C; Martina, R; Ridder, A; van Maanen, R
2016-04-01
A multicriteria decision analysis (MCDA) approach was developed and used to estimate the benefit-risk of solifenacin and mirabegron and their combination in the treatment of overactive bladder (OAB). The objectives were 1) to develop an MCDA tool to compare drug effects in OAB quantitatively, 2) to establish transparency in the evaluation of the benefit-risk profile of various dose combinations, and 3) to quantify the added value of combination use compared to monotherapies. The MCDA model was developed using efficacy, safety, and tolerability attributes, and the results of a phase II factorial design combination study were evaluated. Combinations of solifenacin 5 mg with mirabegron 25 mg and with mirabegron 50 mg (5+25 and 5+50) scored the highest clinical utility and supported development of solifenacin-mirabegron combination therapy at these dose regimens in phase III. This case study underlines the benefit of using a quantitative approach in clinical drug development programs. © 2015 The American Society for Clinical Pharmacology and Therapeutics.
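At its core, an MCDA clinical-utility score is a weighted sum of normalized attribute scores. The toy sketch below uses made-up weights and scores purely to show the mechanics, not the values from the phase II analysis.

```python
# Minimal MCDA sketch: clinical utility = weighted sum of 0-1 attribute scores.
weights = {"efficacy": 0.5, "safety": 0.3, "tolerability": 0.2}   # assumed weights
scores = {                                                        # assumed scores
    "solifenacin 5 mg":   {"efficacy": 0.55, "safety": 0.80, "tolerability": 0.85},
    "mirabegron 50 mg":   {"efficacy": 0.60, "safety": 0.85, "tolerability": 0.80},
    "soli 5 + mira 25":   {"efficacy": 0.75, "safety": 0.78, "tolerability": 0.80},
    "soli 5 + mira 50":   {"efficacy": 0.80, "safety": 0.75, "tolerability": 0.78},
}
utility = {reg: sum(weights[a] * s[a] for a in weights) for reg, s in scores.items()}
print(max(utility, key=utility.get), utility)
```

Sensitivity of the ranking to the weights can then be explored simply by re-running the same sum with perturbed weight sets.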
Wengert, G J; Helbich, T H; Woitek, R; Kapetas, P; Clauser, P; Baltzer, P A; Vogl, W-D; Weber, M; Meyer-Baese, A; Pinker, Katja
2016-11-01
To evaluate the inter-/intra-observer agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent un-enhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intra-observer agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intra-observer agreement and experienced readers a substantial inter- and perfect intra-observer agreement for subjective visual estimation of FGT. Practice and experience reduced observer-dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/inter-observer agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation. • Subjective FGT estimation with MRI shows moderate intra-/inter-observer agreement in inexperienced readers. • Inter-observer agreement can be improved by practice and experience. • Automated observer-independent quantitative measurements can provide reliable and standardized assessment of FGT with MRI.
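Cohen's kappa, used above for inter-/intra-observer agreement, is available off the shelf. A minimal example with made-up BI-RADS density categories (a–d) for two readers over ten cases might look like:

```python
from sklearn.metrics import cohen_kappa_score

reader_1 = ["a", "b", "b", "c", "d", "b", "c", "c", "a", "d"]
reader_2 = ["a", "b", "c", "c", "d", "b", "b", "c", "a", "d"]
print(cohen_kappa_score(reader_1, reader_2))                        # unweighted kappa
print(cohen_kappa_score(reader_1, reader_2, weights="quadratic"))   # ordinal variant
```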
NASA Technical Reports Server (NTRS)
Poultney, S. K.; Brumfield, M. L.; Siviter, J. S.
1975-01-01
Typical pollutant gas concentrations at the stack exits of stationary sources can be estimated to be about 500 ppm under the present emission standards. Raman lidar has a number of advantages which makes it a valuable tool for remote measurements of these stack emissions. Tests of the Langley Research Center Raman lidar at a calibration tank indicate that night measurements of SO2 concentrations and stack opacity are possible. Accuracies of 10 percent are shown to be achievable from a distance of 300 m within 30 min integration times for 500 ppm SO2 at the stack exits. All possible interferences were examined quantitatively (except for the fluorescence of aerosols in actual stack emissions) and found to have negligible effect on the measurements. An early test at an instrumented stack is strongly recommended.
A practical guide to value of information analysis.
Wilson, Edward C F
2015-02-01
Value of information analysis is a quantitative method to estimate the return on investment in proposed research projects. It can be used in a number of ways. Funders of research may find it useful to rank projects in terms of the expected return on investment from a variety of competing projects. Alternatively, trialists can use the principles to identify the efficient sample size of a proposed study as an alternative to traditional power calculations, and finally, a value of information analysis can be conducted alongside an economic evaluation as a quantitative adjunct to the 'future research' or 'next steps' section of a study write up. The purpose of this paper is to present a brief introduction to the methods, a step-by-step guide to calculation and a discussion of issues that arise in their application to healthcare decision making. Worked examples are provided in the accompanying online appendices as Microsoft Excel spreadsheets.
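A minimal Monte Carlo sketch of the per-patient expected value of perfect information (EVPI), the core quantity in value of information analysis, is shown below. The distributions for incremental costs and QALYs, the willingness-to-pay threshold, and all other inputs are hypothetical and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
wtp = 20_000                              # assumed willingness to pay per QALY

# Hypothetical uncertain inputs comparing a new strategy with current care:
d_qaly = rng.normal(0.05, 0.04, n)        # incremental QALYs
d_cost = rng.normal(500, 300, n)          # incremental cost
inb = wtp * d_qaly - d_cost               # incremental net benefit per simulation

# EVPI = E[max over strategies of NB] - max over strategies of E[NB]
evpi = np.mean(np.maximum(inb, 0)) - max(np.mean(inb), 0)
print(f"Probability new strategy is cost-effective: {np.mean(inb > 0):.2f}")
print(f"Per-patient EVPI: {evpi:.0f}")
```

Multiplying the per-patient EVPI by the effective population that would benefit from the decision gives an upper bound on the value of further research.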
A Backscatter-Lidar Forward-Operator
NASA Astrophysics Data System (ADS)
Geisinger, Armin; Behrendt, Andreas; Wulfmeyer, Volker; Vogel, Bernhard; Mattis, Ina; Flentje, Harald; Förstner, Jochen; Potthast, Roland
2015-04-01
We have developed a forward-operator which is capable of calculating virtual lidar profiles from atmospheric state simulations. The operator allows us to compare lidar measurements and model simulations based on the same measurement parameter: the lidar backscatter profile. This method simplifies qualitative comparisons and also makes quantitative comparisons possible, including statistical error quantification. Implemented into an aerosol-capable model system, the operator will act as a component to assimilate backscatter-lidar measurements. As many weather services already maintain networks of backscatter-lidars, such data are already acquired in an operational manner. To estimate and quantify errors due to missing or uncertain aerosol information, we started sensitivity studies on several scattering parameters such as the aerosol size and both the real and imaginary part of the complex index of refraction. Furthermore, quantitative and statistical comparisons between measurements and virtual measurements are shown in this study, i.e. applying the backscatter-lidar forward-operator to model output.
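The heart of such a forward operator is the elastic lidar equation: attenuated backscatter is the backscatter coefficient damped by the two-way transmission. A minimal, idealized sketch (ignoring overlap, calibration, noise, and multiple scattering, and not the operator's actual code) is:

```python
import numpy as np

def attenuated_backscatter(z, beta, alpha):
    """Minimal elastic-backscatter forward model: backscatter coefficient beta(z)
    attenuated by the two-way transmission exp(-2 * optical depth)."""
    dz = np.gradient(z)
    tau = np.cumsum(alpha * dz)          # one-way optical depth from the lidar to z
    return beta * np.exp(-2.0 * tau)
```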
Bayesian resolution of TEM, CSEM and MT soundings: a comparative study
NASA Astrophysics Data System (ADS)
Blatter, D. B.; Ray, A.; Key, K.
2017-12-01
We examine the resolution of three electromagnetic exploration methods commonly used to map the electrical conductivity of the shallow crust - the magnetotelluric (MT) method, the controlled-source electromagnetic (CSEM) method and the transient electromagnetic (TEM) method. TEM and CSEM utilize an artificial source of EM energy, while MT makes use of natural variations in the Earth's electromagnetic field. For a given geological setting and acquisition parameters, each of these methods will have a different resolution due to differences in the source field polarization and the frequency range of the measurements. For example, the MT and TEM methods primarily rely on induced horizontal currents and are most sensitive to conductive layers while the CSEM method generates vertical loops of current and is more sensitive to resistive features. Our study seeks to provide a robust resolution comparison that can help inform exploration geophysicists about which technique is best suited for a particular target. While it is possible to understand and describe a difference in resolution qualitatively, it remains challenging to fully describe it quantitatively using optimization based approaches. Part of the difficulty here stems from the standard electromagnetic inversion toolkit, which makes heavy use of regularization (often in the form of smoothing) to constrain the non-uniqueness inherent in the inverse problem. This regularization makes it difficult to accurately estimate the uncertainty in estimated model parameters - and therefore obscures their true resolution. To overcome this difficulty, we compare the resolution of CSEM, airborne TEM, and MT data quantitatively using a Bayesian trans-dimensional Markov chain Monte Carlo (McMC) inversion scheme. Noisy synthetic data for this study are computed from various representative 1D test models: a conductive anomaly under a conductive/resistive overburden; and a resistive anomaly under a conductive/resistive overburden. In addition to obtaining the full posterior probability density function of the model parameters, we develop a metric to more directly compare the resolution of each method as a function of depth.
A collimator optimization method for quantitative imaging: application to Y-90 bremsstrahlung SPECT.
Rong, Xing; Frey, Eric C
2013-08-01
Post-therapy quantitative 90Y bremsstrahlung single photon emission computed tomography (SPECT) has shown great potential to provide reliable activity estimates, which are essential for dose verification. Typically 90Y imaging is performed with high- or medium-energy collimators. However, the energy spectrum of 90Y bremsstrahlung photons is substantially different from what is typical for these collimators. In addition, dosimetry requires quantitative images, and collimators are not typically optimized for such tasks. Optimizing a collimator for 90Y imaging is both novel and potentially important. Conventional optimization methods are not appropriate for 90Y bremsstrahlung photons, which have a continuous and broad energy distribution. In this work, the authors developed a parallel-hole collimator optimization method for quantitative tasks that is particularly applicable to radionuclides with complex emission energy spectra. The authors applied the proposed method to develop an optimal collimator for quantitative 90Y bremsstrahlung SPECT in the context of microsphere radioembolization. To account for the effects of the collimator on both the bias and the variance of the activity estimates, the authors used the root mean squared error (RMSE) of the volume of interest activity estimates as the figure of merit (FOM). In the FOM, the bias due to the null space of the image formation process was taken into account. The RMSE was weighted by the inverse mass to reflect the application to dosimetry; for a different application, more relevant weighting could easily be adopted. The authors proposed a parameterization for the collimator that facilitates the incorporation of the important factors (geometric sensitivity, geometric resolution, and septal penetration fraction) determining collimator performance, while keeping the number of free parameters describing the collimator small (i.e., two parameters). To make the optimization results for quantitative 90Y bremsstrahlung SPECT more general, the authors simulated multiple tumors of various sizes in the liver. The authors realistically simulated human anatomy using a digital phantom and the image formation process using a previously validated and computationally efficient method for modeling the image-degrading effects including object scatter, attenuation, and the full collimator-detector response (CDR). The scatter kernels and CDR function tables used in the modeling method were generated using a previously validated Monte Carlo simulation code. The hole length, hole diameter, and septal thickness of the obtained optimal collimator were 84, 3.5, and 1.4 mm, respectively. Compared to a commercial high-energy general-purpose collimator, the optimal collimator improved the resolution and FOM by 27% and 18%, respectively. The proposed collimator optimization method may be useful for improving quantitative SPECT imaging for radionuclides with complex energy spectra. The obtained optimal collimator provided a substantial improvement in quantitative performance for the microsphere radioembolization task considered.
Development of quantitative screen for 1550 chemicals with GC-MS.
Bergmann, Alan J; Points, Gary L; Scott, Richard P; Wilson, Glenn; Anderson, Kim A
2018-05-01
With hundreds of thousands of chemicals in the environment, effective monitoring requires high-throughput analytical techniques. This paper presents a quantitative screening method for 1550 chemicals based on statistical modeling of responses with identification and integration performed using deconvolution reporting software. The method was evaluated with representative environmental samples. We tested biological extracts, low-density polyethylene, and silicone passive sampling devices spiked with known concentrations of 196 representative chemicals. A multiple linear regression (R² = 0.80) was developed with molecular weight, logP, polar surface area, and fractional ion abundance to predict chemical responses within a factor of 2.5. Linearity beyond the calibration had R² > 0.97 for three orders of magnitude. Median limits of quantitation were estimated to be 201 pg/μL (1.9× standard deviation). The number of detected chemicals and the accuracy of quantitation were similar for environmental samples and standard solutions. To our knowledge, this is the most precise method for the largest number of semi-volatile organic chemicals lacking authentic standards. Accessible instrumentation and software make this method cost effective in quantifying a large, customizable list of chemicals. When paired with silicone wristband passive samplers, this quantitative screen will be very useful for epidemiology where binning of concentrations is common. Graphical abstract: A multiple linear regression of chemical responses measured with GC-MS allowed quantitation of 1550 chemicals in samples such as silicone wristbands.
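The modeling idea — predicting instrument response from a handful of molecular descriptors by multiple linear regression — can be sketched with synthetic data as below. The descriptors match those named in the abstract, but the coefficients and data are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 196
X = np.column_stack([
    rng.uniform(100, 500, n),    # molecular weight
    rng.uniform(0, 8, n),        # logP
    rng.uniform(0, 150, n),      # polar surface area
    rng.uniform(0.05, 1.0, n),   # fractional ion abundance of the quantitation ion
])
true_coef = np.array([0.002, 0.15, -0.004, 1.0])          # invented for the sketch
log_response = X @ true_coef + rng.normal(0, 0.2, n)

model = LinearRegression().fit(X, log_response)
pred = model.predict(X)
within_2_5x = np.mean(np.abs(pred - log_response) < np.log10(2.5))
print(f"R^2 = {model.score(X, log_response):.2f}, fraction within 2.5x: {within_2_5x:.2f}")
```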
Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.
Obuchowski, Nancy A; Bullen, Jennifer
2017-01-01
Introduction: Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e. precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods: A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results: Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion: Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
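The confidence intervals discussed above follow directly from the estimated bias and repeatability. A minimal sketch, assuming a known fixed bias, linearity, and a within-subject SD from a test-retest study (illustrative numbers only), is:

```python
import math

def qib_confidence_interval(y, bias, within_subject_sd, z=1.96):
    """Approximate CI for a patient's true QIB value given one measurement y,
    an estimated fixed bias, and the repeatability SD from a test-retest study."""
    center = y - bias
    half_width = z * within_subject_sd
    return center - half_width, center + half_width

def change_confidence_interval(y1, y2, within_subject_sd, z=1.96):
    """Approximate CI for true change over time; a fixed bias cancels, but the
    measurement error enters twice."""
    d = y2 - y1
    half_width = z * within_subject_sd * math.sqrt(2)
    return d - half_width, d + half_width

print(qib_confidence_interval(y=42.0, bias=1.5, within_subject_sd=2.0))
print(change_confidence_interval(38.0, 42.0, within_subject_sd=2.0))
```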
Energy Education: The Quantitative Voice
NASA Astrophysics Data System (ADS)
Wolfson, Richard
2010-02-01
A serious study of energy use and its consequences has to be quantitative. It makes little sense to push your favorite renewable energy source if it can't provide enough energy to make a dent in humankind's prodigious energy consumption. Conversely, it makes no sense to dismiss alternatives---solar in particular---that supply Earth with energy at some 10,000 times our human energy consumption rate. But being quantitative---especially with nonscience students or the general public---is a delicate business. This talk draws on the speaker's experience presenting energy issues to diverse audiences through single lectures, entire courses, and a textbook. The emphasis is on developing a quick, "back-of-the-envelope" approach to quantitative understanding of energy issues.
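The roughly 10,000-fold ratio quoted above is itself a back-of-the-envelope calculation; with rounded public figures (values are approximate, not taken from the talk):

```python
# Solar power absorbed by Earth vs. global human primary power consumption:
solar_absorbed_tw = 174_000 * 0.7     # ~174,000 TW incident, ~70% absorbed
human_use_tw = 18                     # rough global consumption, TW
print(solar_absorbed_tw / human_use_tw)   # on the order of 10^4
```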
Burch, Tucker R.; Spencer, Susan K.; Stokdyk, Joel P.; Kieke, Burney A.; Larson, Rebecca A.; Firnstahl, Aaron D.; Rule, Ana M.
2017-01-01
Background: Spray irrigation for land-applying livestock manure is increasing in the United States as farms become larger and economies of scale make manure irrigation affordable. Human health risks from exposure to zoonotic pathogens aerosolized during manure irrigation are not well understood. Objectives: We aimed to a) estimate human health risks due to aerosolized zoonotic pathogens downwind of spray-irrigated dairy manure; and b) determine which factors (e.g., distance, weather conditions) have the greatest influence on risk estimates. Methods: We sampled downwind air concentrations of manure-borne fecal indicators and zoonotic pathogens during 21 full-scale dairy manure irrigation events at three farms. We fit these data to hierarchical empirical models and used model outputs in a quantitative microbial risk assessment (QMRA) to estimate risk [probability of acute gastrointestinal illness (AGI)] for individuals exposed to spray-irrigated dairy manure containing Campylobacter jejuni, enterohemorrhagic Escherichia coli (EHEC), or Salmonella spp. Results: Median risk estimates from Monte Carlo simulations ranged from 10⁻⁵ to 10⁻² and decreased with distance from the source. Risk estimates for Salmonella or EHEC-related AGI were most sensitive to the assumed level of pathogen prevalence in dairy manure, while risk estimates for C. jejuni were not sensitive to any single variable. Airborne microbe concentrations were negatively associated with distance and positively associated with wind speed, both of which were retained in models as a significant predictor more often than relative humidity, solar irradiation, or temperature. Conclusions: Our model-based estimates suggest that reducing pathogen prevalence and concentration in source manure would reduce the risk of AGI from exposure to manure irrigation, and that increasing the distance from irrigated manure (i.e., setbacks) and limiting irrigation to times of low wind speed may also reduce risk. https://doi.org/10.1289/EHP283 PMID:28885976
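The QMRA chain (airborne concentration → inhaled dose → dose-response → illness) can be sketched in a few lines. The distributions, dose-response parameter, and illness probability below are placeholders for illustration, not the fitted values from this study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical inputs:
air_conc = rng.lognormal(mean=np.log(0.05), sigma=1.0, size=n)  # pathogens per m^3
breathing_rate = 0.83                    # m^3 per hour, adult light activity
exposure_hours = rng.uniform(0.5, 2.0, size=n)
dose = air_conc * breathing_rate * exposure_hours

# Exponential dose-response, P(infection) = 1 - exp(-r * dose), with an assumed
# r and a fixed probability of illness given infection:
r = 0.0050
p_ill_given_inf = 0.3
risk = (1.0 - np.exp(-r * dose)) * p_ill_given_inf

print(f"median risk of AGI per event: {np.median(risk):.1e}")
print(f"95th percentile: {np.percentile(risk, 95):.1e}")
```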
Analysis and Modeling of Ground Operations at Hub Airports
NASA Technical Reports Server (NTRS)
Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.
2000-01-01
Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes. An integer programming model aimed at representing airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes. They may also be used to evaluate control strategies for improving overall airport efficiency.
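As a rough illustration of the taxi-out queuing idea (not the authors' calibrated model), the sketch below simulates a single-runway FIFO queue with made-up exponential pushback and runway-occupancy times and reports the mean taxi-out time.

```python
import random

def simulate_taxi_out(n_aircraft=500, mean_interarrival=1.5, mean_service=1.2, seed=1):
    """Single-server FIFO queue: pushback -> runway departure (times in minutes)."""
    random.seed(seed)
    t_arrive, server_free, times = 0.0, 0.0, []
    for _ in range(n_aircraft):
        t_arrive += random.expovariate(1.0 / mean_interarrival)  # next pushback
        start = max(t_arrive, server_free)                       # wait if runway busy
        server_free = start + random.expovariate(1.0 / mean_service)
        times.append(server_free - t_arrive)                     # waiting + runway time
    return sum(times) / len(times)

print("mean taxi-out time [min]:", round(simulate_taxi_out(), 2))
```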
Trevena, Lyndal J; Zikmund-Fisher, Brian J; Edwards, Adrian; Gaissmaier, Wolfgang; Galesic, Mirta; Han, Paul K J; King, John; Lawson, Margaret L; Linder, Suzanne K; Lipkus, Isaac; Ozanne, Elissa; Peters, Ellen; Timmermans, Danielle; Woloshin, Steven
2013-01-01
Background Making evidence-based decisions often requires comparison of two or more options. Research-based evidence may exist which quantifies how likely the outcomes are for each option. Understanding these numeric estimates improves patients’ risk perception and leads to better informed decision making. This paper summarises current “best practices” in communication of evidence-based numeric outcomes for developers of patient decision aids (PtDAs) and other health communication tools. Method An expert consensus group of fourteen researchers from North America, Europe, and Australasia identified eleven main issues in risk communication. Two experts for each issue wrote a “state of the art” summary of best evidence, drawing on the PtDA, health, psychological, and broader scientific literature. In addition, commonly used terms were defined and a set of guiding principles and key messages derived from the results. Results The eleven key components of risk communication were: 1) Presenting the chance an event will occur; 2) Presenting changes in numeric outcomes; 3) Outcome estimates for test and screening decisions; 4) Numeric estimates in context and with evaluative labels; 5) Conveying uncertainty; 6) Visual formats; 7) Tailoring estimates; 8) Formats for understanding outcomes over time; 9) Narrative methods for conveying the chance of an event; 10) Important skills for understanding numerical estimates; and 11) Interactive web-based formats. Guiding principles from the evidence summaries advise that risk communication formats should reflect the task required of the user, should always define a relevant reference class (i.e., denominator) over time, should aim to use a consistent format throughout documents, should avoid “1 in x” formats and variable denominators, consider the magnitude of numbers used and the possibility of format bias, and should take into account the numeracy and graph literacy of the audience. Conclusion A substantial and rapidly expanding evidence base exists for risk communication. Developers of tools to facilitate evidence-based decision making should apply these principles to improve the quality of risk communication in practice. PMID:24625237
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques are compared: a traditional method and a mixed-model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed-model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed-model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
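For context, the sketch below shows only the traditional calibration step that the abstract contrasts with the Bayesian mixed-model approach: fit a straight-line standard curve by ordinary least squares and invert it to estimate density for a new sample. The standard values and signals are hypothetical.

```python
import numpy as np

# Calibration standards: known log10 parasite densities and the QMM readout
# (hypothetical numbers; a real QT-NASBA/qPCR run would supply these).
log10_density = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
signal = np.array([12.1, 10.0, 8.2, 6.1, 3.9])   # e.g. time-to-positivity or Ct

slope, intercept = np.polyfit(log10_density, signal, 1)   # signal = a + b*log10(density)

def estimate_density(measured_signal):
    """Inverse prediction of pathogen density from a new sample's signal."""
    return 10.0 ** ((measured_signal - intercept) / slope)

print("estimated density for signal 7.0:", round(estimate_density(7.0), 1))
```

A Bayesian mixed-model version would additionally place assay-level random effects on the curve parameters and propagate that uncertainty into the density estimate.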
A Benefit-Risk Analysis Approach to Capture Regulatory Decision-Making: Non-Small Cell Lung Cancer.
Raju, G K; Gurumurthi, K; Domike, R; Kazandjian, D; Blumenthal, G; Pazdur, R; Woodcock, J
2016-12-01
Drug regulators around the world make decisions about drug approvability based on qualitative benefit-risk analyses. There is much interest in quantifying regulatory approaches to benefit and risk. In this work, a quantitative benefit-risk analysis was applied to regulatory decision-making about new drugs to treat advanced non-small cell lung cancer (NSCLC). Benefits and risks were analyzed for 20 US Food and Drug Administration (FDA) decisions on candidate treatments submitted between 2003 and 2015. For benefit analysis, the median overall survival (OS) was used where available. When not available, OS was estimated based on overall response rate (ORR) or progression-free survival (PFS). Risks were analyzed based on magnitude (or severity) of harm and likelihood of occurrence. Additionally, a sensitivity analysis was explored to demonstrate analysis of systematic uncertainty. The FDA approval decision outcomes considered were found to be consistent with the benefit-risk logic. © 2016 American Society for Clinical Pharmacology and Therapeutics.
Patterns, Probabilities, and People: Making Sense of Quantitative Change in Complex Systems
ERIC Educational Resources Information Center
Wilkerson-Jerde, Michelle Hoda; Wilensky, Uri J.
2015-01-01
The learning sciences community has made significant progress in understanding how people think and learn about complex systems. But less is known about how people make sense of the quantitative patterns and mathematical formalisms often used to study these systems. In this article, we make a case for attending to and supporting connections…
Lean Keng, Soon; AlQudah, Hani Nawaf Ibrahim
2017-02-01
The aim was to raise awareness of critical care nurses' cognitive bias in decision-making, its relationship with leadership styles and its impact on care delivery. The relationship between critical care nurses' decision-making and leadership styles in hospitals has been widely studied, but the influence of cognitive bias on decision-making and leadership styles in critical care environments remains poorly understood, particularly in Jordan. The study used a two-phase mixed-methods sequential explanatory design and grounded theory, set in the critical care unit of Prince Hamza Hospital, Jordan. Participants were recruited by convenience sampling in Phase 1 (quantitative, n = 96) and purposive sampling in Phase 2 (qualitative, n = 20). A pilot-tested quantitative survey of 96 critical care nurses was conducted in 2012, followed by qualitative in-depth interviews, informed by the quantitative results, with 20 critical care nurses in 2013. Quantitative data were analysed using descriptive statistics and simple linear regression; qualitative data were analysed thematically (constant comparison). Quantitative findings showed correlations between rationality and cognitive bias, rationality and task-oriented leadership styles, cognitive bias and democratic communication styles, and cognitive bias and task-oriented leadership styles. Qualitative findings identified 'being competent', 'organizational structures', 'feeling self-confident' and 'being supported' in the work environment as key factors influencing critical care nurses' cognitive bias in decision-making and leadership styles. Cognitive bias in decision-making and leadership styles had a two-way impact (strengthening and weakening) on critical care nurses' practice performance. There is a need to heighten critical care nurses' consciousness of cognitive bias in decision-making and leadership styles and its impact, and to develop organization-level strategies to increase non-biased decision-making. © 2016 John Wiley & Sons Ltd.
Methodology and Estimates of Scour at Selected Bridge Sites in Alaska
Heinrichs, Thomas A.; Kennedy, Ben W.; Langley, Dustin E.; Burrows, Robert L.
2001-01-01
The U.S. Geological Survey estimated scour depths at 325 bridges in Alaska as part of a cooperative agreement with the Alaska Department of Transportation and Public Facilities. The department selected these sites from approximately 806 State-owned bridges as potentially susceptible to scour during extreme floods. Pier scour and contraction scour were computed for the selected bridges by using methods recommended by the Federal Highway Administration. The U.S. Geological Survey used a four-step procedure to estimate scour: (1) Compute magnitudes of the 100- and 500-year floods. (2) Determine cross-section geometry and hydraulic properties for each bridge site. (3) Compute the water-surface profile for the 100- and 500-year floods. (4) Compute contraction and pier scour. This procedure is unique because the cross sections were developed from existing data on file to make a quantitative estimate of scour. This screening method has the advantage of providing scour depths and bed elevations for comparison with bridge-foundation elevations without the time and expense of a field survey. Four examples of bridge-scour analyses are summarized in the appendix.
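The report follows FHWA-recommended methods; as an assumption (the exact equations used are not reproduced in the abstract), the sketch below applies the widely used HEC-18 (CSU) pier-scour relation with illustrative correction factors and flow values.

```python
import math

def pier_scour_hec18(a, y1, v, k1=1.0, k2=1.0, k3=1.1):
    """Local pier scour depth ys (m) from the HEC-18 (CSU) relation:
        ys / y1 = 2.0 * K1 * K2 * K3 * (a / y1)**0.65 * Fr**0.43
    a: pier width (m), y1: approach flow depth (m), v: approach velocity (m/s);
    K1-K3 are correction factors for nose shape, angle of attack and bed condition.
    """
    fr = v / math.sqrt(9.81 * y1)                     # approach Froude number
    return 2.0 * k1 * k2 * k3 * y1 * (a / y1) ** 0.65 * fr ** 0.43

# Illustrative numbers only (not taken from the USGS report):
print("pier scour depth [m]:", round(pier_scour_hec18(a=1.5, y1=3.0, v=2.5), 2))
```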
Predicting Ideological Prejudice
Brandt, Mark J.
2017-01-01
A major shortcoming of current models of ideological prejudice is that although they can anticipate the direction of the association between participants’ ideology and their prejudice against a range of target groups, they cannot predict the size of this association. I developed and tested models that can make specific size predictions for this association. A quantitative model that used the perceived ideology of the target group as the primary predictor of the ideology-prejudice relationship was developed with a representative sample of Americans (N = 4,940) and tested against models using the perceived status of and choice to belong to the target group as predictors. In four studies (total N = 2,093), ideology-prejudice associations were estimated, and these observed estimates were compared with the models’ predictions. The model that was based only on perceived ideology was the most parsimonious with the smallest errors. PMID:28394693
Semantic Edge Based Disparity Estimation Using Adaptive Dynamic Programming for Binocular Sensors
Zhu, Dongchen; Li, Jiamao; Wang, Xianshun; Peng, Jingquan; Shi, Wenjun; Zhang, Xiaolin
2018-01-01
Disparity calculation is crucial for binocular sensor ranging. Disparity estimation based on edges is an important branch of research in sparse stereo matching and plays an important role in visual navigation. In this paper, we propose a robust sparse stereo matching method based on semantic edges. Some simple matching costs are used first, and then a novel adaptive dynamic programming algorithm is proposed to obtain optimal solutions. This algorithm makes use of the disparity or semantic consistency constraint between the stereo images to adaptively search parameters, which improves the robustness of our method. The proposed method is compared quantitatively and qualitatively with the traditional dynamic programming method, several dense stereo matching methods, and an advanced edge-based method. Experiments show that our method provides superior performance in these comparisons. PMID:29614028
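To make the dynamic programming idea concrete, the sketch below solves a single scanline with a generic Viterbi-style DP over integer disparities (absolute-difference data cost plus a linear smoothness penalty with a made-up weight). It is a simplified stand-in, not the authors' adaptive, semantic-edge-based algorithm.

```python
import numpy as np

def scanline_disparity(left_row, right_row, max_disp=16, smooth=0.1):
    """Integer disparities along one scanline via dynamic programming."""
    n = len(left_row)
    disps = np.arange(max_disp + 1)
    cost = np.full((n, max_disp + 1), 1e3)            # data cost; large if x-d is off-image
    for d in disps:
        xs = np.arange(d, n)
        cost[xs, d] = np.abs(left_row[xs] - right_row[xs - d])

    acc = cost.copy()                                  # accumulated cost
    back = np.zeros((n, max_disp + 1), dtype=int)      # back-pointers
    trans = smooth * np.abs(disps[:, None] - disps[None, :])   # smoothness penalty
    for x in range(1, n):
        total = acc[x - 1][:, None] + trans            # previous disparity -> current
        back[x] = np.argmin(total, axis=0)
        acc[x] = cost[x] + total.min(axis=0)

    out = np.zeros(n, dtype=int)                       # backtrack the best path
    out[-1] = int(np.argmin(acc[-1]))
    for x in range(n - 2, -1, -1):
        out[x] = back[x + 1, out[x + 1]]
    return out

left = np.array([0, 0, 5, 9, 5, 0, 0, 0, 7, 7, 0, 0], float)
right = np.roll(left, -2)                              # synthetic shift of 2 pixels
print(scanline_disparity(left, right, max_disp=4))
```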
The new statistics: why and how.
Cumming, Geoff
2014-01-01
We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include prespecification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.
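In the spirit of the estimation approach advocated here, the sketch below reports a standardized effect size (Cohen's d) with an approximate 95% confidence interval for two simulated groups; the normal-approximation standard error of d is a common convenience, not the only choice.

```python
import numpy as np

def cohens_d_ci(x, y, z=1.96):
    """Cohen's d for two independent groups with an approximate 95% CI."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                        / (nx + ny - 2))
    d = (np.mean(x) - np.mean(y)) / pooled_sd
    se = np.sqrt((nx + ny) / (nx * ny) + d ** 2 / (2 * (nx + ny)))  # approx SE of d
    return d, (d - z * se, d + z * se)

rng = np.random.default_rng(42)
x, y = rng.normal(0.5, 1.0, 40), rng.normal(0.0, 1.0, 40)
d, (lo, hi) = cohens_d_ci(x, y)
print(f"d = {d:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```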
Three methods for estimating a range of vehicular interactions
NASA Astrophysics Data System (ADS)
Krbálek, Milan; Apeltauer, Jiří; Apeltauer, Tomáš; Szabová, Zuzana
2018-02-01
We present three different approaches to estimating the number of preceding cars that influence the decision-making of a given driver moving in saturated traffic flows. The first method is based on correlation analysis, the second evaluates (quantitatively) deviations from the main assumption in the convolution theorem for probability, and the third operates with advanced instruments of the theory of counting processes (statistical rigidity). We demonstrate that the universally accepted premise of short-ranged traffic interactions may not be correct. All of the methods introduced reveal that the minimum number of actively followed vehicles is two, which supports the idea that vehicular interactions are, in fact, middle-ranged. Furthermore, the consistency between the estimates is surprisingly good. In all cases we find that the interaction range (the number of actively followed vehicles) drops with traffic density: whereas drivers moving in congested regimes with lower density (around 30 vehicles per kilometer) react to four or five neighbors, drivers moving in high-density flows respond to only two predecessors.
Estimating weak ratiometric signals in imaging data. I. Dual-channel data.
Broder, Josef; Majumder, Anirban; Porter, Erika; Srinivasamoorthy, Ganesh; Keith, Charles; Lauderdale, James; Sornborger, Andrew
2007-09-01
Ratiometric fluorescent indicators are becoming increasingly prevalent in many areas of biology. They are used for making quantitative measurements of intracellular free calcium both in vitro and in vivo, as well as measuring membrane potentials, pH, and other important physiological variables of interest to researchers in many subfields. Often, functional changes in the fluorescent yield of ratiometric indicators are small, and the signal-to-noise ratio (SNR) is of order unity or less. In particular, variability in the denominator of the ratio can lead to very poor ratio estimates. We present a statistical optimization method for objectively detecting and estimating ratiometric signals in dual-wavelength measurements of fluorescent, ratiometric indicators that improves on standard methods. With the use of an appropriate statistical model for ratiometric signals and by taking the pixel-pixel covariance of an imaging dataset into account, we are able to extract user-independent spatiotemporal information that retains high resolution in both space and time.
Araújo, Thiago Antonio Sousa; Almeida, Alyson Luiz Santos; Melo, Joabe Gomes; Medeiros, Maria Franco Trindade; Ramos, Marcelo Alves; Silva, Rafael Ricardo Vasconcelos; Almeida, Cecília Fátima Castelo Branco Rangel; Albuquerque, Ulysses Paulino
2012-03-15
We propose a new quantitative measure that enables the researcher to make decisions and test hypotheses about the distribution of knowledge in a community and estimate the richness and sharing of information among informants. In our study, this measure has two levels of analysis: intracultural and intrafamily. Using data collected in northeastern Brazil, we evaluated how these new estimators of richness and sharing behave for different categories of use. We observed trends in the distribution of the characteristics of informants. We were also able to evaluate how outliers interfere with these analyses and how other analyses may be conducted using these indices, such as determining the distance between the knowledge of a community and that of experts, as well as exhibiting the importance of these individuals' communal information of biological resources. One of the primary applications of these indices is to supply the researcher with an objective tool to evaluate the scope and behavior of the collected data.
On sweat analysis for quantitative estimation of dehydration during physical exercise.
Ring, Matthias; Lohmueller, Clemens; Rauh, Manfred; Eskofier, Bjoern M
2015-08-01
Quantitative estimation of water loss during physical exercise is of importance because dehydration can impair both muscular strength and aerobic endurance. A physiological indicator for deficit of total body water (TBW) might be the concentration of electrolytes in sweat. It has been shown that concentrations differ after physical exercise depending on whether water loss was replaced by fluid intake or not. However, to the best of our knowledge, this fact has not been examined for its potential to quantitatively estimate TBW loss. Therefore, we conducted a study in which sweat samples were collected continuously during two hours of physical exercise without fluid intake. A statistical analysis of these sweat samples revealed significant correlations between chloride concentration in sweat and TBW loss (r = 0.41, p < 0.01), and between sweat osmolality and TBW loss (r = 0.43, p < 0.01). A quantitative estimation of TBW loss resulted in a mean absolute error of 0.49 L per estimate. Although the precision has to be improved for practical applications, the present results suggest that TBW loss estimation could be realizable using sweat samples.
NASA Astrophysics Data System (ADS)
Fergusson-Kolmes, L. A.
2016-02-01
Plastic pollution in the ocean is a critical issue. The high profile of this issue in the popular media makes it an opportune vehicle for promoting deeper understanding of the topic while also advancing student learning in the core competency areas identified in the NSF's Vision and Change document: integration of the process of science, quantitative reasoning, modeling and simulation, and an understanding of the relationship between science and society. This is a challenging task in an introductory non-majors class where the students may have very limited math skills and no prior science background. Here, activities are described that ask students to use an understanding of density to make predictions and test them as they consider the fate of different kinds of plastics in the marine environment. A comparison of the results from different sampling regimes introduces students to the difficulties of carrying out scientific investigations in the complex marine environment while also building quantitative literacy skills. Activities that call on students to make connections between global issues of plastic pollution and personal actions include extraction of microplastic from personal care products, inventories of local plastic-recycling options and estimations of contributions to the waste stream on an individual level. This combination of hands-on activities in an accessible context helps students appreciate the immediacy of the threat of plastic pollution and calls on them to reflect on possible solutions.
Prescott, Jeffrey William
2013-02-01
The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharamaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, which is a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.
Data analysis in emission tomography using emission-count posteriors
NASA Astrophysics Data System (ADS)
Sitek, Arkadiusz
2012-11-01
A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count) conditioned on acquired tomographic data is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided. The application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered as a tomographic image reconstruction technique since the estimates of the number of emissions per voxel divided by voxel sensitivities and acquisition time are the estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r-times the number of events in some other ROI is tested. The ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.
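A single-voxel toy version of the emission-count idea is sketched below: with a conjugate Gamma prior on voxel activity and a Poisson emission count, the minimum-mean-square-error estimate is the posterior mean. All numbers are hypothetical, and the sketch ignores the tomographic (indirect) nature of real projection data.

```python
# k ~ Poisson(activity * sensitivity * t); activity ~ Gamma(alpha, beta) (shape, rate).
# The posterior is Gamma(alpha + k, beta + sensitivity * t); its mean is the MMSE estimate.
alpha, beta = 1.0, 0.1        # weakly informative prior (made up)
sensitivity, t = 0.05, 60.0   # voxel sensitivity and acquisition time (made up)
k = 37                        # emission count attributed to the voxel

posterior_shape = alpha + k
posterior_rate = beta + sensitivity * t
activity_mmse = posterior_shape / posterior_rate   # posterior mean = MMSE estimator
print("MMSE activity estimate:", round(activity_mmse, 2))
```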
Bayes' theorem and quantitative risk assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaplan, S.
1994-12-31
This paper argues that for a quantitative risk analysis (QRA) to be useful for public and private decision making, and for rallying the support necessary to implement those decisions, it is necessary that the QRA results be "trustable." Trustable means that the results are based solidly and logically on all the relevant evidence available. This, in turn, means that the quantitative results must be derived from the evidence using Bayes' theorem. Thus, the paper argues that one should strive to make QRAs more clearly and explicitly Bayesian, and in this way make them more "evidence dependent" than "personality dependent."
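As a generic illustration of the argument (not an example from the paper), the sketch below applies Bayes' theorem to update a discrete prior over candidate event frequencies using a Poisson likelihood for the observed operating record.

```python
import numpy as np
from scipy.stats import poisson

freqs = np.array([0.01, 0.1, 1.0])      # candidate event frequencies [per year]
prior = np.array([0.5, 0.4, 0.1])       # analyst's prior over the hypotheses (made up)

# Evidence: 2 events observed in 10 years of operation (illustrative numbers).
likelihood = poisson.pmf(2, mu=freqs * 10.0)

posterior = prior * likelihood
posterior /= posterior.sum()            # Bayes' theorem, normalized
print(dict(zip(freqs, posterior.round(3))))
```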
Li, Zhigang; Wang, Qiaoyun; Lv, Jiangtao; Ma, Zhenhe; Yang, Linjuan
2015-06-01
Spectroscopy is often applied when a rapid quantitative analysis is required, but one challenge is the translation of raw spectra into a final analysis. Derivative spectra are often used as a preliminary preprocessing step to resolve overlapping signals, enhance signal properties, and suppress unwanted spectral features that arise due to non-ideal instrument and sample properties. In this study, to improve quantitative analysis of near-infrared spectra, derivatives of noisy raw spectral data need to be estimated with high accuracy. A new spectral estimator based on singular perturbation technique, called the singular perturbation spectra estimator (SPSE), is presented, and the stability analysis of the estimator is given. Theoretical analysis and simulation experimental results confirm that the derivatives can be estimated with high accuracy using this estimator. Furthermore, the effectiveness of the estimator for processing noisy infrared spectra is evaluated using the analysis of beer spectra. The derivative spectra of the beer and the marzipan are used to build the calibration model using partial least squares (PLS) modeling. The results show that the PLS based on the new estimator can achieve better performance compared with the Savitzky-Golay algorithm and can serve as an alternative choice for quantitative analytical applications.
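The SPSE itself is not reproduced here; for orientation, the sketch below shows only the Savitzky-Golay baseline that the paper compares against, estimating the first derivative of a synthetic noisy spectrum with SciPy (window length and polynomial order are tuning choices).

```python
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(0.0, 10.0, 500)
dx = x[1] - x[0]
rng = np.random.default_rng(0)
y = np.exp(-((x - 5.0) ** 2) / 2.0) + 0.01 * rng.normal(size=x.size)  # noisy "spectrum"

# First-derivative estimate of the noisy signal.
dy = savgol_filter(y, window_length=31, polyorder=3, deriv=1, delta=dx)

true_dy = -(x - 5.0) * np.exp(-((x - 5.0) ** 2) / 2.0)
print("RMS error of derivative estimate:", round(float(np.sqrt(np.mean((dy - true_dy) ** 2))), 4))
```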
Quantitative Doppler Analysis Using Conventional Color Flow Imaging Acquisitions.
Karabiyik, Yucel; Ekroll, Ingvild Kinn; Eik-Nes, Sturla H; Lovstakken, Lasse
2018-05-01
Interleaved acquisitions used in conventional triplex mode result in a tradeoff between the frame rate and the quality of velocity estimates. On the other hand, workflow becomes inefficient when the user has to switch between different modes, and measurement variability is increased. This paper investigates the use of power spectral Capon estimator in quantitative Doppler analysis using data acquired with conventional color flow imaging (CFI) schemes. To preserve the number of samples used for velocity estimation, only spatial averaging was utilized, and clutter rejection was performed after spectral estimation. The resulting velocity spectra were evaluated in terms of spectral width using a recently proposed spectral envelope estimator. The spectral envelopes were also used for Doppler index calculations using in vivo and string phantom acquisitions. In vivo results demonstrated that the Capon estimator can provide spectral estimates with sufficient quality for quantitative analysis using packet-based CFI acquisitions. The calculated Doppler indices were similar to the values calculated using spectrograms estimated on a commercial ultrasound scanner.
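A minimal sketch of the Capon (minimum-variance) spectral estimate on a synthetic slow-time packet is shown below, with P(f) = 1 / (a(f)^H R^-1 a(f)) and light diagonal loading; the packet size, noise level and averaging scheme are assumptions, not the paper's acquisition setup.

```python
import numpy as np

rng = np.random.default_rng(3)
packet = 12                       # slow-time samples per estimate (CFI-like packet size)
f_true = 0.18                     # normalized Doppler frequency of the synthetic flow signal
n = np.arange(packet)
x = np.exp(2j * np.pi * f_true * n)[None, :] + 0.3 * (
    rng.normal(size=(64, packet)) + 1j * rng.normal(size=(64, packet)))

R = (x.T @ x.conj()) / x.shape[0]                          # sample covariance E[x x^H]
R += 1e-2 * np.real(np.trace(R)) / packet * np.eye(packet) # diagonal loading
Rinv = np.linalg.inv(R)

freqs = np.linspace(-0.5, 0.5, 256)
steer = np.exp(2j * np.pi * np.outer(n, freqs))            # steering vectors a(f)
denom = np.real(np.einsum('ij,ik,kj->j', steer.conj(), Rinv, steer))
p_capon = 1.0 / denom                                      # Capon power spectrum
print("spectral peak at normalized frequency:", round(float(freqs[np.argmax(p_capon)]), 3))
```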
Crisp, Adam; Miller, Sam; Thompson, Douglas; Best, Nicky
2018-04-10
All clinical trials are designed for success of their primary objectives. Hence, evaluating the probability of success (PoS) should be a key focus at the design stage, both to support funding approval from sponsor governance boards and to inform trial design itself. Use of assurance (that is, expected success probability averaged over a prior probability distribution for the treatment effect) to quantify the PoS of a planned study has grown across the industry in recent years, and has now become routine within the authors' company. In this paper, we illustrate some of the benefits of systematically adopting assurance as a quantitative framework to support decision making in drug development through several case studies where evaluation of assurance has proved impactful in terms of trial design and in supporting governance-board reviews of project proposals. In addition, we describe specific features of how the assurance framework has been implemented within our company, highlighting the critical role that prior elicitation plays in this process, and illustrating how the overall assurance calculation may be decomposed into a sequence of conditional PoS estimates which can provide greater insight into how and when different development options are able to discharge risk. Copyright © 2018 John Wiley & Sons, Ltd.
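A minimal sketch of the assurance calculation is given below for a hypothetical two-arm trial with a normal endpoint: power for a one-sided z-test is averaged over Monte Carlo draws from a normal prior on the treatment effect. All design numbers are illustrative.

```python
import numpy as np
from scipy.stats import norm

def assurance(n_per_arm, sigma, prior_mean, prior_sd, alpha=0.025, draws=100_000, seed=0):
    """Assurance = E_prior[power(delta)] for a two-arm, one-sided z-test."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(prior_mean, prior_sd, size=draws)   # prior draws of the true effect
    se = sigma * np.sqrt(2.0 / n_per_arm)                  # SE of the estimated difference
    power = norm.cdf(delta / se - norm.ppf(1.0 - alpha))   # power conditional on delta
    return float(power.mean())

# Hypothetical design: 100 patients/arm, outcome SD 8, prior effect 3 +/- 2.
print("assurance:", round(assurance(100, 8.0, 3.0, 2.0), 3))
```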
Optimization of a middle atmosphere diagnostic scheme
NASA Astrophysics Data System (ADS)
Akmaev, Rashid A.
1997-06-01
A new assimilative diagnostic scheme based on the use of a spectral model was recently tested on the CIRA-86 empirical model. It reproduced the observed climatology with an annual global rms temperature deviation of 3.2 K in the 15-110 km layer. The most important new component of the scheme is that the zonal forcing necessary to maintain the observed climatology is diagnosed from empirical data and subsequently substituted into the simulation model at the prognostic stage of the calculation in an annual cycle mode. The simulation results are then quantitatively compared with the empirical model, and the above-mentioned rms temperature deviation provides an objective measure of the 'distance' between the two climatologies. This quantitative criterion makes it possible to apply standard optimization procedures to the whole diagnostic scheme and/or the model itself. The estimates of the zonal drag have been improved in this study by introducing a nudging (Newtonian-cooling) term into the thermodynamic equation at the diagnostic stage. A proper optimal adjustment of the strength of this term makes it possible to further reduce the rms temperature deviation of simulations down to approximately 2.7 K. These results suggest that direct optimization can successfully be applied to atmospheric model parameter identification problems of moderate dimensionality.
Smile line assessment comparing quantitative measurement and visual estimation.
Van der Geld, Pieter; Oosterveld, Paul; Schols, Jan; Kuijpers-Jagtman, Anne Marie
2011-02-01
Esthetic analysis of dynamic functions such as spontaneous smiling is feasible by using digital videography and computer measurement for lip line height and tooth display. Because quantitative measurements are time-consuming, digital videography and semiquantitative (visual) estimation according to a standard categorization are more practical for regular diagnostics. Our objective in this study was to compare 2 semiquantitative methods with quantitative measurements for reliability and agreement. The faces of 122 male participants were individually registered by using digital videography. Spontaneous and posed smiles were captured. On the records, maxillary lip line heights and tooth display were digitally measured on each tooth and also visually estimated according to 3-grade and 4-grade scales. Two raters were involved. An error analysis was performed. Reliability was established with kappa statistics. Interexaminer and intraexaminer reliability values were high, with median kappa values from 0.79 to 0.88. Agreement of the 3-grade scale estimation with quantitative measurement showed higher median kappa values (0.76) than the 4-grade scale estimation (0.66). Differentiating high and gummy smile lines (4-grade scale) resulted in greater inaccuracies. The estimation of a high, average, or low smile line for each tooth showed high reliability close to quantitative measurements. Smile line analysis can be performed reliably with a 3-grade scale (visual) semiquantitative estimation. For a more comprehensive diagnosis, additional measuring is proposed, especially in patients with disproportional gingival display. Copyright © 2011 American Association of Orthodontists. Published by Mosby, Inc. All rights reserved.
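For reference, the (unweighted) Cohen's kappa used for such agreement analyses can be computed as sketched below on made-up 3-grade smile-line ratings from two raters; a real analysis might prefer a weighted kappa for ordered categories.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Unweighted Cohen's kappa for two raters scoring the same items."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n          # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / n ** 2   # chance agreement
    return (observed - expected) / (1.0 - expected)

r1 = ["low", "avg", "high", "avg", "avg", "low", "high", "avg", "low", "avg", "high", "avg"]
r2 = ["low", "avg", "high", "avg", "low", "low", "high", "avg", "low", "high", "high", "avg"]
print("kappa:", round(cohens_kappa(r1, r2), 2))
```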
Teixeira, Juliana Araujo; Baggio, Maria Luiza; Fisberg, Regina Mara; Marchioni, Dirce Maria Lobo
2010-12-01
The objective of this study was to estimate calibration regressions for the dietary data measured using the quantitative food frequency questionnaire (QFFQ) in the Natural History of HPV Infection in Men: the HIM Study in Brazil. A sample of 98 individuals from the HIM study answered one QFFQ and three 24-hour recalls (24HR) at interviews. The calibration was performed using linear regression analysis in which the 24HR was the dependent variable and the QFFQ was the independent variable. Age, body mass index, physical activity, income and schooling were used as adjustment variables in the models. The geometric means of the 24HR and the calibration-corrected QFFQ were statistically equal. The dispersion graphs between the instruments demonstrate increased correlation after the correction, although there is greater dispersion of the points when the explanatory power of the models is worse. The calibration regressions identified for the dietary data of the HIM study will make it possible to estimate the effect of diet on HPV infection, corrected for the measurement error of the QFFQ.
Exact comprehensive equations for the photon management properties of silicon nanowire
Li, Yingfeng; Li, Meicheng; Li, Ruike; Fu, Pengfei; Wang, Tai; Luo, Younan; Mbengue, Joseph Michel; Trevor, Mwenya
2016-01-01
Unique photon management (PM) properties of silicon nanowire (SiNW) make it an attractive building block for a host of nanowire photonic devices including photodetectors, chemical and gas sensors, waveguides, optical switches, solar cells, and lasers. However, the lack of efficient equations for the quantitative estimation of the SiNW's PM properties limits the rational design of such devices. Herein, we establish comprehensive equations to evaluate several important performance features of the PM properties of SiNW, based on theoretical simulations. First, the relationships between the resonant wavelengths (RW), where SiNW can harvest light most effectively, and the size of SiNW are formulated. Then, equations for the light-harvesting efficiency at RW, which determines the single-frequency performance limit of SiNW-based photonic devices, are established. Finally, equations for the light-harvesting efficiency of SiNW over the full spectrum, which are of great significance in photovoltaics, are established. Furthermore, using these equations, we have derived four extra formulas to estimate the optimal size of SiNW for light harvesting. These equations reproduce the majority of the reported experimental and theoretical results with deviations of only ~5%. Our study fills a gap in quantitatively predicting the PM properties of SiNW, which will contribute significantly to its practical applications. PMID:27103087
Macarthur, Roy; Feinberg, Max; Bertheau, Yves
2010-01-01
A method is presented for estimating the size of uncertainty associated with the measurement of products derived from genetically modified organisms (GMOs). The method is based on the uncertainty profile, which is an extension, for the estimation of uncertainty, of a recent graphical statistical tool called an accuracy profile that was developed for the validation of quantitative analytical methods. The application of uncertainty profiles as an aid to decision making and assessment of fitness for purpose is also presented. Results of the measurement of the quantity of GMOs in flour by PCR-based methods collected through a number of interlaboratory studies followed the log-normal distribution. Uncertainty profiles built using the results generally give an expected range for measurement results of 50-200% of reference concentrations for materials that contain at least 1% GMO. This range is consistent with European Network of GM Laboratories and the European Union (EU) Community Reference Laboratory validation criteria and can be used as a fitness for purpose criterion for measurement methods. The effect on the enforcement of EU labeling regulations is that, in general, an individual analytical result needs to be < 0.45% to demonstrate compliance, and > 1.8% to demonstrate noncompliance with a labeling threshold of 0.9%.
Estimation of mortality for stage-structured zooplankton populations: What is to be done?
NASA Astrophysics Data System (ADS)
Ohman, Mark D.
2012-05-01
Estimation of zooplankton mortality rates in field populations is a challenging task that some contend is inherently intractable. This paper examines several of the objections that are commonly raised to efforts to estimate mortality. We find that there are circumstances in the field where it is possible to sequentially sample the same population and to resolve biologically caused mortality, albeit with error. Precision can be improved with sampling directed by knowledge of the physical structure of the water column, combined with adequate sample replication. Intercalibration of sampling methods can make it possible to sample across the life history in a quantitative manner. Rates of development can be constrained by laboratory-based estimates of stage durations from temperature- and food-dependent functions, mesocosm studies of molting rates, or approximation of development rates from growth rates, combined with the vertical distributions of organisms in relation to food and temperature gradients. Careful design of field studies guided by the assumptions of specific estimation models can lead to satisfactory mortality estimates, but model uncertainty also needs to be quantified. We highlight additional issues requiring attention to further advance the field, including the need for linked cooperative studies of the rates and causes of mortality of co-occurring holozooplankton and ichthyoplankton.
NASA Astrophysics Data System (ADS)
Xie, Yijing; Thom, Maria; Ebner, Michael; Wykes, Victoria; Desjardins, Adrien; Miserocchi, Anna; Ourselin, Sebastien; McEvoy, Andrew W.; Vercauteren, Tom
2017-11-01
In high-grade glioma surgery, tumor resection is often guided by intraoperative fluorescence imaging. 5-aminolevulinic acid-induced protoporphyrin IX (PpIX) provides fluorescent contrast between normal brain tissue and glioma tissue, thus achieving improved tumor delineation and prolonged patient survival compared with conventional white-light-guided resection. However, commercially available fluorescence imaging systems rely solely on visual assessment of fluorescence patterns by the surgeon, which makes the resection more subjective than necessary. We developed a wide-field spectrally resolved fluorescence imaging system utilizing a Generation II scientific CMOS camera and an improved computational model for the precise reconstruction of the PpIX concentration map. In our model, the tissue's optical properties and illumination geometry, which distort the fluorescent emission spectra, are considered. We demonstrate that the CMOS-based system can detect low PpIX concentration at short camera exposure times, while providing high-pixel resolution wide-field images. We show that total variation regularization improves the contrast-to-noise ratio of the reconstructed quantitative concentration map by approximately twofold. Quantitative comparison between the estimated PpIX concentration and tumor histopathology was also investigated to further evaluate the system.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-24
... this location in 2008. No quantitative estimate of the size of this remaining population is available... observed in 1998. No quantitative estimates of the size of the extant populations are available. Howarth...
Method for matching customer and manufacturer positions for metal product parameters standardization
NASA Astrophysics Data System (ADS)
Polyakova, Marina; Rubin, Gennadij; Danilova, Yulija
2018-04-01
Decision making is the main stage in regulating the relations between customer and manufacturer when the requirements of standards are being designed. It is necessary to match the positions of the negotiating sides in order to reach consensus. To take into account the differences between the customer's and the manufacturer's assessments of the object undergoing standardization, special methods of analysis are needed. It is proposed to establish relationships between product properties and product functions using functional-target analysis; the special feature of this type of functional analysis is that it considers both the functions and the properties of the object under study. Using the example of a hexagonal head screw, we show that links can be established between its functions and properties. Such an approach makes it possible to obtain a quantitative assessment of how close the positions of customer and manufacturer are when decisions are made during the establishment of standard norms.
Aquatic effects assessment: needs and tools.
Marchini, Silvia
2002-01-01
In the assessment of the adverse effects that pollutants can produce on exposed ecosystems, different approaches can be followed depending on the quality and quantity of the information available; their advantages and limits are discussed here with reference to the aquatic compartment. When experimental data are lacking, a predictive approach can be pursued by making use of validated quantitative structure-activity relationships (QSARs), which provide reliable ecotoxicity estimates only if appropriate models are applied. The experimental approach is central to any environmental hazard assessment procedure, although many uncertainties underlying the extrapolation from a limited set of single-species laboratory data to the complexity of the ecosystem (e.g., the limitations of common summary statistics, the variability of species sensitivity, the need to consider alterations at higher levels of integration) make the task difficult. When adequate toxicity information is available, the statistical extrapolation approach can be used to predict environmentally compatible concentrations.
Using CTX Image Features to Predict HiRISE-Equivalent Rock Density
NASA Technical Reports Server (NTRS)
Serrano, Navid; Huertas, Andres; McGuire, Patrick; Mayer, David; Ardvidson, Raymond
2010-01-01
Methods have been developed to quantitatively assess rock hazards at candidate landing sites with the aid of images from the HiRISE camera onboard NASA's Mars Reconnaissance Orbiter. HiRISE is able to resolve rocks as small as 1 m in diameter. Some sites of interest do not have adequate coverage with the highest resolution sensors, and there is a need to infer relevant information (like site safety or underlying geomorphology). The proposed approach would make it possible to obtain rock density estimates at a level close to or equal to those obtained from high-resolution sensors where individual rocks are discernable.
BlueSNP: R package for highly scalable genome-wide association studies using Hadoop clusters.
Huang, Hailiang; Tata, Sandeep; Prill, Robert J
2013-01-01
Computational workloads for genome-wide association studies (GWAS) are growing in scale and complexity outpacing the capabilities of single-threaded software designed for personal computers. The BlueSNP R package implements GWAS statistical tests in the R programming language and executes the calculations across computer clusters configured with Apache Hadoop, a de facto standard framework for distributed data processing using the MapReduce formalism. BlueSNP makes computationally intensive analyses, such as estimating empirical p-values via data permutation, and searching for expression quantitative trait loci over thousands of genes, feasible for large genotype-phenotype datasets. http://github.com/ibm-bioinformatics/bluesnp
Electronic structure and microscopic model of V2GeO4F2, a quantum spin system with S = 1.
Rahaman, Badiur; Saha-Dasgupta, T
2007-07-25
We present first-principles density functional calculations and downfolding studies of the electronic and magnetic properties of the oxide-fluoride quantum spin system V2GeO4F2. We discuss explicitly the nature of the exchange paths and provide quantitative estimates of the magnetic exchange couplings. Microscopic modelling based on analysis of the electronic structure of this system places it in the interesting class of weakly coupled alternating-chain S = 1 systems. Based on the microscopic model, we make inferences about its spin excitation spectra, which need to be tested by rigorous experimental study.
Wallace, Jack
2010-05-01
While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
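A sketch of approach (ii), using made-up proficiency-test data, is shown below: the laboratory's differences from the participant means give a bias and a scatter term, combined in quadrature and expanded with a coverage factor of 2. Other combination rules are possible; this is one common convention.

```python
import numpy as np

# Lab result minus participant mean for each proficiency test (hypothetical, g/100 mL).
diffs = np.array([0.002, -0.001, 0.003, 0.000, -0.002, 0.001, 0.002, -0.003])

bias = diffs.mean()                      # systematic component
s_diff = diffs.std(ddof=1)               # random component
u_combined = np.sqrt(bias ** 2 + s_diff ** 2)
U_expanded = 2.0 * u_combined            # coverage factor k = 2 (~95% coverage)

print(f"bias = {bias:.4f}, u = {u_combined:.4f}, U(k=2) = {U_expanded:.4f}")
```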
Ge, Yawen; Li, Yuecong; Bunting, M Jane; Li, Bing; Li, Zetao; Wang, Junting
2017-05-15
Vegetation reconstructions from palaeoecological records depend on adequate understanding of relationships between modern pollen, vegetation and climate. A key parameter for quantitative vegetation reconstructions is the Relative Pollen Productivity (RPP). Differences in both environmental and methodological factors are known to alter the estimated RPP significantly, making it difficult to determine whether the underlying pollen productivity does actually vary, and if so, why. In this paper, we present the results of a replication study for the Bashang steppe region, a typical steppe area in northern China, carried out in 2013 and 2014. In each year, 30 surface samples were collected for pollen analysis, with accompanying vegetation survey using the "Crackles Bequest Project" methodology. Sampling designs differed slightly between the two years: in 2013, sites were located completely randomly, whilst in 2014 sampling locations were constrained to be within a few km of roads. There is strong inter-annual variability in both the pollen and the vegetation spectra, and therefore in RPPs, and annual precipitation may be a key influence on these variations. The pollen assemblages in both years are dominated by herbaceous taxa such as Artemisia, Amaranthaceae, Poaceae, Asteraceae, Cyperaceae, Fabaceae and Allium. Artemisia and Amaranthaceae pollen are significantly over-represented relative to their vegetation abundance. Poaceae, Cyperaceae and Fabaceae appear under-represented relative to their vegetation abundance, with correspondingly lower RPPs. Asteraceae seems to be well represented, with moderate RPPs and less annual variation. The estimated Relevant Source Area of Pollen (RSAP) ranges from 2000 to 3000 m. Different sampling designs affect both RSAP and RPPs, and random sample selection may be the best strategy for obtaining robust estimates. Our results have implications for further pollen-vegetation relationship and quantitative vegetation reconstruction research in typical steppe areas and in other open habitats with strong inter-annual variation. Copyright © 2017 Elsevier B.V. All rights reserved.
Optimizing Hybrid Metrology: Rigorous Implementation of Bayesian and Combined Regression.
Henn, Mark-Alexander; Silver, Richard M; Villarrubia, John S; Zhang, Nien Fan; Zhou, Hui; Barnes, Bryan M; Ming, Bin; Vladár, András E
2015-01-01
Hybrid metrology, e.g., the combination of several measurement techniques to determine critical dimensions, is an increasingly important approach to meet the needs of the semiconductor industry. A proper use of hybrid metrology may yield not only more reliable estimates for the quantitative characterization of 3-D structures but also a more realistic estimation of the corresponding uncertainties. Recent developments at the National Institute of Standards and Technology (NIST) feature the combination of optical critical dimension (OCD) measurements and scanning electron microscope (SEM) results. The hybrid methodology offers the potential to make measurements of essential 3-D attributes that may not be otherwise feasible. However, combining techniques gives rise to essential challenges in error analysis and comparing results from different instrument models, especially the effect of systematic and highly correlated errors in the measurement on the χ2 function that is minimized. Both hypothetical examples and measurement data are used to illustrate solutions to these challenges.
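The role of correlated errors in the minimized χ2 can be illustrated with the generalized least-squares form χ2 = r^T Σ^-1 r, as in the toy sketch below; the residuals and covariance matrix are invented for illustration.

```python
import numpy as np

r = np.array([0.8, -0.5, 0.3])           # residuals: measured minus modeled dimensions

# Toy covariance: equal variances plus a strong positive correlation between the
# first two channels (e.g. a shared systematic error between two techniques).
sigma = np.array([[0.25, 0.20, 0.00],
                  [0.20, 0.25, 0.00],
                  [0.00, 0.00, 0.25]])

chi2_uncorrelated = float(np.sum(r ** 2 / np.diag(sigma)))   # ignores correlations
chi2_correlated = float(r @ np.linalg.solve(sigma, r))       # full covariance form
print(chi2_uncorrelated, chi2_correlated)
```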
Influence of safety measures on the risks of transporting dangerous goods through road tunnels.
Saccomanno, Frank; Haastrup, Palle
2002-12-01
Quantitative risk assessment (QRA) models are used to estimate the risks of transporting dangerous goods and to assess the merits of introducing alternative risk reduction measures for different transportation scenarios and assumptions. A comprehensive QRA model recently was developed in Europe for application to road tunnels. This model can assess the merits of a limited number of "native safety measures." In this article, we introduce a procedure for extending its scope to include the treatment of a number of important "nonnative safety measures" of interest to tunnel operators and decisionmakers. Nonnative safety measures were not included in the original model specification. The suggested procedure makes use of expert judgment and Monte Carlo simulation methods to model uncertainty in the revised risk estimates. The results of a case study application are presented that involve the risks of transporting a given volume of flammable liquid through a 10-km road tunnel.
Forecasting seasonal outbreaks of influenza.
Shaman, Jeffrey; Karspeck, Alicia
2012-12-11
Influenza recurs seasonally in temperate regions of the world; however, our ability to predict the timing, duration, and magnitude of local seasonal outbreaks of influenza remains limited. Here we develop a framework for initializing real-time forecasts of seasonal influenza outbreaks, using a data assimilation technique commonly applied in numerical weather prediction. The availability of real-time, web-based estimates of local influenza infection rates makes this type of quantitative forecasting possible. Retrospective ensemble forecasts are generated on a weekly basis following assimilation of these web-based estimates for the 2003-2008 influenza seasons in New York City. The findings indicate that real-time skillful predictions of peak timing can be made more than 7 wk in advance of the actual peak. In addition, confidence in those predictions can be inferred from the spread of the forecast ensemble. This work represents an initial step in the development of a statistically rigorous system for real-time forecast of seasonal influenza.
NIRS-SPM: statistical parametric mapping for near infrared spectroscopy
NASA Astrophysics Data System (ADS)
Tak, Sungho; Jang, Kwang Eun; Jung, Jinwook; Jang, Jaeduck; Jeong, Yong; Ye, Jong Chul
2008-02-01
Even though there exists a powerful statistical parametric mapping (SPM) tool for fMRI, similar public domain tools are not available for near infrared spectroscopy (NIRS). In this paper, we describe a new public domain statistical toolbox called NIRS-SPM for quantitative analysis of NIRS signals. Specifically, NIRS-SPM statistically analyzes NIRS data using the general linear model (GLM) and makes inference via the excursion probability of a random field interpolated from the sparse measurements. In order to obtain correct inference, NIRS-SPM offers pre-coloring and pre-whitening methods for temporal correlation estimation. For NIRS signals recorded simultaneously with fMRI, the spatial mapping between the fMRI image and the real-world coordinates from a 3-D digitizer is estimated using Horn's algorithm. These powerful tools allow super-resolution localization of brain activation, which is not possible using conventional NIRS analysis tools.
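As a rough illustration of the GLM step described above, here is a minimal numpy sketch (not the NIRS-SPM toolbox); the boxcar design, gamma-shaped response kernel, and noise level are assumptions, and the pre-coloring/pre-whitening handling of temporal correlation is omitted.

```python
import numpy as np

# Minimal GLM for a single-channel NIRS-like time series (illustrative only).
rng = np.random.default_rng(0)
fs, n = 10.0, 600                                # 10 Hz sampling, 60 s (hypothetical)
t = np.arange(n) / fs
box = ((t % 20) < 10).astype(float)              # 10 s on / 10 s off task blocks
h_t = np.arange(0, 15, 1 / fs)
hrf = (h_t ** 5) * np.exp(-h_t)                  # crude gamma kernel standing in for the HRF
hrf /= hrf.sum()
task = np.convolve(box, hrf)[:n]                 # task regressor

X = np.column_stack([task, np.ones(n)])          # design matrix: task + constant
y = 0.8 * task + rng.normal(0, 0.3, n)           # synthetic oxy-Hb signal

beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # ordinary least squares fit
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
c = np.array([1.0, 0.0])                         # contrast: task effect
t_stat = (c @ beta) / np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
print(f"beta_task={beta[0]:.3f}, t={t_stat:.2f}")
```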
A fault tree model to assess probability of contaminant discharge from shipwrecks.
Landquist, H; Rosén, L; Lindhe, A; Norberg, T; Hassellöv, I-M; Lindgren, J F; Dahllöf, I
2014-11-15
Shipwrecks on the sea floor around the world may contain hazardous substances that can cause harm to the marine environment. Today there are no comprehensive methods for environmental risk assessment of shipwrecks, and thus there is poor support for decision-making on prioritization of mitigation measures. The purpose of this study was to develop a tool for quantitative risk estimation of potentially polluting shipwrecks, and in particular an estimation of the annual probability of hazardous substance discharge. The assessment of the probability of discharge is performed using fault tree analysis, facilitating quantification of the probability with respect to a set of identified hazardous events. This approach enables a structured assessment providing transparent uncertainty and sensitivity analyses. The model facilitates quantification of risk, quantification of the uncertainties in the risk calculation and identification of parameters to be investigated further in order to obtain a more reliable risk calculation. Copyright © 2014 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Baldwin, Grover H.
The use of quantitative decision making tools provides the decision maker with a range of alternatives among which to decide, permits acceptance and use of the optimal solution, and decreases risk. Training line administrators in the use of these tools can help school business officials obtain reliable information upon which to base district…
ERIC Educational Resources Information Center
Caglayan, Günhan
2013-01-01
This study is about prospective secondary mathematics teachers' understanding and sense making of representational quantities generated by algebra tiles, the quantitative units (linear vs. areal) inherent in the nature of these quantities, and the quantitative addition and multiplication operations--referent preserving versus referent…
Ito, Kazunari; Gomi, Katsuya; Kariyama, Masahiro; Miyake, Tsuyoshi
2017-07-01
The construction of an experimental system that can mimic koji making in the manufacturing setting of a sake brewery is initially required for the quantitative evaluation of mycelia grown on/in koji pellets (haze formation). Koji making with rice was investigated with a solid-state fermentation (SSF) system using a non-airflow box (NAB), which produced uniform conditions in the culture substrate with high reproducibility and allowed for the control of favorable conditions in the substrate during culture. The SSF system using NAB accurately reproduced koji making in a manufacturing setting. To evaluate haze formation during koji making, surfaces and cross sections of koji pellets obtained from koji making tests were observed using a digital microscope. Image analysis was used to distinguish between haze and non-haze sections of koji pellets, enabling the evaluation of haze formation in a batch by measuring the haze rate of a specific number of koji pellets. This method allowed us to obtain continuous and quantitative data on the time course of haze formation. Moreover, drying koji during the late stage of koji making was revealed to cause further penetration of mycelia into koji pellets (internal haze). The koji making test with the SSF system using NAB, combined with quantitative evaluation of haze formation in a batch by image analysis, is a useful method for understanding the relations between haze formation and koji making conditions. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Smith, Eric G.
2015-01-01
Background: Nonrandomized studies typically cannot account for confounding from unmeasured factors. Method: A method is presented that exploits the recently-identified phenomenon of “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors. Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure. Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results: Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met. Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations: Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions: To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward. The method's routine usefulness, however, has not yet been established, nor has the method been fully validated. Rapid further investigation of this novel method is clearly indicated, given the potential value of its quantitative or qualitative output. PMID:25580226
Changren Weng; Thomas L. Kubisiak; C. Dana Nelson; James P. Geaghan; Michael Stine
1999-01-01
Single marker regression and single marker maximum likelihood estimation were used to detect quantitative trait loci (QTLs) controlling the early height growth of longleaf pine and slash pine using a ((longleaf pine x slash pine) x slash pine) BC1 population consisting of 83 progeny. Maximum likelihood estimation was found to be more powerful than regression and could...
The environmental factors as reason for emotional tension
NASA Astrophysics Data System (ADS)
Prisniakova, L.
Information from the environment activates the organism: it triggers abrupt changes in nervous processes and gives rise to emotions. Some emotions organize and support activity; others disorganize it. In perception, decision making, the performance of operations, and learning, emotional excitation improves performance on easier problems and degrades it on more difficult ones. The report presents the results of a quantitative determination of the effect of the level of emotional tension on successful activity. A reversal of the sign of its influence on the efficiency of human activity is detected. The effect of emotional tension on the efficiency of professional work was shown to resemble the influence of motivation described by the Yerkes-Dodson law. The report introduces a mathematical model connecting successful activity with motivation or emotional tension. These results can serve as a theoretical, idealized basis for the quantitative characteristics used to estimate astronaut performance under emotional stress at the selection phase.
A novel 3D imaging system for strawberry phenotyping.
He, Joe Q; Harrison, Richard J; Li, Bo
2017-01-01
Accurate, quantitative phenotypic data are vital in plant breeding programmes for assessing the performance of genotypes and making selections. Traditional strawberry phenotyping relies on the human eye to assess most external fruit quality attributes, which is time-consuming and subjective. 3D imaging is a promising high-throughput technique that allows multiple external fruit quality attributes to be measured simultaneously. A low cost multi-view stereo (MVS) imaging system was developed, which captured data from 360° around a target strawberry fruit. A 3D point cloud of the sample was derived and analysed with custom-developed software to estimate berry height, length, width, volume, calyx size, colour and achene number. Analysis of these traits in 100 fruits showed good concordance with manual assessment methods. This study demonstrates the feasibility of an MVS based 3D imaging system for the rapid and quantitative phenotyping of seven agronomically important external strawberry traits. With further improvement, this method could be applied in strawberry breeding programmes as a cost effective phenotyping technique.
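For readers unfamiliar with how size and volume fall out of a point cloud, the sketch below computes a convex-hull volume and axis-aligned extents for a synthetic ellipsoidal cloud; it is illustrative only and not the authors' custom software.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Estimate "berry" volume and bounding dimensions from a 3-D point cloud
# (synthetic ellipsoid standing in for MVS output; all numbers hypothetical).
rng = np.random.default_rng(1)
pts = rng.normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)   # points on a unit sphere
pts *= np.array([12.0, 15.0, 11.0])                 # stretch to mm semi-axes

hull = ConvexHull(pts)
height, length, width = np.ptp(pts, axis=0)         # axis-aligned extents, mm
print(f"volume ~ {hull.volume:.0f} mm^3, H x L x W = "
      f"{height:.1f} x {length:.1f} x {width:.1f} mm")
```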
NASA Astrophysics Data System (ADS)
Xie, Yijing; Thom, Maria; Miserocchi, Anna; McEvoy, Andrew W.; Desjardins, Adrien; Ourselin, Sebastien; Vercauteren, Tom
2017-02-01
In glioma resection surgery, the detection of tumour is often guided by intraoperative fluorescence imaging, notably with 5-ALA-PpIX, providing fluorescent contrast between normal brain tissue and glioma tissue to achieve improved tumour delineation and prolonged patient survival compared with conventional white-light guided resection. However, the commercially available fluorescence imaging system relies on the surgeon's eyes to visualise and distinguish the fluorescence signals, which unfortunately makes the resection subjective. In this study, we developed a novel multi-scale spectrally-resolved fluorescence imaging system and a computational model for quantification of PpIX concentration. The system consisted of a wide-field spectrally-resolved quantitative imaging device and a fluorescence endomicroscopic imaging system enabling optical biopsy. Ex vivo animal tissue experiments as well as human tumour sample studies demonstrated that the system was capable of specifically detecting the PpIX fluorescent signal and estimating the true concentration of PpIX in brain specimens.
Burstyn, Igor; Boffetta, Paolo; Kauppinen, Timo; Heikkilä, Pirjo; Svane, Ole; Partanen, Timo; Stücker, Isabelle; Frentzel-Beyme, Rainer; Ahrens, Wolfgang; Merzenich, Hiltrud; Heederik, Dick; Hooiveld, Mariëtte; Langård, Sverre; Randem, Britt G; Järvholm, Bengt; Bergdahl, Ingvar; Shaham, Judith; Ribak, Joseph; Kromhout, Hans
2003-01-01
An exposure matrix (EM) for known and suspected carcinogens was required for a multicenter international cohort study of cancer risk and bitumen among asphalt workers. Production characteristics in companies enrolled in the study were ascertained through use of a company questionnaire (CQ). Exposures to coal tar, bitumen fume, organic vapor, polycyclic aromatic hydrocarbons, diesel fume, silica, and asbestos were assessed semi-quantitatively using information from CQs, expert judgment, and statistical models. Exposures of road paving workers to bitumen fume, organic vapor, and benzo(a)pyrene were estimated quantitatively by applying regression models, based on monitoring data, to exposure scenarios identified by the CQs. Exposure estimates were derived for 217 companies enrolled in the cohort, plus the Swedish asphalt paving industry in general. Most companies were engaged in road paving and asphalt mixing, but some also participated in general construction and roofing. Coal tar use was most common in Denmark and The Netherlands, but the practice is now obsolete. Quantitative estimates of exposure to bitumen fume, organic vapor, and benzo(a)pyrene for pavers, and semi-quantitative estimates of exposure to these agents among all subjects, were strongly correlated. Semi-quantitative estimates of exposure to bitumen fume and coal tar were only moderately correlated. The EM captured a non-monotonic historical decrease in exposure to all agents assessed except silica and diesel exhaust. We produced a data-driven EM using methodology that can be adapted for other multicenter studies. Copyright 2003 Wiley-Liss, Inc.
Human judgment vs. quantitative models for the management of ecological resources.
Holden, Matthew H; Ellner, Stephen P
2016-07-01
Despite major advances in quantitative approaches to natural resource management, there has been resistance to using these tools in the actual practice of managing ecological populations. Given a managed system and a set of assumptions, translated into a model, optimization methods can be used to solve for the most cost-effective management actions. However, when the underlying assumptions are not met, such methods can potentially lead to decisions that harm the environment and economy. Managers who develop decisions based on past experience and judgment, without the aid of mathematical models, can potentially learn about the system and develop flexible management strategies. However, these strategies are often based on subjective criteria and equally invalid and often unstated assumptions. Given the drawbacks of both methods, it is unclear whether simple quantitative models improve environmental decision making over expert opinion. In this study, we explore how well students, using their experience and judgment, manage simulated fishery populations in an online computer game and compare their management outcomes to the performance of model-based decisions. We consider harvest decisions generated using four different quantitative models: (1) the model used to produce the simulated population dynamics observed in the game, with the values of all parameters known (as a control), (2) the same model, but with unknown parameter values that must be estimated during the game from observed data, (3) models that are structurally different from those used to simulate the population dynamics, and (4) a model that ignores age structure. Humans on average performed much worse than the models in cases 1-3, but in a small minority of scenarios, models produced worse outcomes than those resulting from students making decisions based on experience and judgment. When the models ignored age structure, they generated poorly performing management decisions, but still outperformed students using experience and judgment 66% of the time. © 2016 by the Ecological Society of America.
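To make case (2) above concrete, the sketch below fits a simple surplus-production (logistic) model to a hypothetical biomass series and derives a harvest level from the estimated maximum sustainable yield; it is a generic illustration under assumed data and is not one of the four models used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit r and K of a discrete logistic model to observed biomass, then set
# harvest from the estimated maximum sustainable yield, MSY = r*K/4.
def predict(params, n_steps):
    b0, r, k = params
    b = [b0]
    for _ in range(n_steps - 1):
        b.append(b[-1] + r * b[-1] * (1 - b[-1] / k))
    return np.array(b)

obs = np.array([120.0, 150.0, 182.0, 214.0, 241.0, 262.0])   # hypothetical biomass series
steps = np.arange(len(obs))

def model(x, b0, r, k):
    return predict((b0, r, k), len(x))

(b0_hat, r_hat, k_hat), _ = curve_fit(model, steps, obs, p0=(100.0, 0.3, 400.0))
msy = r_hat * k_hat / 4
print(f"r={r_hat:.2f}, K={k_hat:.0f}, MSY harvest ~ {msy:.1f}")
```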
Decision-Making in Multiple Sclerosis Patients: A Systematic Review.
Neuhaus, Mireille; Calabrese, Pasquale; Annoni, Jean-Marie
2018-01-01
Multiple sclerosis (MS) is frequently associated with cognitive and behavioural deficits. A growing number of studies suggest an impact of MS on decision-making abilities. The aim of this systematic review was to assess if (1) performance of MS patients in decision-making tasks was consistently different from controls and (2) whether this modification was associated with cognitive dysfunction and emotional alterations. The search was conducted on Pubmed/Medline database. 12 studies evaluating the difference between MS patients and healthy controls using validated decision-making tasks were included. Outcomes considered were quantitative (net scores) and qualitative measurements (deliberation time and learning from feedback). Quantitative and qualitative decision-making impairment in MS was present in 64.7% of measurements. Patients were equally impaired in tasks for decision-making under risk and ambiguity. A correlation to other cognitive functions was present in 50% of cases, with the highest associations in the domains of processing speed and attentional capacity. In MS patients, qualitative and quantitative modifications may be present in any kind of decision-making task and can appear independently of other cognitive measures. Since decision-making abilities have a significant impact on everyday life, this cognitive aspect has an influential importance in various MS-related treatment settings.
Vu, Trung N; Valkenborg, Dirk; Smets, Koen; Verwaest, Kim A; Dommisse, Roger; Lemière, Filip; Verschoren, Alain; Goethals, Bart; Laukens, Kris
2011-10-20
Nuclear magnetic resonance spectroscopy (NMR) is a powerful technique to reveal and compare quantitative metabolic profiles of biological tissues. However, chemical and physical sample variations make the analysis of the data challenging, and typically require the application of a number of preprocessing steps prior to data interpretation. For example, noise reduction, normalization, baseline correction, peak picking, spectrum alignment and statistical analysis are indispensable components in any NMR analysis pipeline. We introduce a novel suite of informatics tools for the quantitative analysis of NMR metabolomic profile data. The core of the processing cascade is a novel peak alignment algorithm, called hierarchical Cluster-based Peak Alignment (CluPA). The algorithm aligns a target spectrum to the reference spectrum in a top-down fashion by building a hierarchical cluster tree from peak lists of reference and target spectra and then dividing the spectra into smaller segments based on the most distant clusters of the tree. To reduce the computational time to estimate the spectral misalignment, the method makes use of Fast Fourier Transformation (FFT) cross-correlation. Since the method returns a high-quality alignment, we can propose a simple methodology to study the variability of the NMR spectra. For each aligned NMR data point the ratio of the between-group and within-group sum of squares (BW-ratio) is calculated to quantify the difference in variability between and within predefined groups of NMR spectra. This differential analysis is related to the calculation of the F-statistic or a one-way ANOVA, but without distributional assumptions. Statistical inference based on the BW-ratio is achieved by bootstrapping the null distribution from the experimental data. The workflow performance was evaluated using a previously published dataset. Correlation maps, spectral and grey scale plots show clear improvements in comparison to other methods, and the down-to-earth quantitative analysis works well for the CluPA-aligned spectra. The whole workflow is embedded into a modular and statistically sound framework that is implemented as an R package called "speaq" ("spectrum alignment and quantitation"), which is freely available from http://code.google.com/p/speaq/.
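A minimal sketch of the FFT cross-correlation step used to estimate spectral misalignment is shown below (illustrative numpy code with a toy peak; the full hierarchical CluPA algorithm is implemented in the R package speaq).

```python
import numpy as np

def fft_shift_estimate(reference, target):
    """Estimate the integer shift that best aligns `target` with `reference`
    using FFT-based circular cross-correlation."""
    n = len(reference)
    xcorr = np.fft.irfft(np.fft.rfft(reference) * np.conj(np.fft.rfft(target)), n)
    lag = int(np.argmax(xcorr))
    return lag if lag <= n // 2 else lag - n      # map to a signed shift

# Toy spectra: a single peak shifted by 7 points between reference and target.
x = np.arange(1024)
ref = np.exp(-0.5 * ((x - 500) / 5.0) ** 2)
tgt = np.exp(-0.5 * ((x - 507) / 5.0) ** 2)
print(fft_shift_estimate(ref, tgt))   # -7: move the target 7 points left to align it
```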
The Iowa Gambling Task in Parkinson's disease: A meta-analysis on effects of disease and medication.
Evens, Ricarda; Hoefler, Michael; Biber, Karolina; Lueken, Ulrike
2016-10-01
Decision-making under uncertainty as measured by the Iowa Gambling Task has frequently been studied in Parkinson's disease. The dopamine overdose hypothesis assumes that dopaminergic effects follow an inverted U-shaped function, restoring some cognitive functions while overdosing others. The present work quantitatively summarizes disease and medication effects on task performance and evaluates evidence for the dopamine overdose hypothesis of impaired decision-making in Parkinson's disease. A systematic literature search was performed to identify studies examining the Iowa Gambling Task in patients with Parkinson's disease. Outcomes were quantitatively combined, with separate estimates for the clinical (patients ON medication vs. healthy controls), disease (patients OFF medication vs. healthy controls), and medication effects (patients ON vs. OFF medication). Furthermore, using meta-regression analysis it was explored whether the study characteristics drug level, disease duration, and motor symptoms explained heterogeneous performance between studies. Patients with Parkinson's disease ON dopaminergic medication showed significantly impaired Iowa Gambling Task performance compared to healthy controls. This impairment was not normalized by short-term withdrawal of medication. Heterogeneity across studies was not explained by dopaminergic drug levels, disease durations or motor symptoms. While this meta-analysis showed significantly impaired decision-making performance in Parkinson's disease, there was no evidence that this impairment was related to dopamine overdosing. However, only very few studies assessed patients OFF medication and future studies are needed to concentrate on the modulation of dopaminergic drug levels and pay particular attention to problems related to repeated testing. Furthermore, short- vs. long-term medication effects demand further in-depth investigation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Occupational exposure decisions: can limited data interpretation training help improve accuracy?
Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul
2009-06-01
Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants where they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants were presented with an exposure data interpretation or rule of thumb training which included a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. DIT was given to each participant before and after the rule of thumb training. Results of each DIT and qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DITs and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT % correct scores increased from 47 to 64% after the rule of thumb training (P < 0.001). The accuracy for quantitative desktop judgments increased from 43 to 63% correct after the rule of thumb training (P < 0.001). The rule of thumb training did not significantly impact accuracy for qualitative desktop judgments. The finding that even some simple statistical rules of thumb improve judgment accuracy significantly suggests that hygienists need to routinely use statistical tools while making exposure judgments using monitoring data.
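As a concrete example of the kind of calculation such training targets, the 95th percentile of a lognormal exposure distribution can be estimated from a small sample by exponentiating the mean plus 1.645 standard deviations of the log-transformed data. The sketch below uses hypothetical measurements and is not the specific rule of thumb taught in the study.

```python
import numpy as np

# Parametric estimate of the 95th percentile of a lognormal exposure
# distribution from a handful of air samples (hypothetical values).
samples_mg_m3 = np.array([0.12, 0.25, 0.08, 0.31, 0.18])
logs = np.log(samples_mg_m3)
x95 = np.exp(logs.mean() + 1.645 * logs.std(ddof=1))   # 1.645 = z for the 95th percentile
print(f"estimated 95th percentile: {x95:.2f} mg/m^3")
```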
Quantitative Assessment of Cancer Risk from Exposure to Diesel Engine Emissions
Quantitative estimates of lung cancer risk from exposure to diesel engine emissions were developed using data from three chronic bioassays with Fischer 344 rats. Human target organ dose was estimated with the aid of a comprehensive dosimetry model. This model accounted for rat-hum...
A QUANTITATIVE APPROACH FOR ESTIMATING EXPOSURE TO PESTICIDES IN THE AGRICULTURAL HEALTH STUDY
We developed a quantitative method to estimate chemical-specific pesticide exposures in a large prospective cohort study of over 58,000 pesticide applicators in North Carolina and Iowa. An enrollment questionnaire was administered to applicators to collect basic time- and inten...
Quantification of Microbial Phenotypes
Martínez, Verónica S.; Krömer, Jens O.
2016-01-01
Metabolite profiling technologies have improved to generate close to quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and discuss the current challenges in generating fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
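The core relation behind such Gibbs energy estimates can be sketched in a few lines (illustrative values only; the review itself should be consulted for transformed standard Gibbs energies and error handling).

```python
import numpy as np

# Transformed reaction Gibbs energy at given metabolite concentrations:
# dG' = dG'0 + R*T*ln(Q), where Q is the reaction quotient.
R = 8.314e-3        # kJ / (mol K)
T = 310.15          # K, roughly physiological temperature

def reaction_gibbs(dg0_prime, products_M, substrates_M):
    """dG' in kJ/mol for given concentrations in mol/L."""
    Q = np.prod(products_M) / np.prod(substrates_M)
    return dg0_prime + R * T * np.log(Q)

# Hypothetical reaction A -> B with dG'0 = +5 kJ/mol:
dg = reaction_gibbs(5.0, products_M=[1e-5], substrates_M=[1e-2])
print(f"dG' = {dg:.1f} kJ/mol")   # negative, so the forward direction is feasible here
```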
Quantitative estimation of time-variable earthquake hazard by using fuzzy set theory
NASA Astrophysics Data System (ADS)
Deyi, Feng; Ichikawa, M.
1989-11-01
In this paper, the various methods of fuzzy set theory, called fuzzy mathematics, have been applied to the quantitative estimation of the time-variable earthquake hazard. The results obtained consist of the following. (1) Quantitative estimation of the earthquake hazard on the basis of seismicity data. By using some methods of fuzzy mathematics, seismicity patterns before large earthquakes can be studied more clearly and more quantitatively, highly active periods in a given region and quiet periods of seismic activity before large earthquakes can be recognized, similarities in temporal variation of seismic activity and seismic gaps can be examined and, on the other hand, the time-variable earthquake hazard can be assessed directly on the basis of a series of statistical indices of seismicity. Two methods of fuzzy clustering analysis, the method of fuzzy similarity, and the direct method of fuzzy pattern recognition, have been studied in particular. One method of fuzzy clustering analysis is based on fuzzy netting, and another is based on the fuzzy equivalent relation. (2) Quantitative estimation of the earthquake hazard on the basis of observational data for different precursors. The direct method of fuzzy pattern recognition has been applied to research on earthquake precursors of different kinds. On the basis of the temporal and spatial characteristics of recognized precursors, earthquake hazards in different terms can be estimated. This paper mainly deals with medium-short-term precursors observed in Japan and China.
APPLICATION OF RADIOISOTOPES TO THE QUANTITATIVE CHROMATOGRAPHY OF FATTY ACIDS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budzynski, A.Z.; Zubrzycki, Z.J.; Campbell, I.G.
1959-10-31
The paper reports work done on the use of ¹³¹I, ⁶⁵Zn, ⁹⁰Sr, ⁹⁵Zr, and ¹⁴⁴Ce for the quantitative estimation of fatty acids on paper chromatograms, and for determination of the degree of unsaturation of components of resolved fatty acid mixtures. ¹³¹I is used to iodinate unsaturated fatty acids, and the amount of such acids is determined from the radiochromatogram. The degree of unsaturation of fatty acids is determined by estimation of the specific activity of spots. The other isotopes have been examined from the point of view of their suitability for estimation of total amounts of fatty acids by formation of insoluble radioactive soaps held on the chromatogram. In particular, work is reported on the quantitative estimation of saturated fatty acids by measurement of the activity of their insoluble soaps with radioactive metals. Various quantitative relationships are described between the amount of fatty acid in a spot and such parameters as radiometrically estimated spot length, width, maximum intensity, and integrated spot activity. A convenient detection apparatus for taking radiochromatograms is also described. In conjunction with conventional chromatographic methods for resolving fatty acids, the method permits the estimation of the composition of fatty acid mixtures obtained from biological material. (auth)
Quantitative Methods for Administrative Decision Making in Junior Colleges.
ERIC Educational Resources Information Center
Gold, Benjamin Knox
With the rapid increase in number and size of junior colleges, administrators must take advantage of the decision-making tools already used in business and industry. This study investigated how these quantitative techniques could be applied to junior college problems. A survey of 195 California junior college administrators found that the problems…
Disease quantification on PET/CT images without object delineation
NASA Astrophysics Data System (ADS)
Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Wu, Caiyun; Fitzpatrick, Danielle; Winchell, Nicole; Schuster, Stephen J.; Torigian, Drew A.
2017-03-01
The derivation of quantitative information from images to make quantitative radiology (QR) clinically practical continues to face a major image analysis hurdle because of image segmentation challenges. This paper presents a novel approach to disease quantification (DQ) via positron emission tomography/computed tomography (PET/CT) images that explores how to decouple DQ methods from explicit dependence on object segmentation through the use of only object recognition results to quantify disease burden. The concept of an object-dependent disease map is introduced to express disease severity without performing explicit delineation and partial volume correction of either objects or lesions. The parameters of the disease map are estimated from a set of training image data sets. The idea is illustrated on 20 lung lesions and 20 liver lesions derived from 18F-2-fluoro-2-deoxy-D-glucose (FDG)-PET/CT scans of patients with various types of cancers and also on 20 NEMA PET/CT phantom data sets. Our preliminary results show that, on phantom data sets, "disease burden" can be estimated to within 2% of known absolute true activity. Notwithstanding the difficulty in establishing true quantification on patient PET images, our results achieve 8% deviation from "true" estimates, with slightly larger deviations for small and diffuse lesions where establishing ground truth becomes really questionable, and smaller deviations for larger lesions where ground truth set up becomes more reliable. We are currently exploring extensions of the approach to include fully automated body-wide DQ, extensions to just CT or magnetic resonance imaging (MRI) alone, to PET/CT performed with radiotracers other than FDG, and other functional forms of disease maps.
NASA Astrophysics Data System (ADS)
Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki
2016-03-01
Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. So the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming. It did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of the anterior and posterior eye segments, as well as in skin imaging. The new estimator shows superior performance and also shows clearer image contrast.
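The general mechanics of a MAP estimator built on a Monte-Carlo-tabulated likelihood can be sketched as follows; a toy noise model and a flat prior are assumed, so this is not the JM-OCT model or the authors' estimator.

```python
import numpy as np

# Tabulate P(measured | true) by simulation on a parameter grid, then take the
# MAP estimate for an observed measurement (generic illustration only).
rng = np.random.default_rng(2)
true_grid = np.linspace(0.0, 1.0, 101)            # candidate "true" values
bins = np.linspace(0.0, 1.5, 151)                 # measurement bins
snr_linear = 3.0                                  # assumed SNR of this pixel

like = np.empty((true_grid.size, bins.size - 1))
for i, v in enumerate(true_grid):
    # Toy SNR-dependent noise model (not the JM-OCT noise model).
    noisy = np.abs(v + rng.normal(0, v / snr_linear + 0.05, 20000))
    like[i], _ = np.histogram(noisy, bins=bins, density=True)

def map_estimate(measured):
    j = np.clip(np.searchsorted(bins, measured) - 1, 0, like.shape[1] - 1)
    return true_grid[np.argmax(like[:, j])]       # flat prior assumed

print(map_estimate(0.325))
```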
Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared
ERIC Educational Resources Information Center
von Helversen, Bettina; Rieskamp, Jorg
2009-01-01
The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…
Quantitative endoscopy: initial accuracy measurements.
Truitt, T O; Adelman, R A; Kelly, D H; Willging, J P
2000-02-01
The geometric optics of an endoscope can be used to determine the absolute size of an object in an endoscopic field without knowing the actual distance from the object. This study explores the accuracy of a technique that estimates absolute object size from endoscopic images. Quantitative endoscopy involves calibrating a rigid endoscope to produce size estimates from 2 images taken with a known traveled distance between the images. The heights of 12 samples, ranging in size from 0.78 to 11.80 mm, were estimated with this calibrated endoscope. Backup distances of 5 mm and 10 mm were used for comparison. The mean percent error for all estimated measurements when compared with the actual object sizes was 1.12%. The mean errors for 5-mm and 10-mm backup distances were 0.76% and 1.65%, respectively. The mean errors for objects <2 mm and > or =2 mm were 0.94% and 1.18%, respectively. Quantitative endoscopy estimates endoscopic image size to within 5% of the actual object size. This method remains promising for quantitatively evaluating object size from endoscopic images. It does not require knowledge of the absolute distance of the endoscope from the object, rather, only the distance traveled by the endoscope between images.
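The underlying geometry can be sketched with a pinhole model (an assumption for illustration; the paper's own calibration procedure is not reproduced): image height scales as h = f·H/z, so two images separated by a known backup distance d determine both the working distance and the object height.

```python
# Two-position size estimate from a calibrated rigid endoscope (illustrative sketch).
# Pinhole model: h = f*H/z, so z = d*h2/(h1 - h2) and H = h1*h2*d / (f*(h1 - h2)).
def object_height_mm(h1_px, h2_px, backup_mm, f_px):
    """h1_px: image height before backing up (larger), h2_px: after backing up,
    f_px: effective focal length in pixels from calibration."""
    if h1_px <= h2_px:
        raise ValueError("expected the closer image to be larger")
    return h1_px * h2_px * backup_mm / (f_px * (h1_px - h2_px))

# Hypothetical numbers: 240 px, then 160 px after a 10 mm backup, calibrated f = 800 px.
print(f"{object_height_mm(240, 160, 10.0, 800.0):.2f} mm")   # 6.00 mm
```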
The role of quantitative safety evaluation in regulatory decision making of drugs.
Chakravarty, Aloka G; Izem, Rima; Keeton, Stephine; Kim, Clara Y; Levenson, Mark S; Soukup, Mat
2016-01-01
Evaluation of safety is a critical component of drug review at the US Food and Drug Administration (FDA). Statisticians are playing an increasingly visible role in quantitative safety evaluation and regulatory decision-making. This article reviews the history and the recent events relating to quantitative drug safety evaluation at the FDA. The article then focuses on five active areas of quantitative drug safety evaluation and the role Division of Biometrics VII (DBVII) plays in these areas, namely meta-analysis for safety evaluation, large safety outcome trials, post-marketing requirements (PMRs), the Sentinel Initiative, and the evaluation of risk from extended/long-acting opioids. This article will focus chiefly on developments related to quantitative drug safety evaluation and not on the many additional developments in drug safety in general.
Space spin-offs: is technology transfer worth it?
NASA Astrophysics Data System (ADS)
Bush, Lance B.
Dual-uses, spin-offs, and technology transfer have all become part of the space lexicon, creating a cultural attitude toward space activity justification. From the very beginning of space activities in the late 1950's, this idea of secondary benefits became a major part of the space culture and its belief system. Technology transfer has played a central role in public and political debates over funding for space activities. Over the years, several studies of the benefits of space activities have been performed, with some estimates reaching as high as a 60:1 return to the economy for each dollar spent on space activities, though many of the models claiming such high returns have been roundly criticized. More recent studies of technology transfer from federal laboratories to the private sector show a return on investment of 2.8:1, with little evidence of job increases. Yet a purely quantitative analysis is not sufficient, as there exist cultural and social benefits attainable only through case studies. Space projects tend to have a long life cycle, making it difficult to track metrics on their secondary benefits. Recent studies have begun to make inroads towards a better understanding of the benefits and drawbacks of investing in technology transfer activities related to space, but significant analysis remains to be performed, and it must combine quantitative and qualitative approaches.
Petriwskyj, Andrea; Gibson, Alexandra; Parker, Deborah; Banks, Susan; Andrews, Sharon; Robinson, Andrew
2014-06-01
Ensuring older adults' involvement in their care is accepted as good practice and is vital, particularly for people with dementia, whose care and treatment needs change considerably over the course of the illness. However, involving family members in decision making on people's behalf is still practically difficult for staff and family. The aim of this review was to identify and appraise the existing quantitative evidence about family involvement in decision making for people with dementia living in residential aged care. The present Joanna Briggs Institute (JBI) metasynthesis assessed studies that investigated involvement of family members in decision making for people with dementia in residential aged care settings. While quantitative and qualitative studies were included in the review, this paper presents the quantitative findings. A comprehensive search of 15 electronic databases was performed. The search was limited to papers published in English, from 1990 to 2013. Twenty-six studies were identified as being relevant; 10 were quantitative, with 1 mixed method study. Two independent reviewers assessed the studies for methodological validity and extracted the data using the JBI Meta Analysis of Statistics Assessment and Review Instrument (JBI-MAStARI). The findings were synthesized and presented in narrative form. The findings related to decisions encountered and made by family surrogates, variables associated with decisions, surrogates' perceptions of, and preferences for, their roles, as well as outcomes for people with dementia and their families. The results identified patterns within, and variables associated with, surrogate decision making, all of which highlight the complexity and variation regarding family involvement. Attention needs to be paid to supporting family members in decision making in collaboration with staff.
Estimating Active Transportation Behaviors to Support Health Impact Assessment in the United States
Mansfield, Theodore J.; Gibson, Jacqueline MacDonald
2016-01-01
Health impact assessment (HIA) has been promoted as a means to encourage transportation and city planners to incorporate health considerations into their decision-making. Ideally, HIAs would include quantitative estimates of the population health effects of alternative planning scenarios, such as scenarios with and without infrastructure to support walking and cycling. However, the lack of baseline estimates of time spent walking or biking for transportation (together known as “active transportation”), which are critically related to health, often prevents planners from developing such quantitative estimates. To address this gap, we use data from the 2009 US National Household Travel Survey to develop a statistical model that estimates baseline time spent walking and biking as a function of the type of transportation used to commute to work along with demographic and built environment variables. We validate the model using survey data from the Raleigh–Durham–Chapel Hill, NC, USA, metropolitan area. We illustrate how the validated model could be used to support transportation-related HIAs by estimating the potential health benefits of built environment modifications that support walking and cycling. Our statistical model estimates that on average, individuals who commute on foot spend an additional 19.8 (95% CI 16.9–23.2) minutes per day walking compared to automobile commuters. Public transit riders walk an additional 5.0 (95% CI 3.5–6.4) minutes per day compared to automobile commuters. Bicycle commuters cycle for an additional 28.0 (95% CI 17.5–38.1) minutes per day compared to automobile commuters. The statistical model was able to predict observed transportation physical activity in the Raleigh–Durham–Chapel Hill region to within 0.5 MET-hours per day (equivalent to about 9 min of daily walking time) for 83% of observations. Across the Raleigh–Durham–Chapel Hill region, an estimated 38 (95% CI 15–59) premature deaths potentially could be avoided if the entire population walked 37.4 min per week for transportation (the amount of transportation walking observed in previous US studies of walkable neighborhoods). The approach developed here is useful both for estimating baseline behaviors in transportation HIAs and for comparing the magnitude of risks associated with physical inactivity to other competing health risks in urban areas. PMID:27200327
Ruiz, Patricia; Begluitti, Gino; Tincher, Terry; Wheeler, John; Mumtaz, Moiz
2012-07-27
Predicting toxicity quantitatively, using Quantitative Structure Activity Relationships (QSAR), has matured over recent years to the point that the predictions can be used to help identify missing comparison values in a substance's database. In this manuscript we investigate using the lethal dose that kills fifty percent of a test population (LD₅₀) for determining the relative toxicity of a number of substances. In general, the smaller the LD₅₀ value, the more toxic the chemical, and the larger the LD₅₀ value, the lower the toxicity. When systemic toxicity and other specific toxicity data are unavailable for the chemical(s) of interest, during emergency responses, LD₅₀ values may be employed to determine the relative toxicity of a series of chemicals. In the present study, a group of chemical warfare agents and their breakdown products have been evaluated using four available rat oral QSAR LD₅₀ models. The QSAR analysis shows that the breakdown products of Sulfur Mustard (HD) are predicted to be less toxic than the parent compound and than other breakdown products with known toxicities. The QSAR-estimated LD₅₀ values for the breakdown products ranged from 299 mg/kg to 5,764 mg/kg. This evaluation allows for the ranking and toxicity estimation of compounds for which little toxicity information existed, thus leading to better risk decision making in the field.
Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.
Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf
2010-05-25
Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process, and to benefit from the analytical tools at hand. In this work we present a set-based framework that allows one to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach allows wrong model hypotheses to be conclusively ruled out. The second study focuses on parameter estimation, and shows that the proposed method allows the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates to be evaluated. This can help in designing further experiments leading to improved parameter estimates.
Uncertainty of quantitative microbiological methods of pharmaceutical analysis.
Gunar, O V; Sakhno, N G
2015-12-30
The total uncertainty of quantitative microbiological methods used in pharmaceutical analysis consists of several components. The analysis of the most important sources of variability in quantitative microbiological methods demonstrated that culture media and plate-count techniques had no effect on the estimated microbial count, whereas other factors (type of microorganism, pharmaceutical product, and individual reading and interpreting errors) had a highly significant effect. The most appropriate method of statistical analysis of such data was ANOVA, which enabled not only the effects of individual factors but also their interactions to be estimated. Considering all the elements of uncertainty and combining them mathematically, the combined relative uncertainty of the test results was estimated both for the method of quantitative examination of non-sterile pharmaceuticals and for the microbial count technique without any product. These values did not exceed 35%, which is appropriate for traditional plate count methods. Copyright © 2015 Elsevier B.V. All rights reserved.
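A root-sum-of-squares combination of independent relative uncertainty components, in the spirit of the GUM, can be sketched as follows; the component names and values are hypothetical, not those reported in the study.

```python
import math

# Combine independent relative uncertainty components into a combined
# relative uncertainty (illustrative values only).
components = {
    "microorganism type": 0.20,
    "product matrix": 0.15,
    "reading/interpreting": 0.18,
}
u_combined = math.sqrt(sum(u**2 for u in components.values()))
print(f"combined relative uncertainty: {u_combined:.0%}")   # ~31%
```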
Üstündağ, Özgür; Dinç, Erdal; Özdemir, Nurten; Tilkan, M Günseli
2015-01-01
In the development strategies of new drug products and generic drug products, the simultaneous in-vitro dissolution behavior of oral dosage formulations is the most important indication for the quantitative estimation of the efficiency and biopharmaceutical characteristics of drug substances. This drives scientists in the field to develop more powerful analytical methods to obtain more reliable, precise and accurate results in the quantitative analysis and dissolution testing of drug formulations. In this context, two chemometric tools, partial least squares (PLS) and principal component regression (PCR), were developed for the simultaneous quantitative estimation and dissolution testing of zidovudine (ZID) and lamivudine (LAM) in a tablet dosage form. The results obtained in this study strongly encourage us to use them for the quality control, routine analysis and dissolution testing of marketed tablets containing the ZID and LAM drugs.
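A minimal PLS calibration of two co-formulated analytes can be sketched with scikit-learn; synthetic Gaussian bands stand in for the UV spectra of ZID and LAM, so none of these numbers come from the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# PLS calibration for two analytes from mixture spectra (illustrative only).
rng = np.random.default_rng(3)
wavelengths = np.linspace(220, 320, 101)

def band(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

pure = np.vstack([band(266, 12), band(271, 9)])        # pseudo-spectra of "ZID", "LAM"
conc = rng.uniform(5, 25, size=(30, 2))                # training concentrations, mg/L
spectra = conc @ pure + rng.normal(0, 0.01, (30, wavelengths.size))

pls = PLSRegression(n_components=2).fit(spectra, conc)
unknown = np.array([[12.0, 18.0]]) @ pure              # a simulated dissolution sample
print(pls.predict(unknown))                            # expected near [[12, 18]]
```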
Choi, Seo Yeon; Yang, Nuri; Jeon, Soo Kyung; Yoon, Tae Hyun
2014-09-01
In this study, we have demonstrated the feasibility of a semi-quantitative approach for the estimation of cellular SiO2 nanoparticles (NPs), based on flow cytometry measurements of their normalized side scattering intensity. In order to improve our understanding of the quantitative aspects of cell-nanoparticle interactions, flow cytometry, transmission electron microscopy, and X-ray fluorescence experiments were carefully performed on HeLa cells exposed to SiO2 NPs with different core diameters, hydrodynamic sizes, and surface charges. Based on the observed relationships among the experimental data, a semi-quantitative method for estimating cellular SiO2 NPs from their normalized side scattering and core diameters was proposed, which can be applied within their size-dependent linear ranges. © 2014 International Society for Advancement of Cytometry.
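The semi-quantitative step can be illustrated as a simple linear calibration of normalized side scatter against an absolute reference measurement; all numbers are hypothetical and the paper's size-dependent calibration is not reproduced.

```python
import numpy as np

# Calibrate normalized side scatter (exposed / control) against a reference
# per-cell SiO2 mass, then estimate the load of a new sample (hypothetical data).
norm_ssc = np.array([1.2, 1.8, 2.9, 4.1])               # SSC(exposed)/SSC(control)
ref_pg_per_cell = np.array([3.0, 9.5, 21.0, 33.5])      # reference SiO2 mass per cell, pg

slope, intercept = np.polyfit(norm_ssc, ref_pg_per_cell, 1)
new_sample_ssc = 2.4
print(f"estimated SiO2 load: {slope * new_sample_ssc + intercept:.1f} pg/cell")
```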
Assessing the robustness of quantitative fatty acid signature analysis to assumption violations
Bromaghin, Jeffrey F.; Budge, Suzanne M.; Thiemann, Gregory W.; Rode, Karyn D.
2016-01-01
In most QFASA applications, investigators will generally have some knowledge of the prey available to predators and be able to assess the completeness of prey signature data and sample additional prey as necessary. Conversely, because calibration coefficients are derived from feeding trials with captive animals and their values may be sensitive to consumer physiology and nutritional status, their applicability to free-ranging animals is difficult to establish. We therefore recommend that investigators first make any improvements to the prey signature data that seem warranted and then base estimation on the Aitchison distance measure, as it appears to minimize risk from violations of the assumption that is most difficult to verify.
Niskanen, Ilpo; Peiponen, Kai-Erik; Räty, Jukka
2010-05-01
Using a multifunction spectrophotometer, the refractive index of a pigment can be estimated by measuring the backscattering of light from the pigment in immersion liquids having slightly different refractive indices. A simple theoretical Gaussian function model related to the optical path distribution is introduced that makes it possible to describe quantitatively the backscattering signal from transparent pigments using a set of only a few immersion liquids. With the aid of the data fitting by a Gaussian function, the measurement time of the refractive index of the pigment can be reduced. The backscattering measurement technique is suggested to be useful in industrial measurement environments of pigments.
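A sketch of the fitting step is given below, assuming the backscattering signal is weakest where the immersion liquid matches the pigment index; the paper's optical-path model is not reproduced and the values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit an inverted Gaussian to backscattering measured in a few immersion
# liquids; the fitted centre estimates the refractive index of the pigment.
def inverted_gaussian(n_liquid, amp, n_match, width, offset):
    return offset - amp * np.exp(-0.5 * ((n_liquid - n_match) / width) ** 2)

n_liquid = np.array([1.48, 1.50, 1.52, 1.54, 1.56])   # immersion liquid indices
signal = np.array([0.82, 0.55, 0.21, 0.49, 0.78])     # backscattering signal (a.u.)

popt, _ = curve_fit(inverted_gaussian, n_liquid, signal, p0=(0.6, 1.52, 0.02, 0.85))
print(f"estimated pigment refractive index: {popt[1]:.3f}")
```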
Evaluation of risk communication in a mammography patient decision aid.
Klein, Krystal A; Watson, Lindsey; Ash, Joan S; Eden, Karen B
2016-07-01
We characterized patients' comprehension, memory, and impressions of risk communication messages in a patient decision aid (PtDA), Mammopad, and clarified perceived importance of numeric risk information in medical decision making. Participants were 75 women in their forties with average risk factors for breast cancer. We used mixed methods, comprising a risk estimation problem administered within a pretest-posttest design, and semi-structured qualitative interviews with a subsample of 21 women. Participants' positive predictive value estimates of screening mammography improved after using Mammopad. Although risk information was only briefly memorable, through content analysis, we identified themes describing why participants value quantitative risk information, and obstacles to understanding. We describe ways the most complicated graphic was incompletely comprehended. Comprehension of risk information following Mammopad use could be improved. Patients valued receiving numeric statistical information, particularly in pictograph format. Obstacles to understanding risk information, including potential for confusion between statistics, should be identified and mitigated in PtDA design. Using simple pictographs accompanied by text, PtDAs may enhance a shared decision-making discussion. PtDA designers and providers should be aware of benefits and limitations of graphical risk presentations. Incorporating comprehension checks could help identify and correct misapprehensions of graphically presented statistics. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Decision-Making in Multiple Sclerosis Patients: A Systematic Review
2018-01-01
Background Multiple sclerosis (MS) is frequently associated with cognitive and behavioural deficits. A growing number of studies suggest an impact of MS on decision-making abilities. The aim of this systematic review was to assess if (1) performance of MS patients in decision-making tasks was consistently different from controls and (2) whether this modification was associated with cognitive dysfunction and emotional alterations. Methods The search was conducted on Pubmed/Medline database. 12 studies evaluating the difference between MS patients and healthy controls using validated decision-making tasks were included. Outcomes considered were quantitative (net scores) and qualitative measurements (deliberation time and learning from feedback). Results Quantitative and qualitative decision-making impairment in MS was present in 64.7% of measurements. Patients were equally impaired in tasks for decision-making under risk and ambiguity. A correlation to other cognitive functions was present in 50% of cases, with the highest associations in the domains of processing speed and attentional capacity. Conclusions In MS patients, qualitative and quantitative modifications may be present in any kind of decision-making task and can appear independently of other cognitive measures. Since decision-making abilities have a significant impact on everyday life, this cognitive aspect has an influential importance in various MS-related treatment settings. PMID:29721338
The Effect of Pickling on Blue Borscht Gelatin and Other Interesting Diffusive Phenomena.
ERIC Educational Resources Information Center
Davis, Lawrence C.; Chou, Nancy C.
1998-01-01
Presents some simple demonstrations that students can construct for themselves in class to learn the difference between diffusion and convection rates. Uses cabbage leaves and gelatin and focuses on diffusion in ungelified media, a quantitative diffusion estimate with hydroxyl ions, and a quantitative diffusion estimate with photons. (DDR)
Holmberg, Christine; Waters, Erika A; Whitehouse, Katie; Daly, Mary; McCaskill-Stevens, Worta
2015-11-01
Decision-making experts emphasize that understanding and using probabilistic information are important for making informed decisions about medical treatments involving complex risk-benefit tradeoffs. Yet empirical research demonstrates that individuals may not use probabilities when making decisions. To explore decision making and the use of probabilities for decision making from the perspective of women who were risk-eligible to enroll in the Study of Tamoxifen and Raloxifene (STAR). We conducted narrative interviews with 20 women who agreed to participate in STAR and 20 women who declined. The project was based on a narrative approach. Analysis included the development of summaries of each narrative and thematic analysis, with a coding scheme developed inductively to code all transcripts and identify emerging themes. Interviewees explained and embedded their STAR decisions within experiences encountered throughout their lives. Such lived experiences included but were not limited to breast cancer family history, a personal history of breast biopsies, and experiences or assumptions about taking tamoxifen or medicines more generally. Women's explanations of their decisions about participating in a breast cancer chemoprevention trial were more complex than decision strategies that rely solely on a quantitative risk-benefit analysis of probabilities derived from populations. In addition to precise risk information, clinicians and risk communicators should recognize the importance and legitimacy of lived experience in individual decision making. © The Author(s) 2015.
Hamilton, Joshua J.; Dwivedi, Vivek; Reed, Jennifer L.
2013-01-01
Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. PMID:23870272
Development of an agricultural job-exposure matrix for British Columbia, Canada.
Wood, David; Astrakianakis, George; Lang, Barbara; Le, Nhu; Bert, Joel
2002-09-01
Farmers in British Columbia (BC), Canada have been shown to have unexplained elevated proportional mortality rates for several cancers. Because agricultural exposures have never been documented systematically in BC, a quantitative agricultural Job-exposure matrix (JEM) was developed containing exposure assessments from 1950 to 1998. This JEM was developed to document historical exposures and to facilitate future epidemiological studies. Available information regarding BC farming practices was compiled and checklists of potential exposures were produced for each crop. Exposures identified included chemical, biological, and physical agents. Interviews with farmers and agricultural experts were conducted using the checklists as a starting point. This allowed the creation of an initial or 'potential' JEM based on three axes: exposure agent, 'type of work' and time. The 'type of work' axis was determined by combining several variables: region, crop, job title and task. This allowed for a complete description of exposures. Exposure assessments were made quantitatively, where data allowed, or by a dichotomous variable (exposed/unexposed). Quantitative calculations were divided into re-entry and application scenarios. 'Re-entry' exposures were quantified using a standard exposure model with some modification, while application exposure estimates were derived using data from the North American Pesticide Handlers Exposure Database (PHED). As expected, exposures differed between crops and job titles both quantitatively and qualitatively. Of the 290 agents included in the exposure axis, 180 were pesticides. Over 3000 estimates of exposure were conducted; 50% of these were quantitative. Each quantitative estimate was at the daily absorbed dose level. Exposure estimates were then rated as high, medium, or low by comparing them with their respective oral chemical reference dose (RfD) or Acceptable Daily Intake (ADI). These data were mainly obtained from the US Environmental Protection Agency (EPA) Integrated Risk Information System database. Of the quantitative estimates, 74% were rated as low (<100%) and only 10% were rated as high (>500%). The JEM resulting from this study fills a void concerning exposures for BC farmers and farm workers. While only limited validation of assessments was possible, this JEM can serve as a benchmark for future studies. Preliminary analysis at the BC Cancer Agency (BCCA) using the JEM with prostate cancer records from a large cancer and occupation study/survey has already shown promising results. Development of this JEM provides a useful model for developing historical quantitative exposure estimates where very little documented information is available.
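A minimal sketch of the rating step described above; the 100% and 500% cut-offs for the low/medium/high bands are inferred from the percentages quoted in the abstract, and the dose and reference dose values are hypothetical:

```python
def rate_exposure(daily_dose_mg_per_kg, rfd_mg_per_kg):
    """Rate an absorbed daily dose as a percentage of its reference dose (RfD)."""
    percent_of_rfd = 100.0 * daily_dose_mg_per_kg / rfd_mg_per_kg
    if percent_of_rfd < 100:
        rating = "low"
    elif percent_of_rfd <= 500:
        rating = "medium"
    else:
        rating = "high"
    return percent_of_rfd, rating

# Hypothetical re-entry exposure estimate compared against an EPA IRIS reference dose.
print(rate_exposure(daily_dose_mg_per_kg=0.004, rfd_mg_per_kg=0.01))   # (40.0, 'low')
```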
Estimation of Land Surface Fluxes and Their Uncertainty via Variational Data Assimilation Approach
NASA Astrophysics Data System (ADS)
Abdolghafoorian, A.; Farhadi, L.
2016-12-01
Accurate estimation of land surface heat and moisture fluxes as well as root zone soil moisture is crucial in various hydrological, meteorological, and agricultural applications. "In situ" measurements of these fluxes are costly and cannot be readily scaled to large areas relevant to weather and climate studies. Therefore, there is a need for techniques to make quantitative estimates of heat and moisture fluxes using land surface state variables. In this work, we applied a novel approach based on the variational data assimilation (VDA) methodology to estimate land surface fluxes and the soil moisture profile from the land surface states. This study accounts for the strong linkage between terrestrial water and energy cycles by coupling the dual source energy balance equation with the water balance equation through the mass flux of evapotranspiration (ET). Heat diffusion and moisture diffusion into the column of soil are adjoined to the cost function as constraints. This coupling results in more accurate prediction of land surface heat and moisture fluxes and consequently soil moisture at multiple depths with high temporal frequency as required in many hydrological, environmental and agricultural applications. One of the key limitations of the VDA technique is its tendency to be ill-posed, meaning that a continuum of possibilities exists for different parameters that produce essentially identical measurement-model misfit errors. On the other hand, the value of heat and moisture flux estimation to decision-making processes is limited if reasonable estimates of the corresponding uncertainty are not provided. To address these issues, an uncertainty analysis is performed in this research to estimate the uncertainty of the retrieved fluxes and root zone soil moisture. The assimilation algorithm is tested with a series of experiments using a synthetic data set generated by the simultaneous heat and water (SHAW) model. We demonstrate the VDA performance by comparing the (synthetic) true measurements (including profile of soil moisture and temperature, land surface water and heat fluxes, and root water uptake) with VDA estimates. In addition, the feasibility of extending the proposed approach to use remote sensing observations is tested by limiting the number of LST observations and soil moisture observations.
Investigating Children's Abilities to Count and Make Quantitative Comparisons
ERIC Educational Resources Information Center
Lee, Joohi; Md-Yunus, Sham'ah
2016-01-01
This study was designed to investigate children's abilities to count and make quantitative comparisons. In addition, this study utilized reasoning questions (i.e., how did you know?). Thirty-four preschoolers, mean age 4.5 years old, participated in the study. According to the results, 89% of the children (n = 30) were able to do rote counting and…
Hildon, Zoe; Allwood, Dominique; Black, Nick
2012-02-01
Displays comparing the performance of healthcare providers are largely based on common sense. To review the literature on the impact of compositional format and content of quantitative data displays on people's comprehension, choice and preference. Ovid databases, expert recommendations and snowballing techniques. Evaluations of the impact of different formats (bar charts, tables and pictographs) and content (ordering, explanatory visual cues, etc.) of quantitative data displays meeting defined quality criteria. Data extraction: type of decision; decision-making domains; audiences; formats; content; methodology; findings. Most of the 30 studies used quantitative (n = 26) methods with patients or public groups (n = 28) rather than with professionals (n = 2). Bar charts were the most frequent format, followed by pictographs and tables. As regards format, tables and pictographs appeared better understood than bar charts despite the latter being preferred. Although accessible to less numerate and older populations, pictographs tended to lead to more risk avoidance. Tables appeared accessible to all. Aspects of content enhancing the impact of data displays included giving visual explanatory cues and contextual information while still attempting simplicity ('less is more'); ordering data; consistency. Icons rather than numbers were more user-friendly but could lead to over-estimation of risk. Uncertainty was not widely understood, nor well represented. Though heterogeneous and limited in scope, there is sufficient research evidence to inform the presentation of quantitative data that compares the performance of healthcare providers. The impact of new formats, such as funnel plots, needs to be evaluated.
Otazú, Ivone B; Tavares, Rita de Cassia B; Hassan, Rocío; Zalcberg, Ilana; Tabak, Daniel G; Seuánez, Héctor N
2002-02-01
Serial assays of qualitative (multiplex and nested) and quantitative PCR were carried out for detecting and estimating the level of BCR-ABL transcripts in 39 CML patients following bone marrow transplantation. Seven of these patients, who received donor lymphocyte infusions (DLIs) following relapse, were also monitored. Quantitative estimates of BCR-ABL transcripts were obtained by co-amplification with a competitor sequence. Estimates of ABL transcripts were used as an internal control, and the ratio BCR-ABL/ABL was thus estimated for evaluating the kinetics of residual clones. Twenty-four patients were followed shortly after BMT; two of these patients were in cytogenetic relapse coexisting with very high BCR-ABL levels, while the other 22 were in clinical, haematologic and cytogenetic remission 2-42 months after BMT. In this latter group, seven patients showed a favourable clinical-haematological progression in association with molecular remission, while in 14 patients quantitative PCR assays indicated molecular relapse that was not associated with an early cytogenetic-haematologic relapse. BCR-ABL/ABL levels could not be correlated with presence of GVHD in 24 patients after BMT. In all seven patients treated with DLI, high levels of transcripts were detected at least 4 months before the appearance of clinical haematological relapse. Following DLI, five of these patients showed decreasing transcript levels from 2 to 5 logs between 4 and 12 months. Of eight other patients studied long after BMT, five showed molecular relapse up to 117 months post-BMT and only one showed cytogenetic relapse. Our findings indicated that quantitative estimates of BCR-ABL transcripts were valuable for monitoring minimal residual disease in each patient.
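A minimal sketch of the normalization and log-reduction arithmetic implied above (copy numbers hypothetical):

```python
import math

def bcr_abl_ratio(bcr_abl_copies, abl_copies):
    """Normalize BCR-ABL transcript copies to the ABL internal control."""
    return bcr_abl_copies / abl_copies

# Hypothetical competitive PCR estimates before and after donor lymphocyte infusion.
before = bcr_abl_ratio(bcr_abl_copies=5.0e4, abl_copies=1.0e5)   # ratio 0.5
after = bcr_abl_ratio(bcr_abl_copies=5.0e1, abl_copies=1.0e5)    # ratio 0.0005

log_reduction = math.log10(before / after)
print(f"BCR-ABL/ABL before: {before:.4g}, after: {after:.4g}, "
      f"log reduction: {log_reduction:.1f}")
```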
A novel approach for estimating ingested dose associated with paracetamol overdose.
Zurlinden, Todd J; Heard, Kennon; Reisfeld, Brad
2016-04-01
In cases of paracetamol (acetaminophen, APAP) overdose, an accurate estimate of tissue-specific paracetamol pharmacokinetics (PK) and ingested dose can offer health care providers important information for the individualized treatment and follow-up of affected patients. Here a novel methodology is presented to make such estimates using a standard serum paracetamol measurement and a computational framework. The core component of the computational framework was a physiologically-based pharmacokinetic (PBPK) model developed and evaluated using an extensive set of human PK data. Bayesian inference was used for parameter and dose estimation, allowing the incorporation of inter-study variability, and facilitating the calculation of uncertainty in model outputs. Simulations of paracetamol time course concentrations in the blood were in close agreement with experimental data under a wide range of dosing conditions. Also, predictions of administered dose showed good agreement with a large collection of clinical and emergency setting PK data over a broad dose range. In addition to dose estimation, the platform was applied for the determination of optimal blood sampling times for dose reconstruction and quantitation of the potential role of paracetamol conjugate measurement on dose estimation. Current therapies for paracetamol overdose rely on a generic methodology involving the use of a clinical nomogram. By using the computational framework developed in this study, serum sample data, and the individual patient's anthropometric and physiological information, personalized serum and liver pharmacokinetic profiles and dose estimate could be generated to help inform an individualized overdose treatment and follow-up plan. © 2015 The British Pharmacological Society.
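The study couples a full PBPK model with Bayesian inference; the sketch below is a far simpler stand-in, using a one-compartment model with first-order absorption and a grid posterior over dose. All parameter values and the observed serum level are assumptions for illustration, not the authors' model:

```python
import numpy as np

def conc(dose_mg, t_h, ka=1.5, ke=0.3, vd_l=40.0):
    """Plasma concentration (mg/L) at time t_h for an oral dose (one-compartment model)."""
    return (dose_mg * ka / (vd_l * (ka - ke))) * (np.exp(-ke * t_h) - np.exp(-ka * t_h))

# Hypothetical observation: 150 mg/L measured 4 h after ingestion, with assumed noise.
t_obs, c_obs, sigma = 4.0, 150.0, 15.0

# Grid posterior over dose with a flat prior between 1 g and 100 g.
doses = np.linspace(1_000, 100_000, 2_000)                 # mg
log_like = -0.5 * ((conc(doses, t_obs) - c_obs) / sigma) ** 2
post = np.exp(log_like - log_like.max())
post /= post.sum()                                         # normalize discrete posterior

mean_dose = np.sum(doses * post)
print(f"posterior mean ingested dose: {mean_dose / 1000:.1f} g")
```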
Simulating realistic predator signatures in quantitative fatty acid signature analysis
Bromaghin, Jeffrey F.
2015-01-01
Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
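A minimal sketch of how a pseudo-predator signature can be built by bootstrap-resampling a prey library under a known diet. The prey library, diet, and fixed bootstrap sample size are hypothetical, and the paper's actual contribution, the objective choice of that sample size, is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def pseudo_predator(prey_sigs, diet, n_boot):
    """Build one pseudo-predator signature with a known diet.

    prey_sigs: dict of prey type -> (n_samples, n_fatty_acids) proportion array
    diet:      dict of prey type -> diet proportion (sums to 1)
    n_boot:    bootstrap sample size drawn per prey type
    """
    parts = []
    for prey, prop in diet.items():
        sigs = prey_sigs[prey]
        idx = rng.integers(0, len(sigs), size=n_boot)   # resample with replacement
        parts.append(prop * sigs[idx].mean(axis=0))     # diet-weighted mean signature
    sig = np.sum(parts, axis=0)
    return sig / sig.sum()                              # re-close to proportions

# Hypothetical prey library: three prey types, four fatty acids, 30 samples each.
alphas = {"prey_a": (8, 4, 2, 1), "prey_b": (2, 8, 4, 1), "prey_c": (1, 2, 8, 4)}
prey_sigs = {p: rng.dirichlet(a, size=30) for p, a in alphas.items()}
diet = {"prey_a": 0.5, "prey_b": 0.3, "prey_c": 0.2}
print(pseudo_predator(prey_sigs, diet, n_boot=20))
```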
NASA Astrophysics Data System (ADS)
Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte
2007-01-01
We present an effective method for comparing subjective audiovisual quality and the features related to the quality changes of different video cameras. Both quantitative estimation of overall quality and qualitative description of critical quality features are achieved by the method. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. 26 observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of cameras' visual video quality than to the features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimations also with audiovisual material. The IBQ approach is especially valuable when the induced quality changes are multidimensional.
Road tests of class 8 tractor trailers were conducted by the US Environmental Protection Agency on new and retreaded tires of varying rolling resistance in order to provide estimates of the quantitative relationship between rolling resistance and fuel consumption.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-27
... indirectly through changes in regional climate; and (b) Quantitative research on the relationship of food...). Population Estimates The most quantitative estimate of the historic size of the African lion population... research conducted by Chardonnet et al., three subpopulations were described as consisting of 18 groups...
Estimation of sample size and testing power (part 5).
Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo
2012-02-01
Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference tests for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference tests for quantitative and qualitative data with the above three designs, described their realization based on the formulas and the POWER procedure of SAS software, and elaborated them with examples, which will benefit researchers in implementing the repetition principle.
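For the quantitative-data, paired-design case, a normal-approximation version of the sample size calculation can be sketched as follows; this is a generic textbook formula, not necessarily the exact one implemented in the SAS POWER procedure:

```python
from math import ceil
from scipy.stats import norm

def paired_sample_size(delta, sd_diff, alpha=0.05, power=0.80):
    """Pairs needed to detect a mean within-pair difference delta (two-sided test).

    Normal approximation: n = ((z_{1-alpha/2} + z_{power}) * sd_diff / delta)**2
    """
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(((z_alpha + z_beta) * sd_diff / delta) ** 2)

# Example: detect a mean difference of 2 units when the SD of the differences is 5.
print(paired_sample_size(delta=2.0, sd_diff=5.0))   # about 50 pairs
```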
A new mean estimator using auxiliary variables for randomized response models
NASA Astrophysics Data System (ADS)
Ozgul, Nilgun; Cingi, Hulya
2013-10-01
Randomized response models are commonly used in surveys dealing with sensitive questions such as abortion, alcoholism, sexual orientation, drug taking, annual income, and tax evasion, to ensure interviewee anonymity and reduce nonresponse rates and biased responses. Starting from the pioneering work of Warner [7], many versions of RRM have been developed that can deal with quantitative responses. In this study, a new mean estimator is suggested for RRMs with quantitative responses. The mean square error is derived, and a simulation study is performed to show the efficiency of the proposed estimator relative to other existing estimators in RRM.
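A minimal simulation sketch of a quantitative randomized response survey under a simple additive scrambling model; it illustrates the general estimation idea only and does not include the auxiliary-variable refinement proposed in the study (all values simulated):

```python
import numpy as np

rng = np.random.default_rng(3)

# Additive scrambled-response model: each respondent reports Z = Y + S, where the
# scrambling variable S has a known mean, so mu_Y is estimated as mean(Z) - E[S].
n = 500
true_income = rng.lognormal(mean=10.5, sigma=0.4, size=n)   # sensitive variable Y
scramble = rng.normal(loc=0.0, scale=5_000.0, size=n)       # scrambling noise S, E[S] = 0
reported = true_income + scramble                           # what the survey observes

mu_s = 0.0                                                  # known mean of S
mu_hat = reported.mean() - mu_s
se_hat = reported.std(ddof=1) / np.sqrt(n)
print(f"estimated mean: {mu_hat:,.0f} (true {true_income.mean():,.0f}), SE ~ {se_hat:,.0f}")
```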
Vass, Caroline M; Payne, Katherine
2017-09-01
There is emerging interest in the use of discrete choice experiments as a means of quantifying the perceived balance between benefits and risks (quantitative benefit-risk assessment) of new healthcare interventions, such as medicines, under assessment by regulatory agencies. For stated preference data on benefit-risk assessment to be used in regulatory decision making, the methods to generate these data must be valid, reliable and capable of producing meaningful estimates understood by decision makers. Some reporting guidelines exist for discrete choice experiments, and for related methods such as conjoint analysis. However, existing guidelines focus on reporting standards, are general in focus and do not consider the requirements for using discrete choice experiments specifically for quantifying benefit-risk assessments in the context of regulatory decision making. This opinion piece outlines the current state of play in using discrete choice experiments for benefit-risk assessment and proposes key areas needing to be addressed to demonstrate that discrete choice experiments are an appropriate and valid stated preference elicitation method in this context. Methodological research is required to establish: how robust the results of discrete choice experiments are to formats and methods of risk communication; how information in the discrete choice experiment can be presented effectually to respondents; whose preferences should be elicited; the correct underlying utility function and analytical model; the impact of heterogeneity in preferences; and the generalisability of the results. We believe these methodological issues should be addressed, alongside developing a 'reference case', before agencies can safely and confidently use discrete choice experiments for quantitative benefit-risk assessment in the context of regulatory decision making for new medicines and healthcare products.
NASA Astrophysics Data System (ADS)
Panulla, Brian J.; More, Loretta D.; Shumaker, Wade R.; Jones, Michael D.; Hooper, Robert; Vernon, Jeffrey M.; Aungst, Stanley G.
2009-05-01
Rapid improvements in communications infrastructure and sophistication of commercial hand-held devices provide a major new source of information for assessing extreme situations such as environmental crises. In particular, ad hoc collections of humans can act as "soft sensors" to augment data collected by traditional sensors in a net-centric environment (in effect, "crowd-sourcing" observational data). A need exists to understand how to task such soft sensors, characterize their performance and fuse the data with traditional data sources. In order to quantitatively study such situations, as well as study distributed decision-making, we have developed an Extreme Events Laboratory (EEL) at The Pennsylvania State University. This facility provides a network-centric, collaborative situation assessment and decision-making capability by supporting experiments involving human observers, distributed decision making and cognition, and crisis management. The EEL spans the information chain from energy detection via sensors, human observations, signal and image processing, pattern recognition, statistical estimation, multi-sensor data fusion, visualization and analytics, and modeling and simulation. The EEL command center combines COTS and custom collaboration tools in innovative ways, providing capabilities such as geo-spatial visualization and dynamic mash-ups of multiple data sources. This paper describes the EEL and several on-going human-in-the-loop experiments aimed at understanding the new collective observation and analysis landscape.
Decision-making, sensitivity to reward, and attrition in weight-management
Koritzky, Gilly; Dieterle, Camille; Rice, Chantelle; Jordan, Katie; Bechara, Antoine
2014-01-01
Objective Attrition is a common problem in weight-management. Understanding the risk factors for attrition should enhance professionals' ability to increase completion rates and improve health outcomes for more individuals. We propose a model that draws upon neuropsychological knowledge on reward-sensitivity in obesity and overeating to predict attrition. Design & Methods 52 participants in a weight-management program completed a complex decision-making task. Decision-making characteristics – including sensitivity to reward – were further estimated using a quantitative model. Impulsivity and risk-taking measures were also administered. Results Consistent with the hypothesis that sensitivity to reward predicted attrition, program dropouts had higher sensitivity to reward than completers (p < 0.03). No differences were observed between completers and dropouts in initial BMI, age, employment status, or the number of prior weight-loss attempts (p ≥ 0.07). Completers had a slightly higher education level than dropouts, but its inclusion in the model did not increase predictive power. Impulsivity, delay of gratification, and risk-taking did not predict attrition, either. Conclusions Findings link attrition in weight-management to the neural mechanisms associated with reward-seeking and related influences on decision-making. Individual differences in the magnitude of response elicited by rewards may account for the relative difficulty experienced by dieters in adhering to treatment. PMID:24771588
An attentional drift diffusion model over binary-attribute choice.
Fisher, Geoffrey
2017-11-01
In order to make good decisions, individuals need to identify and properly integrate information about various attributes associated with a choice. Since choices are often complex and made rapidly, they are typically affected by contextual variables that are thought to influence how much attention is paid to different attributes. I propose a modification of the attentional drift-diffusion model, the binary-attribute attentional drift diffusion model (baDDM), which describes the choice process over simple binary-attribute choices and how it is affected by fluctuations in visual attention. Using an eye-tracking experiment, I find the baDDM makes accurate quantitative predictions about several key variables including choices, reaction times, and how these variables are correlated with attention to two attributes in an accept-reject decision. Furthermore, I estimate an attribute-based fixation bias that suggests attention to an attribute increases its subjective weight by 5%, while the unattended attribute's weight is decreased by 10%. Copyright © 2017 Elsevier B.V. All rights reserved.
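A minimal simulation sketch of an attention-weighted two-attribute drift-diffusion process in the spirit of the baDDM; the 5%/10% re-weighting echoes the abstract only as an illustrative assumption, and every other parameter is a placeholder rather than a fitted value:

```python
import numpy as np

rng = np.random.default_rng(1)

def baddm_trial(v1, v2, attend, dt=0.01, sigma=0.3, bound=1.0,
                boost=1.05, discount=0.90):
    """Simulate one accept/reject decision over a two-attribute option.

    v1, v2  : subjective values of the two attributes
    attend  : callable returning 1 or 2, the attribute fixated at each time step
    boost/discount : attentional re-weighting of the attended/unattended attribute
    """
    evidence, t = 0.0, 0.0
    while abs(evidence) < bound:
        w1, w2 = (boost, discount) if attend() == 1 else (discount, boost)
        drift = w1 * v1 + w2 * v2
        evidence += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("accept" if evidence > 0 else "reject"), t

# Hypothetical option with one positive and one negative attribute; random fixations.
choice, rt = baddm_trial(v1=0.8, v2=-0.5, attend=lambda: rng.integers(1, 3))
print(choice, round(rt, 2))
```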
Intrinsic fluctuations of the proton saturation momentum scale in high multiplicity p+p collisions
McLerran, Larry; Tribedy, Prithwish
2015-11-02
High multiplicity events in p+p collisions are studied using the theory of the Color Glass Condensate. Here, we show that intrinsic fluctuations of the proton saturation momentum scale are needed in addition to the sub-nucleonic color charge fluctuations to explain the very high multiplicity tail of distributions in p+p collisions. It is presumed that the origin of such intrinsic fluctuations is non-perturbative in nature. Classical Yang-Mills simulations using the IP-Glasma model are performed to make quantitative estimations. Furthermore, we find that fluctuations as large as O(1) of the average values of the saturation momentum scale can lead to rare high multiplicity events seen in p+p data at RHIC and LHC energies. Using the available data on multiplicity distributions we try to constrain the distribution of the proton saturation momentum scale and make predictions for the multiplicity distribution in 13 TeV p+p collisions.
Horowitz, A.J.
1986-01-01
Centrifugation, settling/centrifugation, and backflush-filtration procedures have been tested for the concentration of suspended sediment from water for subsequent trace-metal analysis. Either of the first two procedures is comparable with in-line filtration and can be carried out precisely, accurately, and with a facility that makes the procedures amenable to large-scale sampling and analysis programs. There is less potential for post-sampling alteration of suspended sediment-associated metal concentrations with the centrifugation procedure because sample stabilization is accomplished more rapidly than with settling/centrifugation. Sample preservation can be achieved by chilling. Suspended sediment associated metal levels can best be determined by direct analysis but can also be estimated from the difference between a set of unfiltered-digested and filtered subsamples. However, when suspended sediment concentrations (<150 mg/L) or trace-metal levels are low, the direct analysis approach makes quantitation more accurate and precise and can be accomplished with simpler analytical procedures.
NASA Astrophysics Data System (ADS)
Hazenberg, P.; Torfs, P. J. J. F.; Leijnse, H.; Delrieu, G.; Uijlenhoet, R.
2013-09-01
This paper presents a novel approach to estimate the vertical profile of reflectivity (VPR) from volumetric weather radar data using both a traditional Eulerian as well as a newly proposed Lagrangian implementation. For this latter implementation, the recently developed Rotational Carpenter Square Cluster Algorithm (RoCaSCA) is used to delineate precipitation regions at different reflectivity levels. A piecewise linear VPR is estimated for either stratiform or neither stratiform/convective precipitation. As a second aspect of this paper, a novel approach is presented which is able to account for the impact of VPR uncertainty on the estimated radar rainfall variability. Results show that implementation of the VPR identification and correction procedure has a positive impact on quantitative precipitation estimates from radar. Unfortunately, visibility problems severely limit the impact of the Lagrangian implementation beyond distances of 100 km. However, by combining this procedure with the global Eulerian VPR estimation procedure for a given rainfall type (stratiform and neither stratiform/convective), the quality of the quantitative precipitation estimates increases up to a distance of 150 km. Analyses of the impact of VPR uncertainty shows that this aspect accounts for a large fraction of the differences between weather radar rainfall estimates and rain gauge measurements.
Antoch, Marina P; Wrobel, Michelle; Kuropatwinski, Karen K; Gitlin, Ilya; Leonova, Katerina I; Toshkov, Ilia; Gleiberman, Anatoli S; Hutson, Alan D; Chernova, Olga B; Gudkov, Andrei V
2017-03-19
The development of healthspan-extending pharmaceuticals requires quantitative estimation of age-related progressive physiological decline. In humans, individual health status can be quantitatively assessed by means of a frailty index (FI), a parameter which reflects the scale of accumulation of age-related deficits. However, adaptation of this methodology to animal models is a challenging task since it includes multiple subjective parameters. Here we report a development of a quantitative non-invasive procedure to estimate biological age of an individual animal by creating physiological frailty index (PFI). We demonstrated the dynamics of PFI increase during chronological aging of male and female NIH Swiss mice. We also demonstrated acceleration of growth of PFI in animals placed on a high fat diet, reflecting aging acceleration by obesity and provide a tool for its quantitative assessment. Additionally, we showed that PFI could reveal anti-aging effect of mTOR inhibitor rapatar (bioavailable formulation of rapamycin) prior to registration of its effects on longevity. PFI revealed substantial sex-related differences in normal chronological aging and in the efficacy of detrimental (high fat diet) or beneficial (rapatar) aging modulatory factors. Together, these data introduce PFI as a reliable, non-invasive, quantitative tool suitable for testing potential anti-aging pharmaceuticals in pre-clinical studies.
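A frailty index is commonly computed as the fraction of measured parameters scored as deficits; the sketch below follows that convention, with a hypothetical parameter panel and reference ranges rather than the authors' actual PFI items:

```python
import numpy as np

def frailty_index(values, reference_low, reference_high):
    """Fraction of physiological parameters falling outside a young-reference range."""
    values = np.asarray(values, dtype=float)
    deficits = (values < reference_low) | (values > reference_high)
    return deficits.mean()

# Hypothetical panel of eight non-invasive parameters for one mouse, with reference
# ranges derived from young animals; two values fall outside their ranges here.
values   = np.array([5.2, 150, 0.8, 48, 27, 3.1, 95, 410])
ref_low  = np.array([4.0, 110, 0.5, 40, 18, 2.0, 90, 350])
ref_high = np.array([6.0, 140, 1.0, 55, 25, 4.0, 105, 450])
print(f"PFI = {frailty_index(values, ref_low, ref_high):.2f}")   # 0.25
```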
Abdul Rahman, Hanif; Abdul-Mumin, Khadizah; Naing, Lin
2017-03-01
Little evidence exists on exposure to psychosocial work stressors, work-related fatigue, and musculoskeletal disorders for nurses working in the South-East Asian region, and research on this subject is almost nonexistent in Brunei. The main aim of our study was to provide a comprehensive exploration and estimate exposure of the study variables amongst emergency (ER) and critical care (CC) nurses in Brunei. The study also aims to assess whether experiences of ER nurses differ from those of CC nurses. This cross-sectional study was implemented in the ER and CC departments across Brunei public hospitals from February to April 2016 by using the Copenhagen Psychosocial Questionnaire II, Occupational Fatigue Exhaustion Recovery scale, and Cornell Musculoskeletal Discomfort Questionnaire. In total, 201 ER and CC nurses (82.0% response rate) participated in the study. Quantitative demands of CC nurses were significantly higher than those of ER nurses. Even so, ER nurses were 4.0 times more likely [95% confidence interval (2.21, 7.35)] to experience threats of violence, and 2.8 times more likely [95% confidence interval: (1.50, 5.29)] to experience chronic fatigue. The results revealed that nurses experienced high quantitative demands, work pace, stress, and burnout. High prevalence of chronic and persistent fatigue, threats of violence and bullying, and musculoskeletal pain at the neck, shoulder, upper and lower back, and foot region, was also reported. This study has provided good estimates for the exposure rate of psychosocial work stressors, work-related fatigue, and musculoskeletal disorders among nurses in Brunei. It provided important initial insight for nursing management and policymakers to make informed decisions on current and future planning to provide nurses with a conducive work environment. Copyright © 2017. Published by Elsevier B.V.
Low Reynolds number wind tunnel measurements - The importance of being earnest
NASA Technical Reports Server (NTRS)
Mueller, Thomas J.; Batill, Stephen M.; Brendel, Michael; Perry, Mark L.; Bloch, Diane R.
1986-01-01
A method for obtaining two-dimensional aerodynamic force coefficients at low Reynolds numbers using a three-component external platform balance is presented. Regardless of method, however, the importance of understanding the possible influence of the test facility and instrumentation on the final results cannot be overstated. There is an uncertainty in the ability of the facility to simulate a two-dimensional flow environment due to the confinement effect of the wind tunnel and the method used to mount the airfoil. Additionally, the ability of the instrumentation to accurately measure forces and pressures has an associated uncertainty. This paper focuses on efforts taken to understand the errors introduced by the techniques and apparatus used at the University of Notre Dame, and the importance of making an earnest estimate of the uncertainty. Although quantitative estimates of facility-induced errors are difficult to obtain, the uncertainty in measured results can be handled in a straightforward manner and provide the experimentalist, and others, with a basis to evaluate experimental results.
An approach to and web-based tool for infectious disease outbreak intervention analysis
NASA Astrophysics Data System (ADS)
Daughton, Ashlynn R.; Generous, Nicholas; Priedhorsky, Reid; Deshpande, Alina
2017-04-01
Infectious diseases are a leading cause of death globally. Decisions surrounding how to control an infectious disease outbreak currently rely on a subjective process involving surveillance and expert opinion. However, there are many situations where neither may be available. Modeling can fill gaps in the decision making process by using available data to provide quantitative estimates of outbreak trajectories. Effective reduction of the spread of infectious diseases can be achieved through collaboration between the modeling community and public health policy community. However, such collaboration is rare, resulting in a lack of models that meet the needs of the public health community. Here we show a Susceptible-Infectious-Recovered (SIR) model modified to include control measures that allows parameter ranges, rather than parameter point estimates, and includes a web user interface for broad adoption. We apply the model to three diseases, measles, norovirus and influenza, to show the feasibility of its use and describe a research agenda to further promote interactions between decision makers and the modeling community.
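A minimal sketch of an SIR model modified to include a control measure and run over a range of transmission rates rather than a single point estimate; the parameter values are placeholders, not those used by the web tool:

```python
import numpy as np

def sir(beta, gamma=0.2, n=1_000_000, i0=10, days=180,
        control_day=None, control_effect=0.5):
    """Discrete-time SIR; a control measure scales beta down from control_day onward."""
    s, i, r = n - i0, i0, 0
    infected = []
    for day in range(days):
        b = beta * control_effect if (control_day is not None and day >= control_day) else beta
        new_inf = b * s * i / n
        new_rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        infected.append(i)
    return np.array(infected)

# Sweep a range of transmission rates instead of committing to one point estimate.
for beta in (0.35, 0.45, 0.55):
    peak_free = sir(beta).max()
    peak_ctrl = sir(beta, control_day=30).max()
    print(f"beta={beta}: peak infections {peak_free:,.0f} -> {peak_ctrl:,.0f} with control")
```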
Methylcyclopentadienyl manganese tricarbonyl: health risk uncertainties and research directions.
Davis, J M
1998-01-01
With the way cleared for increased use of the fuel additive methylcyclopentadienyl manganese tricarbonyl (MMT) in the United States, the issue of possible public health impacts associated with this additive has gained greater attention. In assessing potential health risks of particulate Mn emitted from the combustion of MMT in gasoline, the U.S. Environmental Protection Agency not only considered the qualitative types of toxic effects associated with inhaled Mn, but conducted extensive exposure-response analyses using various statistical approaches and also estimated population exposure distributions of particulate Mn based on data from an exposure study conducted in California when MMT was used in leaded gasoline. Because of limitations in available data and the need to make several assumptions and extrapolations, the resulting risk characterization had inherent uncertainties that made it impossible to estimate health risks in a definitive or quantitative manner. To support an improved health risk characterization, further investigation is needed in the areas of health effects, emission characterization, and exposure analysis. PMID:9539013
Optimizing Hybrid Metrology: Rigorous Implementation of Bayesian and Combined Regression
Henn, Mark-Alexander; Silver, Richard M.; Villarrubia, John S.; Zhang, Nien Fan; Zhou, Hui; Barnes, Bryan M.; Ming, Bin; Vladár, András E.
2015-01-01
Hybrid metrology, e.g., the combination of several measurement techniques to determine critical dimensions, is an increasingly important approach to meet the needs of the semiconductor industry. A proper use of hybrid metrology may yield not only more reliable estimates for the quantitative characterization of 3-D structures but also a more realistic estimation of the corresponding uncertainties. Recent developments at the National Institute of Standards and Technology (NIST) feature the combination of optical critical dimension (OCD) measurements and scanning electron microscope (SEM) results. The hybrid methodology offers the potential to make measurements of essential 3-D attributes that may not be otherwise feasible. However, combining techniques gives rise to essential challenges in error analysis and comparing results from different instrument models, especially the effect of systematic and highly correlated errors in the measurement on the χ2 function that is minimized. Both hypothetical examples and measurement data are used to illustrate solutions to these challenges. PMID:26681991
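When errors are correlated across techniques, the χ2 being minimized uses the full error covariance, χ2 = rᵀ Σ⁻¹ r. A minimal sketch combining two hypothetical measurements of a single critical dimension under an assumed covariance:

```python
import numpy as np

# Two instruments measure the same critical dimension (nm); their errors are correlated.
measurements = np.array([32.4, 31.8])          # hypothetical OCD-like and SEM-like results
cov = np.array([[0.25, 0.10],                  # variances plus a positive covariance term
                [0.10, 0.36]])

# Generalized least squares for one common parameter x:
#   minimize chi2(x) = r(x)^T Sigma^-1 r(x), with r(x) = measurements - x.
cov_inv = np.linalg.inv(cov)
ones = np.ones(2)
x_hat = ones @ cov_inv @ measurements / (ones @ cov_inv @ ones)
var_hat = 1.0 / (ones @ cov_inv @ ones)

chi2 = (measurements - x_hat) @ cov_inv @ (measurements - x_hat)
print(f"combined CD = {x_hat:.2f} +/- {np.sqrt(var_hat):.2f} nm, chi2 = {chi2:.2f}")
```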
Possible effects of volcanic eruptions on stratospheric minor constituent chemistry
NASA Technical Reports Server (NTRS)
Stolarski, R. S.; Butler, D. M.
1979-01-01
Although stratosphere-penetrating volcanic eruptions have been infrequent during the last half century, periods have existed in the last several hundred years when such eruptions were significantly more frequent. Several mechanisms exist for these injections to affect stratospheric minor constituent chemistry, both on the long-term average and for short-term perturbations. These mechanisms are reviewed and, because of the sensitivity of current models of stratospheric ozone to chlorine perturbations, quantitative estimates are made of chlorine injection rates. It is found that, if chlorine makes up as much as 0.5 to 1% of the gases released and if the total gases released are about the same magnitude as the fine ash, then a major stratosphere-penetrating eruption could deplete the ozone column by several percent. The estimate for the Agung eruption of 1963 is just under 1%, an amount not excluded by the ozone record but complicated by the peak in atmospheric nuclear explosions at about the same time.
Hu, Zhe-Yi; Parker, Robert B.; Herring, Vanessa L.; Laizure, S. Casey
2012-01-01
Dabigatran etexilate (DABE) is an oral prodrug that is rapidly converted by esterases to dabigatran (DAB), a direct inhibitor of thrombin. To elucidate the esterase-mediated metabolic pathway of DABE, a high-performance liquid chromatography/mass spectrometer (LC-MS/MS)-based metabolite identification and semi-quantitative estimation approach was developed. To overcome the poor full-scan sensitivity of conventional triple quadrupole mass spectrometry, precursor-product ion pairs were predicted to search for the potential in vitro metabolites. The detected metabolites were confirmed by the product ion scan. A dilution method was introduced to evaluate the matrix effects of tentatively identified metabolites without chemical standards. Quantitative information on detected metabolites was obtained using 'metabolite standards' generated from incubation samples that contain a high concentration of metabolite in combination with a correction factor for mass spectrometry response. Two in vitro metabolites of DABE (M1 and M2) were identified, and quantified by the semi-quantitative estimation approach. It is noteworthy that CES1 converts DABE to M1, while CES2 mediates the conversion of DABE to M2. M1 (or M2) was further metabolized to DAB by CES2 (or CES1). The approach presented here provides a solution to a bioanalytical need for fast identification and semi-quantitative estimation of CES metabolites in preclinical samples. PMID:23239178
Multiparametric Quantitative Ultrasound Imaging in Assessment of Chronic Kidney Disease.
Gao, Jing; Perlman, Alan; Kalache, Safa; Berman, Nathaniel; Seshan, Surya; Salvatore, Steven; Smith, Lindsey; Wehrli, Natasha; Waldron, Levi; Kodali, Hanish; Chevalier, James
2017-11-01
To evaluate the value of multiparametric quantitative ultrasound imaging in assessing chronic kidney disease (CKD) using kidney biopsy pathologic findings as reference standards. We prospectively measured multiparametric quantitative ultrasound markers with grayscale, spectral Doppler, and acoustic radiation force impulse imaging in 25 patients with CKD before kidney biopsy and 10 healthy volunteers. Based on all pathologic (glomerulosclerosis, interstitial fibrosis/tubular atrophy, arteriosclerosis, and edema) scores, the patients with CKD were classified into mild (no grade 3 and <2 of grade 2) and moderate to severe (at least 2 of grade 2 or 1 of grade 3) CKD groups. Multiparametric quantitative ultrasound parameters included kidney length, cortical thickness, pixel intensity, parenchymal shear wave velocity, intrarenal artery peak systolic velocity (PSV), end-diastolic velocity (EDV), and resistive index. We tested the difference in quantitative ultrasound parameters among mild CKD, moderate to severe CKD, and healthy controls using analysis of variance, analyzed correlations of quantitative ultrasound parameters with pathologic scores and the estimated glomerular filtration rate (GFR) using Pearson correlation coefficients, and examined the diagnostic performance of quantitative ultrasound parameters in determining moderate CKD and an estimated GFR of less than 60 mL/min/1.73 m² using receiver operating characteristic curve analysis. There were significant differences in cortical thickness, pixel intensity, PSV, and EDV among the 3 groups (all P < .01). Among quantitative ultrasound parameters, the top areas under the receiver operating characteristic curves for PSV and EDV were 0.88 and 0.97, respectively, for determining pathologic moderate to severe CKD, and 0.76 and 0.86 for an estimated GFR of less than 60 mL/min/1.73 m². Moderate to good correlations were found for PSV, EDV, and pixel intensity with pathologic scores and estimated GFR. The PSV, EDV, and pixel intensity are valuable in determining moderate to severe CKD. The value of shear wave velocity in assessing CKD needs further investigation. © 2017 by the American Institute of Ultrasound in Medicine.
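A minimal sketch of the receiver operating characteristic analysis described above for one marker (hypothetical EDV values and labels; scikit-learn assumed available):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical end-diastolic velocities (cm/s) and pathology labels
# (1 = moderate to severe CKD, 0 = mild CKD or healthy control).
edv = np.array([14.1, 12.8, 9.5, 8.2, 13.6, 7.9, 6.5, 8.8, 5.8, 7.1])
label = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 1])

# Lower EDV indicates more severe disease, so score cases by the negative velocity.
auc = roc_auc_score(label, -edv)
fpr, tpr, thresholds = roc_curve(label, -edv)
best = np.argmax(tpr - fpr)                       # Youden index for a working cut-off
print(f"AUC = {auc:.2f}, cut-off: EDV <= {-thresholds[best]:.1f} cm/s")
```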
Biurrun Manresa, José A.; Arguissain, Federico G.; Medina Redondo, David E.; Mørch, Carsten D.; Andersen, Ole K.
2015-01-01
The agreement between humans and algorithms on whether an event-related potential (ERP) is present or not and the level of variation in the estimated values of its relevant features are largely unknown. Thus, the aim of this study was to determine the categorical and quantitative agreement between manual and automated methods for single-trial detection and estimation of ERP features. To this end, ERPs were elicited in sixteen healthy volunteers using electrical stimulation at graded intensities below and above the nociceptive withdrawal reflex threshold. Presence/absence of an ERP peak (categorical outcome) and its amplitude and latency (quantitative outcome) in each single-trial were evaluated independently by two human observers and two automated algorithms taken from existing literature. Categorical agreement was assessed using percentage positive and negative agreement and Cohen’s κ, whereas quantitative agreement was evaluated using Bland-Altman analysis and the coefficient of variation. Typical values for the categorical agreement between manual and automated methods were derived, as well as reference values for the average and maximum differences that can be expected if one method is used instead of the others. Results showed that the human observers presented the highest categorical and quantitative agreement, and there were significantly large differences between detection and estimation of quantitative features among methods. In conclusion, substantial care should be taken in the selection of the detection/estimation approach, since factors like stimulation intensity and expected number of trials with/without response can play a significant role in the outcome of a study. PMID:26258532
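A minimal sketch of the two agreement analyses named above, Cohen's κ for the categorical presence/absence calls and Bland-Altman bias with 95% limits of agreement for the amplitude estimates (all values hypothetical):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Presence/absence calls for 12 single trials by a human observer and an algorithm.
human = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0])
algo  = np.array([1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0])
print(f"Cohen's kappa = {cohen_kappa_score(human, algo):.2f}")

# Bland-Altman analysis of peak amplitudes (uV) for trials both rated as present.
amp_human = np.array([12.1, 10.5, 14.2, 9.8, 11.3, 13.0])
amp_algo  = np.array([11.4, 10.9, 13.5, 10.2, 10.8, 12.2])
diff = amp_algo - amp_human
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias = {bias:.2f} uV, 95% limits of agreement = "
      f"[{bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f}] uV")
```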
The interrupted power law and the size of shadow banking.
Fiaschi, Davide; Kondor, Imre; Marsili, Matteo; Volpati, Valerio
2014-01-01
Using public data (Forbes Global 2000) we show that the asset sizes for the largest global firms follow a Pareto distribution in an intermediate range that is "interrupted" by a sharp cut-off in its upper tail, where it is totally dominated by financial firms. This flattening of the distribution contrasts with a large body of empirical literature which finds a Pareto distribution for firm sizes both across countries and over time. Pareto distributions are generally traced back to a mechanism of proportional random growth, based on a regime of constant returns to scale. This makes our findings of an "interrupted" Pareto distribution all the more puzzling, because we provide evidence that financial firms in our sample should operate in such a regime. We claim that the missing mass from the upper tail of the asset size distribution is a consequence of shadow banking activity and that it provides an (upper) estimate of the size of the shadow banking system. This estimate, which we propose as a shadow banking index, compares well with estimates of the Financial Stability Board until 2009, but it shows a sharper rise in shadow banking activity after 2010. Finally, we propose a proportional random growth model that reproduces the observed distribution, thereby providing a quantitative estimate of the intensity of shadow banking activity.
Benito-López, Bernardino; Moreno-Enguix, María del Rocio; Solana-Ibañez, José
2011-06-01
Effective waste management systems can make critical contributions to public health, environmental sustainability and economic development. The challenge affects every person and institution in society, and measures cannot be undertaken without data collection and a quantitative analysis approach. In this paper, the two-stage double bootstrap procedure of Simar and Wilson (2007) is used to estimate the efficiency determinants of Spanish local entities in the provision of public street-cleaning and refuse collection services. The purpose is to identify factors that influence efficiency. The final sample comprised 1072 municipalities. In the first stage, robust efficiency estimates are obtained with Data Envelopment Analysis (DEA). We apply the second stage, based on a truncated-regression, to estimate the effect of a group of environmental factors on DEA estimates. The results show the existence of a significant relation between efficiency and all the variables analysed (per capita income, urban population density, the comparative index of the importance of tourism and that of the whole economic activity). We have also considered the influence of a dummy categorical variable - the political sign of the governing party - on the efficient provision of the services under study. The results from the methodology proposed show that municipalities governed by progressive parties are more efficient. Copyright © 2011 Elsevier Ltd. All rights reserved.
Association of earthquakes and faults in the San Francisco Bay area using Bayesian inference
Wesson, R.L.; Bakun, W.H.; Perkins, D.M.
2003-01-01
Bayesian inference provides a method to use seismic intensity data or instrumental locations, together with geologic and seismologic data, to make quantitative estimates of the probabilities that specific past earthquakes are associated with specific faults. Probability density functions are constructed for the location of each earthquake, and these are combined with prior probabilities through Bayes' theorem to estimate the probability that an earthquake is associated with a specific fault. Results using this method are presented here for large, preinstrumental, historical earthquakes and for recent earthquakes with instrumental locations in the San Francisco Bay region. The probabilities for individual earthquakes can be summed to construct a probabilistic frequency-magnitude relationship for a fault segment. Other applications of the technique include the estimation of the probability of background earthquakes, that is, earthquakes not associated with known or considered faults, and the estimation of the fraction of the total seismic moment associated with earthquakes less than the characteristic magnitude. Results for the San Francisco Bay region suggest that potentially damaging earthquakes with magnitudes less than the characteristic magnitudes should be expected. Comparisons of earthquake locations and the surface traces of active faults as determined from geologic data show significant disparities, indicating that a complete understanding of the relationship between earthquakes and faults remains elusive.
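As a toy illustration of the Bayes' theorem step described above, the sketch below combines hypothetical prior fault probabilities with hypothetical location likelihoods for a single historical earthquake; the fault names and all numbers are placeholders, not values from the study.

```python
# Minimal sketch (illustrative numbers): combining a location likelihood for one
# historical earthquake with prior fault probabilities via Bayes' theorem.
import numpy as np

faults = ["San Andreas", "Hayward", "Calaveras", "background"]
prior = np.array([0.40, 0.30, 0.20, 0.10])       # e.g. from long-term slip rates
# P(observed intensities / location | earthquake on fault k), e.g. the location
# PDF integrated over a corridor around each fault trace (hypothetical values):
likelihood = np.array([0.02, 0.15, 0.05, 0.01])

posterior = prior * likelihood
posterior /= posterior.sum()                      # Bayes' theorem, normalised
for name, p in zip(faults, posterior):
    print(f"P({name} | data) = {p:.3f}")
```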
Bispectral infrared forest fire detection and analysis using classification techniques
NASA Astrophysics Data System (ADS)
Aranda, Jose M.; Melendez, Juan; de Castro, Antonio J.; Lopez, Fernando
2004-01-01
Infrared cameras are well established as a useful tool for fire detection, but their use for quantitative forest fire measurements faces difficulties due to the complex spatial and spectral structure of fires. In this work it is shown that some of these difficulties can be overcome by applying classification techniques, a standard tool for the analysis of satellite multispectral images, to bi-spectral images of fires. Images were acquired by two cameras that operate in the medium infrared (MIR) and thermal infrared (TIR) bands. They provide simultaneous and co-registered images, calibrated in brightness temperatures. The MIR-TIR scatterplot of these images can be used to classify the scene into different fire regions (background, ashes, and several ember and flame regions). It is shown that classification makes it possible to obtain quantitative measurements of physical fire parameters such as rate of spread, ember temperature, and radiated power in the MIR and TIR bands. An estimation of total radiated power and heat release per unit area is also made and compared with values derived from heat of combustion and fuel consumption.
Quantitative sonoelastography for the in vivo assessment of skeletal muscle viscoelasticity
NASA Astrophysics Data System (ADS)
Hoyt, Kenneth; Kneezel, Timothy; Castaneda, Benjamin; Parker, Kevin J.
2008-08-01
A novel quantitative sonoelastography technique for assessing the viscoelastic properties of skeletal muscle tissue was developed. Slowly propagating shear wave interference patterns (termed crawling waves) were generated using a two-source configuration vibrating normal to the surface. Theoretical models predict crawling wave displacement fields, which were validated through phantom studies. In experiments, a viscoelastic model was fit to dispersive shear wave speed sonoelastographic data using nonlinear least-squares techniques to determine frequency-independent shear modulus and viscosity estimates. Shear modulus estimates derived using the viscoelastic model were in agreement with those obtained by mechanical testing on phantom samples. Preliminary sonoelastographic data acquired in healthy human skeletal muscles confirm that high-quality quantitative elasticity data can be acquired in vivo. Studies on relaxed muscle indicate discernible differences in both shear modulus and viscosity estimates between different skeletal muscle groups. Investigations into the dynamic viscoelastic properties of (healthy) human skeletal muscles revealed that voluntarily contracted muscles exhibit considerable increases in both shear modulus and viscosity estimates as compared to the relaxed state. Overall, preliminary results are encouraging and quantitative sonoelastography may prove clinically feasible for in vivo characterization of the dynamic viscoelastic properties of human skeletal muscle.
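A minimal sketch of the nonlinear least-squares step, assuming a Kelvin-Voigt viscoelastic dispersion relation and synthetic shear-wave-speed data; the density, frequencies and noise level are illustrative only.

```python
# Minimal sketch (assuming a Kelvin-Voigt model; data are synthetic): fit
# frequency-independent shear modulus mu (Pa) and viscosity eta (Pa*s) to
# dispersive shear-wave speed measurements by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

RHO = 1000.0  # assumed tissue density, kg/m^3

def voigt_speed(f, mu, eta):
    w = 2 * np.pi * f
    num = 2 * (mu**2 + (w * eta)**2)
    den = RHO * (mu + np.sqrt(mu**2 + (w * eta)**2))
    return np.sqrt(num / den)

# synthetic "measured" dispersion curve for mu = 4 kPa, eta = 2 Pa*s, plus noise
freqs = np.linspace(100, 450, 8)
rng = np.random.default_rng(0)
c_meas = voigt_speed(freqs, 4e3, 2.0) + rng.normal(0, 0.05, freqs.size)

(mu_hat, eta_hat), _ = curve_fit(voigt_speed, freqs, c_meas, p0=[2e3, 1.0])
print(f"shear modulus ~ {mu_hat/1e3:.2f} kPa, viscosity ~ {eta_hat:.2f} Pa*s")
```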
Bourret, A; Garant, D
2017-03-01
Quantitative genetics approaches, and particularly animal models, are widely used to assess the genetic (co)variance of key fitness related traits and infer adaptive potential of wild populations. Despite the importance of precision and accuracy of genetic variance estimates and their potential sensitivity to various ecological and population specific factors, their reliability is rarely tested explicitly. Here, we used simulations and empirical data collected from an 11-year study on tree swallow (Tachycineta bicolor), a species showing a high rate of extra-pair paternity and a low recruitment rate, to assess the importance of identity errors, structure and size of the pedigree on quantitative genetic estimates in our dataset. Our simulations revealed an important lack of precision in heritability and genetic-correlation estimates for most traits, a low power to detect significant effects and important identifiability problems. We also observed a large bias in heritability estimates when using the social pedigree instead of the genetic one (deflated heritabilities) or when not accounting for an important cause of resemblance among individuals (for example, permanent environment or brood effect) in model parameterizations for some traits (inflated heritabilities). We discuss the causes underlying the low reliability observed here and why they are also likely to occur in other study systems. Altogether, our results re-emphasize the difficulties of generalizing quantitative genetic estimates reliably from one study system to another and the importance of reporting simulation analyses to evaluate these important issues.
Quantitative subsurface analysis using frequency modulated thermal wave imaging
NASA Astrophysics Data System (ADS)
Subhani, S. K.; Suresh, B.; Ghali, V. S.
2018-01-01
Quantitative depth analysis of an anomaly with enhanced depth resolution is a challenging task in estimating the depth of subsurface anomalies using thermography. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the subsequent thermal response with a suitable post-processing approach to resolve subsurface details. However, the conventional Fourier-transform-based methods used for post-processing unscramble the frequencies with limited frequency resolution and therefore yield a finite depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which can further improve the depth resolution and allow the finest subsurface features to be explored axially. Quantitative depth analysis with this augmented depth resolution is proposed to provide a closer estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first, unique solution for quantitative depth estimation in frequency modulated thermal wave imaging.
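To illustrate the spectral-zooming idea, the sketch below evaluates the discrete-time Fourier transform of a synthetic thermal response directly on a fine frequency grid inside a narrow band, giving much finer frequency spacing than the FFT's 1/T grid (recent SciPy versions also expose the chirp z-transform itself as scipy.signal.czt / zoom_fft). The signal, band edges and sampling rate are illustrative.

```python
# Minimal sketch: "zooming" the spectrum of a synthetic response over a narrow
# band by evaluating the DTFT on a fine grid, giving finer frequency spacing
# than the FFT's fs/N bins.
import numpy as np

fs = 10.0                                    # sampling rate, Hz
t = np.arange(0, 200, 1 / fs)                # 200 s record -> FFT grid of 0.005 Hz
x = np.sin(2 * np.pi * 0.0523 * t)           # a low-frequency stimulation component

def zoom_dft(x, fs, f_lo, f_hi, n_bins):
    """Evaluate the DTFT of x on n_bins frequencies in [f_lo, f_hi]."""
    n = np.arange(len(x))
    freqs = np.linspace(f_lo, f_hi, n_bins)
    kernel = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
    return freqs, kernel @ x

freqs, X = zoom_dft(x, fs, 0.04, 0.07, 601)   # ~5e-5 Hz spacing inside the band
print(freqs[np.argmax(np.abs(X))])            # peak located far more finely than 0.005 Hz
```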
Fallout Deposition in the Marshall Islands from Bikini and Enewetak Nuclear Weapons Tests
Beck, Harold L.; Bouville, André; Moroz, Brian E.; Simon, Steven L.
2009-01-01
Deposition densities (Bq m-2) of all important dose-contributing radionuclides occurring in nuclear weapons testing fallout from tests conducted at Bikini and Enewetak Atolls (1946-1958) have been estimated on a test-specific basis for all the 31 atolls and separate reef islands of the Marshall Islands. A complete review of various historical and contemporary data, as well as meteorological analysis, was used to make judgments regarding which tests deposited fallout in the Marshall Islands and to estimate fallout deposition density. Our analysis suggested that only 20 of the 66 nuclear tests conducted in or near the Marshall Islands resulted in substantial fallout deposition on any of the 25 inhabited atolls. This analysis was confirmed by the fact that the sum of our estimates of 137Cs deposition from these 20 tests at each atoll is in good agreement with the total 137Cs deposited as estimated from contemporary soil sample analyses. The monitoring data and meteorological analyses were used to quantitatively estimate the deposition density of 63 activation and fission products for each nuclear test, plus the cumulative deposition of 239+240Pu at each atoll. Estimates of the degree of fractionation of fallout from each test at each atoll, as well as of the fallout transit times from the test sites to the atolls were used in this analysis. The estimates of radionuclide deposition density, fractionation, and transit times reported here are the most complete available anywhere and are suitable for estimations of both external and internal dose to representative persons as described in companion papers. PMID:20622548
Fallout deposition in the Marshall Islands from Bikini and Enewetak nuclear weapons tests.
Beck, Harold L; Bouville, André; Moroz, Brian E; Simon, Steven L
2010-08-01
Deposition densities (Bq m(-2)) of all important dose-contributing radionuclides occurring in nuclear weapons testing fallout from tests conducted at Bikini and Enewetak Atolls (1946-1958) have been estimated on a test-specific basis for 32 atolls and separate reef islands of the Marshall Islands. A complete review of various historical and contemporary data, as well as meteorological analysis, was used to make judgments regarding which tests deposited fallout in the Marshall Islands and to estimate fallout deposition density. Our analysis suggested that only 20 of the 66 nuclear tests conducted in or near the Marshall Islands resulted in substantial fallout deposition on any of the 23 inhabited atolls. This analysis was confirmed by the fact that the sum of our estimates of 137Cs deposition from these 20 tests at each atoll is in good agreement with the total 137Cs deposited as estimated from contemporary soil sample analyses. The monitoring data and meteorological analyses were used to quantitatively estimate the deposition density of 63 activation and fission products for each nuclear test, plus the cumulative deposition of 239+240Pu at each atoll. Estimates of the degree of fractionation of fallout from each test at each atoll, as well as of the fallout transit times from the test sites to the atolls were used in this analysis. The estimates of radionuclide deposition density, fractionation, and transit times reported here are the most complete available anywhere and are suitable for estimations of both external and internal dose to representative persons as described in companion papers.
NASA Astrophysics Data System (ADS)
Baer, E. M.; Whittington, C.; Burn, H.
2008-12-01
The geological sciences are fundamentally quantitative. However, the diversity of students' mathematical preparation and skills makes the successful use of quantitative concepts difficult in introductory level classes. At Highline Community College, we have implemented a one-credit co-requisite course to give students supplemental instruction for quantitative skills used in the course. The course, formally titled "Quantitative Geology," nicknamed "MathPatch," runs parallel to our introductory Physical Geology course. MathPatch teaches the quantitative skills required for the geology class right before they are needed. Thus, students learn only the skills they need and are given opportunities to apply them immediately. Topics include complex-graph reading, unit conversions, large numbers, scientific notation, scale and measurement, estimation, powers of 10, and other fundamental mathematical concepts used in basic geology. Use of this course over the past 8 years has successfully accomplished the goals of increasing students' quantitative skills, success and retention. Students master the quantitative skills to a greater extent than before the course was implemented, and less time is spent covering basic quantitative skills in the classroom. Because the course supports the use of quantitative skills, the large number of faculty that teach Geology 101 are more comfortable in using quantitative analysis, and indeed see it as an expectation of the course at Highline. Also significant, retention in the geology course has increased substantially, from 75% to 85%. Although successful, challenges persist with requiring MathPatch as a supplementary course. First, we have seen enrollments decrease in Geology 101, which may be the result of adding this co-requisite. Students resist mandatory enrollment in the course, although they are not good at evaluating their own need for the course. The logistics of utilizing MathPatch in an evening class with fewer, longer class meetings have been challenging. Finally, in order to better serve our students' needs, we began to offer on-line sections of MathPatch; this mode of instruction is not as clearly effective, although it is very popular. Through the new The Math You Need project, we hope to improve the effectiveness of the on-line instruction so it can provide comparable results to the face-to-face sections of this class.
Non-working nurses in Japan: estimated size and its age-cohort characteristics.
Nakata, Yoshifumi; Miyazaki, Satoru
2008-12-01
This paper aims to forecast the total number of non-working nursing staff in Japan, both overall and by age group, for assistant nurses and fully qualified nurses, and examines the policy implications of those forecasts. Although a figure of around 550,000 non-working nursing staff has been announced, the actual number is unclear, and this uncertainty can lead to errors in policies intended to balance nurse workforce demand and supply in Japan. Estimations were carried out by integrating various data on the quantitative characteristics of non-working nursing staff. Considering the length and type of education or training for the four nursing positions concerned (registered nurses, assistant nurses, public health nurses and midwives), we first estimated the number of students who completed a full course. Then, by multiplying by the ratios for gender and age classifications at the time of entry into courses, we estimated the number of those who obtained licenses. The number of non-working nurses was estimated to be 100,000 higher than the government's 2005 figure. Looking at age groups, it is also possible to see a strong reflection of an employment pattern that follows the life cycle of female workers. Further analysis of life-cycle and cohort effects confirmed the life-cycle effect even after subtracting differences between the working behaviours of different generations. Our findings strongly suggest the need for an urgent policy to create workplace conditions in which a balance between work and family is achievable. Moreover, to strengthen clinical activity, we also believe there is an urgent need to reexamine the overall career vision for assistant nurses, including compensation. Relevance to clinical practice: our findings strongly suggest that attention to the work-life balance of nursing staff, particularly female staff, is all the more important for providing stable, quality care.
NASA Astrophysics Data System (ADS)
Foster, L. K.; Clark, B. R.; Duncan, L. L.; Tebo, D. T.; White, J.
2017-12-01
Several historical groundwater models exist within the Coastal Lowlands Aquifer System (CLAS), which spans the Gulf Coastal Plain in Texas, Louisiana, Mississippi, Alabama, and Florida. The largest of these models, called the Gulf Coast Regional Aquifer System Analysis (RASA) model, has been brought into a new framework using the Newton formulation for MODFLOW-2005 (MODFLOW-NWT) and serves as the starting point of a new investigation underway by the U.S. Geological Survey to improve understanding of the CLAS and provide predictions of future groundwater availability within an uncertainty quantification (UQ) framework. The use of an UQ framework will not only provide estimates of water-level observation worth, hydraulic parameter uncertainty, boundary-condition uncertainty, and uncertainty of future potential predictions, but it will also guide the model development process. Traditionally, model development proceeds from dataset construction to the process of deterministic history matching, followed by deterministic predictions using the model. This investigation will combine the use of UQ with existing historical models of the study area to assess in a quantitative framework the effect model package and property improvements have on the ability to represent past-system states, as well as the effect on the model's ability to make certain predictions of water levels, water budgets, and base-flow estimates. Estimates of hydraulic property information and boundary conditions from the existing models and literature, forming the prior, will be used to make initial estimates of model forecasts and their corresponding uncertainty, along with an uncalibrated groundwater model run within an unconstrained Monte Carlo analysis. First-Order Second-Moment (FOSM) analysis will also be used to investigate parameter and predictive uncertainty, and guide next steps in model development prior to rigorous history matching by using PEST++ parameter estimation code.
Eisenberg, Marisa C; Jain, Harsh V
2017-10-27
Mathematical modeling has a long history in the field of cancer therapeutics, and there is increasing recognition that it can help uncover the mechanisms that underlie tumor response to treatment. However, making quantitative predictions with such models often requires parameter estimation from data, raising questions of parameter identifiability and estimability. Even in the case of structural (theoretical) identifiability, imperfect data and the resulting practical unidentifiability of model parameters can make it difficult to infer the desired information, and in some cases, to yield biologically correct inferences and predictions. Here, we examine parameter identifiability and estimability using a case study of two compartmental, ordinary differential equation models of cancer treatment with drugs that are cell cycle-specific (taxol) as well as non-specific (oxaliplatin). We proceed through model building, structural identifiability analysis, parameter estimation, practical identifiability analysis and its biological implications, as well as alternative data collection protocols and experimental designs that render the model identifiable. We use the differential algebra/input-output relationship approach for structural identifiability, and primarily the profile likelihood approach for practical identifiability. Despite the models being structurally identifiable, we show that without consideration of practical identifiability, incorrect cell cycle distributions can be inferred, that would result in suboptimal therapeutic choices. We illustrate the usefulness of estimating practically identifiable combinations (in addition to the more typically considered structurally identifiable combinations) in generating biologically meaningful insights. We also use simulated data to evaluate how the practical identifiability of the model would change under alternative experimental designs. These results highlight the importance of understanding the underlying mechanisms rather than purely using parsimony or information criteria/goodness-of-fit to decide model selection questions. The overall roadmap for identifiability testing laid out here can be used to help provide mechanistic insight into complex biological phenomena, reduce experimental costs, and optimize model-driven experimentation. Copyright © 2017. Published by Elsevier Ltd.
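The profile-likelihood idea used for practical identifiability can be sketched on a toy exponential-decay model (not the paper's cancer-treatment ODE systems): fix one parameter on a grid, re-optimise the others, and inspect how sharply the resulting objective rises around the optimum.

```python
# Minimal sketch (toy model, not the paper's ODE system): profile likelihood for
# practical identifiability. For each fixed value of one parameter, re-optimise
# the remaining parameters; a flat profile signals practical non-identifiability.
import numpy as np
from scipy.optimize import minimize

def model(t, a, k):
    return a * np.exp(-k * t)

rng = np.random.default_rng(1)
t = np.linspace(0, 2, 12)                       # short, sparse sampling window
y = model(t, 5.0, 0.4) + rng.normal(0, 0.2, t.size)

def sse(params):
    a, k = params
    return np.sum((y - model(t, a, k)) ** 2)

best = minimize(sse, x0=[4.0, 1.0], method="Nelder-Mead")

k_grid = np.linspace(0.05, 1.0, 20)
profile = []
for k_fixed in k_grid:
    res = minimize(lambda a: np.sum((y - model(t, a[0], k_fixed)) ** 2),
                   x0=[best.x[0]], method="Nelder-Mead")
    profile.append(res.fun)

# an approximate likelihood-ratio threshold marks the confidence region for k
threshold = best.fun * (1 + 3.84 / (t.size - 2))
inside = k_grid[np.array(profile) <= threshold]
print(f"approx. 95% profile interval for k: [{inside.min():.2f}, {inside.max():.2f}]")
```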
NASA Astrophysics Data System (ADS)
Montopoli, Mario; Roberto, Nicoletta; Adirosi, Elisa; Gorgucci, Eugenio; Baldini, Luca
2017-04-01
Weather radars are nowadays a unique tool for quantitatively estimating rain precipitation near the surface. This is an important task for many applications, for example to feed hydrological models, to mitigate the impact of severe storms at the ground by using radar information in modern warning tools, and to aid validation studies of satellite-based rain products. With respect to the latter application, several ground validation studies of the Global Precipitation Mission (GPM) products have recently highlighted the importance of accurate QPE from ground-based weather radars. To date, many studies have analyzed the performance of various QPE algorithms making use of actual and synthetic experiments, possibly trained by measurements of particle size distributions and electromagnetic models. Most of these studies support the use of dual polarization variables not only to ensure a good level of radar data quality but also as a direct input in the rain estimation equations. Among others, one of the most important limiting factors in radar QPE accuracy is the vertical variability of the particle size distribution, which affects, at different levels, all the radar variables acquired as well as the rain rates. This is particularly impactful in mountainous areas, where the altitude of the radar sampling is likely several hundred meters above the surface. In this work, we analyze the impact of the vertical profile variations of rain precipitation on several dual polarization radar QPE algorithms when they are tested in a complex orography scenario. So far, in weather radar studies, more emphasis has been given to extrapolation strategies that make use of the signature of the vertical profiles in terms of radar co-polar reflectivity. This may limit the use of the radar vertical profiles when dual polarization QPE algorithms are considered, because in that case all the radar variables used in the rain estimation process should be consistently extrapolated at the surface. To avoid such complexity, especially with a view to operational implementation, we propose to look at the features of the vertical profile of rain (VPR), i.e. after performing the rain estimation. This procedure allows characterizing a single variable (i.e. rain) when dealing with vertical extrapolations. Some case studies of severe thunderstorms that hit the mountainous area surrounding Rome in Italy, causing floods and damage, and that were observed by the research C-band polarization agility Doppler radar named Polar 55C, managed by the Institute of Atmospheric Sciences and Climate (ISAC) at the National Research Council of Italy (CNR), are used to support the concept of VPR. Our results indicate that the combined algorithm, which merges together the differential phase shift (Kdp), the reflectivity factor at horizontal polarization (Zhh), and the differential reflectivity (Zdr), once accurately processed, performs best among those tested, which make use of Zhh alone, Kdp alone, and the Zhh-Zdr pair. Improvements from 25% to 80% are found for the total rain accumulations in terms of normalized bias when the VPR extrapolation is applied.
Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W
2018-04-01
The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory gated only and cardiac gated only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation; the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimations. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVF using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets of two more different noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 4 and 3 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates separate R&C estimation with modeling of RM before CM estimation (Method 3) to be the best option for accurate estimation of dual R&C motion in clinical situations. © 2018 American Association of Physicists in Medicine.
VBA: A Probabilistic Treatment of Nonlinear Models for Neurobiological and Behavioural Data
Daunizeau, Jean; Adam, Vincent; Rigoux, Lionel
2014-01-01
This work is in line with an on-going effort tending toward a computational (quantitative and refutable) understanding of human neuro-cognitive processes. Many sophisticated models for behavioural and neurobiological data have flourished during the past decade. Most of these models are partly unspecified (i.e. they have unknown parameters) and nonlinear. This makes them difficult to pair with a formal statistical data analysis framework. In turn, this compromises the reproducibility of model-based empirical studies. This work exposes a software toolbox that provides generic, efficient and robust probabilistic solutions to the three problems of model-based analysis of empirical data: (i) data simulation, (ii) parameter estimation/model selection, and (iii) experimental design optimization. PMID:24465198
Abrasion-ablation model for neutron production in heavy ion reactions
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Wilson, John W.; Townsend, Lawrence W.
1995-01-01
In heavy ion reactions, neutron production at forward angles is observed to occur with a Gaussian shape that is centered near the beam energy and extends to energies well above that of the beam. This paper presents an abrasion-ablation model for making quantitative predictions of the neutron spectrum. To describe neutrons produced from the abrasion step of the reaction where the projectile and target overlap, the authors use the Glauber model and include effects of final-state interactions. They then use the prefragment mass distribution from abrasion with a statistical evaporation model to estimate the neutron spectrum resulting from ablation. Measurements of neutron production from Ne and Nb beams are compared with calculations, and good agreement is found.
Estimating the Geoelectric Field Using Precomputed EMTFs: Effect of Magnetometer Cadence
NASA Astrophysics Data System (ADS)
Grawe, M.; Butala, M.; Makela, J. J.; Kamalabadi, F.
2017-12-01
Studies that make use of electromagnetic transfer functions (EMTFs) to calculate the surface electric field from a specified surface magnetic field often use historical magnetometer information for validation and comparison purposes. Depending on the data source, the magnetometer cadence is typically between 1 and 60 seconds. It is often implied that a 60 (and sometimes 10) second cadence is acceptable for purposes of geoelectric field calculation using a geophysical model. Here, we quantitatively assess this claim under different geological settings and using models of varying complexity (using uniform/1D/3D EMTFs) across several different space weather events. Conclusions are made about sampling rate sufficiency as a function of local geology and the spectral content of the surface magnetic field.
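A minimal sketch of the frequency-domain calculation being validated in such studies, using a uniform half-space impedance in place of a precomputed 1D/3D EMTF; resistivity, storm waveform and cadences are illustrative. Comparing the 1 s and 60 s runs shows what is lost when the input field cannot represent variations faster than the coarser cadence's Nyquist period.

```python
# Minimal sketch (uniform half-space impedance standing in for a full EMTF):
# E(w) = Z(w) H(w) in the frequency domain.
import numpy as np

MU0 = 4e-7 * np.pi

def geoelectric_field(b_horiz, dt, rho=100.0):
    """Surface E (V/m) from horizontal B (T) over a half-space of resistivity rho (ohm*m)."""
    n = b_horiz.size
    B = np.fft.rfft(b_horiz)
    w = 2 * np.pi * np.fft.rfftfreq(n, d=dt)
    Z = np.sqrt(1j * w * MU0 * rho)          # plane-wave half-space impedance
    Z[0] = 0.0                               # discard the DC term
    return np.fft.irfft(Z * B / MU0, n=n)    # E = Z * H, with H = B / mu0

# synthetic 6-hour disturbance sampled at 1 s, then thinned to a 60 s cadence
t = np.arange(0, 6 * 3600.0, 1.0)
b = 200e-9 * np.sin(2 * np.pi * t / 600.0) * np.exp(-t / 7200.0)
e_1s = geoelectric_field(b, dt=1.0)
e_60s = geoelectric_field(b[::60], dt=60.0)
print(np.max(np.abs(e_1s)), np.max(np.abs(e_60s)))   # peak |E| under each cadence
```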
Development and Measurement of Preschoolers' Quantitative Knowledge
ERIC Educational Resources Information Center
Geary, David C.
2015-01-01
The collection of studies in this special issue make an important contribution to our understanding and measurement of the core cognitive and noncognitive factors that influence children's emerging quantitative competencies. The studies also illustrate how the field has matured, from a time when the quantitative competencies of infants and young…
A Framework to Determine New System Requirements Under Design Parameter and Demand Uncertainties
2015-04-30
relegates quantitative complexities of decision-making to the method and designates trade-space exploration to the practitioner. We demonstrate the approach...play a critical role in determining new system requirements. Scope and Method of Approach: The early stages of the design process have substantial
USDA-ARS?s Scientific Manuscript database
Classical quantitative genetics aids crop improvement by providing the means to estimate heritability, genetic correlations, and predicted responses to various selection schemes. Genomics has the potential to aid quantitative genetics and applied crop improvement programs via large-scale, high-thro...
Williams, M S; Ebel, E D; Cao, Y
2013-01-01
The fitting of statistical distributions to microbial sampling data is common in quantitative microbiology and risk assessment applications. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.
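A minimal sketch of a weighted (pseudo-)maximum-likelihood fit of this kind, using inverse selection probabilities as weights and a lognormal concentration model on synthetic data; the distribution, selection rule and sample size are illustrative, not the paper's examples.

```python
# Minimal sketch (toy data): weighted maximum likelihood for a lognormal fit to
# microbial concentrations sampled with unequal selection probabilities; the
# weights are the inverse selection probabilities of each retained sample.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

rng = np.random.default_rng(3)
conc = rng.lognormal(mean=1.0, sigma=0.8, size=200)       # "true" process
p_select = np.clip(conc / np.median(conc), 0.1, 1.0)      # larger lots sampled more often
keep = rng.random(200) < p_select
x, w = conc[keep], 1.0 / p_select[keep]                   # observed data and weights

def neg_wloglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    return -np.sum(w * lognorm.logpdf(x, s=sigma, scale=np.exp(mu)))

fit = minimize(neg_wloglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(f"weighted MLE: mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")   # roughly 1.0 and 0.8
```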
Identification of quantitative trait loci for fibrin clot phenotypes: The EuroCLOT study
Williams, Frances MK; Carter, Angela M; Kato, Bernet; Falchi, Mario; Bathum, Lise; Surdulescu, Gabriela; Kyvik, Kirsten Ohm; Palotie, Aarno; Spector, Tim D; Grant, Peter J
2012-01-01
Objectives: Fibrin makes up the structural basis of an occlusive arterial thrombus, and variability in fibrin phenotype relates to cardiovascular risk. The aims of the current study from the EU consortium EuroCLOT were to 1) determine the heritability of fibrin phenotypes and 2) identify QTLs associated with fibrin phenotypes. Methods: 447 dizygotic (DZ) and 460 monozygotic (MZ) pairs of healthy UK Caucasian female twins and 199 DZ twin pairs from Denmark were studied. D-dimer, an indicator of fibrin turnover, was measured by ELISA, and measures of clot formation, morphology and lysis were determined by turbidimetric assays. Heritability estimates and genome-wide linkage analysis were performed. Results: Estimates of heritability for d-dimer and turbidometric variables were in the range 17-46%, with highest levels for maximal absorbance, which provides an estimate of clot density. Genome-wide linkage analysis revealed 6 significant regions with LOD>3 on 5 chromosomes (5, 6, 9, 16 and 17). Conclusions: The results indicate a significant genetic contribution to variability in fibrin phenotypes and highlight regions in the human genome which warrant further investigation in relation to ischaemic cardiovascular disorders and their therapy. PMID:19150881
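For orientation, the back-of-envelope Falconer decomposition below shows how MZ/DZ twin correlations translate into heritability and environmental components; the correlations are hypothetical and this is a rougher calculation than the variance-components models used in twin studies such as this one.

```python
# Minimal sketch (Falconer's approximation, illustrative correlations - not the
# study's estimates): partitioning trait variance from twin correlations into
# additive genetic (h2), shared (c2) and unique (e2) components.
r_mz, r_dz = 0.46, 0.27          # hypothetical twin correlations for clot density

h2 = 2 * (r_mz - r_dz)           # heritability
c2 = 2 * r_dz - r_mz             # common (shared) environment
e2 = 1 - r_mz                    # unique environment + measurement error
print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")
```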
Towards the automatization of the Foucault knife-edge quantitative test
NASA Astrophysics Data System (ADS)
Rodríguez, G.; Villa, J.; Martínez, G.; de la Rosa, I.; Ivanov, R.
2017-08-01
Given the increasing necessity of simple, economical and reliable methods and instruments for performing quality tests of optical surfaces such as mirrors and lenses, in the recent years we resumed the study of the long forgotten Foucault knife-edge test from the point of view of the physical optics, ultimately achieving a closed mathematical expression that directly relates the knife-edge position along the displacement paraxial axis with the observable irradiance pattern, which later allowed us to propose a quantitative methodology for estimating the wavefront error of an aspherical mirror with precision akin to interferometry. In this work, we present a further improved digital image processing algorithm in which the sigmoidal cost-function for calculating the transient slope-point of each associated intensity-illumination profile is replaced for a simplified version of it, thus making the whole process of estimating the wavefront gradient remarkably more stable and efficient, at the same time, the Fourier based algorithm employed for gradient integration has been replaced as well for a regularized quadratic cost-function that allows a considerably easier introduction of the region of interest (ROI) of the function, which solved by means of a linear gradient conjugate method largely increases the overall accuracy and efficiency of the algorithm. This revised approach of our methodology can be easily implemented and handled by most single-board microcontrollers in the market, hence enabling the implementation of a full-integrated automatized test apparatus, opening a realistic path for even the proposal of a stand-alone optical mirror analyzer prototype.
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-08-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision-making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps, i.e. the spatial probability of a future vent opening given the past eruptive activity of a volcano. This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source Geographic Information System Quantum GIS, that is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows to select an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input datasets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
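A minimal sketch of the kernel-density step behind a susceptibility map: a Gaussian kernel PDF over synthetic past vent locations, using SciPy's default rule-of-thumb bandwidth rather than the data-driven bandwidth estimators that QVAST provides.

```python
# Minimal sketch (synthetic vent coordinates): a Gaussian-kernel PDF of future
# vent opening from past vent locations, the first step of a susceptibility map.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
# past vent locations (km, in a local projected grid), clustered along a rift
vents = np.vstack([
    rng.normal([10, 20], [1.5, 4.0], size=(40, 2)),
    rng.normal([18, 30], [2.0, 2.0], size=(15, 2)),
]).T                                        # shape (2, n), as gaussian_kde expects

kde = gaussian_kde(vents)                   # Scott's rule bandwidth by default
xx, yy = np.mgrid[0:30:200j, 10:45:200j]
pdf = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
pdf /= pdf.sum()                            # spatial probability of a new vent per cell
print(pdf.max(), pdf.sum())
```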
Biomechanical and optical behavior of human corneas before and after photorefractive keratectomy.
Sánchez, Paolo; Moutsouris, Kyros; Pandolfi, Anna
2014-06-01
To evaluate numerically the biomechanical and optical behavior of human corneas and quantitatively estimate the changes in refractive power and stress caused by photorefractive keratectomy (PRK). Athineum Refractive Center, Athens, Greece, and Politecnico di Milano, Milan, Italy. Retrospective comparative interventional cohort study. Corneal topographies of 10 human eyes were taken with a scanning-slit corneal topographer (Orbscan II) before and after PRK. Ten patient-specific finite element models were created to estimate the strain and stress fields in the cornea in preoperative and postoperative configurations. The biomechanical response in postoperative eyes was computed by directly modeling the postoperative geometry from the topographer and by reproducing the corneal ablation planned for the PRK with a numerical reprofiling procedure. Postoperative corneas were more compliant than preoperative corneas. In the optical zone, corneal thinning decreased the mechanical stiffness, causing local resteepening and making the central refractive power more sensitive to variations in intraocular pressure (IOP). At physiologic IOP, the postoperative corneas had a mean 7% forward increase in apical displacement and a mean 20% increase in the stress components at the center of the anterior surface over the preoperative condition. Patient-specific numerical models of the cornea can provide quantitative information on the changes in refractive power and in the stress field caused by refractive surgery. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Evaluation of Historical and Projected Agricultural Climate Risk Over the Continental US
NASA Astrophysics Data System (ADS)
Zhu, X.; Troy, T. J.; Devineni, N.
2016-12-01
Food demands are rising due to an increasing population with changing food preferences, which places pressure on agricultural systems. In addition, in the past decade climate extremes have highlighted the vulnerability of our agricultural production to climate variability. Quantitative analyses in the climate-agriculture research field have been performed in many studies. However, climate risk remains difficult to evaluate at large scales, yet such evaluation has great potential to help us better understand historical climate change impacts and to assess future risk given climate projections. In this study, we developed a framework to evaluate climate risk quantitatively by applying statistical methods such as Bayesian regression, distribution fitting, and Monte Carlo simulation. We applied the framework over different climate regions in the continental US both historically and for modeled climate projections. The relative importance of each major growing season climate index, such as maximum dry period or heavy precipitation, was evaluated to determine which climate indices play a role in affecting crop yields. The statistical modeling framework was applied using county yields, with irrigated and rainfed yields separated to evaluate their different risks. This framework provides estimates of the climate risk facing agricultural production in the near term that account for the full uncertainty of climate occurrences, the range of crop response, and spatial correlation in climate. In particular, the method provides robust estimates of the importance of irrigation in mitigating agricultural climate risk. The results of this study can contribute to decision making about crop choice and water use in an uncertain climate.
76 FR 13018 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-09
... statistical surveys that yield quantitative results that can be generalized to the population of study. This... information will not be used for quantitative information collections that are designed to yield reliably... generic mechanisms that are designed to yield quantitative results. Total Burden Estimate for the...
Measurement of lung expansion with computed tomography and comparison with quantitative histology.
Coxson, H O; Mayo, J R; Behzad, H; Moore, B J; Verburgt, L M; Staples, C A; Paré, P D; Hogg, J C
1995-11-01
The total and regional lung volumes were estimated from computed tomography (CT), and the pleural pressure gradient was determined by using the milliliters of gas per gram of tissue estimated from the X-ray attenuation values and the pressure-volume curve of the lung. The data show that CT accurately estimated the volume of the resected lobe but overestimated its weight by 24 +/- 19%. The volume of gas per gram of tissue was less in the gravity-dependent regions due to a pleural pressure gradient of 0.24 +/- 0.08 cmH2O/cm of descent in the thorax. The proportion of tissue to air obtained with CT was similar to that obtained by quantitative histology. We conclude that the CT scan can be used to estimate total and regional lung volumes and that measurements of the proportions of tissue and air within the thorax by CT can be used in conjunction with quantitative histology to evaluate lung structure.
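A minimal sketch of the attenuation-to-expansion conversion implied above, assuming a linear air/tissue mixture model for each voxel and a nominal tissue density of 1.065 g/ml; the Hounsfield values are illustrative.

```python
# Minimal sketch (assumed linear air/tissue mixture model, illustrative numbers):
# converting a CT attenuation value into millilitres of gas per gram of tissue,
# the quantity used to map regional lung expansion.
def ml_gas_per_g_tissue(hu, tissue_density=1.065):
    """hu: mean voxel attenuation in Hounsfield units (between -1000 and 0)."""
    air_fraction = -hu / 1000.0                    # fraction of the voxel that is gas
    tissue_fraction = 1.0 + hu / 1000.0            # fraction that is tissue/blood
    return air_fraction / (tissue_fraction * tissue_density)

for hu in (-850, -750, -650):                      # e.g. a nondependent-to-dependent gradient
    print(hu, round(ml_gas_per_g_tissue(hu), 2), "ml/g")
```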
Improved Modeling of Three-Point Estimates for Decision Making: Going Beyond the Triangle
2016-03-01
Master's thesis by Daniel W. Mulligan, March 2016. Thesis Advisor: Mark Rhoades. Distribution unlimited.
Toward Creating A Global Retrospective Climatology of Aerosol Properties
NASA Technical Reports Server (NTRS)
Curran, Robert J.; Mishchenko, Michael I.; Hansen, James E. (Technical Monitor)
2000-01-01
Tropospheric aerosols are thought to cause a significant direct and indirect climate forcing, but the magnitude of this forcing remains highly uncertain because of poor knowledge of global aerosol characteristics and their temporal changes. The standard long-term global product, the one-channel Advanced Very-High-Resolution Radiometer (AVHRR) aerosol optical thickness over the ocean, relies on a single predefined aerosol model and can be inaccurate in many cases. Furthermore, it provides no information on aerosol column number density, thus making it impossible to estimate the indirect aerosol effect on climate. Total Ozone Mapping Spectrometer (TOMS) data can be used to detect absorbing aerosols over land, but are insensitive to aerosols located below one kilometer. It is thus clear that innovative approaches must be employed in order to extract a more quantitative and accurate aerosol climatology from available satellite and other measurements, thus enabling more reliable estimates of the direct and indirect aerosol forcings. The Global Aerosol Climatology Project (GACP) was established in 1998 as part of the Global Energy and Water Cycle Experiment (GEWEX). Its main objective is to analyze satellite radiance measurements and field observations to infer the global distribution of aerosols, their properties, and their seasonal and interannual variations. The overall goal is to develop advanced global aerosol climatologies for the period of satellite data and to make the aerosol climatologies broadly available through the GACP web site.
The Attentional Field Revealed by Single-Voxel Modeling of fMRI Time Courses
DeYoe, Edgar A.
2015-01-01
The spatial topography of visual attention is a distinguishing and critical feature of many theoretical models of visuospatial attention. Previous fMRI-based measurements of the topography of attention have typically been too crude to adequately test the predictions of different competing models. This study demonstrates a new technique to make detailed measurements of the topography of visuospatial attention from single-voxel, fMRI time courses. Briefly, this technique involves first estimating a voxel's population receptive field (pRF) and then “drifting” attention through the pRF such that the modulation of the voxel's fMRI time course reflects the spatial topography of attention. The topography of the attentional field (AF) is then estimated using a time-course modeling procedure. Notably, we are able to make these measurements in many visual areas including smaller, higher order areas, thus enabling a more comprehensive comparison of attentional mechanisms throughout the full hierarchy of human visual cortex. Using this technique, we show that the AF scales with eccentricity and varies across visual areas. We also show that voxels in multiple visual areas exhibit suppressive attentional effects that are well modeled by an AF having an enhancing Gaussian center with a suppressive surround. These findings provide extensive, quantitative neurophysiological data for use in modeling the psychological effects of visuospatial attention. PMID:25810532
Quantitative estimation of pesticide-likeness for agrochemical discovery.
Avram, Sorin; Funar-Timofei, Simona; Borota, Ana; Chennamaneni, Sridhar Rao; Manchala, Anil Kumar; Muresan, Sorel
2014-12-01
The design of chemical libraries, an early step in agrochemical discovery programs, is frequently addressed by means of qualitative physicochemical and/or topological rule-based methods. The aim of this study is to develop quantitative estimates of herbicide- (QEH), insecticide- (QEI), fungicide- (QEF), and, finally, pesticide-likeness (QEP). In the assessment of these definitions, we relied on the concept of desirability functions. We found a simple function, shared by the three classes of pesticides, parameterized individually for six easy-to-compute, independent and interpretable molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds and number of aromatic rings. Subsequently, we describe the scoring of each pesticide class by the corresponding quantitative estimate. In a comparative study, we assessed the performance of the scoring functions using extensive datasets of patented pesticides. The quantitative assessment established here has the ability to rank compounds whether or not they fail well-established pesticide-likeness rules, and offers an efficient way to prioritize (class-specific) pesticides. These findings are valuable for the efficient estimation of pesticide-likeness of vast chemical libraries in the field of agrochemical discovery. Graphical Abstract: Quantitative models for pesticide-likeness were derived using the concept of desirability functions parameterized for six easy-to-compute, independent and interpretable molecular properties: molecular weight, logP, number of hydrogen bond acceptors, number of hydrogen bond donors, number of rotatable bonds and number of aromatic rings.
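A minimal sketch of a desirability-function score in the spirit of QEP; the Gaussian desirability centres and spreads below are illustrative placeholders, not the published parameterization, and the score is simply the geometric mean of the six per-property desirabilities.

```python
# Minimal sketch: a desirability-function likeness score. The targets/spreads
# are hypothetical placeholders, *not* the published QEP parameterization.
import math

# property: (target value, tolerated spread) -- hypothetical numbers
PARAMS = {
    "mw": (330.0, 120.0), "logp": (3.0, 1.8), "hba": (3.0, 2.0),
    "hbd": (1.0, 1.0), "rotb": (4.0, 3.0), "arom": (2.0, 1.0),
}

def desirability(value, target, spread):
    return math.exp(-((value - target) ** 2) / (2.0 * spread ** 2))

def pesticide_likeness(props):
    d = [desirability(props[k], *PARAMS[k]) for k in PARAMS]
    return math.prod(d) ** (1.0 / len(d))          # geometric mean in (0, 1]

candidate = {"mw": 301.1, "logp": 2.6, "hba": 4, "hbd": 1, "rotb": 5, "arom": 2}
print(round(pesticide_likeness(candidate), 3))
```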
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blais, AR; Dekaban, M; Lee, T-Y
2014-08-15
Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k3, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k3. Furthermore, these linearized models are solved with a non-negative least squares algorithm and together they provide other advantages including: 1) only one possible solution and they do not require a choice of starting parameter values, 2) parameter estimates are comparable in accuracy to those from nonlinear models, 3) significantly reduced computational time. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k3 estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k3 from noisy dynamic PET data.
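The advantage of a linearized, non-negative least-squares formulation can be illustrated with a generic linearized one-tissue compartment model (not the authors' 2-/3-compartment DCE-CT/PET formulation): integrating the model equation makes it linear in the rate constants, so a single NNLS solve replaces iterative nonlinear regression.

```python
# Minimal sketch (generic linearised one-tissue model): the integrated equation
#   Ct(t) = K1 * int(Cp) - k2 * int(Ct)
# is linear in (K1, k2), so a non-negative least-squares solve gives a single,
# starting-value-free estimate. Input function and tissue curve are synthetic.
import numpy as np
from scipy.optimize import nnls
from scipy.integrate import cumulative_trapezoid

t = np.linspace(0, 60, 121)                       # minutes
cp = 8.0 * t * np.exp(-t / 2.0)                   # synthetic plasma input
K1_true, k2_true = 0.15, 0.08

# generate a noisy tissue curve by integrating dCt/dt = K1*Cp - k2*Ct
ct = np.zeros_like(t)
for i in range(1, t.size):
    dt = t[i] - t[i - 1]
    ct[i] = ct[i - 1] + dt * (K1_true * cp[i - 1] - k2_true * ct[i - 1])
ct += np.random.default_rng(5).normal(0, 0.02, t.size)

A = np.column_stack([cumulative_trapezoid(cp, t, initial=0),
                     -cumulative_trapezoid(ct, t, initial=0)])
(K1_hat, k2_hat), _ = nnls(A, ct)
print(f"K1 ~ {K1_hat:.3f} /min, k2 ~ {k2_hat:.3f} /min")
```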
Electroencephalography and quantitative electroencephalography in mild traumatic brain injury.
Haneef, Zulfi; Levin, Harvey S; Frost, James D; Mizrahi, Eli M
2013-04-15
Mild traumatic brain injury (mTBI) causes brain injury resulting in electrophysiologic abnormalities visible in electroencephalography (EEG) recordings. Quantitative EEG (qEEG) makes use of quantitative techniques to analyze EEG characteristics such as frequency, amplitude, coherence, power, phase, and symmetry over time independently or in combination. QEEG has been evaluated for its use in making a diagnosis of mTBI and assessing prognosis, including the likelihood of progressing to the postconcussive syndrome (PCS) phase. We review the EEG and qEEG changes of mTBI described in the literature. An attempt is made to separate the findings seen during the acute, subacute, and chronic phases after mTBI. Brief mention is also made of the neurobiological correlates of qEEG using neuroimaging techniques or in histopathology. Although the literature indicates the promise of qEEG in making a diagnosis and indicating prognosis of mTBI, further study is needed to corroborate and refine these methods.
Electroencephalography and Quantitative Electroencephalography in Mild Traumatic Brain Injury
Levin, Harvey S.; Frost, James D.; Mizrahi, Eli M.
2013-01-01
Abstract Mild traumatic brain injury (mTBI) causes brain injury resulting in electrophysiologic abnormalities visible in electroencephalography (EEG) recordings. Quantitative EEG (qEEG) makes use of quantitative techniques to analyze EEG characteristics such as frequency, amplitude, coherence, power, phase, and symmetry over time independently or in combination. QEEG has been evaluated for its use in making a diagnosis of mTBI and assessing prognosis, including the likelihood of progressing to the postconcussive syndrome (PCS) phase. We review the EEG and qEEG changes of mTBI described in the literature. An attempt is made to separate the findings seen during the acute, subacute, and chronic phases after mTBI. Brief mention is also made of the neurobiological correlates of qEEG using neuroimaging techniques or in histopathology. Although the literature indicates the promise of qEEG in making a diagnosis and indicating prognosis of mTBI, further study is needed to corroborate and refine these methods. PMID:23249295
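A minimal sketch of one common qEEG computation, absolute and relative band power from Welch's power spectral density, applied to a synthetic single-channel epoch; sampling rate, epoch length and band definitions are illustrative.

```python
# Minimal sketch (synthetic single-channel EEG): absolute and relative band
# power from Welch's power spectral density, a typical qEEG feature set.
import numpy as np
from scipy.signal import welch

fs = 256.0
t = np.arange(0, 30, 1 / fs)                       # 30 s epoch
rng = np.random.default_rng(11)
eeg = (20e-6 * np.sin(2 * np.pi * 10 * t)          # alpha-like rhythm
       + 10e-6 * rng.standard_normal(t.size))      # broadband background

f, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
df = f[1] - f[0]
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
abs_power = {b: psd[(f >= lo) & (f < hi)].sum() * df for b, (lo, hi) in bands.items()}
total = sum(abs_power.values())
rel_power = {b: p / total for b, p in abs_power.items()}
print({b: round(p, 3) for b, p in rel_power.items()})   # alpha should dominate
```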
NASA Astrophysics Data System (ADS)
Betsofen, S. Ya.; Kolobov, Yu. R.; Volkova, E. F.; Bozhko, S. A.; Voskresenskaya, I. I.
2015-04-01
Quantitative methods have been developed to estimate the anisotropy of the strength properties and to determine the phase composition of Mg-Al alloys. The efficiency of the methods is confirmed for MA5 alloy subjected to severe plastic deformation. It is shown that the Taylor factors calculated for basal slip averaged over all orientations of a polycrystalline aggregate with allowance for texture can be used for a quantitative estimation of the contribution of the texture of semifinished magnesium alloy products to the anisotropy of their strength properties. A technique of determining the composition of a solid solution and the intermetallic phase Al12Mg17 content is developed using the measurement of the lattice parameters of the solid solution and the known dependence of these lattice parameters on the composition.
Hamilton, Joshua J; Dwivedi, Vivek; Reed, Jennifer L
2013-07-16
Constraint-based methods provide powerful computational techniques to allow understanding and prediction of cellular behavior. These methods rely on physiochemical constraints to eliminate infeasible behaviors from the space of available behaviors. One such constraint is thermodynamic feasibility, the requirement that intracellular flux distributions obey the laws of thermodynamics. The past decade has seen several constraint-based methods that interpret this constraint in different ways, including those that are limited to small networks, rely on predefined reaction directions, and/or neglect the relationship between reaction free energies and metabolite concentrations. In this work, we utilize one such approach, thermodynamics-based metabolic flux analysis (TMFA), to make genome-scale, quantitative predictions about metabolite concentrations and reaction free energies in the absence of prior knowledge of reaction directions, while accounting for uncertainties in thermodynamic estimates. We applied TMFA to a genome-scale network reconstruction of Escherichia coli and examined the effect of thermodynamic constraints on the flux space. We also assessed the predictive performance of TMFA against gene essentiality and quantitative metabolomics data, under both aerobic and anaerobic, and optimal and suboptimal growth conditions. Based on these results, we propose that TMFA is a useful tool for validating phenotypes and generating hypotheses, and that additional types of data and constraints can improve predictions of metabolite concentrations. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
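A minimal sketch of the core thermodynamic constraint used in TMFA: a reaction can carry forward flux only if its transformed Gibbs energy, adjusted for metabolite concentrations, is negative. The reaction, standard Gibbs energy and concentrations below are approximate, illustrative values.

```python
# Minimal sketch (illustrative numbers): the core TMFA constraint,
#   dG_r = dG0' + R*T * sum_i(s_i * ln c_i) < 0  for a reaction to carry flux.
import numpy as np

R, T = 8.314e-3, 298.15                      # kJ/(mol*K), K

def reaction_dg(dg0_prime, stoich, conc):
    """dg0_prime in kJ/mol; stoich: {metabolite: coefficient}; conc in mol/L."""
    return dg0_prime + R * T * sum(s * np.log(conc[m]) for m, s in stoich.items())

# hypothetical example: phosphoglucose isomerase, G6P <-> F6P
stoich = {"g6p": -1, "f6p": +1}
conc = {"g6p": 2.0e-3, "f6p": 0.5e-3}        # assumed intracellular concentrations
dg = reaction_dg(2.5, stoich, conc)          # dG0' ~ +2.5 kJ/mol (approximate)
print(f"dG_r = {dg:.2f} kJ/mol -> forward flux {'allowed' if dg < 0 else 'blocked'}")
```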
Volle, Romain; Nourrisson, Céline; Mirand, Audrey; Regagnon, Christel; Chambon, Martine; Henquell, Cécile; Bailly, Jean-Luc; Peigue-Lafeuille, Hélène; Archimbaud, Christine
2012-10-01
Human enteroviruses are the most frequent cause of aseptic meningitis and are involved in other neurological infections. Qualitative detection of enterovirus genomes in cerebrospinal fluid is a prerequisite in diagnosing neurological diseases. The pathogenesis of these infections is not well understood, and research in this domain would benefit from the availability of a quantitative technique to determine viral load in clinical specimens. This study describes the development of a real-time RT-qPCR assay using a hydrolysis TaqMan probe and a competitive RNA internal control. The assay has high specificity and can be used for a large sample of distinct enterovirus strains and serotypes. The reproducible limit of detection was estimated at 1875 copies/ml of quantitative standards composed of RNA transcripts obtained from a cloned echovirus 30 genome. Technical performance was unaffected by the introduction of a competitive RNA internal control before RNA extraction. The mean enterovirus RNA concentration in an evaluation series of 15 archived cerebrospinal fluid specimens was determined at 4.78 log(10) copies/ml for the overall sample. The sensitivity and reproducibility of the real-time RT-qPCR assay, used in combination with the internal control to monitor the overall specimen process, make it a valuable tool for applied research into enterovirus infections. Copyright © 2012 Elsevier B.V. All rights reserved.
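A minimal sketch of how such an assay is typically quantified from a standard curve of the RNA-transcript standards; the Cq values, amplification efficiency and extracted-volume parameter below are illustrative, not values from the study.

```python
# Minimal sketch (illustrative values): quantitation from a real-time RT-qPCR
# standard curve. Serial dilutions of the RNA-transcript standard give a linear
# Cq vs. log10(copies) relationship, which is then inverted for each specimen.
import numpy as np

log10_copies = np.array([7, 6, 5, 4, 3], dtype=float)        # standards
cq_standards = np.array([16.2, 19.6, 23.0, 26.3, 29.7])       # measured Cq values

slope, intercept = np.polyfit(log10_copies, cq_standards, 1)
efficiency = 10 ** (-1.0 / slope) - 1                          # ~1.0 means 100%

def copies_per_ml(cq, extracted_volume_ml=0.2):                # hypothetical volume
    copies_per_reaction = 10 ** ((cq - intercept) / slope)
    return copies_per_reaction / extracted_volume_ml           # scale to specimen volume

print(f"efficiency ~ {efficiency:.2f}")
print(f"CSF specimen at Cq 27.5 -> {copies_per_ml(27.5):.2e} copies/ml")
```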
Donaldson, K.A.; Griffin, Dale W.; Paul, J.H.
2002-01-01
A method was developed for the quantitative detection of pathogenic human enteroviruses from surface waters in the Florida Keys using Taqman (R) one-step reverse transcription (RT)-PCR with the Model 7700 ABI Prism (R) Sequence Detection System. Viruses were directly extracted from unconcentrated grab samples of seawater, from seawater concentrated by vortex flow filtration using a 100 kD filter, and from sponge tissue. Total RNA was extracted from the samples, purified and concentrated using spin-column chromatography. A 192-196 base pair portion of the 5' untranscribed region was amplified from these extracts. Enterovirus concentrations were estimated using real-time RT-PCR technology. Nine of 15 sample sites, or 60%, were positive for the presence of pathogenic human enteroviruses. Considering only near-shore sites, 69% were positive, with viral concentrations ranging from 9.3 viruses/ml to 83 viruses/g of sponge tissue (uncorrected for extraction efficiency). Certain amplicons were selected for cloning and sequencing for identification. Three strains of waterborne enteroviruses were identified as Coxsackievirus A9, Coxsackievirus A16, and Poliovirus Sabin type 1. The time and cost efficiency of this one-step real-time RT-PCR methodology makes it an ideal technique to detect, quantitate and identify pathogenic enteroviruses in recreational waters. Copyright © 2002 Elsevier Science Ltd.
Puhan, Milo A; Yu, Tsung; Boyd, Cynthia M; Ter Riet, Gerben
2015-07-02
When faced with uncertainties about the effects of medical interventions, regulatory agencies, guideline developers, clinicians, and researchers commonly ask for more research, and in particular for more randomized trials. The conduct of additional randomized trials is, however, sometimes not the most efficient way to reduce uncertainty. Instead, approaches such as value of information analysis should be used to prioritize research that will most likely reduce uncertainty and inform decisions. In situations where additional research for specific interventions needs to be prioritized, we propose the use of quantitative benefit-harm assessments that illustrate how the benefit-harm balance may change as a consequence of additional research. The example of roflumilast for patients with chronic obstructive pulmonary disease shows that additional research on patient preferences (e.g., how important are exacerbations relative to psychiatric harms?) or outcome risks (e.g., what is the incidence of psychiatric outcomes in patients with chronic obstructive pulmonary disease without treatment?) is sometimes more valuable than additional randomized trials. We propose that quantitative benefit-harm assessments have the potential to explore the impact of additional research and to identify research priorities. Our approach may be seen as another type of value of information analysis and as a useful approach to stimulate specific new research that has the potential to change current estimates of the benefit-harm balance and decision making.
Delius, Judith; Frank, Oliver
2017-01-01
Nuclear magnetic resonance (NMR) spectroscopy is well-established in assessing the binding affinity between low molecular weight ligands and proteins. However, conventional NMR-based binding assays are often limited to small proteins of high purity and may require elaborate isotopic labeling of one of the potential binding partners. As protein–polyphenol complexation is assumed to be a key event in polyphenol-mediated oral astringency, here we introduce a label-free, ligand-focused 1H NMR titration assay to estimate binding affinities and characterize soluble complex formation between proteins and low molecular weight polyphenols. The method makes use of the effects of NMR line broadening due to protein–ligand interactions and quantitation of the non-bound ligand at varying protein concentrations by quantitative 1H NMR spectroscopy (qHNMR) using electronic reference to access in vivo concentration (ERETIC 2). This technique is applied to assess the interaction kinetics of selected astringent tasting polyphenols and purified mucin, a major lubricating glycoprotein of human saliva, as well as human whole saliva. The protein affinity values (BC50) obtained are subsequently correlated with the intrinsic mouth-puckering, astringent oral sensation imparted by these compounds. The quantitative NMR method is further exploited to study the effect of carboxymethyl cellulose, a candidate “anti-astringent” protein binding antagonist, on the polyphenol–protein interaction. Consequently, the NMR approach presented here proves to be a versatile tool to study the interactions between proteins and low-affinity ligands in solution and may find promising applications in the discovery of bioactives. PMID:28886151
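A BC50 is usually obtained by fitting a saturation curve to the fraction of ligand bound versus protein concentration. The following sketch, with invented data and an assumed Hill-type model, illustrates that fitting step only; it is not the published qHNMR workflow.

```python
# Hypothetical BC50 estimation: fit a Hill-type curve to the fraction of ligand
# bound (inferred from qNMR signal loss) as a function of protein concentration.
import numpy as np
from scipy.optimize import curve_fit

def binding(protein, bc50, hill):
    return protein**hill / (bc50**hill + protein**hill)

protein_conc   = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])   # mg/ml, invented
fraction_bound = np.array([0.05, 0.15, 0.40, 0.68, 0.88, 0.96])

(bc50, hill), _ = curve_fit(binding, protein_conc, fraction_bound, p0=[1.0, 1.0])
print(f"BC50 ~ {bc50:.2f} mg/ml, Hill coefficient ~ {hill:.2f}")
```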
Quantitative Methods Intervention: What Do the Students Want?
ERIC Educational Resources Information Center
Frankland, Lianne; Harrison, Jacqui
2016-01-01
The shortage of social science graduates with competent quantitative skills jeopardises the competitive UK economy, public policy making effectiveness and the status the UK has as a world leader in higher education and research (British Academy for Humanities and Social Sciences, 2012). There is a growing demand for quantitative skills across all…
Quantitative Aging Pattern in Mouse Urine Vapor as Measured by Gas-Liquid Chromatography
NASA Technical Reports Server (NTRS)
Robinson, Arthur B.; Dirren, Henri; Sheets, Alan; Miquel, Jaime; Lundgren, Paul R.
1975-01-01
We have discovered a quantitative aging pattern in mouse urine vapor. The diagnostic power of the pattern has been found to be high. We hope that this pattern will eventually allow quantitative estimates of physiological age and some insight into the biochemistry of aging.
Genomic Quantitative Genetics to Study Evolution in the Wild.
Gienapp, Phillip; Fior, Simone; Guillaume, Frédéric; Lasky, Jesse R; Sork, Victoria L; Csilléry, Katalin
2017-12-01
Quantitative genetic theory provides a means of estimating the evolutionary potential of natural populations. However, this approach was previously only feasible in systems where the genetic relatedness between individuals could be inferred from pedigrees or experimental crosses. The genomic revolution opened up the possibility of obtaining the realized proportion of genome shared among individuals in natural populations of virtually any species, promising more accurate estimates of quantitative genetic parameters. Such a 'genomic' quantitative genetics approach relies on fewer assumptions, offers a greater methodological flexibility, and is thus expected to greatly enhance our understanding of evolution in natural populations, for example, in the context of adaptation to environmental change, eco-evolutionary dynamics, and biodiversity conservation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Chardon, Jurgen; Swart, Arno
2016-07-01
In the consumer phase of a typical quantitative microbiological risk assessment (QMRA), mathematical equations identify data gaps. To acquire useful data we designed a food consumption and food handling survey (2,226 respondents) for QMRA applications that is especially aimed at obtaining quantitative data. For a broad spectrum of food products, the survey covered the following topics: processing status at retail, consumer storage, preparation, and consumption. Questions were designed to facilitate distribution fitting. In the statistical analysis, special attention was given to the selection of the most adequate distribution to describe the data. Bootstrap procedures were used to describe uncertainty. The final result was a coherent quantitative consumer phase food survey and parameter estimates for food handling and consumption practices in The Netherlands, including variation over individuals and uncertainty estimates.
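Distribution fitting with bootstrap uncertainty, as described above, can be sketched as follows. The storage-time data, the lognormal candidate distribution, and the bootstrap settings are all illustrative assumptions, not survey results.

```python
# Illustrative distribution fitting with bootstrap uncertainty, in the spirit of
# the survey analysis described above; the storage-time data are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
storage_days = rng.lognormal(mean=1.0, sigma=0.6, size=300)   # stand-in survey answers

shape, loc, scale = stats.lognorm.fit(storage_days, floc=0)    # fit candidate distribution

boot_means = [np.mean(rng.choice(storage_days, size=storage_days.size, replace=True))
              for _ in range(2000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"lognormal sigma ~ {shape:.2f}, mean storage {np.mean(storage_days):.2f} "
      f"days (95% bootstrap CI {lo:.2f}-{hi:.2f})")
```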
ERIC Educational Resources Information Center
Feinstein, Leon
The cost benefits of lifelong learning in the United Kingdom were estimated, based on quantitative evidence. Between 1975 and 1996, 43 police force areas in England and Wales were studied to determine the effect of wages on crime. It was found that a 10 percent rise in the average pay of those on low pay reduces the overall area property crime rate by…
Quantitation without Calibration: Response Profile as an Indicator of Target Amount.
Debnath, Mrittika; Farace, Jessica M; Johnson, Kristopher D; Nesterova, Irina V
2018-06-21
Quantitative assessment of biomarkers is essential in numerous contexts, from decision-making in clinical situations to food quality monitoring to interpretation of life-science research findings. However, appropriate quantitation techniques are not as widely addressed as detection methods. One of the major challenges in biomarker quantitation is the need for a calibration that correlates a measured signal to a target amount. This step complicates the methodologies and makes them less sustainable. In this work we address the issue via a new strategy: relying on the position of the response profile rather than on an absolute signal value to assess a target's amount. To enable this capability, we develop a target-probe binding mechanism based on a negative cooperativity effect. A proof-of-concept example demonstrates that the model is suitable for quantitative analysis of nucleic acids over a wide concentration range. The general principles of the platform will be applicable toward a variety of biomarkers such as nucleic acids, proteins, peptides, and others.
Repeatability Assessment by ISO 11843-7 in Quantitative HPLC for Herbal Medicines.
Chen, Liangmian; Kotani, Akira; Hakamata, Hideki; Tsutsumi, Risa; Hayashi, Yuzuru; Wang, Zhimin; Kusu, Fumiyo
2015-01-01
We have proposed an assessment method to estimate the measurement relative standard deviation (RSD) of chromatographic peaks in quantitative HPLC for herbal medicines by the methodology of ISO 11843 Part 7 (ISO 11843-7:2012), which provides detection limits stochastically. In quantitative HPLC with UV detection (HPLC-UV) of Scutellaria Radix for the determination of baicalin, the measurement RSD of baicalin obtained stochastically by ISO 11843-7:2012 was within the 95% confidence interval of the RSD obtained statistically from repetitive measurements (n = 6). Thus, our findings show that the method is applicable for estimating the repeatability of HPLC-UV for determining baicalin without repeated measurements. In addition, the allowable limit of the "System repeatability" in "Liquid Chromatography" regulated in a pharmacopoeia can be obtained by the present assessment method. Moreover, the present assessment method was also successfully applied to estimate the measurement RSDs of quantitative three-channel liquid chromatography with electrochemical detection (LC-3ECD) of Chrysanthemi Flos for determining caffeoylquinic acids and flavonoids. By the present repeatability assessment method, reliable measurement RSDs were obtained stochastically, and the experimental time was remarkably reduced.
Ortel, Terry W.; Spies, Ryan R.
2015-11-19
Next-Generation Radar (NEXRAD) has become an integral component in the estimation of precipitation (Kitzmiller and others, 2013). The high spatial and temporal resolution of NEXRAD has revolutionized the ability to estimate precipitation across vast regions, which is especially beneficial in areas without a dense rain-gage network. With the improved precipitation estimates, hydrologic models can produce reliable streamflow forecasts for areas across the United States. NEXRAD data from the National Weather Service (NWS) has been an invaluable tool used by the U.S. Geological Survey (USGS) for numerous projects and studies; NEXRAD data processing techniques similar to those discussed in this Fact Sheet have been developed within the USGS, including the NWS Quantitative Precipitation Estimates archive developed by Blodgett (2013).
Garvelink, Mirjam M; Ngangue, Patrice A G; Adekpedjou, Rheda; Diouf, Ndeye T; Goh, Larissa; Blair, Louisa; Légaré, France
2016-04-01
We conducted a mixed-methods knowledge synthesis to assess the effectiveness of interventions to improve caregivers' involvement in decision making with seniors, and to describe caregivers' experiences of decision making in the absence of interventions. We analyzed forty-nine qualitative, fourteen quantitative, and three mixed-methods studies. The qualitative studies indicated that caregivers had unmet needs for information, discussions of values and needs, and decision support, which led to negative sentiments after decision making. Our results indicate that there have been insufficient quantitative evaluations of interventions to involve caregivers in decision making with seniors and that the evaluations that do exist found few clinically significant effects. Elements of usual care that received positive evaluations were the availability of a decision coach and a supportive decision-making environment. Additional rigorously evaluated interventions are needed to help caregivers be more involved in decision making with seniors. Project HOPE—The People-to-People Health Foundation, Inc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bhandari, Deepak; Kertesz, Vilmos; Van Berkel, Gary J
RATIONALE: Ascorbic acid (AA) and folic acid (FA) are water-soluble vitamins and are usually fortified in food and dietary supplements. For the safety of human health, proper intake of these vitamins is recommended. Improvement in the analysis time required for the quantitative determination of these vitamins in food and nutritional formulations is desired. METHODS: A simple and fast (~5 min) in-tube sample preparation was performed, independently for FA and AA, by mixing extraction solvent with a powdered sample aliquot followed by agitation, centrifugation, and filtration to recover an extract for analysis. Quantitative detection was achieved by flow-injection (1 μL injection volume) electrospray ionization tandem mass spectrometry (ESI-MS/MS) in negative ion mode using the method of standard addition. RESULTS: The method of standard addition was employed for the quantitative estimation of each vitamin in a sample extract. At least 2 spiked and 1 non-spiked sample extracts were injected in triplicate for each quantitative analysis. Given an injection-to-injection interval of approximately 2 min, about 18 min was required to complete the quantitative estimation of each vitamin. The concentration values obtained for the respective vitamins in the standard reference material (SRM) 3280 using this approach were within the statistical range of the certified values provided in the NIST Certificate of Analysis. The estimated limits of detection of FA and AA were 13 and 5.9 ng/g, respectively. CONCLUSIONS: Flow-injection ESI-MS/MS was successfully applied to the rapid quantitation of FA and AA in SRM 3280 multivitamin/multielement tablets.
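The method of standard addition reduces to a linear regression of response against spiked amount, with the native content given by the x-intercept. The sketch below uses invented peak areas and spike levels, not SRM 3280 data.

```python
# Method-of-standard-addition sketch: linear fit of instrument response versus
# spiked analyte amount; the unspiked content is the x-intercept magnitude.
# Numbers are invented and not from SRM 3280.
import numpy as np

spiked_ng = np.array([0.0, 50.0, 100.0])        # analyte added per aliquot
response  = np.array([1200.0, 2150.0, 3080.0])  # MS/MS peak areas (arbitrary units)

slope, intercept = np.polyfit(spiked_ng, response, 1)
native_amount = intercept / slope                # ng in the unspiked aliquot
print(f"estimated native analyte: {native_amount:.1f} ng per aliquot")
```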
Four Forms of the Fourier Transform - for Freshmen, using Matlab
NASA Astrophysics Data System (ADS)
Simons, F. J.; Maloof, A. C.
2016-12-01
In 2015, a Fall "Freshman Seminar" at Princeton University (http://geoweb.princeton.edu/people/simons/FRS-SESC.html) taught students to combine field observations of the natural world with quantitative modeling and interpretation, to answer questions like: "How have Earth and human histories been recorded in the geology of Princeton, the Catskills, France and Spain?" (where we took the students on a data-gathering field trip during Fall Break), and "What experiments and analysis can a first-year (possibly non-future-major) do to query such archives of the past?" In the classroom, through problem sets, and around campus, students gained practical experience collecting geological and geophysical data in a geographic context, and analyzing these data using statistical techniques such as regression, time-series and image analysis, with the programming language Matlab. In this presentation I will detail how we instilled basic Matlab skills for quantitative geoscience data analysis through a 6-week progression of topics and exercises. In the 6 weeks after the Fall Break trip, we strengthened these competencies to make our students fully proficient for further learning, as evidenced by their end-of-term independent research work. The particular case study is focused on introducing power-spectral analysis to Freshmen, in a way that even the least quantitative among them could functionally understand. Not counting (0) "inspection", the four ways by which we have successfully instilled the concept of power-spectral analysis in a hands-on fashion are (1) "correlation", (2) "inversion", (3) "stacking", and formal (4) "Fourier transformation". These four provide the main "mappings". Along the way, of course, we also make sure that the students understand that "power-spectral density estimation" is not the same as "Fourier transformation", nor that every Fourier transform has to be "Fast". Hence, concepts from analysis-of-variance techniques, regression, and hypothesis testing arise in this context, and will be discussed.
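The distinction between a raw Fourier transform and a power-spectral-density estimate can be illustrated in a few lines. The course itself used Matlab; the equivalent sketch below is in Python with an invented noisy sinusoid.

```python
# Power-spectral-density estimate of a noisy periodic record (a stand-in for the
# kind of time-series data discussed above; the course used Matlab, this is an
# equivalent Python illustration).
import numpy as np
from scipy.signal import periodogram

fs = 100.0                                        # samples per unit time (or depth)
t  = np.arange(0, 20, 1 / fs)
x  = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)

freqs, psd = periodogram(x, fs=fs)                # PSD estimate, not just an FFT
print("dominant frequency ~", freqs[np.argmax(psd)])
```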
Measuring and Modeling Behavioral Decision Dynamics in Collective Evacuation
Carlson, Jean M.; Alderson, David L.; Stromberg, Sean P.; Bassett, Danielle S.; Craparo, Emily M.; Guiterrez-Villarreal, Francisco; Otani, Thomas
2014-01-01
Identifying and quantifying factors influencing human decision making remains an outstanding challenge, impacting the performance and predictability of social and technological systems. In many cases, system failures are traced to human factors including congestion, overload, miscommunication, and delays. Here we report results of a behavioral network science experiment, targeting decision making in a natural disaster. Our results quantify several key factors influencing individual evacuation decision making in a controlled laboratory setting. The experiment includes tensions between broadcast and peer-to-peer information, and contrasts the effects of temporal urgency associated with the imminence of the disaster and the effects of limited shelter capacity for evacuees. Based on empirical measurements of the cumulative rate of evacuations as a function of the instantaneous disaster likelihood, we develop a quantitative model for decision making that captures remarkably well the main features of observed collective behavior across many different scenarios. Moreover, this model captures the sensitivity of individual- and population-level decision behaviors to external pressures, and systematic deviations from the model provide meaningful estimates of variability in the collective response. Identification of robust methods for quantifying human decisions in the face of risk has implications for policy in disasters and other threat scenarios, specifically the development and testing of robust strategies for training and control of evacuations that account for human behavior and network topologies. PMID:24520331
NASA Astrophysics Data System (ADS)
Martin-Hernandez, F.; Negredo, A. M.; Salguero, J. M.
2015-12-01
Many storylines presenting a geoscientific background are portrayed in science fiction movies. However, this background is often discussed only in qualitative terms in outreach books and forums. Here we report a mentoring experience of an end-of-degree project carried out in the fourth year of the degree in Physics at the Complutense University of Madrid (Spain). The supervisors intended to take advantage of the students' passion for science fiction movies to foster learning by assessing a robust, quantitative and critical analysis of the main geoscientific phenomena appearing in the movie Avatar by James Cameron (2009). The student was supposed to consult abundant scientific literature. Much interest was paid to analyzing the conditions for the levitation of the Hallelujah floating mountains in Pandora, the imaginary satellite where the movie action takes place. Pandora was assumed to be an Earth-like astronomical object where the same physical laws as on the Earth could be applied. The Hallelujah Mountains are made of unobtanium, an electrical superconductor at room temperature and therefore a diamagnetic material, and they are assumed to be located over a magnetic field pole. The numerical values of the magnetic susceptibility and of the field required to make the material levitate under Pandora's gravity conditions were estimated. For this purpose, the magnetic susceptibility of the superconductor with the highest critical temperature existing today on Earth, the cuprate YBa2Cu3O7, was estimated. Results were compared with the magnetic susceptibility of two diamagnetic and abundant materials in the Earth's crust, namely quartz and calcite, and with the water susceptibility. The magnetic field required to levitate cuprates was almost 9 T, about six orders of magnitude higher than the Earth's magnetic field. On the basis of the quantitative analysis of the magnetic and gravity fields in Pandora, the student provided a list of suggestions to improve the scientific basis for future releases.
Toxicity Estimation Software Tool (TEST)
The Toxicity Estimation Software Tool (TEST) was developed to allow users to easily estimate the toxicity of chemicals using Quantitative Structure Activity Relationships (QSARs) methodologies. QSARs are mathematical models used to predict measures of toxicity from the physical c...
NASA Astrophysics Data System (ADS)
Boyer, E. W.; Alexander, R. B.; Smith, R. A.; Shih, J.; Schwarz, G. E.
2010-12-01
Organic carbon (OC) is a critical water quality characteristic in surface waters, as it is an important component of the energy balance and food chains in freshwater and estuarine aquatic ecosystems, is significant in the mobilization and transport of contaminants along flow paths, and is associated with the formation of known carcinogens in drinking water supplies. The importance of OC dynamics for water quality has been recognized, but challenges remain in quantitatively addressing processes controlling OC fluxes over broad spatial scales in a hydrological context. Here, we: 1) quantified lateral OC fluxes in rivers, streams, and reservoirs across the nation; 2) partitioned how much of the organic carbon stored in lakes, rivers, and streams comes from allochthonous sources (produced in the terrestrial landscape) versus autochthonous sources (produced in-stream by primary production); and 3) estimated the delivery of dissolved and total forms of organic carbon to coastal estuaries and embayments. To accomplish this, we developed national-scale models of organic carbon in U.S. surface waters using the spatially referenced regression on watersheds (SPARROW) technique. This approach uses mechanistic formulations, imposes mass balance constraints, and provides a formal parameter estimation structure to statistically estimate sources and fate of OC in terrestrial and aquatic ecosystems. We make use of a GIS-based framework to describe sources of organic matter and characteristics of the landscape that affect its fate and transport, using spatial databases that provide characterizations of climate, land cover, primary productivity, topography, soils, geology, and water routing. We calibrated and evaluated the model with statistical estimates of organic carbon loads that were observed at 1,125 monitoring stations across the nation. Our results illustrate spatial patterns and magnitudes of OC loadings in rivers and reservoirs, highlighting hot spots and suggesting the origins of the OC delivered to each location. Further, our results yield quantitative estimates of aquatic OC fluxes for large water regions and for the nation, providing a refined estimate of the role of surface water fluxes of OC in relationship to regional and national carbon budgets. Finally, we are using our simulations to explore the potential role of climate and other changes in the terrestrial environment on OC fluxes in aquatic systems.
[Quantitative surface analysis of Pt-Co, Cu-Au and Cu-Ag alloy films by XPS and AES].
Li, Lian-Zhong; Zhuo, Shang-Jun; Shen, Ru-Xiang; Qian, Rong; Gao, Jie
2013-11-01
In order to improve the accuracy of quantitative AES analysis, we combined XPS with AES and studied a method to reduce the error of AES quantitative analysis. Pt-Co, Cu-Au and Cu-Ag binary alloy thin films were selected as the samples, and XPS was used to correct the AES quantitative results by adjusting the Auger sensitivity factors so that the results of the two techniques agreed more closely. We then verified the accuracy of AES quantitative analysis with the revised sensitivity factors on other samples with different composition ratios, and the results showed that the corrected relative sensitivity factors can reduce the error of AES quantitative analysis to less than 10%. Peak definition is difficult in the integral-spectrum form of AES analysis, since choosing the starting and ending points for the characteristic Auger peak intensity area involves great uncertainty. To make the analysis easier, we also processed the data in the differential-spectrum form, performed quantitative analysis on the basis of peak-to-peak height instead of peak area, corrected the relative sensitivity factors, and again verified the accuracy of quantitative analysis on the other samples with different composition ratios. The results showed that the analytical error of AES quantitative analysis was reduced to less than 9%. This shows that the accuracy of AES quantitative analysis can be greatly improved by combining XPS with AES to correct the Auger sensitivity factors, since matrix effects are then taken into account. The good consistency obtained demonstrates the feasibility of this method.
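Quantitative AES/XPS analysis with relative sensitivity factors amounts to normalizing intensity-to-sensitivity ratios. The sketch below, with invented intensities and factors for a hypothetical Cu-Au film, shows how correcting the factors shifts the computed composition.

```python
# Relative-sensitivity-factor quantification as used in AES/XPS surface analysis:
# atomic fraction x_i = (I_i / S_i) / sum_j (I_j / S_j). Intensities and factors
# below are invented for a hypothetical Cu-Au film.
import numpy as np

intensities         = np.array([1.8e4, 2.6e4])   # peak (or peak-to-peak) intensities: Cu, Au
sensitivity_factors = np.array([0.20, 0.26])     # handbook values before correction
corrected_factors   = np.array([0.22, 0.24])     # e.g. after cross-calibration against XPS

def atomic_fractions(I, S):
    ratios = I / S
    return ratios / ratios.sum()

print("uncorrected:", atomic_fractions(intensities, sensitivity_factors))
print("corrected:  ", atomic_fractions(intensities, corrected_factors))
```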
Voronovskaja's theorem revisited
NASA Astrophysics Data System (ADS)
Tachev, Gancho T.
2008-07-01
We present a new quantitative variant of Voronovskaja's theorem for the Bernstein operator. This estimate improves the recent quantitative versions of Voronovskaja's theorem for certain Bernstein-type operators obtained by H. Gonska, P. Pitul and I. Rasa in 2006.
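For reference, the classical (non-quantitative) Voronovskaja statement for the Bernstein operators, which the quantitative variants sharpen with explicit error bounds, reads:

```latex
% Classical Voronovskaja theorem for the Bernstein operators B_n:
% for f \in C^2[0,1] and x \in [0,1],
\lim_{n \to \infty} n \bigl( B_n f(x) - f(x) \bigr) = \frac{x(1-x)}{2}\, f''(x),
\qquad \text{where} \quad
B_n f(x) = \sum_{k=0}^{n} f\!\left(\tfrac{k}{n}\right) \binom{n}{k} x^{k} (1-x)^{n-k}.
```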
Neural network fusion capabilities for efficient implementation of tracking algorithms
NASA Astrophysics Data System (ADS)
Sundareshan, Malur K.; Amoozegar, Farid
1997-03-01
The ability to efficiently fuse information of different forms to facilitate intelligent decision making is one of the major capabilities of trained multilayer neural networks that is now being recognized. While development of innovative adaptive control algorithms for nonlinear dynamical plants that attempt to exploit these capabilities seems to be more popular, a corresponding development of nonlinear estimation algorithms using these approaches, particularly for application in target surveillance and guidance operations, has not received similar attention. We describe the capabilities and functionality of neural network algorithms for data fusion and implementation of tracking filters. To discuss details and to serve as a vehicle for quantitative performance evaluations, the illustrative case of estimating the position and velocity of surveillance targets is considered. Efficient target-tracking algorithms that can utilize data from a host of sensing modalities and are capable of reliably tracking even uncooperative targets executing fast and complex maneuvers are of interest in a number of applications. The primary motivation for employing neural networks in these applications comes from the efficiency with which more features extracted from different sensor measurements can be utilized as inputs for estimating target maneuvers. A system architecture that efficiently integrates the fusion capabilities of a trained multilayer neural net with the tracking performance of a Kalman filter is described. The innovation lies in the way the fusion of multisensor data is accomplished to facilitate improved estimation without increasing the computational complexity of the dynamical state estimator itself.
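The dynamical state estimator referred to above is a standard Kalman filter. A single predict/update cycle for a constant-velocity position-velocity model is sketched below with illustrative noise levels; the neural-network fusion stage itself is not shown.

```python
# One predict/update cycle of a constant-velocity Kalman filter (the state
# estimator referred to above); matrices and noise levels are illustrative.
import numpy as np

dt = 1.0
F = np.array([[1, dt], [0, 1]])        # state transition: [position, velocity]
H = np.array([[1, 0]])                 # only position is measured
Q = 0.01 * np.eye(2)                   # process noise
R = np.array([[0.5]])                  # measurement noise

x = np.array([[0.0], [1.0]])           # initial state estimate
P = np.eye(2)                          # initial covariance

z = np.array([[1.2]])                  # incoming position measurement

# Predict
x = F @ x
P = F @ P @ F.T + Q
# Update
S = H @ P @ H.T + R
K = P @ H.T @ np.linalg.inv(S)
x = x + K @ (z - H @ x)
P = (np.eye(2) - K @ H) @ P
print("updated state:", x.ravel())
```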
MilQuant: a free, generic software tool for isobaric tagging-based quantitation.
Zou, Xiao; Zhao, Minzhi; Shen, Hongyan; Zhao, Xuyang; Tong, Yuanpeng; Wang, Qingsong; Wei, Shicheng; Ji, Jianguo
2012-09-18
Isobaric tagging techniques such as iTRAQ and TMT are widely used in quantitative proteomics and are especially useful for samples that demand in vitro labeling. Due to the diversity of choices of MS acquisition approaches, identification algorithms, and relative abundance deduction strategies, researchers are faced with a plethora of possibilities when it comes to data analysis. However, the lack of a generic and flexible software tool often makes it cumbersome for researchers to perform the analysis entirely as desired. In this paper, we present MilQuant, an mzXML-based isobaric labeling quantitator, a pipeline of freely available programs that supports native acquisition files produced by all mass spectrometer types and collection approaches currently used in isobaric tagging-based MS data collection. Moreover, aside from effective normalization and abundance ratio deduction algorithms, MilQuant exports various intermediate results along each step of the pipeline, making it easy for researchers to customize the analysis. The functionality of MilQuant was demonstrated by four distinct datasets from different laboratories. The compatibility and extendibility of MilQuant make it a generic and flexible tool that can serve as a full solution to data analysis of isobaric tagging-based quantitation. Copyright © 2012 Elsevier B.V. All rights reserved.
Quantitative Market Research Regarding Funding of District 8 Construction Projects
DOT National Transportation Integrated Search
1995-05-01
The primary objective of this quantitative research is to provide information : for more effective decision making regarding the level of investment in various : transportation systems in District 8. : This objective was accomplished by establishing ...
Evaluating Rapid Models for High-Throughput Exposure Forecasting (SOT)
High throughput exposure screening models can provide quantitative predictions for thousands of chemicals; however these predictions must be systematically evaluated for predictive ability. Without the capability to make quantitative, albeit uncertain, forecasts of exposure, the ...
NASA Astrophysics Data System (ADS)
Diederich, M.; Ryzhkov, A.; Simmer, C.; Mühlbauer, K.
2011-12-01
The amplitude of a radar wave reflected by meteorological targets can be misjudged due to several factors. At X band wavelength, attenuation of the radar beam by hydrometeors reduces the signal strength enough to be a significant source of error for quantitative precipitation estimation. Depending on the surrounding orography, the radar beam may be partially blocked when scanning at low elevation angles, and knowledge of the exact amount of signal loss through beam blockage becomes necessary. The phase shift between the radar signals at horizontal and vertical polarizations is affected by the hydrometeors that the beam travels through, but remains unaffected by variations in signal strength. This has allowed for several ways of compensating for the attenuation of the signal, and for consistency checks between these variables. In this study, we make use of several weather radars and a rain-gauge network measuring in the same area to examine the effectiveness of several methods of attenuation and beam-blockage correction. The methods include consistency checks of radar reflectivity and specific differential phase, calculation of beam blockage using a topography map, estimation of attenuation using differential propagation phase, and the ZPHI method proposed by Testud et al. in 2000. Results show the high effectiveness of differential phase in estimating attenuation, and the potential of the ZPHI method to compensate for attenuation, beam blockage, and calibration errors.
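A common, simple form of differential-phase attenuation correction (distinct from the full ZPHI method) adds a path-integrated attenuation proportional to the accumulated differential phase. The coefficient and reflectivity profile below are illustrative X-band values, not results from the study.

```python
# Simple differential-phase attenuation correction (not the full ZPHI method):
# two-way path-integrated attenuation is approximated as alpha * delta_PhiDP.
# alpha and the reflectivity profile below are illustrative X-band values.
import numpy as np

alpha = 0.28                                   # dB per degree of PhiDP (assumed)
phidp = np.array([0.0, 2.0, 5.0, 9.0, 14.0])   # accumulated differential phase (deg)
z_measured = np.array([38.0, 35.0, 33.0, 30.0, 27.0])   # measured reflectivity (dBZ)

pia = alpha * (phidp - phidp[0])               # path-integrated attenuation per gate
z_corrected = z_measured + pia
print("corrected reflectivity (dBZ):", z_corrected)
```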
Estimating the number of animals in wildlife populations
Lancia, R.A.; Kendall, W.L.; Pollock, K.H.; Nichols, J.D.; Braun, Clait E.
2005-01-01
INTRODUCTION In 1938, Howard M. Wight devoted 9 pages, which was an entire chapter in the first wildlife management techniques manual, to what he termed 'census' methods. As books and chapters such as this attest, the volume of literature on this subject has grown tremendously. Abundance estimation remains an active area of biometrical research, as reflected in the many differences between this chapter and the similar contribution in the previous manual. Our intent in this chapter is to present an overview of the basic and most widely used population estimation techniques and to provide an entree to the relevant literature. Several possible approaches could be taken in writing a chapter dealing with population estimation. For example, we could provide a detailed treatment focusing on statistical models and on derivation of estimators based on these models. Although a chapter using this approach might provide a valuable reference for quantitative biologists and biometricians, it would be of limited use to many field biologists and wildlife managers. Another approach would be to focus on details of actually applying different population estimation techniques. This approach would include both field application (e.g., how to set out a trapping grid or conduct an aerial survey) and detailed instructions on how to use the resulting data with appropriate estimation equations. We are reluctant to attempt such an approach, however, because of the tremendous diversity of real-world field situations defined by factors such as the animal being studied, habitat, available resources, and because of our resultant inability to provide detailed instructions for all possible cases. We believe it is more useful to provide the reader with the conceptual basis underlying estimation methods. Thus, we have tried to provide intuitive explanations for how basic methods work. In doing so, we present relevant estimation equations for many methods and provide citations of more detailed treatments covering both statistical considerations and field applications. We have chosen to present methods that are representative of classes of estimators, rather than address every available method. Our hope is that this chapter will provide the reader with enough background to make an informed decision about what general method(s) will likely perform well in any particular field situation. Readers with a more quantitative background may then be able to consult detailed references and tailor the selected method to suit their particular needs. Less quantitative readers should consult a biometrician, preferably one with experience in wildlife studies, for this 'tailoring,' with the hope they will be able to do so with a basic understanding of the general method, thereby permitting useful interaction and discussion with the biometrician. SUMMARY Estimating the abundance or density of animals in wild populations is not a trivial matter. Virtually all techniques involve the basic problem of estimating the probability of seeing, capturing, or otherwise detecting animals during some type of survey and, in many cases, sampling concerns as well. In the case of indices, the detection probability is assumed to be constant (but unknown). We caution against use of indices unless this assumption can be verified for the comparison(s) of interest. In the case of population estimation, many methods have been developed over the years to estimate the probability of detection associated with various kinds of count statistics. 
Techniques range from complete counts, where sampling concerns often dominate, to incomplete counts where detection probabilities are also important. Some examples of the latter are multiple observers, removal methods, and capture-recapture. Before embarking on a survey to estimate the size of a population, one must understand clearly what information is needed and for what purpose the information will be used. The key to derivin
PEITH(Θ): perfecting experiments with information theory in Python with GPU support.
Dony, Leander; Mackerodt, Jonas; Ward, Scott; Filippi, Sarah; Stumpf, Michael P H; Liepe, Juliane
2018-04-01
Different experiments provide differing levels of information about a biological system. This makes it difficult, a priori, to select one of them beyond mere speculation and/or belief, especially when resources are limited. With the increasing diversity of experimental approaches and general advances in quantitative systems biology, methods that inform us about the information content that a given experiment carries about the question we want to answer become crucial. PEITH(Θ) is a general-purpose Python framework for experimental design in systems biology. PEITH(Θ) uses Bayesian inference and information theory to derive which experiments are most informative for estimating all model parameters and/or performing model predictions. https://github.com/MichaelPHStumpf/Peitho. m.stumpf@imperial.ac.uk or juliane.liepe@mpibpc.mpg.de.
Large-deviation probabilities for correlated Gaussian processes and intermittent dynamical systems
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Nicol, Matthew; Kantz, Holger
2018-05-01
In its classical version, the theory of large deviations makes quantitative statements about the probability of outliers when estimating time averages, if the time series data are independent and identically distributed. We study large-deviation probabilities (LDPs) for time averages in short- and long-range correlated Gaussian processes and show that long-range correlations lead to subexponential decay of LDPs. A particular deterministic intermittent map can, depending on a control parameter, also generate long-range correlated time series. We illustrate numerically, in agreement with the mathematical literature, that this type of intermittency leads to a power law decay of LDPs. The power law decay holds irrespective of whether the correlation time is finite or infinite, and hence irrespective of whether the central limit theorem applies or not.
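For i.i.d. data, the classical statement alluded to above is Cramér's theorem: the probability of a deviation of the time average decays exponentially, with a rate function given by a Legendre transform of the cumulant generating function. The subexponential and power-law decays discussed in the abstract are departures from this form.

```latex
% Cramér's theorem for i.i.d. data X_1, ..., X_n with mean \mu: for a > \mu,
\lim_{n \to \infty} \frac{1}{n} \log P\!\left( \frac{1}{n} \sum_{i=1}^{n} X_i \ge a \right) = -I(a),
\qquad
I(a) = \sup_{\lambda \in \mathbb{R}} \bigl( \lambda a - \log \mathbb{E}\, e^{\lambda X_1} \bigr).
```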
Comparative study of electronic structure and microscopic model of SrMn3P4O14 and Sr3Cu3(PO4)4
NASA Astrophysics Data System (ADS)
Khanam, Dilruba; Rahaman, Badiur
2018-05-01
We present first-principles density functional calculations for a comparative study of the underlying spin models of SrMn3P4O14 and Sr3Cu3(PO4)4. We explicitly discuss the nature of the exchange paths and provide quantitative estimates of the magnetic exchange couplings for both compounds. A microscopic modeling based on analysis of the electronic structure of both systems puts them in the interesting class of weakly coupled trimer units, which form chains of S=5/2 trimers for SrMn3P4O14 and S=1/2 trimers for Sr3Cu3(PO4)4 that are in turn weakly coupled to each other.
Prevalence of suicidal ideation in Chinese college students: a meta-analysis.
Li, Zhan-Zhan; Li, Ya-Ming; Lei, Xian-Yang; Zhang, Dan; Liu, Li; Tang, Si-Yuan; Chen, Lizhang
2014-01-01
About 1 million people worldwide commit suicide each year, and college students with suicidal ideation are at high risk of suicide. The prevalence of suicidal ideation in college students has been estimated extensively, but quantitative syntheses of overall prevalence are scarce, especially in China. Accurate estimates of prevalence are important for making public policy. In this paper, we aimed to determine the prevalence of suicidal ideation in Chinese college students. Databases including PubMed, Web of Knowledge, Chinese Web of Knowledge, Wangfang (Chinese database) and Weipu (Chinese database) were systematically reviewed to identify articles published between 2004 and July 2013, in either English or Chinese, reporting prevalence estimates of suicidal ideation among Chinese college students. The strategy also included a secondary search of reference lists of records retrieved from databases. Then the prevalence estimates were summarized using a random effects model. The effects of moderator variables on the prevalence estimates were assessed using a meta-regression model. A total of 41 studies involving 160,339 college students were identified, and the prevalence ranged from 1.24% to 26.00%. The overall pooled prevalence of suicidal ideation among Chinese college students was 10.72% (95% CI: 8.41% to 13.28%). We noted substantial heterogeneity in prevalence estimates. Subgroup analyses showed that the prevalence of suicidal ideation in females is higher than in males. The prevalence of suicidal ideation in Chinese college students is relatively high, although the suicide rate is lower compared with the entire society, suggesting the need for local surveys to inform the development of health services for college students.
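Random-effects pooling of prevalence estimates is commonly done with the DerSimonian-Laird estimator on the logit scale. The sketch below uses three invented studies and is not the authors' exact analysis.

```python
# DerSimonian-Laird random-effects pooling of prevalence estimates on the logit
# scale; the three studies below are invented, not those in the meta-analysis.
import numpy as np

events = np.array([120, 300, 45])
n      = np.array([1000, 2600, 520])

p   = events / n
y   = np.log(p / (1 - p))                       # logit-transformed prevalence
var = 1 / events + 1 / (n - events)             # approximate variance of the logit

w_fixed = 1 / var
y_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
q  = np.sum(w_fixed * (y - y_fixed) ** 2)       # Cochran's Q
df = len(y) - 1
c  = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)                   # between-study variance

w_random = 1 / (var + tau2)
y_random = np.sum(w_random * y) / np.sum(w_random)
pooled = 1 / (1 + np.exp(-y_random))            # back-transform to a proportion
print(f"pooled prevalence ~ {pooled:.3f}, tau^2 ~ {tau2:.3f}")
```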
NASA Astrophysics Data System (ADS)
Ballari, D.; Castro, E.; Campozano, L.
2016-06-01
Precipitation monitoring is of utmost importance for water resource management. However, in regions of complex terrain such as Ecuador, the high spatio-temporal precipitation variability and the scarcity of rain gauges make it difficult to obtain accurate estimations of precipitation. Remotely sensed precipitation estimates, such as the Multi-satellite Precipitation Analysis TRMM, can cope with this problem after a validation process, which must be representative in space and time. In this work we validate monthly estimates from TRMM 3B43 satellite precipitation (0.25° x 0.25° resolution) by using ground data from 14 rain gauges in Ecuador. The stations are located in the 3 most differentiated regions of the country: the Pacific coastal plains, the Andean highlands, and the Amazon rainforest. Time series of imagery and rain gauges between 1998 and 2010 were compared using statistical error metrics such as bias, root mean square error, and Pearson correlation, and with detection indexes such as probability of detection, equitable threat score, false alarm rate and frequency bias index. The results showed that precipitation seasonality is well represented and TRMM 3B43 acceptably estimates the monthly precipitation in the three regions of the country. According to both statistical error metrics and detection indexes, the coastal and Amazon regions are better estimated quantitatively than the Andean highlands. Additionally, it was found that there are better estimations for light precipitation rates. The present validation of TRMM 3B43 provides important results to support further studies on calibration and bias correction of precipitation in ungauged watershed basins.
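The error metrics and detection indexes named above can be computed directly from paired satellite-gauge series. The short series and the rain/no-rain threshold in the sketch below are invented.

```python
# Continuous and categorical validation metrics for paired satellite vs. gauge
# monthly precipitation; the short series below are invented.
import numpy as np

gauge = np.array([120.0, 80.0, 10.0, 0.0, 200.0, 55.0])
sat   = np.array([100.0, 95.0,  5.0, 8.0, 180.0, 60.0])

bias = np.mean(sat - gauge)
rmse = np.sqrt(np.mean((sat - gauge) ** 2))
r    = np.corrcoef(sat, gauge)[0, 1]

threshold = 20.0                                   # "rain month" threshold (assumed)
hits        = np.sum((sat >= threshold) & (gauge >= threshold))
false_alarm = np.sum((sat >= threshold) & (gauge <  threshold))
misses      = np.sum((sat <  threshold) & (gauge >= threshold))

pod = hits / (hits + misses)                       # probability of detection
far = false_alarm / (hits + false_alarm)           # false alarm ratio
fbi = (hits + false_alarm) / (hits + misses)       # frequency bias index
print(f"bias={bias:.1f} rmse={rmse:.1f} r={r:.2f} POD={pod:.2f} FAR={far:.2f} FBI={fbi:.2f}")
```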
Wu, Allison Chia-Yi; Rifkin, Scott A
2015-03-27
Recent techniques for tagging and visualizing single molecules in fixed or living organisms and cell lines have been revolutionizing our understanding of the spatial and temporal dynamics of fundamental biological processes. However, fluorescence microscopy images are often noisy, and it can be difficult to distinguish a fluorescently labeled single molecule from background speckle. We present a computational pipeline to distinguish the true signal of fluorescently labeled molecules from background fluorescence and noise. We test our technique using the challenging case of wide-field, epifluorescence microscope image stacks from single molecule fluorescence in situ experiments on nematode embryos where there can be substantial out-of-focus light and structured noise. The software recognizes and classifies individual mRNA spots by measuring several features of local intensity maxima and classifying them with a supervised random forest classifier. A key innovation of this software is that, by estimating the probability that each local maximum is a true spot in a statistically principled way, it makes it possible to estimate the error introduced by image classification. This can be used to assess the quality of the data and to estimate a confidence interval for the molecule count estimate, all of which are important for quantitative interpretations of the results of single-molecule experiments. The software classifies spots in these images well, with >95% AUROC on realistic artificial data and outperforms other commonly used techniques on challenging real data. Its interval estimates provide a unique measure of the quality of an image and confidence in the classification.
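The classification step described above, scoring candidate local maxima with a supervised random forest and using class probabilities to propagate uncertainty into the count, can be sketched as follows; the feature table and labels are synthetic, and this is not the published software.

```python
# Supervised classification of candidate spots from features of local intensity
# maxima, with class probabilities used to propagate classification uncertainty.
# The feature table and labels are synthetic; this is not the published pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# columns: peak intensity, local SNR, fitted spot width (all invented units)
true_spots = rng.normal([900, 8.0, 1.5], [150, 1.5, 0.3], size=(200, 3))
background = rng.normal([300, 2.0, 2.5], [120, 1.0, 0.8], size=(200, 3))
X = np.vstack([true_spots, background])
y = np.r_[np.ones(200), np.zeros(200)]

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

candidates = np.array([[850, 7.2, 1.6], [350, 2.4, 2.2]])
p_spot = clf.predict_proba(candidates)[:, 1]     # probability each maximum is a true spot
expected_count = p_spot.sum()                    # probabilistic mRNA count estimate
print("P(spot):", p_spot, "expected count:", expected_count)
```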
Improving the quantification of contrast enhanced ultrasound using a Bayesian approach
NASA Astrophysics Data System (ADS)
Rizzo, Gaia; Tonietto, Matteo; Castellaro, Marco; Raffeiner, Bernd; Coran, Alessandro; Fiocco, Ugo; Stramare, Roberto; Grisan, Enrico
2017-03-01
Contrast Enhanced Ultrasound (CEUS) is a sensitive imaging technique to assess tissue vascularity that can be useful in the quantification of different perfusion patterns. This can be particularly important in the early detection and staging of arthritis. In a recent study we have shown that a Gamma-variate can accurately quantify synovial perfusion and is flexible enough to describe many heterogeneous patterns. Moreover, we have shown that through a pixel-by-pixel analysis the quantitative information gathered characterizes the perfusion more effectively. However, the SNR of the data and the nonlinearity of the model make the parameter estimation difficult. Using the classical non-linear least-squares (NLLS) approach, the number of unreliable estimates (those with an asymptotic coefficient of variation greater than a user-defined threshold) is significant, thus affecting the overall description of the perfusion kinetics and of its heterogeneity. In this work we propose to solve the parameter estimation at the pixel level within a Bayesian framework using Variational Bayes (VB), with an automatic and data-driven prior initialization. When evaluating the pixels for which both VB and NLLS provided reliable estimates, we demonstrated that the parameter values provided by the two methods are well correlated (Pearson's correlation between 0.85 and 0.99). Moreover, the mean percentage of unreliable pixels drastically drops from 54% (NLLS) to 26% (VB), without increasing the computational time (0.05 s/pixel for NLLS and 0.07 s/pixel for VB). When considering the efficiency of the algorithms as computational time per reliable estimate, VB outperforms NLLS (0.11 versus 0.25 seconds per reliable estimate, respectively).
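The NLLS baseline against which the Variational Bayes estimator is compared amounts to fitting a gamma-variate to each pixel's time-intensity curve. The sketch below fits a synthetic curve and reports asymptotic coefficients of variation; all parameter values are invented.

```python
# Non-linear least-squares fit of a gamma-variate to a synthetic time-intensity
# curve, i.e. the NLLS baseline mentioned above; parameter values are invented.
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    dt = np.clip(t - t0, 0, None)
    return A * dt**alpha * np.exp(-dt / beta)

t = np.linspace(0, 60, 120)
rng = np.random.default_rng(3)
signal = gamma_variate(t, 5.0, 4.0, 1.8, 6.0) + rng.normal(0, 2.0, t.size)

p_hat, p_cov = curve_fit(gamma_variate, t, signal, p0=[4.0, 3.0, 1.5, 5.0],
                         bounds=([1e-3, 0.0, 0.1, 0.5], [50.0, 20.0, 5.0, 30.0]))
cv = np.sqrt(np.diag(p_cov)) / np.abs(p_hat)   # asymptotic coefficient of variation
print("estimates:", p_hat.round(2), "CV:", cv.round(2))
```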
Pulkkinen, Aki; Cox, Ben T; Arridge, Simon R; Goh, Hwan; Kaipio, Jari P; Tarvainen, Tanja
2016-11-01
Estimation of the optical absorption and scattering of a target is an inverse problem associated with quantitative photoacoustic tomography. Conventionally, the problem is expressed in two stages. First, images of the initial pressure distribution created by absorption of a light pulse are formed based on acoustic boundary measurements. Then, the optical properties are determined based on these photoacoustic images. The optical stage of the inverse problem can thus suffer from, for example, artefacts caused by the acoustic stage. These could be caused by imperfections in the acoustic measurement setting, an example of which is a limited-view acoustic measurement geometry. In this work, the forward model of quantitative photoacoustic tomography is treated as a coupled acoustic and optical model and the inverse problem is solved using a Bayesian approach. The spatial distributions of the optical properties of the imaged target are estimated directly from the photoacoustic time series in varying acoustic detection and optical illumination configurations. It is numerically demonstrated that estimation of the optical properties of the imaged target is feasible in a limited-view acoustic detection setting.
Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti
2017-08-11
In this article, we explore methods that enable estimation of material properties with dynamic mode atomic force microscopy, suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising a flexure probe interacting with the sample, as an equivalent cantilever system, and compares a steady-state-analysis-based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state-based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement, but slower than the recursive technique. The parameters of the equivalent system are utilized to interpret the storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided in the quantitative estimation of material properties.
Quantitative prediction of drug side effects based on drug-related features.
Niu, Yanqing; Zhang, Wen
2017-09-01
Unexpected side effects of drugs are of great concern in drug development, and the identification of side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing up their side effects with weights. The quantitative scores may measure the dangers of drugs and thus help to compare the risk of different drugs. Here, we attempt to predict the quantitative scores of drugs, namely the quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative powers for the quantitative prediction. Then, we consider several feature combination strategies (direct combination, average scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average scoring ensemble model, which produces the better performance, is used as the final quantitative prediction model. Since weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfying results. Although other state-of-the-art methods cannot make the quantitative prediction directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in the quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects, and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
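The weighted-sum score and the average scoring ensemble described above can be illustrated with a toy example; the side-effect profiles, weights, and per-feature predictions below are all invented.

```python
# Toy illustration of the quantitative scoring idea: a drug's score is a weighted
# sum of its side-effect profile, and predictions from several feature views are
# combined by simple averaging. All data and "predictions" are invented.
import numpy as np

rng = np.random.default_rng(5)
n_drugs, n_effects = 4, 6
profiles = rng.integers(0, 2, size=(n_drugs, n_effects))   # 1 = side effect present
weights  = rng.random(n_effects)                            # empirical severity weights

true_scores = profiles @ weights                            # weighted-sum quantitative score

# "Average scoring ensemble": average the scores predicted from three feature views
# (here simulated as noisy copies of the true scores, purely for illustration).
pred_substructure = true_scores + rng.normal(0, 0.3, n_drugs)
pred_targets      = true_scores + rng.normal(0, 0.3, n_drugs)
pred_indications  = true_scores + rng.normal(0, 0.3, n_drugs)
ensemble = np.mean([pred_substructure, pred_targets, pred_indications], axis=0)

print("true scores:    ", true_scores.round(2))
print("ensemble scores:", ensemble.round(2))
```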
Yanez, Livia Z; Camarillo, David B
2017-04-01
Measurement of oocyte and embryo biomechanical properties has recently emerged as an exciting new approach to obtain a quantitative, objective estimate of developmental potential. However, many traditional methods for probing cell mechanical properties are time consuming, labor intensive and require expensive equipment. Microfluidic technology is currently making its way into many aspects of assisted reproductive technologies (ART), and is particularly well suited to measure embryo biomechanics due to the potential for robust, automated single-cell analysis at a low cost. This review will highlight microfluidic approaches to measure oocyte and embryo mechanics along with their ability to predict developmental potential and find practical application in the clinic. Although these new devices must be extensively validated before they can be integrated into the existing clinical workflow, they could eventually be used to constantly monitor oocyte and embryo developmental progress and enable more optimal decision making in ART. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Box-Cox transformation for QTL mapping.
Yang, Runqing; Yi, Nengjun; Xu, Shizhong
2006-01-01
The maximum likelihood method of QTL mapping assumes that the phenotypic values of a quantitative trait follow a normal distribution. If the assumption is violated, some form of transformation should be applied to make the assumption approximately true. The Box-Cox transformation is a general transformation method which can be applied to many different types of data. The flexibility of the Box-Cox transformation is due to a variable, called the transformation factor, that appears in the Box-Cox formula. We developed a maximum likelihood method that treats the transformation factor as an unknown parameter, which is estimated from the data simultaneously along with the QTL parameters. The method makes an objective choice of data transformation and thus can be applied to QTL analysis for many different types of data. Simulation studies show that (1) the Box-Cox transformation can substantially increase the power of QTL detection; (2) the Box-Cox transformation can replace some specialized transformation methods that are commonly used in QTL mapping; and (3) applying the Box-Cox transformation to data already normally distributed does not harm the result.
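The transformation factor can be estimated by maximum likelihood with standard tools; the sketch below uses scipy.stats.boxcox on a synthetic skewed phenotype and does not include the QTL scan itself.

```python
# Maximum-likelihood estimation of the Box-Cox transformation factor for a
# skewed phenotype prior to QTL analysis (the QTL scan itself is not shown).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
phenotype = rng.lognormal(mean=2.0, sigma=0.5, size=400)   # skewed, strictly positive

transformed, lam = stats.boxcox(phenotype)                 # lambda chosen by maximum likelihood
print(f"estimated transformation factor lambda ~ {lam:.2f}")
print("skewness before/after:",
      round(stats.skew(phenotype), 2), round(stats.skew(transformed), 2))
```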
Mitigating direct detection bounds in non-minimal Higgs portal scalar dark matter models
NASA Astrophysics Data System (ADS)
Bhattacharya, Subhaditya; Ghosh, Purusottam; Maity, Tarak Nath; Ray, Tirtha Sankar
2017-10-01
The minimal Higgs portal dark matter model is increasingly in tension with recent results from direct detection experiments like LUX and XENON. In this paper we make a systematic study of simple extensions of the Z_2 stabilized singlet scalar Higgs portal scenario in terms of their prospects at direct detection experiments. We consider both enlarging the stabilizing symmetry to Z_3 and incorporating multipartite features in the dark sector. We demonstrate that in these non-minimal models the interplay of annihilation, co-annihilation and semi-annihilation processes considerably relaxes constraints from present and proposed direct detection experiments while simultaneously saturating the observed dark matter relic density. We explore in particular the resonant semi-annihilation channel within the multipartite Z_3 framework, which results in new unexplored regions of parameter space that would be difficult to constrain by direct detection experiments in the near future. The role of dark matter exchange processes within the multi-component Z_3 × Z_3′ framework is illustrated. We make quantitative estimates to elucidate the role of various annihilation processes in the different allowed regions of parameter space within these models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaponov, Yu. V.
A special Majorana model for three neutrino flavors is developed on the basis of the Pauli transformation group. In this model, the neutrinos possess a partially conserved generalized lepton (Pauli) charge that makes it possible to discriminate between neutrinos of different type. It is shown that, within the model in question, a transition from the basic 'mass' representation, where the average value of this charge is zero, to the representation associated with physical neutrinos characterized by specific Pauli 'flavor' charges establishes a relation between the neutrino mixing angles θ_mix,12, θ_mix,23, and θ_mix,13 and an additional relation between the Majorana neutrino masses. The Lagrangian mass part, which includes a term invariant under Pauli transformations and a representation-dependent term, concurrently assumes a 'quasi-Dirac' form. With allowance for these relations, the existing set of experimental data on the features of neutrino oscillations makes it possible to obtain quantitative estimates for the absolute values of the neutrino masses and the 2β-decay mass parameter m_ββ and a number of additional constraints on the neutrino mixing angles.
Schmidt, J.M.; Light, T.D.; Drew, L.J.; Wilson, Frederic H.; Miller, M.L.; Saltus, R.W.
2007-01-01
The Bay Resource Management Plan (RMP) area in southwestern Alaska, north and northeast of Bristol Bay, contains significant potential for undiscovered locatable mineral resources of base and precious metals, in addition to metallic mineral deposits that are already known. A quantitative probabilistic assessment has identified 24 tracts of land that are permissive for 17 mineral deposit model types likely to be explored for within the next 15 years in this region. Commodities we discuss in this report that have potential to occur in the Bay RMP area are Ag, Au, Cr, Cu, Fe, Hg, Mo, Pb, Sn, W, Zn, and platinum-group elements. Geoscience data for the region are sufficient to make quantitative estimates of the number of undiscovered deposits only for the porphyry copper, epithermal vein, copper skarn, iron skarn, hot-spring mercury, placer gold, and placer platinum deposit models. A description of a group of shallow- to intermediate-level intrusion-related gold deposits is combined with grade and tonnage data from 13 deposits of this type to provide a quantitative estimate of undiscovered deposits of this new type. We estimate that significant resources of Ag, Au, Cu, Fe, Hg, Mo, Pb, and Pt occur in the Bay Resource Management Plan area in these deposit types. At the 10th percentile probability level, the Bay RMP area is estimated to contain 10,067 metric tons silver, 1,485 metric tons gold, 12.66 million metric tons copper, 560 million metric tons iron, 8,100 metric tons mercury, 500,000 metric tons molybdenum, 150 metric tons lead, and 17 metric tons of platinum in undiscovered deposits of the eight quantified deposit types. At the 90th percentile probability level, the Bay RMP area is estimated to contain 89 metric tons silver, 14 metric tons gold, 911,215 metric tons copper, 330,000 metric tons iron, 1 metric ton mercury, 8,600 metric tons molybdenum, and 1 metric ton platinum in undiscovered deposits of the eight deposit types. Other commodities that may occur in the Bay RMP area include Cr, Sn, W, Zn, and other platinum-group elements such as Ir, Os, and Pd. We define 13 permissive tracts for 9 additional deposit model types: Besshi-, Cyprus-, and Kuroko-type volcanogenic massive sulfide, hot-spring gold, low-sulfide gold vein, Mississippi Valley-type Pb-Zn, tin greisen, zinc skarn, and Alaskan-type zoned ultramafic platinum-group element deposits. Resources in undiscovered deposits of these nine types have not been quantified and would be in addition to those in known deposits and the undiscovered resources listed above. Additional mineral resources may also occur in the Bay RMP area in deposit types that were not considered here.
Application of pedagogy reflective in statistical methods course and practicum statistical methods
NASA Astrophysics Data System (ADS)
Julie, Hongki
2017-08-01
The courses Elementary Statistics, Statistical Methods, and Statistical Methods Practicum aim to equip Mathematics Education students with descriptive and inferential statistics. An understanding of descriptive and inferential statistics is important for students in the Mathematics Education Department, especially those whose final project involves quantitative research. In quantitative research, students are required to present and describe quantitative data appropriately, to draw conclusions from those data, and to establish relationships between the independent and dependent variables defined in their research. In practice, students working on quantitative final projects still frequently made mistakes in drawing conclusions and chose inappropriate hypothesis-testing procedures, and as a result reached incorrect conclusions, a serious failing in quantitative research. The implementation of reflective pedagogy in the teaching and learning process of the Statistical Methods and Statistical Methods Practicum courses yielded the following outcomes: 1. Twenty-two students passed the course and one student did not. 2. The highest grade, A, was achieved by 18 students. 3. All students reported that the course developed their critical stance. 4. All students agreed that, through the learning process they underwent in the course, they built a sense of caring for one another.
A Quantitative Methodology to Examine the Development of Moral Judgment
ERIC Educational Resources Information Center
Buchanan, James P.; Thompson, Spencer K.
1973-01-01
Unlike Piaget's clinical procedure, the experiment's methodology allowed substantiation of the ability of children to simultaneously weigh damage and intent information when making a moral judgment. Other advantages of this quantitative methodology are also presented. (Authors)
A quantitative risk-based model for reasoning over critical system properties
NASA Technical Reports Server (NTRS)
Feather, M. S.
2002-01-01
This position paper suggests the use of a quantitative risk-based model to help support reasoning and decision making that spans many of the critical properties, such as security, safety, survivability, fault tolerance, and real-time behavior.
Chow, Steven Kwok Keung; Yeung, David Ka Wai; Ahuja, Anil T; King, Ann D
2012-01-01
Purpose To quantitatively evaluate kinetic parameter estimation for head and neck (HN) dynamic contrast-enhanced (DCE) MRI with dual-flip-angle (DFA) T1 mapping. Materials and methods Clinical DCE-MRI datasets of 23 patients with HN tumors were included in this study. T1 maps were generated based on the multiple-flip-angle (MFA) method and different DFA combinations. Tofts model parameter maps of kep, Ktrans and vp based on MFA and DFAs were calculated and compared. Fitted parameters from MFA and DFAs were quantitatively evaluated in primary tumor, salivary gland and muscle. Results T1 mapping deviations from DFAs produced considerable kinetic parameter estimation deviations in head and neck tissues. In particular, the DFA of [2°, 7°] overestimated, while [7°, 12°] and [7°, 15°] underestimated, Ktrans and vp significantly (P<0.01). [2°, 15°] achieved the smallest, but still statistically significant, overestimation of Ktrans and vp in primary tumors, 32.1% and 16.2% respectively. kep fitting results from DFAs were relatively close to the MFA reference compared to Ktrans and vp. Conclusions T1 deviations induced by DFA can result in significant errors in kinetic parameter estimation, particularly of Ktrans and vp, through Tofts model fitting. The MFA method should be more reliable and robust for accurate quantitative pharmacokinetic analysis in the head and neck. PMID:23289084
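For readers unfamiliar with dual-flip-angle T1 estimation, the sketch below shows the standard spoiled gradient-echo (DESPOT1-style) linearization that recovers T1 from two flip-angle measurements; the TR, flip angles, and signal values are illustrative assumptions and this is not the authors' processing pipeline:

```python
import numpy as np

def t1_from_two_flip_angles(s1, s2, a1_deg, a2_deg, tr_ms):
    """Estimate T1 from two SPGR signals using the linearized model
    S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), with E1 = exp(-TR/T1)."""
    a1, a2 = np.deg2rad([a1_deg, a2_deg])
    x = np.array([s1 / np.tan(a1), s2 / np.tan(a2)])
    y = np.array([s1 / np.sin(a1), s2 / np.sin(a2)])
    e1 = (y[1] - y[0]) / (x[1] - x[0])   # slope of the two-point line
    return -tr_ms / np.log(e1)

# Forward-simulate signals for a tissue with T1 = 1000 ms, then invert.
tr, t1_true, m0 = 5.0, 1000.0, 1.0
e1 = np.exp(-tr / t1_true)
signal = lambda a: m0 * np.sin(np.deg2rad(a)) * (1 - e1) / (1 - e1 * np.cos(np.deg2rad(a)))
print(t1_from_two_flip_angles(signal(2), signal(15), 2, 15, tr))  # ~1000 ms
```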
Yang, Jianhong; Li, Xiaomeng; Xu, Jinwu; Ma, Xianghong
2018-01-01
The quantitative analysis accuracy of calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is severely affected by the self-absorption effect and estimation of plasma temperature. Herein, a CF-LIBS quantitative analysis method based on the auto-selection of internal reference line and the optimized estimation of plasma temperature is proposed. The internal reference line of each species is automatically selected from analytical lines by a programmable procedure through easily accessible parameters. Furthermore, the self-absorption effect of the internal reference line is considered during the correction procedure. To improve the analysis accuracy of CF-LIBS, the particle swarm optimization (PSO) algorithm is introduced to estimate the plasma temperature based on the calculation results from the Boltzmann plot. Thereafter, the species concentrations of a sample can be calculated according to the classical CF-LIBS method. A total of 15 certified alloy steel standard samples of known compositions and elemental weight percentages were used in the experiment. Using the proposed method, the average relative errors of Cr, Ni, and Fe calculated concentrations were 4.40%, 6.81%, and 2.29%, respectively. The quantitative results demonstrated an improvement compared with the classical CF-LIBS method and the promising potential of in situ and real-time application.
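As background to the temperature step, a Boltzmann plot estimates the plasma temperature from the slope of ln(Iλ/gA) against upper-level energy; the line data below are simulated for illustration, and in the paper the temperature from this plot is further refined with particle swarm optimization rather than taken directly:

```python
import numpy as np

k_B = 8.617e-5  # Boltzmann constant in eV/K

# Illustrative atomic line parameters for one species: wavelength (nm),
# degeneracy g, transition probability A (s^-1), upper-level energy E (eV).
lam = np.array([404.6, 427.2, 438.3, 452.9])
g = np.array([9.0, 7.0, 5.0, 3.0])
A = np.array([8.0e7, 5.0e7, 3.0e7, 2.0e7])
E = np.array([3.0, 3.4, 3.9, 4.4])

# Simulate line intensities for a plasma at 10 000 K (arbitrary overall scale).
T_true = 10000.0
I = g * A / lam * np.exp(-E / (k_B * T_true))

# Boltzmann plot: ln(I*lam/(g*A)) = -E/(k_B*T) + const, so T follows from the slope.
y = np.log(I * lam / (g * A))
slope, _ = np.polyfit(E, y, 1)
print(f"recovered temperature: {-1.0 / (k_B * slope):.0f} K")  # ~10000 K
```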
An approach to and web-based tool for infectious disease outbreak intervention analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daughton, Ashlynn R.; Generous, Nicholas; Priedhorsky, Reid
Infectious diseases are a leading cause of death globally. Decisions surrounding how to control an infectious disease outbreak currently rely on a subjective process involving surveillance and expert opinion. However, there are many situations where neither may be available. Modeling can fill gaps in the decision making process by using available data to provide quantitative estimates of outbreak trajectories. Effective reduction of the spread of infectious diseases can be achieved through collaboration between the modeling community and the public health policy community. However, such collaboration is rare, resulting in a lack of models that meet the needs of the public health community. Here we show a Susceptible-Infectious-Recovered (SIR) model modified to include control measures that allows parameter ranges, rather than parameter point estimates, and includes a web user interface for broad adoption. We apply the model to three diseases, measles, norovirus and influenza, to show the feasibility of its use and describe a research agenda to further promote interactions between decision makers and the modeling community.
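A minimal sketch of an SIR model with a control term and a swept parameter range, in the spirit of the tool described above (the specific control formulation, parameter values, and population fractions are illustrative assumptions, not the authors' implementation):

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_with_control(t, y, beta, gamma, control):
    """SIR dynamics; 'control' scales down transmission (0 = none, 1 = full)."""
    s, i, r = y
    new_infections = beta * (1.0 - control) * s * i
    return [-new_infections, new_infections - gamma * i, gamma * i]

t_span, y0 = (0, 120), [0.999, 0.001, 0.0]   # fractions of the population
gamma = 1 / 7.0                               # ~7-day infectious period

# Sweep a range for beta instead of relying on a single point estimate.
for beta in np.linspace(0.2, 0.4, 3):
    for control in (0.0, 0.3):
        sol = solve_ivp(sir_with_control, t_span, y0,
                        args=(beta, gamma, control), dense_output=True)
        peak = sol.y[1].max()
        print(f"beta={beta:.2f} control={control:.1f} peak infectious fraction={peak:.3f}")
```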
Deep-Water Acoustic Anomalies from Methane Hydrate in the Bering Sea
Wood, Warren T.; Barth, Ginger A.; Scholl, David W.; Lebedeva-Ivanova, Nina
2015-01-01
A recent expedition to the central Bering Sea, one of the most remote locations in the world, has yielded observations confirming gas and gas hydrates in this deep ocean basin. Significant sound speed anomalies found using inversion of pre-stack seismic data are observed in association with variable seismic amplitude anomalies in the thick sediment column. The anomalously low sound speeds below the inferred base of methane hydrate stability indicate the presence of potentially large quantities of gas-phase methane associated with each velocity-amplitude anomaly (VAMP). The data acquired are of such high quality that quantitative estimates of the concentrations of gas hydrates in the upper few hundred meters of sediment are also possible, and analyses are under way to make these estimates. Several VAMPs were specifically targeted in this survey; others were crossed incidentally. Indications of many dozens or hundreds of these features exist throughout the portion of the Bering Sea relevant to the U.S. extended continental shelf (ECS) consistent with the United Nations Convention on the Law of the Sea.
Emerging Tools to Estimate and to Predict Exposures to ...
The timely assessment of the human and ecological risk posed by thousands of existing and emerging commercial chemicals is a critical challenge facing EPA in its mission to protect public health and the environment. The US EPA has been conducting research to enhance methods used to estimate and forecast exposures for tens of thousands of chemicals. This research is aimed at both assessing risks and supporting life cycle analysis, by developing new models and tools for high throughput exposure screening and prioritization, as well as databases that support these and other tools, especially regarding consumer products. The models and data address usage, and take advantage of quantitative structure-activity relationships (QSARs) for both inherent chemical properties and function (why the chemical is a product ingredient). To make them more useful and widely available, the new tools, data and models are designed to be: flexible; interoperable; modular (useful to more than one stand-alone application); and open (publicly available software). Presented at the Society for Risk Analysis Forum: Risk Governance for Key Enabling Technologies, Venice, Italy, March 1-3, 2017
Accumulator and random-walk models of psychophysical discrimination: a counter-evaluation.
Vickers, D; Smith, P
1985-01-01
In a recent assessment of models of psychophysical discrimination, Heath criticises the accumulator model for its reliance on computer simulation and qualitative evidence, and contrasts it unfavourably with a modified random-walk model, which yields exact predictions, is susceptible to critical test, and is provided with simple parameter-estimation techniques. A counter-evaluation is presented, in which the approximations employed in the modified random-walk analysis are demonstrated to be seriously inaccurate, the resulting parameter estimates to be artefactually determined, and the proposed test not critical. It is pointed out that Heath's specific application of the model is not legitimate, his data treatment inappropriate, and his hypothesis concerning confidence inconsistent with experimental results. Evidence from adaptive performance changes is presented which shows that the necessary assumptions for quantitative analysis in terms of the modified random-walk model are not satisfied, and that the model can be reconciled with data at the qualitative level only by making it virtually indistinguishable from an accumulator process. A procedure for deriving exact predictions for an accumulator process is outlined.
An approach to and web-based tool for infectious disease outbreak intervention analysis
Daughton, Ashlynn R.; Generous, Nicholas; Priedhorsky, Reid; ...
2017-04-18
Infectious diseases are a leading cause of death globally. Decisions surrounding how to control an infectious disease outbreak currently rely on a subjective process involving surveillance and expert opinion. However, there are many situations where neither may be available. Modeling can fill gaps in the decision making process by using available data to provide quantitative estimates of outbreak trajectories. Effective reduction of the spread of infectious diseases can be achieved through collaboration between the modeling community and the public health policy community. However, such collaboration is rare, resulting in a lack of models that meet the needs of the public health community. Here we show a Susceptible-Infectious-Recovered (SIR) model modified to include control measures that allows parameter ranges, rather than parameter point estimates, and includes a web user interface for broad adoption. We apply the model to three diseases, measles, norovirus and influenza, to show the feasibility of its use and describe a research agenda to further promote interactions between decision makers and the modeling community.
Temporal processing dysfunction in schizophrenia.
Carroll, Christine A; Boggs, Jennifer; O'Donnell, Brian F; Shekhar, Anantha; Hetrick, William P
2008-07-01
Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Despite the growing interest and centrality of time-dependent conceptualizations of the pathophysiology of schizophrenia, there remains a paucity of research directly examining overt timing performance in the disorder. Accordingly, the present study investigated timing in schizophrenia using a well-established task of time perception. Twenty-three individuals with schizophrenia and 22 non-psychiatric control participants completed a temporal bisection task, which required participants to make temporal judgments about auditory and visually presented durations ranging from 300 to 600 ms. Both schizophrenia and control groups displayed greater visual compared to auditory timing variability, with no difference between groups in the visual modality. However, individuals with schizophrenia exhibited less temporal precision than controls in the perception of auditory durations. These findings correlated with parameter estimates obtained from a quantitative model of time estimation, and provide evidence of a fundamental deficit in temporal auditory precision in schizophrenia.
Analysis of Membrane Lipids of Airborne Micro-Organisms
NASA Technical Reports Server (NTRS)
MacNaughton, Sarah
2006-01-01
A method of characterization of airborne micro-organisms in a given location involves (1) large-volume filtration of air onto glass-fiber filters; (2) accelerated extraction of membrane lipids of the collected micro-organisms by use of pressurized hot liquid; and (3) identification and quantitation of the lipids by use of gas chromatography and mass spectrometry. This method is suitable for use in both outdoor and indoor environments; for example, it can be used to measure airborne microbial contamination in buildings ("sick-building syndrome"). The classical approach to analysis of airborne micro-organisms is based on the growth of cultureable micro-organisms and does not provide an account of viable but noncultureable micro-organisms, which typically amount to more than 90 percent of the micro-organisms present. In contrast, the present method provides an account of all micro-organisms, including cultureable, noncultureable, aerobic, and anaerobic ones. The analysis of lipids according to this method makes it possible to estimate the number of viable airborne micro-organisms present in the sampled air and to obtain a quantitative profile of the general types of micro-organisms present along with some information about their physiological statuses.
3D TOCSY-HSQC NMR for metabolic flux analysis using non-uniform sampling
Reardon, Patrick N.; Marean-Reardon, Carrie L.; Bukovec, Melanie A.; ...
2016-02-05
13C-Metabolic Flux Analysis (13C-MFA) is rapidly being recognized as the authoritative method for determining fluxes through metabolic networks. Site-specific 13C enrichment information obtained using NMR spectroscopy is a valuable input for 13C-MFA experiments. Chemical shift overlaps in the 1D or 2D NMR experiments typically used for 13C-MFA frequently hinder assignment and quantitation of site-specific 13C enrichment. Here we propose the use of a 3D TOCSY-HSQC experiment for 13C-MFA. We employ Non-Uniform Sampling (NUS) to reduce the acquisition time of the experiment to a few hours, making it practical for use in 13C-MFA experiments. Our data show that the NUS experiment is linear and quantitative. Identification of metabolites in complex mixtures, such as a biomass hydrolysate, is simplified by virtue of the 13C chemical shift obtained in the experiment. In addition, the experiment reports 13C-labeling information that reveals the position-specific labeling of subsets of isotopomers. As a result, the information provided by this technique will enable more accurate estimation of metabolic fluxes in larger metabolic networks.
Contributed Review: Nuclear magnetic resonance core analysis at 0.3 T
NASA Astrophysics Data System (ADS)
Mitchell, Jonathan; Fordham, Edmund J.
2014-11-01
Nuclear magnetic resonance (NMR) provides a powerful toolbox for petrophysical characterization of reservoir core plugs and fluids in the laboratory. Previously, there has been considerable focus on low field magnet technology for well log calibration. Now there is renewed interest in the study of reservoir samples using stronger magnets to complement these standard NMR measurements. Here, the capabilities of an imaging magnet with a field strength of 0.3 T (corresponding to 12.9 MHz for proton) are reviewed in the context of reservoir core analysis. Quantitative estimates of porosity (saturation) and pore size distributions are obtained under favorable conditions (e.g., in carbonates), with the added advantage of multidimensional imaging, detection of lower gyromagnetic ratio nuclei, and short probe recovery times that make the system suitable for shale studies. Intermediate field instruments provide quantitative porosity maps of rock plugs that cannot be obtained using high field medical scanners due to the field-dependent susceptibility contrast in the porous medium. Example data are presented that highlight the potential applications of an intermediate field imaging instrument as a complement to low field instruments in core analysis and for materials science studies in general.
Quality of Life and Aesthetic Plastic Surgery: A Systematic Review and Meta-analysis.
Dreher, Rodrigo; Blaya, Carolina; Tenório, Juliana L C; Saltz, Renato; Ely, Pedro B; Ferrão, Ygor A
2016-09-01
Quality of life (QoL) is an important outcome in plastic surgery. However, authors use different scales to address this subject, making it difficult to compare outcomes. To address this discrepancy, the aim of this study was to perform a systematic review and a random-effects meta-analysis. The search was made in two electronic databases (LILACS and PUBMED) using MeSH and non-MeSH terms related to aesthetic plastic surgery and QoL. We performed qualitative and quantitative analyses of the gathered data. We calculated a random-effects meta-analysis with the DerSimonian and Laird variance estimator to compare pre- and postoperative QoL standardized mean differences. To check whether there is a difference between aesthetic surgeries, we compared reduction mammoplasty to other aesthetic surgeries. Of the 1,715 studies identified, 20 were included in the qualitative analysis and 16 went through quantitative analysis. The random-effects estimate across all aesthetic surgeries shows that QoL improved after surgery. Reduction mammoplasty improved QoL more than other procedures in the social functioning and physical functioning domains. Aesthetic plastic surgery increases QoL. Reduction mammoplasty seems to show better improvement compared with other aesthetic surgeries.
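For reference, the sketch below shows how a DerSimonian-Laird random-effects pooled standardized mean difference is computed from per-study effect sizes and variances; the input numbers are invented for illustration and do not come from the review:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool effect sizes with the DerSimonian-Laird random-effects estimator."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                    # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    w_star = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical pre/post QoL standardized mean differences and their variances.
smd = [0.45, 0.80, 0.30, 0.60, 0.95]
var = [0.04, 0.09, 0.02, 0.05, 0.12]
print(dersimonian_laird(smd, var))
```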
Biji, C P; Sudheendrakumar, V V; Sajeev, T V
2006-09-01
Hyblaea puera nucleopolyhedrovirus (HpNPV) is a potential biocontrol agent of the teak defoliator, Hyblaea puera (Cramer) (Lepidoptera: Hyblaeidae). To quantify the growth of the virus in host larvae, three larval stages of the teak defoliator were subjected to quantitative bioassays using specified dilutions of HpNPV. HpNPV production was found to depend on the dose, the incubation period, and stage-specific responses of the host insect. As larvae matured, virus production per mg body weight was not in constant proportion to the increase in body weight. The combination that yielded the greatest virus production, 3.55 x 10(9) polyhedral occlusion bodies (POBs), was a larva weighing 26-37 mg fed 1 x 10(6) POBs, incubated for 6 h, and harvested at 72 h post infection (h p.i.). The fourth instar larvae were more productive than the third and fifth instars, which makes them an ideal candidate for mass production of the virus in vivo.
NASA Astrophysics Data System (ADS)
Rossini, L.; Khan, A.; Del Alamo, J. C.; Martinez-Legazpi, P.; Pérez Del Villar, C.; Benito, Y.; Yotti, R.; Barrio, A.; Delgado-Montero, A.; Gonzalez-Mansilla, A.; Fernandez-Avilés, F.; Bermejo, J.
2016-11-01
Left ventricular thrombosis (LVT) is a major complication of acute myocardial infarction (AMI). In these patients, the benefits of chronic anticoagulation therapy need to be balanced with its pro-hemorrhagic effects. Blood stasis in the cardiac chambers, a risk factor for LVT, is not addressed in current clinical practice. We recently developed a method to quantitatively assess the blood residence time (RT) inside the left ventricle (LV) based on 2D color-Doppler velocimetry (echo-CDV). Using time-resolved blood velocity fields acquired non-invasively, we integrate a modified advection equation to map intraventricular stasis regions. Here, we present how this tool can be used to estimate the risk of LVT in patients with AMI. 73 patients with a first anterior-AMI were studied by echo-CDV and RT analysis within 72h from admission and at a 5-month follow-up. Patients who eventually develop LVT showed early abnormalities of intraventricular RT: the apical region with RT>2s was significantly larger, had a higher RT and a longer wall contact length. Thus, quantitative analysis of intraventricular flow based on echocardiography may provide subclinical markers of LV thrombosis risk to guide clinical decision making.
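To illustrate the residence-time idea, the sketch below integrates a forced advection equation dR/dt + v·grad(R) = 1 on a small 2D grid with a first-order upwind scheme and a made-up steady velocity field; the boundary handling, the velocity data, and the grid are simplifications, not the echo-CDV processing used in the study:

```python
import numpy as np

nx = ny = 64
dx = dy = 1.0 / nx
dt, n_steps = 2e-3, 2000

# Made-up rotational velocity field standing in for Doppler-derived LV flow.
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny), indexing="ij")
u = -(y - 0.5)
v = (x - 0.5)

R = np.zeros((nx, ny))  # residence time field, seconds
for _ in range(n_steps):
    # First-order upwind gradients of R.
    dR_dx_back = (R - np.roll(R, 1, axis=0)) / dx
    dR_dx_fwd = (np.roll(R, -1, axis=0) - R) / dx
    dR_dy_back = (R - np.roll(R, 1, axis=1)) / dy
    dR_dy_fwd = (np.roll(R, -1, axis=1) - R) / dy
    adv = (np.maximum(u, 0) * dR_dx_back + np.minimum(u, 0) * dR_dx_fwd +
           np.maximum(v, 0) * dR_dy_back + np.minimum(v, 0) * dR_dy_fwd)
    R = R + dt * (1.0 - adv)   # the source term of 1 makes R accumulate time
    R[0, :] = R[-1, :] = R[:, 0] = R[:, -1] = 0.0   # crude boundary reset

print(f"largest residence time after {n_steps*dt:.1f} s: {R.max():.2f} s")
```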
Fast and Accurate Learning When Making Discrete Numerical Estimates.
Sanborn, Adam N; Beierholm, Ulrik R
2016-04-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates.
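The two decision functions contrasted in the paper can be written down compactly; the toy discrete posterior below is an invented example, only meant to show the difference between posterior sampling and taking the posterior maximum:

```python
import numpy as np

rng = np.random.default_rng(0)
values = np.arange(1, 21)                       # possible discrete responses

# Toy bimodal posterior over the discrete values (illustrative only).
posterior = np.exp(-0.5 * ((values - 6) / 1.5) ** 2) + \
            0.7 * np.exp(-0.5 * ((values - 14) / 1.5) ** 2)
posterior /= posterior.sum()

map_response = values[np.argmax(posterior)]                   # posterior maximum
sampled_responses = rng.choice(values, size=10, p=posterior)  # posterior sampling
print("MAP response:", map_response)
print("sampled responses:", sampled_responses)
```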
Fast and Accurate Learning When Making Discrete Numerical Estimates
Sanborn, Adam N.; Beierholm, Ulrik R.
2016-01-01
Many everyday estimation tasks have an inherently discrete nature, whether the task is counting objects (e.g., a number of paint buckets) or estimating discretized continuous variables (e.g., the number of paint buckets needed to paint a room). While Bayesian inference is often used for modeling estimates made along continuous scales, discrete numerical estimates have not received as much attention, despite their common everyday occurrence. Using two tasks, a numerosity task and an area estimation task, we invoke Bayesian decision theory to characterize how people learn discrete numerical distributions and make numerical estimates. Across three experiments with novel stimulus distributions we found that participants fell between two common decision functions for converting their uncertain representation into a response: drawing a sample from their posterior distribution and taking the maximum of their posterior distribution. While this was consistent with the decision function found in previous work using continuous estimation tasks, surprisingly the prior distributions learned by participants in our experiments were much more adaptive: When making continuous estimates, participants have required thousands of trials to learn bimodal priors, but in our tasks participants learned discrete bimodal and even discrete quadrimodal priors within a few hundred trials. This makes discrete numerical estimation tasks good testbeds for investigating how people learn and make estimates. PMID:27070155
Quantitative estimation of film forming polymer-plasticizer interactions by the Lorentz-Lorenz Law.
Dredán, J; Zelkó, R; Dávid, A Z; Antal, I
2006-03-09
Molar refraction, like the refractive index, has many uses. Beyond confirming the identity and purity of a compound and aiding the determination of molecular structure and molecular weight, molar refraction is used in other estimation schemes, for example for critical properties, surface tension, solubility parameter, molecular polarizability, and dipole moment. In the present study, molar refraction values of polymer dispersions were determined for the quantitative estimation of film-forming polymer-plasticizer interactions. Information on the extent of interaction between the polymer and the plasticizer can be obtained from the calculated molar refraction values of film-forming polymer dispersions containing plasticizer.
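As a reminder of the underlying relation, the Lorentz-Lorenz molar refraction is R = (n² − 1)/(n² + 2) · M/ρ; the short sketch below evaluates it for water as a sanity check (the numbers are textbook values, not data from the study):

```python
def molar_refraction(n, molar_mass_g_mol, density_g_cm3):
    """Lorentz-Lorenz molar refraction in cm^3/mol."""
    return (n**2 - 1.0) / (n**2 + 2.0) * molar_mass_g_mol / density_g_cm3

# Water at ~20 C: n = 1.333, M = 18.015 g/mol, rho = 0.998 g/cm^3.
print(f"{molar_refraction(1.333, 18.015, 0.998):.2f} cm^3/mol")  # ~3.7
```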
A quantitative approach to combine sources in stable isotope mixing models
Stable isotope mixing models, used to estimate source contributions to a mixture, typically yield highly uncertain estimates when there are many sources and relatively few isotope elements. Previously, ecologists have either accepted the uncertain contribution estimates for indiv...
Johnston, Iain G; Rickett, Benjamin C; Jones, Nick S
2014-12-02
Back-of-the-envelope or rule-of-thumb calculations involving rough estimates of quantities play a central scientific role in developing intuition about the structure and behavior of physical systems, for example in so-called Fermi problems in the physical sciences. Such calculations can be used to powerfully and quantitatively reason about biological systems, particularly at the interface between physics and biology. However, substantial uncertainties are often associated with values in cell biology, and performing calculations without taking this uncertainty into account may limit the extent to which results can be interpreted for a given problem. We present a means to facilitate such calculations where uncertainties are explicitly tracked through the line of reasoning, and introduce a probabilistic calculator called CALADIS, a free web tool, designed to perform this tracking. This approach allows users to perform more statistically robust calculations in cell biology despite having uncertain values, and to identify which quantities need to be measured more precisely to make confident statements, facilitating efficient experimental design. We illustrate the use of our tool for tracking uncertainty in several example biological calculations, showing that the results yield powerful and interpretable statistics on the quantities of interest. We also demonstrate that the outcomes of calculations may differ from point estimates when uncertainty is accurately tracked. An integral link between CALADIS and the BioNumbers repository of biological quantities further facilitates the straightforward location, selection, and use of a wealth of experimental data in cell biological calculations. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
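The core idea of tracking uncertainty through a back-of-the-envelope calculation can be reproduced with a simple Monte Carlo sketch; the toy calculation below (protein copies per cell from an uncertain concentration and cell volume) and its distributions are invented for illustration and are not CALADIS itself:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Uncertain inputs, expressed as distributions rather than point estimates.
concentration_uM = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n)   # ~1 uM
cell_volume_fL = rng.normal(loc=30.0, scale=5.0, size=n)                # ~30 fL

# Copies per cell = concentration * volume * Avogadro's number.
copies = concentration_uM * 1e-6 * cell_volume_fL * 1e-15 * 6.022e23

lo, med, hi = np.percentile(copies, [2.5, 50, 97.5])
print(f"median ~{med:.0f} copies, 95% interval [{lo:.0f}, {hi:.0f}]")
```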
QVAST: a new Quantum GIS plugin for estimating volcanic susceptibility
NASA Astrophysics Data System (ADS)
Bartolini, S.; Cappello, A.; Martí, J.; Del Negro, C.
2013-11-01
One of the most important tasks of modern volcanology is the construction of hazard maps simulating different eruptive scenarios that can be used in risk-based decision making in land-use planning and emergency management. The first step in the quantitative assessment of volcanic hazards is the development of susceptibility maps (i.e., the spatial probability of a future vent opening given the past eruptive activity of a volcano). This challenging issue is generally tackled using probabilistic methods that use the calculation of a kernel function at each data location to estimate probability density functions (PDFs). The smoothness and the modeling ability of the kernel function are controlled by the smoothing parameter, also known as the bandwidth. Here we present a new tool, QVAST, part of the open-source geographic information system Quantum GIS, which is designed to create user-friendly quantitative assessments of volcanic susceptibility. QVAST allows the selection of an appropriate method for evaluating the bandwidth for the kernel function on the basis of the input parameters and the shapefile geometry, and can also evaluate the PDF with the Gaussian kernel. When different input data sets are available for the area, the total susceptibility map is obtained by assigning different weights to each of the PDFs, which are then combined via a weighted summation and modeled in a non-homogeneous Poisson process. The potential of QVAST, developed in a free and user-friendly environment, is here shown through its application in the volcanic fields of Lanzarote (Canary Islands) and La Garrotxa (NE Spain).
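A stripped-down version of the susceptibility workflow, kernel density estimation of vent locations followed by a weighted combination of two PDFs, can be sketched as follows; the vent coordinates and weights are synthetic, and SciPy's default bandwidth rule stands in for the bandwidth-selection methods offered by QVAST:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Synthetic past vent locations (km) from two hypothetical data sets.
vents_a = rng.normal(loc=[10, 20], scale=2.0, size=(40, 2)).T
vents_b = rng.normal(loc=[14, 18], scale=3.0, size=(25, 2)).T

kde_a, kde_b = gaussian_kde(vents_a), gaussian_kde(vents_b)

# Evaluate each PDF on a grid and combine them with user-chosen weights.
xx, yy = np.mgrid[0:30:200j, 5:35:200j]
grid = np.vstack([xx.ravel(), yy.ravel()])
w_a, w_b = 0.7, 0.3
susceptibility = w_a * kde_a(grid) + w_b * kde_b(grid)
susceptibility /= susceptibility.sum()            # spatial probability map
print("most susceptible cell:", grid[:, np.argmax(susceptibility)])
```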
NASA Astrophysics Data System (ADS)
Bruns, S.; Stipp, S. L. S.; Sørensen, H. O.
2017-09-01
Digital rock physics carries the dogmatic concept of having to segment volume images for quantitative analysis but segmentation rejects huge amounts of signal information. Information that is essential for the analysis of difficult and marginally resolved samples, such as materials with very small features, is lost during segmentation. In X-ray nanotomography reconstructions of Hod chalk we observed partial volume voxels with an abundance that limits segmentation based analysis. Therefore, we investigated the suitability of greyscale analysis for establishing statistical representative elementary volumes (sREV) for the important petrophysical parameters of this type of chalk, namely porosity, specific surface area and diffusive tortuosity, by using volume images without segmenting the datasets. Instead, grey level intensities were transformed to a voxel level porosity estimate using a Gaussian mixture model. A simple model assumption was made that allowed formulating a two point correlation function for surface area estimates using Bayes' theory. The same assumption enables random walk simulations in the presence of severe partial volume effects. The established sREVs illustrate that in compacted chalk, these simulations cannot be performed in binary representations without increasing the resolution of the imaging system to a point where the spatial restrictions of the represented sample volume render the precision of the measurement unacceptable. We illustrate this by analyzing the origins of variance in the quantitative analysis of volume images, i.e. resolution dependence and intersample and intrasample variance. Although we cannot make any claims on the accuracy of the approach, eliminating the segmentation step from the analysis enables comparative studies with higher precision and repeatability.
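A hedged sketch of the greyscale-to-porosity idea: fit a two-component Gaussian mixture to the voxel intensity histogram to locate the pure-pore and pure-grain peaks, then map each intensity linearly between the two means as a voxel-level porosity estimate. The synthetic intensities and the simple two-phase mapping are simplifications, not the exact model used for the chalk data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

# Synthetic greyscale volume: pore peak, grain peak, and partial-volume voxels.
pore = rng.normal(60, 8, 40_000)
grain = rng.normal(160, 10, 50_000)
partial = rng.uniform(60, 160, 10_000)
intensities = np.concatenate([pore, grain, partial]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
mu_pore, mu_grain = np.sort(gmm.means_.ravel())   # darker peak = pore phase

# Linear partial-volume mapping: 1 at the pore peak, 0 at the grain peak.
voxel_porosity = np.clip((mu_grain - intensities.ravel()) / (mu_grain - mu_pore), 0, 1)
print(f"estimated mean porosity: {voxel_porosity.mean():.3f}")
```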
Steventon, Adam; Grieve, Richard; Bardsley, Martin
2015-11-01
Policy makers require estimates of comparative effectiveness that apply to the population of interest, but there has been little research on quantitative approaches to assess and extend the generalizability of randomized controlled trial (RCT)-based evaluations. We illustrate an approach using observational data. Our example is the Whole Systems Demonstrator (WSD) trial, in which 3230 adults with chronic conditions were assigned to receive telehealth or usual care. First, we used novel placebo tests to assess whether outcomes were similar between the RCT control group and a matched subset of nonparticipants who received usual care. We matched on 65 baseline variables obtained from the electronic medical record. Second, we conducted sensitivity analysis to consider whether the estimates of treatment effectiveness were robust to alternative assumptions about whether "usual care" is defined by the RCT control group or nonparticipants. Thus, we provided alternative estimates of comparative effectiveness by contrasting the outcomes of the RCT telehealth group and matched nonparticipants. For some endpoints, such as the number of outpatient attendances, the placebo tests passed, and the effectiveness estimates were robust to the choice of comparison group. However, for other endpoints, such as emergency admissions, the placebo tests failed and the estimates of treatment effect differed markedly according to whether telehealth patients were compared with RCT controls or matched nonparticipants. The proposed placebo tests indicate those cases when estimates from RCTs do not generalize to routine clinical practice and motivate complementary estimates of comparative effectiveness that use observational data. Future RCTs are recommended to incorporate these placebo tests and the accompanying sensitivity analyses to enhance their relevance to policy making. © The Author(s) 2015.
Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina
2016-09-01
Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. (©) RSNA, 2016 Online supplemental material is available for this article.
Chen, Lin; Ray, Shonket; Keller, Brad M.; Pertuz, Said; McDonald, Elizabeth S.; Conant, Emily F.
2016-01-01
Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88–0.95; weighted κ = 0.83–0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76–0.92, P < .05). Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016 Online supplemental material is available for this article. PMID:27002418
A general way for quantitative magnetic measurement by transmitted electrons
NASA Astrophysics Data System (ADS)
Song, Dongsheng; Li, Gen; Cai, Jianwang; Zhu, Jing
2016-01-01
EMCD (electron magnetic circular dichroism) opens a new door to exploring magnetic properties with transmitted electrons. The recently developed site-specific EMCD technique makes it possible to obtain rich magnetic information from the Fe atoms located on nonequivalent crystallographic planes in NiFe2O4; however, it places strict demands on the crystallographic structure of the sample under test. Here, we have further improved and tested the method for quantitative site-specific magnetic measurement so that it is applicable to more complex crystallographic structures, by exploiting dynamical diffraction effects (a general routine for selecting proper diffraction conditions, use of the asymmetry of dynamical diffraction in designing the experimental geometry and in quantitative measurement, etc.), and we take yttrium iron garnet (Y3Fe5O12, YIG), with its more complex crystallographic structure, as an example to demonstrate the method's applicability. As a result, the intrinsic magnetic circular dichroism signals and the site-specific spin and orbital magnetic moments of iron are quantitatively determined. The method will further promote the development of quantitative magnetic measurement with high spatial resolution by transmitted electrons.
Yap, John Stephen; Fan, Jianqing; Wu, Rongling
2009-12-01
Estimation of the covariance structure of longitudinal processes is a fundamental prerequisite for the practical deployment of functional mapping designed to study the genetic regulation and network of quantitative variation in dynamic complex traits. We present a nonparametric approach for estimating the covariance structure of a quantitative trait measured repeatedly at a series of time points. Specifically, we adopt Huang et al.'s (2006, Biometrika 93, 85-98) approach of invoking the modified Cholesky decomposition and converting the problem into modeling a sequence of regressions of responses. A regularized covariance estimator is obtained using a normal penalized likelihood with an L(2) penalty. This approach, embedded within a mixture likelihood framework, leads to enhanced accuracy, precision, and flexibility of functional mapping while preserving its biological relevance. Simulation studies are performed to reveal the statistical properties and advantages of the proposed method. A real example from a mouse genome project is analyzed to illustrate the utilization of the methodology. The new method will provide a useful tool for genome-wide scanning for the existence and distribution of quantitative trait loci underlying a dynamic trait important to agriculture, biology, and health sciences.
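For intuition, the approach of turning covariance estimation into a sequence of penalized regressions via the modified Cholesky decomposition can be sketched as below; the ridge-style L2 penalty, the synthetic longitudinal data, and the fixed penalty weight are illustrative assumptions rather than the authors' full mixture-likelihood implementation:

```python
import numpy as np

def regularized_covariance(Y, penalty=1.0):
    """Estimate a covariance matrix via the modified Cholesky decomposition:
    regress each time point on its predecessors with an L2 penalty, then
    rebuild Sigma from the unit lower-triangular T and innovation variances D."""
    n, T_len = Y.shape
    Yc = Y - Y.mean(axis=0)
    T = np.eye(T_len)
    d = np.empty(T_len)
    d[0] = Yc[:, 0].var()
    for t in range(1, T_len):
        X, y = Yc[:, :t], Yc[:, t]
        phi = np.linalg.solve(X.T @ X + penalty * np.eye(t), X.T @ y)  # ridge fit
        T[t, :t] = -phi
        d[t] = np.mean((y - X @ phi) ** 2)          # innovation variance
    T_inv = np.linalg.inv(T)
    return T_inv @ np.diag(d) @ T_inv.T             # Sigma = T^-1 D T^-T

rng = np.random.default_rng(0)
true_cov = 0.8 ** np.abs(np.subtract.outer(np.arange(6), np.arange(6)))  # AR(1)-like
Y = rng.multivariate_normal(np.zeros(6), true_cov, size=200)
print(np.round(regularized_covariance(Y, penalty=5.0), 2))
```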
Elschot, Mattijs; Vermolen, Bart J.; Lam, Marnix G. E. H.; de Keizer, Bart; van den Bosch, Maurice A. A. J.; de Jong, Hugo W. A. M.
2013-01-01
Background After yttrium-90 (90Y) microsphere radioembolization (RE), evaluation of extrahepatic activity and liver dosimetry is typically performed on 90Y Bremsstrahlung SPECT images. Since these images demonstrate a low quantitative accuracy, 90Y PET has been suggested as an alternative. The aim of this study is to quantitatively compare SPECT and state-of-the-art PET on the ability to detect small accumulations of 90Y and on the accuracy of liver dosimetry. Methodology/Principal Findings SPECT/CT and PET/CT phantom data were acquired using several acquisition and reconstruction protocols, including resolution recovery and Time-Of-Flight (TOF) PET. Image contrast and noise were compared using a torso-shaped phantom containing six hot spheres of various sizes. The ability to detect extra- and intrahepatic accumulations of activity was tested by quantitative evaluation of the visibility and unique detectability of the phantom hot spheres. Image-based dose estimates of the phantom were compared to the true dose. For clinical illustration, the SPECT and PET-based estimated liver dose distributions of five RE patients were compared. At equal noise level, PET showed higher contrast recovery coefficients than SPECT. The highest contrast recovery coefficients were obtained with TOF PET reconstruction including resolution recovery. All six spheres were consistently visible on SPECT and PET images, but PET was able to uniquely detect smaller spheres than SPECT. TOF PET-based estimates of the dose in the phantom spheres were more accurate than SPECT-based dose estimates, with underestimations ranging from 45% (10-mm sphere) to 11% (37-mm sphere) for PET, and 75% to 58% for SPECT, respectively. The differences between TOF PET and SPECT dose-estimates were supported by the patient data. Conclusions/Significance In this study we quantitatively demonstrated that the image quality of state-of-the-art PET is superior over Bremsstrahlung SPECT for the assessment of the 90Y microsphere distribution after radioembolization. PMID:23405207
Bellgowan, P. S. F.; Saad, Z. S.; Bandettini, P. A.
2003-01-01
Estimates of hemodynamic amplitude, delay, and width were combined to investigate system dynamics involved in lexical decision making. Subjects performed a lexical decision task using word and nonword stimuli rotated 0°, 60°, or 120°. Averaged hemodynamic responses to repeated stimulation were fit to a Gamma-variate function convolved with a Heaviside function of varying onset and duration to estimate each voxel's activation delay and width. Consistent with prolonged reaction times for the rotated stimuli and nonwords, the motor cortex showed delayed hemodynamic onset for both conditions. Language areas such as the lingual gyrus, middle temporal gyrus, fusiform gyrus, and precuneus all showed delayed hemodynamic onsets to rotated stimuli but not to nonword stimuli. The inferior frontal gyrus showed both increased onset latency for rotated stimuli and a wider hemodynamic response to nonwords, consistent with prolonged processing in this area during the lexical decision task. Phonological processing areas such as the superior temporal and angular gyri showed no delay or width difference for rotated stimuli. These results suggest that phonological routes, but not semantic routes, to the lexicon can proceed regardless of stimulus orientation. This study demonstrates the utility of estimating hemodynamic delay and width in addition to amplitude, allowing for more quantitative measures of brain function such as mental chronometry. PMID:12552093
A Study on the Development of Service Quality Index for Incheon International Airport
NASA Technical Reports Server (NTRS)
Lee, Kang Seok; Lee, Seung Chang; Hong, Soon Kil
2003-01-01
The main purpose of this study is to develop an Omnibus Monitoring System (OMS) for internal management that makes it possible to establish standards, identify matters to be improved, and evaluate how they are addressed in a systematic way. This is done by developing subjective and objective estimation tools that combine usage importance, perceived level, and a composite index for each principal service item at the international airport. The study was directed toward developing a metric analysis tool by utilizing quantitative secondary data, analyzing perception data gathered through airport user surveys, systematizing the data collection-input-analysis process, visualizing results as graphs, planning service encounters and assigning control responsibility, and ensuring competitiveness against minimum international standards. It was important to set up a pre-investigation plan based on the existing foreign literature and on-site inspection of international airports. Two tasks were carried out on the basis of this pre-investigation: developing subjective estimation standards for departing passengers, arriving passengers, and airport residents, and developing objective standards as complementary methods. The study proceeded with the aim of monitoring airport services both regularly and irregularly by developing a software system for operating the standards, after confirming the reliability and feasibility of the estimation standards in substantive and statistical terms.
NASA Astrophysics Data System (ADS)
Pan, J.; Smith, T.; McLaughlin, D.
2016-12-01
China's population, 1.38 billion in 2013, is expected to peak at about 1.45 billion around 2030, and per capita food demand is likely to increase significantly. Population growth and dietary change make the prospects for future water and food availability in China worrisome. Quantitative estimates of crop-specific blue and green water footprints provide useful insight into the roles of different water sources and give guidance for agricultural and water resource planning. This study uses reanalysis methods to merge diverse datasets, including information on water fluxes and land use, to estimate crop-specific green and blue water consumption at 0.5 degree spatial resolution. The estimates incorporate, through constraints in the reanalysis procedure, important physical connections between the water and land resources that support agriculture. These connections are important since land use affects evapotranspiration and runoff while water availability and crop area affect crop production and virtual water content. The results show that green water accounts for 86% and blue water for 14% of the total national agricultural water footprint. The water footprints of cereals (wheat, maize and rice) and soybeans account for 51% of the total agricultural water footprint. Cereals and soybeans together account for 85% of the total blue water footprint.
Wotherspoon, Lisa M; Boyd, Kathleen Anne; Morris, Rachel K; Jackson, Lesley; Chandiramani, Manju; David, Anna L; Khalil, Asma; Shennan, Andrew; Hodgetts Morton, Victoria; Lavender, Tina; Khan, Khalid; Harper-Clarke, Susan; Mol, Ben; Riley, Richard D; Norrie, John; Norman, Jane
2018-01-01
Introduction The aim of the QUIDS study is to develop a decision support tool for the management of women with symptoms and signs of preterm labour, based on a validated prognostic model using quantitative fetal fibronectin (fFN) concentration, in combination with clinical risk factors. Methods and analysis The study will evaluate the Rapid fFN 10Q System (Hologic, Marlborough, Massachusetts, USA) which quantifies fFN in a vaginal swab. In QUIDS part 2, we will perform a prospective cohort study in at least eight UK consultant-led maternity units, in women with symptoms of preterm labour at 22+0 to 34+6 weeks gestation to externally validate a prognostic model developed in QUIDS part 1. The effects of quantitative fFN on anxiety will be assessed, and acceptability of the test and prognostic model will be evaluated in a subgroup of women and clinicians (n=30). The sample size is 1600 women (with estimated 96–192 events of preterm delivery within 7 days of testing). Clinicians will be informed of the qualitative fFN result (positive/negative) but be blinded to quantitative fFN result. Research midwives will collect outcome data from the maternal and neonatal clinical records. The final validated prognostic model will be presented as a mobile or web-based application. Ethics and dissemination The study is funded by the National Institute of Healthcare Research Health Technology Assessment (HTA 14/32/01). It has been approved by the West of Scotland Research Ethics Committee (16/WS/0068). Version Protocol V.2, Date 1 November 2016. Trial registration number ISRCTN41598423 and CPMS: 31277. PMID:29674373
NASA Astrophysics Data System (ADS)
Takahashi, Tomoko; Thornton, Blair
2017-12-01
This paper reviews methods to compensate for matrix effects and self-absorption during quantitative analysis of compositions of solids measured using Laser Induced Breakdown Spectroscopy (LIBS) and their applications to in-situ analysis. Methods to reduce matrix and self-absorption effects on calibration curves are first introduced. The conditions where calibration curves are applicable to quantification of compositions of solid samples and their limitations are discussed. While calibration-free LIBS (CF-LIBS), which corrects matrix effects theoretically based on the Boltzmann distribution law and Saha equation, has been applied in a number of studies, requirements need to be satisfied for the calculation of chemical compositions to be valid. Also, peaks of all elements contained in the target need to be detected, which is a bottleneck for in-situ analysis of unknown materials. Multivariate analysis techniques are gaining momentum in LIBS analysis. Among the available techniques, principal component regression (PCR) analysis and partial least squares (PLS) regression analysis, which can extract related information to compositions from all spectral data, are widely established methods and have been applied to various fields including in-situ applications in air and for planetary explorations. Artificial neural networks (ANNs), where non-linear effects can be modelled, have also been investigated as a quantitative method and their applications are introduced. The ability to make quantitative estimates based on LIBS signals is seen as a key element for the technique to gain wider acceptance as an analytical method, especially in in-situ applications. In order to accelerate this process, it is recommended that the accuracy should be described using common figures of merit which express the overall normalised accuracy, such as the normalised root mean square errors (NRMSEs), when comparing the accuracy obtained from different setups and analytical methods.
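Since the review recommends reporting accuracy as a normalised root mean square error, a short sketch of one common definition (normalising by the range of the reference values; other normalisations, e.g. by the mean, are also used) is given below with invented numbers:

```python
import numpy as np

def nrmse(predicted, reference):
    """Root mean square error normalised by the range of the reference values."""
    predicted, reference = np.asarray(predicted, float), np.asarray(reference, float)
    rmse = np.sqrt(np.mean((predicted - reference) ** 2))
    return rmse / (reference.max() - reference.min())

# Hypothetical certified vs. LIBS-estimated concentrations (wt%).
certified = [1.2, 5.4, 9.8, 15.0, 22.5]
estimated = [1.4, 5.0, 10.5, 14.2, 23.8]
print(f"NRMSE = {nrmse(estimated, certified):.3f}")
```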
Using GPS To Teach More Than Accurate Positions.
ERIC Educational Resources Information Center
Johnson, Marie C.; Guth, Peter L.
2002-01-01
Undergraduate science majors need practice in critical thinking, quantitative analysis, and judging whether their calculated answers are physically reasonable. Develops exercises using handheld Global Positioning System (GPS) receivers. Reinforces students' abilities to think quantitatively, make realistic "back of the envelope"…
Boe, S G; Dalton, B H; Harwood, B; Doherty, T J; Rice, C L
2009-05-01
To establish the inter-rater reliability of decomposition-based quantitative electromyography (DQEMG) derived motor unit number estimates (MUNEs) and quantitative motor unit (MU) analysis. Using DQEMG, two examiners independently obtained a sample of needle and surface-detected motor unit potentials (MUPs) from the tibialis anterior muscle from 10 subjects. Coupled with a maximal M wave, surface-detected MUPs were used to derive a MUNE for each subject and each examiner. Additionally, size-related parameters of the individual MUs were obtained following quantitative MUP analysis. Test-retest MUNE values were similar with high reliability observed between examiners (ICC=0.87). Additionally, MUNE variability from test-retest as quantified by a 95% confidence interval was relatively low (+/-28 MUs). Lastly, quantitative data pertaining to MU size, complexity and firing rate were similar between examiners. MUNEs and quantitative MU data can be obtained with high reliability by two independent examiners using DQEMG. Establishing the inter-rater reliability of MUNEs and quantitative MU analysis using DQEMG is central to the clinical applicability of the technique. In addition to assessing response to treatments over time, multiple clinicians may be involved in the longitudinal assessment of the MU pool of individuals with disorders of the central or peripheral nervous system.
NASA Astrophysics Data System (ADS)
Zhou, Guoqing; Yan, Hongbo; Chen, Kunhua; Zhang, Rongting
2016-04-01
After a large karst sinkhole occurred in Jili Village of Guangxi, China, the local government needed a quantitative analysis and map of the areas susceptible to potential second-time karst sinkholes in order to make timely decisions about whether residents living in the first-time sinkhole areas should move. For this reason, karst sinkhole susceptibility is analyzed geospatially using multivariate spatial data, a logistic regression model (LRM), and a Geographical Information System (GIS). Ten major sinkhole-related factors were selected, including (1) formation lithology, (2) soil structure, (3) profile curvature, (4) groundwater depth, (5) fluctuation of groundwater level, (6) percolation rate of soil, (7) degree of karst development, (8) distance from faults, (9) distance from traffic routes, and (10) overburden thickness, and each factor was classified and quantified into three or four levels. The LRM was applied to evaluate which factors make significant contributions to sinkhole occurrence. The results demonstrated that formation lithology, soil structure, profile curvature, groundwater depth, fluctuation of groundwater level, percolation rate of soil, degree of karst development, distance from faults, and overburden thickness contribute positively, while one factor, the distance from traffic routes, is negative and was therefore deleted from the LRM. The susceptibility of potential sinkholes in the study area was estimated and mapped using the retained impact factors, and the study area was classified into five susceptibility levels: very high, high, moderate, low, and negligible. Both the very high and high susceptibility areas lie along Datou Hill and the foothills of the study area, a finding verified by field observations. From the investigations conducted in this paper, it can be concluded that the susceptibility maps produced here are reliable and accurate, and useful as a reference for local governments when deciding whether or not residents living within sinkhole areas should move.
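A compact sketch of the modelling step, fitting a logistic regression to labelled grid cells described by categorical factor levels and converting predicted probabilities into susceptibility classes, is given below; the synthetic data, the factor coding, and the class thresholds are illustrative assumptions, not the values used in the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_cells, n_factors = 500, 9

# Synthetic grid cells: each factor quantified into 3-4 ordinal levels.
X = rng.integers(1, 5, size=(n_cells, n_factors)).astype(float)
# Synthetic labels: sinkhole occurrence more likely where factor levels are high.
logits = 0.6 * (X.mean(axis=1) - 2.5) + rng.normal(0, 0.5, n_cells)
y = (logits > 0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
prob = model.predict_proba(X)[:, 1]               # spatial probability of a sinkhole

# Map probabilities to five susceptibility levels.
levels = np.digitize(prob, [0.2, 0.4, 0.6, 0.8])
names = np.array(["negligible", "low", "moderate", "high", "very high"])
print(dict(zip(*np.unique(names[levels], return_counts=True))))
```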
PCA-based groupwise image registration for quantitative MRI.
Huizinga, W; Poot, D H J; Guyader, J-M; Klaassen, R; Coolen, B F; van Kranenburg, M; van Geuns, R J M; Uitterdijk, A; Polfliet, M; Vandemeulebroucke, J; Leemans, A; Niessen, W J; Klein, S
2016-04-01
Quantitative magnetic resonance imaging (qMRI) is a technique for estimating quantitative tissue properties, such as the T1 and T2 relaxation times, apparent diffusion coefficient (ADC), and various perfusion measures. This estimation is achieved by acquiring multiple images with different acquisition parameters (or at multiple time points after injection of a contrast agent) and by fitting a qMRI signal model to the image intensities. Image registration is often necessary to compensate for misalignments due to subject motion and/or geometric distortions caused by the acquisition. However, large differences in image appearance make accurate image registration challenging. In this work, we propose a groupwise image registration method for compensating misalignment in qMRI. The groupwise formulation of the method eliminates the requirement of choosing a reference image, thus avoiding a registration bias. The method minimizes a cost function that is based on principal component analysis (PCA), exploiting the fact that intensity changes in qMRI can be described by a low-dimensional signal model, but not requiring knowledge on the specific acquisition model. The method was evaluated on 4D CT data of the lungs, and both real and synthetic images of five different qMRI applications: T1 mapping in a porcine heart, combined T1 and T2 mapping in carotid arteries, ADC mapping in the abdomen, diffusion tensor mapping in the brain, and dynamic contrast-enhanced mapping in the abdomen. Each application is based on a different acquisition model. The method is compared to a mutual information-based pairwise registration method and four other state-of-the-art groupwise registration methods. Registration accuracy is evaluated in terms of the precision of the estimated qMRI parameters, overlap of segmented structures, distance between corresponding landmarks, and smoothness of the deformation. In all qMRI applications the proposed method performed better than or equally well as competing methods, while avoiding the need to choose a reference image. It is also shown that the results of the conventional pairwise approach do depend on the choice of this reference image. We therefore conclude that our groupwise registration method with a similarity measure based on PCA is the preferred technique for compensating misalignments in qMRI. Copyright © 2015 Elsevier B.V. All rights reserved.
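A minimal sketch of the kind of PCA-based groupwise dissimilarity such a method builds on: form the correlation matrix of the image stack and penalize variance that leaks into trailing principal components. The weighting used here is illustrative and not necessarily the exact cost function of the paper.

```python
import numpy as np

def pca_group_dissimilarity(images):
    """Illustrative PCA-based dissimilarity for a group of (flattened) images.

    images: array of shape (G, N) -- G images, N voxels each. Lower values mean
    the group's intensities are better explained by a low-dimensional signal
    model, i.e., the images are better aligned.
    """
    X = np.asarray(images, dtype=float)
    G, N = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)            # centre each image
    Xc /= (Xc.std(axis=1, keepdims=True) + 1e-12)     # normalise -> correlation
    C = (Xc @ Xc.T) / N                               # G x G correlation matrix
    eigvals = np.linalg.eigvalsh(C)[::-1]             # descending eigenvalues
    weights = np.arange(1, G + 1)                     # penalise trailing components
    return float(np.sum(weights * eigvals))

# Usage with a synthetic stack of 6 images of 10,000 voxels each:
stack = np.random.default_rng(1).normal(size=(6, 10_000))
print(pca_group_dissimilarity(stack))
```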
Bohr effect of avian hemoglobins: Quantitative analyses based on the Wyman equation.
Okonjo, Kehinde O
2016-12-07
The Bohr effect data for bar-headed goose, greylag goose and pheasant hemoglobins can be fitted with the Wyman equation for the Bohr effect, but under one proviso: that the pKa of His146β does not change following the T→R quaternary transition. This assumption is based on the X-ray structure of bar-headed goose hemoglobin, which shows that the salt-bridge formed between His146β and Asp94β in human deoxyhemoglobin is not formed in goose deoxyhemoglobin. When the Bohr data for chicken hemoglobin were fitted by making the same assumption, the pKa of the NH3+ terminal group of Val1α decreased from 7.76 to 6.48 following the T→R transition. When the data were fitted without making any assumption, the pKa of the NH3+ terminal group increased from 7.57 to 7.77 following the T→R transition. We demonstrate that avian hemoglobin Bohr data are readily fitted with the Wyman equation because avian hemoglobins lack His77β. From curve-fitting to Bohr data we estimate the pKa values of the NH3+ terminal group of Val1α in the R and T states to be 6.33±0.1 and 7.22±0.1, respectively. We provide evidence indicating that these pKa values are more accurate than estimates from kinetic studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
Masud, Tahir; Binkley, Neil; Boonen, Steven; Hannan, Marian T
2011-01-01
Risk factors for fracture can be purely skeletal, e.g., bone mass, microarchitecture or geometry, or a combination of bone and falls risk related factors such as age and functional status. The remit of this Task Force was to review the evidence and consider if falls should be incorporated into the FRAX® model or, alternatively, to provide guidance to assist clinicians in clinical decision-making for patients with a falls history. It is clear that falls are a risk factor for fracture. Fracture probability may be underestimated by FRAX® in individuals with a history of frequent falls. The substantial evidence that various interventions are effective in reducing falls risk was reviewed. Targeting falls risk reduction strategies towards frail older people at high risk for indoor falls is appropriate. This Task Force believes that further fracture reduction requires measures to reduce falls risk in addition to bone directed therapy. Clinicians should recognize that patients with frequent falls are at higher fracture risk than currently estimated by FRAX® and include this in decision-making. However, quantitative adjustment of the FRAX® estimated risk based on falls history is not currently possible. In the long term, incorporation of falls as a risk factor in the FRAX® model would be ideal. Copyright © 2011 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Teater, Barbra; Roy, Jessica; Carpenter, John; Forrester, Donald; Devaney, John; Scourfield, Jonathan
2017-01-01
Students in the United Kingdom (UK) are found to lack knowledge and skills in quantitative research methods. To address this gap, a quantitative research method and statistical analysis curriculum comprising 10 individual lessons was developed, piloted, and evaluated at two universities. The evaluation found that BSW students' (N = 81)…
[Quantitative determination of 7-phenoxyacetamidodesacetoxycephalosporanic acid].
Dzegilenko, N B; Riabova, N M; Zinchenko, E Ia; Korchagin, V B
1976-11-01
7-Phenoxyacetamidodesacetoxycephalosporanic acid, an intermediate product in the synthesis of cephalexin, was prepared by oxidation of phenoxymethylpenicillin into the respective sulphoxide and transformation of the latter. The UV spectra of the reaction products were studied. A quantitative method is proposed for determination of 7-phenoxyacetamidodesacetoxycephalosporanic acid in the finished products, based on estimation of the coefficient of specific extinction of the ethanol solutions at a wavelength of 268 nm in the UV region, in combination with semiquantitative estimation of the admixtures by thin-layer chromatography.
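To make the spectrophotometric step concrete, the snippet below converts a measured absorbance into a concentration using a specific extinction coefficient (E 1%, 1 cm); the numerical values are hypothetical and not taken from the paper.

```python
def concentration_from_absorbance(absorbance, e_1pct_1cm, path_cm=1.0, dilution=1.0):
    """Beer-Lambert law with a specific extinction coefficient.

    E(1%, 1 cm) is the absorbance of a 1 g/100 mL solution in a 1 cm cell, so
    concentration in g/100 mL = A / (E * path), scaled back by any dilution.
    """
    return absorbance / (e_1pct_1cm * path_cm) * dilution

# Hypothetical reading at 268 nm for a diluted ethanol solution of the intermediate.
A = 0.52      # measured absorbance
E = 215.0     # assumed E(1%, 1 cm) at 268 nm (illustrative only)
c = concentration_from_absorbance(A, E, path_cm=1.0, dilution=100.0)
print(f"estimated concentration in the undiluted sample: {c:.3f} g/100 mL")
```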
NASA Astrophysics Data System (ADS)
Creagh, Dudley; Cameron, Alyce
2017-08-01
When skeletonized remains are found, it becomes a police task to identify the body and establish the cause of death. It assists investigators if the Post-Mortem Interval (PMI) can be established. Hitherto no reliable qualitative method of estimating the PMI has been found, and a quantitative method has yet to be developed. This paper shows that IR spectroscopy and Raman microscopy have the potential to form the basis of a quantitative method.
Transcript copy number estimation using a mouse whole-genome oligonucleotide microarray
Carter, Mark G; Sharov, Alexei A; VanBuren, Vincent; Dudekula, Dawood B; Carmack, Condie E; Nelson, Charlie; Ko, Minoru SH
2005-01-01
The ability to quantitatively measure the expression of all genes in a given tissue or cell with a single assay is an exciting promise of gene-expression profiling technology. An in situ-synthesized 60-mer oligonucleotide microarray designed to detect transcripts from all mouse genes was validated, as well as a set of exogenous RNA controls derived from the yeast genome (made freely available without restriction), which allow quantitative estimation of absolute endogenous transcript abundance. PMID:15998450
Ley, P
1985-04-01
Patients frequently fail to understand what they are told. Further, they frequently forget the information given to them. These factors have effects on patients' satisfaction with the consultation. All three of these factors--understanding, memory and satisfaction--have effects on the probability that a patient will comply with advice. The levels of failure to understand and remember and levels of dissatisfaction are described. Quantitative estimates of the effects of these factors on non-compliance are presented.
USDA-ARS?s Scientific Manuscript database
We proposed a method to estimate the error variance among non-replicated genotypes, thus to estimate the genetic parameters by using replicated controls. We derived formulas to estimate sampling variances of the genetic parameters. Computer simulation indicated that the proposed methods of estimatin...
Cunningham, Charles G.; Zappettini, Eduardo O.; Vivallo S., Waldo; Celada, Carlos Mario; Quispe, Jorge; Singer, Donald A.; Briskey, Joseph A.; Sutphin, David M.; Gajardo M., Mariano; Diaz, Alejandro; Portigliati, Carlos; Berger, Vladimir I.; Carrasco, Rodrigo; Schulz, Klaus J.
2008-01-01
Quantitative information on the general locations and amounts of undiscovered porphyry copper resources of the world is important to exploration managers, land-use and environmental planners, economists, and policy makers. This publication contains the results of probabilistic estimates of the amounts of copper (Cu), molybdenum (Mo), gold (Au), and silver (Ag) in undiscovered porphyry copper deposits in the Andes Mountains of South America. The methodology used to make these estimates is called the 'Three-Part Form'. It was developed to explicitly express estimates of undiscovered resources and associated uncertainty in a form that allows economic analysis and is useful to decisionmakers. The three-part form of assessment includes: (1) delineation of tracts of land where the geology is permissive for porphyry copper deposits to form; (2) selection of grade and tonnage models appropriate for estimating grades and tonnages of the undiscovered porphyry copper deposits in each tract; and (3) estimation of the number of undiscovered porphyry copper deposits in each tract consistent with the grade and tonnage model. A Monte Carlo simulation computer program (EMINERS) was used to combine the probability distributions of the estimated number of undiscovered deposits, the grades, and the tonnages of the selected model to obtain the probability distributions for undiscovered metals in each tract. These distributions of grades and tonnages then can be used to conduct economic evaluations of undiscovered resources in a format usable by decisionmakers. Economic evaluations are not part of this report. The results of this assessment are presented in two principal parts. The first part identifies 26 regional tracts of land where the geology is permissive for the occurrence of undiscovered porphyry copper deposits of Phanerozoic age to a depth of 1 km below the Earth's surface. These tracts are believed to contain most of South America's undiscovered resources of copper. The second part presents probabilistic estimates of the amounts of copper, molybdenum, gold, and silver in undiscovered porphyry copper deposits in each tract. The study also provides tables showing the location, tract number, and age (if available) of discovered deposits and prospects. For each of the 26 permissive tracts delineated in this study, summary information is provided on: (1) the rationale for delineating the tract; (2) the rationale for choosing the mineral deposit model used to assess the tract; (3) discovered deposits and prospects; (4) exploration history; and (5) the distribution of undiscovered deposits in the tract. The scale used to evaluate geologic information and draw tracts is 1:1,000,000.
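The sketch below illustrates the flavour of a three-part-form Monte Carlo combination (it is not the EMINERS program): a distribution over the number of undiscovered deposits is combined with hypothetical grade and tonnage distributions to produce a distribution of contained copper. All probabilities and distribution parameters are invented, not a published grade-tonnage model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 100_000

# (3) Estimated number of undiscovered deposits in a tract (hypothetical mass function).
counts = rng.choice([0, 1, 2, 3, 5], size=n_trials, p=[0.2, 0.3, 0.25, 0.15, 0.1])

# (2) Hypothetical lognormal grade-tonnage model for each simulated deposit.
contained_cu = np.zeros(n_trials)
for i, n in enumerate(counts):
    if n:
        tonnage = rng.lognormal(mean=np.log(200e6), sigma=1.0, size=n)  # tonnes of ore
        grade = rng.lognormal(mean=np.log(0.005), sigma=0.4, size=n)    # Cu fraction
        contained_cu[i] = np.sum(tonnage * grade)

for q in (0.9, 0.5, 0.1):
    print(f"P{int(q*100)}: {np.quantile(contained_cu, 1 - q) / 1e6:.1f} Mt Cu")
print(f"mean: {contained_cu.mean() / 1e6:.1f} Mt Cu")
```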
The effect of respiratory induced density variations on non-TOF PET quantitation in the lung.
Holman, Beverley F; Cuplov, Vesna; Hutton, Brian F; Groves, Ashley M; Thielemans, Kris
2016-04-21
Accurate PET quantitation requires a matched attenuation map. Obtaining matched CT attenuation maps in the thorax is difficult due to the respiratory cycle which causes both motion and density changes. Unlike with motion, little attention has been given to the effects of density changes in the lung on PET quantitation. This work aims to explore the extent of the errors caused by pulmonary density attenuation map mismatch on dynamic and static parameter estimates. Dynamic XCAT phantoms were utilised using clinically relevant (18)F-FDG and (18)F-FMISO time activity curves for all organs within the thorax to estimate the expected parameter errors. The simulations were then validated with PET data from 5 patients suffering from idiopathic pulmonary fibrosis who underwent PET/Cine-CT. The PET data were reconstructed with three gates obtained from the Cine-CT and the average Cine-CT. The lung TACs clearly displayed differences between true and measured curves with error depending on global activity distribution at the time of measurement. The density errors from using a mismatched attenuation map were found to have a considerable impact on PET quantitative accuracy. Maximum errors due to density mismatch were found to be as high as 25% in the XCAT simulation. Differences in patient derived kinetic parameter estimates and static concentration between the extreme gates were found to be as high as 31% and 14%, respectively. Overall our results show that respiratory associated density errors in the attenuation map affect quantitation throughout the lung, not just regions near boundaries. The extent of this error is dependent on the activity distribution in the thorax and hence on the tracer and time of acquisition. Consequently there may be a significant impact on estimated kinetic parameters throughout the lung.
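To see why a density mismatch in the attenuation map biases PET quantitation, the short calculation below compares attenuation correction factors for a mismatched lung density along a single line of response. The densities, path length, and attenuation coefficient are illustrative assumptions, not values from the study.

```python
import numpy as np

MU_WATER_511KEV = 0.096   # cm^-1, approximate linear attenuation of water at 511 keV

def acf(density_g_per_cc, path_cm):
    """Attenuation correction factor along a path of roughly uniform density."""
    mu = MU_WATER_511KEV * density_g_per_cc   # density scaling of water-equivalent mu
    return np.exp(mu * path_cm)

# Hypothetical true lung density vs. the density captured in the CT gate used.
true_density, used_density, path = 0.40, 0.20, 10.0
relative_error = acf(used_density, path) / acf(true_density, path) - 1.0
print(f"quantitation bias from density mismatch: {relative_error:+.1%}")
```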
A subagging regression method for estimating the qualitative and quantitative state of groundwater
NASA Astrophysics Data System (ADS)
Jeong, J.; Park, E.; Choi, J.; Han, W. S.; Yun, S. T.
2016-12-01
A subagging regression (SBR) method for the analysis of groundwater data pertaining to the estimation of trend and the associated uncertainty is proposed. The SBR method is validated against synthetic data competitively with other conventional robust and non-robust methods. From the results, it is verified that the estimation accuracies of the SBR method are consistent and superior to those of the other methods and the uncertainties are reasonably estimated where the others have no uncertainty analysis option. To validate further, real quantitative and qualitative data are employed and analyzed comparatively with Gaussian process regression (GPR). For all cases, the trend and the associated uncertainties are reasonably estimated by SBR, whereas the GPR has limitations in representing the variability of non-Gaussian skewed data. From the implementations, it is determined that the SBR method has potential to be further developed as an effective tool of anomaly detection or outlier identification in groundwater state data.
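A minimal sketch of subsample aggregating (subagging) for trend estimation with an uncertainty band, in the spirit described here but not the authors' exact implementation: many regressors are fitted on random subsamples drawn without replacement and their predictions are aggregated. The base learner, subsample fraction, and synthetic groundwater series are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def subagging_trend(t, y, n_models=200, frac=0.5, seed=0):
    """Aggregate linear trend fits over random subsamples (no replacement)."""
    rng = np.random.default_rng(seed)
    t = np.asarray(t, float).reshape(-1, 1)
    y = np.asarray(y, float)
    m = max(2, int(frac * len(y)))
    preds = np.empty((n_models, len(y)))
    for i in range(n_models):
        idx = rng.choice(len(y), size=m, replace=False)
        preds[i] = LinearRegression().fit(t[idx], y[idx]).predict(t)
    trend = preds.mean(axis=0)
    lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)   # subagging spread as uncertainty
    return trend, lo, hi

# Hypothetical monthly groundwater levels with a weak downward trend and spiky anomalies.
rng = np.random.default_rng(3)
t = np.arange(120)
y = 12.0 - 0.01 * t + rng.normal(0, 0.3, 120)
y[::17] += 2.0
trend, lo, hi = subagging_trend(t, y)
print(f"estimated slope ~ {np.polyfit(t, trend, 1)[0]:.4f} m/month")
```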
Comparison of Dynamic Contrast Enhanced MRI and Quantitative SPECT in a Rat Glioma Model
Skinner, Jack T.; Yankeelov, Thomas E.; Peterson, Todd E.; Does, Mark D.
2012-01-01
Pharmacokinetic modeling of dynamic contrast enhanced (DCE)-MRI data provides measures of the extracellular volume fraction (ve) and the volume transfer constant (Ktrans) in a given tissue. These parameter estimates may be biased, however, by confounding issues such as contrast agent and tissue water dynamics, or assumptions of vascularization and perfusion made by the commonly used model. In contrast to MRI, radiotracer imaging with SPECT is insensitive to water dynamics. A quantitative dual-isotope SPECT technique was developed to obtain an estimate of ve in a rat glioma model for comparison to the corresponding estimates obtained using DCE-MRI with a vascular input function (VIF) and reference region model (RR). Both DCE-MRI methods produced consistently larger estimates of ve in comparison to the SPECT estimates, and several experimental sources were postulated to contribute to these differences. PMID:22991315
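For context, the sketch below fits the standard Tofts model (vascular term neglected) to a synthetic tissue concentration curve to recover Ktrans and ve; the arterial input function, noise level, and parameter values are assumptions for illustration, not the study's protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 300, 2.0)                     # seconds
cp = 5.0 * (t / 60.0) * np.exp(-t / 80.0)      # hypothetical arterial input function (mM)

def tofts(t, ktrans, ve, cp=cp):
    """Standard Tofts model: Ct(t) = Ktrans * [Cp convolved with exp(-Ktrans/ve * t)]."""
    dt = t[1] - t[0]
    irf = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(cp, irf)[: len(t)] * dt

rng = np.random.default_rng(7)
ct = tofts(t, ktrans=0.25 / 60, ve=0.35) + rng.normal(0, 0.002, t.size)

(ktrans_fit, ve_fit), _ = curve_fit(tofts, t, ct, p0=[0.1 / 60, 0.2],
                                    bounds=([1e-5, 0.01], [0.1, 1.0]))
print(f"Ktrans = {ktrans_fit * 60:.3f} /min, ve = {ve_fit:.2f}")
```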
Fundamental quantitative security in quantum key generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuen, Horace P.
2010-12-15
We analyze the fundamental security significance of the quantitative criteria on the final generated key K in quantum key generation including the quantum criterion d, the attacker's mutual information on K, and the statistical distance between her distribution on K and the uniform distribution. For operational significance a criterion has to produce a guarantee on the attacker's probability of correctly estimating some portions of K from her measurement, in particular her maximum probability of identifying the whole K. We distinguish between the raw security of K when the attacker just gets at K before it is used in a cryptographic context and its composition security when the attacker may gain further information during its actual use to help get at K. We compare both of these securities of K to those obtainable from conventional key expansion with a symmetric key cipher. It is pointed out that a common belief in the superior security of a quantum generated K is based on an incorrect interpretation of d which cannot be true, and the security significance of d is uncertain. Generally, the quantum key distribution key K has no composition security guarantee and its raw security guarantee from concrete protocols is worse than that of conventional ciphers. Furthermore, for both raw and composition security there is an exponential catch-up problem that would make it difficult to quantitatively improve the security of K in a realistic protocol. Some possible ways to deal with the situation are suggested.
Haines, Seth S.; Diffendorfer, James; Balistrieri, Laurie S.; Berger, Byron R.; Cook, Troy A.; Gautier, Donald L.; Gallegos, Tanya J.; Gerritsen, Margot; Graffy, Elisabeth; Hawkins, Sarah; Johnson, Kathleen; Macknick, Jordan; McMahon, Peter; Modde, Tim; Pierce, Brenda; Schuenemeyer, John H.; Semmens, Darius; Simon, Benjamin; Taylor, Jason; Walton-Day, Katherine
2013-01-01
Natural resource planning at all scales demands methods for assessing the impacts of resource development and use, and in particular it requires standardized methods that yield robust and unbiased results. Building from existing probabilistic methods for assessing the volumes of energy and mineral resources, we provide an algorithm for consistent, reproducible, quantitative assessment of resource development impacts. The approach combines probabilistic input data with Monte Carlo statistical methods to determine probabilistic outputs that convey the uncertainties inherent in the data. For example, one can utilize our algorithm to combine data from a natural gas resource assessment with maps of sage grouse leks and piñon-juniper woodlands in the same area to estimate possible future habitat impacts due to possible future gas development. As another example: one could combine geochemical data and maps of lynx habitat with data from a mineral deposit assessment in the same area to determine possible future mining impacts on water resources and lynx habitat. The approach can be applied to a broad range of positive and negative resource development impacts, such as water quantity or quality, economic benefits, or air quality, limited only by the availability of necessary input data and quantified relationships among geologic resources, development alternatives, and impacts. The framework enables quantitative evaluation of the trade-offs inherent in resource management decision-making, including cumulative impacts, to address societal concerns and policy aspects of resource development.
Past developments and future directions for the AHP in natural resources
Daniel L. Schmoldt; G.A. Mendoza; Jyrki Kangas
2001-01-01
The analytic hierarchy process (AHP) possesses certain characteristics that make it a useful tool for natural resource decision making. The AHP's capabilities include: participatory decision making, problem structuring and alternative development, group facilitation, consensus building, fairness, qualitative and quantitative information, conflict resolution, decision...
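The sketch below shows the core AHP calculation: deriving priority weights and a consistency ratio from a pairwise comparison matrix via the principal eigenvector. The 3x3 judgments are hypothetical, and the random index values are the commonly cited Saaty values.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria (Saaty 1-9 scale).
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # priority vector

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)          # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]           # random index (commonly cited Saaty values)
print("priorities:", np.round(weights, 3), " CR =", round(ci / ri, 3))
```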
ERIC Educational Resources Information Center
Tyler, Teresa Ann
2012-01-01
This exploratory, quantitative study investigated the relationship between special educators' perceptions of workplace decision-making and two types of satisfaction, overall job satisfaction and satisfaction with school/organization decision-making. To address this purpose, literature-based contributors to job satisfaction were identified and…
NASA Astrophysics Data System (ADS)
Jablonski, A.
2018-01-01
Growing availability of synchrotron facilities stimulates an interest in quantitative applications of hard X-ray photoemission spectroscopy (HAXPES) using linearly polarized radiation. An advantage of this approach is the possibility of continuous variation of radiation energy that makes it possible to control the sampling depth for a measurement. Quantitative applications are based on accurate and reliable theory relating the measured spectral features to needed characteristics of the surface region of solids. A major complication in the case of polarized radiation is an involved structure of the photoemission cross-section for hard X-rays. In the present work, details of the relevant formalism are described and algorithms implementing this formalism for different experimental configurations are proposed. The photoelectron signal intensity may be considerably affected by variation in the positioning of the polarization vector with respect to the surface plane. This information is critical for any quantitative application of HAXPES by polarized X-rays. Different quantitative applications based on photoelectrons with energies up to 10 keV are considered here: (i) determination of surface composition, (ii) estimation of sampling depth, and (iii) measurements of an overlayer thickness. Parameters facilitating these applications (mean escape depths, information depths, effective attenuation lengths) were calculated for a number of photoelectron lines in four elemental solids (Si, Cu, Ag and Au) in different experimental configurations and locations of the polarization vector. One of the considered configurations, with polarization vector located in a plane perpendicular to the surface, was recommended for quantitative applications of HAXPES. In this configuration, it was found that the considered parameters vary weakly in the range of photoelectron emission angles from normal emission to about 50° with respect to the surface normal. The averaged values of the mean escape depth and effective attenuation length were approximated with accurate predictive formulas. The predicted effective attenuation lengths were compared with published values; major discrepancies observed can be ascribed to a possibility of discontinuous structure of the deposited overlayer.
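For the overlayer-thickness application mentioned here, a common starting point is the straight-line exponential attenuation model, in which the substrate signal decays as exp(-d / (EAL·cosθ)); the sketch below simply inverts that relation. The EAL, angle, and intensities are assumed values, and the full HAXPES treatment with polarized radiation is considerably more involved.

```python
import numpy as np

def overlayer_thickness(i_covered, i_clean, eal_nm, theta_deg):
    """Overlayer thickness from substrate-signal attenuation.

    Straight-line approximation: I = I0 * exp(-d / (EAL * cos(theta))),
    with theta measured from the surface normal, so
    d = EAL * cos(theta) * ln(I0 / I).
    """
    theta = np.radians(theta_deg)
    return eal_nm * np.cos(theta) * np.log(i_clean / i_covered)

# Assumed values: a substrate photoelectron line measured with and without the overlayer.
d = overlayer_thickness(i_covered=4.2e4, i_clean=1.0e5, eal_nm=8.5, theta_deg=30.0)
print(f"estimated overlayer thickness: {d:.2f} nm")
```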
Zan, Yunlong; Long, Yong; Chen, Kewei; Li, Biao; Huang, Qiu; Gullberg, Grant T
2017-07-01
Our previous works have found that quantitative analysis of 123 I-MIBG kinetics in the rat heart with dynamic single-photon emission computed tomography (SPECT) offers the potential to quantify the innervation integrity at an early stage of left ventricular hypertrophy. However, conventional protocols involving a long acquisition time for dynamic imaging reduce the animal survival rate and thus make longitudinal analysis difficult. The goal of this work was to develop a procedure to reduce the total acquisition time by selecting nonuniform acquisition times for projection views while maintaining the accuracy and precision of estimated physiologic parameters. Taking dynamic cardiac imaging with 123 I-MIBG in rats as an example, we generated time activity curves (TACs) of regions of interest (ROIs) as ground truths based on a direct four-dimensional reconstruction of experimental data acquired from a rotating SPECT camera, where TACs represented as the coefficients of B-spline basis functions were used to estimate compartmental model parameters. By iteratively adjusting the knots (i.e., control points) of B-spline basis functions, new TACs were created according to two rules: accuracy and precision. The accuracy criterion allocates the knots to achieve low relative entropy between the estimated left ventricular blood pool TAC and its ground truth so that the estimated input function approximates its real value and thus the procedure yields an accurate estimate of model parameters. The precision criterion, via the D-optimal method, forces the estimated parameters to be as precise as possible, with minimum variances. Based on the final knots obtained, a new protocol of 30 min was built with a shorter acquisition time that maintained a 5% error in estimating rate constants of the compartment model. This was evaluated through digital simulations. The simulation results showed that our method was able to reduce the acquisition time from 100 to 30 min for the cardiac study of rats with 123 I-MIBG. Compared to a uniform interval dynamic SPECT protocol (1 s acquisition interval, 30 min acquisition time), the newly proposed protocol with nonuniform interval achieved comparable (K1 and k2, P = 0.5745 for K1 and P = 0.0604 for k2) or better (Distribution Volume, DV, P = 0.0004) performance for parameter estimates with less storage and shorter computational time. In this study, a procedure was devised to shorten the acquisition time while maintaining the accuracy and precision of estimated physiologic parameters in dynamic SPECT imaging. The procedure was designed for 123 I-MIBG cardiac imaging in rat studies; however, it has the potential to be extended to other applications, including patient studies involving the acquisition of dynamic SPECT data. © 2017 American Association of Physicists in Medicine.
Rong, Xing; Du, Yong; Frey, Eric C
2012-06-21
Quantitative Yttrium-90 ((90)Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging has shown great potential to provide reliable estimates of (90)Y activity distribution for targeted radionuclide therapy dosimetry applications. One factor that potentially affects the reliability of the activity estimates is the choice of the acquisition energy window. In contrast to imaging conventional gamma photon emitters where the acquisition energy windows are usually placed around photopeaks, there has been great variation in the choice of the acquisition energy window for (90)Y imaging due to the continuous and broad energy distribution of the bremsstrahlung photons. In quantitative imaging of conventional gamma photon emitters, previous methods for optimizing the acquisition energy window assumed unbiased estimators and used the variance in the estimates as a figure of merit (FOM). However, for situations, such as (90)Y imaging, where there are errors in the modeling of the image formation process used in the reconstruction there will be bias in the activity estimates. In (90)Y bremsstrahlung imaging this will be especially important due to the high levels of scatter, multiple scatter, and collimator septal penetration and scatter. Thus variance will not be a complete measure of reliability of the estimates and thus is not a complete FOM. To address this, we first aimed to develop a new method to optimize the energy window that accounts for both the bias due to model-mismatch and the variance of the activity estimates. We applied this method to optimize the acquisition energy window for quantitative (90)Y bremsstrahlung SPECT imaging in microsphere brachytherapy. Since absorbed dose is defined as the absorbed energy from the radiation per unit mass of tissues in this new method we proposed a mass-weighted root mean squared error of the volume of interest (VOI) activity estimates as the FOM. To calculate this FOM, two analytical expressions were derived for calculating the bias due to model-mismatch and the variance of the VOI activity estimates, respectively. To obtain the optimal acquisition energy window for general situations of interest in clinical (90)Y microsphere imaging, we generated phantoms with multiple tumors of various sizes and various tumor-to-normal activity concentration ratios using a digital phantom that realistically simulates human anatomy, simulated (90)Y microsphere imaging with a clinical SPECT system and typical imaging parameters using a previously validated Monte Carlo simulation code, and used a previously proposed method for modeling the image degrading effects in quantitative SPECT reconstruction. The obtained optimal acquisition energy window was 100-160 keV. The values of the proposed FOM were much larger than the FOM taking into account only the variance of the activity estimates, thus demonstrating in our experiment that the bias of the activity estimates due to model-mismatch was a more important factor than the variance in terms of limiting the reliability of activity estimates.
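The figure of merit proposed here combines bias and variance; a compact way to compute a mass-weighted root-mean-squared error over VOIs, given per-VOI bias and variance estimates, is sketched below with made-up numbers.

```python
import numpy as np

def mass_weighted_rmse(bias, variance, mass):
    """Mass-weighted RMSE of VOI activity estimates: combines bias^2 and variance."""
    bias, variance, mass = map(np.asarray, (bias, variance, mass))
    mse = bias ** 2 + variance
    return float(np.sqrt(np.sum(mass * mse) / np.sum(mass)))

# Hypothetical per-VOI values (e.g., MBq) for one candidate energy window.
bias = [0.8, -1.5, 0.3]        # model-mismatch bias per VOI
var = [0.4, 0.9, 0.2]          # variance of the activity estimate per VOI
mass = [120.0, 35.0, 900.0]    # VOI masses in grams
print(f"FOM = {mass_weighted_rmse(bias, var, mass):.3f}")
```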
Evaluation of a dietary targets monitor.
Lean, M E J; Anderson, A S; Morrison, C; Currall, J
2003-05-01
To evaluate a two-page food frequency list for use as a Dietary Targets Monitor in large scale surveys to quantify consumption of the key food groups targeted in health promotion. Intakes of fruit and vegetables, starchy foods and fish estimated from a validated food frequency questionnaire (FFQ) were compared with a short food frequency list (the Dietary Targets Monitor) specifically designed to assess habitual frequency of consumption of foods in relation to dietary targets which form the basis of a National (Scottish) Food and Health Policy. A total of 1085 adults aged 25-64 y from the Glasgow MONICA Study. The two questionnaires both collected data on frequencies of food consumption for fruit and vegetables, starchy foods and fish. Comparing the two questionnaires, there were consistent biases, best expressed as ratios (FFQ:Dietary Targets Monitor) between the methods for fruit and vegetables (1.33, 95% CI 1.29, 1.38) and 'starchy foods' (1.08, 95% CI 1.05, 1.12), the DTM showing systematic under-reporting by men. For fish consumption, there was essentially no bias between the methods (0.99, 95% CI 0.94, 1.03). Using calibration factors to adjust for biases, the Dietary Targets Monitor indicated that 16% of the subjects were achieving the Scottish Diet food target (400 g/day) for fruit and vegetable consumption. Nearly one-third (32%) of the subjects were eating the recommended intakes of fish (three portions per week). The Dietary Targets Monitor measure of starchy foods consumption was calibrated using FFQ data to be able to make quantitative estimates: 20% of subjects were eating six or more portions of starchy food daily. A similar estimation of total fat intake and saturated fat intake (g/day) allowed the categorization of subjects as low, moderate or high fat consumers, with broad agreement between the methods. The levels of agreement, demonstrated by Bland-Altman analysis, were insufficient to permit use of the adjusted DTM to estimate quantitative consumption in smaller subgroups. The Dietary Targets Monitor provides a short, easily administered, dietary assessment tool with the capacity to monitor intakes for changes towards national dietary targets for several key foods and nutrients.
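The bias-ratio calibration described here can be reproduced in a few lines; the sketch below computes an FFQ:DTM ratio from paired intake data and applies it before checking the 400 g/day fruit-and-vegetable target. The data are simulated, not the MONICA data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated paired daily fruit-and-vegetable intakes (g/day) for the same subjects.
ffq = rng.gamma(shape=3.0, scale=110.0, size=1085)        # reference instrument
dtm = ffq / 1.33 * rng.normal(1.0, 0.15, size=ffq.size)   # short monitor, under-reporting

ratio = ffq.mean() / dtm.mean()                           # FFQ:DTM calibration factor
dtm_calibrated = dtm * ratio

print(f"calibration ratio (FFQ:DTM): {ratio:.2f}")
print(f"meeting 400 g/day target (calibrated DTM): {np.mean(dtm_calibrated >= 400):.1%}")
print(f"meeting 400 g/day target (FFQ):            {np.mean(ffq >= 400):.1%}")
```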
NASA Astrophysics Data System (ADS)
Kukavskaya, Elena; Conard, Susan; Ivanova, Galina; Buryak, Ludmila; Soja, Amber; Zhila, Sergey
2015-04-01
Boreal forests play a crucial role in carbon budgets with Siberian carbon fluxes and pools making a major contribution to the regional and global carbon cycle. Wildfire is the main ecological disturbance in Siberia that leads to changes in forest species composition and structure and in carbon storage, as well as direct emissions of greenhouse gases and aerosols to the atmosphere. At present, the global scientific community is highly interested in quantitative and accurate estimates of fire emissions. Little research on wildland fuel consumption and carbon emission estimates has been carried out in Russia until recently. From 2000 to 2007 we conducted a series of experimental fires of varying fireline intensity in light-coniferous forest of central Siberia to obtain quantitative and qualitative data on fire behavior and carbon emissions due to fires of known behavior. From 2009 to 2013 we examined a number of burned logged areas to assess the potential impact of forest practices on fire emissions. In 2013-2014 burned areas in dark-coniferous and deciduous forests were examined to determine fuel consumption and carbon emissions. We have combined and analyzed the scarce data available in the literature with data obtained in the course of our long-term research to determine the impact of various factors on fuel consumption and to develop models of carbon emissions for different ecosystems of Siberia. Carbon emissions varied drastically (from 0.5 to 40.9 tC/ha) as a function of vegetation type, weather conditions, anthropogenic effects and fire behavior characteristics and periodicity. Our study provides a basis for better understanding of the feedbacks between wildland fire emissions and changing anthropogenic disturbance patterns and climate. The data obtained could be used by air quality agencies to calculate local emissions and by managers to develop strategies to mitigate negative smoke impacts on the environment and human health.
Loescher, Henry; Ayres, Edward; Duffy, Paul; Luo, Hongyan; Brunke, Max
2014-01-01
Soils are highly variable at many spatial scales, which makes designing studies to accurately estimate the mean value of soil properties across space challenging. The spatial correlation structure is critical to develop robust sampling strategies (e.g., sample size and sample spacing). Current guidelines for designing studies recommend conducting preliminary investigation(s) to characterize this structure, but are rarely followed and sampling designs are often defined by logistics rather than quantitative considerations. The spatial variability of soils was assessed across ∼1 ha at 60 sites. Sites were chosen to represent key US ecosystems as part of a scaling strategy deployed by the National Ecological Observatory Network. We measured soil temperature (Ts) and water content (SWC) because these properties mediate biological/biogeochemical processes below- and above-ground, and quantified spatial variability using semivariograms to estimate spatial correlation. We developed quantitative guidelines to inform sample size and sample spacing for future soil studies, e.g., 20 samples were sufficient to measure Ts to within 10% of the mean with 90% confidence at every temperate and sub-tropical site during the growing season, whereas an order of magnitude more samples were needed to meet this accuracy at some high-latitude sites. SWC was significantly more variable than Ts at most sites, resulting in at least 10× more SWC samples needed to meet the same accuracy requirement. Previous studies investigated the relationship between the mean and variability (i.e., sill) of SWC across space at individual sites across time and have often (but not always) observed the variance or standard deviation peaking at intermediate values of SWC and decreasing at low and high SWC. Finally, we quantified how far apart samples must be spaced to be statistically independent. Semivariance structures from 10 of the 12-dominant soil orders across the US were estimated, advancing our continental-scale understanding of soil behavior. PMID:24465377
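As a concrete version of the sample-size reasoning used here, the sketch below estimates how many samples are needed to pin down a mean to within 10% with 90% confidence from a pilot coefficient of variation; the pilot values are invented and the normal-approximation formula ignores spatial correlation.

```python
import numpy as np
from scipy import stats

def n_required(cv, rel_tol=0.10, confidence=0.90):
    """Samples needed so the sample mean is within rel_tol of the true mean at the
    given confidence (independent-sample normal approximation; spatial correlation
    ignored)."""
    z = stats.norm.ppf(0.5 + confidence / 2.0)
    return int(np.ceil((z * cv / rel_tol) ** 2))

# Invented pilot coefficients of variation: soil water content far more variable than temperature.
print("Ts  samples:", n_required(cv=0.25))
print("SWC samples:", n_required(cv=0.80))
```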
Empirical methods in the evaluation of estimators
Gerald S. Walton; C.J. DeMars; C.J. DeMars
1973-01-01
The authors discuss the problem of selecting estimators of density and survival by making use of data on a forest-defoliating larva, the spruce budworm. Various estimators are compared. The results show that, among the estimators considered, ratio-type estimators are superior in terms of bias and variance. The methods used in making comparisons, particularly simulation...
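A small sketch of the ratio-type estimation being compared here: estimate a population total for y by scaling an auxiliary variable x whose population total is known. The data, the known total, and the large-sample variance formula (simple random sampling, finite-population correction ignored) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical sample: x = foliage area inspected, y = larvae counted per sample unit.
x = rng.gamma(4.0, 2.0, size=60)
y = 3.0 * x + rng.normal(0, 2.0, size=60)
X_total = 8000.0      # assumed known population total of the auxiliary variable

r_hat = y.sum() / x.sum()            # ratio estimator of the rate
y_total_hat = r_hat * X_total        # ratio estimate of the population total

# Approximate standard error of the estimated rate.
d = y - r_hat * x
se_rate = np.sqrt(np.var(d, ddof=1) / len(x)) / x.mean()
print(f"rate = {r_hat:.2f} +/- {1.96 * se_rate:.2f}, estimated total = {y_total_hat:,.0f}")
```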
Jin, Yan; Huang, Jing-feng; Peng, Dai-liang
2009-01-01
Ecological compensation is becoming one of the key, multidisciplinary issues in the field of resources and environmental management. Considering the changing relation between gross domestic product (GDP) and ecological capital (EC) based on remote sensing estimation, we construct a new quantitative estimation model for ecological compensation, using the county as the study unit, and determine a standard value so as to evaluate ecological compensation from 2001 to 2004 in Zhejiang Province, China. Spatial differences in ecological compensation were significant among the counties and districts. This model fills a gap in the field of quantitative evaluation of regional ecological compensation and provides a feasible way to reconcile the conflicts among benefits in the economic, social, and ecological sectors. PMID:19353749
Connecting the Dots: Linking Environmental Justice Indicators to Daily Dose Model Estimates
Many different quantitative techniques have been developed to either assess Environmental Justice (EJ) issues or estimate exposure and dose for risk assessment. However, very few approaches have been applied to link EJ factors to exposure dose estimate and identify potential impa...
Predicting S2S in Deep Time Sedimentary Systems and Implications for Petroleum Systems
NASA Astrophysics Data System (ADS)
Bhattacharya, J.
2013-12-01
The source to sink concept is focused on quantification of the various components of siliciclastic sedimentary systems from initial source areas, through the dispersal system, and deposition within a number of potential ultimate sedimentary sinks. Sequence stratigraphy shows that depositional systems are linked through time and show distinctly predictable 3D stratigraphic organization, which can be related to cycles of relative changes in accommodation and sediment supply. For example, erosion and formation of incised fluvial valleys generally occur during periods of falling base level with lowstand reservoir deposits favored in more basin distal settings (e.g. deepwater fans), whereas during highstands of sea level, significantly more sediment may be sequestered in the non-marine realm and more distal environments may favor deposition of slowly-deposited condensed sections, which may make excellent hydrocarbon source rocks. Only more recently have attempts been made to quantify the size and scaling relationships of the ultimate source areas on the basis of analysis of ancient depositional systems, and the use of these scaling relationships to predict the size of linked depositional systems along the S2S tract. The maximum size of depositional systems, such as rivers, deltas, and submarine fans, is significantly controlled by the area, relief, and climate regime of the source area, which in turn may be linked to the plate tectonic and paleogeographic setting. Classic provenance studies, and more recent use of detrital zircons, provide critical information about source areas, and may help place limits on the size and relief of a drainage basin. Provenance studies may also provide key information about rates of exhumation of source areas and the link to the tectonic setting. Examination of ancient river systems in the rock record, and especially the largest trunk rivers, which are typically within incised valleys, can also be used to estimate paleodischarge, which in turn can be linked to the drainage basin to make estimates about the size and scale of the source area. The best estimates can be made in basins with well-constrained data that allow details of cross-sectional or plan-view channel-architecture to be determined, such as extensive outcrops, or abundant subsurface data, and especially where higher resolution 3D seismic data are available. Paleodischarge estimates of lowstand Quaternary-age continental-scale ancient rivers from passive continental margins, using seismic data, are orders-of-magnitude higher (1000's of cumecs) than smaller-scale Cretaceous lowstand systems that drained into the Western-Interior Seaway of North America (100s of cumecs). Paleodischarge of rivers can also be estimated independently by integrating estimates of drainage basin area and paleoclimate. These can be compared with paleodischarge estimates based on the river deposits themselves. The integration of paleodischarge estimates with more sophisticated provenance analysis should enable improved use of the sedimentary record to make estimates about the entire S2S system, as opposed to primarily the depositional component. A more quantitative approach to estimating the scale of sedimentary systems, and especially in the context of source areas, also puts constraints on the size and scale of potential hydrocarbon reservoirs and thus has economic value.
A quantitative method for evaluating alternatives. [aid to decision making
NASA Technical Reports Server (NTRS)
Forthofer, M. J.
1981-01-01
When faced with choosing between alternatives, people tend to use a number of criteria (often subjective, rather than objective) to decide which is the best alternative for them given their unique situation. The subjectivity inherent in the decision-making process can be reduced by the definition and use of a quantitative method for evaluating alternatives. This type of method can help decision makers achieve a degree of uniformity and completeness in the evaluation process, as well as an increased sensitivity to the factors involved. Additional side-effects are better documentation and visibility of the rationale behind the resulting decisions. General guidelines for defining a quantitative method are presented and a particular method (called 'hierarchical weighted average') is defined and applied to the evaluation of design alternatives for a hypothetical computer system capability.
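A minimal sketch of a hierarchical weighted average of the kind described: criterion scores are combined within groups, then group results are combined with top-level weights. The criteria, weights, and scores are hypothetical, not those of the referenced evaluation.

```python
# Hypothetical two-level criteria hierarchy for comparing design alternatives.
hierarchy = {
    "performance": (0.5, {"throughput": 0.6, "latency": 0.4}),
    "cost":        (0.3, {"hardware": 0.7, "maintenance": 0.3}),
    "risk":        (0.2, {"maturity": 1.0}),
}

def hierarchical_weighted_average(scores):
    """scores: {criterion: value on a common 0-10 scale}."""
    total = 0.0
    for group_weight, criteria in hierarchy.values():
        group_score = sum(w * scores[name] for name, w in criteria.items())
        total += group_weight * group_score
    return total

alternatives = {
    "design A": {"throughput": 8, "latency": 6, "hardware": 4, "maintenance": 7, "maturity": 9},
    "design B": {"throughput": 6, "latency": 8, "hardware": 8, "maintenance": 6, "maturity": 5},
}
for name, s in alternatives.items():
    print(name, round(hierarchical_weighted_average(s), 2))
```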
Generating Linear Equations Based on Quantitative Reasoning
ERIC Educational Resources Information Center
Lee, Mi Yeon
2017-01-01
The Common Core's Standards for Mathematical Practice encourage teachers to develop their students' ability to reason abstractly and quantitatively by helping students make sense of quantities and their relationships within problem situations. The seventh-grade content standards include objectives pertaining to developing linear equations in…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stegen, James C.; Lin, Xueju; Fredrickson, Jim K.
Across a set of ecological communities connected to each other through organismal dispersal (a ‘meta-community’), turnover in composition is governed by (ecological) Drift, Selection, and Dispersal Limitation. Quantitative estimates of these processes remain elusive, but would represent a common currency needed to unify community ecology. Using a novel analytical framework we quantitatively estimate the relative influences of Drift, Selection, and Dispersal Limitation on subsurface, sediment-associated microbial meta-communities. The communities we study are distributed across two geologic formations encompassing ~12,500 m3 of uranium-contaminated sediments within the Hanford Site in eastern Washington State. We find that Drift consistently governs ~25% of spatial turnover in community composition; Selection dominates (governing ~60% of turnover) across spatially-structured habitats associated with fine-grained, low permeability sediments; and Dispersal Limitation is most influential (governing ~40% of turnover) across spatially-unstructured habitats associated with coarse-grained, highly-permeable sediments. Quantitative influences of Selection and Dispersal Limitation may therefore be predictable from knowledge of environmental structure. To develop a system-level conceptual model we extend our analytical framework to compare process estimates across formations, characterize measured and unmeasured environmental variables that impose Selection, and identify abiotic features that limit dispersal. Insights gained here suggest that community ecology can benefit from a shift in perspective; the quantitative approach developed here goes beyond the ‘niche vs. neutral’ dichotomy by moving towards a style of natural history in which estimates of Selection, Dispersal Limitation and Drift can be described, mapped and compared across ecological systems.
Bayesian parameter estimation in spectral quantitative photoacoustic tomography
NASA Astrophysics Data System (ADS)
Pulkkinen, Aki; Cox, Ben T.; Arridge, Simon R.; Kaipio, Jari P.; Tarvainen, Tanja
2016-03-01
Photoacoustic tomography (PAT) is an imaging technique combining strong contrast of optical imaging to high spatial resolution of ultrasound imaging. These strengths are achieved via photoacoustic effect, where a spatial absorption of light pulse is converted into a measurable propagating ultrasound wave. The method is seen as a potential tool for small animal imaging, pre-clinical investigations, study of blood vessels and vasculature, as well as for cancer imaging. The goal in PAT is to form an image of the absorbed optical energy density field via acoustic inverse problem approaches from the measured ultrasound data. Quantitative PAT (QPAT) proceeds from these images and forms quantitative estimates of the optical properties of the target. This optical inverse problem of QPAT is illposed. To alleviate the issue, spectral QPAT (SQPAT) utilizes PAT data formed at multiple optical wavelengths simultaneously with optical parameter models of tissue to form quantitative estimates of the parameters of interest. In this work, the inverse problem of SQPAT is investigated. Light propagation is modelled using the diffusion equation. Optical absorption is described with chromophore concentration weighted sum of known chromophore absorption spectra. Scattering is described by Mie scattering theory with an exponential power law. In the inverse problem, the spatially varying unknown parameters of interest are the chromophore concentrations, the Mie scattering parameters (power law factor and the exponent), and Gruneisen parameter. The inverse problem is approached with a Bayesian method. It is numerically demonstrated, that estimation of all parameters of interest is possible with the approach.
EPA's methodology for estimation of inhalation reference concentrations (RfCs) as benchmark estimates of the quantitative dose-response assessment of chronic noncancer toxicity for individual inhaled chemicals.
50 CFR 600.340 - National Standard 7-Costs and Benefits.
Code of Federal Regulations, 2013 CFR
2013-10-01
... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...
50 CFR 600.340 - National Standard 7-Costs and Benefits.
Code of Federal Regulations, 2014 CFR
2014-10-01
... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...
50 CFR 600.340 - National Standard 7-Costs and Benefits.
Code of Federal Regulations, 2011 CFR
2011-10-01
... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...
50 CFR 600.340 - National Standard 7-Costs and Benefits.
Code of Federal Regulations, 2010 CFR
2010-10-01
... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...
50 CFR 600.340 - National Standard 7-Costs and Benefits.
Code of Federal Regulations, 2012 CFR
2012-10-01
... regulation are real and substantial relative to the added research, administrative, and enforcement costs, as..., including the status quo, is adequate. If quantitative estimates are not possible, qualitative estimates...
Measuring Aircraft Capability for Military and Political Analysis
1976-03-01
challenged in 1932 when a panel of distinguished British scientists discussed the feasibility of quantitatively estimating sensory events... Quantitative Analysis of Social Problems , E.R. Tufte (ed.), p. 407, Addison-Wesley, 1970. 17 "artificial" boundaries are imposed on the data. Less...of arms transfers in various parts of the world as well. Quantitative research (and hence measurement) contributes to theoretical development by
Couch, James R; Petersen, Martin; Rice, Carol; Schubauer-Berigan, Mary K
2011-05-01
To construct a job-exposure matrix (JEM) for an Ohio beryllium processing facility between 1953 and 2006 and to evaluate temporal changes in airborne beryllium exposures. Quantitative area- and breathing-zone-based exposure measurements of airborne beryllium were made between 1953 and 2006 and used by plant personnel to estimate daily weighted average (DWA) exposure concentrations for sampled departments and operations. These DWA measurements were used to create a JEM with 18 exposure metrics, which was linked to the plant cohort consisting of 18,568 unique job, department and year combinations. The exposure metrics ranged from quantitative metrics (annual arithmetic/geometric average DWA exposures, maximum DWA and peak exposures) to descriptive qualitative metrics (chemical beryllium species and physical form) to qualitative assignment of exposure to other risk factors (yes/no). Twelve collapsed job titles with long-term consistent industrial hygiene samples were evaluated using regression analysis for time trends in DWA estimates. Annual arithmetic mean DWA estimates (overall plant-wide exposures including administration, non-production, and production estimates) for the data by decade ranged from a high of 1.39 μg/m(3) in the 1950s to a low of 0.33 μg/m(3) in the 2000s. Of the 12 jobs evaluated for temporal trend, the average arithmetic DWA mean was 2.46 μg/m(3) and the average geometric mean DWA was 1.53 μg/m(3). After the DWA calculations were log-transformed, 11 of the 12 had a statistically significant (p < 0.05) decrease in reported exposure over time. The constructed JEM successfully differentiated beryllium exposures across jobs and over time. This is the only quantitative JEM containing exposure estimates (average and peak) for the entire plant history.
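The temporal-trend test described here amounts to regressing log-transformed DWA values on calendar year; the sketch below does that for one hypothetical job title with simulated measurements, not the actual plant data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Simulated annual DWA beryllium estimates (ug/m^3) for one job title, 1953-2006.
years = np.arange(1953, 2007)
dwa = np.exp(np.log(2.5) - 0.03 * (years - 1953) + rng.normal(0, 0.35, years.size))

res = stats.linregress(years, np.log(dwa))       # log-linear trend over time
pct_per_year = (np.exp(res.slope) - 1) * 100
print(f"trend: {pct_per_year:+.1f}%/year, p = {res.pvalue:.3g}")
```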
Temporal Data Set Reduction Based on D-Optimality for Quantitative FLIM-FRET Imaging.
Omer, Travis; Intes, Xavier; Hahn, Juergen
2015-01-01
Fluorescence lifetime imaging (FLIM) when paired with Förster resonance energy transfer (FLIM-FRET) enables the monitoring of nanoscale interactions in living biological samples. FLIM-FRET model-based estimation methods allow the quantitative retrieval of parameters such as the quenched (interacting) and unquenched (non-interacting) fractional populations of the donor fluorophore and/or the distance of the interactions. The quantitative accuracy of such model-based approaches is dependent on multiple factors such as signal-to-noise ratio and number of temporal points acquired when sampling the fluorescence decays. For high-throughput or in vivo applications of FLIM-FRET, it is desirable to acquire a limited number of temporal points for fast acquisition times. Yet, it is critical to acquire temporal data sets with sufficient information content to allow for accurate FLIM-FRET parameter estimation. Herein, an optimal experimental design approach based upon sensitivity analysis is presented in order to identify the time points that provide the best quantitative estimates of the parameters for a determined number of temporal sampling points. More specifically, the D-optimality criterion is employed to identify, within a sparse temporal data set, the set of time points leading to optimal estimations of the quenched fractional population of the donor fluorophore. Overall, a reduced set of 10 time points (compared to a typical complete set of 90 time points) was identified to have minimal impact on parameter estimation accuracy (≈5%), with in silico and in vivo experiment validations. This reduction of the number of needed time points by almost an order of magnitude allows the use of FLIM-FRET for certain high-throughput applications which would be infeasible if the entire number of time sampling points were used.
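The sketch below illustrates D-optimal selection of a reduced set of time points for a decay model: starting from a candidate grid, points are greedily added to maximize the determinant of the information-like matrix JᵀJ built from numerical parameter sensitivities. The bi-exponential model and parameter values are stand-ins, not the FLIM-FRET model of the paper.

```python
import numpy as np

# Candidate acquisition times and a nominal bi-exponential decay (stand-in model).
t_all = np.linspace(0.1, 10.0, 90)
theta = np.array([0.4, 0.5, 2.5])            # fraction A, lifetime tau1, lifetime tau2

def model(t, p):
    a, tau1, tau2 = p
    return a * np.exp(-t / tau1) + (1 - a) * np.exp(-t / tau2)

def jacobian(t, p, eps=1e-6):
    """Numerical sensitivities d(model)/d(theta) at each candidate time point."""
    cols = []
    for i in range(len(p)):
        dp = p.copy(); dp[i] += eps
        cols.append((model(t, dp) - model(t, p)) / eps)
    return np.column_stack(cols)

J_all = jacobian(t_all, theta)
selected = []
for _ in range(10):                           # pick 10 of the 90 candidate points
    best, best_det = None, -np.inf
    for i in range(len(t_all)):
        if i in selected:
            continue
        J = J_all[selected + [i]]
        # Small ridge keeps the determinant defined while few points are selected.
        det = np.linalg.det(J.T @ J + 1e-12 * np.eye(3))
        if det > best_det:
            best, best_det = i, det
    selected.append(best)
print("selected times:", np.round(t_all[sorted(selected)], 2))
```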
Fernandes, Ricardo; Grootes, Pieter; Nadeau, Marie-Josée; Nehlich, Olaf
2015-07-14
The island cemetery site of Ostorf (Germany) consists of individual human graves containing Funnel Beaker ceramics dating to the Early or Middle Neolithic. However, previous isotope and radiocarbon analysis demonstrated that the Ostorf individuals had a diet rich in freshwater fish. The present study was undertaken to quantitatively reconstruct the diet of the Ostorf population and establish if dietary habits are consistent with the traditional characterization of a Neolithic diet. Quantitative diet reconstruction was achieved through a novel approach consisting of the use of the Bayesian mixing model Food Reconstruction Using Isotopic Transferred Signals (FRUITS) to model isotope measurements from multiple dietary proxies (δ13C and δ15N of collagen, δ13C of bioapatite, δ34S of methionine, and 14C of collagen). The accuracy of model estimates was verified by comparing the agreement between observed and estimated human dietary radiocarbon reservoir effects. Quantitative diet reconstruction estimates confirm that the Ostorf individuals had a high protein intake due to the consumption of fish and terrestrial animal products. However, FRUITS estimates also show that plant foods represented a significant source of calories. Observed and estimated human dietary radiocarbon reservoir effects are in good agreement provided that the aquatic reservoir effect at Lake Ostorf is taken as reference. The Ostorf population apparently adopted elements associated with a Neolithic culture but adapted to available local food resources and implemented a subsistence strategy that involved a large proportion of fish and terrestrial meat consumption. This case study exemplifies the diversity of subsistence strategies followed during the Neolithic. Am J Phys Anthropol, 2015. © 2015 Wiley Periodicals, Inc.
Renewal processes based on generalized Mittag-Leffler waiting times
NASA Astrophysics Data System (ADS)
Cahoy, Dexter O.; Polito, Federico
2013-03-01
The fractional Poisson process has recently attracted experts from several fields of study. Its natural generalization of the ordinary Poisson process makes the model more appealing for real-world applications. In this paper, we generalized the standard and fractional Poisson processes through the waiting time distribution, and showed their relations to an integral operator with a generalized Mittag-Leffler function in the kernel. The waiting times of the proposed renewal processes have the generalized Mittag-Leffler and stretched-squashed Mittag-Leffler distributions. These generalizations naturally provide greater flexibility in modeling real-life renewal processes. Algorithms to simulate sample paths and to estimate the model parameters are derived, making the models readily usable in practice. State probabilities and other qualitative or quantitative features of the models are also discussed.
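For readers who want to experiment with such processes, the sketch below simulates a renewal path driven by Mittag-Leffler waiting times using the standard inversion formula for Mittag-Leffler random variates. It covers only the basic fractional Poisson case, not the generalized or stretched variants introduced in the paper, and all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def mittag_leffler_waiting_times(n, beta, gamma=1.0):
    """Draw n Mittag-Leffler waiting times (0 < beta <= 1) by the standard
    inversion formula; beta = 1 recovers exponential (ordinary Poisson) times."""
    u, v = rng.uniform(size=n), rng.uniform(size=n)
    bracket = np.sin(beta * np.pi) / np.tan(beta * np.pi * v) - np.cos(beta * np.pi)
    return -gamma * np.log(u) * bracket ** (1.0 / beta)

def renewal_path(t_max, beta, gamma=1.0):
    """Event times of the renewal process on [0, t_max]."""
    times, t = [], 0.0
    while True:
        t += mittag_leffler_waiting_times(1, beta, gamma)[0]
        if t > t_max:
            return np.array(times)
        times.append(t)

for b in (1.0, 0.8, 0.6):
    print(f"beta = {b}: {renewal_path(200.0, b).size} events in [0, 200]")
```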
Conducting meta-analyses of HIV prevention literatures from a theory-testing perspective.
Marsh, K L; Johnson, B T; Carey, M P
2001-09-01
Using illustrations from HIV prevention research, the current article advocates approaching meta-analysis as a theory-testing scientific method rather than as merely a set of rules for quantitative analysis. Like other scientific methods, meta-analysis has central concerns with internal, external, and construct validity. The focus of a meta-analysis should only rarely be merely to describe the effects of health promotion; rather, it should be to understand and explain phenomena and the processes underlying them. The methodological decisions meta-analysts make in conducting reviews should be guided by a consideration of the underlying goals of the review (e.g., simple effect size estimation or, preferably, theory testing). From the advocated perspective that a health behavior meta-analyst should test theory, the authors present a number of issues to be considered during the conduct of meta-analyses.
Periotto, N A; Tundisi, J G
2013-08-01
The aim of this study was to identify and make an initial accounting of the ecosystem services of the hydroelectric power generation plant, UHE Carlos Botelho (Itirapina Municipality, São Paulo State, Brazil), and its most extensive wetlands--a total of 2,640 ha--and also to identify the drivers of change of these services. Twenty (20) ecosystem services were identified, and the estimated quantitative total value obtained was USD 120,445,657.87.year(-1), or USD 45,623.35 ha(-1).year(-1). Investments in restoration of spatial heterogeneity along the Tietê-Jacaré hydrographic basin and new technologies for regional economic activities must maintain ecological functions as well as increase marginal values of ecosystem services and the potential annual economic return of ecological functions.
Lee, Eun Gyung; Kim, Seung Won; Feigley, Charles E.; Harper, Martin
2015-01-01
This study introduces two semi-quantitative methods, Structured Subjective Assessment (SSA) and Control of Substances Hazardous to Health (COSHH) Essentials, in conjunction with two-dimensional Monte Carlo simulations for determining prior probabilities. Prior distribution using expert judgment was included for comparison. Practical applications of the proposed methods were demonstrated using personal exposure measurements of isoamyl acetate in an electronics manufacturing facility and of isopropanol in a printing shop. Applicability of these methods in real workplaces was discussed based on the advantages and disadvantages of each method. Although these methods could not be completely independent of expert judgments, this study demonstrated a methodological improvement in the estimation of the prior distribution for the Bayesian decision analysis tool. The proposed methods provide a logical basis for the decision process by considering determinants of worker exposure. PMID:23252451
Comparison of molecular structure of alkali metal o-, m- and p-nitrobenzoates
NASA Astrophysics Data System (ADS)
Regulska, E.; Świsłocka, R.; Samsonowicz, M.; Lewandowski, W.
2008-09-01
The influence of the nitro-substituent in the ortho, meta and para positions, as well as of lithium, sodium, potassium, rubidium and cesium, on the electronic system of the aromatic ring and the distribution of electronic charge in the carboxylic group of the nitrobenzoates was estimated. Optimized geometrical structures were calculated (B3LYP/6-311++G**). To make a quantitative evaluation of the aromaticity of the studied molecules, the geometric (Aj, BAC, I6 and HOMA) as well as magnetic (NICS) aromaticity indices were calculated. Electronic charge distribution was also examined by molecular spectroscopic study, which may serve as a quality criterion for aromaticity. Experimental and theoretical FT-IR, FT-Raman and NMR (1H and 13C) spectra of the title compounds were analyzed. The calculated parameters were compared to experimental characteristics of these molecules.
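For reference, the geometric HOMA index mentioned above can be computed directly from ring bond lengths; the sketch below uses the standard carbocyclic parameterization (alpha = 257.7 Å⁻², R_opt = 1.388 Å for CC bonds) and purely illustrative bond lengths, not the optimized geometries reported in the paper.

```python
def homa(bond_lengths, alpha=257.7, r_opt=1.388):
    """HOMA aromaticity index for a ring of CC bonds (lengths in angstroms).
    HOMA = 1 - (alpha / n) * sum((r_opt - r_i)^2); 1 = fully aromatic, ~0 = localized."""
    n = len(bond_lengths)
    return 1.0 - (alpha / n) * sum((r_opt - r) ** 2 for r in bond_lengths)

# Illustrative ring geometries (hypothetical bond lengths, not the paper's values).
benzene_like = [1.39] * 6
distorted    = [1.36, 1.43, 1.36, 1.43, 1.36, 1.43]
print(round(homa(benzene_like), 3), round(homa(distorted), 3))
```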
Estimating and mapping the population at risk of sleeping sickness.
Simarro, Pere P; Cecchi, Giuliano; Franco, José R; Paone, Massimo; Diarra, Abdoulaye; Ruiz-Postigo, José Antonio; Fèvre, Eric M; Mattioli, Raffaele C; Jannin, Jean G
2012-01-01
Human African trypanosomiasis (HAT), also known as sleeping sickness, persists as a public health problem in several sub-Saharan countries. Evidence-based, spatially explicit estimates of population at risk are needed to inform planning and implementation of field interventions, monitor disease trends, raise awareness and support advocacy. Comprehensive, geo-referenced epidemiological records from HAT-affected countries were combined with human population layers to map five categories of risk, ranging from "very high" to "very low," and to estimate the corresponding at-risk population. Approximately 70 million people distributed over a surface of 1.55 million km(2) are estimated to be at different levels of risk of contracting HAT. Trypanosoma brucei gambiense accounts for 82.2% of the population at risk, the remaining 17.8% being at risk of infection from T. b. rhodesiense. Twenty-one million people live in areas classified as moderate to very high risk, where more than 1 HAT case per 10,000 inhabitants per annum is reported. Updated estimates of the population at risk of sleeping sickness were made, based on quantitative information on the reported cases and the geographic distribution of human population. Due to substantial methodological differences, it is not possible to make direct comparisons with previous figures for at-risk population. By contrast, it will be possible to explore trends in the future. The presented maps of different HAT risk levels will help to develop site-specific strategies for control and surveillance, and to monitor progress achieved by ongoing efforts aimed at the elimination of sleeping sickness.
Fjordic Environments of Scotland: A National Inventory of Sedimentary Blue Carbon.
NASA Astrophysics Data System (ADS)
Smeaton, Craig; Austin, William; Davies, Althea; Baltzer, Agnes; Howe, John
2016-04-01
Coastal sediments potentially hold a significant store of carbon, yet there has been no comprehensive attempt to quantify the carbon held in these stores. Using Scottish sea lochs (fjords), we have established a Holocene record of the quantity and type of carbon held within the sediment store of a typical Scottish sea loch. Through the use of both seismic geophysics and geochemical measurements, we have developed a methodology to make first-order estimations of the carbon held within the sediment of sea lochs. This methodology was applied to four sea lochs with differing geographical locations, catchments and freshwater inputs to produce the first sedimentary Blue Carbon estimates. The resulting carbon inventories show clearly that these sea lochs hold a significant store of sedimentary carbon; for example, Loch Sunart in Argyll stores an estimated 26.88 ± 0.52 Mt C. A direct comparison of the organic carbon content per unit area suggests that sea lochs have a greater OC storage potential than Scottish peatlands on long, Holocene timescales (Loch Sunart = 0.234 Mt OC km-2; peatland = 0.093 Mt OC km-2; Chapman et al. 2009). The carbon values calculated for these sea lochs have been used to estimate the total carbon held within Scotland's 110 sea lochs, and these up-scaled estimations are, for the first time, reviewed in the context of Scotland's known terrestrial stores. Chapman, S. J., Bell, J., Donnelly, D. and Lilly, A.: Carbon stocks in Scottish peatlands, Soil Use Manag., 25(2), 105-112, doi:10.1111/j.1475-2743.2009.00219.x, 2009.
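The first-order inventory logic described above amounts to multiplying a sediment volume by a dry bulk density and an organic carbon fraction. The sketch below illustrates that arithmetic with hypothetical basin values, not the seismically derived volumes or measured carbon contents used in the study.

```python
def carbon_stock_mt(area_km2, mean_thickness_m, dry_bulk_density_g_cm3, oc_fraction):
    """First-order sedimentary carbon stock in megatonnes (Mt C):
    volume (m^3) * dry bulk density (kg/m^3) * OC fraction -> kg C -> Mt C."""
    volume_m3 = area_km2 * 1e6 * mean_thickness_m
    bulk_density_kg_m3 = dry_bulk_density_g_cm3 * 1000.0
    carbon_kg = volume_m3 * bulk_density_kg_m3 * oc_fraction
    return carbon_kg / 1e9  # 1 Mt = 1e9 kg

# Hypothetical sea-loch basin: 30 km^2 of sediment, 20 m mean Holocene thickness,
# 0.9 g/cm^3 dry bulk density, 2% organic carbon by weight.
print(round(carbon_stock_mt(30, 20, 0.9, 0.02), 1), "Mt C")
```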
Seasonal Variability in Global Eddy Diffusion and the Effect on Thermospheric Neutral Density
NASA Astrophysics Data System (ADS)
Pilinski, M.; Crowley, G.
2014-12-01
We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time between January 2004 and January 2008 were estimated from residuals of neutral density measurements made by the CHallenging Minisatellite Payload (CHAMP) and simulations made using the Thermosphere Ionosphere Mesosphere Electrodynamics - Global Circulation Model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy-diffusivity models. The eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the RMS difference between the TIME-GCM model and density data from a variety of satellites is reduced by an average of 5%. This result indicates that global thermospheric density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates how eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are some limitations of this method, which are discussed, including that the latitude-dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion consistent with diffusion observations made by other techniques.
Seasonal variability in global eddy diffusion and the effect on neutral density
NASA Astrophysics Data System (ADS)
Pilinski, M. D.; Crowley, G.
2015-04-01
We describe a method for making single-satellite estimates of the seasonal variability in global-average eddy diffusion coefficients. Eddy diffusion values as a function of time were estimated from residuals of neutral density measurements made by the Challenging Minisatellite Payload (CHAMP) and simulations made using the thermosphere-ionosphere-mesosphere electrodynamics global circulation model (TIME-GCM). The eddy diffusion coefficient results are quantitatively consistent with previous estimates based on satellite drag observations and are qualitatively consistent with other measurement methods such as sodium lidar observations and eddy diffusivity models. Eddy diffusion coefficient values estimated between January 2004 and January 2008 were then used to generate new TIME-GCM results. Based on these results, the root-mean-square sum for the TIME-GCM model is reduced by an average of 5% when compared to density data from a variety of satellites, indicating that the fidelity of global density modeling can be improved by using data from a single satellite like CHAMP. This approach also demonstrates that eddy diffusion could be estimated in near real-time from satellite observations and used to drive a global circulation model like TIME-GCM. Although the use of global values improves modeled neutral densities, there are limitations to this method, which are discussed, including that the latitude dependence of the seasonal neutral-density signal is not completely captured by a global variation of eddy diffusion coefficients. This demonstrates the need for a latitude-dependent specification of eddy diffusion which is also consistent with diffusion observations made by other techniques.
The linearized multistage model and the future of quantitative risk assessment.
Crump, K S
1996-10-01
The linearized multistage (LMS) model has for over 15 years been the default dose-response model used by the U.S. Environmental Protection Agency (USEPA) and other federal and state regulatory agencies in the United States for calculating quantitative estimates of low-dose carcinogenic risks from animal data. The LMS model is in essence a flexible statistical model that can describe both linear and non-linear dose-response patterns, and that produces an upper confidence bound on the linear low-dose slope of the dose-response curve. Unlike its namesake, the Armitage-Doll multistage model, the parameters of the LMS do not correspond to actual physiological phenomena. Thus the LMS is 'biological' only to the extent that the true biological dose response is linear at low dose and that low-dose slope is reflected in the experimental data. If the true dose response is non-linear the LMS upper bound may overestimate the true risk by many orders of magnitude. However, competing low-dose extrapolation models, including those derived from 'biologically-based models' that are capable of incorporating additional biological information, have not shown evidence to date of being able to produce quantitative estimates of low-dose risks that are any more accurate than those obtained from the LMS model. Further, even if these attempts were successful, the extent to which more accurate estimates of low-dose risks in a test animal species would translate into improved estimates of human risk is questionable. Thus, it does not appear possible at present to develop a quantitative approach that would be generally applicable and that would offer significant improvements upon the crude bounding estimates of the type provided by the LMS model. Draft USEPA guidelines for cancer risk assessment incorporate an approach similar to the LMS for carcinogens having a linear mode of action. However, under these guidelines quantitative estimates of low-dose risks would not be developed for carcinogens having a non-linear mode of action; instead dose-response modelling would be used in the experimental range to calculate an LED10* (a statistical lower bound on the dose corresponding to a 10% increase in risk), and safety factors would be applied to the LED10* to determine acceptable exposure levels for humans. This approach is very similar to the one presently used by USEPA for non-carcinogens. Rather than using one approach for carcinogens believed to have a linear mode of action and a different approach for all other health effects, it is suggested herein that it would be more appropriate to use an approach conceptually similar to the 'LED10*-safety factor' approach for all health effects, and not to routinely develop quantitative risk estimates from animal data.
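To make the LMS functional form concrete: the multistage model is P(d) = 1 − exp(−(q0 + q1·d + … + qk·d^k)), and the extra risk over background is 1 − exp(−(q1·d + … + qk·d^k)). The sketch below solves for the dose giving 10% extra risk (ED10) from a set of hypothetical fitted coefficients; a true LED10 would additionally require an upper confidence bound on the fitted coefficients, which is not shown.

```python
import numpy as np
from scipy.optimize import brentq

def extra_risk(d, q):
    """Extra risk of the multistage model, 1 - exp(-(q1*d + q2*d^2 + ...)).
    q = (q0, q1, q2, ...); the background term q0 cancels out of extra risk."""
    poly = sum(qi * d**i for i, qi in enumerate(q) if i >= 1)
    return 1.0 - np.exp(-poly)

def ed10(q, d_max=1e6):
    """Dose giving 10% extra risk (ED10), found by root bracketing."""
    return brentq(lambda d: extra_risk(d, q) - 0.10, 0.0, d_max)

# Hypothetical fitted coefficients (per unit dose), not from any specific data set.
q = (0.01, 2.0e-3, 5.0e-5)
print("ED10 ~", round(ed10(q), 2), "dose units")
```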
2013-01-01
Background Previous studies have reported the lower reference limit (LRL) of quantitative cord glucose-6-phosphate dehydrogenase (G6PD), but they have not used approved international statistical methodology. Using common standards is expected to yield more valid findings. Therefore, we aimed to estimate the LRL of quantitative G6PD detection in healthy term neonates by using statistical analyses endorsed by the International Federation of Clinical Chemistry (IFCC) and the Clinical and Laboratory Standards Institute (CLSI) for reference interval estimation. Methods This cross-sectional retrospective study was performed at King Abdulaziz Hospital, Saudi Arabia, between March 2010 and June 2012. The study monitored consecutive neonates born to mothers from one Arab Muslim tribe that was assumed to have a low prevalence of G6PD deficiency. Neonates that satisfied the following criteria were included: full-term birth (37 weeks); no admission to the special care nursery; no phototherapy treatment; negative direct antiglobulin test; and fathers of female neonates were from the same mothers' tribe. The G6PD activity (Units/gram Hemoglobin) was measured spectrophotometrically by an automated kit. This study used statistical analyses endorsed by IFCC and CLSI for reference interval estimation. The 2.5th percentiles and the corresponding 95% confidence intervals (CI) were estimated as LRLs, both in the presence and in the absence of outliers. Results 207 male and 188 female term neonates who had cord blood quantitative G6PD testing met the inclusion criteria. Horn's method detected 20 G6PD values as outliers (8 males and 12 females). The quantitative cord G6PD values were normally distributed only in the absence of the outliers. The Harris-Boyd method and proportion criteria revealed that combined gender LRLs were reliable. The combined bootstrap LRL in the presence of the outliers was 10.0 (95% CI: 7.5-10.7) and the combined parametric LRL in the absence of the outliers was 11.0 (95% CI: 10.5-11.3). Conclusion These results contribute to the LRL of quantitative cord G6PD detection in full-term neonates. They are transferable to another laboratory when pre-analytical factors and testing methods are comparable and the IFCC-CLSI requirements of transference are satisfied. We suggest using the LRL estimated in the absence of the outliers, as mislabeling G6PD-deficient neonates as normal is intolerable, whereas mislabeling G6PD-normal neonates as deficient is tolerable. PMID:24016342
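A minimal sketch of the nonparametric step described above, a 2.5th-percentile lower reference limit with a bootstrap 95% confidence interval, is given below. The simulated G6PD activities and sample size are illustrative, not the study data, and the Horn outlier screen and Harris-Boyd partitioning checks are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def lower_reference_limit(values, n_boot=2000, pct=2.5):
    """Nonparametric LRL (2.5th percentile) with a bootstrap 95% CI."""
    values = np.asarray(values)
    lrl = np.percentile(values, pct)
    boot = np.array([np.percentile(rng.choice(values, size=values.size, replace=True), pct)
                     for _ in range(n_boot)])
    return lrl, np.percentile(boot, [2.5, 97.5])

# Simulated cord-blood G6PD activities (U/g Hb); purely illustrative values.
g6pd = rng.normal(loc=14.0, scale=1.8, size=395)
lrl, ci = lower_reference_limit(g6pd)
print(f"LRL = {lrl:.1f} U/g Hb (95% CI {ci[0]:.1f}-{ci[1]:.1f})")
```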
Waldmann, P; García-Gil, M R; Sillanpää, M J
2005-06-01
Comparison of the level of differentiation at neutral molecular markers (estimated as F(ST) or G(ST)) with the level of differentiation at quantitative traits (estimated as Q(ST)) has become a standard tool for inferring that there is differential selection between populations. We estimated Q(ST) of timing of bud set from a latitudinal cline of Pinus sylvestris with a Bayesian hierarchical variance component method utilizing the information on the pre-estimated population structure from neutral molecular markers. Unfortunately, the between-family variances differed substantially between populations, which resulted in a bimodal posterior of Q(ST) that could not be compared in any sensible way with the unimodal posterior of the microsatellite F(ST). In order to avoid publishing studies with flawed Q(ST) estimates, we recommend that future studies present heritability estimates for each trait and population. Moreover, to detect variance heterogeneity in frequentist methods (ANOVA and REML), it is essential to also check that the residuals are normally distributed and do not follow any systematically deviating trends.
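For concreteness, Q(ST) is commonly computed from variance components as V_between / (V_between + 2·V_A), where V_A is the additive genetic variance within populations. The sketch below applies that formula to hypothetical variance components; it does not reproduce the Bayesian hierarchical estimation used in the paper.

```python
def q_st(var_between_pop, var_additive_within):
    """Q_ST = V_between / (V_between + 2 * V_A_within) for a diploid, outcrossing species."""
    return var_between_pop / (var_between_pop + 2.0 * var_additive_within)

# Hypothetical variance components for timing of bud set (arbitrary units).
v_between = 4.2   # among-population genetic variance
v_additive = 3.1  # additive genetic variance within populations
print(f"Q_ST = {q_st(v_between, v_additive):.2f}, to be compared with a neutral F_ST (e.g., 0.02)")
```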
Robust estimation of adaptive tensors of curvature by tensor voting.
Tong, Wai-Shun; Tang, Chi-Keung
2005-03-01
Although curvature estimation from a given mesh or regularly sampled point set is a well-studied problem, it is still challenging when the input consists of a cloud of unstructured points corrupted by misalignment error and outlier noise. Such input is ubiquitous in computer vision. In this paper, we propose a three-pass tensor voting algorithm to robustly estimate curvature tensors, from which accurate principal curvatures and directions can be calculated. Our quantitative estimation is an improvement over the previous two-pass algorithm, where only qualitative curvature estimation (sign of Gaussian curvature) is performed. To overcome misalignment errors, our improved method automatically corrects input point locations at subvoxel precision, which also rejects outliers that are uncorrectable. To adapt to different scales locally, we define the RadiusHit of a curvature tensor to quantify estimation accuracy and applicability. Our curvature estimation algorithm has been proven with detailed quantitative experiments, performing better in a variety of standard error metrics (percentage error in curvature magnitudes, absolute angle difference in curvature direction) in the presence of a large amount of misalignment noise.
Probing lipid membrane electrostatics
NASA Astrophysics Data System (ADS)
Yang, Yi
The electrostatic properties of lipid bilayer membranes play a significant role in many biological processes. Atomic force microscopy (AFM) is highly sensitive to membrane surface potential in electrolyte solutions. With fully characterized probe tips, AFM can perform quantitative electrostatic analysis of lipid membranes. Electrostatic interactions between silicon nitride probes and a supported zwitterionic dioleoylphosphatidylcholine (DOPC) bilayer with a variable fraction of anionic dioleoylphosphatidylserine (DOPS) were measured by AFM. Classical Gouy-Chapman theory was used to model the membrane electrostatics. The nonlinear Poisson-Boltzmann equation was numerically solved with a finite element method to provide the potential distribution around the AFM tips. Theoretical tip-sample electrostatic interactions were calculated with the surface integral of both the Maxwell and osmotic stress tensors on the tip surface. The measured forces were interpreted with the theoretical forces, and the resulting surface charge densities of the membrane surfaces were in quantitative agreement with the Gouy-Chapman-Stern model of membrane charge regulation. It was demonstrated that the AFM can quantitatively detect membrane surface potential at a separation of several screening lengths, and that the AFM probe only perturbs the membrane surface potential by <2%. One important application of this technique is to estimate the dipole density of lipid membranes. Electrostatic analysis of DOPC lipid bilayers with the AFM reveals a repulsive force between the negatively charged probe tips and the zwitterionic lipid bilayers. This unexpected interaction has been analyzed quantitatively to reveal that the repulsion is due to a weak external field created by the internal membrane dipole moment. The analysis yields a dipole moment of 1.5 Debye per lipid with a dipole potential of +275 mV for supported DOPC membranes. This new ability to quantitatively measure the membrane dipole density in a noninvasive manner will be useful in identifying the biological effects of the dipole potential. Finally, heterogeneous model membranes were studied with fluid electric force microscopy (FEFM). Electrostatic mapping was demonstrated with 50 nm resolution. The capabilities of quantitative electrostatic measurement and lateral charge density mapping make AFM a unique and powerful probe of membrane electrostatics.
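The Gouy-Chapman step referred to above is often summarized by the Grahame equation, σ = sqrt(8·εr·ε0·kB·T·n0)·sinh(z·e·ψ0 / 2kB·T), for a symmetric z:z electrolyte. The sketch below inverts it numerically to recover a surface potential from an assumed charge density; the lipid area, charged fraction and electrolyte concentration are hypothetical, and the full finite-element tip-sample calculation is not reproduced.

```python
import numpy as np
from scipy.optimize import brentq

# Physical constants (SI)
e, kB, eps0, NA = 1.602e-19, 1.381e-23, 8.854e-12, 6.022e23

def grahame_sigma(psi0, c_molar, T=298.0, eps_r=78.5, z=1):
    """Surface charge density (C/m^2) from surface potential psi0 (V), z:z electrolyte."""
    n0 = c_molar * 1e3 * NA                      # ions per m^3
    return np.sqrt(8 * eps_r * eps0 * kB * T * n0) * np.sinh(z * e * psi0 / (2 * kB * T))

def surface_potential(sigma, c_molar, **kw):
    """Invert the Grahame equation numerically to obtain psi0 (V)."""
    return brentq(lambda p: grahame_sigma(p, c_molar, **kw) - sigma, -0.5, 0.5)

# Hypothetical membrane: 10 mol% anionic lipid, ~0.7 nm^2 per lipid, 0.1 M 1:1 electrolyte.
sigma = -0.10 * e / 0.7e-18                      # C/m^2
print(f"sigma = {sigma:.3f} C/m^2 -> psi0 = {1e3 * surface_potential(sigma, 0.1):.1f} mV")
```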
Austin, Peter C
2018-01-01
Propensity score methods are frequently used to estimate the effects of interventions using observational data. The propensity score was originally developed for use with binary exposures. The generalized propensity score (GPS) is an extension of the propensity score for use with quantitative or continuous exposures (e.g., pack-years of cigarettes smoked, dose of medication, or years of education). We describe how the GPS can be used to estimate the effect of continuous exposures on survival or time-to-event outcomes. To do so, we modified the concept of the dose-response function for use with time-to-event outcomes. We used Monte Carlo simulations to examine the performance of different methods of using the GPS to estimate the effect of quantitative exposures on survival or time-to-event outcomes. We examined covariate adjustment using the GPS and weighting using weights based on the inverse of the GPS. The use of methods based on the GPS was compared with the use of conventional G-computation and weighted G-computation. Conventional G-computation resulted in estimates of the dose-response function that displayed the lowest bias and the lowest variability. Of the two GPS-based methods, covariate adjustment using the GPS tended to have the better performance. We illustrate the application of these methods by estimating the effect of average neighbourhood income on the probability of survival following hospitalization for an acute myocardial infarction.
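As a toy illustration of the GPS machinery (using a continuous outcome rather than the survival setting studied in the paper), the sketch below estimates the GPS from a normal linear model of exposure given covariates and then applies stabilized inverse-GPS weights in a weighted regression of outcome on exposure. All data are simulated, so the true effect is known by construction.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000

# Simulated covariates, continuous exposure, and outcome (all hypothetical).
x = rng.normal(size=(n, 2))
exposure = 1.0 + 0.8 * x[:, 0] - 0.5 * x[:, 1] + rng.normal(size=n)
outcome = 2.0 + 1.5 * exposure + 1.0 * x[:, 0] + rng.normal(size=n)

# GPS: conditional density of exposure given covariates (normal linear model).
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, exposure, rcond=None)
resid = exposure - X @ beta
gps = norm.pdf(exposure, loc=X @ beta, scale=resid.std())

# Stabilized inverse-GPS weights: marginal density of exposure over the GPS,
# truncated at the 99th percentile to tame extreme weights.
weights = norm.pdf(exposure, loc=exposure.mean(), scale=exposure.std()) / gps
weights = np.minimum(weights, np.quantile(weights, 0.99))

# Weighted regression of outcome on exposure estimates the dose-response slope.
W = np.sqrt(weights)[:, None]
D = np.column_stack([np.ones(n), exposure])
theta, *_ = np.linalg.lstsq(W * D, W[:, 0] * outcome, rcond=None)
print("weighted exposure effect:", round(theta[1], 2), "(data simulated with a true effect of 1.5)")
```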
Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin
2015-11-01
Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates, to a level on par with the best solutions obtained from the population-based methods, while maintaining high computational speed. These results suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the parameter search space vast. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
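A minimal sketch of the hybrid strategy discussed above, applied to a toy one-gene negative-autoregulation model rather than any circuit from the paper: a population-based global search (differential evolution) over a broad parameter box is followed by a fast local refinement started from the best candidate.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution, minimize

def simulate(params, t_eval, x0=0.1):
    """Toy negatively autoregulated gene: dx/dt = k1 / (1 + x^2) - k2 * x."""
    k1, k2 = params
    sol = solve_ivp(lambda t, x: [k1 / (1.0 + x[0] ** 2) - k2 * x[0]],
                    (t_eval[0], t_eval[-1]), [x0], t_eval=t_eval, rtol=1e-8)
    return sol.y[0]

rng = np.random.default_rng(3)
t = np.linspace(0.0, 10.0, 25)
data = simulate((2.0, 0.5), t) + rng.normal(scale=0.05, size=t.size)  # noisy "expression" data

def sse(params):
    """Sum of squared errors between the simulated model and the data."""
    return np.sum((simulate(params, t) - data) ** 2)

# Stage 1: population-based global search over a broad parameter box.
bounds = [(0.01, 10.0), (0.01, 5.0)]
global_fit = differential_evolution(sse, bounds, maxiter=50, seed=3, polish=False)

# Stage 2: fast local refinement started from the best population member.
local_fit = minimize(sse, global_fit.x, method="Nelder-Mead")
print("global:", np.round(global_fit.x, 3), "-> refined:", np.round(local_fit.x, 3))
```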
A Satellite Infrared Technique for Diurnal Rainfall Variability Studies
NASA Technical Reports Server (NTRS)
Anagnostou, Emmanouil
1998-01-01
Reliable information on the distribution of precipitation at high temporal resolution (
Green, Mark B; Campbell, John L; Yanai, Ruth D; Bailey, Scott W; Bailey, Amey S; Grant, Nicholas; Halm, Ian; Kelsey, Eric P; Rustad, Lindsey E
2018-01-01
The design of a precipitation monitoring network must balance the demand for accurate estimates with the resources needed to build and maintain the network. If there are changes in the objectives of the monitoring or the availability of resources, network designs should be adjusted. At the Hubbard Brook Experimental Forest in New Hampshire, USA, precipitation has been monitored with a network established in 1955 that has grown to 23 gauges distributed across nine small catchments. This high sampling intensity allowed us to simulate reduced sampling schemes and thereby evaluate the effect of decommissioning gauges on the quality of precipitation estimates. We considered all possible scenarios of sampling intensity for the catchments on the south-facing slope (2047 combinations) and the north-facing slope (4095 combinations), from the current scenario with 11 or 12 gauges to only 1 gauge remaining. Gauge scenarios differed by as much as 6.0% from the best estimate (based on all the gauges), depending on the catchment, but 95% of the scenarios gave estimates within 2% of the long-term average annual precipitation. The insensitivity of precipitation estimates and the catchment fluxes that depend on them under many reduced monitoring scenarios allowed us to base our reduction decision on other factors such as technician safety, the time required for monitoring, and co-location with other hydrometeorological measurements (snow, air temperature). At Hubbard Brook, precipitation gauges could be reduced from 23 to 10 with a change of <2% in the long-term precipitation estimates. The decision-making approach illustrated in this case study is applicable to the redesign of monitoring networks when reduction of effort seems warranted.
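The subset analysis described above can be reproduced in miniature by enumerating gauge combinations and comparing each subset's long-term mean to the all-gauge estimate. The sketch below does this for one hypothetical 11-gauge slope with simulated annual totals, not the Hubbard Brook record.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)

# Hypothetical record: annual precipitation (mm) for 20 years at 11 gauges on one slope,
# built from a shared year-to-year signal plus gauge-level noise.
n_years, n_gauges = 20, 11
annual = rng.normal(loc=1400, scale=120, size=(n_years, n_gauges)) \
         + rng.normal(scale=40, size=(n_years, 1))

best = annual.mean()  # long-term estimate using every gauge

worst_pct_diff = {}
for k in range(1, n_gauges + 1):
    diffs = [abs(annual[:, list(c)].mean() - best) / best * 100
             for c in combinations(range(n_gauges), k)]
    worst_pct_diff[k] = max(diffs)

for k, d in worst_pct_diff.items():
    print(f"{k:2d} gauges retained: worst-case difference {d:.2f}% from the all-gauge mean")
```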
Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C
2015-04-13
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective method to evaluate these systems is by comparing their performance in the end task that is required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, very rarely is this gold standard known in human studies. Thus, no-gold-standard techniques to optimize and evaluate systems and algorithms in the absence of gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
Quantitative Reasoning and the Sine Function: The Case of Zac
ERIC Educational Resources Information Center
Moore, Kevin C.
2014-01-01
A growing body of literature has identified quantitative and covariational reasoning as critical for secondary and undergraduate student learning, particularly for topics that require students to make sense of relationships between quantities. The present study extends this body of literature by characterizing an undergraduate precalculus…
Decision support intended to improve ecosystem sustainability requires that we link stakeholder priorities directly to quantitative tools and measures of desired outcomes. Actions taken at the community level can have large impacts on production and delivery of ecosystem service...
77 FR 72831 - Agency Information Collection Activities: Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-06
... consumers to make better informed financial decisions. Together with the CFPB's Office of Research, OFE is also responsible for conducting ``research related to consumer financial education and counseling... quantitative data through in-person and telephone surveys. The information collected through quantitative...
NASA Astrophysics Data System (ADS)
Secan, James A.; Reinleitner, Lee A.; Bussey, Robert M.
1990-03-01
Modern military communication, navigation, and surveillance systems depend on reliable, noise-free transionospheric radio frequency channels. These channels can be severely impacted by small-scale electron-density irregularities in the ionosphere, which cause both phase and amplitude scintillation. The basic tools used in planning and mitigation schemes are climatological in nature and thus may greatly over- or under-estimate the effects of scintillation in a given scenario. The results of a 3-year investigation into the feasibility of using in-situ observations of the ionosphere from the USAF DMSP satellite to calculate estimates of irregularity parameters that could be used to update scintillation models in near real time are summarized. Estimates for the level of intensity and phase scintillation on a transionospheric UHF radio link in the early evening auroral zone were calculated from DMSP Scintillation Meter (SM) data and compared to the levels actually observed. The intensity scintillation levels predicted and observed compared quite well, but the comparison with the phase scintillation data was complicated by low-frequency phase noise on the UHF radio link. Results are presented from analysis of DMSP SSIES data collected near Kwajalein Island in conjunction with a propagation-effects experiment. Preliminary conclusions of the assessment study are: (1) the DMSP SM data can be used to make quantitative estimates of the level of scintillation at auroral latitudes, and (2) it may be possible to use the data as a qualitative indicator of scintillation activity levels at equatorial latitudes.
Reduction of bias and variance for evaluation of computer-aided diagnostic schemes.
Li, Qiang; Doi, Kunio
2006-04-01
Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists in detecting various lesions in medical images. In addition to the development, an equally important problem is the reliable evaluation of the performance levels of various CAD schemes. It is good to see that more and more investigators are employing more reliable evaluation methods such as leave-one-out and cross validation, instead of less reliable methods such as resubstitution, for assessing their CAD schemes. However, the common applications of leave-one-out and cross-validation evaluation methods do not necessarily imply that the estimated performance levels are accurate and precise. Pitfalls often occur in the use of leave-one-out and cross-validation evaluation methods, and they lead to unreliable estimation of performance levels. In this study, we first identified a number of typical pitfalls for the evaluation of CAD schemes, and conducted a Monte Carlo simulation experiment for each of the pitfalls to demonstrate quantitatively the extent of bias and/or variance caused by the pitfall. Our experimental results indicate that considerable bias and variance may exist in the estimated performance levels of CAD schemes if one employs various flawed leave-one-out and cross-validation evaluation methods. In addition, for promoting and utilizing a high standard for reliable evaluation of CAD schemes, we attempt to make recommendations, whenever possible, for overcoming these pitfalls. We believe that, with the recommended evaluation methods, we can considerably reduce the bias and variance in the estimated performance levels of CAD schemes.
Design of a practical model-observer-based image quality assessment method for CT imaging systems
NASA Astrophysics Data System (ADS)
Tseng, Hsin-Wu; Fan, Jiahua; Cao, Guangzhi; Kupinski, Matthew A.; Sainath, Paavana
2014-03-01
The channelized Hotelling observer (CHO) is a powerful method for quantitative image quality evaluations of CT systems and their image reconstruction algorithms. It has recently been used to validate the dose reduction capability of iterative image-reconstruction algorithms implemented on CT imaging systems. The use of the CHO for routine and frequent system evaluations is desirable both for quality assurance evaluations and for further system optimizations. The use of channels substantially reduces the amount of data required to achieve accurate estimates of observer performance. However, the number of scans required is still large even with the use of channels. This work explores different data reduction schemes and designs a new approach that requires only a few CT scans of a phantom. For this work, the leave-one-out likelihood (LOOL) method developed by Hoffbeck and Landgrebe is studied as an efficient method of estimating the covariance matrices needed to compute CHO performance. Three different kinds of approaches are included in the study: a conventional CHO estimation technique with a large sample size, a conventional technique with fewer samples, and the new LOOL-based approach with fewer samples. The mean value and standard deviation of the area under the ROC curve (AUC) are estimated by a shuffle method. Both simulation and real data results indicate that an 80% data reduction can be achieved without loss of accuracy. This data reduction makes the proposed approach a practical tool for routine CT system assessment.
Understanding Pre-Quantitative Risk in Projects
NASA Technical Reports Server (NTRS)
Cooper, Lynne P.
2011-01-01
Standard approaches to risk management in projects depend on the ability of teams to identify risks and quantify the probabilities and consequences of these risks (e.g., the 5 x 5 risk matrix). However, long before quantification does - or even can - occur, and long after, teams make decisions based on their pre-quantitative understanding of risk. These decisions can have long-lasting impacts on the project. While significant research has looked at the process of how to quantify risk, our understanding of how teams conceive of and manage pre-quantitative risk is lacking. This paper introduces the concept of pre-quantitative risk and discusses the implications of addressing pre-quantitative risk in projects.
Reiffsteck, A; Dehennin, L; Scholler, R
1982-11-01
Estrone, 2-methoxyestrone and estradiol-17 beta have been definitively identified in seminal plasma of man, bull, boar and stallion by high-resolution gas chromatography associated with selective monitoring of characteristic ions of suitable derivatives. Quantitative estimations were performed by isotope dilution with deuterated analogues and by monitoring molecular ions of trimethylsilyl ethers of labelled and unlabelled compounds. Concentrations of unconjugated and total estrogens are reported together with the statistical evaluation of accuracy and precision.
Planning Robot-Control Parameters With Qualitative Reasoning
NASA Technical Reports Server (NTRS)
Peters, Stephen F.
1993-01-01
Qualitative-reasoning planning algorithm helps to determine quantitative parameters controlling motion of robot. Algorithm regarded as performing search in multidimensional space of control parameters from starting point to goal region in which desired result of robotic manipulation achieved. Makes use of directed graph representing qualitative physical equations describing task, and interacts, at each sampling period, with history of quantitative control parameters and sensory data, to narrow search for reliable values of quantitative control parameters.
2015-12-01
FINAL REPORT: Development and Validation of a Quantitative Framework and Management Expectation Tool for the Selection of Bioremediation ... project ER-201129 was to develop and validate a framework used to make bioremediation decisions based on site-specific physical and biogeochemical
A quantitative microbial risk assessment for center pivot irrigation of dairy wastewaters
USDA-ARS?s Scientific Manuscript database
In the western United States where livestock wastewaters are commonly land applied, there are concerns over individuals being exposed to airborne pathogens. In response, a quantitative microbial risk assessment (QMRA) was performed to estimate infectious risks from inhaling pathogens aerosolized dur...
76 FR 50904 - Thiamethoxam; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-17
... exposure and risk. A separate assessment was done for clothianidin. i. Acute exposure. Quantitative acute... not expected to pose a cancer risk, a quantitative dietary exposure assessment for the purposes of...-dietary sources of post application exposure to obtain an estimate of potential combined exposure. These...
Quantitative Analysis of Radar Returns from Insects
NASA Technical Reports Server (NTRS)
Riley, J. R.
1979-01-01
When the number of flying insects is low enough to permit their resolution as individual radar targets, quantitative estimates of their aerial density are developed. Accurate measurements of heading distribution, using a rotating polarization radar to enhance the wingbeat frequency method of identification, are presented.
Bound Pool Fractions Complement Diffusion Measures to Describe White Matter Micro and Macrostructure
Stikov, Nikola; Perry, Lee M.; Mezer, Aviv; Rykhlevskaia, Elena; Wandell, Brian A.; Pauly, John M.; Dougherty, Robert F.
2010-01-01
Diffusion imaging and bound pool fraction (BPF) mapping are two quantitative magnetic resonance imaging techniques that measure microstructural features of the white matter of the brain. Diffusion imaging provides a quantitative measure of the diffusivity of water in tissue. BPF mapping is a quantitative magnetization transfer (qMT) technique that estimates the proportion of exchanging protons bound to macromolecules, such as those found in myelin, and is thus a more direct measure of myelin content than diffusion. In this work, we combine BPF estimates of macromolecular content with measurements of diffusivity within human white matter tracts. Within the white matter, the correlation between BPFs and diffusivity measures such as fractional anisotropy and radial diffusivity was modest, suggesting that diffusion tensor imaging and bound pool fractions are complementary techniques. We found that several major tracts have high BPF, suggesting a higher density of myelin in these tracts. We interpret these results in the context of a quantitative tissue model. PMID:20828622
ERIC Educational Resources Information Center
Trexler, Grant Lewis
2012-01-01
This dissertation set out to identify effective qualitative and quantitative management tools used by financial officers (CFOs) in carrying out their management functions of planning, decision making, organizing, staffing, communicating, motivating, leading and controlling at a public research university. In addition, impediments to the use of…
Paradigm Privilege: Determining the Value of Research in Teacher Education Policy Making.
ERIC Educational Resources Information Center
Bales, Barbara L.
This paper explains that despite the long debate over the relative value of quantitative and qualitative educational research and attempts to talk across disciplines, quantitative research dominates educational policy circles. As a result, quality qualitative research may not enter into educational policy conversations. The paper discusses whether…
The ease and rapidity of quantitative DNA sequence detection by real-time PCR instruments promises to make their use increasingly common for the microbial analysis of many different types of environmental samples. To fully exploit the capabilities of these instruments, correspondin...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-28
... systems. E. Quantitative Methods for Comparing Capital Frameworks The NPR sought comment on how the... industry while assessing levels of capital. This commenter points out maintaining reliable comparative data over time could make quantitative methods for this purpose difficult. For example, evaluating asset...
Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E
2014-01-01
Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons.
Spectral analysis of pair-correlation bandwidth: application to cell biology images.
Binder, Benjamin J; Simpson, Matthew J
2015-02-01
Images from cell biology experiments often indicate the presence of cell clustering, which can provide insight into the mechanisms driving the collective cell behaviour. Pair-correlation functions provide quantitative information about the presence, or absence, of clustering in a spatial distribution of cells. This is because the pair-correlation function describes the ratio of the abundance of pairs of cells, separated by a particular distance, relative to a randomly distributed reference population. Pair-correlation functions are often presented as a kernel density estimate where the frequency of pairs of objects is grouped using a particular bandwidth (or bin width), Δ>0. The choice of bandwidth has a dramatic impact: choosing Δ too large produces a pair-correlation function that contains insufficient information, whereas choosing Δ too small produces a pair-correlation signal dominated by fluctuations. Presently, there is little guidance available regarding how to make an objective choice of Δ. We present a new technique to choose Δ by analysing the power spectrum of the discrete Fourier transform of the pair-correlation function. Using synthetic simulation data, we confirm that our approach allows us to objectively choose Δ such that the appropriately binned pair-correlation function captures known features in uniform and clustered synthetic images. We also apply our technique to images from two different cell biology assays. The first assay corresponds to an approximately uniform distribution of cells, while the second assay involves a time series of images of a cell population which forms aggregates over time. The appropriately binned pair-correlation function allows us to make quantitative inferences about the average aggregate size, as well as quantifying how the average aggregate size changes with time.
Applications of Microfluidics in Quantitative Biology.
Bai, Yang; Gao, Meng; Wen, Lingling; He, Caiyun; Chen, Yuan; Liu, Chenli; Fu, Xiongfei; Huang, Shuqiang
2018-05-01
Quantitative biology is dedicated to taking advantage of quantitative reasoning and advanced engineering technologies to make biology more predictable. Microfluidics, as an emerging technique, provides new approaches to precisely control fluidic conditions on small scales and collect data in high-throughput and quantitative manners. In this review, the authors present the relevant applications of microfluidics to quantitative biology based on two major categories (channel-based microfluidics and droplet-based microfluidics), and their typical features. We also envision some other microfluidic techniques that may not be employed in quantitative biology right now, but have great potential in the near future. © 2017 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. Biotechnology Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
A prioritization of generic safety issues. Supplement 19, Revision insertion instructions
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1995-11-01
The report presents the safety priority ranking for generic safety issues related to nuclear power plants. The purpose of these rankings is to assist in the timely and efficient allocation of NRC resources for the resolution of those safety issues that have a significant potential for reducing risk. The safety priority rankings are HIGH, MEDIUM, LOW, and DROP, and have been assigned on the basis of risk significance estimates, the ratio of risk to costs and other impacts estimated to result if resolution of the safety issues were implemented, and the consideration of uncertainties and other quantitative or qualitative factors. To the extent practical, estimates are quantitative. This document provides revisions and amendments to the report.
ERIC Educational Resources Information Center
Roessger, Kevin M.
2017-01-01
Translating theory to practice has been a historical concern of adult education. It remains unclear, though, if adult education's theoretical and epistemological focus on meaning making transcends the academy. A manifest content analysis was conducted to determine if the frequency of meaning making language differed between the field's U.S.…
Rayes, Ibrahim K; Hassali, Mohamed A; Abduelkarem, Abduelmula R
2015-01-01
In many developing countries, pharmacists are facing many challenges while they try to enhance the quality of services provided to patients approaching community pharmacies. To explore the perception of community pharmacists in Dubai regarding the obstacles to enhanced pharmacy services, using a part of the results from a nation-wide quantitative survey. A questionnaire was distributed to 281 full-time licensed community pharmacists in Dubai. The questionnaire had 5 inter-linked sections: demographic information, information about the pharmacy, interaction with physicians, pharmacists' current professional role, and barriers to enhanced pharmacy services. About half of the respondents (45.4%, n=90) agreed that pharmacy clients under-estimate them, and 52.5% (n=104) felt under-estimated by physicians. About 47.5% (n=94) of the respondents felt that they are legally unprotected against profession's malpractice. Moreover, 64.7% (n=128) stated that pharmacy practice in Dubai has become business-focused. In addition, 76.8% (n=252) found that one of the major barriers to enhanced pharmacy services is the high cost of running the business. The pharmacists screened tried to show that they themselves are not one of the barriers to optimized pharmacy services: 62.7% (n=124) disagreed that they lack the knowledge needed to serve the community, and 67.7% (n=134) gave the same response when asked whether pharmacy staff lack confidence when treating consumers. Although well established within the community, pharmacists in Dubai negatively perceived their own professional role. They stated that there are a number of barriers which hinder optimized delivery of pharmacy services, such as under-estimation by pharmacy clients and other healthcare professionals, pressure to make sales, and high running costs.
Rayes, Ibrahim K.; Hassali, Mohamed A.; Abduelkarem, Abduelmula R.
2014-01-01
Background: In many developing countries, pharmacists are facing many challenges while they try to enhance the quality of services provided to patients approaching community pharmacies. Objective: To explore the perception of community pharmacists in Dubai regarding the obstacles to enhanced pharmacy services, using a part of the results from a nation-wide quantitative survey. Methods: A questionnaire was distributed to 281 full-time licensed community pharmacists in Dubai. The questionnaire had 5 inter-linked sections: demographic information, information about the pharmacy, interaction with physicians, pharmacists' current professional role, and barriers to enhanced pharmacy services. Results: About half of the respondents (45.4%, n=90) agreed that pharmacy clients under-estimate them, and 52.5% (n=104) felt under-estimated by physicians. About 47.5% (n=94) of the respondents felt that they are legally unprotected against profession's malpractice. Moreover, 64.7% (n=128) stated that pharmacy practice in Dubai has become business-focused. In addition, 76.8% (n=252) found that one of the major barriers to enhanced pharmacy services is the high cost of running the business. The pharmacists screened tried to show that they themselves are not one of the barriers to optimized pharmacy services: 62.7% (n=124) disagreed that they lack the knowledge needed to serve the community, and 67.7% (n=134) gave the same response when asked whether pharmacy staff lack confidence when treating consumers. Conclusions: Although well established within the community, pharmacists in Dubai negatively perceived their own professional role. They stated that there are a number of barriers which hinder optimized delivery of pharmacy services, such as under-estimation by pharmacy clients and other healthcare professionals, pressure to make sales, and high running costs. PMID:26131039
Raina, Manzoor A; Khan, Mosin S; Malik, Showkat A; Raina, Ab Hameed; Makhdoomi, Mudassir J; Bhat, Javed I; Mudassar, Syed
2016-12-01
Cystic Fibrosis (CF) is an autosomal recessive disorder and the incidence of this disease is undermined in Northern India. The distinguishable salty character of the sweat belonging to individuals suffering from CF makes sweat chloride estimation essential for diagnosis of CF disease. The aim of this prospective study was to elucidate the relationship of sweat chloride levels with clinical features and pattern of CF. A total of 182 patients, with clinical features of CF were included in this study for quantitative measurement of sweat chloride. Sweat stimulation and collection involved pilocarpine iontophoresis based on the Gibson and Cooks methodology. The quantitative estimation of chloride was done by Schales and Schales method with some modifications. Cystic Fibrosis Trans Membrane Conductance Regulator (CFTR) mutation status was recorded in case of patients with borderline sweat chloride levels to correlate the results and for follow-up. Out of 182 patients having clinical features consistent with CF, borderline and elevated sweat chloride levels were present in 9 (5%) and 41 (22.5%) subjects respectively. Elevated sweat chloride levels were significantly associated with wheeze, Failure To Thrive (FTT), history of CF in Siblings, product of Consanguineous Marriage (CM), digital clubbing and steatorrhoea on univariate analysis. On multivariate analysis only wheeze, FTT and steatorrhoea were found to be significantly associated with elevated sweat chloride levels (p<0.05). Among the nine borderline cases six cases were positive for at least two CFTR mutations and rest of the three cases were not having any mutation in CFTR gene. The diagnosis is often delayed and the disease is advanced in most patients at the time of diagnosis. Sweat testing is a gold standard for diagnosis of CF patients as genetic mutation profile being heterozygous and unlikely to become diagnostic test.
Fakhri, Georges El
2011-01-01
82Rb cardiac PET allows the assessment of myocardial perfusion using a column generator in clinics that lack a cyclotron. We and others have previously shown that quantitation of myocardial blood flow (MBF) and coronary flow reserve (CFR) is feasible using dynamic 82Rb PET and factor and compartment analyses. The aim of the present work was to determine the intra- and inter-observer variability of MBF estimation using 82Rb PET, as well as the reproducibility of our generalized factor + compartment analysis methodology for estimating MBF, and to assess its accuracy by comparing, in the same subjects, 82Rb estimates of MBF to those obtained using 13N-ammonia. Methods: Twenty-two subjects were included in the reproducibility study and twenty subjects in the validation study. Patients were injected with 60±5 mCi of 82Rb and imaged dynamically for 6 minutes at rest and during dipyridamole stress. Left and right ventricular (LV+RV) time-activity curves were estimated by GFADS and used as input to a 2-compartment kinetic analysis that estimates parametric maps of myocardial tissue extraction (K1) and egress (k2), as well as LV+RV contributions (fv, rv). Results: Our results show excellent reproducibility of the quantitative dynamic approach itself, with coefficients of repeatability of 1.7% for estimation of MBF at rest, 1.4% for MBF at peak stress and 2.8% for CFR estimation. The inter-observer reproducibility between the four observers who participated in this study was also very good, with correlation coefficients greater than 0.87 between any two given observers when estimating coronary flow reserve. The reproducibility of MBF in repeated 82Rb studies was good at rest and excellent at peak stress (r2=0.835). Furthermore, the slope of the correlation line was very close to 1 when estimating stress MBF and CFR in repeated 82Rb studies. The correlation between myocardial flow estimates obtained at rest and during peak stress in 82Rb and 13N-ammonia studies was very good at rest (r2=0.843) and stress (r2=0.761). The Bland-Altman plots show no significant presence of proportional error at rest or stress, nor a dependence of the variations on the amplitude of the myocardial blood flow at rest or stress. A small systematic overestimation of 13N-ammonia MBF was observed with 82Rb at rest (0.129 ml/g/min) and the opposite, i.e., underestimation, at stress (0.22 ml/g/min). Conclusions: Our results show that absolute quantitation of myocardial blood flow is reproducible and accurate with 82Rb dynamic cardiac PET as compared to 13N-ammonia. The reproducibility of the quantitation approach itself was very good, as was the inter-observer reproducibility. PMID:19525467
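To make the compartmental quantitation above concrete, the sketch below fits the extraction (K1) and egress (k2) parameters of a one-tissue-compartment model to a synthetic time-activity curve in Python. The input-function shape, frame timing, noise level and parameter values are illustrative assumptions, not taken from the study, and the GFADS factor-analysis step is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Time grid (minutes) for a 6-minute dynamic acquisition sampled every 5 s.
t = np.arange(0, 6.0, 5.0 / 60.0)

# Hypothetical left-ventricular input function (gamma-variate-like shape).
ca = 1e3 * t ** 2 * np.exp(-t / 0.3)

def tissue_curve(t, K1, k2):
    """One-tissue-compartment model: Ct(t) = K1 * [Ca convolved with exp(-k2 t)](t)."""
    dt = t[1] - t[0]
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(ca, kernel)[: t.size] * dt

# Simulate a "measured" myocardial curve for known K1, k2, plus noise.
true_K1, true_k2 = 0.8, 0.15            # illustrative values only
ct_meas = tissue_curve(t, true_K1, true_k2) + rng.normal(0, 5.0, t.size)

# Nonlinear least-squares fit of K1 and k2.
(K1_hat, k2_hat), _ = curve_fit(tissue_curve, t, ct_meas,
                                p0=[0.5, 0.1], bounds=(0, [5.0, 2.0]))
print(f"K1 = {K1_hat:.3f} (true {true_K1}), k2 = {k2_hat:.3f} (true {true_k2})")
```

In a real 82Rb analysis the fitted K1 would still need to be converted to MBF through an extraction-fraction correction, which is not shown here.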
Quantitative body DW-MRI biomarkers uncertainty estimation using unscented wild-bootstrap.
Freiman, M; Voss, S D; Mulkern, R V; Perez-Rossello, J M; Warfield, S K
2011-01-01
We present a new method for the uncertainty estimation of diffusion parameters for quantitative body DW-MRI assessment. Diffusion parameter uncertainty estimation from DW-MRI is necessary for clinical applications that use these parameters to assess pathology. However, uncertainty estimation using traditional techniques requires repeated acquisitions, which is undesirable in routine clinical use. Model-based bootstrap techniques, for example, assume an underlying linear model for residual rescaling and cannot be utilized directly for body diffusion parameter uncertainty estimation due to the non-linearity of the body diffusion model. To offset this limitation, our method uses the unscented transform to compute the residual rescaling parameters from the non-linear body diffusion model, and then applies the wild-bootstrap method to infer the uncertainty of the body diffusion parameters. Validation through phantom and human subject experiments shows that our method correctly identifies the regions with higher uncertainty in body DW-MRI model parameters, with a relative error of -36% in the uncertainty values.
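As a simplified illustration of residual-based resampling for diffusion-parameter uncertainty, the following sketch applies a wild bootstrap to a log-linearised mono-exponential ADC fit. This is not the unscented-transform procedure of the abstract above (which targets the non-linear body diffusion model); it only demonstrates the generic wild-bootstrap mechanics, with synthetic b-values and signals.

```python
import numpy as np

rng = np.random.default_rng(0)

# b-values (s/mm^2) and a synthetic signal from a mono-exponential decay.
b = np.array([0., 50., 100., 200., 400., 600., 800.])
S0_true, adc_true = 1000.0, 1.2e-3
signal = S0_true * np.exp(-b * adc_true) * rng.normal(1.0, 0.02, b.size)

# Log-linearised fit: ln S = ln S0 - b * ADC (ordinary least squares).
X = np.column_stack([np.ones_like(b), -b])
y = np.log(signal)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fit = X @ beta
resid = y - fit

# Wild bootstrap: flip residual signs with Rademacher weights and refit.
n_boot = 2000
adc_boot = np.empty(n_boot)
for i in range(n_boot):
    w = rng.choice([-1.0, 1.0], size=b.size)
    y_star = fit + w * resid
    beta_star, *_ = np.linalg.lstsq(X, y_star, rcond=None)
    adc_boot[i] = beta_star[1]

print(f"ADC = {beta[1]:.2e}  (wild-bootstrap SD = {adc_boot.std():.1e})")
```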
NASA Astrophysics Data System (ADS)
Sarradj, Ennes
2010-04-01
Phased microphone arrays are used in a variety of applications for the estimation of acoustic source location and spectra. The popular conventional delay-and-sum beamforming methods used with such arrays suffer from inaccurate estimation of absolute source levels and, in some cases, also from low resolution. Deconvolution approaches such as DAMAS have better performance, but require high computational effort. A fast beamforming method is proposed that can be used in conjunction with a phased microphone array in applications focused on the correct quantitative estimation of acoustic source spectra. This method is based on an eigenvalue decomposition of the cross-spectral matrix of microphone signals and uses the eigenvalues from the signal subspace to estimate absolute source levels. The theoretical basis of the method is discussed together with an assessment of the quality of the estimation. Experimental tests using a loudspeaker setup and an airfoil trailing edge noise setup in an aeroacoustic wind tunnel show that the proposed method is robust and leads to reliable quantitative results.
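A minimal numerical sketch of the idea, assuming a single uncorrelated source and spatially white sensor noise: the largest eigenvalue of the cross-spectral matrix carries the source contribution, so the per-microphone source power can be recovered from the signal-subspace eigenvalue after subtracting a noise floor estimated from the remaining eigenvalues. The array geometry and power levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_mics, n_snapshots = 16, 500

# Steering vector for one source (unit-magnitude phase factors, hypothetical geometry).
a = np.exp(1j * rng.uniform(0, 2 * np.pi, n_mics))

# Snapshots: source signal of power p_s plus uncorrelated sensor noise of power p_n.
p_s, p_n = 4.0, 0.5
s = np.sqrt(p_s / 2) * (rng.normal(size=n_snapshots) + 1j * rng.normal(size=n_snapshots))
noise = np.sqrt(p_n / 2) * (rng.normal(size=(n_mics, n_snapshots))
                            + 1j * rng.normal(size=(n_mics, n_snapshots)))
X = np.outer(a, s) + noise

# Cross-spectral matrix averaged over snapshots.
csm = X @ X.conj().T / n_snapshots

# Eigenvalue decomposition: the signal subspace is spanned by the largest eigenvalue(s).
eigvals, eigvecs = np.linalg.eigh(csm)          # ascending order
lam_max = eigvals[-1]
# For one source, lam_max ~ n_mics * p_s + p_n; recover the per-mic source power.
p_s_hat = (lam_max - np.mean(eigvals[:-1])) / n_mics
print(f"estimated source power per mic: {p_s_hat:.2f} (true {p_s})")
```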
2014-05-13
the information needed to effectively (1) manage its assets, (2) assess program performance and make budget decisions, (3) make cost-effective ... decision making ... incorporating key elements of a comprehensive management approach, such as a complete analysis of the return on investment, quantitatively-defined goals
ERIC Educational Resources Information Center
Bermani, Michelle Ines
2017-01-01
In this mixed quantitative and qualitative study, the researcher focused on a range of factors that drive principals' decision-making and examined the variables that affect principals' decision-making. The study assessed the extent to which principals' leadership and decision-making processes exert influence on the operations of inclusion…
Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and limit of detection. We introduce a new framework to compare the performance ...
Code of Federal Regulations, 2010 CFR
2010-07-01
... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...
Code of Federal Regulations, 2013 CFR
2013-07-01
... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...
Code of Federal Regulations, 2014 CFR
2014-07-01
... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...
Code of Federal Regulations, 2011 CFR
2011-07-01
... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...
Code of Federal Regulations, 2012 CFR
2012-07-01
... appropriate for a quantitative survey and include consideration of the methods used in other studies performed... basis for any assumptions and quantitative estimates. If you plan to use an entrainment survival rate...
Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C
2011-09-01
Selective genotyping can increase power in quantitative trait association studies. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased estimate of the genetic effect. Here, we present a simple correction for the bias.
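The bias referred to above can be reproduced in a few lines: simulate a quantitative trait, genotype only the two tails, and run ordinary least squares on the selected sample; the estimated genetic effect is inflated. The allele frequency, effect size and selection fractions below are arbitrary choices for the demonstration, and the correction proposed in the abstract is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
maf, beta = 0.3, 0.25              # allele frequency and true additive effect

g = rng.binomial(2, maf, n)         # genotype coded 0/1/2
y = beta * g + rng.normal(size=n)   # quantitative trait

# Two-tail extreme selection: genotype only the top and bottom 10% of the trait.
lo, hi = np.quantile(y, [0.10, 0.90])
sel = (y <= lo) | (y >= hi)

def ols_slope(x, y):
    """Simple least-squares slope of y on x."""
    x = x - x.mean()
    return np.dot(x, y - y.mean()) / np.dot(x, x)

print("full-sample slope     :", round(ols_slope(g, y), 3))
print("extreme-selected slope:", round(ols_slope(g[sel], y[sel]), 3))  # inflated
```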
Marques, João P; Gener, Isabelle; Ayrault, Philippe; Lopes, José M; Ribeiro, F Ramôa; Guisnet, Michel
2004-10-21
A simple method based on the characterization (composition, Bronsted and Lewis acidities) of acid treated HBEA zeolites was developed for estimating the concentrations of framework, extraframework and defect Al species.
Quantitative Compactness Estimates for Hamilton-Jacobi Equations
NASA Astrophysics Data System (ADS)
Ancona, Fabio; Cannarsa, Piermarco; Nguyen, Khai T.
2016-02-01
We study quantitative compactness estimates in $W^{1,1}_{\mathrm{loc}}$ for the map $S_t$, $t > 0$, that associates with a given initial datum $u_0 \in \mathrm{Lip}(\mathbb{R}^N)$ the corresponding solution $S_t u_0$ of a Hamilton-Jacobi equation $u_t + H(\nabla_x u) = 0$, $t \geq 0$, $x \in \mathbb{R}^N$, with a uniformly convex Hamiltonian $H = H(p)$. We provide upper and lower estimates of order $1/\varepsilon^N$ on the Kolmogorov $\varepsilon$-entropy in $W^{1,1}$ of the image through the map $S_t$ of sets of bounded, compactly supported initial data. Estimates of this type are inspired by a question posed by Lax (Course on Hyperbolic Systems of Conservation Laws. XXVII Scuola Estiva di Fisica Matematica, Ravello, 2002) within the context of conservation laws, and could provide a measure of the order of "resolution" of a numerical method implemented for this equation.
Systems engineering and integration: Cost estimation and benefits analysis
NASA Technical Reports Server (NTRS)
Dean, ED; Fridge, Ernie; Hamaker, Joe
1990-01-01
Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analyses process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are needed as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of an a priori distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample size. The simulation data are analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. For perfect information on the prior distribution, the Bayesian estimation methods perform better than maximum likelihood. The sensitivity analyses show some sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range between the true value and the maximum likelihood estimate.
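As a rough sketch of the kind of Bayesian parameter estimation described above, the code below runs a random-walk Metropolis sampler for the shape and scale of a single Weibull failure model on simulated, uncensored life-test data. The priors, step size and sample sizes are assumptions made for the example; competing risks and censoring are not handled.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated uncensored failure times from a Weibull(shape=1.8, scale=100).
true_shape, true_scale = 1.8, 100.0
data = true_scale * rng.weibull(true_shape, size=60)

def log_likelihood(shape, scale, t):
    """Weibull log-likelihood for uncensored failure times."""
    if shape <= 0 or scale <= 0:
        return -np.inf
    z = t / scale
    return np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape)

def log_prior(shape, scale):
    # Weakly informative priors, normal on the log-parameters (assumed choice).
    return -0.5 * np.log(shape) ** 2 - 0.5 * ((np.log(scale) - np.log(50.0)) / 2.0) ** 2

# Random-walk Metropolis on (log shape, log scale).
n_iter, step = 20000, 0.08
theta = np.log([1.0, 50.0])
samples = np.empty((n_iter, 2))
lp = log_likelihood(*np.exp(theta), data) + log_prior(*np.exp(theta))
for i in range(n_iter):
    prop = theta + step * rng.normal(size=2)
    lp_prop = log_likelihood(*np.exp(prop), data) + log_prior(*np.exp(prop))
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples[i] = np.exp(theta)

post = samples[n_iter // 2:]            # discard burn-in
print("posterior mean (shape, scale):", post.mean(axis=0))
```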
Reliability and precision of pellet-group counts for estimating landscape-level deer density
David S. deCalesta
2013-01-01
This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...
REVIEW OF DRAFT REVISED BLUE BOOK ON ESTIMATING CANCER RISKS FROM EXPOSURE TO IONIZING RADIATION
In 1994, EPA published a report, referred to as the “Blue Book,” which lays out EPA’s current methodology for quantitatively estimating radiogenic cancer risks. A follow-on report made minor adjustments to the previous estimates and presented a partial analysis of the uncertainti...
NASA Astrophysics Data System (ADS)
Maurya, S. P.; Singh, K. H.; Singh, N. P.
2018-05-01
In the present study, three recently developed geostatistical methods, single-attribute analysis, multi-attribute analysis and the probabilistic neural network algorithm, have been used to predict porosity in the inter-well region for the Blackfoot field, Alberta, Canada, an oil field. These techniques make use of seismic attributes generated by model-based inversion and colored inversion techniques. The principal objective of the study is to find a suitable combination of seismic inversion and geostatistical techniques to predict porosity and to identify prospective zones in the 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate and higher resolution porosity sections. A low impedance (6000-8000 m/s g/cc) and high porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections, respectively, in the 1060-1075 ms time interval and is characterized as a reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model-based inversion is the most efficient method for predicting porosity in the inter-well region.
Development of risk-based nanomaterial groups for occupational exposure control
NASA Astrophysics Data System (ADS)
Kuempel, E. D.; Castranova, V.; Geraci, C. L.; Schulte, P. A.
2012-09-01
Given the almost limitless variety of nanomaterials, it will be virtually impossible to assess the possible occupational health hazard of each nanomaterial individually. The development of science-based hazard and risk categories for nanomaterials is needed for decision-making about exposure control practices in the workplace. A possible strategy would be to select representative (benchmark) materials from various mode of action (MOA) classes, evaluate the hazard and develop risk estimates, and then apply a systematic comparison of new nanomaterials with the benchmark materials in the same MOA class. Poorly soluble particles are used here as an example to illustrate quantitative risk assessment methods for possible benchmark particles and occupational exposure control groups, given mode of action and relative toxicity. Linking such benchmark particles to specific exposure control bands would facilitate the translation of health hazard and quantitative risk information to the development of effective exposure control practices in the workplace. A key challenge is obtaining sufficient dose-response data, based on standard testing, to systematically evaluate the nanomaterials' physical-chemical factors influencing their biological activity. Categorization processes involve both science-based analyses and default assumptions in the absence of substance-specific information. Utilizing data and information from related materials may facilitate initial determinations of exposure control systems for nanomaterials.
Silver, Matt; Montana, Giovanni
2012-01-01
Where causal SNPs (single nucleotide polymorphisms) tend to accumulate within biological pathways, the incorporation of prior pathways information into a statistical model is expected to increase the power to detect true associations in a genetic association study. Most existing pathways-based methods rely on marginal SNP statistics and do not fully exploit the dependence patterns among SNPs within pathways. We use a sparse regression model, with SNPs grouped into pathways, to identify causal pathways associated with a quantitative trait. Notable features of our “pathways group lasso with adaptive weights” (P-GLAW) algorithm include the incorporation of all pathways in a single regression model, an adaptive pathway weighting procedure that accounts for factors biasing pathway selection, and the use of a bootstrap sampling procedure for the ranking of important pathways. P-GLAW takes account of the presence of overlapping pathways and uses a novel combination of techniques to optimise model estimation, making it fast to run, even on whole genome datasets. In a comparison study with an alternative pathways method based on univariate SNP statistics, our method demonstrates high sensitivity and specificity for the detection of important pathways, showing the greatest relative gains in performance where marginal SNP effect sizes are small. PMID:22499682
Shi, Lei; Shuai, Jian; Xu, Kui
2014-08-15
Fire and explosion accidents of steel oil storage tanks (FEASOST) occur occasionally during petroleum and chemical industry production and storage processes and often have a devastating impact on lives, the environment and property. To contribute towards the development of a quantitative approach for assessing the occurrence probability of FEASOST, a fault tree of FEASOST is constructed that identifies various potential causes. Traditional fault tree analysis (FTA) can achieve quantitative evaluation if the failure data of all of the basic events (BEs) are available, which is almost impossible due to the lack of detailed data, as well as other uncertainties. This paper attempts to perform FTA of FEASOST through a hybrid application of an expert-elicitation-based improved analytic hierarchy process (AHP) and fuzzy set theory, and the occurrence possibility of FEASOST is estimated for an oil depot in China. A comparison between statistical data and data calculated using fuzzy fault tree analysis (FFTA) based on traditional and improved AHP is also made. Sensitivity and importance analyses have been performed to identify the most crucial BEs leading to FEASOST, which will provide insights into where managers should focus effective mitigation. Copyright © 2014 Elsevier B.V. All rights reserved.
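A toy version of the fuzzy gate arithmetic used in FFTA is sketched below with triangular fuzzy probabilities: AND and OR gates are propagated component-wise and the top event is defuzzified by a centroid. The basic events, numbers and tree structure are invented for illustration and do not correspond to the fault tree or the AHP weighting of the paper.

```python
import numpy as np

# Triangular fuzzy probabilities (low, mode, high) for three basic events;
# illustrative numbers standing in for expert-elicited values.
be = {
    "static_spark":   (1e-4, 5e-4, 1e-3),
    "vapour_release": (2e-3, 6e-3, 1e-2),
    "failed_venting": (5e-4, 2e-3, 5e-3),
}

def fuzzy_and(*events):
    """AND gate: component-wise product of the triangular bounds (approximation)."""
    return tuple(np.prod(np.array(events), axis=0))

def fuzzy_or(*events):
    """OR gate: 1 - prod(1 - p), applied component-wise (approximation)."""
    return tuple(1.0 - np.prod(1.0 - np.array(events), axis=0))

# Hypothetical minimal structure: fire requires a spark AND a vapour release;
# the top event occurs if that fire OR a venting failure occurs.
fire = fuzzy_and(be["static_spark"], be["vapour_release"])
top = fuzzy_or(fire, be["failed_venting"])

# Defuzzify the top event with the centroid of the triangle.
low, mode, high = top
print("top-event fuzzy probability:", top)
print("defuzzified (centroid):     ", (low + mode + high) / 3)
```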
[Image processing applied to the analysis of motion features of cultured rat cardiac myocytes].
Teng, Qizhi; He, Xiaohai; Luo, Daisheng; Wang, Zhengrong; Zhou, Beiyi; Yuan, Zhirun; Tao, Dachang
2007-02-01
Studying the mechanism of drug actions by quantitative analysis of cultured cardiac myocytes is one of the cutting-edge research areas in myocyte dynamics and molecular biology. The fact that cardiac myocytes beat spontaneously without external stimulation makes this line of research meaningful. Analysis of cardiac myocyte morphology and motion using image analysis can reveal the fundamental mechanism of drug actions, increase the accuracy of drug screening, and help design the optimal formulation for the best medical treatment. A system of hardware and software has been built with a complete set of functions including living cardiac myocyte image acquisition, image processing, motion image analysis, and image recognition. In this paper, theories and approaches are introduced for analyzing motion images of living cardiac myocytes and implementing quantitative analysis of cardiac myocyte features. A motion estimation algorithm is used to detect motion vectors at particular points and the beating amplitude and frequency of a cardiac myocyte. The beating of cardiac myocytes is sometimes very small, in which case it is difficult to detect motion vectors from particular points in a time sequence of images. For this reason, image correlation theory is employed to detect the beating frequencies. An active contour algorithm based on an energy function is proposed to approximate the boundary and detect changes in the myocyte edge.
NASA Astrophysics Data System (ADS)
Mai, Fei; Chang, Chunqi; Liu, Wenqing; Xu, Weichao; Hung, Yeung S.
2009-10-01
Due to the inherent imperfections in the imaging process, fluorescence microscopy images often suffer from spurious intensity variations, usually referred to as intensity inhomogeneity, intensity non-uniformity, shading or bias field. In this paper, a retrospective shading correction method for fluorescence microscopy Escherichia coli (E. coli) images is proposed based on the segmentation result. Segmentation and shading correction are coupled together, so we iteratively correct the shading effects based on the segmentation result and refine the segmentation by segmenting the image after shading correction. A fluorescence microscopy E. coli image can be segmented (based on its intensity values) into two classes, the background and the cells, where the intensity variation within each class is close to zero if there is no shading. Therefore, we make use of this characteristic to correct the shading in each iteration. Shading is mathematically modeled as a multiplicative component and an additive noise component. The additive component is removed by a denoising process, and the multiplicative component is estimated using a fast algorithm to minimize the intra-class intensity variation. We tested our method on synthetic images and real fluorescence E. coli images. It works well not only for visual inspection, but also for numerical evaluation. Our proposed method should be useful for further quantitative analysis, especially for protein expression value comparison.
Liu, L; Kan, A; Leckie, C; Hodgkin, P D
2017-04-01
Time-lapse fluorescence microscopy is a valuable technology in cell biology, but it suffers from the inherent problem of intensity inhomogeneity due to uneven illumination or camera nonlinearity, known as shading artefacts. This will lead to inaccurate estimates of single-cell features such as average and total intensity. Numerous shading correction methods have been proposed to remove this effect. In order to compare the performance of different methods, many quantitative performance measures have been developed. However, there is little discussion about which performance measure should be generally applied for evaluation on real data, where the ground truth is absent. In this paper, the state-of-the-art shading correction methods and performance evaluation methods are reviewed. We implement 10 popular shading correction methods on two artificial datasets and four real ones. In order to make an objective comparison between those methods, we employ a number of quantitative performance measures. Extensive validation demonstrates that the coefficient of joint variation (CJV) is the most applicable measure in time-lapse fluorescence images. Based on this measure, we have proposed a novel shading correction method that performs better compared to well-established methods for a range of real data tested. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
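For reference, the coefficient of joint variation compares the within-class spread of two intensity classes to the separation of their means, so lower values indicate better shading correction. A minimal implementation is sketched below, with synthetic foreground and background samples standing in for segmented pixels.

```python
import numpy as np

def cjv(foreground, background):
    """Coefficient of joint variation between two intensity classes.

    CJV = (sigma_f + sigma_b) / |mu_f - mu_b|; lower values indicate
    better class separation, i.e. better shading correction.
    """
    f, b = np.asarray(foreground, float), np.asarray(background, float)
    return (f.std() + b.std()) / abs(f.mean() - b.mean())

# Toy example: cell pixels vs background pixels before/after correction.
rng = np.random.default_rng(4)
cells_raw = rng.normal(200, 60, 5000)   # shading inflates within-class spread
bg_raw = rng.normal(50, 30, 5000)
cells_corr = rng.normal(200, 25, 5000)
bg_corr = rng.normal(50, 10, 5000)
print("CJV before correction:", round(cjv(cells_raw, bg_raw), 3))
print("CJV after correction: ", round(cjv(cells_corr, bg_corr), 3))
```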
Hein, L R
2001-10-01
A set of NIH Image macro programs was developed to make qualitative and quantitative analyses from digital stereo pictures produced by scanning electron microscopes. These tools were designed for image alignment, anaglyph representation, animation, reconstruction of true elevation surfaces, reconstruction of elevation profiles, true-scale elevation mapping and, for the quantitative approach, surface area and roughness calculations. Limitations regarding processing time, scanning techniques and programming concepts are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berryman, J. G.
While the well-known Voigt and Reuss (VR) bounds, and the Voigt-Reuss-Hill (VRH) elastic constant estimators for random polycrystals are all straightforwardly calculated once the elastic constants of anisotropic crystals are known, the Hashin-Shtrikman (HS) bounds and related self-consistent (SC) estimators for the same constants are, by comparison, more difficult to compute. Recent work has shown how to simplify (to some extent) these harder to compute HS bounds and SC estimators. An overview and analysis of a subsampling of these results is presented here with the main point being to show whether or not this extra work (i.e., in calculating both the HS bounds and the SC estimates) does provide added value since, in particular, the VRH estimators often do not fall within the HS bounds, while the SC estimators (for good reasons) have always been found to do so. The quantitative differences between the SC and the VRH estimators in the eight cases considered are often quite small however, being on the order of ±1%. These quantitative results hold true even though these polycrystal Voigt-Reuss-Hill estimators more typically (but not always) fall outside the Hashin-Shtrikman bounds, while the self-consistent estimators always fall inside (or on the boundaries of) these same bounds.
Effects of finite spatial resolution on quantitative CBF images from dynamic PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phelps, M.E.; Huang, S.C.; Mahoney, D.K.
1985-05-01
The finite spatial resolution of PET causes the time-activity responses of pixels around the boundaries between gray and white matter regions to contain kinetic components from tissues of different CBF's. CBF values estimated from the kinetics of such mixtures are underestimated because of the nonlinear relationship between the time-activity response and the estimated CBF. Computer simulation is used to investigate these effects on phantoms of circular structures and a realistic brain slice in terms of object size and quantitative CBF values. The calculated CBF image is compared to the case of having resolution loss alone. Results show that the size of a high flow region in the CBF image is decreased while that of a low flow region is increased. For brain phantoms, the qualitative appearance of CBF images is not seriously affected, but the estimated CBF's are underestimated by 11 to 16 percent in local gray matter regions (of size 1 cm^2) with about a 14 percent reduction in global CBF over the whole slice. It is concluded that the combined effect of finite spatial resolution and the nonlinearity in estimating CBF from dynamic PET is quite significant and must be considered in processing and interpreting quantitative CBF images.
Status of the Microbial Census
Schloss, Patrick D.; Handelsman, Jo
2004-01-01
Over the past 20 years, more than 78,000 16S rRNA gene sequences have been deposited in GenBank and the Ribosomal Database Project, making the 16S rRNA gene the most widely studied gene for reconstructing bacterial phylogeny. While there is a general appreciation that these sequences are largely unique and derived from diverse species of bacteria, there has not been a quantitative attempt to describe the extent of sequencing efforts to date. We constructed rarefaction curves for each bacterial phylum and for the entire bacterial domain to assess the current state of sampling and the relative taxonomic richness of each phylum. This analysis quantifies the general sense among microbiologists that we are a long way from a complete census of the bacteria on Earth. Moreover, the analysis indicates that current sampling strategies might not be the most effective ones to describe novel diversity because there remain numerous phyla that are globally distributed yet poorly sampled. Based on the current level of sampling, it is not possible to estimate the total number of bacterial species on Earth, but the minimum species richness is 35,498. Considering previous global species richness estimates of 10^7 to 10^9, we are certain that this estimate will increase with additional sequencing efforts. The data support previous calls for extensive surveys of multiple chemically disparate environments and of specific phylogenetic groups to advance the census most rapidly. PMID:15590780
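Rarefaction curves of the kind described above can be approximated by repeated subsampling without replacement and counting the distinct taxa observed at each depth. A small sketch with an invented community (not the GenBank/RDP data) is given below.

```python
import numpy as np

rng = np.random.default_rng(5)

def rarefaction_curve(counts, depths, n_rep=50):
    """Expected number of taxa observed when subsampling sequences without replacement.

    counts : abundance of each taxon in the full sample
    depths : subsample sizes at which to evaluate the curve
    """
    pool = np.repeat(np.arange(len(counts)), counts)   # one entry per sequence
    curve = []
    for d in depths:
        richness = [np.unique(rng.choice(pool, size=d, replace=False)).size
                    for _ in range(n_rep)]
        curve.append(np.mean(richness))
    return np.array(curve)

# Toy community: a few abundant taxa and a long tail of rare ones.
counts = np.concatenate([[500, 300, 200], np.ones(150, dtype=int) * 2])
depths = [50, 100, 250, 500, 1000]
print(dict(zip(depths, rarefaction_curve(counts, depths).round(1))))
```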
The Interrupted Power Law and the Size of Shadow Banking
Fiaschi, Davide; Kondor, Imre; Marsili, Matteo; Volpati, Valerio
2014-01-01
Using public data (Forbes Global 2000) we show that the asset sizes for the largest global firms follow a Pareto distribution in an intermediate range, that is “interrupted” by a sharp cut-off in its upper tail, where it is totally dominated by financial firms. This flattening of the distribution contrasts with a large body of empirical literature which finds a Pareto distribution for firm sizes both across countries and over time. Pareto distributions are generally traced back to a mechanism of proportional random growth, based on a regime of constant returns to scale. This makes our findings of an “interrupted” Pareto distribution all the more puzzling, because we provide evidence that financial firms in our sample should operate in such a regime. We claim that the missing mass from the upper tail of the asset size distribution is a consequence of shadow banking activity and that it provides an (upper) estimate of the size of the shadow banking system. This estimate–which we propose as a shadow banking index–compares well with estimates of the Financial Stability Board until 2009, but it shows a sharper rise in shadow banking activity after 2010. Finally, we propose a proportional random growth model that reproduces the observed distribution, thereby providing a quantitative estimate of the intensity of shadow banking activity. PMID:24728096
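The logic of the missing-mass argument can be illustrated with a short simulation: fit a Pareto exponent on the intermediate range, extrapolate the asset mass expected above the cut-off, and compare it with the mass actually observed there. The synthetic data, threshold choice and the use of an untruncated maximum-likelihood fit are simplifying assumptions of this sketch, not the estimator used in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "asset sizes": Pareto in an intermediate range, then an artificial
# cut-off that removes part of the upper tail (mimicking the interrupted tail).
alpha_true, x_min = 1.1, 1.0
assets = x_min * (1 + rng.pareto(alpha_true, 2000))
cutoff = np.quantile(assets, 0.99)
observed = np.minimum(assets, cutoff)        # mass above the cutoff is "missing"

# Maximum-likelihood Pareto exponent on the intermediate range only
# (truncation effects are ignored for simplicity).
inter = observed[(observed >= x_min) & (observed < cutoff)]
alpha_hat = inter.size / np.sum(np.log(inter / x_min))

# Expected asset mass above the cutoff under the fitted Pareto (requires alpha > 1),
# compared with the mass actually observed there: the gap is the "missing mass".
n = observed.size
p_tail = (x_min / cutoff) ** alpha_hat
expected_mass = n * p_tail * alpha_hat * cutoff / (alpha_hat - 1.0)
observed_mass = observed[observed >= cutoff].sum()
print(f"alpha_hat = {alpha_hat:.2f}")
print(f"missing-mass estimate = {expected_mass - observed_mass:.1f}")
```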
Allstadt, Kate E.; Thompson, Eric M.; Wald, David J.; Hamburger, Michael W.; Godt, Jonathan W.; Knudsen, Keith L.; Jibson, Randall W.; Jessee, M. Anna; Zhu, Jing; Hearne, Michael; Baise, Laurie G.; Tanyas, Hakan; Marano, Kristin D.
2016-03-30
The U.S. Geological Survey (USGS) Earthquake Hazards and Landslide Hazards Programs are developing plans to add quantitative hazard assessments of earthquake-triggered landsliding and liquefaction to existing real-time earthquake products (ShakeMap, ShakeCast, PAGER) using open and readily available methodologies and products. To date, prototype global statistical models have been developed and are being refined, improved, and tested. These models are a good foundation, but much work remains to achieve robust and defensible models that meet the needs of end users. In order to establish an implementation plan and identify research priorities, the USGS convened a workshop in Golden, Colorado, in October 2015. This document summarizes current (as of early 2016) capabilities, research and operational priorities, and plans for further studies that were established at this workshop. Specific priorities established during the meeting include (1) developing a suite of alternative models; (2) making use of higher resolution and higher quality data where possible; (3) incorporating newer global and regional datasets and inventories; (4) reducing barriers to accessing inventory datasets; (5) developing methods for using inconsistent or incomplete datasets in aggregate; (6) developing standardized model testing and evaluation methods; (7) improving ShakeMap shaking estimates, particularly as relevant to ground failure, such as including topographic amplification and accounting for spatial variability; and (8) developing vulnerability functions for loss estimates.
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Jeong, Hoonyoung; González-Nicolás, Ana; Templeton, Thomas C.
2018-04-01
Carbon capture and storage (CCS) is being evaluated globally as a geoengineering measure for significantly reducing greenhouse gas emissions. However, the long-term liability associated with potential leakage from these geologic repositories is perceived as a main barrier to entry by site operators. Risk quantification and impact assessment help CCS operators to screen candidate sites for suitability of CO2 storage. Leakage risks are highly site dependent, and a quantitative understanding and categorization of these risks can only be made possible through broad participation and deliberation of stakeholders, with the use of site-specific, process-based models as the decision basis. Online decision making, however, requires that scenarios be run in real time. In this work, a Python-based Leakage Assessment and Cost Estimation (PyLACE) web application was developed for quantifying financial risks associated with potential leakage from geologic carbon sequestration sites. PyLACE aims to assist collaborative, analytic-deliberative decision-making processes by automating metamodel creation, knowledge sharing, and online collaboration. In PyLACE, metamodeling, which is the process of developing faster-to-run surrogates of process-level models, is enabled using a special stochastic response surface method and Gaussian process regression. Both methods allow consideration of model parameter uncertainties and the use of that information to generate confidence intervals on model outputs. Training of the metamodels is delegated to a high performance computing cluster and is orchestrated by a set of asynchronous job scheduling tools for job submission and result retrieval. As a case study, the workflow and main features of PyLACE are demonstrated using a multilayer carbon storage model.
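A Gaussian-process surrogate of the kind mentioned above can be prototyped in a few lines with scikit-learn; the input variables, toy simulator and kernel below are placeholders rather than PyLACE's actual metamodel, but they show how the surrogate returns both a prediction and a confidence interval fast enough for online use.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C

rng = np.random.default_rng(7)

# Hypothetical training runs of a process-level leakage model:
# inputs = (permeability multiplier, injection rate), output = leaked mass.
X_train = rng.uniform([0.5, 0.1], [2.0, 1.0], size=(40, 2))

def expensive_model(x):                     # stand-in for the reservoir simulator
    return 10.0 * x[:, 0] ** 1.5 * x[:, 1] + rng.normal(0, 0.1, x.shape[0])

y_train = expensive_model(X_train)

# Fit the Gaussian-process surrogate (metamodel).
kernel = C(1.0) * RBF(length_scale=[1.0, 1.0])
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-2, normalize_y=True)
gp.fit(X_train, y_train)

# Fast surrogate predictions with uncertainty, usable in an online setting.
X_new = np.array([[1.2, 0.4], [1.8, 0.9]])
mean, std = gp.predict(X_new, return_std=True)
for x, m, s in zip(X_new, mean, std):
    print(f"inputs {x} -> leakage {m:.2f} +/- {1.96 * s:.2f}")
```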
The effect of sensor spacing on wind measurements at the Shuttle Landing Facility
NASA Technical Reports Server (NTRS)
Merceret, Francis J.
1995-01-01
This document presents results of a field study of the effect of sensor spacing on the validity of wind measurements at the Space Shuttle Landing Facility (SLF). Standard measurements are made at one-second intervals from 30-foot (9.1 m) towers located 500 feet (152 m) from the SLF centerline. The centerline winds are not exactly the same as those measured by the towers. This study quantifies the differences as a function of statistics of the observed winds and the distance between the measurements and the points of interest. The field program used logarithmically spaced portable wind towers to measure wind speed and direction over a range of conditions. Correlations, spectra, moments, and structure functions were computed. A universal normalization for structure functions was devised. The normalized structure functions increase as the 2/3 power of separation distance until an asymptotic value is approached. This occurs at spacings of several hundred feet (about 100 m). At larger spacings, the structure functions are bounded by the asymptote. This enables quantitative estimates of the expected differences between the winds at the measurement point and the points of interest to be made from the measured wind statistics. A procedure is provided for making these estimates.
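The structure-function analysis can be illustrated on a synthetic record: a signal with a Kolmogorov-type k^(-5/3) spectrum has a second-order structure function growing roughly as the 2/3 power of separation at small lags. The variance normalisation used below is a generic choice for this sketch, not necessarily the universal normalisation devised in the report.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthesise a 1-D velocity record with a Kolmogorov-like k^(-5/3) spectrum,
# for which the second-order structure function scales as r^(2/3).
n, dx = 4096, 10.0                                  # samples, spacing in metres
k = np.fft.rfftfreq(n, d=dx)
amp = np.zeros_like(k)
amp[1:] = k[1:] ** (-5.0 / 6.0)                     # |u_hat|^2 ~ k^(-5/3)
u = np.fft.irfft(amp * np.exp(1j * rng.uniform(0, 2 * np.pi, k.size)), n=n)

def structure_function(u, dx, max_lag):
    """Second-order structure function D(r) = <[u(x + r) - u(x)]^2>."""
    lags = np.arange(1, max_lag + 1)
    d = np.array([np.mean((u[l:] - u[:-l]) ** 2) for l in lags])
    return lags * dx, d

r, d = structure_function(u, dx, 200)
d_norm = d / u.var()                                # simple variance normalisation
slope = np.polyfit(np.log(r[:30]), np.log(d_norm[:30]), 1)[0]
print(f"small-separation log-log slope ~ {slope:.2f} (2/3 expected)")
```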
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots.
Lee, Tae-Jae; Yi, Dong-Hoon; Cho, Dong-Il Dan
2016-03-01
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%.
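A minimal sketch of the inverse perspective mapping step with OpenCV is shown below; the calibration points, map scale and file name are hypothetical, and the obstacle segmentation itself (the Markov random field stage) is assumed to have already produced a binary mask.

```python
import numpy as np
import cv2

# Four image points (pixels) on the floor plane and their bird's-eye targets
# (floor coordinates in centimetres mapped to pixels); in practice these would
# come from a one-off camera calibration.
src = np.float32([[220, 460], [420, 460], [580, 700], [60, 700]])
dst = np.float32([[0, 0], [200, 0], [200, 300], [0, 300]])

H = cv2.getPerspectiveTransform(src, dst)        # image plane -> floor plane

# Warp a camera frame into the inverse-perspective-mapped (top-down) view.
frame = cv2.imread("frame.png")                  # hypothetical input image
if frame is not None:
    topdown = cv2.warpPerspective(frame, H, (200, 300))

# In the IPM view, pixel offsets on the floor are metric, so the distance from
# the robot (bottom edge of the map) to the nearest obstacle pixel is direct.
def distance_to_obstacle(obstacle_mask_topdown, cm_per_px=1.0):
    rows = np.where(obstacle_mask_topdown.any(axis=1))[0]
    if rows.size == 0:
        return None
    nearest_row = rows.max()                     # closest to the bottom edge
    return (obstacle_mask_topdown.shape[0] - 1 - nearest_row) * cm_per_px
```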
Gap state analysis in electric-field-induced band gap for bilayer graphene.
Kanayama, Kaoru; Nagashio, Kosuke
2015-10-29
The origin of the low current on/off ratio at room temperature in dual-gated bilayer graphene field-effect transistors is considered to be the variable range hopping in gap states. However, the quantitative estimation of gap states has not been conducted. Here, we report the systematic estimation of the energy gap by both quantum capacitance and transport measurements and the density of states for gap states by the conductance method. An energy gap of ~250 meV is obtained at the maximum displacement field of ~3.1 V/nm, where the current on/off ratio of ~3 × 10^3 is demonstrated at 20 K. The density of states for the gap states are in the range from the latter half of 10^12 to 10^13 eV^-1 cm^-2. Although the large amount of gap states at the interface of high-k oxide/bilayer graphene limits the current on/off ratio at present, our results suggest that the reduction of gap states below ~10^11 eV^-1 cm^-2 by continual improvement of the gate stack makes bilayer graphene a promising candidate for future nanoelectronic device applications.
Network meta-analysis in health psychology and behavioural medicine: a primer.
Molloy, G J; Noone, C; Caldwell, D; Welton, N J; Newell, J
2018-04-05
Progress in the science and practice of health psychology depends on the systematic synthesis of quantitative psychological evidence. Meta-analyses of experimental studies have led to important advances in understanding health-related behaviour change interventions. Fundamental questions regarding such interventions have been systematically investigated through synthesising relevant experimental evidence using standard pairwise meta-analytic procedures that provide reliable estimates of the magnitude, homogeneity and potential biases in effects observed. However, these syntheses only provide information about whether particular types of interventions work better than a control condition or specific alternative approaches. To increase the impact of health psychology on health-related policy-making, evidence regarding the comparative efficacy of all relevant intervention approaches - which may include biomedical approaches - is necessary. With the development of network meta-analysis (NMA), such evidence can be synthesised, even when direct head-to-head trials do not exist. However, care must be taken in its application to ensure reliable estimates of the effect sizes between interventions are revealed. This review paper describes the potential importance of NMA to health psychology, how the technique works and important considerations for its appropriate application within health psychology.
Spin Contamination Error in Optimized Geometry of Singlet Carbene (1A1) by Broken-Symmetry Method
NASA Astrophysics Data System (ADS)
Kitagawa, Yasutaka; Saito, Toru; Nakanishi, Yasuyuki; Kataoka, Yusuke; Matsui, Toru; Kawakami, Takashi; Okumura, Mitsutaka; Yamaguchi, Kizashi
2009-10-01
Spin contamination errors of the broken-symmetry (BS) method in the optimized structural parameters of the singlet methylene (1A1) molecule are quantitatively estimated for the Hartree-Fock (HF) method, post-HF methods (CID, CCD, MP2, MP3, MP4(SDQ)), and a hybrid DFT (B3LYP) method. For this purpose, the geometry optimized by the BS method is compared with that of an approximate spin projection (AP) method. The difference between the BS and AP methods is about 10-20° in the HCH angle. In order to examine the basis set dependency of the spin contamination error, results calculated with STO-3G, 6-31G*, and 6-311++G** are compared. The error depends on the basis sets, but the tendencies of each method fall into two types. Calculated energy splittings between the triplet and singlet states (the ST gap) indicate that contamination by the stable triplet state stabilizes the BS singlet solution and makes the ST gap small. The spin contamination error in the ST gap is estimated to be on the order of 10^-1 eV.
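For orientation, the approximate spin projection referred to above is commonly written, for a BS singlet contaminated only by the triplet, as $E^{\mathrm{AP}}_{\mathrm{S}} = \dfrac{\langle S^2\rangle_{\mathrm{T}}\,E_{\mathrm{BS}} - \langle S^2\rangle_{\mathrm{BS}}\,E_{\mathrm{T}}}{\langle S^2\rangle_{\mathrm{T}} - \langle S^2\rangle_{\mathrm{BS}}} = E_{\mathrm{BS}} + \dfrac{\langle S^2\rangle_{\mathrm{BS}}}{\langle S^2\rangle_{\mathrm{T}} - \langle S^2\rangle_{\mathrm{BS}}}\,\bigl(E_{\mathrm{BS}} - E_{\mathrm{T}}\bigr)$, where $E_{\mathrm{BS}}$, $E_{\mathrm{T}}$ and the corresponding $\langle S^2\rangle$ values are the broken-symmetry and triplet energies and spin expectation values. This generic two-state form is quoted only for context; the AP gradients actually used for the geometry optimization in the paper are not reproduced here.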
Detection of Natural Fractures from Observed Surface Seismic Data Based on a Linear-Slip Model
NASA Astrophysics Data System (ADS)
Chen, Huaizhen; Zhang, Guangzhi
2018-03-01
Natural fractures play an important role in the migration of hydrocarbon fluids. Based on a rock physics effective model, the linear-slip model, which defines fracture parameters (fracture compliances) for quantitatively characterizing the effects of fractures on rock total compliance, we propose a method to detect natural fractures from observed seismic data via inversion for the fracture compliances. We first derive an approximate PP-wave reflection coefficient in terms of fracture compliances. Using the approximate reflection coefficient, we derive azimuthal elastic impedance as a function of fracture compliances. An inversion method to estimate fracture compliances from seismic data is presented based on a Bayesian framework and azimuthal elastic impedance, implemented in a two-step procedure: a least-squares inversion for azimuthal elastic impedance and an iterative inversion for fracture compliances. We apply the inversion method to synthetic and real data to verify its stability and reliability. Synthetic tests confirm that the method can produce a stable estimate of fracture compliances for seismic data with a moderate signal-to-noise ratio of Gaussian noise, and the test on real data reveals that reasonable fracture compliances are obtained using the proposed method.
An overview of meta-analysis for clinicians.
Lee, Young Ho
2018-03-01
The number of medical studies being published is increasing exponentially, and clinicians must routinely process large amounts of new information. Moreover, the results of individual studies are often insufficient to provide confident answers, as their results are not consistently reproducible. A meta-analysis is a statistical method for combining the results of different studies on the same topic and it may resolve conflicts among studies. Meta-analysis is being used increasingly and plays an important role in medical research. This review introduces the basic concepts, steps, advantages, and caveats of meta-analysis, to help clinicians understand it in clinical practice and research. A major advantage of a meta-analysis is that it produces a precise estimate of the effect size, with considerably increased statistical power, which is important when the power of the primary study is limited because of a small sample size. A meta-analysis may yield conclusive results when individual studies are inconclusive. Furthermore, meta-analyses investigate the source of variation and different effects among subgroups. In summary, a meta-analysis is an objective, quantitative method that provides less biased estimates on a specific topic. Understanding how to conduct a meta-analysis aids clinicians in the process of making clinical decisions.
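As a concrete companion to the description above, the sketch below pools per-study effect sizes by inverse-variance weighting, computes Cochran's Q and I^2 for heterogeneity, and applies a DerSimonian-Laird random-effects adjustment. The effect sizes and variances are invented for illustration.

```python
import numpy as np

def pool_effects(effects, variances):
    """Inverse-variance pooling with a DerSimonian-Laird random-effects step."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)              # Cochran's Q
    df = y.size - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Illustrative per-study effect sizes (e.g. standardised mean differences) and variances.
effects = [0.30, 0.10, 0.45, 0.25, 0.05]
variances = [0.02, 0.03, 0.05, 0.01, 0.04]
est, ci, i2 = pool_effects(effects, variances)
print(f"pooled effect = {est:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.0f}%")
```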
Anomalous chiral transport in heavy ion collisions from Anomalous-Viscous Fluid Dynamics
NASA Astrophysics Data System (ADS)
Shi, Shuzhe; Jiang, Yin; Lilleskov, Elias; Liao, Jinfeng
2018-07-01
Chiral anomaly is a fundamental aspect of quantum theories with chiral fermions. How such a microscopic anomaly manifests itself in a macroscopic many-body system with chiral fermions is a highly nontrivial question that has recently attracted significant interest. As it turns out, unusual transport currents can be induced by the chiral anomaly under suitable conditions in such systems, with the notable example of the Chiral Magnetic Effect (CME), where a vector current (e.g. electric current) is generated along an external magnetic field. Considerable effort has been made to search for the CME in heavy ion collisions by measuring the charge separation effect induced by the CME transport. A crucial challenge in this effort is the quantitative prediction of the CME signal. In this paper, we develop the Anomalous-Viscous Fluid Dynamics (AVFD) framework, which implements anomalous fluid dynamics to describe the evolution of fermion currents in the QGP, on top of the neutral bulk background described by the VISH2+1 hydrodynamic simulations for heavy ion collisions. With this new tool, we quantitatively and systematically investigate the dependence of the CME signal on a series of theoretical inputs and associated uncertainties. With realistic estimates of initial conditions and magnetic field lifetime, the predicted CME signal is quantitatively consistent with measured charge separation data in 200 GeV Au-Au collisions. Based on the analysis of Au-Au collisions, we further make predictions for the CME observable to be measured in the planned isobaric (Ru-Ru vs. Zr-Zr) collision experiment, which could provide a most decisive test of the CME in heavy ion collisions.
Prediction of trabecular bone qualitative properties using scanning quantitative ultrasound
Qin, Yi-Xian; Lin, Wei; Mittra, Erik; Xia, Yi; Cheng, Jiqi; Judex, Stefan; Rubin, Clint; Müller, Ralph
2012-01-01
Microgravity-induced bone loss represents a critical health problem for astronauts, particularly in the weight-supporting skeleton, where it leads to osteopenia and an increased risk of fracture. The lack of a suitable evaluation modality makes it difficult to monitor skeletal status during long-term space missions and increases the potential risk of complications. Such disuse osteopenia and osteoporosis compromise trabecular bone density, and architectural and mechanical properties. While X-ray based imaging would not be practical in space, quantitative ultrasound may provide advantages to characterize bone density and strength through wave propagation in complex trabecular structure. This study used a scanning confocal acoustic diagnostic and navigation system (SCAN) to evaluate trabecular bone quality in 60 cubic trabecular samples harvested from adult sheep. Ultrasound image based SCAN measurements of structural and strength properties were validated by μCT and compressive mechanical testing. The results indicated a moderately strong negative correlation between broadband ultrasonic attenuation (BUA) and μCT-determined bone volume fraction (BV/TV, R2=0.53). Strong correlations were observed between ultrasound velocity (UV) and bone's mechanical strength and structural parameters, i.e., bulk Young's modulus (R2=0.67) and BV/TV (R2=0.85). The predictions for bone density and mechanical strength were significantly improved by using a linear combination of both BUA and UV, yielding R2=0.92 for BV/TV and R2=0.71 for bulk Young's modulus. These results imply that quantitative ultrasound can characterize trabecular structural and mechanical properties through measurements of particular ultrasound parameters, and potentially provide an excellent estimation of bone's structural integrity. PMID:23976803
Vidueira, Pablo; Díaz-Puente, José M; Rivera, María
2014-08-01
Ex ante impact assessment has become a fundamental tool for effective program management, and thus a compulsory task when establishing a new program in the European Union (EU). This article aims to analyze the benefits of ex ante impact assessment, the methodologies followed, and the difficulties encountered. This is done through a case study on the rural development programs (RDPs) in the EU. Results regarding methodologies are then contrasted with the international context in order to provide solid insights to evaluators and program managing authorities facing ex ante impact assessment. All European RDPs from the period 2007 through 2013 (a total of 88) and their corresponding available ex ante evaluations (a total of 70) were analyzed, focusing on the socioeconomic impact assessment. Only 46.6% of the regions provide quantified estimations of socioeconomic impacts, despite this being a compulsory task demanded by the European Commission (EC). The methods recommended by the EC are mostly used, but there is a lack of mixed-method approaches since qualitative methods are used in substitution of quantitative ones. The two main difficulties reported were the complexity of program impacts and the lack of needed program information. Qualitative approaches on their own were found to be unsuitable for ex ante impact assessment, while quantitative approaches, such as microsimulation models, provide a good approximation to actual impacts. However, time and budgetary constraints mean that quantitative and mixed methods should mainly be applied to the most relevant impacts for the program's success. © The Author(s) 2014.
Prediction of trabecular bone qualitative properties using scanning quantitative ultrasound
NASA Astrophysics Data System (ADS)
Qin, Yi-Xian; Lin, Wei; Mittra, Erik; Xia, Yi; Cheng, Jiqi; Judex, Stefan; Rubin, Clint; Müller, Ralph
2013-11-01
Microgravity-induced bone loss represents a critical health problem for astronauts, particularly in the weight-supporting skeleton, where it leads to osteopenia and an increased risk of fracture. The lack of a suitable evaluation modality makes it difficult to monitor skeletal status during long-term space missions and increases the potential risk of complications. Such disuse osteopenia and osteoporosis compromise trabecular bone density, and architectural and mechanical properties. While X-ray based imaging would not be practical in space, quantitative ultrasound may provide advantages to characterize bone density and strength through wave propagation in complex trabecular structure. This study used a scanning confocal acoustic diagnostic and navigation system (SCAN) to evaluate trabecular bone quality in 60 cubic trabecular samples harvested from adult sheep. Ultrasound image based SCAN measurements of structural and strength properties were validated by μCT and compressive mechanical testing. The results indicated a moderately strong negative correlation between broadband ultrasonic attenuation (BUA) and μCT-determined bone volume fraction (BV/TV, R2=0.53). Strong correlations were observed between ultrasound velocity (UV) and bone's mechanical strength and structural parameters, i.e., bulk Young's modulus (R2=0.67) and BV/TV (R2=0.85). The predictions for bone density and mechanical strength were significantly improved by using a linear combination of both BUA and UV, yielding R2=0.92 for BV/TV and R2=0.71 for bulk Young's modulus. These results imply that quantitative ultrasound can characterize trabecular structural and mechanical properties through measurements of particular ultrasound parameters, and potentially provide an excellent estimation of bone's structural integrity.
Random forests on Hadoop for genome-wide association studies of multivariate neuroimaging phenotypes
2013-01-01
Motivation: Multivariate quantitative traits arise naturally in recent neuroimaging genetics studies, in which both structural and functional variability of the human brain is measured non-invasively through techniques such as magnetic resonance imaging (MRI). There is growing interest in detecting genetic variants associated with such multivariate traits, especially in genome-wide studies. Random forests (RFs) classifiers, which are ensembles of decision trees, are amongst the best performing machine learning algorithms and have been successfully employed for the prioritisation of genetic variants in case-control studies. RFs can also be applied to produce gene rankings in association studies with multivariate quantitative traits, and to estimate genetic similarity measures that are predictive of the trait. However, in studies involving hundreds of thousands of SNPs and high-dimensional traits, a very large ensemble of trees must be inferred from the data in order to obtain reliable rankings, which makes the application of these algorithms computationally prohibitive. Results: We have developed a parallel version of the RF algorithm for regression and genetic similarity learning tasks in large-scale population genetic association studies involving multivariate traits, called PaRFR (Parallel Random Forest Regression). Our implementation takes advantage of the MapReduce programming model and is deployed on Hadoop, an open-source software framework that supports data-intensive distributed applications. Notable speed-ups are obtained by introducing a distance-based criterion for node splitting in the tree estimation process. PaRFR has been applied to a genome-wide association study on Alzheimer's disease (AD) in which the quantitative trait consists of a high-dimensional neuroimaging phenotype describing longitudinal changes in the human brain structure. PaRFR provides a ranking of SNPs associated with this trait, and produces pair-wise measures of genetic proximity that can be directly compared to pair-wise measures of phenotypic proximity. Several known AD-related variants have been identified, including APOE4 and TOMM40. We also present experimental evidence supporting the hypothesis of a linear relationship between the number of top-ranked mutated states, or frequent mutation patterns, and an indicator of disease severity. Availability: The Java codes are freely available at http://www2.imperial.ac.uk/~gmontana. PMID:24564704
Wang, Yue; Goh, Wilson; Wong, Limsoon; Montana, Giovanni
2013-01-01
Multivariate quantitative traits arise naturally in recent neuroimaging genetics studies, in which both structural and functional variability of the human brain is measured non-invasively through techniques such as magnetic resonance imaging (MRI). There is growing interest in detecting genetic variants associated with such multivariate traits, especially in genome-wide studies. Random forests (RFs) classifiers, which are ensembles of decision trees, are amongst the best performing machine learning algorithms and have been successfully employed for the prioritisation of genetic variants in case-control studies. RFs can also be applied to produce gene rankings in association studies with multivariate quantitative traits, and to estimate genetic similarities measures that are predictive of the trait. However, in studies involving hundreds of thousands of SNPs and high-dimensional traits, a very large ensemble of trees must be inferred from the data in order to obtain reliable rankings, which makes the application of these algorithms computationally prohibitive. We have developed a parallel version of the RF algorithm for regression and genetic similarity learning tasks in large-scale population genetic association studies involving multivariate traits, called PaRFR (Parallel Random Forest Regression). Our implementation takes advantage of the MapReduce programming model and is deployed on Hadoop, an open-source software framework that supports data-intensive distributed applications. Notable speed-ups are obtained by introducing a distance-based criterion for node splitting in the tree estimation process. PaRFR has been applied to a genome-wide association study on Alzheimer's disease (AD) in which the quantitative trait consists of a high-dimensional neuroimaging phenotype describing longitudinal changes in the human brain structure. PaRFR provides a ranking of SNPs associated to this trait, and produces pair-wise measures of genetic proximity that can be directly compared to pair-wise measures of phenotypic proximity. Several known AD-related variants have been identified, including APOE4 and TOMM40. We also present experimental evidence supporting the hypothesis of a linear relationship between the number of top-ranked mutated states, or frequent mutation patterns, and an indicator of disease severity. The Java codes are freely available at http://www2.imperial.ac.uk/~gmontana.
Lau, Darryl; Hervey-Jumper, Shawn L; Han, Seunggu J; Berger, Mitchel S
2018-05-01
OBJECTIVE There is ample evidence that extent of resection (EOR) is associated with improved outcomes for glioma surgery. However, it is often difficult to accurately estimate EOR intraoperatively, and surgeon accuracy has yet to be reviewed. In this study, the authors quantitatively assessed the accuracy of intraoperative perception of EOR during awake craniotomy for tumor resection. METHODS A single-surgeon experience of performing awake craniotomies for tumor resection over a 17-year period was examined. Quantitative estimates of EOR were recorded from a retrospective review of operative reports. Definitive EOR was based on postoperative MRI. Accuracy of EOR estimation was examined both as a general outcome (gross-total resection [GTR] or subtotal resection [STR]) and quantitatively (within 5% of the EOR on postoperative MRI). Patient demographics, tumor characteristics, and surgeon experience were examined. The effects of accuracy on motor and language outcomes were assessed. RESULTS A total of 451 patients were included in the study. Overall accuracy of intraoperative perception of whether GTR or STR was achieved was 79.6%, and overall accuracy of quantitative perception of resection (within 5% of postoperative MRI) was 81.4%. There was a significant difference (p = 0.049) in accuracy for gross perception over the 17-year period, with improvement over the later years: 1997-2000 (72.6%), 2001-2004 (78.5%), 2005-2008 (80.7%), and 2009-2013 (84.4%). Similarly, there was a significant improvement (p = 0.015) in accuracy of quantitative perception of EOR over the 17-year period: 1997-2000 (72.2%), 2001-2004 (69.8%), 2005-2008 (84.8%), and 2009-2013 (93.4%). This improvement in accuracy is demonstrated by the significantly higher odds of correctly estimating quantitative EOR in the later years of the series on multivariate logistic regression. Insular tumors were associated with the highest accuracy of gross perception (89.3%; p = 0.034), but the lowest accuracy of quantitative perception (61.1% correct; p < 0.001) compared with tumors in other locations. Even after adjusting for surgeon experience, this particular trend for insular tumors remained true. The absence of 1p19q co-deletion was associated with higher quantitative perception accuracy (96.9% vs 81.5%; p = 0.051). Tumor grade, recurrence, diagnosis, and isocitrate dehydrogenase-1 (IDH-1) status were not associated with accurate perception of EOR. Overall, new neurological deficits occurred in 8.4% of cases, and 42.1% of those new neurological deficits persisted after the 3-month follow-up. Correct quantitative perception was associated with fewer postoperative motor deficits (2.4%) compared with incorrect perception (8.0%; p = 0.029). There were no detectable differences in language outcomes based on perception of EOR. CONCLUSIONS The findings from this study suggest that there is a learning curve associated with the ability to accurately assess intraoperative EOR during glioma surgery, and it may take more than a decade to become truly proficient. Understanding the factors associated with this ability to accurately assess EOR will help make surgery safer while maximizing tumor resection.
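As a rough illustration of the two accuracy measures used above (not the study's analysis code), the following Python sketch compares intraoperative EOR estimates with postoperative MRI-derived EOR, counting agreement on the gross outcome and agreement within 5 percentage points. Treating 100% EOR as the GTR cut-off and the example values are assumptions for illustration only.

# Minimal sketch of the two accuracy measures: gross (GTR vs STR) agreement and
# quantitative agreement within a tolerance of the MRI-derived EOR.
def eor_accuracy(estimated, measured, tol=5.0):
    gross_correct = sum((e >= 100.0) == (m >= 100.0) for e, m in zip(estimated, measured))
    quant_correct = sum(abs(e - m) <= tol for e, m in zip(estimated, measured))
    n = len(measured)
    return gross_correct / n, quant_correct / n

# Example: three cases, intraoperative estimate vs postoperative MRI EOR (%)
gross_acc, quant_acc = eor_accuracy([100, 95, 80], [100, 88, 83])
print(f"gross accuracy: {gross_acc:.2f}, quantitative accuracy: {quant_acc:.2f}")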
Shahgaldi, Kambiz; Gudmundsson, Petri; Manouras, Aristomenis; Brodin, Lars-Ake; Winter, Reidar
2009-08-25
Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, visual assessment (eyeballing) is more feasible. To date, only sparse data are available comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE) using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and the TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95, respectively) without any significant bias (-0.5 +/- 3.7% and -0.2 +/- 2.9%, respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP, and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Visual estimation of LVEF, both using 2D and TP, by an experienced reader correlates well with quantitative EF determined by RT3DE. There was an apparent trend towards smaller variability using TP compared with 2D; however, this was not statistically significant.
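The statistics reported above can be reproduced on paired EF readings with a short script. The sketch below is an assumed analysis, not the study's code: it computes the correlation and mean bias of eyeballed EF against an RT3DE reference, plus one common definition of observer variability as a coefficient of variation; the example EF values are invented.

# Minimal sketch: correlation and bias vs a reference method, and a simple
# coefficient-of-variation measure of observer variability (assumed definitions).
import numpy as np

def compare_ef(eyeball_ef, reference_ef):
    eyeball_ef = np.asarray(eyeball_ef, float)
    reference_ef = np.asarray(reference_ef, float)
    r = np.corrcoef(eyeball_ef, reference_ef)[0, 1]   # correlation vs reference
    diff = eyeball_ef - reference_ef
    return r, diff.mean(), diff.std(ddof=1)           # bias +/- SD (Bland-Altman style)

def observer_variability(first_reading, second_reading):
    # Within-subject SD of repeated readings, expressed as a percentage of the mean.
    first = np.asarray(first_reading, float)
    second = np.asarray(second_reading, float)
    within_sd = (first - second).std(ddof=1) / np.sqrt(2)
    return 100.0 * within_sd / np.concatenate([first, second]).mean()

# Example with made-up EF values (%)
print(compare_ef([55, 40, 62, 30], [57, 41, 60, 33]))
print(observer_variability([55, 40, 62, 30], [53, 42, 61, 31]))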
In real-time quantitative PCR studies using absolute plasmid DNA standards, a calibration curve is developed to estimate an unknown DNA concentration. However, potential differences in the amplification performance of plasmid DNA compared to genomic DNA standards are often ignore...
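For readers unfamiliar with absolute quantification, the following minimal Python sketch shows how such a plasmid-based calibration curve is typically used: Ct values from a dilution series are regressed on log10 copy number, and the fitted line is inverted to estimate an unknown sample's copy number. The dilution series, Ct values, and function names are illustrative assumptions, not data or code from the study, and the sketch does not address the plasmid-vs-genomic amplification issue the abstract raises.

# Minimal sketch of an absolute qPCR standard curve (assumed example data).
import numpy as np

def fit_standard_curve(copies, ct_values):
    # Linear fit Ct = slope * log10(copies) + intercept; slope gives the efficiency.
    slope, intercept = np.polyfit(np.log10(copies), ct_values, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return slope, intercept, efficiency

def estimate_copies(ct, slope, intercept):
    # Invert the standard curve to estimate copy number from a measured Ct.
    return 10 ** ((ct - intercept) / slope)

# Ten-fold plasmid dilution series (copies) with hypothetical Ct values
standards = [1e7, 1e6, 1e5, 1e4, 1e3]
cts = [15.1, 18.4, 21.8, 25.2, 28.5]
slope, intercept, eff = fit_standard_curve(standards, cts)
print(f"efficiency ~ {eff:.2f}, unknown at Ct 23 ~ {estimate_copies(23.0, slope, intercept):.0f} copies")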