Sample records for availability measure estimation

  1. Estimating Coherence Measures from Limited Experimental Data Available

    NASA Astrophysics Data System (ADS)

    Zhang, Da-Jian; Liu, C. L.; Yu, Xiao-Dong; Tong, D. M.

    2018-04-01

    Quantifying coherence has received increasing attention, and considerable work has been directed towards finding coherence measures. While various coherence measures have been proposed in theory, an important follow-up issue is how to estimate these coherence measures in experiments. This is a challenging task, since the state of a system is often unknown in practical applications and the accessible measurements in a real experiment are typically limited. In this Letter, we put forward an approach to estimating the coherence measures of an unknown state from any limited experimental data available. Our approach is not only applicable to coherence measures but can be extended to other resource measures.

  2. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 2. Case study

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Neff, Christina R.

    1994-05-01

    The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1, is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity, and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated when the effects of random recharge are neglected.

  3. Robust best linear estimator for Cox regression with instrumental variables in whole cohort and surrogates with additive measurement error in calibration sample

    PubMed Central

    Wang, Ching-Yun; Song, Xiao

    2017-01-01

    Biomedical researchers are often interested in estimating the effect of an environmental exposure in relation to a chronic disease endpoint. However, the exposure variable of interest may be measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies an additive measurement error model, but it may not have repeated measurements. The subset in which the surrogate variables are available is called a calibration sample. In addition to the surrogate variables that are available among the subjects in the calibration sample, we consider the situation when there is an instrumental variable available for all study subjects. An instrumental variable is correlated with the unobserved true exposure variable, and hence can be useful in the estimation of the regression coefficients. In this paper, we propose a nonparametric method for Cox regression using the observed data from the whole cohort. The nonparametric estimator is the best linear combination of a nonparametric correction estimator from the calibration sample and the difference of the naive estimators from the calibration sample and the whole cohort. The asymptotic distribution is derived, and the finite sample performance of the proposed estimator is examined via intensive simulation studies. The methods are applied to the Nutritional Biomarkers Study of the Women’s Health Initiative. PMID:27546625
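    The "best linear combination" step described in the abstract above is an instance of a standard construction: given several approximately unbiased estimators of the same parameter and an estimate of their joint covariance, choose the combination of them that minimizes variance. A minimal sketch of that generic construction (not the authors' code; in practice the covariance matrix would itself be estimated, e.g., by bootstrap):

```python
import numpy as np

def best_linear_combination(estimates, cov):
    """Minimum-variance linear combination of several (approximately
    unbiased) estimators of the same parameter. With covariance matrix
    C, the weights w = C^{-1} 1 / (1' C^{-1} 1) sum to one and minimize
    the variance of the combined estimate w' estimates."""
    cov = np.asarray(cov, dtype=float)
    ones = np.ones(len(estimates))
    w = np.linalg.solve(cov, ones)   # unnormalized weights C^{-1} 1
    w = w / (ones @ w)               # normalize so weights sum to 1
    return float(w @ np.asarray(estimates, dtype=float)), w

# Two uncorrelated estimators with variances 1 and 4: weights 0.8 / 0.2.
est, w = best_linear_combination([1.2, 0.8], [[1.0, 0.0], [0.0, 4.0]])
```

    The less variable estimator receives the larger weight, which is the sense in which the combined estimator is "best" among linear combinations.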

  4. An Evaluation of Available Models for Estimating the Reliability and Validity of Criterion Referenced Measures.

    ERIC Educational Resources Information Center

    Oakland, Thomas

    New strategies for evaluating criterion referenced measures (CRM) are discussed. These strategies examine the following issues: (1) the use of norm referenced measures (NRM) as CRM and then estimating the reliability and validity of such measures in terms of variance from an arbitrarily specified criterion score, (2) estimation of the…

  5. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
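    The observed-data kernel density estimator used as the baseline comparator in the abstract above is standard. A minimal Gaussian-kernel sketch (illustrative only; this is the naive baseline, not the authors' semi-parametric deconvolution estimator):

```python
import numpy as np

def gaussian_kde(data, grid, bandwidth):
    """Plain observed-data kernel density estimator: the average of
    Gaussian kernels centered at the (possibly error-contaminated)
    observations. It ignores measurement error, which is exactly the
    shortcoming deconvolution estimators address."""
    data = np.asarray(data, dtype=float)[:, None]   # shape (n, 1)
    grid = np.asarray(grid, dtype=float)[None, :]   # shape (1, m)
    z = (grid - data) / bandwidth
    kernels = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return kernels.mean(axis=0) / bandwidth         # density on the grid

grid = np.linspace(-5.0, 5.0, 1001)
density = gaussian_kde([0.0], grid, bandwidth=1.0)
```

    When the data carry additive measurement error, this estimator targets the density of the contaminated variable rather than of X itself, which is why it serves as the baseline to beat.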

  6. Robust best linear estimator for Cox regression with instrumental variables in whole cohort and surrogates with additive measurement error in calibration sample.

    PubMed

    Wang, Ching-Yun; Song, Xiao

    2016-11-01

    Biomedical researchers are often interested in estimating the effect of an environmental exposure in relation to a chronic disease endpoint. However, the exposure variable of interest may be measured with errors. In a subset of the whole cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies an additive measurement error model, but it may not have repeated measurements. The subset in which the surrogate variables are available is called a calibration sample. In addition to the surrogate variables that are available among the subjects in the calibration sample, we consider the situation when there is an instrumental variable available for all study subjects. An instrumental variable is correlated with the unobserved true exposure variable, and hence can be useful in the estimation of the regression coefficients. In this paper, we propose a nonparametric method for Cox regression using the observed data from the whole cohort. The nonparametric estimator is the best linear combination of a nonparametric correction estimator from the calibration sample and the difference of the naive estimators from the calibration sample and the whole cohort. The asymptotic distribution is derived, and the finite sample performance of the proposed estimator is examined via intensive simulation studies. The methods are applied to the Nutritional Biomarkers Study of the Women's Health Initiative. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    NASA Astrophysics Data System (ADS)

    Guo, T. H.; Musgrave, J.

    1992-11-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed-loop controlled during main stage (65-109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct of the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fit which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements, and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network-based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto-associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto-associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using simulation data.

  8. A neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Guo, T. H.; Musgrave, J.

    1992-01-01

    In order to properly utilize the available fuel and oxidizer of a liquid propellant rocket engine, the mixture ratio is closed-loop controlled during main stage (65-109 percent power) operation. However, because of the lack of flight-capable instrumentation for measuring mixture ratio, the value of mixture ratio in the control loop is estimated using available sensor measurements such as the combustion chamber pressure and the volumetric flow, and the temperature and pressure at the exit duct of the low pressure fuel pump. This estimation scheme has two limitations. First, the estimation formula is based on an empirical curve fit which is accurate only within a narrow operating range. Second, the mixture ratio estimate relies on a few sensor measurements, and loss of any of these measurements will make the estimate invalid. In this paper, we propose a neural network-based estimator for the mixture ratio of the Space Shuttle Main Engine. The estimator is an extension of a previously developed neural network-based sensor failure detection and recovery algorithm (sensor validation). This neural network uses an auto-associative structure which utilizes the redundant information of dissimilar sensors to detect inconsistent measurements. Two approaches have been identified for synthesizing mixture ratio from measurement data using a neural network. The first approach uses an auto-associative neural network for sensor validation which is modified to include the mixture ratio as an additional output. The second uses a new network for the mixture ratio estimation in addition to the sensor validation network. Although mixture ratio is not directly measured in flight, it is generally available in simulation and in test bed firing data from facility measurements of fuel and oxidizer volumetric flows. The pros and cons of these two approaches will be discussed in terms of robustness to sensor failures and accuracy of the estimate during typical transients using simulation data.

  9. Defining Tsunami Magnitude as Measure of Potential Impact

    NASA Astrophysics Data System (ADS)

    Titov, V. V.; Tang, L.

    2016-12-01

    The goal of tsunami forecasting, as a system for predicting the potential impact of a tsunami at coastlines, requires a quick estimate of tsunami magnitude. This goal has been recognized since the beginning of tsunami research. The work of Kajiura, Soloviev, Abe, Murty, and many others discussed several scales for tsunami magnitude based on estimates of tsunami energy. However, the difficulty of estimating tsunami energy from available measurements at coastal sea-level stations has carried significant uncertainties and has made real-time estimation, before the tsunami impacts coastlines, virtually impossible. The slow process of estimating tsunami magnitude, including the collection of vast amounts of coastal sea-level data from affected coastlines, made it impractical to use any tsunami magnitude scale in tsunami warning operations. These uncertainties also made tsunami magnitudes difficult to use as a universal scale for tsunami analysis. Historically, earthquake magnitude has been used as a proxy for tsunami impact, since real-time seismic data are available for real-time processing and ample seismic data are available for elaborate post-event analysis. This proxy carries significant uncertainties in quantitative tsunami impact estimates, since the relation between earthquake energy and generated tsunami energy varies from case to case. In this work, we argue that current tsunami measurement capabilities and real-time modeling tools allow for establishing a robust tsunami magnitude that will be useful in tsunami warning as a quick estimate of tsunami impact, and in post-event analysis as a universal scale for inter-comparing tsunamis. We present a method for estimating tsunami magnitude based on tsunami energy, and apply the magnitude analysis to several historical events for inter-comparison with existing methods.

  10. Joint nonparametric correction estimator for excess relative risk regression in survival analysis with exposure measurement error

    PubMed Central

    Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J.

    2017-01-01

    Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In this paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric, which are consistent without imposing a covariate or error distribution, and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses. PMID:29354018

  11. Exploring Linear Measure.

    ERIC Educational Resources Information Center

    Cox, Philip L.

    This material is an instructional unit on measuring and estimating. A variety of activities are used with manipulative devices, worksheets, and discussion questions included. Major topics are estimating lengths, accuracy of measurement, metric system, scale drawings, and conversion between different units. A teacher's guide is also available.…

  12. A minimalist approach to bias estimation for passive sensor measurements with targets of opportunity

    NASA Astrophysics Data System (ADS)

    Belfadel, Djedjiga; Osborne, Richard W.; Bar-Shalom, Yaakov

    2013-09-01

    In order to carry out data fusion, registration error correction is crucial in multisensor systems. This requires estimation of the sensor measurement biases. It is important to correct for these bias errors so that the multiple sensor measurements and/or tracks can be referenced as accurately as possible to a common tracking coordinate system. This paper provides a solution for bias estimation with the minimum number of passive sensors (two), when only targets of opportunity are available. The sensor measurements are assumed time-coincident (synchronous) and perfectly associated. Since these sensors provide only line of sight (LOS) measurements, a single composite Cartesian measurement is formed by fusing the LOS measurements from the different sensors, which avoids the need for nonlinear filtering. We evaluate the Cramér-Rao Lower Bound (CRLB) on the covariance of the bias estimate, i.e., the quantification of the available information about the biases. Statistical tests on the results of simulations show that this method is statistically efficient, even for small sample sizes (as few as two sensors and six points on the trajectory of a single target of opportunity). We also show that the RMS position error is significantly improved with bias estimation compared with target position estimation using the original biased measurements.

  13. Spectral estimates of net radiation and soil heat flux

    USGS Publications Warehouse

    Daughtry, C.S.T.; Kustas, William P.; Moran, M.S.; Pinter, P. J.; Jackson, R. D.; Brown, P.W.; Nichols, W.D.; Gay, L.W.

    1990-01-01

    Conventional methods of measuring surface energy balance are point measurements and represent only a small area. Remote sensing offers a potential means of measuring outgoing fluxes over large areas at the spatial resolution of the sensor. The objective of this study was to estimate net radiation (Rn) and soil heat flux (G) using remotely sensed multispectral data acquired from an aircraft over large agricultural fields. Ground-based instruments measured Rn and G at nine locations along the flight lines. Incoming fluxes were also measured by ground-based instruments. Outgoing fluxes were estimated using remotely sensed data. Remote Rn, estimated as the algebraic sum of incoming and outgoing fluxes, slightly underestimated Rn measured by the ground-based net radiometers. The mean absolute errors for remote Rn minus measured Rn were less than 7%. Remote G, estimated as a function of a spectral vegetation index and remote Rn, slightly overestimated measured G; however, the mean absolute error for remote G was 13%. Some of the differences between measured and remote values of Rn and G are associated with differences in instrument designs and measurement techniques. The root mean square error for available energy (Rn - G) was 12%. Thus, methods using both ground-based and remotely sensed data can provide reliable estimates of the available energy, which can be partitioned into sensible and latent heat under nonadvective conditions. © 1990.
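    The abstract above estimates G as a function of a spectral vegetation index and Rn, then forms the available energy Rn - G. A sketch of that two-step calculation; the functional form and the coefficients a and b are illustrative placeholders, not the relationship fitted in the study:

```python
def soil_heat_flux(rn, ndvi, a=0.30, b=0.98):
    """Soil heat flux G parameterized from net radiation Rn and a
    vegetation index (here NDVI): the G/Rn fraction shrinks as
    vegetation cover increases, since canopy shades the soil.
    Coefficients a and b are hypothetical, for illustration only."""
    return rn * a * (1.0 - b * ndvi)

def available_energy(rn, ndvi):
    """Available energy Rn - G, the quantity that is partitioned into
    sensible and latent heat under nonadvective conditions."""
    return rn - soil_heat_flux(rn, ndvi)

# Dense canopy (NDVI = 0.8) under Rn = 500 W/m^2: G is a small fraction.
ae = available_energy(500.0, 0.8)
```

    With sparse vegetation (low NDVI) the same Rn yields a larger G and therefore less available energy, which is the qualitative behavior the vegetation-index parameterization captures.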

  14. Design of a two-level power system linear state estimator

    NASA Astrophysics Data System (ADS)

    Yang, Tao

    The availability of synchro-phasor data has raised the possibility of a linear state estimator if the inputs are only complex currents and voltages and if there are enough such measurements to meet observability and redundancy requirements. Moreover, the new digital substations can perform some of the computation at the substation itself resulting in a more accurate two-level state estimator. The objective of this research is to develop a two-level linear state estimator processing synchro-phasor data and estimating the states at both the substation level and the control center level. Both the mathematical algorithms that are different from those in the present state estimation procedure and the layered architecture of databases, communications and application programs that are required to support this two-level linear state estimator are described in this dissertation. Besides, as the availability of phasor measurements at substations will increase gradually, this research also describes how the state estimator can be enhanced to handle both the traditional state estimator and the proposed linear state estimator simultaneously. This provides a way to immediately utilize the benefits in those parts of the system where such phasor measurements become available and provides a pathway to transition to the smart grid of the future. The design procedure of the two-level state estimator is applied to two study systems. The first study system is the IEEE-14 bus system. The second one is the 179 bus Western Electricity Coordinating Council (WECC) system. The static database for the substations is constructed from the power flow data of these systems and the real-time measurement database is produced by a power system dynamic simulating tool (TSAT). Time-skew problems that may be caused by communication delays are also considered and simulated. We used the Network Simulator (NS) tool to simulate a simple communication system and analyse its time delay performance. 
    These time delays were too small to affect the results, especially since the measurement data is time-stamped and the state estimator for these small systems could be run at subsecond frequency. Keywords: State Estimation, Synchro-Phasor Measurement, Distributed System, Energy Control Center, Substation, Time-skew
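    The linearity referred to above comes from the measurement model: synchro-phasor inputs are linear in the complex bus voltages, so the estimate is a single weighted least-squares solve rather than the iterative procedure of a conventional nonlinear state estimator. A minimal real-valued sketch of that solve (illustrative only; the dissertation's two-level substation/control-center architecture is not reproduced here):

```python
import numpy as np

def linear_state_estimate(H, z, sigma):
    """One-shot weighted least-squares estimate for a linear measurement
    model z = H x + e: minimize (z - Hx)' W (z - Hx) with W built from
    the per-measurement standard deviations sigma. Solves the normal
    equations (H' W H) x = H' W z directly, with no iteration."""
    H = np.asarray(H, dtype=float)
    z = np.asarray(z, dtype=float)
    W = np.diag(1.0 / np.asarray(sigma, dtype=float) ** 2)
    gain = H.T @ W @ H                      # gain (information) matrix
    return np.linalg.solve(gain, H.T @ W @ z)

# Two equally trusted measurements of one state: the estimate is their mean.
x = linear_state_estimate([[1.0], [1.0]], [1.0, 3.0], [1.0, 1.0])
```

    Redundancy enters through extra rows of H, and observability through the rank of the gain matrix, which is why the text ties both requirements to the availability of enough phasor measurements.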

  15. Optimal estimation of two-qubit pure-state entanglement

    NASA Astrophysics Data System (ADS)

    Acín, Antonio; Tarrach, Rolf; Vidal, Guifré

    2000-06-01

    We present optimal measuring strategies for an estimation of the entanglement of unknown two-qubit pure states and of the degree of mixing of unknown single-qubit mixed states, of which N identical copies are available. The most general measuring strategies are considered in both situations, to conclude in the first case that a local, although collective, measurement suffices to estimate entanglement, a nonlocal property, optimally.

  16. A novel measure of effect size for mediation analysis.

    PubMed

    Lachowicz, Mark J; Preacher, Kristopher J; Kelley, Ken

    2018-06-01

    Mediation analysis has become one of the most popular statistical methods in the social sciences. However, many currently available effect size measures for mediation have limitations that restrict their use to specific mediation models. In this article, we develop a measure of effect size that addresses these limitations. We show how modification of a currently existing effect size measure results in a novel effect size measure with many desirable properties. We also derive an expression for the bias of the sample estimator for the proposed effect size measure and propose an adjusted version of the estimator. We present a Monte Carlo simulation study conducted to examine the finite sampling properties of the adjusted and unadjusted estimators, which shows that the adjusted estimator is effective at recovering the true value it estimates. Finally, we demonstrate the use of the effect size measure with an empirical example. We provide freely available software so that researchers can immediately implement the methods we discuss. Our developments here extend the existing literature on effect sizes and mediation by developing a potentially useful method of communicating the magnitude of mediation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. Quality of anthropometric measurements in Spanish Intensive Care Units (The CAMIES Study).

    PubMed

    García-Martínez, M A; Cherednichenko, T; Hidalgo Encinas, Y; Catalá Espinosa, A I; Arrascaeta Llanes, A; Acosta Escribano, J A

    2017-11-11

    Real body weight and height are essential data in all critically ill patients (CIP), given their influence on the design of therapies and on monitoring. Visual estimation is a very inaccurate practice, and no precise descriptions of anthropometric measurement among CIP are available in clinical practice guides. The objective was to describe anthropometric quality in CIP, health professionals' perception of that quality, and its influencing factors, using computer-assisted telephone or self-interviewing of doctors and nurses in all Spanish Intensive Care Units (ICU) attending adults. Anthropometric practices were described in detail, along with the proclivity to obtain real measurements and the influence of professional experience, the number of ICU beds, and the health professional group involved. A total of 481 questionnaires were collected from 176 hospitals (36.8% from physicians). The availability of measuring tools is limited (weight 68.7%, height 76.7%), with no relation to the number of ICU beds (weight P=.343, height P=.61). Visual estimation was the most frequent way of obtaining measurements (weight 65.9%, height 64.8%), even when measuring tools were available. Willingness to take real measurements was very low, especially among physicians, and greater professional experience was associated with increased rejection (P<.001). Visually estimated measurements exceed real measurements in the routine practice of Spanish ICUs. Measurement tools are not widely available in the ICU, and even when available, their use is not guaranteed. The surveyed population does not view anthropometric measures as important for clinical practice. An effort should be made by scientific societies to promote reliable anthropometric practice in Spanish ICUs. Copyright © 2017 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.

  18. Monitoring of Batch Industrial Crystallization with Growth, Nucleation, and Agglomeration. Part 2: Structure Design for State Estimation with Secondary Measurements

    PubMed Central

    2017-01-01

    This work investigates the design of alternative monitoring tools based on state estimators for industrial crystallization systems with nucleation, growth, and agglomeration kinetics. The estimation problem is regarded as a structure design problem where the estimation model and the set of innovated states have to be chosen; the estimator is driven by the available measurements of secondary variables. On the basis of Robust Exponential estimability arguments, it is found that the concentration is distinguishable with temperature and solid fraction measurements while the crystal size distribution (CSD) is not. Accordingly, a state estimator structure is selected such that (i) the concentration (and other distinguishable states) are innovated by means of the secondary measurements processed with the geometric estimator (GE), and (ii) the CSD is estimated by means of a rigorous model in open loop mode. The proposed estimator has been tested through simulations showing good performance in the case of mismatch in the initial conditions, parametric plant-model mismatch, and noisy measurements. PMID:28890604

  19. Monitoring of Batch Industrial Crystallization with Growth, Nucleation, and Agglomeration. Part 2: Structure Design for State Estimation with Secondary Measurements.

    PubMed

    Porru, Marcella; Özkan, Leyla

    2017-08-30

    This work investigates the design of alternative monitoring tools based on state estimators for industrial crystallization systems with nucleation, growth, and agglomeration kinetics. The estimation problem is regarded as a structure design problem where the estimation model and the set of innovated states have to be chosen; the estimator is driven by the available measurements of secondary variables. On the basis of Robust Exponential estimability arguments, it is found that the concentration is distinguishable with temperature and solid fraction measurements while the crystal size distribution (CSD) is not. Accordingly, a state estimator structure is selected such that (i) the concentration (and other distinguishable states) are innovated by means of the secondary measurements processed with the geometric estimator (GE), and (ii) the CSD is estimated by means of a rigorous model in open loop mode. The proposed estimator has been tested through simulations showing good performance in the case of mismatch in the initial conditions, parametric plant-model mismatch, and noisy measurements.

  20. On Estimating End-to-End Network Path Properties

    NASA Technical Reports Server (NTRS)

    Allman, Mark; Paxson, Vern

    1999-01-01

    The more information about current network conditions available to a transport protocol, the more efficiently it can use the network to transfer its data. In networks such as the Internet, the transport protocol must often form its own estimates of network properties based on measurements performed by the connection endpoints. We consider two basic transport estimation problems: determining the setting of the retransmission timer (RTO) for a reliable protocol, and estimating the bandwidth available to a connection as it begins. We look at both of these problems in the context of TCP, using a large TCP measurement set [Pax97b] for trace-driven simulations. For RTO estimation, we evaluate a number of different algorithms, finding that the performance of the estimators is dominated by their minimum values, and to a lesser extent, the timer granularity, while being virtually unaffected by how often round-trip time measurements are made or the settings of the parameters in the exponentially-weighted moving average estimators commonly used. For bandwidth estimation, we explore techniques previously sketched in the literature [Hoe96, AD98] and find that in practice they perform less well than anticipated. We then develop a receiver-side algorithm that performs significantly better.
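    The exponentially-weighted moving average RTO estimators evaluated above follow the classic SRTT/RTTVAR recursion, later codified in RFC 6298. A minimal sketch showing where the minimum value and timer granularity, which the paper found to dominate performance, enter the computation (units are seconds; parameter defaults follow RFC 6298):

```python
def rto_estimator(rtt_samples, alpha=0.125, beta=0.25,
                  granularity=0.5, rto_min=1.0):
    """EWMA retransmission-timeout computation in the classic
    SRTT/RTTVAR form. The result is floored by rto_min and by the
    clock granularity, the two knobs the paper found to dominate
    estimator performance, while alpha and beta barely matter."""
    srtt = rttvar = None
    for rtt in rtt_samples:
        if srtt is None:
            # First measurement: SRTT = R, RTTVAR = R/2 (RFC 6298).
            srtt, rttvar = rtt, rtt / 2.0
        else:
            # Update RTTVAR with the old SRTT, then update SRTT.
            rttvar = (1.0 - beta) * rttvar + beta * abs(srtt - rtt)
            srtt = (1.0 - alpha) * srtt + alpha * rtt
    return max(rto_min, srtt + max(granularity, 4.0 * rttvar))
```

    With steady round-trip times the variance term decays, so the returned value is pinned at the rto_min floor (or the granularity term) rather than tracking SRTT closely, illustrating the paper's finding about what actually governs RTO behavior.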

  1. Comparison of medicine availability measurements at health facilities: evidence from Service Provision Assessment surveys in five sub-Saharan African countries.

    PubMed

    Choi, Yoonjoung; Ametepi, Paul

    2013-07-09

    With growing emphasis on health systems strengthening in global health, various health facility assessment methods have been used increasingly to measure medicine and commodity availability. However, few studies have systematically compared estimates of availability based on different definitions. The objective of this study was to compare estimates of medicine availability based on different definitions. A secondary data analysis was conducted using data from the Service Provision Assessment (SPA), a nationally representative sample survey of health facilities, conducted in five countries: Kenya SPA 2010, Namibia SPA 2009, Rwanda SPA 2007, Tanzania SPA 2006, and Uganda SPA 2007. For 32 medicines, the percentage of facilities having the medicine was estimated under five definitions: four for current availability and one for six-month period availability. 'Observed availability of at least one valid unit' was used as the reference definition, and ratios between the reference and each of the other four estimates were calculated. Summary statistics of the ratios across the 32 medicines were calculated by country. The ratios were further compared between public and non-public facilities within each country. Across the five countries, compared to current observed availability of at least one valid unit, 'reported availability without observation' was on average 6% higher (ranging from 3% in Rwanda to 8% in Namibia), 'observed availability where all units were valid' was 11% lower (ranging from 2% in Tanzania to 19% in Uganda), and 'six-month period availability' was 14% lower (ranging from 5% in Namibia to 25% in Uganda). Medicine availability estimates vary substantially across definitions and need to be interpreted with careful consideration of the methods used.
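    The comparison in the abstract above reduces to ratios of each alternative availability estimate to the reference definition. A trivial sketch of that calculation; the key names and the percentages below are hypothetical, made up purely for illustration:

```python
def availability_ratios(estimates, reference_key):
    """Ratio of each alternative availability estimate to the reference
    definition (in the study, 'observed availability of at least one
    valid unit'). Values above 1 mean the definition inflates
    availability relative to the reference; below 1, it deflates it."""
    ref = estimates[reference_key]
    return {k: v / ref for k, v in estimates.items() if k != reference_key}

# Hypothetical percentages for one medicine in one country.
ratios = availability_ratios(
    {"observed_any_valid": 80.0, "reported": 84.8, "all_units_valid": 71.2},
    "observed_any_valid",
)
```

    Summarizing such ratios across medicines, and then across countries, yields the average differences (e.g., "6% higher", "11% lower") reported in the abstract.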

  2. Constraints on the photolysis and the equilibrium constant of ClO-dimer from airborne and balloon-borne measurements of chlorine compounds

    NASA Astrophysics Data System (ADS)

    Kleinboehl, A.; Canty, T. P.; Salawitch, R. J.; Khosravi, M.; Urban, J.; Toon, G. C.; Kuellmann, H.; Notholt, J.

    2011-12-01

    Significant differences exist among laboratory measurements of the ClO-dimer photolysis cross-sections and of the rate constant controlling the thermal equilibrium between ClO-dimer and ClO. This leads to uncertainties in the calculations of stratospheric ozone loss in the winter polar regions. One way to constrain the plausibility of these parameters is to measure ClO across the terminator in the activated polar vortex. Here we analyze measurements of ClO taken by the airborne submillimeter radiometer ASUR in the Arctic winter of 1999/2000. We use measured ClO at low solar zenith angles (SZA) to estimate the total active chlorine (ClOx). We estimate total available inorganic chlorine (Cly) using ASUR measurements of N2O in January 2000 and a N2O-Cly correlation established by a balloon measurement of the MarkIV interferometer in December 1999. We compare the ClOx estimates based on different photolysis rates of ClO-dimer. Our results show that cross-sections leading to fast photolysis rates, like the ones by Burkholder et al. [1990] or Papanastasiou et al. [2009], give ClOx mixing ratios that overlap with our estimated range of available Cly. Slower photolysis rates, like the ones by von Hobe et al. [2009] and Pope et al. [2007], lead to ClOx values that are significantly higher than the available Cly. We use the calculated ClOx from low SZA to estimate the ClO in darkness with different equilibrium constants, and compare it with ASUR ClO measurements before sunrise (SZA > 95°). We find that calculations with equilibrium constants published in the JPL evaluations of the last few years all give good agreement with observed ClO mixing ratios. The equilibrium constant estimated by von Hobe et al. [2005] yields ClO values that are higher than those observed.

  3. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
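The kind of quick estimate the article describes can be illustrated with the standard normal-approximation formula for a two-sample comparison of means; the function below is a generic sketch (scipy assumed), not the article's own procedure:

```python
# Quick sample-size estimate for a two-sample comparison of means,
# using the standard normal approximation:
#   n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2
# where d is the standardized effect size (Cohen's d).
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)   # two-sided test
    z_b = norm.ppf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 / d ** 2)
```

For a medium effect (d = 0.5) at alpha = 0.05 and 80% power this gives 63 subjects per group; the usual t-based tables give about 64, so the approximation is close enough for planning purposes.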

  4. Acceptance of New Medicaid Patients by Primary Care Physicians and Experiences with Physician Availability among Children on Medicaid or the Children's Health Insurance Program

    PubMed Central

    Decker, Sandra L

    2015-01-01

    Objective To estimate the relationship between physicians' acceptance of new Medicaid patients and access to health care. Data Sources The National Ambulatory Medical Care Survey (NAMCS) Electronic Health Records Survey and the National Health Interview Survey (NHIS) 2011/2012. Study Design Linear probability models estimated the relationship between measures of experiences with physician availability among children on Medicaid or the Children's Health Insurance Program (CHIP) from the NHIS and state-level estimates of the percent of primary care physicians accepting new Medicaid patients from the NAMCS, controlling for other factors. Principal Findings Nearly 16 percent of children with a significant health condition or development delay had a doctor's office or clinic indicate that the child's health insurance was not accepted in states with less than 60 percent of physicians accepting new Medicaid patients, compared to less than 4 percent in states with at least 75 percent of physicians accepting new Medicaid patients. Adjusted estimates and estimates for other measures of access to care were similar. Conclusions Measures of experiences with physician availability for children on Medicaid/CHIP were generally good, though better in states where more primary care physicians accepted new Medicaid patients. PMID:25683869

  5. Estimating the dose response relationship for occupational radiation exposure measured with minimum detection level.

    PubMed

    Xue, Xiaonan; Shore, Roy E; Ye, Xiangyang; Kim, Mimi Y

    2004-10-01

    Occupational exposures are often recorded as zero when the exposure is below the minimum detection level (BMDL). This can lead to an underestimation of the doses received by individuals and can lead to biased estimates of risk in occupational epidemiologic studies. The extent of the exposure underestimation is increased with the magnitude of the minimum detection level (MDL) and the frequency of monitoring. This paper uses multiple imputation methods to impute values for the missing doses due to BMDL. A Gibbs sampling algorithm is developed to implement the method, which is applied to two distinct scenarios: when dose information is available for each measurement (but BMDL is recorded as zero or some other arbitrary value), or when the dose information available represents the summation of a series of measurements (e.g., only yearly cumulative exposure is available but based on, say, weekly measurements). Then the average of the multiple imputed exposure realizations for each individual is used to obtain an unbiased estimate of the relative risk associated with exposure. Simulation studies are used to evaluate the performance of the estimators. As an illustration, the method is applied to a sample of historical occupational radiation exposure data from the Oak Ridge National Laboratory.
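The core imputation step can be sketched as drawing replacement doses from a distribution truncated above at the MDL. The lognormal form and its parameters below are assumptions for illustration; the paper itself embeds such draws in a Gibbs-sampling, multiple-imputation framework:

```python
# Sketch: impute doses recorded as zero because they fell below the
# minimum detection level (MDL), by inverse-CDF sampling from a
# lognormal truncated to (0, MDL]. Distribution choice and parameters
# are illustrative assumptions, not the paper's fitted values.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(0)

def impute_bmdl(n, mdl, sigma, scale):
    """Draw n doses from a lognormal truncated to (0, mdl]."""
    dist = lognorm(s=sigma, scale=scale)
    u = rng.uniform(0.0, dist.cdf(mdl), size=n)  # uniform on (0, F(mdl))
    return dist.ppf(u)                           # maps back into (0, mdl]

imputed = impute_bmdl(1000, mdl=0.1, sigma=1.0, scale=0.05)
```

Averaging several such imputed realizations per worker, as the abstract describes, then feeds an approximately unbiased dose into the risk regression.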

  6. State estimator for multisensor systems with irregular sampling and time-varying delays

    NASA Astrophysics Data System (ADS)

    Peñarrocha, I.; Sanchis, R.; Romero, J. A.

    2012-08-01

    This article addresses the state estimation in linear time-varying systems with several sensors with different availability, randomly sampled in time and whose measurements have a time-varying delay. The approach is based on a modification of the Kalman filter with the negative-time measurement update strategy, avoiding running back the full standard Kalman filter, the use of full augmented order models or the use of reorganisation techniques, leading to a lower implementation cost algorithm. The update equations are run every time a new measurement is available, independently of the time when it was taken. The approach is useful for networked control systems, systems with long delays and scarce measurements and for out-of-sequence measurements.
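The irregular-sampling aspect can be illustrated with a scalar Kalman filter whose time update runs every step but whose measurement update runs only when a sample arrives. This is only a sketch of that one idea; the paper's negative-time update strategy for delayed measurements is more involved and is not reproduced here:

```python
# Minimal sketch: scalar Kalman filter with irregular measurements.
# The time update runs every step; the measurement update runs only
# when a sample is available (here, every third step).
import numpy as np

a, q, r = 0.95, 0.01, 0.1        # state transition, process/meas. noise
x_hat, p = 0.0, 1.0              # initial estimate and covariance
x = 1.0                          # true (unknown) state

rng = np.random.default_rng(1)
errors = []
for k in range(200):
    x = a * x + rng.normal(0.0, np.sqrt(q))   # true state evolves
    x_hat = a * x_hat                         # time update (always)
    p = a * a * p + q
    if k % 3 == 0:                            # measurement available?
        y = x + rng.normal(0.0, np.sqrt(r))
        gain = p / (p + r)
        x_hat += gain * (y - x_hat)           # measurement update
        p *= 1.0 - gain
    errors.append(abs(x - x_hat))
```

Between arrivals the covariance p grows, so the gain applied when a measurement finally arrives is automatically larger; that is the basic mechanism that makes update-on-arrival filters well suited to networked systems with scarce measurements.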

  7. Precipitation estimates and comparison of satellite rainfall data to in situ rain gauge observations to further develop the watershed-modeling capabilities for the Lower Mekong River Basin

    NASA Astrophysics Data System (ADS)

    Dandridge, C.; Lakshmi, V.; Sutton, J. R. P.; Bolten, J. D.

    2017-12-01

    This study focuses on the lower region of the Mekong River Basin (MRB), an area including Burma, Cambodia, Vietnam, Laos, and Thailand. This region is home to expansive agriculture that relies heavily on annual precipitation over the basin for its prosperity. Annual precipitation amounts are regulated by the global monsoon system and therefore vary throughout the year. This research will lead to improved prediction of floods and management of floodwaters for the MRB. We compare different satellite estimates of precipitation to each other and to in-situ precipitation estimates for the Mekong River Basin. These comparisons will help us determine which satellite precipitation estimates are better at predicting precipitation in the MRB and will help further our understanding of watershed-modeling capabilities for the basin. In this study we use: 1) NOAA's PERSIANN daily 0.25° precipitation estimate Climate Data Record (CDR), 2) NASA's Tropical Rainfall Measuring Mission (TRMM) daily 0.25° estimate, 3) NASA's Global Precipitation Measurement (GPM) daily 0.1° estimate, and 4) daily precipitation estimates from 488 in-situ stations located in the lower MRB. The PERSIANN CDR precipitation estimate provides the longest data record because it is available from 1983 to present. The TRMM precipitation estimate is available from 2000 to present and the GPM precipitation estimates are available from 2015 to present. For this reason, we provide several comparisons between our precipitation estimates. Comparisons were done between each satellite product and the in-situ precipitation estimates based on geographical location and date, using the entire available data record for each satellite product for daily, monthly, and yearly precipitation estimates. We found that monthly PERSIANN precipitation estimates were able to explain up to 90% of the variability in station precipitation, depending on station location.
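The "variability explained" figure quoted above is the squared Pearson correlation between matched monthly series; a sketch with synthetic series standing in for the real gauge and satellite data:

```python
# Sketch: variance in station precipitation explained by a satellite
# product, computed as the squared Pearson correlation of matched
# monthly series. The arrays here are synthetic stand-ins, not
# PERSIANN/TRMM/GPM retrievals.
import numpy as np

rng = np.random.default_rng(42)
station = rng.gamma(shape=2.0, scale=50.0, size=120)      # monthly totals, mm
satellite = station + rng.normal(0.0, 20.0, size=120)     # noisy retrieval

r = np.corrcoef(station, satellite)[0, 1]
variance_explained = r ** 2    # the "up to 90%" style figure in the record
```

In practice each satellite pixel would first be paired with its gauge by location and date, and the statistic computed per station, which is why the study reports a range rather than a single value.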

  8. Estimation of the sea surface's two-scale backscatter parameters

    NASA Technical Reports Server (NTRS)

    Wentz, F. J.

    1978-01-01

    The relationship between the sea-surface normalized radar cross section and the friction velocity vector is determined using a parametric two-scale scattering model. The model parameters are found from a nonlinear maximum likelihood estimation. The estimation is based on aircraft scatterometer measurements and the sea-surface anemometer measurements collected during the JONSWAP '75 experiment. The estimates of the ten model parameters converge to realistic values that are in good agreement with the available oceanographic data. The rms discrepancy between the model and the cross section measurements is 0.7 dB, which is the rms sum of a 0.3 dB average measurement error and a 0.6 dB modeling error.

  9. Estimation of water flux in urban area using eddy covariance measurements in Riverside, Southern California

    USDA-ARS?s Scientific Manuscript database

    Micrometeorological methods can directly measure the sensible and latent heat fluxes at specific sites and provide robust estimates of the evaporative fraction (EF), which is the fraction of available surface energy contained in latent heat. Across a vegetation coverage gradient in an urban area, an empir...

  10. Soft sensor based composition estimation and controller design for an ideal reactive distillation column.

    PubMed

    Vijaya Raghavan, S R; Radhakrishnan, T K; Srinivasan, K

    2011-01-01

    In this research work, the authors present the design and implementation of a recurrent neural network (RNN) based inferential state estimation scheme for an ideal reactive distillation column. Decentralized PI controllers are designed and implemented. The reactive distillation process is controlled by regulating the composition, which is estimated from the available temperature measurements using a type of RNN called a time-delayed neural network (TDNN). The performance of the RNN-based state estimation scheme under both open-loop and closed-loop conditions has been compared with a standard extended Kalman filter (EKF) and a feed-forward neural network (FNN). Online training/correction is performed for both the RNN and FNN schemes every ten minutes, whenever new untrained measurements are available from a conventional composition analyzer. The RNN shows better state estimation capability than the other schemes in terms of qualitative and quantitative performance indices. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Improvement of Vehicle Positioning Using Car-to-Car Communications in Consideration of Communication Delay

    NASA Astrophysics Data System (ADS)

    Hontani, Hidekata; Higuchi, Yuya

    In this article, we propose a vehicle positioning method that can estimate the positions of cars even in areas where GPS is not available. For the estimation, each car measures the relative distance to the car running in front of it, shares the measurements with other cars, and uses the received measurements to estimate its own position. In order to estimate the position even when measurements are received with a time delay, we employ time-delay-tolerant Kalman filtering. For sharing the measurements, it is assumed that a car-to-car communication system is used. Measurements sent from farther cars are then received with larger time delays, so the accuracy of the estimates for farther cars becomes worse. Hence, the proposed method manages only the states of nearby cars to reduce computing effort. The authors simulated the proposed filtering method and found that it estimates the positions of nearby cars as accurately as distributed Kalman filtering.

  12. Estimation of effective wind speed

    NASA Astrophysics Data System (ADS)

    Østergaard, K. Z.; Brath, P.; Stoustrup, J.

    2007-07-01

    The wind speed has a huge impact on the dynamic response of a wind turbine. Because of this, many control algorithms use a measure of the wind speed to increase performance, e.g. by gain scheduling and feed-forward. Unfortunately, no accurate online measurement of the effective wind speed is available, which means that it must be estimated in order to make such control methods applicable in practice. In this paper a new method is presented for the estimation of the effective wind speed. First, the rotor speed and aerodynamic torque are estimated by a combined state and input observer. These two variables, combined with the measured pitch angle, are then used to calculate the effective wind speed by an inversion of a static aerodynamic model.
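The final step described above (inverting a static aerodynamic model, given the estimated rotor speed and aerodynamic torque plus the measured pitch angle) can be sketched as a one-dimensional root find. The Cp approximation, air density, and rotor radius below are generic textbook placeholders, not the paper's model:

```python
# Sketch: recover the effective wind speed v by solving
#   Q_model(v; omega, beta) = Q_estimated
# with a bracketed root find. Cp shape, rho, and R are illustrative
# assumptions, not the paper's aerodynamic model.
import numpy as np
from scipy.optimize import brentq

RHO, R = 1.225, 40.0                 # air density [kg/m^3], rotor radius [m]

def cp(lam, beta):
    """Generic power-coefficient approximation (illustrative only)."""
    li = 1.0 / (1.0 / (lam + 0.08 * beta) - 0.035 / (beta ** 3 + 1.0))
    return (0.5176 * (116.0 / li - 0.4 * beta - 5.0) * np.exp(-21.0 / li)
            + 0.0068 * lam)

def effective_wind_speed(omega, torque, beta):
    """Find v such that modeled aerodynamic torque matches the estimate."""
    def residual(v):
        lam = omega * R / v                               # tip-speed ratio
        power = 0.5 * RHO * np.pi * R ** 2 * cp(lam, beta) * v ** 3
        return power / omega - torque                     # torque mismatch
    return brentq(residual, 3.0, 25.0)                    # operating range
```

Feeding the function a torque generated from the same model at a known wind speed recovers that speed, which is the consistency property the estimator relies on.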

  13. On the estimation of wall pressure coherence using time-resolved tomographic PIV

    NASA Astrophysics Data System (ADS)

    Pröbsting, Stefan; Scarano, Fulvio; Bernardini, Matteo; Pirozzoli, Sergio

    2013-07-01

    Three-dimensional time-resolved velocity field measurements are obtained using a high-speed tomographic Particle Image Velocimetry (PIV) system on a fully developed flat plate turbulent boundary layer for the estimation of wall pressure fluctuations. The work focuses on the applicability of tomographic PIV to compute the coherence of pressure fluctuations, with attention to the estimation of the streamwise and spanwise coherence lengths. The latter is required for estimations of aeroacoustic noise radiation by boundary layers and trailing edge flows, but is also of interest for vibro-structural problems. The pressure field is obtained by solving the Poisson equation for incompressible flows, where the source terms are provided by time-resolved velocity field measurements. Measured 3D velocity data is compared to results obtained from planar PIV, and a Direct Numerical Simulation (DNS) at similar Reynolds number. An improved method for the estimation of the material derivative, based on a least-squares estimator of the velocity derivative along a particle trajectory, is proposed and applied. Computed surface pressure fluctuations are further verified by means of simultaneous measurements by a pinhole microphone and compared to the DNS results and a semi-empirical model available from literature. The correlation coefficient for the reconstructed pressure time series with respect to pinhole microphone measurements attains approximately 0.5 for the band-pass filtered signal over the range of frequencies resolved by the velocity field measurements. Scaled power spectra of the pressure at a single point compare favorably to the DNS results and those available from literature. Finally, the coherence of surface pressure fluctuations and the resulting span- and streamwise coherence lengths are estimated and compared to semi-empirical models and DNS results.

  14. Inference on energetics of deep-sea fish that cannot be aged: The case of the hagfish

    NASA Astrophysics Data System (ADS)

    van der Meer, Jaap; Kooijman, Sebastiaan (Bas) A. L. M.

    2014-11-01

    Dynamic Energy Budget (DEB) theory is used to estimate maximum growth rates and age at maturity for two hagfish species, the Atlantic hagfish Myxine glutinosa and the Pacific hagfish Eptatretus stoutii. Neither direct measurements on growth nor aging methods are available for these species. Only limited information on, for example, length and mass at birth and at puberty, and on oxygen consumption versus mass is available. For the Atlantic hagfish, but not for the Pacific hagfish, estimated growth rates are much higher and estimated age at maturity is much lower than previously thought, which may have implications for fisheries management. Yet, whether or not these results are due to erroneous oxygen consumption measurements remains to be seen.

  15. A simulation of air pollution model parameter estimation using data from a ground-based LIDAR remote sensor

    NASA Technical Reports Server (NTRS)

    Kibler, J. F.; Suttles, J. T.

    1977-01-01

    One way to obtain estimates of the unknown parameters in a pollution dispersion model is to compare the model predictions with remotely sensed air quality data. A ground-based LIDAR sensor provides relative pollution concentration measurements as a function of space and time. The measured sensor data are compared with the dispersion model output through a numerical estimation procedure to yield parameter estimates which best fit the data. This overall process is tested in a computer simulation to study the effects of various measurement strategies. Such a simulation is useful prior to a field measurement exercise to maximize the information content in the collected data. Parametric studies of simulated data matched to a Gaussian plume dispersion model indicate the trade offs available between estimation accuracy and data acquisition strategy.

  16. Estimating Renewable Energy Economic Potential in the United States: Methodology and Initial Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Austin; Beiter, Philipp; Heimiller, Donna

    The report describes a geospatial analysis method to estimate the economic potential of several renewable resources available for electricity generation in the United States. Economic potential, one measure of renewable generation potential, is defined in this report as the subset of the available resource technical potential where the cost required to generate the electricity (which determines the minimum revenue requirements for development of the resource) is below the revenue available in terms of displaced energy and displaced capacity.

  17. Comparison of different estimation techniques for biomass concentration in large scale yeast fermentation.

    PubMed

    Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U

    2011-04-01

    In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentrations in a production-scale fed-batch bioprocess. These methods are: (i) estimation based on a kinetic model of overflow metabolism; (ii) estimation based on a metabolic black-box model; (iii) estimation based on an observer; (iv) estimation based on an artificial neural network; and (v) estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, number of primary measurements required, and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although it requires more measurements than the other methods. However, the required extra measurements are based on commonly employed instruments in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.

  18. State Estimation of International Space Station Centrifuge Rotor With Incomplete Knowledge of Disturbance Inputs

    NASA Technical Reports Server (NTRS)

    Sullivan, Michael J.

    2005-01-01

    This thesis develops a state estimation algorithm for the Centrifuge Rotor (CR) system where only relative measurements are available, with limited knowledge of both rotor imbalance disturbances and International Space Station (ISS) thruster disturbances. A Kalman filter is applied to a plant model augmented with sinusoidal disturbance states used to model the effects of both the rotor imbalance and the ISS thrusters on the CR relative motion measurement. The sinusoidal disturbance states compensate for the lack of availability of plant inputs for use in the Kalman filter. Testing confirms that complete disturbance modeling is necessary to ensure reliable estimation. Further testing shows that increased estimator operational bandwidth can be achieved through the expansion of the disturbance model within the filter dynamics. In addition, Monte Carlo analysis shows varying levels of robustness against defined plant/filter uncertainty variations.

  19. Low-flow characteristics for streams on the Islands of Kauaʻi, Oʻahu, Molokaʻi, Maui, and Hawaiʻi, State of Hawaiʻi

    USGS Publications Warehouse

    Cheng, Chui Ling

    2016-08-03

    Statistical models were developed to estimate natural streamflow under low-flow conditions for streams with existing streamflow data at measurement sites on the Islands of Kauaʻi, Oʻahu, Molokaʻi, Maui, and Hawaiʻi. Streamflow statistics used to describe the low-flow characteristics are flow-duration discharges that are equaled or exceeded between 50 and 95 percent of the time during the 30-year base period 1984–2013. Record-augmentation techniques were applied to develop statistical models relating concurrent streamflow data at the measurement sites and long-term data from nearby continuous-record streamflow-gaging stations that were in operation during the base period and were selected as index stations. Existing data and subsequent low-flow analyses of the available data help to identify streams in under-represented geographic areas and hydrogeologic settings where additional data collection is suggested. Low-flow duration discharges were estimated for 107 measurement sites (including long-term and short-term continuous-record streamflow-gaging stations, and partial-record stations) and 27 index stations. The adequacy of the statistical models was evaluated with correlation coefficients and modified Nash-Sutcliffe coefficients of efficiency, and a majority of the low-flow duration-discharge estimates are satisfactory based on these regression statistics. Molokaʻi and Hawaiʻi have the fewest measurement sites (that are not located on ephemeral stream reaches) at which flow-duration discharges were estimated, which can be partially explained by the limited number of index stations available on these islands that could be used for record augmentation. At measurement sites on some tributary streams, low-flow duration discharges could not be estimated because no adequate correlations could be developed with the index stations. 
These measurement sites are located on streams where duration-discharge estimates are available at long-term stations at other locations on the main stream channel to provide at least some definition of low-flow characteristics on that stream. In terms of general natural streamflow data availability, data are scarce in the leeward areas for all five islands as many leeward streams are dry or have minimal flow. Other under-represented areas include central Oʻahu, central Maui, and southeastern Maui.
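The record-augmentation step described above can be sketched with MOVE.1 (maintenance of variance extension) applied to log-transformed concurrent flows at a short-record site and an index station; the synthetic flows and parameter values below are illustrative only:

```python
# Sketch: MOVE.1-style record augmentation in log space, then transfer
# of the index station's Q95 (flow exceeded 95% of the time) to the
# short-record site. Synthetic flows stand in for real gage records.
import numpy as np

rng = np.random.default_rng(7)
index_flows = np.exp(rng.normal(2.0, 0.8, size=60))          # long-term gage
site_flows = 0.5 * index_flows ** 0.9 * np.exp(rng.normal(0.0, 0.1, 60))

lx, ly = np.log(index_flows), np.log(site_flows)
slope = ly.std(ddof=1) / lx.std(ddof=1)    # MOVE.1 slope preserves variance

def transfer(q_index):
    """Estimate site discharge from an index-station discharge."""
    return np.exp(ly.mean() + slope * (np.log(q_index) - lx.mean()))

q95_index = np.quantile(index_flows, 0.05)   # low-flow quantile at index
q95_site = transfer(q95_index)               # transferred Q95 estimate
```

Unlike ordinary regression, the MOVE.1 slope uses the ratio of standard deviations rather than the correlation-attenuated OLS slope, which is why it is preferred for transferring flow-duration statistics.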

  20. Using mixture-tuned match filtering to measure changes in subpixel vegetation area in Las Vegas, Nevada

    NASA Astrophysics Data System (ADS)

    Brelsford, Christa; Shepherd, Doug

    2014-01-01

    In desert cities, accurate measurements of vegetation area within residential lots are necessary to understand drivers of change in water consumption. Most residential lots are smaller than an individual 30-m pixel from Landsat satellite images and have a mixture of vegetation and other land covers. Quantifying vegetation change in this environment requires estimating subpixel vegetation area. Mixture-tuned match filtering (MTMF) has been successfully used for subpixel target detection. There have been few successful applications of MTMF to subpixel abundance estimation because the relationship observed between MTMF estimates and ground measurements of abundance is noisy. We use a ground truth dataset over 10 times larger than that available for any previous MTMF application to estimate the bias between ground data and MTMF results. We find that MTMF underestimates the fractional area of vegetation by 5% to 10% and show that averaging over multiple pixels is necessary to reduce noise in the dataset. We conclude that MTMF is a viable technique for fractional area estimation when a large dataset is available for calibration. When this method is applied to estimating vegetation area in Las Vegas, Nevada, spatial and temporal trends are consistent with expectations from known population growth and policy changes.
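The bias estimation described above amounts to fitting a linear calibration between MTMF fractions and ground-truth fractions; a sketch with synthetic data that mimics the reported 5% to 10% underestimate:

```python
# Sketch: calibrate subpixel fraction estimates against ground truth
# with an ordinary least-squares line. The synthetic 'mtmf' values are
# biased low by design, mimicking the behaviour reported in the study.
import numpy as np

rng = np.random.default_rng(3)
true_frac = rng.uniform(0.0, 1.0, size=500)                # ground truth
mtmf = 0.92 * true_frac - 0.02 + rng.normal(0.0, 0.08, 500)  # biased, noisy

slope, intercept = np.polyfit(mtmf, true_frac, 1)          # calibration line
calibrated = np.clip(slope * mtmf + intercept, 0.0, 1.0)   # valid fractions

bias_before = np.mean(mtmf - true_frac)       # negative: underestimation
bias_after = np.mean(calibrated - true_frac)  # near zero after calibration
```

The per-pixel noise remains after calibration, which is consistent with the study's point that averaging over multiple pixels is needed before the fractional-area estimates become usable.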

  1. Calibration and Measurement Uncertainty Estimation of Radiometric Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, A.; Sengupta, M.; Reda, I.

    2014-11-01

    Evaluating the performance of photovoltaic cells, modules, and arrays that form large solar deployments relies on accurate measurements of the available solar resource. Therefore, determining the accuracy of these solar radiation measurements provides a better understanding of investment risks. This paper provides guidelines and recommended procedures for estimating the uncertainty in radiometer calibrations and measurements using methods that follow the International Bureau of Weights and Measures Guide to the Expression of Uncertainty in Measurement (GUM). Standardized analysis based on these procedures ensures that the quoted uncertainty is well documented.
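The GUM procedure the report standardizes combines independent uncertainty components in quadrature, weighted by sensitivity coefficients, and then applies a coverage factor; a minimal sketch with illustrative component values (not the report's budget):

```python
# Sketch of GUM-style propagation: combined standard uncertainty
#   u_c = sqrt( sum_i (c_i * u_i)^2 )   for independent components,
# then expanded uncertainty U = k * u_c with coverage factor k = 2
# (~95% coverage). Component values below are illustrative only.
import math

components = [
    # (sensitivity coefficient c_i, standard uncertainty u_i)
    (1.0, 0.5),    # e.g. calibration reference, W/m^2
    (0.8, 0.3),    # e.g. data-logger voltage reading
    (1.0, 0.2),    # e.g. zero offset
]

u_combined = math.sqrt(sum((c * u) ** 2 for c, u in components))
U_expanded = 2.0 * u_combined            # coverage factor k = 2
```

Correlated components would need covariance terms added inside the square root; the quadrature form above holds only under the independence assumption stated in the comments.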

  2. Estimating rainforest biomass stocks and carbon loss from deforestation and degradation in Papua New Guinea 1972-2002: Best estimates, uncertainties and research needs.

    PubMed

    Bryan, Jane; Shearman, Phil; Ash, Julian; Kirkpatrick, J B

    2010-01-01

    Reduction of carbon emissions from tropical deforestation and forest degradation is being considered a cost-effective way of mitigating the impacts of global warming. If such reductions are to be implemented, accurate and repeatable measurements of forest cover change and biomass will be required. In Papua New Guinea (PNG), which has one of the world's largest remaining areas of tropical forest, we used the best available data to estimate rainforest carbon stocks, and emissions from deforestation and degradation. We collated all available PNG field measurements which could be used to estimate carbon stocks in logged and unlogged forest. We extrapolated these plot-level estimates across the forested landscape using high-resolution forest mapping. We found the best estimate of forest carbon stocks contained in logged and unlogged forest in 2002 to be 4770 Mt (+/-13%). Our best estimate of gross forest carbon released through deforestation and degradation between 1972 and 2002 was 1178 Mt (+/-18%). By applying a long-term forest change model, we estimated that the carbon loss resulting from deforestation and degradation in 2001 was 53 Mt (+/-18%), rising from 24 Mt (+/-15%) in 1972. Forty-one percent of 2001 emissions resulted from logging, rising from 21% in 1972. Reducing emissions from logging is therefore a priority for PNG. The large uncertainty in our estimates of carbon stocks and fluxes is primarily due to the dearth of field measurements in both logged and unlogged forest, and the lack of PNG logging damage studies. Research priorities for PNG to increase the accuracy of forest carbon stock assessments are the collection of field measurements in unlogged forest and more spatially explicit logging damage studies. Copyright 2009 Elsevier Ltd. All rights reserved.
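The extrapolation described above multiplies plot-derived carbon densities by mapped areas per forest class; a minimal sketch with illustrative numbers (not the PNG values):

```python
# Sketch: landscape carbon stock as the sum over forest classes of
# mapped area times plot-derived carbon density. Areas and densities
# below are illustrative placeholders, not the PNG estimates.
forest_classes = {
    # class: (area in Mha, carbon density in t C per ha)
    "unlogged": (25.0, 150.0),
    "logged":   (5.0, 110.0),
}

stock_mt = sum(area_mha * 1e6 * density / 1e6      # Mha -> ha, t -> Mt
               for area_mha, density in forest_classes.values())
```

The study's +/- percentages then follow from propagating the sampling uncertainty of the plot densities and the mapping uncertainty of the class areas through this same sum.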

  3. 40 CFR 98.195 - Procedures for estimating missing data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... estimating missing data. For the procedure in § 98.193(b)(1), a complete record of all measured parameters... all available process data or data used for accounting purposes. (b) For missing values related to the...

  4. Tree grade distribution in Allegheny hardwoods

    Treesearch

    Richard L. Ernst; David A. Marquis

    1978-01-01

    Estimates of the distribution of tree grades by diameter class were developed for six hardwood species on the Allegheny Plateau. These estimates can be used to calculate present and projected stand values when actual tree grade measurements are not available.

  5. Determination of power system component parameters using nonlinear dead beat estimation method

    NASA Astrophysics Data System (ADS)

    Kolluru, Lakshmi

    Power systems are considered the most complex man-made wonders in existence today. In order to effectively supply the ever increasing demands of the consumers, power systems are required to remain stable at all times. Stability and monitoring of these complex systems are achieved by strategically placed computerized control centers. State and parameter estimation is an integral part of these facilities, as they deal with identifying the unknown states and/or parameters of the systems. Advancements in measurement technologies and the introduction of phasor measurement units (PMU) provide detailed and dynamic information of all measurements. Accurate availability of dynamic measurements provides engineers the opportunity to expand and explore various possibilities in power system dynamic analysis/control. This thesis discusses the development of a parameter determination algorithm for nonlinear power systems, using dynamic data obtained from local measurements. The proposed algorithm was developed by observing the dead beat estimator used in state space estimation of linear systems. The dead beat estimator is considered to be very effective as it is capable of obtaining the required results in a fixed number of steps. The number of steps required is related to the order of the system and the number of parameters to be estimated. The proposed algorithm uses the idea of dead beat estimator and nonlinear finite difference methods to create an algorithm which is user friendly and can determine the parameters fairly accurately and effectively. The proposed algorithm is based on a deterministic approach, which uses dynamic data and mathematical models of power system components to determine the unknown parameters. The effectiveness of the algorithm is tested by implementing it to identify the unknown parameters of a synchronous machine. MATLAB environment is used to create three test cases for dynamic analysis of the system with assumed known parameters. 
Faults are introduced in the virtual test systems, and the dynamic data obtained in each case is analyzed and recorded. Ideally, actual measurements would be provided to the algorithm; as such measurements are not readily available, the data obtained from simulations is fed into the determination algorithm as input. The obtained results are then compared to the original (or assumed) values of the parameters. The results suggest that the algorithm is able to determine the parameters of a synchronous machine when crisp data is available.

  6. The Best Estimated Trajectory Analysis for Pad Abort One

    NASA Technical Reports Server (NTRS)

    Kutty, Prasad; Noonan, Meghan; Karlgaard, Christopher; Beck, Roger

    2011-01-01

    I. Best Estimated Trajectory (BET) objectives: a) Produce a reconstructed trajectory of the PA-1 flight to understand vehicle dynamics and aid other post-flight analyses. b) Leverage all measurement sources taken of the vehicle during flight to produce the most accurate estimate of the vehicle trajectory. c) Generate trajectory reconstructions of the Crew Module (CM), Launch Abort System (LAS), and Forward Bay Cover (FBC). II. BET analysis was started immediately following the PA-1 mission and was completed in September 2010. a) Quick-look version of BET released 5/25/2010: initial repackaging of SIGI data. b) Preliminary version of BET released 7/6/2010: first blended solution using available sources of external measurements. c) Final version of BET released 9/1/2010: final blended solution using all available sources of data.

  7. Estimated generic prices of cancer medicines deemed cost-ineffective in England: a cost estimation analysis

    PubMed Central

    Hill, Andrew; Redd, Christopher; Gotham, Dzintars; Erbacher, Isabelle; Meldrum, Jonathan; Harada, Ryo

    2017-01-01

    Objectives The aim of this study was to estimate lowest possible treatment costs for four novel cancer drugs, hypothesising that generic manufacturing could significantly reduce treatment costs. Setting This research was carried out in a non-clinical research setting using secondary data. Participants There were no human participants in the study. Four drugs were selected for the study: bortezomib, dasatinib, everolimus and gefitinib. These medications were selected according to their clinical importance, novel pharmaceutical actions and the availability of generic price data. Primary and secondary outcome measures Target costs for treatment were to be generated for each indication for each treatment. The primary outcome measure was the target cost according to a production cost calculation algorithm. The secondary outcome measure was the target cost as the lowest available generic price; this was necessary where export data were not available to generate an estimate from our cost calculation algorithm. Other outcomes included patent expiry dates and total eligible treatment populations. Results Target prices were £411 per cycle for bortezomib, £9 per month for dasatinib, £852 per month for everolimus and £10 per month for gefitinib. Compared with current list prices in England, these target prices would represent reductions of 74–99.6%. Patent expiry dates were bortezomib 2014–22, dasatinib 2020–26, everolimus 2019–25 and gefitinib 2017. The total global eligible treatment population in 1 year is 769 736. Conclusions Our findings demonstrate that affordable drug treatment costs are possible for novel cancer drugs, suggesting that new therapeutic options can be made available to patients and doctors worldwide. Assessing treatment cost estimations alongside cost-effectiveness evaluations is an important area of future research. PMID:28110283

  8. Estimation of photosynthetically available radiation (PAR) from OCEANSAT-I OCM using a simple atmospheric radiative transfer model

    NASA Astrophysics Data System (ADS)

    Tripathy, Madhumita; Raman, Mini; Chauhan, Prakash

    2015-10-01

    Photosynthetically available radiation (PAR) is an important variable for radiation budget studies and for marine and terrestrial ecosystem models. OCEANSAT-1 Ocean Color Monitor (OCM) PAR was estimated using two different methods under both clear and cloudy sky conditions. In the first approach, aerosol optical depth (AOD) and cloud optical depth (COD) were estimated from OCEANSAT-1 OCM TOA (top-of-atmosphere) radiance data on a pixel-by-pixel basis, and PAR was estimated from the extraterrestrial solar flux for fifteen spectral bands using a radiative transfer model. The second approach used TOA radiances measured by OCM in the PAR spectral range to compute PAR, and also included surface albedo and cloud albedo as inputs. Comparison of OCEANSAT-1 OCM noon PAR with in situ measured PAR shows that the root mean square difference on daily time scales was 5.82% for method I and 7.24% for method II. The results indicate that the methodology adopted to estimate PAR from OCEANSAT-1 OCM can produce reasonably accurate PAR estimates over the tropical Indian Ocean region. This approach can be extended to OCEANSAT-2 OCM and future OCEANSAT-3 OCM data for operational estimation of PAR for regional marine ecosystem applications.

  9. Joint inversion of acoustic and resistivity data for the estimation of gas hydrate concentration

    USGS Publications Warehouse

    Lee, Myung W.

    2002-01-01

    Downhole log measurements, such as acoustic or electrical resistivity logs, are frequently used to estimate in situ gas hydrate concentrations in the pore space of sedimentary rocks. Usually the gas hydrate concentration is estimated separately from each log measurement. However, the measurements are related to each other through the gas hydrate concentration, so the concentration can be estimated by jointly inverting the available logs. Because the magnitudes of acoustic slowness and resistivity values differ by more than an order of magnitude, a least-squares method weighted by the inverse of the observed values is attempted. Estimating the resistivity of connate water and the gas hydrate concentration simultaneously is problematic, because the resistivity of connate water is independent of acoustics; to overcome this problem, a coupling constant is introduced into the Jacobian matrix. When different logs are used to estimate gas hydrate concentration, a joint inversion of the measurements is preferred to averaging the individual inversion results.
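
    A toy version of the weighting idea, with invented forward models standing in for Lee's actual rock-physics relations, shows why inverse-of-observation weights matter when slowness and resistivity differ by orders of magnitude:

```python
import numpy as np

# Hypothetical forward models (assumed for illustration only): gas hydrate
# saturation S shortens acoustic slowness linearly and raises resistivity
# in an Archie-like fashion.
def f(S):
    return np.array([200.0 - 80.0*S,        # slowness, us/ft
                     1.5 / (1.0 - S)**2])   # resistivity, ohm-m

def J(S):
    return np.array([-80.0, 3.0 / (1.0 - S)**3])

d = f(0.4)                 # synthetic "log" readings at S_true = 0.4
w = 1.0 / d                # weight each datum by the inverse of its value,
                           # so the ~50x magnitude gap does not dominate
S = 0.1                    # starting guess
for _ in range(20):        # weighted Gauss-Newton on the single unknown
    r = w * (d - f(S))
    g = w * J(S)
    S += (g @ r) / (g @ g)
print(S)                   # converges to ~0.4
```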

  10. PROcEED: Probabilistic reverse dosimetry approaches for estimating exposure distributions

    EPA Science Inventory

    As increasing amounts of biomonitoring survey data become available, a new discipline focused on converting such data into estimates of chemical exposures has developed. Reverse dosimetry uses a pharmacokinetic model along with measured biomarker concentrations to determine the p...

  11. Forensic dental age estimation by measuring root dentin translucency area using a new digital technique.

    PubMed

    Acharya, Ashith B

    2014-05-01

    Dentin translucency measurement is an easy yet relatively accurate approach to postmortem age estimation. Translucency area represents a two-dimensional change and may reflect age variations better than length. Manually measuring area is challenging and this paper proposes a new digital method using commercially available computer hardware and software. Area and length were measured on 100 tooth sections (age range, 19-82 years) of 250 μm thickness. Regression analysis revealed lower standard error of estimate and higher correlation with age for length than for area (R = 0.62 vs. 0.60). However, test of regression formulae on a control sample (n = 33, 21-85 years) showed smaller mean absolute difference (8.3 vs. 8.8 years) and greater frequency of smaller errors (73% vs. 67% age estimates ≤ ± 10 years) for area than for length. These suggest that digital area measurements of root translucency may be used as an alternative to length in forensic age estimation. © 2014 American Academy of Forensic Sciences.
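
    The regression workflow behind these statistics can be sketched with synthetic numbers (the data below are simulated, not the study's tooth sections): fit age on translucency area by least squares, then evaluate the formula on a held-out control sample using the same summary statistics reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
area = rng.uniform(2.0, 25.0, 100)               # mm^2, training sample (invented)
age = 20.0 + 2.4*area + rng.normal(0, 8.0, 100)  # years (invented relation)

b, a = np.polyfit(area, age, 1)                  # slope, intercept
pred = a + b*area
see = np.sqrt(np.sum((age - pred)**2) / (len(age) - 2))  # standard error of estimate
r = np.corrcoef(area, age)[0, 1]

# control sample: mean absolute difference and share of errors <= 10 years
area_c = rng.uniform(2.0, 25.0, 33)
age_c = 20.0 + 2.4*area_c + rng.normal(0, 8.0, 33)
err = np.abs(age_c - (a + b*area_c))
mad = err.mean()
within10 = np.mean(err <= 10.0)
print(round(see, 1), round(r, 2), round(mad, 1), round(within10, 2))
```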

  12. Use of NEXRAD radar-based observations for quality control of in-situ rain gauge measurements

    NASA Astrophysics Data System (ADS)

    Nelson, B. R.; Prat, O.; Leeper, R.

    2017-12-01

    Rain gauge quality control is an often overlooked but important step in archiving historical precipitation estimates. We investigate the possibilities that exist for using ground-based radar networks for quality control of rain gauge measurements. This process includes point-to-pixel comparisons of the rain gauge measurements with NEXRAD observations. Two NEXRAD-based data sets are used for reference: the NCEP Stage IV and the NWS MRMS gridded data sets. The NCEP Stage IV data set is available at 4-km hourly resolution for the period 2002-present and includes the radar-gauge bias-adjusted precipitation estimate. The NWS MRMS data set includes several variables such as reflectivity, radar-only estimates, a precipitation flag, and radar-gauge bias-adjusted precipitation estimates. The latter product provides much more information for quality control, such as identification of precipitation type, storm type, and Z-R relation; in addition, some of the variables are available at the 5-minute scale. The rain gauge networks investigated are the Climate Reference Network (CRN), the Fischer-Porter Hourly Precipitation Database (HPD), and the Hydrometeorological Automated Data System (HADS). The CRN network is available at the 5-minute scale, the HPD network at the 15-minute and hourly scales, and HADS at the hourly scale. The varying scales present challenges for comparison; however, given the higher-resolution radar-based products, we identify an optimal combination of rain gauge observations that can be compared to the radar-based information. The quality control process focuses on identifying faulty gauges in direct comparison, while a deeper investigation focuses on event-based differences from light rain to extreme storms.
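
    The point-to-pixel step reduces to snapping each gauge to its nearest grid cell and comparing accumulations; a minimal sketch with an invented grid, gauge list, and threshold:

```python
import numpy as np

# All coordinates, accumulations and the flagging threshold below are
# invented for illustration; only the grid spacing (~4 km, Stage IV-like)
# follows the abstract.
lats = np.arange(35.0, 36.0, 0.04)
lons = np.arange(-90.0, -89.0, 0.04)
radar = np.random.default_rng(0).gamma(2.0, 1.5, (lats.size, lons.size))  # mm

def nearest_pixel(lat, lon):
    # snap a gauge location to the nearest grid cell (point-to-pixel)
    return np.abs(lats - lat).argmin(), np.abs(lons - lon).argmin()

gauges = [("g1", 35.31, -89.63, 4.6), ("g2", 35.77, -89.15, 30.0)]
for name, lat, lon, accum in gauges:
    i, j = nearest_pixel(lat, lon)
    bias = accum - radar[i, j]
    flagged = abs(bias) > 10.0          # crude single-event threshold
    print(name, round(bias, 2), flagged)
```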

  13. An inventory of bispectrum estimators for redshift space distortions

    NASA Astrophysics Data System (ADS)

    Regan, Donough

    2017-12-01

    In order to best improve constraints on cosmological parameters and on models of modified gravity using current and future galaxy surveys, it is necessary to maximally exploit the available data. As redshift-space distortions mean that statistical translation invariance is broken for galaxy observations, this will require measurement of the monopole, quadrupole and hexadecapole of not just the galaxy power spectrum, but also the galaxy bispectrum. A 2015 paper by Scoccimarro demonstrated how the standard bispectrum estimator may be expressed in terms of Fast Fourier Transforms (FFTs) to afford an extremely efficient algorithm, allowing the bispectrum multipoles on all scales and triangle shapes to be measured in a time comparable to that of the power spectrum. In this paper we present a suite of alternative proxies for measuring the three-point correlation multipoles. In particular, we describe a modal (or plane wave) decomposition to capture the information in each multipole in a series of basis coefficients, and we describe three compressed estimators formed using the skew-spectrum, the line correlation function and the integrated bispectrum, respectively. As well as each estimator offering a different measurement channel, and thereby a robustness check, it is expected that some (especially the modal estimator) will offer a vast data compression, and hence a much reduced covariance matrix. This compression may be vital to reduce the computational load involved in extracting the available three-point information.

  14. 40 CFR 98.195 - Procedures for estimating missing data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... estimating missing data. For the procedure in § 98.193(b)(1), a complete record of all measured parameters... available process data or data used for accounting purposes. (b) For missing values related to the CaO and...

  15. 40 CFR 98.195 - Procedures for estimating missing data.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... estimating missing data. For the procedure in § 98.193(b)(1), a complete record of all measured parameters... available process data or data used for accounting purposes. (b) For missing values related to the CaO and...

  16. 40 CFR 98.195 - Procedures for estimating missing data.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... estimating missing data. For the procedure in § 98.193(b)(1), a complete record of all measured parameters... available process data or data used for accounting purposes. (b) For missing values related to the CaO and...

  17. Minimax estimation of qubit states with Bures risk

    NASA Astrophysics Data System (ADS)

    Acharya, Anirudh; Guţă, Mădălin

    2018-04-01

    The central problem of quantum statistics is to devise measurement schemes for the estimation of an unknown state, given an ensemble of n independent identically prepared systems. For locally quadratic loss functions, the risk of standard procedures has the usual scaling of 1/n. However, it has been noticed that for fidelity-based metrics such as the Bures distance, the risk of conventional (non-adaptive) qubit tomography schemes scales as 1/√n for states close to the boundary of the Bloch sphere. Several proposed estimators appear to improve this scaling, and our goal is to analyse the problem from the perspective of the maximum risk over all states. We propose qubit estimation strategies, based on separate adaptive measurements and on collective measurements, that achieve 1/n scaling of the maximum Bures risk. The estimator involving local measurements uses a fixed fraction of the available resource n to estimate the Bloch vector direction; the length of the Bloch vector is then estimated from the remaining copies by measuring in the estimator eigenbasis. The estimator based on collective measurements uses local asymptotic normality techniques, which allow us to derive upper and lower bounds on its maximum Bures risk. We also discuss how to construct a minimax optimal estimator in this setup. Finally, we consider the quantum relative entropy and show that the risk of the estimator based on collective measurements achieves a rate O(n⁻¹ log n) under this loss function. Furthermore, we show that no estimator can achieve faster rates, in particular the 'standard' rate n⁻¹.

  18. Estimating Blade Section Airloads from Blade Leading-Edge Pressure Measurements

    NASA Technical Reports Server (NTRS)

    vanAken, Johannes M.

    2003-01-01

    The Tilt-Rotor Aeroacoustic Model (TRAM) test in the Duitse-Nederlandse Wind (DNW) Tunnel acquired blade pressure data for forward-flight test conditions of a tiltrotor in helicopter mode. Chordwise pressure data at seven radial locations were integrated to obtain the blade section normal force. The present investigation evaluates the use of linear regression analysis and of neural networks in estimating the blade section normal force coefficient from a limited number of blade leading-edge pressure measurements and representative operating conditions. These models are subsequently used to estimate the airloads at intermediate radial locations where only blade pressure measurements at the 3.5% chordwise stations are available.

  19. Estimating water supply arsenic levels in the New England bladder cancer study

    USGS Publications Warehouse

    Nuckols, John R.; Beane Freeman, Laura E.; Lubin, Jay H.; Airola, Matthew S.; Baris, Dalsu; Ayotte, Joseph D.; Taylor, Anne; Paulu, Chris; Karagas, Margaret R.; Colt, Joanne; Ward, Mary H.; Huang, An-Tsun; Bress, William; Cherala, Sai; Silverman, Debra T.; Cantor, Kenneth P.

    2011-01-01

    Conclusions: We used a different validation procedure for each of the three methods, and found our estimated levels to be comparable with available measured concentrations. This methodology allowed us to calculate potential drinking water exposure over long periods.

  20. Global map of heat flow on a 2 degree grid - digitally available

    NASA Astrophysics Data System (ADS)

    Davies, J. Huw

    2014-05-01

    A global map of surface heat flow is developed on a 2° by 2° equal-area grid, and is made available digitally. It is based on a global heat flow data set of over 38,000 measurements, very similar to that used in Davies & Davies (2010). The map consists of three components. Firstly, in regions of young ocean crust (<67.7 Ma) the model estimate uses a half-space conduction model based on the age of the oceanic crust, with the parameters of Jaupart et al. (2007); this is done since raw measurements in young crust are frequently influenced by significant hydrothermal circulation. Secondly, in other regions of data coverage the estimate is based on the data measurements. At the map resolution these two categories (young ocean, data covered) cover 65% of Earth's surface. This estimate has been developed in two ways, one using the mean value and the other the median, the median being less sensitive to outliers. Thirdly, for all other regions the estimate is based on the assumption that there is a correlation between heat flow and geology, using the CCGM (2000) digital geology map. This assumption is assessed, and the correlation is found to provide a minor improvement over assuming that heat flow is represented by the global average. The estimate for Antarctica is guided by proxy measurements. All the work is undertaken using GIS methods, and estimates are made of the errors for all components. The results have been made available as digital files, including shapefiles and tab-delimited and CSV ASCII files, on both the equal-area grid and an equal-longitude grid. The map has been published (Davies, 2013); the digital files are available in the supplementary information of that publication.
    References: Commission for the Geological Map of the World (2000), Geological Map of the World at 1:25000000, UNESCO/CCGM, Paris. Davies, J. H. (2013), A global map of solid Earth surface heat flow, Geochemistry, Geophysics, Geosystems, 14, 4608-4622, doi:10.1002/ggge.20271. Davies, J. H. & Davies, D. R. (2010), Earth's surface heat flux, Solid Earth, 1, 5-24, www.solid-earth.net/1/5/2010/. Jaupart, C., Labrosse, S. & Mareschal, J.-C. (2007), Temperatures, heat and energy in the mantle of the Earth, in Treatise on Geophysics, v. 7, Mantle Convection, ed. D. Bercovici, 253-303, Elsevier, Amsterdam.
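
    The young-ocean component rests on half-space conduction, under which heat flow falls off as the inverse square root of crustal age; a sketch, treating the calibration constant as an assumption in the spirit of Jaupart et al. (2007) rather than the map's exact value:

```python
import numpy as np

# Half-space cooling: q(t) = C / sqrt(t). C ~ 490 mW m^-2 Myr^(1/2) is a
# commonly quoted calibration; the exact constant used for the map is an
# assumption here, not taken from the paper.
def halfspace_heat_flow(age_myr, C=490.0):
    age = np.asarray(age_myr, dtype=float)
    return C / np.sqrt(age)

ages = np.array([1.0, 10.0, 67.7])      # Ma; 67.7 Ma is the map's cutoff
print(halfspace_heat_flow(ages))        # mW/m^2, decreasing with age
```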

  1. Estimating solar radiation for plant simulation models

    NASA Technical Reports Server (NTRS)

    Hodges, T.; French, V.; Leduc, S.

    1985-01-01

    Five algorithms producing daily solar radiation surrogates from daily temperatures and rainfall were evaluated using measured solar radiation data for seven U.S. locations. The algorithms were compared both in terms of the accuracy of their daily solar radiation estimates and in terms of their response when used in a plant growth simulation model (CERES-Wheat). Requirements on the accuracy of solar radiation for plant growth simulation models are discussed. One algorithm is recommended as best suited for use in these models when neither measured nor satellite-estimated solar radiation values are available.
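
    One representative temperature-based surrogate, the Hargreaves-Samani form (not necessarily one of the five algorithms the paper evaluated), illustrates how such algorithms work: daily solar radiation is estimated from the diurnal temperature range and the extraterrestrial radiation Ra.

```python
import math

# Rs = kRs * sqrt(Tmax - Tmin) * Ra; kRs ~0.16 for interior sites,
# ~0.19 for coastal sites (generic values, not from the paper).
def hargreaves_rs(tmax_c, tmin_c, ra_mj, k_rs=0.16):
    return k_rs * math.sqrt(tmax_c - tmin_c) * ra_mj

rs = hargreaves_rs(tmax_c=31.0, tmin_c=17.0, ra_mj=40.0)  # invented inputs
print(round(rs, 1))   # ~23.9 MJ m^-2 day^-1
```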

  2. Accommodating Sensor Bias in MRAC for State Tracking

    NASA Technical Reports Server (NTRS)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    The problem of accommodating unknown sensor bias is considered in a direct model reference adaptive control (MRAC) setting for state tracking using state feedback. Sensor faults can occur during operation, and if the biased state measurements are directly used with a standard MRAC control law, neither closed-loop signal boundedness, nor asymptotic tracking can be guaranteed and the resulting tracking errors may be unbounded or unacceptably large. A modified MRAC law is proposed, which combines a bias estimator with control gain adaptation, and it is shown that signal boundedness can be accomplished, although the tracking error may not go to zero. Further, for the case wherein an asymptotically stable sensor bias estimator is available, an MRAC control law is proposed to accomplish asymptotic tracking and signal boundedness. Such a sensor bias estimator can be designed if additional sensor measurements are available, as illustrated for the case wherein bias is present in the rate gyro and airspeed measurements. Numerical example results are presented to illustrate each of the schemes.

  3. Robust sensor fusion of unobtrusively measured heart rate.

    PubMed

    Wartzek, Tobias; Brüser, Christoph; Walter, Marian; Leonhardt, Steffen

    2014-03-01

    Contactless vital sign measurement technologies often suffer from severe motion artifacts and periods in which no signal is available. However, using several identical or physically different sensors, redundancy can be exploited to decrease the error in noncontact heart rate estimation while increasing the time period during which reliable data are available. In this paper, we show for the first time two major results for contactless heart rate measurements deduced from a capacitive ECG and optical pulse signals. First, artifact detection is an essential preprocessing step for reliable fusion. Second, the robust but computationally efficient median already provides good results; however, a Bayesian approach with a short-time estimate of the variance achieves the best results in terms of both the difference from the reference heart rate and temporal coverage. In this paper, six sensor signals were used, and coverage increased from 0-90% to 80-94%, while the difference between the estimated heart rate and the gold standard was less than ±2 BPM.
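
    The two fusion rules compared can be sketched as follows, with invented channel values; inverse-variance weighting stands in here for the paper's Bayesian short-time-variance approach, and the artifact check runs first, as the paper requires.

```python
import numpy as np

# Six per-sensor heart-rate estimates with short-time variances (all values
# invented). Channels 3 and 5 are corrupted by motion artifacts.
hr = np.array([71.8, 72.4, 55.0, 72.1, 90.3, 71.5])   # BPM
var = np.array([0.4, 0.6, 25.0, 0.5, 30.0, 0.8])

ok = var < 5.0                         # (a) artifact detection first
median_fused = np.median(hr[ok])       # (b) robust median fusion
w = 1.0 / var[ok]                      # (c) inverse-variance weighting
ivw_fused = np.sum(w * hr[ok]) / np.sum(w)
print(round(median_fused, 2), round(ivw_fused, 2))   # 71.95 71.97
```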

  4. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative–quantitative modeling

    PubMed Central

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-01-01

    Summary: Often, competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering the available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. These tasks are complicated, however, because experimental data are typically sparse and uncertain, and are frequently available only in the form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MATLAB-based tool for guaranteed model invalidation and state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated, and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates of invalidity. Availability: ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/ Contact: stefan.streif@ovgu.de PMID:22451270

  5. Applicability of models to estimate traffic noise for urban roads.

    PubMed

    Melo, Ricardo A; Pimentel, Roberto L; Lacerda, Diego M; Silva, Wekisley M

    2015-01-01

    Traffic noise is a highly relevant environmental impact in cities, and models to estimate traffic noise can be useful tools to guide mitigation measures. In this paper, the applicability of models to estimate noise levels produced by a continuous flow of vehicles on urban roads is investigated. The aim is to identify which models are most appropriate for estimating traffic noise in urban areas, since several of the available models were conceived to estimate noise from highway traffic. First, measurements of traffic noise, vehicle count and speed were carried out on five arterial urban roads of a Brazilian city. Together with geometric measurements of lane width and of the distance from the noise meter to the lanes, these data were input into several models to estimate traffic noise. The predicted noise levels were then compared to the respective measured counterparts for each road investigated. In addition, a chart showing mean differences in noise between estimations and measurements is presented to evaluate the overall performance of the models. Measured Leq values varied from 69 to 79 dB(A) for traffic flows varying from 1618 to 5220 vehicles/h. Mean noise level differences between estimations and measurements for all urban roads investigated ranged from -3.5 to 5.5 dB(A). According to the results, deficiencies of some models are discussed, while other models are identified as applicable to noise estimation on urban roads under continuous-flow conditions. Key issues in applying such models to urban roads are highlighted.
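
    Comparisons of this kind rest on the energy-equivalent level Leq, which averages sound energy rather than decibel values; a minimal sketch of its computation from sampled levels (the readings are invented):

```python
import math

# Leq = 10*log10( mean of 10^(Li/10) ) over the sampled levels Li:
# decibels are converted to energy, averaged, and converted back.
def leq(levels_dba):
    return 10.0 * math.log10(sum(10.0**(l/10.0) for l in levels_dba)
                             / len(levels_dba))

samples = [69.0, 72.0, 75.0, 79.0]   # invented short-term dB(A) readings
print(round(leq(samples), 1))        # 75.3, pulled toward the loudest samples
```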

  6. Classical and Bayesian Seismic Yield Estimation: The 1998 Indian and Pakistani Tests

    NASA Astrophysics Data System (ADS)

    Shumway, R. H.

    2001-10-01

    The nuclear tests in May 1998 in India and Pakistan have stimulated a renewed interest in yield estimation based on limited data from uncalibrated test sites. We study here the problem of estimating yields using classical and Bayesian methods developed by Shumway (1992), utilizing calibration data from the Semipalatinsk test site and measured magnitudes for the 1998 Indian and Pakistani tests given by Murphy (1998). Calibration is done using multivariate classical or Bayesian linear regression, depending on the availability of measured magnitude-yield data and prior information. Confidence intervals for the classical approach are derived by applying an extension of Fieller's method suggested by Brown (1982). In the case where prior information is available, the posterior predictive magnitude densities are inverted to give posterior intervals for yield. Intervals obtained using the joint distribution of magnitudes are comparable to the single-magnitude estimates produced by Murphy (1998) and reinforce the conclusion that the announced yields of the Indian and Pakistani tests were too high.

  8. Global civil aviation black carbon emissions.

    PubMed

    Stettler, Marc E J; Boies, Adam M; Petzold, Andreas; Barrett, Steven R H

    2013-09-17

    Aircraft black carbon (BC) emissions contribute to climate forcing, but few estimates of BC emitted by aircraft at cruise exist. For the majority of aircraft engines the only BC-related measurement available is the smoke number (SN), a filter-based optical method designed to measure near-ground plume visibility, not mass. While the first order approximation (FOA3) technique has been developed to estimate BC mass emissions normalized by fuel burn [EI(BC)] from SN, it is shown to underestimate EI(BC) by >90% in 35% of directly measured cases (R² = -0.10). As there are no plans to measure BC emissions from all existing certified engines, which will be in service for several decades, it is necessary to estimate EI(BC) for existing aircraft on the ground and at cruise. An alternative method, called FOX, that is independent of the SN is developed to estimate BC emissions. Estimates of EI(BC) at ground level are significantly improved (R² = 0.68), whereas estimates at cruise are within 30% of measurements. Implementing this approach for global civil aviation, estimated aircraft BC emissions are revised upward by a factor of ~3. Direct radiative forcing (RF) due to aviation BC emissions is estimated to be ~9.5 mW/m², equivalent to ~1/3 of the current RF due to aviation CO2 emissions.

  9. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

    Inverse probability weighting estimation has been widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis that ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimating the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
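
    The baseline estimator being extended here is standard inverse probability weighting; a toy version with a known propensity score (all data simulated, so no model-fitting step is needed) shows the estimator and the confounded naive contrast it corrects:

```python
import numpy as np

# ATE by IPW: E[T*Y/e(X)] - E[(1-T)*Y/(1-e(X))], with e(X) = P(T=1 | X)
# known here by construction. Everything below is simulated.
rng = np.random.default_rng(42)
n = 200_000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-x))          # true propensity score
t = rng.uniform(size=n) < e           # confounded treatment assignment
y = 2.0*t + x + rng.normal(size=n)    # true average treatment effect = 2

ate_ipw = np.mean(t*y/e) - np.mean((1-t)*y/(1-e))
naive = y[t].mean() - y[~t].mean()    # confounded difference in means
print(round(ate_ipw, 2), round(naive, 2))   # IPW ~2, naive biased upward
```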

  10. Comparison of techniques for estimating annual lake evaporation using climatological data

    USGS Publications Warehouse

    Andersen, M.E.; Jobson, H.E.

    1982-01-01

    Mean annual evaporation estimates were determined for 30 lakes by use of a numerical model (Morton, 1979) and by use of an evaporation map prepared by the U.S. Weather Service (Kohler et al., 1959). These estimates were compared to the reported value of evaporation determined from measurements on each lake. Various lengths of observation and methods of measurement were used among the 30 lakes. The evaporation map provides annual evaporation estimates which are more consistent with observations than those determined by use of the numerical model. The map cannot provide monthly estimates, however, and is only available for the contiguous United States. The numerical model can provide monthly estimates for shallow lakes and is based on monthly observations of temperature, humidity, and sunshine duration.

  11. Estimation of height-dependent solar irradiation and application to the solar climate of Iran

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samimi, J.

    1994-05-01

    An explicitly height-dependent model has been used to estimate the solar irradiation over Iran, which has a vast range of altitudes. The parameters of the model were chosen on general grounds and not by best fitting to any of the available measured irradiation data in Iran. The estimated global solar irradiation on the horizontal surface shows very good agreement (4.1% deviation) with the 17-year-long pyranometric measurements in Tehran, and is also in good agreement with other, shorter available measured data. The entire database of the Iranian meteorological stations has been used to establish a simple relation between sunshine duration records and cloud cover reports, which can be utilized in solar energy estimation for sites with no sunshine duration recorders. Clear-sky maps of Iran for direct solar irradiation on tracking, horizontal, and south-facing vertical planes are presented. The global solar irradiation map for a horizontal surface with cloudiness is zoned into four irradiation zones. In about four-fifths of the land in Iran, the annual-mean daily global solar irradiation on a horizontal surface ranges from 4.5 to 5.4 kWh/m².
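
    The sunshine-cloudiness link exploited for such estimations is commonly written in the Angstrom-Prescott form, relating global irradiation G to extraterrestrial irradiation G0 through the relative sunshine duration S/S0; the sketch below uses generic FAO-56 default coefficients, not Samimi's fitted values:

```python
# Angstrom-Prescott relation: G/G0 = a + b*(S/S0). The defaults a = 0.25,
# b = 0.50 are generic textbook values (an assumption, not from the paper).
def angstrom_prescott(g0, s_over_s0, a=0.25, b=0.50):
    return g0 * (a + b * s_over_s0)

g = angstrom_prescott(g0=10.0, s_over_s0=0.8)   # kWh/m^2/day, invented inputs
print(round(g, 2))   # 6.5
```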

  12. PFA toolbox: a MATLAB tool for Metabolic Flux Analysis.

    PubMed

    Morales, Yeimy; Bosque, Gabriel; Vehí, Josep; Picó, Jesús; Llaneras, Francisco

    2016-07-11

Metabolic Flux Analysis (MFA) is a methodology that has been successfully applied to estimate metabolic fluxes in living cells. However, traditional frameworks based on this approach have some limitations, particularly when measurements are scarce and imprecise, which is very common in industrial environments. The PFA Toolbox can be used to address those scenarios. Here we present the PFA (Possibilistic Flux Analysis) Toolbox for MATLAB, which simplifies the use of Interval and Possibilistic Metabolic Flux Analysis. The main features of the PFA Toolbox are the following: (a) It provides reliable MFA estimations in scenarios where only a few fluxes can be measured or those available are imprecise. (b) It provides tools to easily plot the results as interval estimates or flux distributions. (c) It is composed of simple functions that MATLAB users can apply in flexible ways. (d) It includes a Graphical User Interface (GUI), which provides a visual representation of the measurements and their uncertainty. (e) It can use stoichiometric models in COBRA format. In addition, the PFA Toolbox includes a User's Guide with a thorough description of its functions and several examples. The PFA Toolbox for MATLAB is a freely available toolbox that is able to perform Interval and Possibilistic MFA estimations.
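The interval estimation idea behind such toolboxes can be sketched outside MATLAB as well: with a stoichiometric steady-state constraint and interval bounds on the few measured fluxes, the feasible range of any unmeasured flux follows from two small linear programs. The network, numbers, and `flux_interval` function below are hypothetical illustrations, not part of the PFA Toolbox.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: A -> B with a branch, B -> C and B -> D.
# Steady-state balance on internal metabolite B: v_AB - v_BC - v_BD = 0.
# Flux vector v = [v_AB, v_BC, v_BD].
S = np.array([[1.0, -1.0, -1.0]])

# Suppose only v_AB is measured, imprecisely: 10 +/- 1 (hypothetical numbers).
bounds = [(9.0, 11.0),   # measured v_AB interval
          (0.0, None),   # v_BC unmeasured, irreversible
          (0.0, None)]   # v_BD unmeasured, irreversible

def flux_interval(j, n=3):
    """Min and max of flux j consistent with S v = 0 and the bounds."""
    lo = linprog(c=np.eye(n)[j], A_eq=S, b_eq=[0.0], bounds=bounds)
    hi = linprog(c=-np.eye(n)[j], A_eq=S, b_eq=[0.0], bounds=bounds)
    return lo.fun, -hi.fun

print(flux_interval(1))  # v_BC is only known to lie in [0, 11]
```

The wide resulting interval is the point of the approach: it reports honestly how little the scarce measurements pin down, instead of returning a single arbitrary flux value.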

  13. A comparison between one year of daily global irradiation from ground-based measurements versus meteosat images from seven locations in Tunisia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Djemaa, A.B.; Delorme, C.

    1992-01-01

Three numerical images per day from METEOSAT B2 were processed over a period of 12 months, from October 1985 to September 1986, to estimate the daily values of available solar radiation in Tunisia. The methodology used, GISTEL, applied to images from the 'visible' channel of METEOSAT, is described. Results are compared with measured radiation values from seven stations of the Institut de la Meteorologie de Tunisie. Among more than 2,200 measured-estimated daily pairs, a high percentage, 89%, show a relative error of ±10%. Many figures concerning Sidi-Bou-Said, Kairouan, Thala, and Gafsa are presented to show the capability of GISTEL to map the daily available solar radiation with a sufficient spatial resolution in countries where radiation measurements are too scarce.

  14. Reconciling CloudSat and GPM Estimates of Falling Snow

    NASA Technical Reports Server (NTRS)

    Munchak, S. Joseph; Jackson, Gail Skofronick; Kulie, Mark; Wood, Norm; Miliani, Lisa

    2017-01-01

    Satellite-based estimates of falling snow have been provided by CloudSat (launched in 2006) and the Global Precipitation Measurement (GPM) core satellite (launched in 2014). The CloudSat estimates are derived from W-band radar measurements whereas the GPM estimates are derived from its scanning Ku- and Ka-band Dual-Frequency Precipitation Radar (DPR) and 13-channel microwave imager (GMI). Each platform has advantages and disadvantages: CloudSat has higher resolution (approximately 1.5 km) and much better sensitivity (-28 dBZ), but poorer sampling (nadir-only and daytime-only since 2011) and the reflectivity-snowfall (Z-S) relationship is poorly constrained with single-frequency measurements. Meanwhile, DPR suffers from relatively poor resolution (5 km) and sensitivity (approximately 13 dBZ), but has cross-track scanning capability to cover a 245-km swath. Additionally, where Ku and Ka measurements are available, the conversion of reflectivity to snowfall rate is better-constrained than with a single frequency.

  15. Estimation of unsteady lift on a pitching airfoil from wake velocity surveys

    NASA Technical Reports Server (NTRS)

    Zaman, K. B. M. Q.; Panda, J.; Rumsey, C. L.

    1993-01-01

    The results of a joint experimental and computational study on the flowfield over a periodically pitched NACA0012 airfoil, and the resultant lift variation, are reported in this paper. The lift variation over a cycle of oscillation, and hence the lift hysteresis loop, is estimated from the velocity distribution in the wake measured or computed for successive phases of the cycle. Experimentally, the estimated lift hysteresis loops are compared with available data from the literature as well as with limited force balance measurements. Computationally, the estimated lift variations are compared with the corresponding variation obtained from the surface pressure distribution. Four analytical formulations for the lift estimation from wake surveys are considered and relative successes of the four are discussed.

  16. Comparison of methods for estimating evapotranspiration in a small rangeland catchment

    USDA-ARS?s Scientific Manuscript database

    Evapotranspiration (ET) was quantified for two rangeland vegetation types, aspen and sagebrush/grassland, over an eight year study period by comparing several approaches for estimating ET: eddy covariance systems (EC, available for only six years); soil water storage loss measured by time domain ref...

  17. A mathematical method for verifying the validity of measured information about the flows of energy resources based on the state estimation theory

    NASA Astrophysics Data System (ADS)

    Pazderin, A. V.; Sof'in, V. V.; Samoylenko, V. O.

    2015-11-01

Efforts aimed at improving energy efficiency in all branches of the fuel and energy complex should begin with setting up a high-tech automated system for monitoring and accounting energy resources. Malfunctions and failures in the measurement and information parts of this system may distort commercial measurements of energy resources and lead to financial risks for power supplying organizations. In addition, measurement errors may stem from intentional distortion of measurements to reduce payment for energy resources on the consumer's side, which leads to commercial losses of energy resources. The article presents a universal mathematical method for verifying the validity of measurement information in networks for transporting energy resources, such as electricity, heat, petroleum, and gas, based on the state estimation theory. The energy resource transportation network is represented by a graph whose nodes correspond to producers and consumers and whose branches stand for transportation mains (power lines, pipelines, and heat network elements). The main idea of state estimation is to obtain calculated analogs of the energy resource flows for all available measurements. Unlike "raw" measurements, which contain inaccuracies, the calculated flows of energy resources, called estimates, fully satisfy all the state equations describing the energy resource transportation network. The state equations written in terms of the calculated estimates are free from residuals. The difference between a measurement and its calculated analog (estimate) is called, in estimation theory, the estimation remainder. Large values of the estimation remainders indicate high errors in particular energy resource measurements. By using the presented method it is possible to improve the validity of energy resource measurements, to estimate the observability of the transportation network, to eliminate imbalances in the measured energy resource flows, and to filter out invalid measurements at the data acquisition and processing stage of an automated energy resource monitoring and accounting system.
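The core computation described above — fitting estimates that exactly satisfy the network balance equations and then examining the remainders — can be sketched as a small equality-constrained weighted least-squares problem. The three-flux network and meter readings below are hypothetical.

```python
import numpy as np

# One supply node feeding two consumers; conservation: f1 - f2 - f3 = 0.
A = np.array([[1.0, -1.0, -1.0]])

# Raw meter readings (hypothetical): inflow 100, outflows 60 and 35,
# i.e. an imbalance of 5 units that the raw data cannot explain.
z = np.array([100.0, 60.0, 35.0])
W = np.diag([1.0, 1.0, 1.0])               # equal measurement weights

# Weighted least-squares estimates constrained to satisfy the balance:
# minimize (v - z)^T W (v - z) subject to A v = 0 (Lagrange solution).
Winv = np.linalg.inv(W)
lam = np.linalg.solve(A @ Winv @ A.T, A @ z)
v_hat = z - Winv @ A.T @ lam

residuals = z - v_hat                       # the "estimation remainders"
print(v_hat, residuals)
```

The estimates `v_hat` balance exactly, and the remainders spread the 5-unit imbalance across the meters; a single large remainder would flag the corresponding meter as suspect.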

  18. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2014-01-01

Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released into the atmosphere during the accident at the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such a critical context, where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, as the retrieved source term is very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide: activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The "blind" time intervals of the source term have also been strongly mitigated compared to the first estimations based only on activity concentration data.

  19. Evaluating MODIS satellite versus terrestrial data driven productivity estimates in Austria

    NASA Astrophysics Data System (ADS)

    Petritsch, R.; Boisvenue, C.; Pietsch, S. A.; Hasenauer, H.; Running, S. W.

    2009-04-01

Sensors such as the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite are developed for monitoring global and/or regional ecosystem fluxes like net primary production (NPP). Although these systems should allow us to assess carbon sequestration issues, forest management impacts, etc., relatively little is known about the consistency and accuracy of the resulting satellite-driven estimates versus production estimates driven from ground data. In this study we compare the following NPP estimation methods: (i) NPP estimates as derived from MODIS and available on the internet; (ii) estimates resulting from the off-line version of the MODIS algorithm; (iii) estimates using regional meteorological data within the off-line algorithm; (iv) NPP estimates from a species-specific biogeochemical ecosystem model adapted to Alpine conditions; and (v) NPP estimates calculated from individual tree measurements. Single-tree measurements were available from 624 forested sites across Austria, but only the data from 165 sample plots included all the information necessary to perform the comparison at plot level. To ensure independence of satellite-driven and ground-based predictions, only the latitude and longitude of each site were used to obtain MODIS estimates. Along with the comparison of the different methods, we discuss problems such as the differing dates of field campaigns (<1999) and acquisition of satellite images (2000-2005), or incompatible productivity definitions within the methods, and come up with a framework for combining terrestrial and satellite data based productivity estimates. On average, MODIS estimates agreed well with the output of the model's self-initialization (spin-up), and biomass increment calculated from tree measurements is not significantly different from model results; however, the correlation between satellite-derived and terrestrial estimates is relatively poor. The differing scales (9 km² for MODIS versus 1000 m² for the sample plots), together with the heterogeneous landscape, may explain the low correlation, particularly as the correlation increases when strongly fragmented sites are left out.

  20. Orion Exploration Flight Test-1 Post-Flight Navigation Performance Assessment Relative to the Best Estimated Trajectory

    NASA Technical Reports Server (NTRS)

    Gay, Robert S.; Holt, Greg N.; Zanetti, Renato

    2016-01-01

    This paper details the post-flight navigation performance assessment of the Orion Exploration Flight Test-1 (EFT-1). Results of each flight phase are presented: Ground Align, Ascent, Orbit, and Entry Descent and Landing. This study examines the on-board Kalman Filter uncertainty along with state deviations relative to the Best Estimated Trajectory (BET). Overall the results show that the Orion Navigation System performed as well or better than expected. Specifically, the Global Positioning System (GPS) measurement availability was significantly better than anticipated at high altitudes. In addition, attitude estimation via processing GPS measurements along with Inertial Measurement Unit (IMU) data performed very well and maintained good attitude throughout the mission.

  1. HOW WELL CAN YOU ESTIMATE LOW FLOW AND BANKFULL DISCHARGE FROM STREAM CHANNEL HABITAT DATA?

    EPA Science Inventory

    Modeled estimates of stream discharge are becoming more important because of reductions in the number of gauging stations and increases in flow alteration from land development and climate change. Field measurements of channel morphology are available at thousands of streams and...

  2. Estimation of thyroid doses received by the population of Belarus as a result of the Chernobyl accident

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavrilin, Y.; Khrouch, V.; Shinkarev, S.

Within weeks of the Chernobyl accident, about 300,000 measurements of human thyroidal iodine-131 content were conducted in the more contaminated areas of Belarus. Results of these and other measurements form the basis of thyroid-dose reconstruction for the residents. For Class 1 (measured doses), individual doses are estimated directly from measured thyroidal iodine content plus information on lifestyle and dietary habits. Such estimates are available for about 130,000 individuals from Gomel and Mogilev Oblasts and Minsk City. For Class 2 (passport doses), in every settlement with a sufficient number of residents with measured doses, individual thyroid-dose distributions were determined for several age groups and levels of milk consumption. A population of about 2.7 million resides in the passport settlements.

  3. Formulation and implementation of a practical algorithm for parameter estimation with process and measurement noise

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1980-01-01

    A new formulation is proposed for the problem of parameter estimation of dynamic systems with both process and measurement noise. The formulation gives estimates that are maximum likelihood asymptotically in time. The means used to overcome the difficulties encountered by previous formulations are discussed. It is then shown how the proposed formulation can be efficiently implemented in a computer program. A computer program using the proposed formulation is available in a form suitable for routine application. Examples with simulated and real data are given to illustrate that the program works well.

  4. Estimation of pyrethroid pesticide intake using regression ...

    EPA Pesticide Factsheets

Population-based estimates of pesticide intake are needed to characterize exposure for particular demographic groups based on their dietary behaviors. Regression modeling performed on measurements of selected pesticides in composited duplicate diet samples allowed (1) estimation of pesticide intakes for a defined demographic community, and (2) comparison of dietary pesticide intakes between the composite and individual samples. Extant databases were useful for assigning individual samples to composites, but they could not provide the breadth of information needed to ensure measurable levels in every composite. Composite sample measurements were found to be good predictors of pyrethroid pesticide levels in their individual sample constituents where sufficient measurements are available above the method detection limit. Statistical inference shows little evidence of differences between individual and composite measurements and suggests that regression modeling of food groups based on composite dietary samples may provide an effective tool for estimating dietary pesticide intake for a defined population. The research presented in the journal article will improve the community's ability to determine exposures through the dietary route with a less burdensome and costly method.

  5. On-line estimation and detection of abnormal substrate concentrations in WWTPs using a software sensor: a benchmark study.

    PubMed

    Benazzi, F; Gernaey, K V; Jeppsson, U; Katebi, R

    2007-08-01

In this paper, a new approach is proposed for on-line monitoring and detection of abnormal readily biodegradable substrate (S(s)) and slowly biodegradable substrate (X(s)) concentrations, caused for example by input of toxic loads from the sewer or by an influent substrate shock load. Considering that measurements of S(s) and X(s) concentrations are not available in real wastewater treatment plants, the S(s)/X(s) software sensor can activate an alarm with a response time of about 60 and 90 minutes, respectively, based on the dissolved oxygen measurement. The software sensor implementation is based on an extended Kalman filter observer, and disturbances are modelled using fast Fourier transform and spectrum analyses. Three case studies are described. The first illustrates the fast and accurate convergence of the extended Kalman filter algorithm, which is achieved in less than 2 hours. Furthermore, the difficulties of estimating X(s) when off-line analysis is not available are depicted, and the performance of the S(s)/X(s) software sensor when no measurements of S(s) and X(s) are available is illustrated. Estimation problems related to the death-regeneration concept of the activated sludge model no. 1 and possible applications of the software sensor in wastewater monitoring are discussed.
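A minimal sketch of the software-sensor idea, assuming a linear scalar analogue rather than the full extended Kalman filter with an activated sludge model: a hidden concentration is tracked from a noisy proxy measurement alone. All gains and noise levels below are illustrative, not taken from the benchmark study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden substrate-like state x evolves slowly; we only observe
# y = h*x + noise (e.g. a dissolved-oxygen-derived proxy signal).
a, h = 0.95, 2.0          # state transition and observation gains (assumed)
q, r = 0.01, 0.5          # process and measurement noise variances (assumed)

x_true, x_est, p = 5.0, 0.0, 10.0  # start the filter deliberately wrong
for _ in range(200):
    x_true = a * x_true + 0.25 + rng.normal(0, q ** 0.5)
    y = h * x_true + rng.normal(0, r ** 0.5)
    # predict
    x_est, p = a * x_est + 0.25, a * a * p + q
    # update with the measurement
    k = p * h / (h * h * p + r)
    x_est, p = x_est + k * (y - h * x_est), (1 - k * h) * p

print(abs(x_est - x_true))  # small: the hidden state is tracked
```

An alarm rule then reduces to thresholding `x_est` (or its innovation) against the normal operating range, which is how an abnormal-load detector can run on measurements that never observe the substrate directly.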

  6. Anthropic Correction of Information Estimates and Its Application to Neural Coding

    PubMed Central

    Gastpar, Michael C.; Gill, Patrick R.; Huth, Alexander G.; Theunissen, Frédéric E.

    2015-01-01

    Information theory has been used as an organizing principle in neuroscience for several decades. Estimates of the mutual information (MI) between signals acquired in neurophysiological experiments are believed to yield insights into the structure of the underlying information processing architectures. With the pervasive availability of recordings from many neurons, several information and redundancy measures have been proposed in the recent literature. A typical scenario is that only a small number of stimuli can be tested, while ample response data may be available for each of the tested stimuli. The resulting asymmetric information estimation problem is considered. It is shown that the direct plug-in information estimate has a negative bias. An anthropic correction is introduced that has a positive bias. These two complementary estimators and their combinations are natural candidates for information estimation in neuroscience. Tail and variance bounds are given for both estimates. The proposed information estimates are applied to the analysis of neural discrimination and redundancy in the avian auditory system. PMID:26900172
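The "direct plug-in" estimate discussed here is simply the mutual information of the empirical joint distribution. A minimal sketch (with illustrative count tables, not the avian auditory data):

```python
import numpy as np

def plugin_mi(counts):
    """Plug-in (maximum-likelihood) mutual information, in bits,
    from a joint count table over (stimulus, response)."""
    p = counts / counts.sum()
    ps = p.sum(axis=1, keepdims=True)   # stimulus marginal
    pr = p.sum(axis=0, keepdims=True)   # response marginal
    nz = p > 0                          # skip zero cells (0*log0 := 0)
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Perfectly dependent variables: MI equals the (uniform) stimulus entropy.
print(plugin_mi(np.array([[10, 0], [0, 10]])))   # 1.0 bit
# Independent variables: true MI is 0, and so is the plug-in value here.
print(plugin_mi(np.array([[5, 5], [5, 5]])))     # 0.0 bits
```

With finite samples the plug-in value is biased, which is exactly what motivates the complementary anthropic correction analysed in the paper.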

  7. Anthropic Correction of Information Estimates and Its Application to Neural Coding.

    PubMed

    Gastpar, Michael C; Gill, Patrick R; Huth, Alexander G; Theunissen, Frédéric E

    2010-02-01

    Information theory has been used as an organizing principle in neuroscience for several decades. Estimates of the mutual information (MI) between signals acquired in neurophysiological experiments are believed to yield insights into the structure of the underlying information processing architectures. With the pervasive availability of recordings from many neurons, several information and redundancy measures have been proposed in the recent literature. A typical scenario is that only a small number of stimuli can be tested, while ample response data may be available for each of the tested stimuli. The resulting asymmetric information estimation problem is considered. It is shown that the direct plug-in information estimate has a negative bias. An anthropic correction is introduced that has a positive bias. These two complementary estimators and their combinations are natural candidates for information estimation in neuroscience. Tail and variance bounds are given for both estimates. The proposed information estimates are applied to the analysis of neural discrimination and redundancy in the avian auditory system.

  8. Effects of measurement resolution on the analysis of temperature time series for stream-aquifer flux estimation

    NASA Astrophysics Data System (ADS)

    Soto-López, Carlos D.; Meixner, Thomas; Ferré, Ty P. A.

    2011-12-01

From its inception in the mid-1960s, the use of temperature time series (thermographs) to estimate vertical fluxes has found increasing use in the hydrologic community. Beginning in 2000, researchers have examined the impacts of measurement and parameter uncertainty on the estimates of vertical fluxes. To date, the effects of temperature measurement discretization (resolution), a characteristic of all digital temperature loggers, on the determination of vertical fluxes have not been considered. In this technical note we expand the analysis of recently published work to include the effects of temperature measurement resolution on estimates of vertical fluxes using temperature amplitude and phase shift information. We show that errors in thermal front velocity estimation introduced by discretizing thermographs differ when amplitude or phase shift data are used to estimate vertical fluxes. We also show that under similar circumstances sensor resolution limits the range over which vertical velocities are accurately reproduced more than uncertainty in temperature measurements, uncertainty in sensor separation distance, and uncertainty in the thermal diffusivity combined. These effects represent the baseline error present, and thus the best-case scenario, when discrete temperature measurements are used to infer vertical fluxes. The errors associated with measurement resolution can be minimized by using the highest-resolution sensors available, but thoughtful experimental design could allow users to select the most cost-effective temperature sensors to fit their measurement needs.
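The effect of logger resolution can be illustrated by quantizing a synthetic diurnal thermograph and recovering its amplitude from the fundamental Fourier component; the amplitude, sampling, and resolution values below are hypothetical, not those analysed in the note.

```python
import numpy as np

def amp_error(res, true_amp=0.3, n=1440):
    """Amplitude error (degC) after recording a diurnal sinusoid of
    amplitude true_amp with a logger of the given resolution (degC)."""
    t = np.arange(n) / n                           # one day, 1-min samples
    signal = 20.0 + true_amp * np.sin(2 * np.pi * t)
    recorded = np.round(signal / res) * res        # quantized thermograph
    # amplitude recovered from the fundamental Fourier component
    est_amp = 2 * np.abs(np.fft.rfft(recorded)[1]) / n
    return abs(est_amp - true_amp)

for res in (0.0625, 0.25):                         # two sensor resolutions
    print(res, amp_error(res))
```

A coarse logger flattens the small damped amplitude seen at depth, biasing any amplitude-ratio flux estimate; the error shrinks as the resolution becomes fine relative to the signal amplitude.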

  9. Stroke Volume estimation using aortic pressure measurements and aortic cross sectional area: Proof of concept.

    PubMed

    Kamoi, S; Pretty, C G; Chiew, Y S; Pironet, A; Davidson, S; Desaive, T; Shaw, G M; Chase, J G

    2015-08-01

Accurate Stroke Volume (SV) monitoring is essential for patients with cardiovascular dysfunction. However, direct SV measurements are not clinically feasible due to the highly invasive nature of the measurement devices, and current devices for indirect monitoring of SV have been shown to be inaccurate during sudden hemodynamic changes. This paper presents a novel SV estimation using readily available aortic pressure measurements and aortic cross sectional area, using data from a porcine experiment in which medical interventions such as fluid replacement, dobutamine infusions, and recruitment maneuvers induced SV changes in a pig with circulatory shock. Measurements of left ventricular volume, proximal aortic pressure, and descending aortic pressure waveforms were made simultaneously during the experiment. From the measured data, proximal aortic pressure was separated into reservoir and excess pressures. Beat-to-beat aortic characteristic impedance values were calculated using both aortic pressure measurements and an estimate of the aortic cross sectional area. SV was estimated using the calculated aortic characteristic impedance and the excess component of the proximal aortic pressure. The median difference between directly measured SV and estimated SV was -1.4 ml with 95% limits of agreement of ±6.6 ml. This method demonstrates that SV can be accurately captured beat-to-beat during sudden changes in hemodynamic state. This novel SV estimation could enable improved cardiac and circulatory treatment in the critical care environment by titrating treatment to its effect on SV.
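If, as assumed in this line of work, the excess component of proximal aortic pressure is proportional to aortic inflow through the characteristic impedance Zc, stroke volume follows from a single integral over the beat. The half-sine flow waveform and impedance value below are synthetic illustrations, not the porcine data.

```python
import numpy as np

fs = 1000                                    # sampling rate, Hz
t = np.arange(0, 0.3, 1 / fs)                # systolic ejection window, s
q_true = 300e-6 * np.sin(np.pi * t / 0.3)    # aortic flow, m^3/s (half-sine)
Zc = 0.06e9                                  # assumed characteristic impedance, Pa.s/m^3

# Excess pressure is modelled as Zc times flow, so integrating it and
# dividing by Zc recovers the ejected volume per beat.
p_excess = Zc * q_true                       # excess pressure, Pa
sv = np.sum(p_excess) * (1 / fs) / Zc        # stroke volume, m^3 (rectangle rule)
print(sv * 1e6, "ml")                        # about 57 ml for this waveform
```

In practice Zc must itself be estimated beat-to-beat from the pressure waveforms and the aortic cross sectional area, which is the part the paper's method supplies.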

  10. Control-enhanced multiparameter quantum estimation

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Yuan, Haidong

    2017-10-01

    Most studies in multiparameter estimation assume the dynamics is fixed and focus on identifying the optimal probe state and the optimal measurements. In practice, however, controls are usually available to alter the dynamics, which provides another degree of freedom. In this paper we employ optimal control methods, particularly the gradient ascent pulse engineering (GRAPE), to design optimal controls for the improvement of the precision limit in multiparameter estimation. We show that the controlled schemes are not only capable to provide a higher precision limit, but also have a higher stability to the inaccuracy of the time point performing the measurements. This high time stability will benefit the practical metrology, where it is hard to perform the measurement at a very accurate time point due to the response time of the measurement apparatus.

  11. Calibration and evaluation of the FAO56-Penman-Monteith, FAO24-radiation, and Priestley-Taylor reference evapotranspiration models using the spatially measured solar radiation across a large arid and semi-arid area in southern Iran

    NASA Astrophysics Data System (ADS)

    Didari, Shohreh; Ahmadi, Seyed Hamid

    2018-05-01

Crop evapotranspiration (ET) is one of the main components in calculating the water balance in agricultural, hydrological, environmental, and climatological studies. Solar radiation (Rs) supplies the available energy for ET, and therefore precise measurement of Rs is required for accurate ET estimation. However, measured Rs and ET are not available in many areas and must be estimated indirectly by empirical methods. The Angström-Prescott (AP) model is the most popular method for estimating Rs in areas with no measured data. In addition, locally calibrated AP coefficients are not yet available in many locations, and instead the default coefficients are used. In this study, we investigated different approaches for Rs and ET calculations. The daily measured Rs values at 14 stations across arid and semi-arid areas of Fars province in southern Iran were used to calibrate the coefficients of the AP model. Results revealed that the calibrated AP coefficients were very different from, and higher than, the default values. In addition, the reference ET (ETo) was estimated by the FAO56 Penman-Monteith (FAO56 PM) and FAO24-radiation methods using the measured Rs, and the estimates were then compared with measured pan evaporation as an indication of the potential atmospheric demand. Interestingly, and unlike many previous studies that have suggested the FAO56 PM as the standard method for calculating ETo, the FAO24-radiation method with the measured Rs showed better agreement with the mean pan evaporation. Therefore, the FAO24-radiation method with the measured Rs was used as the reference method for the study area, which was also confirmed by previous studies based on lysimeter data. Moreover, the accuracy of the calibrated Rs in the estimation of ETo by the FAO56 PM and FAO24-radiation methods was investigated. Results showed that the calibrated Rs improved the accuracy of the ETo estimated by the FAO24-radiation method compared with the FAO24-radiation method using the measured Rs as the reference, whereas there was no improvement in the estimation of ETo by the FAO56 PM method. Moreover, the empirical coefficient (α) of the Priestley-Taylor (PT) ETo estimation method was calibrated against the reference method, and the results indicated α values of about 2 or higher, compared with the recommended α = 1.26, at all stations. An empirical equation based on yearly mean relative humidity was suggested for estimating α in the study area. Overall, this study showed that (1) the FAO24-radiation method with either the measured or the calibrated Rs is more accurate than the FAO56 PM; (2) the spatially calibrated AP coefficients are very different from each other over an arid and semi-arid area and are higher than those proposed by FAO56; (3) the original PT model is not applicable in arid and semi-arid areas and substantially underestimates ETo; and (4) the coefficient of the PT model should be locally calibrated for each station over an arid and semi-arid area.
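For reference, the Angström-Prescott model referred to throughout relates global to extraterrestrial irradiation through relative sunshine duration, H/H0 = a + b(n/N). A minimal sketch with the commonly cited FAO56 default coefficients (the study's locally calibrated values were higher); the input numbers are illustrative:

```python
def ap_global_irradiation(H0, n, N, a=0.25, b=0.50):
    """Angstrom-Prescott estimate of global irradiation H on a horizontal
    surface: H = (a + b * n/N) * H0, where H0 is extraterrestrial
    irradiation, n measured sunshine hours, and N day length (hours).
    Defaults a, b are the FAO56 fallback coefficients."""
    return (a + b * n / N) * H0

# e.g. H0 = 30 MJ/m2/day, 8 h of sunshine in a 12 h day:
print(ap_global_irradiation(H0=30.0, n=8.0, N=12.0))  # -> 17.5 MJ/m2/day
```

Calibration, as done in this study, fits a and b per station by regressing measured H/H0 against n/N, which is why the coefficients vary spatially.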

  12. A body composition model to estimate mammalian energy stores and metabolic rates from body mass and body length, with application to polar bears.

    PubMed

    Molnár, Péter K; Klanjscek, Tin; Derocher, Andrew E; Obbard, Martyn E; Lewis, Mark A

    2009-08-01

    Many species experience large fluctuations in food availability and depend on energy from fat and protein stores for survival, reproduction and growth. Body condition and, more specifically, energy stores thus constitute key variables in the life history of many species. Several indices exist to quantify body condition but none can provide the amount of stored energy. To estimate energy stores in mammals, we propose a body composition model that differentiates between structure and storage of an animal. We develop and parameterize the model specifically for polar bears (Ursus maritimus Phipps) but all concepts are general and the model could be easily adapted to other mammals. The model provides predictive equations to estimate structural mass, storage mass and storage energy from an appropriately chosen measure of body length and total body mass. The model also provides a means to estimate basal metabolic rates from body length and consecutive measurements of total body mass. Model estimates of body composition, structural mass, storage mass and energy density of 970 polar bears from Hudson Bay were consistent with the life history and physiology of polar bears. Metabolic rate estimates of fasting adult males derived from the body composition model corresponded closely to theoretically expected and experimentally measured metabolic rates. Our method is simple, non-invasive and provides considerably more information on the energetic status of individuals than currently available methods.
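The structure/storage decomposition can be sketched as follows; the allometric coefficient, exponent, and energy density are hypothetical placeholders, not the calibrated polar-bear values from the paper.

```python
def storage_energy(total_mass_kg, length_m, a=55.0, b=3.0, e_mj_per_kg=25.0):
    """Storage energy (MJ) from total body mass and body length:
    structural mass scales allometrically with length (a * L**b, assumed
    coefficients), and the remaining mass is treated as storage with an
    assumed energy density e_mj_per_kg."""
    structural = a * length_m ** b          # structural mass, kg
    storage = max(total_mass_kg - structural, 0.0)
    return storage * e_mj_per_kg            # stored energy, MJ

print(storage_energy(400.0, 1.8))  # 400 kg, 1.8 m -> stored energy in MJ
```

Basal metabolic rate then follows from the same decomposition: the drop in storage energy between two consecutive mass measurements of a fasting animal, divided by the elapsed time.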

  13. Characterization of air manganese exposure estimates for residents in two Ohio towns

    PubMed Central

    Colledge, Michelle A.; Julian, Jaime R.; Gocheva, Vihra V.; Beseler, Cheryl L.; Roels, Harry A.; Lobdell, Danelle T.; Bowler, Rosemarie M.

    2016-01-01

This study was conducted to derive receptor-specific outdoor exposure concentrations of total suspended particulate (TSP) and respirable (dae ≤ 10 μm) air manganese (air-Mn) for East Liverpool and Marietta (Ohio) in the absence of facility emissions data, but where long-term air measurements were available. Our “site-surface area emissions method” used U.S. Environmental Protection Agency’s (EPA) AERMOD (AMS/EPA Regulatory Model) dispersion model and air measurement data to estimate concentrations for residential receptor sites in the two communities. Modeled concentrations were used to create ratios between receptor points and calibrated using measured data from local air monitoring stations. Estimated outdoor air-Mn concentrations were derived for individual study subjects in both towns. The mean estimated long-term air-Mn exposure levels for total suspended particulate were 0.35 μg/m3 (geometric mean [GM]) and 0.88 μg/m3 (arithmetic mean [AM]) in East Liverpool (range: 0.014–6.32 μg/m3) and 0.17 μg/m3 (GM) and 0.21 μg/m3 (AM) in Marietta (range: 0.03–1.61 μg/m3). Modeled results compared well with averaged ambient air measurements from local air monitoring stations. Exposure to respirable Mn particulate matter (PM10; PM <10 μm) was higher in Marietta residents.

Implications: Few available studies evaluate long-term health outcomes from inhalational manganese (Mn) exposure in residential populations, due in part to challenges in measuring individual exposures. Local long-term air measurements provide the means to calibrate models used in estimating long-term exposures. Furthermore, this combination of modeling and ambient air sampling can be used to derive receptor-specific exposure estimates even in the absence of source emissions data for use in human health outcome studies. PMID:26211636
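The ratio-and-calibrate step of the site-surface-area emissions method reduces to a simple scaling: the dispersion model, run with an arbitrary unit emission rate, supplies relative concentrations, and the measured monitor value anchors them. A sketch with hypothetical numbers:

```python
def receptor_estimate(modeled_receptor, modeled_monitor, measured_monitor):
    """Calibrated receptor concentration: scale the modeled
    receptor-to-monitor ratio by the monitor's measured value."""
    return modeled_receptor / modeled_monitor * measured_monitor

# e.g. model gives 0.8 (residence) vs 2.0 (monitor site) in relative units,
# and the monitor measured 0.5 ug/m3 of air-Mn:
print(receptor_estimate(0.8, 2.0, 0.5))  # -> 0.2 ug/m3 at the residence
```

Because only the ratio of modeled values enters, the unknown absolute emission rate cancels, which is what makes the method usable without facility emissions data.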

  14. A near-optimal low complexity sensor fusion technique for accurate indoor localization based on ultrasound time of arrival measurements from low-quality sensors

    NASA Astrophysics Data System (ADS)

    Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.

    2009-05-01

A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range estimating sensors which are used in a market-available localization system. In-situ field measurement results indicated that the respective off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor-performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one that is considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit being theoretically suboptimal, it largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
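The selection rule at the heart of this fusion scheme can be sketched as follows. This is a minimal illustration under an assumed error model (per-sensor variance that grows linearly with the reported range); the sensor parameters are invented, not taken from the paper.

```python
import numpy as np

# Minimal sketch of "select the most accurate sensor" fusion, assuming each
# sensor has an empirically characterized error model. Here the model is a
# noise-floor variance plus a distance-dependent term; all values are toy.
def select_measurement(ranges, base_var, var_per_m):
    """Return (index, range) of the sensor whose current reading has the
    smallest modeled error variance."""
    ranges = np.asarray(ranges, dtype=float)
    variances = np.asarray(base_var) + np.asarray(var_per_m) * ranges
    best = int(np.argmin(variances))
    return best, ranges[best]

idx, rng_m = select_measurement(
    ranges=[3.2, 1.1, 2.4],          # metres reported by three sensors
    base_var=[0.05, 0.02, 0.50],     # per-sensor noise floor
    var_per_m=[0.01, 0.03, 0.01])    # distance-dependent variance growth
# sensor 1 wins: 0.02 + 0.03 * 1.1 = 0.053 is the smallest modeled variance
```

A genie-aided selector (picking the sensor with the smallest *actual* error) gives the optimality benchmark the abstract compares against.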

  15. Model identification of signal transduction networks from data using a state regulator problem.

    PubMed

    Gadkar, K G; Varner, J; Doyle, F J

    2005-03-01

    Advances in molecular biology provide an opportunity to develop detailed models of biological processes that can be used to obtain an integrated understanding of the system. However, development of useful models from the available knowledge of the system and experimental observations still remains a daunting task. In this work, a model identification strategy for complex biological networks is proposed. The approach includes a state regulator problem (SRP) that provides estimates of all the component concentrations and the reaction rates of the network using the available measurements. The full set of the estimates is utilised for model parameter identification for the network of known topology. An a priori model complexity test that indicates the feasibility of performance of the proposed algorithm is developed. Fisher information matrix (FIM) theory is used to address model identifiability issues. Two signalling pathway case studies, the caspase function in apoptosis and the MAP kinase cascade system, are considered. The MAP kinase cascade, with measurements restricted to protein complex concentrations, fails the a priori test and the SRP estimates are poor as expected. The apoptosis network structure used in this work has moderate complexity and is suitable for application of the proposed tools. Using a measurement set of seven protein concentrations, accurate estimates for all unknowns are obtained. Furthermore, the effects of measurement sampling frequency and quality of information in the measurement set on the performance of the identified model are described.
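The Fisher-information-based identifiability check mentioned above can be sketched for a small system. The sensitivity values below are toy numbers, not from the case studies; the point is that correlated output sensitivities yield a rank-deficient FIM and hence non-identifiable parameters, which is the failure mode the MAP kinase cascade exhibits when measurements are too restricted.

```python
import numpy as np

# Sketch of an FIM identifiability test: with output sensitivities
# S[i, j] = dy(t_i)/dtheta_j and measurement noise variance sigma^2,
# FIM = S^T S / sigma^2. Parameters are locally identifiable only if the
# FIM has full rank. The sensitivity matrix below is illustrative.
def fisher_information(S, sigma):
    S = np.asarray(S, dtype=float)
    return S.T @ S / sigma**2

S = np.array([[1.0, 2.0],
              [0.5, 1.0],
              [2.0, 4.0]])   # column 2 = 2 * column 1 (perfectly correlated)
F = fisher_information(S, sigma=0.1)
identifiable = np.linalg.matrix_rank(F) == F.shape[0]
# identifiable is False: correlated sensitivities make the FIM singular
```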

  16. Average static stress drops for heterogeneous slip distributions: Comparison of several measures and implications for energy partition in earthquakes

    NASA Astrophysics Data System (ADS)

    Noda, H.; Lapusta, N.; Kanamori, H.

    2010-12-01

    Static stress drop is often estimated using the seismic moment and rupture area based on a model for uniform stress drop distribution; we denote this estimate by Δσ_M. Δσ_M is sometimes interpreted as the spatial average of stress change over the ruptured area, denoted here as Δσ_A, and used accordingly, for example, to discuss the relation between recurrence interval and the healing of the frictional surface in a system with one degree of freedom [e.g., Marone, 1998]. Δσ_M is also used to estimate available energy (defined as the strain energy change computed using the final stress state as the reference one) and radiation efficiency [e.g., Venkataraman and Kanamori, 2004]. In this work, we define a stress drop measure, Δσ_E, that would enter the exact computation of available energy and radiation efficiency. The three stress drop measures - Δσ_M that can be estimated from observations, Δσ_A, and Δσ_E - are equal if the static stress change is spatially uniform, and that motivates substituting Δσ_M for the other two quantities in applications. However, finite source inversions suggest that the stress change is heterogeneous in natural earthquakes [e.g., Bouchon, 1997]. Since Δσ_M is the average of stress change weighted by slip distribution due to a uniform stress drop [Madariaga, 1979], Δσ_E is the average of stress change weighted by actual slip distribution in the event (this work), and Δσ_A is the simple spatial average of stress change, the three measures should, in general, be different. Here, we investigate the effect of heterogeneity aiming to understand how to use the seismological estimates of stress drop appropriately. We create heterogeneous slip distributions for both circular and rectangular planar ruptures using the approach motivated by Liu-Zeng et al. [2005] and Lavalleé et al [2005]. We find that, indeed, the three stress drop measures differ in our scenarios. 
In particular, heterogeneity increases Δσ_E and thus the available energy when the seismic moment (and hence Δσ_M) is preserved. So using Δσ_M instead of Δσ_E would underestimate available energy and hence overestimate radiation efficiency. For a range of parameters, Δσ_E is well-approximated by the seismic estimate Δσ_M if the latter is computed using a modified (decreased) rupture area that excludes low-slip regions; a qualitatively similar procedure is already being used in practice [Somerville et al., 1999].
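The difference between the two averaging rules can be illustrated on a discretized rupture. The slip and stress-change values below are arbitrary toy data, not from the study; they simply show that when high-slip cells coincide with large stress drop, the slip-weighted average Δσ_E exceeds the plain spatial average Δσ_A.

```python
import numpy as np

# Toy comparison of two stress drop measures on a gridded planar rupture:
# Δσ_A is the simple spatial average of the stress change, while Δσ_E
# weights the stress change by the actual slip distribution.
def stress_drop_measures(stress_change, slip):
    stress_change = np.asarray(stress_change, dtype=float)
    slip = np.asarray(slip, dtype=float)
    ds_a = stress_change.mean()                        # Δσ_A
    ds_e = (stress_change * slip).sum() / slip.sum()   # Δσ_E
    return ds_a, ds_e

stress = np.array([1.0, 3.0, 2.0, 0.5])   # MPa, stress change per grid cell
slip = np.array([0.2, 1.5, 0.8, 0.1])     # m, heterogeneous slip per cell
ds_a, ds_e = stress_drop_measures(stress, slip)
# here ds_e > ds_a because the high-slip cells carry the largest stress drop
```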

  17. An evaluation of edge effects in nutritional accessibility and availability measures: a simulation study

    PubMed Central

    2010-01-01

Background This paper addresses the statistical use of accessibility and availability indices and the effect of study boundaries on these measures. The measures are evaluated via an extensive simulation based on cluster models for local outlet density. We define outlet to mean either food retail store (convenience store, supermarket, gas station) or restaurant (limited-service or full-service restaurants). We designed a simulation whereby a cluster outlet model is assumed in a large study window and an internal subset of that window is constructed. We performed simulations on various criteria, including one scenario representing an urban area with 2000 outlets as well as a non-urban area simulated with only 300 outlets. A comparison is made between estimates obtained with the full study area and estimates using only the subset area. This allows the study of the effect of edge censoring on accessibility measures. Results The results suggest that considerable bias is found at the edges of study regions, in particular for accessibility measures. Edge effects are smaller for availability measures (when not smoothed) and also for short-range accessibility. Conclusions It is recommended that any study utilizing these measures should correct for edge effects. The use of edge correction via guard areas is recommended and the avoidance of large-range distance-based accessibility measures is also proposed. PMID:20663199
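The guard-area correction recommended above can be sketched in one dimension for brevity. The window sizes and outlet counts here are illustrative: outlets are simulated over the full study window, but accessibility is only evaluated on an interior subset, so boundary points are never artificially far from "all" outlets.

```python
import numpy as np

# Sketch of edge correction via guard areas: simulate outlets over the
# full window [0, 10], but report accessibility (distance to the nearest
# outlet) only for an interior grid, leaving guard strips at both ends.
rng = np.random.default_rng(0)
outlets = rng.uniform(0.0, 10.0, size=300)   # outlets in the full window
grid = np.linspace(2.0, 8.0, 61)             # interior evaluation points

def nearest_outlet_distance(points, outlets):
    """Accessibility of each point as distance to its nearest outlet."""
    return np.min(np.abs(points[:, None] - outlets[None, :]), axis=1)

access = nearest_outlet_distance(grid, outlets)
# recomputing with outlets clipped to [2, 8] (no guard area) would
# inflate distances near the subset edges, which is the bias studied above
```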

  18. Reaeration equations derived from U.S. geological survey database

    USGS Publications Warehouse

    Melching, C.S.; Flores, H.E.

    1999-01-01

Accurate estimation of the reaeration-rate coefficient (K2) is extremely important for waste-load allocation. Currently available K2 estimation equations generally yield poor estimates when applied to stream conditions different from those for which the equations were derived, because they were derived from small databases composed of potentially highly inaccurate measurements. A large data set of K2 measurements made with tracer-gas methods was compiled from U.S. Geological Survey studies. This compilation included 493 reaches on 166 streams in 23 states. Careful screening to detect and eliminate erroneous measurements reduced the data set to 371 measurements. These measurements were divided into four subgroups on the basis of flow regime (channel control or pool and riffle) and stream scale (discharge greater than or less than 0.556 m3/s). Multiple linear regression in logarithms was applied to relate K2 to 12 stream hydraulic and water-quality characteristics. The resulting best-estimation equations had the form of semiempirical equations that included the rate of energy dissipation and discharge or depth and width as variables. For equation verification, a data set of K2 measurements made with tracer-gas procedures by other agencies was compiled from the literature. This compilation included 127 reaches on at least 24 streams in at least seven states. The standard error of estimate obtained when applying the developed equations to the U.S. Geological Survey data set ranged from 44 to 61%, whereas the standard error of estimate was 78% when applied to the verification data set.
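"Multiple linear regression in logarithms" of the kind described above can be sketched on synthetic data. The predictors (a rate-of-energy-dissipation proxy and depth) echo the variables the study found useful, but the data and coefficients below are simulated for illustration, not the USGS data set.

```python
import numpy as np

# Sketch of a log-log regression for K2: fit
#   log K2 = b0 + b1*log(E) + b2*log(H)
# by ordinary least squares, where E is an energy-dissipation proxy and
# H is depth. Data are synthetic with known coefficients (1.0, 0.8, -0.6).
rng = np.random.default_rng(1)
n = 371
energy = rng.lognormal(0.0, 0.5, n)
depth = rng.lognormal(0.0, 0.3, n)
log_k2 = 1.0 + 0.8 * np.log(energy) - 0.6 * np.log(depth) \
         + rng.normal(0.0, 0.1, n)          # noisy "measurements"

X = np.column_stack([np.ones(n), np.log(energy), np.log(depth)])
coef, *_ = np.linalg.lstsq(X, log_k2, rcond=None)
# coef recovers roughly (1.0, 0.8, -0.6) from the noisy data
```

Verification against a held-out data set, as the study does with measurements from other agencies, would use the fitted `coef` to predict log K2 for the new reaches and compare standard errors.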

  19. Model-data integration for developing the Cropland Carbon Monitoring System (CCMS)

    NASA Astrophysics Data System (ADS)

    Jones, C. D.; Bandaru, V.; Pnvr, K.; Jin, H.; Reddy, A.; Sahajpal, R.; Sedano, F.; Skakun, S.; Wagle, P.; Gowda, P. H.; Hurtt, G. C.; Izaurralde, R. C.

    2017-12-01

The Cropland Carbon Monitoring System (CCMS) has been initiated to improve regional estimates of carbon fluxes from croplands in the conterminous United States through integration of terrestrial ecosystem modeling, use of remote-sensing products and publicly available datasets, and development of improved landscape and management databases. In order to develop these improved carbon flux estimates, experimental datasets are essential for evaluating the skill of estimates, characterizing the uncertainty of these estimates, characterizing parameter sensitivities, and calibrating specific modeling components. Experiments were sought that included flux tower measurement of CO2 fluxes under production of major agronomic crops. To date, data have been collected from 17 experiments comprising 117 site-years from 12 unique locations. Calibration of terrestrial ecosystem model parameters using available crop productivity and net ecosystem exchange (NEE) measurements resulted in improvements in the RMSE of NEE predictions of 3.78% to 7.67%, while improvements in RMSE for yield ranged from -1.85% to 14.79%. Model sensitivities were dominated by parameters related to leaf area index (LAI) and spring growth, demonstrating considerable capacity for model improvement through development and integration of remote-sensing products. Subsequent analyses will assess the impact of such integrated approaches on the skill of cropland carbon flux estimates.

  20. Prediction of internal and external noise fields for blowdown wind tunnels.

    NASA Technical Reports Server (NTRS)

    Hosier, R. N.; Mayes, W. H.

    1972-01-01

    Empirical methods have been developed to estimate the test section noise levels and the outside noise radiation patterns of blowdown wind tunnels. Included are considerations of noise generation by control valves, burners, turbulent boundary layers, and exhaust jets as appropriate. Sample test section and radiation field noise estimates are presented. The external estimates are noted to be in good agreement with the limited amount of available measurements.

  1. On Some Confidence Intervals for Estimating the Mean of a Skewed Population

    ERIC Educational Resources Information Center

    Shi, W.; Kibria, B. M. Golam

    2007-01-01

    A number of methods are available in the literature to measure confidence intervals. Here, confidence intervals for estimating the population mean of a skewed distribution are considered. This note proposes two alternative confidence intervals, namely, Median t and Mad t, which are simple adjustments to the Student's t confidence interval. In…

  2. FIELD MEASUREMENT OF GREENHOUSE GAS EMISSION RATES AND DEVELOPMENT OF EMISSION FACTORS FOR WASTEWATER TREATMENT

    EPA Science Inventory

    The report gives results of field testing to develop more reliable green house gas (GHG) emission estimates for Wastewater treatment (WWT) lagoons. (NOTE: Estimates are available for the amount of methane (CH4) emitted from certain types of waste facilities, but there is not adeq...

  3. ESTIMATION OF EARLY INTERNAL DOSES TO FUKUSHIMA RESIDENTS AFTER THE NUCLEAR DISASTER BASED ON THE ATMOSPHERIC DISPERSION SIMULATION.

    PubMed

    Kim, Eunjoo; Tani, Kotaro; Kunishima, Naoaki; Kurihara, Osamu; Sakai, Kazuo; Akashi, Makoto

    2016-11-01

Estimating the early internal doses to residents in the Fukushima Daiichi Nuclear Power Station accident is a difficult task because limited human/environmental measurement data are available. Hence, the feasibility of using atmospheric dispersion simulations created by the Worldwide version of System for Prediction of Environmental Emergency Dose Information 2nd Version (WSPEEDI-II) in the estimation was examined in the present study. This examination was done by comparing the internal doses evaluated based on the human measurements with those calculated using time-series air concentration maps (131I and 137Cs) generated by WSPEEDI-II. The results showed that the latter doses were several times higher than the former doses. However, this discrepancy could be minimised by taking into account personal behaviour data that will be available soon. This article also presents the development of a prototype system for estimating the internal dose based on the simulations.

  4. Allometric method to estimate leaf area index for row crops

    USDA-ARS?s Scientific Manuscript database

    Leaf area index (LAI) is critical for predicting plant metabolism, biomass production, evapotranspiration, and greenhouse gas sequestration, but direct LAI measurements are difficult and labor intensive. Several methods are available to measure LAI indirectly or calculate LAI using allometric method...

  5. [Application of predictive model to estimate concentrations of chemical substances in the work environment].

    PubMed

    Kupczewska-Dobecka, Małgorzata; Czerczak, Sławomir; Jakubowski, Marek; Maciaszek, Piotr; Janasik, Beata

    2010-01-01

Based on the Estimation and Assessment of Substance Exposure (EASE) predictive model implemented into the European Union System for the Evaluation of Substances (EUSES 2.1.), the exposure to three chosen organic solvents: toluene, ethyl acetate and acetone was estimated and compared with the results of measurements in workplaces. Prior to validation, the EASE model was pretested using three exposure scenarios. The scenarios differed in the decision tree of pattern of use. Five substances were chosen for the test: 1,4-dioxane, methyl tert-butyl ether, diethylamine, 1,1,1-trichloroethane and bisphenol A. After testing the EASE model, the next step was the validation by estimating the exposure level and comparing it with the results of measurements in the workplace. We used the results of measurements of toluene, ethyl acetate and acetone concentrations in the work environment of a paint and lacquer factory, a shoe factory and a refinery. Three types of exposure scenarios, adaptable to the description of working conditions, were chosen to estimate inhalation exposure. Comparison of calculated exposure to toluene, ethyl acetate and acetone with measurements in workplaces showed that model predictions are comparable with the measurement results. Only for low concentration ranges were the measured concentrations higher than those predicted. EASE is a clear, consistent system, which can be successfully used as an additional component of inhalation exposure estimation. If measurement data are available, they should be preferred to values estimated from models. In addition to inhalation exposure estimation, the EASE model makes it possible not only to assess exposure-related risk but also to predict workers' dermal exposure.

  6. Graph theoretic framework based cooperative control and estimation of multiple UAVs for target tracking

    NASA Astrophysics Data System (ADS)

    Ahmed, Mousumi

Designing the control technique for nonlinear dynamic systems is a significant challenge. Approaches to designing a nonlinear controller are studied and an extensive study of backstepping-based techniques is performed in this research with the purpose of tracking a moving target autonomously. Our main motivation is to explore the controller for cooperative and coordinating unmanned vehicles in a target tracking application. To start with, a general theoretical framework for target tracking is studied and a controller in a three-dimensional environment for a single UAV is designed. This research is primarily focused on finding a generalized method which can be applied to track almost any reference trajectory. The backstepping technique is employed to derive the controller for a simplified UAV kinematic model. This controller can compute three autopilot modes, i.e., velocity, ground heading (or course angle), and flight path angle, for tracking the unmanned vehicle. Numerical implementation is performed in MATLAB with the assumption of having perfect and full state information of the target to investigate the accuracy of the proposed controller. This controller is then frozen for the multi-vehicle problem. Distributed or decentralized cooperative control is discussed in the context of multi-agent systems. A consensus-based cooperative control is studied; such consensus-based control problems can be viewed through algebraic graph theory concepts. The communication structure between the UAVs is represented by a dynamic graph, where UAVs are represented by the nodes and the communication links are represented by the edges. The previously designed controller is augmented to account for the group to obtain consensus based on their communication. A theoretical development of the controller for the cooperative group of UAVs is presented and the simulation results for different communication topologies are shown. 
This research also investigates the cases where the communication topology switches to a different topology over particular time instants. Lyapunov analysis is performed to show stability in all cases. Another important aspect of this dissertation research is to implement the controller for the case where perfect or full state information is not available. This necessitates the design of an estimator to estimate the system state. A nonlinear estimator, the Extended Kalman Filter (EKF), is first developed for target tracking with a single UAV. The uncertainties involved with the measurement model and dynamics model are considered as zero-mean Gaussian noises with some known covariances. The measurements of the full state of the target are not available and only the range, elevation, and azimuth angle are available from an onboard seeker sensor. A separate EKF is designed to estimate the UAV's own state, where the state measurement is available through on-board sensors. The controller computes the three control commands based on the estimated states of the target and its own states. Estimation-based control laws are also implemented for colored noise measurement uncertainties, and the controller performance is shown with the simulation results. The estimation-based control approach is then extended to the cooperative target tracking case. The target information is available to the network and a separate estimator is used to estimate target states. All of the UAVs in the network apply the same control law and the only difference is that each UAV updates the commands according to their connection. The simulation is performed for both cases of fixed and time-varying communication topology. Monte Carlo simulation is also performed with different sample noises to investigate the performance of the estimator. The proposed technique is shown to be simple and robust to noisy environments.
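A single predict/update cycle of an EKF in the spirit of this setting can be sketched as follows. This is a toy 2-D target with a constant-velocity model observed only through its range from the origin; the noise levels and model are assumptions for illustration, not the dissertation's actual seeker or dynamics models (which also use elevation and azimuth).

```python
import numpy as np

# Minimal EKF step: linear constant-velocity dynamics, nonlinear range
# measurement h(x) = sqrt(px^2 + py^2) linearized via its Jacobian H.
def ekf_step(x, P, z, dt, q, r):
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                   # constant-velocity dynamics
    x = F @ x
    P = F @ P @ F.T + q * np.eye(4)          # predict state and covariance
    rho = np.hypot(x[0], x[1])               # predicted range
    H = np.array([[x[0] / rho, x[1] / rho, 0.0, 0.0]])
    S = (H @ P @ H.T).item() + r             # innovation variance (scalar)
    K = (P @ H.T) / S                        # Kalman gain, shape (4, 1)
    x = x + K[:, 0] * (z - rho)              # update with range innovation
    P = (np.eye(4) - K @ H) @ P              # update covariance
    return x, P

x0 = np.array([9.0, 1.0, 0.0, 0.0])          # [px, py, vx, vy]
x1, P1 = ekf_step(x0, np.eye(4), z=10.2, dt=0.1, q=1e-4, r=0.01)
# the estimated range moves from about 9.06 toward the measured 10.2
```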

  7. Optimal phase estimation with arbitrary a priori knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demkowicz-Dobrzanski, Rafal

    2011-06-15

The optimal-phase estimation strategy is derived when partial a priori knowledge on the estimated phase is available. The solution is found with the help of the most famous result from entanglement theory: the positive partial transpose criterion. The structure of the optimal measurements, estimators, and the optimal probe states is analyzed. This Rapid Communication provides a unified framework bridging the gap in the literature on the subject, which until now dealt almost exclusively with two extreme cases: almost perfect knowledge (local approach based on Fisher information) and no a priori knowledge (global approach based on covariant measurements). Special attention is paid to a natural a priori probability distribution arising from a diffusion process.

  8. Ares I-X Best Estimated Trajectory Analysis and Results

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; Starr, Brett R.; Derry, Stephen D.; Brandon, Jay; Olds, Aaron D.

    2011-01-01

    The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.

  9. Ares I-X Best Estimated Trajectory and Comparison with Pre-Flight Predictions

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Beck, Roger E.; Derry, Stephen D.; Brandon, Jay M.; Starr, Brett R.; Tartabini, Paul V.; Olds, Aaron D.

    2011-01-01

The Ares I-X trajectory reconstruction produced best estimated trajectories of the flight test vehicle ascent through stage separation, and of the first and upper stage entries after separation. The trajectory reconstruction process combines on-board, ground-based, and atmospheric measurements to produce the trajectory estimates. The Ares I-X vehicle had a number of on-board and ground based sensors that were available, including inertial measurement units, radar, air-data, and weather balloons. However, due to problems with calibrations and/or data, not all of the sensor data were used. The trajectory estimate was generated using an Iterative Extended Kalman Filter algorithm, which is an industry standard processing algorithm for filtering and estimation applications. This paper describes the methodology and results of the trajectory reconstruction process, including flight data preprocessing and input uncertainties, trajectory estimation algorithms, output transformations, and comparisons with preflight predictions.

  10. Parameterizing a Large-scale Water Balance Model in Regions with Sparse Data: The Tigris-Euphrates River Basins as an Example

    NASA Astrophysics Data System (ADS)

    Flint, A. L.; Flint, L. E.

    2010-12-01

    The characterization of hydrologic response to current and future climates is of increasing importance to many countries around the world that rely heavily on changing and uncertain water supplies. Large-scale models that can calculate a spatially distributed water balance and elucidate groundwater recharge and surface water flows for large river basins provide a basis of estimates of changes due to future climate projections. Unfortunately many regions in the world have very sparse data for parameterization or calibration of hydrologic models. For this study, the Tigris and Euphrates River basins were used for the development of a regional water balance model at 180-m spatial scale, using the Basin Characterization Model, to estimate historical changes in groundwater recharge and surface water flows in the countries of Turkey, Syria, Iraq, Iran, and Saudi Arabia. Necessary input parameters include precipitation, air temperature, potential evapotranspiration (PET), soil properties and thickness, and estimates of bulk permeability from geologic units. Data necessary for calibration includes snow cover, reservoir volumes (from satellite data and historic, pre-reservoir elevation data) and streamflow measurements. Global datasets for precipitation, air temperature, and PET were available at very large spatial scales (50 km) through the world scale databases, finer scale WorldClim climate data, and required downscaling to fine scales for model input. Soils data were available through world scale soil maps but required parameterization on the basis of textural data to estimate soil hydrologic properties. Soil depth was interpreted from geomorphologic interpretation and maps of quaternary deposits, and geologic materials were categorized from generalized geologic maps of each country. 
Estimates of bedrock permeability were made on the basis of literature and data from drillers’ logs and adjusted during calibration of the model to streamflow measurements where available. Results of historical water balance calculations throughout the Tigris and Euphrates River basins will be shown along with details of processing input data to provide spatial continuity and downscaling. Basic water availability analysis for recharge and runoff is readily available from a deterministic solar radiation energy balance model, a global potential evapotranspiration model, and global estimates of precipitation and air temperature. Future climate estimates can be readily applied to the same water and energy balance models to evaluate future water availability for countries around the globe.

  11. ADMIT: a toolbox for guaranteed model invalidation, estimation and qualitative-quantitative modeling.

    PubMed

    Streif, Stefan; Savchenko, Anton; Rumschinski, Philipp; Borchers, Steffen; Findeisen, Rolf

    2012-05-01

Often competing hypotheses for biochemical networks exist in the form of different mathematical models with unknown parameters. Considering available experimental data, it is then desired to reject model hypotheses that are inconsistent with the data, or to estimate the unknown parameters. However, these tasks are complicated because experimental data are typically sparse, uncertain, and are frequently only available in form of qualitative if-then observations. ADMIT (Analysis, Design and Model Invalidation Toolbox) is a MatLab(TM)-based tool for guaranteed model invalidation, state and parameter estimation. The toolbox allows the integration of quantitative measurement data, a priori knowledge of parameters and states, and qualitative information on the dynamic or steady-state behavior. A constraint satisfaction problem is automatically generated and algorithms are implemented for solving the desired estimation, invalidation or analysis tasks. The implemented methods build on convex relaxation and optimization and therefore provide guaranteed estimation results and certificates for invalidity. ADMIT, tutorials and illustrative examples are available free of charge for non-commercial use at http://ifatwww.et.uni-magdeburg.de/syst/ADMIT/

  12. A measurement error model for physical activity level as measured by a questionnaire with application to the 1999-2006 NHANES questionnaire.

    PubMed

    Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S

    2013-06-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
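The attenuation logic described above can be sketched on simulated data. All numbers below are invented, not OPEN Study data: an outcome driven by the true physical activity level T is regressed on an error-prone questionnaire measure Q, the naive slope is shrunk by the attenuation factor lambda = cov(Q, T) / var(Q), and dividing by lambda (regression calibration) approximately recovers the true slope.

```python
import numpy as np

# Simulated illustration of attenuation and regression calibration.
rng = np.random.default_rng(2)
n = 433
T = rng.normal(1.6, 0.2, n)                   # true physical activity level
Q = 0.5 + 0.7 * T + rng.normal(0, 0.25, n)    # questionnaire report w/ error
Y = 2.0 * T + rng.normal(0, 0.3, n)           # outcome driven by truth

lam = np.cov(Q, T)[0, 1] / np.var(Q, ddof=1)        # attenuation factor
beta_naive = np.cov(Q, Y)[0, 1] / np.var(Q, ddof=1) # slope of Y on Q
beta_cal = beta_naive / lam                         # calibrated slope, ~2.0
# beta_naive is attenuated toward zero; dividing by lam undoes the shrinkage
```

This is the sense in which the reported attenuation factors (0.43 to 0.73) imply inflated sample sizes: effect sizes estimated from the questionnaire alone are systematically shrunk by lambda.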

  13. Algorithms for spacecraft formation flying navigation based on wireless positioning system measurements

    NASA Astrophysics Data System (ADS)

    Goh, Shu Ting

    Spacecraft formation flying navigation continues to receive a great deal of interest. The research presented in this dissertation focuses on developing methods for estimating spacecraft absolute and relative positions, assuming measurements of only relative positions using wireless sensors. The implementation of the extended Kalman filter to the spacecraft formation navigation problem results in high estimation errors and instabilities in state estimation at times. This is due to the high nonlinearities in the system dynamic model. Several approaches are attempted in this dissertation aiming at increasing the estimation stability and improving the estimation accuracy. A differential geometric filter is implemented for spacecraft positions estimation. The differential geometric filter avoids the linearization step (which is always carried out in the extended Kalman filter) through a mathematical transformation that converts the nonlinear system into a linear system. A linear estimator is designed in the linear domain, and then transformed back to the physical domain. This approach demonstrated better estimation stability for spacecraft formation positions estimation, as detailed in this dissertation. The constrained Kalman filter is also implemented for spacecraft formation flying absolute positions estimation. The orbital motion of a spacecraft is characterized by two range extrema (perigee and apogee). At the extremum, the rate of change of a spacecraft's range vanishes. This motion constraint can be used to improve the position estimation accuracy. The application of the constrained Kalman filter at only two points in the orbit causes filter instability. Two variables are introduced into the constrained Kalman filter to maintain the stability and improve the estimation accuracy. An extended Kalman filter is implemented as a benchmark for comparison with the constrained Kalman filter. 
Simulation results show that the constrained Kalman filter provides better estimation accuracy compared with the extended Kalman filter. A Weighted Measurement Fusion Kalman Filter (WMFKF) is proposed in this dissertation. In wireless localizing sensors, measurement error is proportional to the distance the signal travels and to the sensor noise. In the proposed Weighted Measurement Fusion Kalman Filter, the signal traveling time delay is not modeled; instead, each measurement is weighted based on the measured signal travel distance. The resulting estimation performance is compared to the standard Kalman filter in two scenarios. The first scenario assumes using a wireless local positioning system in a GPS-denied environment. The second scenario assumes the availability of both the wireless local positioning system and GPS measurements. The simulation results show that the WMFKF has accuracy similar to the standard Kalman Filter (KF) in the GPS-denied environment. However, the WMFKF maintains the position estimation error within its expected error boundary when the WLPS detection range limit is above 30 km. In addition, the WMFKF has better accuracy and stability when GPS is available. Also, the computational cost analysis shows that the WMFKF has a lower computational cost than the standard KF, and the WMFKF has a higher ellipsoid error probable percentage than the standard Measurement Fusion method. A method to determine the relative attitudes between three spacecraft is developed. The method requires four direction measurements between the three spacecraft. The simulation results and covariance analysis show that the method's error falls within a three-sigma boundary without exhibiting any singularity issues. A study of the accuracy of the proposed method with respect to the shape of the spacecraft formation is also presented.
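
    The distance-dependent weighting can be sketched as a scalar Kalman update in which each measurement's noise variance is inflated with its signal travel distance (a minimal illustration; the variance model and parameter values below are assumptions, not the dissertation's exact formulation):

    ```python
    import numpy as np

    def wmf_update(x, P, zs, ds, sigma0=1.0, alpha=0.05):
        """One fused measurement update for a scalar position state.
        Each wireless range-derived measurement z_i is weighted by
        inflating its noise variance with the signal travel distance d_i.
        The variance model sigma0^2 * (1 + alpha*d_i)^2 is an assumption
        for illustration only."""
        for z, d in zip(zs, ds):
            R = (sigma0 * (1.0 + alpha * d)) ** 2
            K = P / (P + R)          # Kalman gain (H = 1)
            x = x + K * (z - x)
            P = (1.0 - K) * P
        return x, P

    x, P = wmf_update(0.0, 10.0, zs=[1.0, 1.2], ds=[5.0, 50.0])
    # the nearer sensor (d=5) pulls the estimate harder than the far one (d=50)
    print(round(x, 3), round(P, 3))
    ```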

  14. Neutron spectrum from the little boy mock-up

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robba, A.A.

    1986-01-01

    Most of the human exposure data used for setting radiation protection guidelines have been obtained by following the survivors of the nuclear explosions at Hiroshima and Nagasaki. Proper evaluation of these data requires estimates of the radiation exposure received by those survivors. Until now neutron dose estimates have relied primarily on calculations as no measurements of the leakage neutron flux or neutron spectrum were available. We have measured the high-energy leakage neutron spectrum from a mock-up of the Little Boy device operating at delayed critical. The measurements are compared with Monte Carlo calculations of the leakage neutron spectrum.

  15. Estimation of bare soil evaporation using multifrequency airborne SAR

    NASA Technical Reports Server (NTRS)

    Soares, Joao V.; Shi, Jiancheng; Van Zyl, Jakob; Engman, E. T.

    1992-01-01

    It is shown that for homogeneous areas soil moisture can be derived from synthetic aperture radar (SAR) measurements, so that microwave remote sensing can give realistic estimates of energy fluxes if coupled to a simple two-layer model representing the soil. The model simulates volumetric water content (Wg) using classical meteorological data, provided that some of the soil thermal and hydraulic properties are known. Only four parameters are necessary: mean water content, thermal conductivity, thermal diffusivity, and soil resistance to evaporation. They may be derived if a minimal number of measured values of Wg and surface layer temperature (Tg) are available together with independent measurements of energy flux to compare with the estimated values. The estimated evaporation is shown to be realistic and in good agreement with drying-stage theory, in which the transfer of water in the soil is in vapor form.

  16. Approximation of state variables for discrete-time stochastic genetic regulatory networks with leakage, distributed, and probabilistic measurement delays: a robust stability problem.

    PubMed

    Pandiselvi, S; Raja, R; Cao, Jinde; Rajchakit, G; Ahmad, Bashir

    2018-01-01

    This work addresses the problem of approximation of state variables for discrete-time stochastic genetic regulatory networks with leakage, distributed, and probabilistic measurement delays. Here we design a linear estimator in such a way that the absorption of mRNA and protein can be approximated via known measurement outputs. By utilizing a Lyapunov-Krasovskii functional and stochastic analysis techniques, we obtain stability criteria for the estimation error system in the form of linear matrix inequalities (LMIs) under which the estimation error dynamics is robustly exponentially stable. Further, the obtained LMI conditions can be readily solved by available software packages. Moreover, the specific expression of the desired estimator is also given. Finally, two illustrative mathematical examples are provided to show the advantage of the proposed conceptual results.
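
    The underlying stability test can be sketched in its simplest form (a hypothetical error-dynamics matrix with the delay and stochastic terms dropped, so this is the scalar analogue of the paper's LMI conditions, not its actual criteria): discrete-time exponential stability holds iff the Lyapunov equation admits a positive-definite solution.

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    # Hypothetical estimation-error dynamics e[k+1] = A_e e[k]. Exponential
    # stability holds iff A_e' P A_e - P = -Q has a positive-definite
    # solution P for some Q > 0.
    A_e = np.array([[0.5, 0.1],
                    [0.0, 0.8]])
    Q = np.eye(2)
    P = solve_discrete_lyapunov(A_e.T, Q)  # solves A_e' P A_e - P + Q = 0

    eigs = np.linalg.eigvalsh(P)
    print(np.all(eigs > 0))  # P > 0 => error dynamics exponentially stable
    ```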

  17. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 1. Theory

    NASA Astrophysics Data System (ADS)

    Graham, Wendy D.; Tankersley, Claude D.

    1994-05-01

    Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.
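
    The Bayesian conditioning step can be illustrated with a minimal 1-D simple-kriging sketch (the covariance model and all numbers below are assumptions for illustration; the paper derives its covariances and cross-covariances from the flow equations rather than positing a form):

    ```python
    import numpy as np

    def gauss_cov(x1, x2, var=1.0, ell=2.0):
        """Stationary covariance model (Gaussian shape, illustrative only)."""
        d = x1[:, None] - x2[None, :]
        return var * np.exp(-(d / ell) ** 2)

    # Condition a 1-D random log-transmissivity field on two measurements
    x_meas = np.array([2.0, 7.0])
    y_meas = np.array([0.8, -0.5])            # measured log-T anomalies
    x_grid = np.linspace(0.0, 10.0, 101)

    C_mm = gauss_cov(x_meas, x_meas) + 1e-6 * np.eye(2)  # meas-meas covariance
    C_gm = gauss_cov(x_grid, x_meas)                     # grid-meas cross-cov

    w = np.linalg.solve(C_mm, y_meas)   # kriging weights applied to the data
    y_hat = C_gm @ w                    # optimal (linear Bayesian) estimate

    print(round(y_hat[20], 2))  # at x = 2.0 the estimate honors the datum 0.8
    ```

    Conditioning head and transmissivity jointly, as in the paper, replaces the scalar covariances above with the cross-covariance blocks between the head, transmissivity, and recharge fields.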

  18. The Influence of Climate Change on Irrigated Water Demands and Surface Water Availability of the Yellow River Basin

    NASA Astrophysics Data System (ADS)

    Troy, T. J.; Zhang, J.

    2017-12-01

    Balancing irrigated water demands and surface water availability is critical for sustainable water resources management. In China, irrigation is the largest water user, and there is concern that irrigated water demands will be affected by climate change. If the relationship between climate change, irrigated water demands and surface water availability is quantified, then effective measures can be developed to maintain food production while ensuring water sustainability. This research focuses on the Yellow River, the second-longest river in China, and analyzes the impact of historical and projected climate change on agricultural water demands and surface water availability. Corn and wheat are selected as representative crops to estimate the effect of temperature and precipitation changes on irrigated water demands. The VIC model is used to simulate daily streamflow throughout the Yellow River, providing estimates of surface water availability. Overall, results indicate that irrigated water demand and surface water availability are affected by climate change, with spatially varying impacts depending on spatial patterns of climate trends and river network position. This research provides insight into water security in the Yellow River basin, indicating where water efficiency measures are needed and where they are not.

  19. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries.

    PubMed

    Rivera-Rodriguez, Claudia L; Resch, Stephen; Haneuse, Sebastien

    2018-01-01

    In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. 
This method can be used not only to estimate the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty.
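
    The bootstrap approach mentioned above can be sketched for a facility-subsample costing study (all data and the simple expansion estimator below are hypothetical; a real survey analysis would also account for stratification and unequal sampling weights):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical per-facility delivery costs from a subsample of 40
    # facilities out of a program covering N = 500 facilities
    costs = rng.gamma(shape=4.0, scale=250.0, size=40)
    N = 500

    def total_cost(sample):
        return N * np.mean(sample)      # expansion estimator of total cost

    # Resample facilities with replacement to quantify statistical uncertainty
    boot = np.array([total_cost(rng.choice(costs, size=costs.size, replace=True))
                     for _ in range(2000)])

    est = total_cost(costs)
    se = boot.std(ddof=1)                    # bootstrap standard error
    ci = np.percentile(boot, [2.5, 97.5])    # 95% percentile interval
    print(round(est), round(se), ci.round())
    ```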

  20. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries

    PubMed Central

    Resch, Stephen

    2018-01-01

    Objectives: In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. Methods: We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. Results: A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. 
This method can be used not only to estimate the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Conclusion: Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty. PMID:29636964

  1. Analysis of Terrestrial Conditions and Dynamics

    NASA Technical Reports Server (NTRS)

    Goward, S. N.

    1985-01-01

    An ecological model is developed to estimate annual net primary productivity of vegetation in twelve major North American biomes. Three models are adapted and combined, each addressing a different factor known to govern primary productivity, i.e., photosynthesis, respiration, and moisture availability. Measures of intercepted photosynthetically active radiation (IPAR) for input to the photosynthesis model are derived from spectral vegetation index data. Normalized Difference Vegetation Index (NDVI) data are produced from NOAA-7 Advanced Very High Resolution Radiometer (AVHRR) observations for April 1982 through March 1983. NDVI values are sampled from within the biomes at locations for which climatological data are available. Monthly estimates of Net Primary Productivity (NPP) for each sample location are generated and summed over the twelve-month period. These monthly estimates are averaged to produce a single annual estimated NPP value for each biome. Comparison of estimated NPP values with figures reported in the literature produces a correlation coefficient of 0.85.
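
    The NDVI input to the model above is a simple band ratio; a minimal sketch (the reflectance values are illustrative, not AVHRR data):

    ```python
    def ndvi(nir, red):
        """Normalized Difference Vegetation Index from near-infrared and
        red reflectances (e.g. AVHRR channels 2 and 1)."""
        return (nir - red) / (nir + red)

    # Dense green vegetation reflects strongly in NIR and absorbs red,
    # driving NDVI toward 1; sparse cover yields values near 0
    print(round(ndvi(0.50, 0.08), 2))   # -> 0.72
    print(round(ndvi(0.30, 0.25), 2))   # -> 0.09
    ```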

  2. Rapid Analysis of Nonstructural Carbohydrate Components in Grass Forage Using Microplate Enzymatic Assays

    USDA-ARS?s Scientific Manuscript database

    Measurements of nonstructural carbohydrates (NSC) in plant tissues are important to estimate plant organ resources available for plant growth and stress tolerance or for feed value to grazing animals. A popular commercially available assay kit used to detect glucose with a light sensitive dye reacti...

  3. Two approaches for estimating discharge on ungauged basins in Oregon, USA

    EPA Science Inventory

    Detailed information on the hydrologic behavior of streams is available for only a small proportion of all streams. Even in cases where discharge has been monitored, these measurements may not be available for a sufficiently long period to characterize the full behavior of a str...

  4. A new scheme for processing noisy startracker measurements in spacecraft attitude determination systems

    NASA Technical Reports Server (NTRS)

    Polites, M. E.

    1991-01-01

    This paper presents a new approach to processing noisy startracker measurements in spacecraft attitude determination systems. It takes N measurements in each T-second interval and combines them to produce tracker outputs that are estimates of star position at the end of each interval, when the tracker outputs become available. This is an improvement over the standard method, measurement averaging, which generates outputs that are estimates of the average position of the star over each interval. The new scheme is superior to measurement averaging when the spacecraft has some rotation rate, as in target tracking or earth pointing. Also, it is not limited to startrackers, but has potential application wherever measurement averaging of sensor outputs is used.
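
    One way to realize an end-of-interval estimate (a sketch of the idea, not the paper's specific combination rule) is a least-squares straight-line fit through the N samples, evaluated at the interval's end; plain averaging instead yields the mid-interval position, which lags a rotating spacecraft:

    ```python
    import numpy as np

    def end_of_interval_estimate(t, y):
        """Fit a line (position plus constant drift rate) to N noisy
        star-position samples in a T-second interval and evaluate it at
        the end of the interval."""
        a, b = np.polyfit(t, y, 1)
        return a * t[-1] + b

    t = np.linspace(0.0, 1.0, 11)   # N = 11 samples over T = 1 s
    y = 2.0 * t + 0.5               # star drifting at 2 units/s (noise-free)
    print(round(end_of_interval_estimate(t, y), 6))  # -> 2.5 (end position)
    print(round(np.mean(y), 6))                      # -> 1.5 (average: lags)
    ```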

  5. Estimating Ocean Currents from Automatic Identification System Based Ship Drift Measurements

    NASA Astrophysics Data System (ADS)

    Jakub, Thomas D.

    Ship drift is a technique that has been used over the last century and a half to estimate ocean currents. Several of the shortcomings of the ship drift technique include obtaining the data from multiple ships, the time delay in getting those ship positions to a data center for processing and the limited resolution based on the amount of time between position measurements. These shortcomings can be overcome through the use of the Automatic Identification System (AIS). AIS enables more precise ocean current estimates, the option of finer resolution and more timely estimates. In this work, a demonstration of the use of AIS to compute ocean currents is performed. A corresponding error and sensitivity analysis is performed to help identify under which conditions errors will be smaller. A case study in San Francisco Bay with constant AIS message updates was compared against high frequency radar and demonstrated ocean current magnitude residuals of 19 cm/s for ship tracks in a high signal to noise environment. These ship tracks were only minutes long compared to the normally 12 to 24 hour ship tracks. The Gulf of Mexico case study demonstrated the ability to estimate ocean currents over longer baselines and identified the dependency of the estimates on the accuracy of time measurements. Ultimately, AIS measurements when combined with ship drift can provide another method of estimating ocean currents, particularly when other measurements techniques are not available.
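
    The core ship-drift computation can be sketched as follows (a simplified flat-earth illustration with hypothetical numbers; real AIS processing works on geodetic coordinates and must handle reported speed/heading noise and leeway):

    ```python
    import numpy as np

    def drift_current(p0, p1, heading_deg, speed, dt):
        """Estimate the ocean-current vector from two position fixes.
        Dead-reckon where the ship would be after dt seconds from its
        heading and speed-through-water; the residual displacement
        divided by dt is attributed to the current (leeway ignored)."""
        h = np.deg2rad(heading_deg)
        dead_reckoned = p0 + dt * speed * np.array([np.sin(h), np.cos(h)])
        return (p1 - dead_reckoned) / dt   # current in m/s (east, north)

    # Ship heading due north at 5 m/s for 600 s, displaced 120 m east by current
    p0 = np.array([0.0, 0.0])
    p1 = np.array([120.0, 3000.0])
    print(drift_current(p0, p1, heading_deg=0.0, speed=5.0, dt=600.0))
    ```

    Shorter intervals between fixes (as AIS provides) sharpen the resolution but amplify the effect of position and timing errors, which is the sensitivity trade-off the dissertation analyzes.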

  6. Mapping Precipitation in the Lower Mekong River Basin and the U.S. Affiliated Pacific Islands

    NASA Astrophysics Data System (ADS)

    Lakshmi, V.; Sutton, J. R. P.; Bolten, J. D.

    2017-12-01

    Mapping and quantifying precipitation across varying temporal and spatial scales is of utmost importance in understanding, monitoring, and predicting flooding and drought. While there exists many in-situ precipitation gages that can accurately estimate precipitation in a given location, there are still many areas that lack in-situ gages. Many of these locations do not have precipitation gages because they are rural and/or topographically complex. The purpose of our research was to compare different remotely sensed satellite precipitation estimates with in-situ estimates across topographically complex and rural terrain within the United States Affiliated Pacific Islands (USAPI) and the Lower Mekong River Basin (LMRB). We utilize the publicly available Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) Climate Data Record (CDR) from NOAA and two remotely sensed precipitation products from NASA; the Tropical Rainfall Measuring Mission (TRMM) and the Global Precipitation Measurement (GPM). These precipitation estimates were compared with each other and to the available in-situ precipitation estimates from station gages. We also utilize NASA Landsat data to determine the land cover types of these study areas. Using the precipitation estimates, topography, and the land cover of the study areas, we were able to show areas experiencing differing amounts of rainfall and their agreement with in-situ estimates. Additionally, we study the seasonal and spatial trends in precipitation. These analyses can be used to help understand areas that are experience frequent flood or drought.

  7. Estimation of time-varying growth, uptake and excretion rates from dynamic metabolomics data.

    PubMed

    Cinquemani, Eugenio; Laroute, Valérie; Cocaign-Bousquet, Muriel; de Jong, Hidde; Ropers, Delphine

    2017-07-15

    Technological advances in metabolomics have made it possible to monitor the concentration of extracellular metabolites over time. From these data, it is possible to compute the rates of uptake and excretion of the metabolites by a growing cell population, providing precious information on the functioning of intracellular metabolism. The computation of the rate of these exchange reactions, however, is difficult to achieve in practice for a number of reasons, notably noisy measurements, correlations between the concentration profiles of the different extracellular metabolites, and discontinuities in the profiles due to sudden changes in metabolic regime. We present a method for precisely estimating time-varying uptake and excretion rates from time-series measurements of extracellular metabolite concentrations, specifically addressing all of the above issues. The estimation problem is formulated in a regularized Bayesian framework and solved by a combination of extended Kalman filtering and smoothing. The method is shown to improve upon methods based on spline smoothing of the data. Moreover, when applied to two actual datasets, the method recovers known features of overflow metabolism in Escherichia coli and Lactococcus lactis, and provides evidence for acetate uptake by L. lactis after glucose exhaustion. The results raise interesting perspectives for further work on rate estimation from measurements of intracellular metabolites. The Matlab code for the estimation method is available for download at https://team.inria.fr/ibis/rate-estimation-software/ , together with the datasets. eugenio.cinquemani@inria.fr. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
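
    The key idea of treating the rate as a latent state rather than a finite difference can be sketched with a plain two-state Kalman filter (a far cry from the paper's regularized Bayesian EKF plus smoother, and all model parameters below are illustrative assumptions):

    ```python
    import numpy as np

    def estimate_rate(y, dt, q=1e-3, r=0.01):
        """Track a metabolite concentration and its time-varying exchange
        rate with a Kalman filter: state = [concentration, rate], the rate
        evolving as a random walk with process-noise variance q."""
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = np.array([[0.0, 0.0], [0.0, q]])
        H = np.array([[1.0, 0.0]])
        x, P = np.array([y[0], 0.0]), np.eye(2)
        rates = []
        for z in y:
            x, P = F @ x, F @ P @ F.T + Q          # predict
            S = H @ P @ H.T + r                    # innovation variance
            K = (P @ H.T) / S                      # Kalman gain (2x1)
            x = x + (K * (z - H @ x)).ravel()      # update state
            P = (np.eye(2) - K @ H) @ P
            rates.append(x[1])
        return np.array(rates)

    # Synthetic concentration decaying at a constant uptake rate of -0.3
    t = np.arange(0, 10, 0.1)
    y = 5.0 - 0.3 * t + np.random.default_rng(2).normal(0, 0.05, t.size)
    print(round(estimate_rate(y, 0.1)[-1], 2))   # near the true rate -0.3
    ```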

  8. Set membership experimental design for biological systems.

    PubMed

    Marvel, Skylar W; Williams, Cranos M

    2012-03-21

    Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. The practicability of our approach is illustrated with a case study. 
This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models.
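
    A flavor of the bounded-error machinery can be given with naive interval propagation through a toy dynamic model (the model, parameters, and Euler discretization are assumptions for illustration; the paper's interval-analysis methods use contractors that tighten such bounds against data rather than this crude forward pass):

    ```python
    def propagate(x0, k_box, dt, steps):
        """Propagate a state interval through Euler steps of dx/dt = -k*x
        with k uncertain in k_box = (k_lo, k_hi). For x >= 0 and
        dt*k < 1 the interval extremes come from the endpoints of k.
        Naive interval arithmetic like this overestimates the reachable
        set; contractor-based methods tighten it."""
        x = x0
        for _ in range(steps):
            lo = x[0] * (1.0 - dt * k_box[1])
            hi = x[1] * (1.0 - dt * k_box[0])
            x = (lo, hi)
        return x

    # Initial concentration known to within [0.9, 1.1], rate within [0.4, 0.6]
    print(propagate((0.9, 1.1), (0.4, 0.6), dt=0.1, steps=10))
    ```

    Candidate measurement times can then be ranked by how much an anticipated measurement at that time would shrink such intervals, which is the design criterion the study develops.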

  9. Set membership experimental design for biological systems

    PubMed Central

    2012-01-01

    Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. 
This study shows that our approach is able to 1) identify candidate measurement time points that maximize information corresponding to biologically relevant metrics and 2) determine the number at which additional measurements begin to provide insignificant information. This framework can be used to balance the availability of resources with the addition of one or more measurement time points to improve the predictability of resulting models. PMID:22436240

  10. A First Estimation of County-Based Green Water Availability and Its Implications for Agriculture and Bioenergy Production in the United States

    DOE PAGES

    Xu, Hui; Wu, May

    2018-02-02

    Green water is vital for the terrestrial ecosystem, but water resource assessment often focuses on blue water. In this study, we estimated green water availability for major crops (i.e., corn, soybean, and wheat) and all other users (e.g., forest, grassland, and ecosystem services) at the county level in the United States. We estimated green water resources from effective rain (ER) using three different methods: Smith, U.S. Department of Agriculture-Soil Conservation Service (USDA-SCS), and the NHDPlus V2 dataset. The analysis illustrates that, if green water meets all crop water demands, the fraction of green water resources available to all other users varies significantly across regions, from the Northern Plains (0.71) to the Southeast (0.98). At the county level, this fraction varies from 0.23 to 1.0. Green water resources estimated using the three different ER methods present diverse spatiotemporal distribution patterns across regions, which could affect green water availability estimates. The water availability index for green water (WAI_R) was measured taking into account crop water demand and green water resources aggregated at the county level. Beyond these parameters, WAI_R also depends on the precipitation pattern, crop type and spatially differentiated regions. In addition, seasonal analysis indicated that WAI_R is sensitive to the temporal boundary of the analysis.

  11. A First Estimation of County-Based Green Water Availability and Its Implications for Agriculture and Bioenergy Production in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Hui; Wu, May

    Green water is vital for the terrestrial ecosystem, but water resource assessment often focuses on blue water. In this study, we estimated green water availability for major crops (i.e., corn, soybean, and wheat) and all other users (e.g., forest, grassland, and ecosystem services) at the county level in the United States. We estimated green water resources from effective rain (ER) using three different methods: Smith, U.S. Department of Agriculture-Soil Conservation Service (USDA-SCS), and the NHDPlus V2 dataset. The analysis illustrates that, if green water meets all crop water demands, the fraction of green water resources available to all other users varies significantly across regions, from the Northern Plains (0.71) to the Southeast (0.98). At the county level, this fraction varies from 0.23 to 1.0. Green water resources estimated using the three different ER methods present diverse spatiotemporal distribution patterns across regions, which could affect green water availability estimates. The water availability index for green water (WAI_R) was measured taking into account crop water demand and green water resources aggregated at the county level. Beyond these parameters, WAI_R also depends on the precipitation pattern, crop type and spatially differentiated regions. In addition, seasonal analysis indicated that WAI_R is sensitive to the temporal boundary of the analysis.

  12. Local Estimators for Spacecraft Formation Flying

    NASA Technical Reports Server (NTRS)

    Fathpour, Nanaz; Hadaegh, Fred Y.; Mesbahi, Mehran; Nabi, Marzieh

    2011-01-01

    A formation estimation architecture for formation flying builds upon the local information exchange among multiple local estimators. Spacecraft formation flying involves the coordination of states among multiple spacecraft through relative sensing, inter-spacecraft communication, and control. Most existing formation flying estimation algorithms can only be supported via highly centralized, all-to-all, static relative sensing. New algorithms are needed that are scalable, modular, and robust to variations in the topology and link characteristics of the formation exchange network. These distributed algorithms should rely on a local information-exchange network, relaxing the assumptions on existing algorithms. In this research, it was shown that only local observability is required to design a formation estimator and control law. The approach relies on breaking up the overall information-exchange network into a sequence of local subnetworks, and invoking an agreement-type filter to reach consensus among local estimators within each local network. State estimates were obtained by a set of local measurements that were passed through a set of communicating Kalman filters to reach an overall state estimation for the formation. An optimization approach was also presented by means of which diffused estimates over the network can be incorporated in the local estimates obtained by each estimator via local measurements. This approach compares favorably with that obtained by a centralized Kalman filter, which requires complete knowledge of the raw measurements available to each estimator.
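
    The agreement-type filtering step can be illustrated with a plain consensus-averaging iteration over a communication graph; the graph, initial estimates, and step size below are invented for illustration and are not the paper's actual filter:

```python
import numpy as np

# Chain of 4 spacecraft; each node only exchanges with its graph neighbors.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = np.array([1.0, 3.0, 5.0, 7.0])   # initial local estimates of a shared state
eps = 0.3                            # consensus step size (< 1/max degree)

for _ in range(200):
    x = x + eps * np.array([sum(x[j] - x[i] for j in neighbors[i])
                            for i in range(len(x))])

# Local estimates converge to the average of the initial estimates (4.0).
print(x)
```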

  13. Making the Most of What We Have: A Practical Application of Multidimensional Item Response Theory in Test Scoring

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Patz, Richard J.

    2005-01-01

    This article proposes a practical method that capitalizes on the availability of information from multiple tests measuring correlated abilities given in a single test administration. By simultaneously estimating different abilities with the use of a hierarchical Bayesian framework, more precise estimates for each ability dimension are obtained.…

  14. ULTRAVIOLET RADIATION IN NORTH AMERICAN LAKES: ATTENUATION ESTIMATES FROM DOC MEASUREMENTS AND IMPLICATIONS FOR PLANKTON COMMUNITIES

    EPA Science Inventory

    Climate warming in North America is likely to be accompanied by changes in other environmental stresses such as UV-B radiation. We apply an empirical model to available DOC (dissolved organic C) data to estimate the depths to which 1% of surface UV-B and UV-A radiation penetrate ...

  15. Adaptive Control in the Presence of Simultaneous Sensor Bias and Actuator Failures

    NASA Technical Reports Server (NTRS)

    Joshi, Suresh M.

    2012-01-01

    The problem of simultaneously accommodating unknown sensor biases and unknown actuator failures in uncertain systems is considered in a direct model reference adaptive control (MRAC) setting for state tracking using state feedback. Sensor biases and actuator faults may be present at the outset or may occur at unknown instants of time during operation. A modified MRAC law is proposed, which combines sensor bias estimation with control gain adaptation for accommodation of sensor biases and actuator failures. This control law is shown to provide signal boundedness in the resulting system. For the case when an external asymptotically stable sensor bias estimator is available, an MRAC law is developed to accomplish asymptotic state tracking and signal boundedness. For a special case wherein biases are only present in the rate measurements and bias-free position measurements are available, an MRAC law is developed using a model-independent bias estimator, and is shown to provide asymptotic state tracking with signal boundedness.

  16. Effect of spatial resolution on remote sensing estimation of total evaporation in the uMngeni catchment, South Africa

    NASA Astrophysics Data System (ADS)

    Shoko, Cletah; Clark, David; Mengistu, Michael; Dube, Timothy; Bulcock, Hartley

    2015-01-01

    This study evaluated the effect of two readily available multispectral sensors: the newly launched 30 m spatial resolution Landsat 8 and the long-serving 1000 m moderate resolution imaging spectroradiometer (MODIS) datasets in the spatial representation of total evaporation in the heterogeneous uMngeni catchment, South Africa, using the surface energy balance system model. The results showed that sensor spatial resolution plays a critical role in the accurate estimation of energy fluxes and total evaporation across a heterogeneous catchment. Landsat 8 estimates showed better spatial representation of the biophysical parameters and total evaporation for different land cover types, due to the relatively higher spatial resolution compared to the coarse spatial resolution MODIS sensor. Moreover, MODIS failed to capture the spatial variations of total evaporation estimates across the catchment. Analysis of variance (ANOVA) results showed that MODIS-based total evaporation estimates did not show any significant differences across different land cover types (one-way ANOVA; F1.924=1.412, p=0.186). However, Landsat 8 images yielded significantly different estimates between different land cover types (one-way ANOVA; F1.993=5.185, p<0.001). The validation results showed that Landsat 8 estimates were more comparable to eddy covariance (EC) measurements than the MODIS-based total evaporation estimates. EC measurement on May 23, 2013, was 3.8 mm/day, whereas the Landsat 8 estimate on the same day was 3.6 mm/day, with MODIS showing significantly lower estimates of 2.3 mm/day. The findings of this study underscore the importance of spatial resolution in estimating spatial variations of total evaporation at the catchment scale, thus, they provide critical information on the relevance of the readily available remote sensing products in water resources management in data-scarce environments.

  17. Estimating Surgical Procedure Times Using Anesthesia Billing Data and Operating Room Records.

    PubMed

    Burgette, Lane F; Mulcahy, Andrew W; Mehrotra, Ateev; Ruder, Teague; Wynn, Barbara O

    2017-02-01

    The median time required to perform a surgical procedure is important in determining payment under Medicare's physician fee schedule. Prior studies have demonstrated that the current methodology of using physician surveys to determine surgical times results in overstated times. To measure surgical times more accurately, we developed and validated a methodology using available anesthesia billing data and operating room (OR) records. We estimated surgical times using Medicare 2011 anesthesia claims and New York Statewide Planning and Research Cooperative System 2011 OR times. Estimated times were validated using data from the National Surgical Quality Improvement Program. We compared our time estimates to those used by Medicare in the fee schedule. We estimated surgical times via piecewise linear median regression models. Using 3.0 million observations of anesthesia and OR times, we estimated surgical time for 921 procedures. Correlation between these time estimates and directly measured surgical time from the validation database was 0.98. Our estimates of surgical time were shorter than the Medicare fee schedule estimates for 78 percent of procedures. Anesthesia and OR times can be used to measure surgical time and thereby improve the payment for surgical procedures in the Medicare fee schedule. © Health Research and Educational Trust.
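
    A one-segment median (L1) regression can be sketched as follows on synthetic data; this is not the study's piecewise model, and all numbers are invented. It exploits the fact that, for a fixed slope, the L1-optimal intercept is the median residual:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the billing data: anesthesia time (x) vs. surgical
# time (y), in minutes; names and coefficients are illustrative only.
x = rng.uniform(30, 300, 500)
y = 0.8 * x - 10 + rng.normal(0, 15, 500)

def median_fit(x, y, slopes=np.linspace(0.0, 2.0, 2001)):
    """Fit y = a + b*x minimizing total absolute deviation via a slope grid."""
    best = None
    for b in slopes:
        a = np.median(y - b * x)            # L1-optimal intercept for this slope
        loss = np.abs(y - (a + b * x)).sum()
        if best is None or loss < best[0]:
            best = (loss, a, b)
    return best[1], best[2]

a, b = median_fit(x, y)
print(round(a, 1), round(b, 2))  # intercept near -10, slope near 0.8
```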

  18. Sun-Direction Estimation Using a Partially Underdetermined Set of Coarse Sun Sensors

    NASA Astrophysics Data System (ADS)

    O'Keefe, Stephen A.; Schaub, Hanspeter

    2015-09-01

    A comparison of different methods to estimate the sun-direction vector using a partially underdetermined set of cosine-type coarse sun sensors (CSS), while simultaneously controlling the attitude towards a power-positive orientation, is presented. CSS are commonly used in performing power-positive sun-pointing and are attractive due to their relative inexpensiveness, small size, and reduced power consumption. For this study only CSS and rate gyro measurements are available, and the sensor configuration does not provide the global triple coverage required for a unique sun-direction calculation. The methods investigated include a vector average method, a combination of least squares and minimum norm criteria, and an extended Kalman filter approach. All cases are formulated such that precise ground calibration of the CSS is not required. Despite significant biases in the state dynamics and measurement models, Monte Carlo simulations show that an extended Kalman filter approach can provide degree-level accuracy of the sun-direction vector, even with the underdetermined sensor coverage, both with and without a control algorithm running simultaneously. If no rate gyro measurements are available, and rates are partially estimated from CSS, the EKF performance degrades as expected, but is still able to achieve better than 10° accuracy using only CSS measurements.
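
    The least-squares/minimum-norm step can be sketched as below, assuming the common cosine measurement model m_i = n_i · s for lit sensors; the sensor geometry and noise level are invented for illustration:

```python
import numpy as np

# Six face-mounted coarse sun sensors with unit normals along the body axes.
normals = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.],
                    [-1., 0., 0.], [0., -1., 0.], [0., 0., -1.]])
s_true = np.array([0.6, 0.64, 0.48])           # true unit sun vector
rng = np.random.default_rng(1)

raw = normals @ s_true                          # cosine law: n_i . s
meas = np.where(raw > 0, raw + rng.normal(0, 0.01, 6), 0.0)

active = meas > 0                               # only lit sensors constrain s
# lstsq returns the minimum-norm solution if the active set is underdetermined.
s_hat, *_ = np.linalg.lstsq(normals[active], meas[active], rcond=None)
s_hat /= np.linalg.norm(s_hat)

err_deg = np.degrees(np.arccos(np.clip(s_hat @ s_true, -1.0, 1.0)))
print(round(float(err_deg), 2))                 # pointing error in degrees
```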

  19. Estimation of composite hydraulic resistance in ice-covered alluvial streams

    NASA Astrophysics Data System (ADS)

    Ghareh Aghaji Zare, Soheil; Moore, Stephanie A.; Rennie, Colin D.; Seidou, Ousmane; Ahmari, Habib; Malenchak, Jarrod

    2016-02-01

    Formation, propagation, and recession of ice cover introduce a dynamic boundary layer to the top of rivers during northern winters. Ice cover affects water velocity magnitude and distribution, water level, and consequently the conveyance capacity of the river. In this research, total resistance, i.e., "composite resistance," is studied for a 4-month period including stable ice cover, breakup, and open water stages in the Lower Nelson River (LNR), northern Manitoba, Canada. Flow and ice characteristics such as water velocity and depth and ice thickness and condition were measured continuously using acoustic techniques. An Acoustic Doppler Current Profiler (ADCP) and a Shallow Water Ice Profiling Sonar (SWIPS) were installed simultaneously on a bottom mount and deployed for this purpose. Total resistance to the flow and boundary roughness are estimated using measured bulk hydraulic parameters. A novel method is developed to calculate composite resistance directly from measured under-ice velocity profiles. The results of this method are compared to the measured total resistance and to the composite resistance calculated using formulae available in the literature. The new technique is demonstrated to compare favorably to measured total resistance and to outperform previously available methods.

  20. A Temperature-Based Model for Estimating Monthly Average Daily Global Solar Radiation in China

    PubMed Central

    Li, Huashan; Cao, Fei; Wang, Xianlong; Ma, Weibin

    2014-01-01

    Since air temperature records are readily available around the world, the models based on air temperature for estimating solar radiation have been widely accepted. In this paper, a new model based on Hargreaves and Samani (HS) method for estimating monthly average daily global solar radiation is proposed. With statistical error tests, the performance of the new model is validated by comparing with the HS model and its two modifications (Samani model and Chen model) against the measured data at 65 meteorological stations in China. Results show that the new model is more accurate and robust than the HS, Samani, and Chen models in all climatic regions, especially in the humid regions. Hence, the new model can be recommended for estimating solar radiation in areas where only air temperature data are available in China. PMID:24605046
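
    For reference, the underlying Hargreaves-Samani relation estimates global solar radiation from extraterrestrial radiation and the diurnal temperature range, Rs = k_Rs · sqrt(Tmax − Tmin) · Ra, with FAO-56 suggesting k_Rs ≈ 0.16 for interior sites and ≈ 0.19 for coastal sites; the input values below are illustrative, not measured data:

```python
import math

def hargreaves_samani(ra_mj, t_max, t_min, k_rs=0.16):
    """Hargreaves-Samani estimate of global solar radiation (MJ m-2 day-1).

    ra_mj : extraterrestrial radiation, MJ m-2 day-1
    k_rs  : empirical coefficient (~0.16 interior, ~0.19 coastal, per FAO-56)
    """
    return k_rs * math.sqrt(t_max - t_min) * ra_mj

# Illustrative mid-latitude summer day.
print(round(hargreaves_samani(ra_mj=40.0, t_max=30.0, t_min=18.0), 2))  # 22.17
```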

  1. Assessment of Receiver Signal Strength Sensing for Location Estimation Based on Fisher Information

    PubMed Central

    Nielsen, John; Nielsen, Christopher

    2016-01-01

    Currently there is almost ubiquitous availability of wireless signaling for data communications within commercial building complexes resulting in receiver signal strength (RSS) observables that are typically sufficient for generating viable location estimates of mobile wireless devices. However, while RSS observables are generally plentiful, achieving an accurate estimation of location is difficult due to several factors affecting the electromagnetic coupling between the mobile antenna and the building access points that are not modeled and hence contribute to the overall estimation uncertainty. Such uncertainty is typically mitigated with a moderate redundancy of RSS sensor observations in combination with other constraints imposed on the mobile trajectory. In this paper, the Fisher Information (FI) of a set of RSS sensor observations in the context of variables related to the mobile location is developed. This provides a practical method of determining the potential location accuracy for the given set of wireless signals available. Furthermore, the information value of individual RSS measurements can be quantified and the RSS observables weighted accordingly in estimation combining algorithms. The practical utility of using FI in this context was demonstrated experimentally with an extensive set of RSS measurements recorded in an office complex. The resulting deviation of the mobile location estimation based on application of weighted likelihood processing to the experimental RSS data was shown to agree closely with the Cramer Rao bound determined from the FI analysis. PMID:27669262
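
    A minimal sketch of the Fisher-information calculation, assuming the common log-distance path-loss model for the mean RSS; the access-point layout and parameters are invented, not the paper's measured data:

```python
import numpy as np

# Model: P_i = P0 - 10*eta*log10(d_i) + N(0, sigma^2)  (RSS in dB).
aps = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])  # APs (m)
p = np.array([7.0, 11.0])        # mobile position (m)
eta, sigma = 3.0, 4.0            # path-loss exponent, shadowing std (dB)

J = np.zeros((2, 2))             # Fisher information matrix for position
for a in aps:
    d = np.linalg.norm(p - a)
    g = -(10.0 * eta / np.log(10)) * (p - a) / d**2   # gradient of mean RSS
    J += np.outer(g, g) / sigma**2

# Cramer-Rao bound: lower bound on the position RMSE for this geometry.
crlb_rmse = np.sqrt(np.trace(np.linalg.inv(J)))
print(round(float(crlb_rmse), 2))                     # metres
```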

  2. Improved Accuracy of Automated Estimation of Cardiac Output Using Circulation Time in Patients with Heart Failure.

    PubMed

    Dajani, Hilmi R; Hosokawa, Kazuya; Ando, Shin-Ichi

    2016-11-01

    Lung-to-finger circulation time of oxygenated blood during nocturnal periodic breathing in heart failure patients measured using polysomnography correlates negatively with cardiac function but possesses limited accuracy for cardiac output (CO) estimation. CO was recalculated from lung-to-finger circulation time using a multivariable linear model with information on age and average overnight heart rate in 25 patients who underwent evaluation of heart failure. The multivariable model decreased the percentage error to 22.3% relative to invasive CO measured during cardiac catheterization. This improved automated noninvasive CO estimation using multiple variables meets a recently proposed performance criterion for clinical acceptability of noninvasive CO estimation, and compares very favorably with other available methods. Copyright © 2016 Elsevier Inc. All rights reserved.
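
    The multivariable linear recalculation can be sketched with ordinary least squares on synthetic data; the coefficients and predictor ranges below are invented and do not come from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for the study's predictors: lung-to-finger circulation
# time (s), age (yr), and average overnight heart rate (bpm), for 25 patients.
n = 25
lfct = rng.uniform(15, 45, n)
age = rng.uniform(40, 85, n)
hr = rng.uniform(50, 90, n)
co = 7.0 - 0.08 * lfct - 0.02 * age + 0.02 * hr + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), lfct, age, hr])   # design matrix w/ intercept
beta, *_ = np.linalg.lstsq(X, co, rcond=None)      # multivariable linear fit
co_hat = X @ beta

# Percentage error, a common benchmark for noninvasive CO methods.
pct_err = 100 * np.sqrt(np.mean((co_hat - co) ** 2)) / co.mean()
print(round(float(pct_err), 1))
```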

  3. Calculating Irradiance For Photosynthesis In The Ocean

    NASA Technical Reports Server (NTRS)

    Collins, Donald J.; Davis, Curtiss O.; Booth, C. Rockwell; Kiefer, Dale A.; Stallings, Casson

    1990-01-01

    Mathematical model predicts available and usable irradiances. Yields estimates of irradiance available for photosynthesis (Epar) and irradiance usable for photosynthesis (Epur) as functions of depth in ocean. Describes Epur and Epar in terms of spectral parameters measured remotely (from satellites or airplanes). These irradiances useful in studies of photosynthetic productivity of phytoplankton in euphotic layer.
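
    The two quantities can be sketched numerically: Epar integrates spectral irradiance over 400-700 nm, while Epur additionally weights by a normalized phytoplankton absorption spectrum. The spectra below are made-up placeholders, not model output:

```python
import numpy as np

wl = np.linspace(400.0, 700.0, 301)                 # wavelength grid (nm)
E = 1.0 - 0.001 * (wl - 400.0)                      # toy irradiance per nm
a_star = np.exp(-((wl - 440.0) / 60.0) ** 2)        # toy absorption, peak ~440 nm

dw = np.diff(wl)
epar = float(((E[1:] + E[:-1]) / 2 * dw).sum())     # trapezoidal integral of E

w = E * a_star / a_star.max()                       # absorption-weighted E
epur = float(((w[1:] + w[:-1]) / 2 * dw).sum())

print(round(epar, 1), round(epur, 1))               # Epur <= Epar by construction
```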

  4. Delicatessen salads in "refrigerated ready-to-eat foods: microbial concerns and control measures"

    USDA-ARS?s Scientific Manuscript database

    Deli-type salads are in the category of refrigerated ready-to-eat food products. They are available where foods are sold or served and are becoming increasingly popular food items due to their variety, availability, and convenience. It is estimated that the annual consumption of deli salads is 5.6...

  5. Evaluation of thyroid radioactivity measurement data from Hanford workers, 1944--1946

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ikenberry, T.A.

    1991-05-01

    This report describes the preliminary results of an evaluation conducted in support of the Hanford Environmental Dose Reconstruction (HEDR) Project. The primary objective of the HEDR Project is to estimate the radiation doses that populations could have received from nuclear operations at the Hanford Site since 1944. A secondary objective is to make the information that HEDR staff members used to estimate radiation doses available to the public. The objectives of this report are to make available thyroid measurement data from Hanford workers for the years 1944 through 1946, and to investigate the suitability of those data for use in the HEDR dose estimation process. An important part of this investigation was to provide a description of the uncertainty associated with the data. Lack of documentation on thyroid measurements from this period required that assumptions be made to perform data evaluations. These assumptions introduce uncertainty into the evaluations that could be significant. It is important to recognize the nature of these assumptions, the inherent uncertainty, and the propagation of this uncertainty through data evaluations to any conclusions that can be made by using the data. 15 refs., 1 fig., 5 tabs.

  6. Rain attenuation measurements: Variability and data quality assessment

    NASA Technical Reports Server (NTRS)

    Crane, Robert K.

    1989-01-01

    Year-to-year variations in the cumulative distributions of rain rate or rain attenuation are evident in any of the published measurements for a single propagation path that span a period of several years of observation. These variations must be described by models for the prediction of rain attenuation statistics. Now that a large measurement database has been assembled by the International Radio Consultative Committee, the information needed to assess variability is available. On the basis of 252 sample cumulative distribution functions for the occurrence of attenuation by rain, the expected year-to-year variation in attenuation at a fixed probability level in the 0.1 to 0.001 percent of a year range is estimated to be 27 percent. The expected deviation from an attenuation model prediction for a single year of observations is estimated to exceed 33 percent when any of the available global rain climate models are employed to estimate the rain rate statistics. The probability distribution for the variation in attenuation or rain rate at a fixed fraction of a year is lognormal. The lognormal behavior of the variate was used to compile the statistics for variability.

  7. The Empirical Analysis of Cigarette Tax Avoidance and Illicit Trade in Vietnam, 1998-2010

    PubMed Central

    Nguyen, Minh Thac; Denniston, Ryan; Nguyen, Hien Thi Thu; Hoang, Tuan Anh; Ross, Hana; So, Anthony D.

    2014-01-01

    Illicit trade carries the potential to magnify existing tobacco-related health care costs through increased availability of untaxed and inexpensive cigarettes. What is known with respect to the magnitude of illicit trade for Vietnam is produced primarily by the industry, and methodologies are typically opaque. Independent assessment of the illicit cigarette trade in Vietnam is vital to tobacco control policy. This paper measures the magnitude of illicit cigarette trade for Vietnam between 1998 and 2010 using two methods: discrepancies between legitimate domestic cigarette sales and domestic tobacco consumption estimated from surveys, and trade discrepancies as recorded by Vietnam and trade partners. The results indicate that Vietnam likely experienced net smuggling during the period studied. With the inclusion of adjustments for survey respondent under-reporting, inward illicit trade likely occurred in three of the four years for which surveys were available. Discrepancies in trade records indicate that the value of smuggled cigarettes into Vietnam ranged from $100 million to $300 million between 2000 and 2010 and that these cigarettes primarily originated in Singapore, Hong Kong, Macao, Malaysia, and Australia. Notable differences in trends over time exist between the two methods, but by comparison, the industry estimates consistently place the magnitude of illicit trade at the upper bounds of what this study shows. The unavailability of annual, survey-based estimates of consumption may obscure the true annual trend over time. Second, as surveys changed over time, estimates relying on them may be inconsistent with one another. Finally, these two methods measure different components of illicit trade, specifically consumption of illicit cigarettes regardless of origin and smuggling of cigarettes into a particular market. However, absent a gold standard, comparisons of different approaches to illicit trade measurement serve efforts to refine and improve measurement approaches and estimates. PMID:24489886

  8. The empirical analysis of cigarette tax avoidance and illicit trade in Vietnam, 1998-2010.

    PubMed

    Nguyen, Minh Thac; Denniston, Ryan; Nguyen, Hien Thi Thu; Hoang, Tuan Anh; Ross, Hana; So, Anthony D

    2014-01-01

    Illicit trade carries the potential to magnify existing tobacco-related health care costs through increased availability of untaxed and inexpensive cigarettes. What is known with respect to the magnitude of illicit trade for Vietnam is produced primarily by the industry, and methodologies are typically opaque. Independent assessment of the illicit cigarette trade in Vietnam is vital to tobacco control policy. This paper measures the magnitude of illicit cigarette trade for Vietnam between 1998 and 2010 using two methods: discrepancies between legitimate domestic cigarette sales and domestic tobacco consumption estimated from surveys, and trade discrepancies as recorded by Vietnam and trade partners. The results indicate that Vietnam likely experienced net smuggling during the period studied. With the inclusion of adjustments for survey respondent under-reporting, inward illicit trade likely occurred in three of the four years for which surveys were available. Discrepancies in trade records indicate that the value of smuggled cigarettes into Vietnam ranged from $100 million to $300 million between 2000 and 2010 and that these cigarettes primarily originated in Singapore, Hong Kong, Macao, Malaysia, and Australia. Notable differences in trends over time exist between the two methods, but by comparison, the industry estimates consistently place the magnitude of illicit trade at the upper bounds of what this study shows. The unavailability of annual, survey-based estimates of consumption may obscure the true annual trend over time. Second, as surveys changed over time, estimates relying on them may be inconsistent with one another. Finally, these two methods measure different components of illicit trade, specifically consumption of illicit cigarettes regardless of origin and smuggling of cigarettes into a particular market. However, absent a gold standard, comparisons of different approaches to illicit trade measurement serve efforts to refine and improve measurement approaches and estimates.
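
    The two gap measures described above reduce to simple differences; all figures below are invented placeholders, not the study's data:

```python
# Consumption-gap method (units: billion sticks, hypothetical).
survey_consumption = 68.0          # estimated from smoking surveys
underreport_factor = 1.15          # adjustment for respondent under-reporting
taxed_sales = 72.0                 # legitimate domestic sales

consumption_gap = survey_consumption * underreport_factor - taxed_sales
# > 0 suggests net inward smuggling; < 0 suggests outward flows/tax avoidance.

# Trade-discrepancy method (units: $ billion, hypothetical).
partner_reported_exports = 1.9     # as recorded by trade partners
own_reported_imports = 1.6         # as recorded by Vietnam
trade_gap = partner_reported_exports - own_reported_imports

print(round(consumption_gap, 1), round(trade_gap, 1))
```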

  9. A Low Cost Approach to Simultaneous Orbit, Attitude, and Rate Estimation Using an Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Deutschmann, Julie; Harman, Rick; Bar-Itzhack, Itzhack

    1998-01-01

    An innovative approach to autonomous attitude and trajectory estimation is available using only magnetic field data and rate data. The estimation is performed simultaneously using an Extended Kalman Filter, a well known algorithm used extensively in onboard applications. The magnetic field is measured on a satellite by a magnetometer, an inexpensive and reliable sensor flown on virtually all satellites in low earth orbit. Rate data is provided by a gyro, which can be costly. This system has been developed and successfully tested in a post-processing mode using magnetometer and gyro data from 4 satellites supported by the Flight Dynamics Division at Goddard. In order for this system to be truly low cost, an alternative source for rate data must be utilized. An independent system which estimates spacecraft rate has been successfully developed and tested using only magnetometer data or a combination of magnetometer data and sun sensor data, which is less costly than a gyro. This system also uses an Extended Kalman Filter. Merging the two systems will provide an extremely low cost, autonomous approach to attitude and trajectory estimation. In this work we provide the theoretical background of the combined system. The measurement matrix is developed by combining the measurement matrix of the orbit and attitude estimation EKF with the measurement matrix of the rate estimation EKF, which is composed of a pseudo-measurement which makes the effective measurement a function of the angular velocity. Associated with this is the development of the noise covariance matrix associated with the original measurement combined with the new pseudo-measurement. In addition, the combination of the dynamics from the two systems is presented along with preliminary test results.

  10. A low cost approach to simultaneous orbit, attitude, and rate estimation using an extended Kalman filter

    NASA Technical Reports Server (NTRS)

    Deutschmann, Julie; Harman, Rick; Bar-Itzhack, Itzhack

    1998-01-01

    An innovative approach to autonomous attitude and trajectory estimation is available using only magnetic field data and rate data. The estimation is performed simultaneously using an Extended Kalman Filter (EKF), a well known algorithm used extensively in onboard applications. The magnetic field is measured on a satellite by a magnetometer, an inexpensive and reliable sensor flown on virtually all satellites in low earth orbit. Rate data is provided by a gyro, which can be costly. This system has been developed and successfully tested in a post-processing mode using magnetometer and gyro data from 4 satellites supported by the Flight Dynamics Division at Goddard. In order for this system to be truly low cost, an alternative source for rate data must be utilized. An independent system which estimates spacecraft rate has been successfully developed and tested using only magnetometer data or a combination of magnetometer data and sun sensor data, which is less costly than a gyro. This system also uses an EKF. Merging the two systems will provide an extremely low cost, autonomous approach to attitude and trajectory estimation. In this work we provide the theoretical background of the combined system. The measurement matrix is developed by combining the measurement matrix of the orbit and attitude estimation EKF with the measurement matrix of the rate estimation EKF, which is composed of a pseudo-measurement which makes the effective measurement a function of the angular velocity. Associated with this is the development of the noise covariance matrix associated with the original measurement combined with the new pseudo-measurement. In addition, the combination of the dynamics from the two systems is presented along with preliminary test results.

  11. Multidimensional Measurement of Poverty among Women in Sub-Saharan Africa

    ERIC Educational Resources Information Center

    Batana, Yele Maweki

    2013-01-01

    Since the seminal work of Sen, poverty has been recognized as a multidimensional phenomenon. The recent availability of relevant databases renewed the interest in this approach. This paper estimates multidimensional poverty among women in fourteen Sub-Saharan African countries using the Alkire and Foster multidimensional poverty measures, whose…

  12. Individual tree growth models for natural even-aged shortleaf pine

    Treesearch

    Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin

    2006-01-01

    Shortleaf pine (Pinus echinata Mill.) measurements were available from permanent plots established in even-aged stands of the Ouachita Mountains for studying growth. Annual basal area growth was modeled with a least-squares nonlinear regression method utilizing three measurements. The analysis showed that the parameter estimates were in agreement...

  13. Repeated serum creatinine measurement in primary care: Not all patients have chronic renal failure.

    PubMed

    Gentille Lorente, Delicia; Gentille Lorente, Jorge; Salvadó Usach, Teresa

    2015-01-01

    To assess the prevalence of kidney failure in patients from a primary care centre in a basic healthcare district with laboratory availability allowing serum creatinine measurements. An observational descriptive cross-sectional study. A basic healthcare district serving 23,807 people aged ≥ 18 years. Prevalence of kidney failure among 17,240 patients having at least one laboratory measurement available was 8.5% (mean age 77.6 ± 12.05 years). In 33.2% of such patients an occult kidney failure was found (98.8% were women). Prevalence of chronic kidney failure among 10,011 patients having at least 2 laboratory measurements available (≥ 3 months apart) was 5.5%, with a mean age of 80.1 ± 10.0 years (the most severely affected patients were those aged 75 to 84); 59.7% were men and 76.3% of cases were in stage 3. An occult kidney failure was found in 5.3% of patients, 86.2% of them women (a glomerular filtration rate < 60 mL/min was estimated for plasma creatinine levels of 0.9 mg/dL or higher). Comparison of the present findings to those previously reported demonstrates the need for further studies on the prevalence of overall (chronic and acute) kidney failure in Spain in order to estimate the real scope of the disease. Primary care physicians play a critical role in disease detection, therapy, control, and recording (in medical records). The MDRD equation is useful and practical to estimate the glomerular filtration rate. Copyright © 2015 The Authors. Published by Elsevier España, S.L.U. All rights reserved.
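
    For reference, the 4-variable MDRD study equation mentioned above can be written directly (IDMS-traceable form with the 175 coefficient); the example patient is hypothetical:

```python
def egfr_mdrd(scr_mg_dl, age, female=False, black=False):
    """4-variable MDRD study equation; eGFR in mL/min/1.73 m^2."""
    gfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        gfr *= 0.742
    if black:
        gfr *= 1.212
    return gfr

# A creatinine of 0.9 mg/dL in an elderly woman lands close to the
# 60 mL/min threshold, consistent with the occult-failure pattern above.
print(round(egfr_mdrd(0.9, 85, female=True), 1))
```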

  14. The Challenges of Measuring Glycemic Variability

    PubMed Central

    Rodbard, David

    2012-01-01

    This commentary reviews several of the challenges encountered when attempting to quantify glycemic variability and correlate it with risk of diabetes complications. These challenges include (1) immaturity of the field, including problems of data accuracy, precision, reliability, cost, and availability; (2) larger relative error in the estimates of glycemic variability than in the estimates of the mean glucose; (3) high correlation between glycemic variability and mean glucose level; (4) multiplicity of measures; (5) correlation of the multiple measures; (6) duplication or reinvention of methods; (7) confusion of measures of glycemic variability with measures of quality of glycemic control; (8) the problem of multiple comparisons when assessing relationships among multiple measures of variability and multiple clinical end points; and (9) differing needs for routine clinical practice and clinical research applications. PMID:22768904
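
    Challenge (3), the high correlation between glycemic variability and mean glucose, can be illustrated on simulated traces; all data below are synthetic, and SD and CV are just two of the many proposed measures:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy CGM-like traces for 40 subjects: higher mean glucose comes with
# proportionally higher scatter, so SD tracks the mean while CV does not.
means = rng.uniform(100.0, 220.0, 40)               # per-subject mean (mg/dL)
traces = means[:, None] + rng.normal(0.0, 0.25 * means[:, None], (40, 288))

sd = traces.std(axis=1)                             # absolute variability
cv = sd / traces.mean(axis=1)                       # variability relative to mean

r_sd = np.corrcoef(traces.mean(axis=1), sd)[0, 1]
r_cv = np.corrcoef(traces.mean(axis=1), cv)[0, 1]
print(round(r_sd, 2), round(r_cv, 2))               # SD-mean correlation is high
```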

  15. Remote Sensing of Groundwater Storage Changes in Illinois using the Gravity Recovery and Climate Experiment (GRACE)

    NASA Technical Reports Server (NTRS)

    Yeh, Pat J.-F.; Swenson, S. C.; Famiglietti, J. S.; Rodell, M.

    2007-01-01

    Regional groundwater storage changes in Illinois are estimated from monthly GRACE total water storage change (TWSC) data and in situ measurements of soil moisture for the period 2002-2005. Groundwater storage change estimates are compared to those derived from the soil moisture and available well level data. The seasonal pattern and amplitude of GRACE-estimated groundwater storage changes track those of the in situ measurements reasonably well, although substantial differences exist in month-to-month variations. The seasonal cycle of GRACE TWSC agrees well with observations (correlation coefficient = 0.83), while the seasonal cycle of GRACE-based estimates of groundwater storage changes beneath 2 m depth agrees with observations with a correlation coefficient of 0.63. We conclude that the GRACE-based method of estimating monthly to seasonal groundwater storage changes performs reasonably well at the 200,000 sq km scale of Illinois.
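
    The storage separation described above reduces to a monthly water-balance subtraction; the anomaly values below are invented for illustration:

```python
import numpy as np

# Monthly storage anomalies (mm of equivalent water height), hypothetical.
tws_anom = np.array([40.0, 25.0, 5.0, -20.0, -35.0, -10.0])   # GRACE TWSC
sm_anom = np.array([30.0, 20.0, 10.0, -10.0, -25.0, -15.0])   # in situ soil moisture

# Groundwater storage anomaly = total water storage minus soil moisture.
gw_anom = tws_anom - sm_anom
print(gw_anom)  # [ 10.   5.  -5. -10. -10.   5.]
```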

  16. Nonparametric entropy estimation using kernel densities.

    PubMed

    Lake, Douglas E

    2009-01-01

    The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
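For the quadratic (order-2 Renyi) entropy mentioned above, a Gaussian kernel density estimate admits a closed-form integral, which the following minimal stdlib sketch exploits. The bandwidth choice here is arbitrary, not the optimal selection derived in the paper:

```python
import math

def gaussian_pdf(x, sigma):
    """Zero-mean Gaussian density."""
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def quadratic_entropy(data, h):
    """Renyi quadratic entropy H2 = -ln(integral of fhat(x)**2 dx) for a
    Gaussian KDE with bandwidth h. The integral of the product of two
    Gaussian kernels is itself a Gaussian of width sqrt(2)*h evaluated
    at x_i - x_j, so no numerical integration is needed."""
    n = len(data)
    total = sum(gaussian_pdf(xi - xj, math.sqrt(2.0) * h)
                for xi in data for xj in data)
    return -math.log(total / (n * n))

h2 = quadratic_entropy([0.1, 0.4, 0.5, 0.9], h=0.25)  # arbitrary bandwidth
```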

  17. How Big Is It Really? Assessing the Efficacy of Indirect Estimates of Body Size in Asian Elephants.

    PubMed

    Chapman, Simon N; Mumby, Hannah S; Crawley, Jennie A H; Mar, Khyne U; Htut, Win; Thura Soe, Aung; Aung, Htoo Htoo; Lummaa, Virpi

    2016-01-01

    Information on an organism's body size is pivotal in understanding its life history and fitness, as well as helping inform conservation measures. However, for many species, particularly large-bodied wild animals, taking accurate body size measurements can be a challenge. Various means to estimate body size have been employed, from more direct methods such as using photogrammetry to obtain height or length measurements, to indirect prediction of weight using other body morphometrics or even the size of dung boli. It is often unclear how accurate these measures are because they cannot be compared to objective measures. Here, we investigate how well existing estimation equations predict the actual body weight of Asian elephants Elephas maximus, using body measurements (height, chest girth, length, foot circumference and neck circumference) taken directly from a large population of semi-captive animals in Myanmar (n = 404). We then define new and better fitting formulas to predict body weight in Myanmar elephants from these readily available measures. We also investigate whether the important parameters height and chest girth can be estimated from photographs (n = 151). Our results show considerable variation in the ability of existing estimation equations to predict weight, and that the equations proposed in this paper predict weight better in almost all circumstances. We also find that measurements from standardised photographs reflect body height and chest girth after applying minor adjustments. Our results have implications for size estimation of large wild animals in the field, as well as for management in captive settings.
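Body-weight prediction equations of the kind tested above are typically power laws fitted in log-log space. A minimal ordinary-least-squares sketch follows; the data and recovered coefficients are synthetic, not the published Myanmar formulas:

```python
import math

def fit_power_law(x, y):
    """Fit y ~ a * x**b by ordinary least squares on log-transformed
    data, the common form of body-weight prediction equations."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

# Synthetic check: data generated exactly as y = 2 * x**3,
# so the fit should recover a = 2 and b = 3.
a, b = fit_power_law([1.0, 2.0, 3.0, 4.0], [2.0, 16.0, 54.0, 128.0])
```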

  18. How Big Is It Really? Assessing the Efficacy of Indirect Estimates of Body Size in Asian Elephants

    PubMed Central

    Chapman, Simon N.; Mumby, Hannah S.; Crawley, Jennie A. H.; Mar, Khyne U.; Htut, Win; Thura Soe, Aung; Aung, Htoo Htoo; Lummaa, Virpi

    2016-01-01

    Information on an organism’s body size is pivotal in understanding its life history and fitness, as well as helping inform conservation measures. However, for many species, particularly large-bodied wild animals, taking accurate body size measurements can be a challenge. Various means to estimate body size have been employed, from more direct methods such as using photogrammetry to obtain height or length measurements, to indirect prediction of weight using other body morphometrics or even the size of dung boli. It is often unclear how accurate these measures are because they cannot be compared to objective measures. Here, we investigate how well existing estimation equations predict the actual body weight of Asian elephants Elephas maximus, using body measurements (height, chest girth, length, foot circumference and neck circumference) taken directly from a large population of semi-captive animals in Myanmar (n = 404). We then define new and better fitting formulas to predict body weight in Myanmar elephants from these readily available measures. We also investigate whether the important parameters height and chest girth can be estimated from photographs (n = 151). Our results show considerable variation in the ability of existing estimation equations to predict weight, and that the equations proposed in this paper predict weight better in almost all circumstances. We also find that measurements from standardised photographs reflect body height and chest girth after applying minor adjustments. Our results have implications for size estimation of large wild animals in the field, as well as for management in captive settings. PMID:26938085

  19. Mental Disorder Symptoms among Public Safety Personnel in Canada.

    PubMed

    Carleton, R Nicholas; Afifi, Tracie O; Turner, Sarah; Taillieu, Tamara; Duranceau, Sophie; LeBouthillier, Daniel M; Sareen, Jitender; Ricciardelli, Rose; MacPhee, Renee S; Groll, Dianne; Hozempa, Kadie; Brunet, Alain; Weekes, John R; Griffiths, Curt T; Abrams, Kelly J; Jones, Nicholas A; Beshai, Shadi; Cramm, Heidi A; Dobson, Keith S; Hatcher, Simon; Keane, Terence M; Stewart, Sherry H; Asmundson, Gordon J G

    2018-01-01

    Canadian public safety personnel (PSP; e.g., correctional workers, dispatchers, firefighters, paramedics, police officers) are exposed to potentially traumatic events as a function of their work. Such exposures contribute to the risk of developing clinically significant symptoms related to mental disorders. The current study was designed to provide estimates of mental disorder symptom frequencies and severities for Canadian PSP. An online survey was made available in English or French from September 2016 to January 2017. The survey assessed current symptoms, and participation was solicited from national PSP agencies and advocacy groups. Estimates were derived using well-validated screening measures. There were 5813 participants (32.5% women) who were grouped into 6 categories (i.e., call center operators/dispatchers, correctional workers, firefighters, municipal/provincial police, paramedics, Royal Canadian Mounted Police). Substantial proportions of participants reported current symptoms consistent with 1 (i.e., 15.1%) or more (i.e., 26.7%) mental disorders based on the screening measures. There were significant differences across PSP categories with respect to proportions screening positive based on each measure. The estimated proportion of PSP reporting current symptom clusters consistent with 1 or more mental disorders appears higher than previously published estimates for the general population; however, direct comparisons are impossible because of methodological differences. The available data suggest that Canadian PSP experience substantial and heterogeneous difficulties with mental health and underscore the need for a rigorous epidemiologic study and category-specific solutions.

  20. Estimation of vehicle home parking availability in China and quantification of its potential impacts on plug-in electric vehicle ownership cost

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ou, Shiqi; Lin, Zhenhong; He, Xin

China has become the world's largest plug-in electric vehicle (PEV) market. One major barrier to greater consumer acceptance of PEVs is the lack of home parking spaces for charging outlets. This paper developed a methodology to estimate residential parking ratios (parking spaces vs. household numbers) and project residential community-weighted parking availabilities (home parking availabilities) in China, by area and by province, through data mining from several major real estate trading network platforms. The results show that home parking availabilities from 2015 to 2050 vary by geographic area and building life expectancy. A method was developed to quantify the shadow value of home parking on PEV ownership costs and combined with Monte Carlo simulation to address estimation uncertainty. Depending on the PEV type and all-electric range, the value of a home parking space to a PEV owner, measured by the reduced vehicle ownership cost, ranges from $2,399 to $10,802 USD. Finally, the total incremental shadow value, relative to the 2015 situation, of home parking availability for PEV owners increases over time due to both improvement in home parking availability and growth in the PEV population, and is estimated to exceed $2.51 billion USD by 2025 (in 2015 U.S. dollars).

  1. Estimation of vehicle home parking availability in China and quantification of its potential impacts on plug-in electric vehicle ownership cost

    DOE PAGES

    Ou, Shiqi; Lin, Zhenhong; He, Xin; ...

    2018-05-03

China has become the world's largest plug-in electric vehicle (PEV) market. One major barrier to greater consumer acceptance of PEVs is the lack of home parking spaces for charging outlets. This paper developed a methodology to estimate residential parking ratios (parking spaces vs. household numbers) and project residential community-weighted parking availabilities (home parking availabilities) in China, by area and by province, through data mining from several major real estate trading network platforms. The results show that home parking availabilities from 2015 to 2050 vary by geographic area and building life expectancy. A method was developed to quantify the shadow value of home parking on PEV ownership costs and combined with Monte Carlo simulation to address estimation uncertainty. Depending on the PEV type and all-electric range, the value of a home parking space to a PEV owner, measured by the reduced vehicle ownership cost, ranges from $2,399 to $10,802 USD. Finally, the total incremental shadow value, relative to the 2015 situation, of home parking availability for PEV owners increases over time due to both improvement in home parking availability and growth in the PEV population, and is estimated to exceed $2.51 billion USD by 2025 (in 2015 U.S. dollars).
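A sketch of how a Monte Carlo treatment of value uncertainty might look; the uniform distribution and its bounds simply reuse the reported per-vehicle value range for illustration and are not the paper's actual parameter distributions:

```python
import random

def mc_mean(draw_value, n=10000, seed=1):
    """Monte Carlo mean of an uncertain per-vehicle shadow value.
    `draw_value` samples one realisation from assumed parameter
    distributions; a dedicated Random instance keeps the run
    reproducible."""
    rng = random.Random(seed)
    samples = [draw_value(rng) for _ in range(n)]
    return sum(samples) / n

# Illustrative only: value uniformly distributed between the
# reported bounds of $2,399 and $10,802 per vehicle.
mean_value = mc_mean(lambda rng: rng.uniform(2399.0, 10802.0))
```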

  2. A test of the 'one-point method' for estimating maximum carboxylation capacity from field-measured, light-saturated photosynthesis

    DOE PAGES

    Martin G. De Kauwe; Serbin, Shawn P.; Lin, Yan -Shih; ...

    2015-12-31

Simulations of photosynthesis by terrestrial biosphere models typically require a specification of the maximum carboxylation rate (Vcmax). Estimating this parameter from A–Ci curves (net photosynthesis, A, vs. intercellular CO2 concentration, Ci) is laborious, which limits the availability of Vcmax data. However, many multispecies field datasets include measurements of the net photosynthetic rate at saturating irradiance and ambient atmospheric CO2 concentration (Asat), from which Vcmax can be extracted using a 'one-point method'.
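A minimal sketch of the one-point idea: invert the Rubisco-limited photosynthesis model for Vcmax at a single measured (Asat, Ci) point, under the common assumption that day respiration Rday = 0.015 * Vcmax. The kinetic constants below are typical 25 °C literature values, not parameters taken from this paper:

```python
def vcmax_one_point(asat, ci, gamma_star=42.75, km=710.0):
    """One-point estimate of Vcmax (gas-exchange quantities in
    umol mol-1 and umol m-2 s-1). Solves
        Asat = Vcmax * (Ci - gamma_star) / (Ci + Km) - 0.015 * Vcmax
    for Vcmax, i.e. assumes Rubisco-limited photosynthesis and
    Rday = 0.015 * Vcmax. Defaults are illustrative 25 C values."""
    return asat / ((ci - gamma_star) / (ci + km) - 0.015)

# Round trip: an Asat generated with Vcmax = 60 should be recovered.
asat = 60.0 * ((275.0 - 42.75) / (275.0 + 710.0) - 0.015)
vcmax = vcmax_one_point(asat, 275.0)
```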

  3. Improved soil water deficit estimation through the integration of canopy temperature measurements into a soil water balance model

    USDA-ARS?s Scientific Manuscript database

    Correct prediction of the dynamics of total available water in the root zone (TAWr) is critical for irrigation management as shown in the soil water balance model presented in FAO paper 56 (Allen et al., 1998). In this study, we propose a framework to improve TAWr estimation by incorporating the cro...

  4. A re-evaluation of a case-control model with contaminated controls for resource selection studies

    Treesearch

    Christopher T. Rota; Joshua J. Millspaugh; Dylan C. Kesler; Chad P. Lehman; Mark A. Rumble; Catherine M. B. Jachowski

    2013-01-01

    A common sampling design in resource selection studies involves measuring resource attributes at sample units used by an animal and at sample units considered available for use. Few models can estimate the absolute probability of using a sample unit from such data, but such approaches are generally preferred over statistical methods that estimate a relative probability...

  5. MoisturEC: a new R program for moisture content estimation from electrical conductivity data

    USGS Publications Warehouse

    Terry, Neil; Day-Lewis, Frederick D.; Werkema, Dale D.; Lane, John W.

    2018-01-01

    Noninvasive geophysical estimation of soil moisture has potential to improve understanding of flow in the unsaturated zone for problems involving agricultural management, aquifer recharge, and optimization of landfill design and operations. In principle, several geophysical techniques (e.g., electrical resistivity, electromagnetic induction, and nuclear magnetic resonance) offer insight into soil moisture, but data‐analysis tools are needed to “translate” geophysical results into estimates of soil moisture, consistent with (1) the uncertainty of this translation and (2) direct measurements of moisture. Although geostatistical frameworks exist for this purpose, straightforward and user‐friendly tools are required to fully capitalize on the potential of geophysical information for soil‐moisture estimation. Here, we present MoisturEC, a simple R program with a graphical user interface to convert measurements or images of electrical conductivity (EC) to soil moisture. Input includes EC values, point moisture estimates, and definition of either Archie parameters (based on experimental or literature values) or empirical data of moisture vs. EC. The program produces two‐ and three‐dimensional images of moisture based on available EC and direct measurements of moisture, interpolating between measurement locations using a Tikhonov regularization approach.
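The Archie-parameter route mentioned above amounts to inverting Archie's law for saturation and multiplying by porosity. A minimal sketch with illustrative default exponents follows; it is not MoisturEC's actual code or its calibrated values:

```python
def moisture_from_ec(ec_bulk, ec_water, porosity, m=1.5, n=2.0):
    """Volumetric water content theta = porosity * S from Archie's law,
    sigma_bulk = sigma_w * porosity**m * S**n, assuming no surface
    conduction. The cementation (m) and saturation (n) exponents are
    illustrative defaults, not site-calibrated values."""
    s = (ec_bulk / (ec_water * porosity ** m)) ** (1.0 / n)
    return porosity * s
```

A forward-then-inverse check: with porosity 0.4, pore-water EC 0.05 S/m, and saturation 0.5, the bulk EC predicted by Archie's law inverts back to a water content of 0.2.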

  6. Estimation of geopotential differences over intercontinental locations using satellite and terrestrial measurements. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Pavlis, Nikolaos K.

    1991-01-01

An error analysis study was conducted to assess the current accuracies and the anticipated future improvements in the estimation of geopotential differences over intercontinental locations. An observation/estimation scheme was proposed and studied, whereby gravity disturbance measurements on the Earth's surface, in caps surrounding the estimation points, are combined with corresponding data in caps directly over these points at the altitude of a low orbiting satellite, for the estimation of the geopotential difference between the terrestrial stations. The mathematical modeling required to relate the primary observables to the parameters to be estimated was studied for both the terrestrial data and the data at altitude. Emphasis was placed on the examination of systematic effects and on the corresponding reductions that need to be applied to the measurements to avoid systematic errors. The error estimation for the geopotential differences was performed using both truncation theory and least squares collocation with ring averages, when only observations on the Earth's surface are used. The error analysis indicated that with the currently available global geopotential model OSU89B and with gravity disturbance data in 2 deg caps surrounding the estimation points, the error of the geopotential difference arising from errors in the reference model and the cap data is about 23 kgal cm, for a 30 deg station separation.

  7. A statistical estimation of Snow Water Equivalent coupling ground data and MODIS images

    NASA Astrophysics Data System (ADS)

    Bavera, D.; Bocchiola, D.; de Michele, C.

    2007-12-01

The Snow Water Equivalent (SWE) is an important component of the hydrologic balance of mountain basins and snow-fed areas in general. The total cumulated snow water equivalent at the end of the accumulation season represents the water available at melt. Here, a statistical methodology to estimate the Snow Water Equivalent on April 1st is developed by coupling ground data (snow depth and snow density measurements) with MODIS images. The methodology is applied to the Mallero river basin (about 320 km²) in the Central Alps, northern Italy, where 11 snow gauges and numerous scattered snow density measurements are available. The application covers the 7 years from 2001 to 2007. The analysis identified some problems in the MODIS information due to cloud cover and misclassification from orographic shadow. The study is performed in the framework of the AWARE (A tool for monitoring and forecasting Available WAter REsource in mountain environment) EU project, a STREP project in the VI Framework Programme, GMES Initiative.

  8. Aerosol single scattering albedo estimated across China from a combination of ground and satellite measurements

    Treesearch

    Kwon Ho Lee; Zhanqing Li; Man Sing Wong; Jinyuan Xin; Wang Yuesi; Wei Min Hao; Fengsheng Zhao

    2007-01-01

    Single scattering albedo (SSA) governs the strength of aerosols in absorbing solar radiation, but few methods are available to directly measure this important quantity. There currently exist many ground-based measurements of spectral transmittance from which aerosol optical thickness (AOT) are retrieved under clear sky conditions. Reflected radiances at the top of the...

  9. Validation of a novel air toxic risk model with air monitoring.

    PubMed

    Pratt, Gregory C; Dymond, Mary; Ellickson, Kristie; Thé, Jesse

    2012-01-01

    Three modeling systems were used to estimate human health risks from air pollution: two versions of MNRiskS (for Minnesota Risk Screening), and the USEPA National Air Toxics Assessment (NATA). MNRiskS is a unique cumulative risk modeling system used to assess risks from multiple air toxics, sources, and pathways on a local to a state-wide scale. In addition, ambient outdoor air monitoring data were available for estimation of risks and comparison with the modeled estimates of air concentrations. Highest air concentrations and estimated risks were generally found in the Minneapolis-St. Paul metropolitan area and lowest risks in undeveloped rural areas. Emissions from mobile and area (nonpoint) sources created greater estimated risks than emissions from point sources. Highest cancer risks were via ingestion pathway exposures to dioxins and related compounds. Diesel particles, acrolein, and formaldehyde created the highest estimated inhalation health impacts. Model-estimated air concentrations were generally highest for NATA and lowest for the AERMOD version of MNRiskS. This validation study showed reasonable agreement between available measurements and model predictions, although results varied among pollutants, and predictions were often lower than measurements. The results increased confidence in identifying pollutants, pathways, geographic areas, sources, and receptors of potential concern, and thus provide a basis for informing pollution reduction strategies and focusing efforts on specific pollutants (diesel particles, acrolein, and formaldehyde), geographic areas (urban centers), and source categories (nonpoint sources). The results heighten concerns about risks from food chain exposures to dioxins and PAHs. Risk estimates were sensitive to variations in methodologies for treating emissions, dispersion, deposition, exposure, and toxicity. © 2011 Society for Risk Analysis.

  10. Prediction of anthropometric measurements from tooth length--A Dravidian study.

    PubMed

    Sunitha, J; Ananthalakshmi, R; Sathiya, Jeeva J; Nadeem, Jeddy; Dhanarathnam, Shanmugam

    2015-12-01

Anthropometric measurement is essential for identification of both victims and suspects. Often, these data are not readily available in a crime scene situation, and the availability of one data set should help in predicting the other. This study hypothesised a geometric correlation between tooth length and various body measurements. Its aims were to correlate face, palm, foot and stature measurements with tooth length, and to derive a regression formula to estimate these measurements from tooth length. The study was conducted on Dravidian dental students in the age group 18-25, with a sample size of 372. All dental and physical parameters were measured using standard anthropometric equipment and techniques. The data were analysed using SPSS software; the statistical methods used were linear regression analysis and Pearson correlation. The parameters incisor height (IH), face height (FH), palm length (PL), foot length (FL) and stature (S) showed nil to mild correlation with tooth length (R = 0.2-0.4), except for palm length (PL) and foot length (FL) (R > 0.6). It is concluded that odontometric data are not a reliable source for estimating face height (FH), palm length (PL), foot length (FL) and stature (S).
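The Pearson correlation used above to judge each parameter pair can be computed directly; a stdlib sketch with synthetic data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two
    equal-length samples: covariance divided by the product of the
    standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linear synthetic data correlate at r = +1 (or -1 if inverted).
r_pos = pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
r_neg = pearson_r([1.0, 2.0, 3.0], [6.0, 4.0, 2.0])
```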

  11. Use of instantaneous streamflow measurements to improve regression estimates of index flow for the summer month of lowest streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, David J.

    2011-01-01

    In Michigan, index flow Q50 is a streamflow characteristic defined as the minimum of median flows for July, August, and September. The state of Michigan uses index flow estimates to help regulate large (greater than 100,000 gallons per day) water withdrawals to prevent adverse effects on characteristic fish populations. At sites where long-term streamgages are located, index flows are computed directly from continuous streamflow records as GageQ50. In an earlier study, a multiple-regression equation was developed to estimate index flows IndxQ50 at ungaged sites. The index equation explains about 94 percent of the variability of index flows at 147 (index) streamgages by use of six explanatory variables describing soil type, aquifer transmissivity, land cover, and precipitation characteristics. This report extends the results of the previous study, by use of Monte Carlo simulations, to evaluate alternative flow estimators, DiscQ50, IntgQ50, SiteQ50, and AugmQ50. The Monte Carlo simulations treated each of the available index streamgages, in turn, as a miscellaneous site where streamflow conditions are described by one or more instantaneous measurements of flow. In the simulations, instantaneous flows were approximated by daily mean flows at the corresponding site. All estimators use information that can be obtained from instantaneous flow measurements and contemporaneous daily mean flow data from nearby long-term streamgages. The efficacy of these estimators was evaluated over a set of measurement intensities in which the number of simulated instantaneous flow measurements ranged from 1 to 100 at a site. The discrete measurement estimator DiscQ50 is based on a simple linear regression developed between information on daily mean flows at five or more streamgages near the miscellaneous site and their corresponding GageQ50 index flows. 
The regression relation then was used to compute a DiscQ50 estimate at the miscellaneous site by use of the simulated instantaneous flow measurement. This process was repeated to develop a set of DiscQ50 estimates for all simulated instantaneous measurements; a weighted DiscQ50 estimate was then formed from this set. Results indicated that the expected value of this weighted estimate was more precise than the IndxQ50 estimate for all measurement intensities evaluated. The integrated index-flow estimator, IntgQ50, was formed by computing a weighted average of the index estimate IndxQ50 and the DiscQ50 estimate. Results indicated that the IntgQ50 estimator was more precise than the DiscQ50 estimator at low measurement intensities of one to two measurements. At greater measurement intensities, the precision of the IntgQ50 estimator converges to that of the DiscQ50 estimator. Neither the DiscQ50 nor the IntgQ50 estimator provides site-specific estimates. In particular, although expected values of DiscQ50 and IntgQ50 estimates converge with increasing measurement intensity, they do not necessarily converge to the site-specific value of Q50. The site estimator of flow, SiteQ50, was developed to facilitate this convergence at higher measurement intensities. This is accomplished by use of the median of simulated instantaneous flow values for each measurement intensity level. A weighted estimate of the median and information associated with the IntgQ50 estimate was used to form the SiteQ50 estimate. Initial simulations indicate that the SiteQ50 estimator generally has greater precision than the IntgQ50 estimator at measurement intensities greater than 3; however, additional analysis is needed to identify streamflow conditions under which instantaneous measurements will produce estimates that generally converge to the index flows. 
A preliminary augmented index regression equation was developed, which contains the index regression estimate and two additional variables associated with base-flow recession characteristics. When these recession variables were estimated as the medians of recession parameters compute

  12. Bayesian inference of ice thickness from remote-sensing data

    NASA Astrophysics Data System (ADS)

    Werder, Mauro A.; Huss, Matthias

    2017-04-01

Knowledge about ice thickness and volume is indispensable for studying ice dynamics, future sea-level rise due to glacier melt, and glaciers' contribution to regional hydrology. Accurate measurements of glacier thickness require on-site work, usually employing radar techniques. However, these field measurements are time consuming, expensive and sometimes downright impossible. Conversely, measurements of the ice surface, namely elevation and flow velocity, are becoming available worldwide through remote sensing. The model of Farinotti et al. (2009) calculates ice thicknesses based on a mass conservation approach paired with shallow ice physics, using estimates of the surface mass balance. The presented work applies a Bayesian inference approach to estimate the parameters of a modified version of this forward model by fitting it to measurements of both surface flow speed and ice thickness. The inverse model outputs ice thickness as well as the distribution of its error. We fit the model to ten test glaciers and ice caps and quantify the improvement of thickness estimates through the use of surface ice flow measurements.
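The Bayesian fitting step can be illustrated with a one-parameter toy: a flat prior over candidate thicknesses updated by a Gaussian likelihood of the observed surface speed. The forward model below is a made-up placeholder, not the Farinotti et al. physics:

```python
import math

def grid_posterior(thickness_grid, observed_speed, forward, sigma):
    """Posterior over candidate ice thicknesses on a 1-D grid, with a
    flat prior and Gaussian observation error of standard deviation
    sigma. Returns normalised posterior probabilities."""
    like = [math.exp(-0.5 * ((observed_speed - forward(h)) / sigma) ** 2)
            for h in thickness_grid]
    z = sum(like)
    return [l / z for l in like]

# Toy forward model: surface speed proportional to thickness, so an
# observed speed of 100 should concentrate mass at thickness 50.
posterior = grid_posterior([40.0, 50.0, 60.0], observed_speed=100.0,
                           forward=lambda h: 2.0 * h, sigma=1.0)
```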

  13. Quality control and gap-filling of PM10 daily mean concentrations with the best linear unbiased estimator.

    PubMed

    Sozzi, R; Bolignano, A; Ceradini, S; Morelli, M; Petenko, I; Argentini, S

    2017-10-15

According to the European Directive 2008/50/CE, air quality assessment consists of measuring the concentration fields and evaluating the mean, the number of exceedances, etc., of chemical species dangerous to human health. The measurements provided by an air quality ground-based monitoring network are the main information source, but the availability of these data is often limited by several technical and operational problems. In this paper, the best linear unbiased estimator (BLUE) is proposed to validate pollutant concentration values and to fill gaps in the measurement time series collected by a monitoring network. The BLUE algorithm is tested using the daily mean concentrations of particulate matter with aerodynamic diameter less than 10 μm (PM10 concentrations) measured by the air quality monitoring sensors operating in the Lazio Region of Italy. The comparison between the estimated and measured data shows an error comparable with the measurement uncertainty. Due to its simplicity and reliability, the BLUE will be used in the routine quality test procedures of the Lazio air quality monitoring network measurements.
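For a common mean estimated from independent unbiased measurements, the BLUE reduces to inverse-variance weighting. The following is a minimal sketch of that underlying principle, not the operational Lazio network algorithm:

```python
def blue_combine(estimates, variances):
    """Best linear unbiased estimate of a common mean from independent
    unbiased estimates: weight each estimate by the inverse of its
    error variance, then normalise."""
    weights = [1.0 / v for v in variances]
    return (sum(e * w for e, w in zip(estimates, weights))
            / sum(weights))

# Equal variances give the plain average; a noisier second sensor
# pulls the combined value toward the more reliable first one.
equal = blue_combine([10.0, 20.0], [1.0, 1.0])
skewed = blue_combine([10.0, 20.0], [1.0, 4.0])
```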

  14. CarbonTracker-Lagrange: A Framework for Greenhouse Gas Flux Estimation at Regional to Continental Scales

    NASA Astrophysics Data System (ADS)

    Andrews, A. E.

    2016-12-01

    CarbonTracker-Lagrange (CT-L) is a flexible modeling framework developed to take advantage of newly available atmospheric data for CO2 and other long-lived gases such as CH4 and N2O. The North American atmospheric CO2 measurement network has grown from three sites in 2004 to >100 sites in 2015. The US network includes tall tower, mountaintop, surface, and aircraft sites in the NOAA Global Greenhouse Gas Reference Network along with sites maintained by university, government and private sector researchers. The Canadian network is operated by Environment and Climate Change Canada. This unprecedented dataset can provide spatially and temporally resolved CO2 emissions and uptake flux estimates and quantitative information about drivers of variability, such as drought and temperature. CT-L is a platform for systematic comparison of data assimilation techniques and evaluation of assumed prior, model and observation errors. A novel feature of CT-L is the optimization of boundary values along with surface fluxes, leveraging vertically resolved data available from NOAA's aircraft sampling program. CT-L uses observation footprints (influence functions) from the Weather Research and Forecasting/Stochastic Time-Inverted Lagrangian Transport (WRF-STILT) modeling system to relate atmospheric measurements to upwind fluxes and boundary values. Footprints are pre-computed and the optimization algorithms are efficient, so many variants of the calculation can be performed. Fluxes are adjusted using Bayesian or Geostatistical methods to provide optimal agreement with observations. Satellite measurements of CO2 and CH4 from GOSAT are available starting in July 2009 and from OCO-2 since September 2014. With support from the NASA Carbon Monitoring System, we are developing flux estimation strategies that use remote sensing and in situ data together, including geostatistical inversions using satellite retrievals of solar-induced chlorophyll fluorescence. 
CT-L enables quantitative investigation of what new measurements would best complement the existing carbon observing system. We are also working to implement multi-species inversions for CO2 flux estimation using CO2 data along with CO, δ13CO2, COS and radiocarbon observations and for CH4 flux estimation using data for various hydrocarbons.
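The flux-adjustment step can be caricatured as a scalar Bayesian linear inversion: observations y_i ≈ h_i·x relate one unknown flux x to measurements through footprint weights h_i, standing in for the WRF-STILT influence functions. This is a toy of the general method, not CT-L itself:

```python
def bayes_flux(prior, prior_var, footprints, obs, obs_var):
    """Posterior mean of a single surface flux x given observations
    y_i ~ h_i * x + noise, a Gaussian prior N(prior, prior_var), and
    independent Gaussian observation errors of variance obs_var."""
    precision = 1.0 / prior_var + sum(h * h / obs_var for h in footprints)
    numerator = (prior / prior_var
                 + sum(h * y / obs_var for h, y in zip(footprints, obs)))
    return numerator / precision

# Consistent, essentially noiseless data (y_i = 3 * h_i) with a weak
# prior should recover a flux of about 3.
estimate = bayes_flux(prior=0.0, prior_var=1e6,
                      footprints=[1.0, 2.0], obs=[3.0, 6.0],
                      obs_var=1e-6)
```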

  15. Updated techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada

    USGS Publications Warehouse

    Hess, Glen W.

    2002-01-01

    Techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada have been updated. These techniques were developed using streamflow records at six continuous-record sites, basin physical and climatic characteristics, and concurrent streamflow measurements at four partial-record sites. Two methods, the basin-characteristic method and the concurrent-measurement method, were developed to provide estimating techniques for selected streamflow characteristics at ungaged and partial-record sites in central Nevada. In the first method, logarithmic-regression analyses were used to relate monthly mean streamflows (from all months and by month) from continuous-record gaging sites of various percent exceedence levels or monthly mean streamflows (by month) to selected basin physical and climatic variables at ungaged sites. Analyses indicate that the total drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the equations developed from all months of monthly mean streamflow, the coefficient of determination averaged 0.84 and the standard error of estimate of the relations for the ungaged sites averaged 72 percent. For the equations derived from monthly means by month, the coefficient of determination averaged 0.72 and the standard error of estimate of the relations averaged 78 percent. If standard errors are compared, the relations developed in this study appear generally to be less accurate than those developed in a previous study. However, the new relations are based on additional data and the slight increase in error may be due to the wider range of streamflow for a longer period of record, 1995-2000. In the second method, streamflow measurements at partial-record sites were correlated with concurrent streamflows at nearby gaged sites by the use of linear-regression techniques. 
Statistical measures of results using the second method typically indicated greater accuracy than for the first method. However, to make estimates for individual months, the concurrent-measurement method requires several years of additional streamflow data at more partial-record sites. Thus, exceedance values for individual months are not yet available because of the small number of concurrent streamflow measurements available. Reliability, limitations, and applications of both estimating methods are described herein.
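The basin-characteristic method described above reduces to a logarithmic (log-linear) regression. The sketch below uses entirely hypothetical site data and only the two variables the study found most significant (drainage area and percent of area above 10,000 feet); it is an illustration of the technique, not a reproduction of the published equations.

```python
import numpy as np

# Hypothetical gaged-site data: drainage area (mi^2), percent of drainage
# area above 10,000 ft, and observed monthly mean streamflow (ft^3/s).
area = np.array([12.0, 45.0, 8.0, 130.0, 60.0, 25.0])
pct_high = np.array([5.0, 18.0, 2.0, 30.0, 22.0, 9.0])
q_obs = np.array([3.1, 14.0, 1.6, 52.0, 21.0, 6.8])

# Logarithmic regression: log10(Q) = b0 + b1*log10(area) + b2*log10(pct_high)
X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(pct_high)])
y = np.log10(q_obs)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def estimate_q(area_mi2, pct_above_10k):
    """Apply the fitted log-linear relation at an ungaged site."""
    x = np.array([1.0, np.log10(area_mi2), np.log10(pct_above_10k)])
    return 10.0 ** (x @ coef)

print(estimate_q(30.0, 10.0))
```

The back-transform from log space is the source of the percent-style standard errors of estimate quoted in the abstract.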

  16. Evaluation of the predicted error of the soil moisture retrieval from C-band SAR by comparison against modelled soil moisture estimates over Australia

    PubMed Central

    Doubková, Marcela; Van Dijk, Albert I.J.M.; Sabel, Daniel; Wagner, Wolfgang; Blöschl, Günter

    2012-01-01

    Sentinel-1 will carry onboard a C-band radar instrument that will map the European continent once every four days and the global land surface at least once every twelve days, with a spatial resolution as fine as 5 × 20 m. The high temporal sampling rate and operational configuration make Sentinel-1 of interest for operational soil moisture monitoring. Currently, updated soil moisture data are made available at 1 km spatial resolution as a demonstration service using Global Mode (GM) measurements from the Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT. The service demonstrates the potential of C-band observations to monitor variations in soil moisture. Importantly, a retrieval error estimate is also available; such estimates are needed to assimilate observations into models. The retrieval error is estimated by propagating sensor errors through the retrieval model. In this work, the existing ASAR GM retrieval error product is evaluated using independent top soil moisture estimates produced by the grid-based landscape hydrological model (AWRA-L) developed within the Australian Water Resources Assessment system (AWRA). The ASAR GM retrieval error estimate, an assumed prior AWRA-L error estimate, and the variance in the respective datasets were used to spatially predict the root mean square error (RMSE) and the Pearson correlation coefficient R between the two datasets. These were compared with the RMSE calculated directly from the two datasets. The predicted and computed RMSE showed very good agreement in both spatial pattern and magnitude; the RMSE was predicted to within 4% of saturated soil moisture over 89% of the Australian land mass. Predicted and calculated R maps corresponded to within 10% over 61% of the continent. The strong correspondence between the predicted and calculated RMSE and R builds confidence in the retrieval error model and the derived ASAR GM error estimates. 
ASAR GM and Sentinel-1 share the same basic physical measurement characteristics, so a very similar retrieval error estimation method can be applied. Because of the expected improvements in the radiometric resolution of the Sentinel-1 backscatter measurements, soil moisture estimation errors can be expected to be an order of magnitude smaller than those for ASAR GM. This opens the possibility of operationally available medium-resolution soil moisture estimates with well-specified errors that can be assimilated into hydrological or crop yield models, with potentially large benefits for monitoring and modelling land-atmosphere fluxes, crop growth, and the water balance. PMID:23483015
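The RMSE prediction step rests on treating the satellite and model errors as independent and zero-mean, in which case the expected RMSE between the two datasets follows directly from their individual error estimates. A minimal sketch with synthetic numbers (not the paper's data):

```python
import numpy as np

def predicted_rmse(err_sat, err_model):
    """Expected RMSE between two datasets whose errors are assumed
    independent and zero-mean (a simplifying assumption)."""
    return np.sqrt(err_sat**2 + err_model**2)

# Synthetic check: a common true signal plus independent noise.
rng = np.random.default_rng(0)
truth = rng.normal(0.25, 0.05, 100000)            # soil moisture (m^3/m^3)
sat = truth + rng.normal(0.0, 0.04, truth.size)   # satellite-like error
model = truth + rng.normal(0.0, 0.03, truth.size) # model-like error

rmse_computed = np.sqrt(np.mean((sat - model) ** 2))
print(predicted_rmse(0.04, 0.03), rmse_computed)  # both near 0.05
```

The same independence assumption underlies the predicted correlation maps: the shared-signal variance sets the ceiling on R once the two error variances are fixed.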

  17. An estimation of the condensation rates in three severe storm systems from satellite observations of the convective mass flux

    NASA Technical Reports Server (NTRS)

    Mack, R. A.; Wylie, D. P.

    1982-01-01

    A technique was developed for estimating the condensation rates of convective storms using satellite measurements of cirrus anvil expansion rates and radiosonde measurements of environmental water vapor. Three cases of severe convection in Oklahoma were studied and a diagnostic model was developed for integrating radiosonde data with satellite data. Two methods were used to measure the anvil expansion rates - the expansion of isotherm contours on infrared images, and the divergent motions of small brightness anomalies tracked on the visible images. The differences between the two methods were large as the storms developed, but these differences became small in the latter stage of all three storms. A comparison between the three storms indicated that the available moisture in the lowest levels greatly affected the rain rates of the storms. This was evident from both the measured rain rates of the storms and the condensation rates estimated by the model. The possibility of using this diagnostic model for estimating the intensities of convective storms also is discussed.

  18. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filter estimates for applications in which the dynamic model error clearly violates the typical process noise assumptions and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem and may be exploited for numerous purposes. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  19. Determination of stability and control parameters of a light airplane from flight data using two estimation methods. [equation error and maximum likelihood methods

    NASA Technical Reports Server (NTRS)

    Klein, V.

    1979-01-01

    Two identification methods, the equation error method and the output error method, are used to estimate stability and control parameter values from flight data for a low-wing, single-engine, general aviation airplane. The estimated parameters from both methods are in very good agreement, primarily because of the sufficient accuracy of the measured data. The estimated static parameters also agree with the results from steady flights. The effects of power and of different input forms are demonstrated. Examination of all available results gives the best values of the estimated parameters and specifies their accuracies.
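The equation error method exploits the fact that, with the states and controls measured, the aerodynamic model equations are linear in the unknown parameters, so ordinary least squares applies directly. A sketch with a hypothetical pitching-moment model and simulated measurements (the coefficient values are illustrative, not this airplane's):

```python
import numpy as np

# Hypothetical pitching-moment model, linear in its parameters:
#   Cm = Cm0 + Cma*alpha + Cmde*delta_e
rng = np.random.default_rng(1)
alpha = rng.uniform(-0.1, 0.2, 200)       # measured angle of attack (rad)
delta_e = rng.uniform(-0.15, 0.1, 200)    # measured elevator deflection (rad)
true = np.array([0.05, -0.9, -1.3])       # Cm0, Cma, Cmde (illustrative)
cm = true[0] + true[1] * alpha + true[2] * delta_e \
     + rng.normal(0.0, 0.002, 200)        # "measured" Cm with noise

# Equation-error estimation = one linear least-squares solve.
X = np.column_stack([np.ones_like(alpha), alpha, delta_e])
est, *_ = np.linalg.lstsq(X, cm, rcond=None)
print(est)  # close to [0.05, -0.9, -1.3]
```

The output error (maximum likelihood) method instead integrates the dynamics and iteratively matches measured outputs, which is why agreement between the two methods is a useful check on data quality.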

  20. Evaporation from cultivated and semi-wild Sudanian Savanna in west Africa

    NASA Astrophysics Data System (ADS)

    Ceperley, Natalie C.; Mande, Theophile; van de Giesen, Nick; Tyler, Scott; Yacouba, Hamma; Parlange, Marc B.

    2017-08-01

    Rain-fed farming is the primary livelihood of semi-arid west Africa. Changes in land cover have the potential to affect precipitation, the critical resource for production. Turbulent flux measurements from two eddy-covariance towers and additional observations from a dense network of small, wireless meteorological stations combine to relate land cover (savanna forest and agriculture) to evaporation in a small (3.5 km2) catchment in Burkina Faso, west Africa. We observe larger sensible and latent heat fluxes over the savanna forest in the headwater area relative to the agricultural section of the watershed all year. Higher fluxes above the savanna forest are attributed to the greater number of exposed rocks and trees and the higher productivity of the forest compared to rain-fed, hand-farmed agricultural fields. Vegetation cover and soil moisture are found to be primary controls of the evaporative fraction. Satellite-derived vegetation index (NDVI) and soil moisture are determined to be good predictors of evaporative fraction, as indicators of the physical basis of evaporation. Our measurements provide an estimator that can be used to derive evaporative fraction when only NDVI is available. Such large-scale estimates of evaporative fraction from remotely sensed data are valuable where ground-based measurements are lacking, which is the case across the African continent and many other semi-arid areas. Evaporative fraction estimates can be combined, for example, with sensible heat from measurements of temperature variance, to provide an estimate of evaporation when only minimal meteorological measurements are available in remote regions of the world. These findings reinforce local cultural beliefs of the importance of forest fragments for climate regulation and may provide support to local decision makers and rural farmers in the maintenance of the forest areas.
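As a rough illustration of how an NDVI-based estimator of evaporative fraction can be used, the sketch below assumes a simple linear relation with hypothetical coefficients (the paper's actual fitted estimator is not reproduced here):

```python
# Hypothetical linear estimator of evaporative fraction (EF) from NDVI;
# the coefficients a and b are placeholders, not the study's fit.
def evaporative_fraction(ndvi, a=0.2, b=0.9):
    """EF = a + b*NDVI, clipped to the physically meaningful range [0, 1]."""
    return min(max(a + b * ndvi, 0.0), 1.0)

def latent_heat_flux(ndvi, available_energy):
    """LE (W/m^2) = EF * (Rn - G), the usual evaporative-fraction split."""
    return evaporative_fraction(ndvi) * available_energy

print(latent_heat_flux(0.45, 380.0))   # a savanna-like pixel
```

Combined with sensible heat from temperature-variance methods, as the abstract suggests, this yields evaporation estimates from minimal meteorological inputs.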

  1. Method to estimate drag coefficient at the air/ice interface over drifting open pack ice from remotely sensed data

    NASA Technical Reports Server (NTRS)

    Feldman, U.

    1984-01-01

    Knowledge, in near real time, of the surface drag coefficient for drifting pack ice is vital for predicting its motion. Since this coefficient is not routinely available from measurements, it must be replaced by estimates. Hence, a method for estimating this variable, as well as the drag coefficient at the water/ice interface and the ice thickness, for drifting open pack ice was developed. These estimates were derived from three-day sequences of LANDSAT-1 MSS images and surface weather charts, and from the observed minima and maxima of these variables. The method was tested with four data sets in the southeastern Beaufort Sea. Acceptable results were obtained for three data sets. Routine application of the method depends on the availability of data from an all-weather airborne or spaceborne remote sensing system producing images with high geometric fidelity and high resolution.

  2. A Bayesian Approach for Measurements of Stray Neutrons at Proton Therapy Facilities: Quantifying Neutron Dose Uncertainty.

    PubMed

    Dommert, M; Reginatto, M; Zboril, M; Fiedler, F; Helmbrecht, S; Enghardt, W; Lutz, B

    2017-11-28

    Bonner sphere measurements are typically analyzed using unfolding codes. It is well known that it is difficult to obtain reliable uncertainty estimates from standard unfolding procedures. An alternative approach is to analyze the data using Bayesian parameter estimation. This method provides reliable estimates of the uncertainties of neutron spectra, leading to rigorous estimates of the uncertainty of the dose. We extend previous Bayesian approaches and apply the method to stray neutrons in proton therapy environments by introducing a new parameterized model which describes the main features of the expected neutron spectra. The parameterization is based on information that is available from measurements and detailed Monte Carlo simulations. The approach was validated with the results of an experiment using Bonner spheres carried out at the experimental hall of the OncoRay proton therapy facility in Dresden. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
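The core of the Bayesian approach is computing a posterior over the parameters of a spectrum model rather than unfolding bin by bin. A deliberately reduced sketch (a single amplitude parameter and a hypothetical response vector, not the paper's parameterization) shows how a posterior mean and a rigorous uncertainty follow from a gridded posterior:

```python
import numpy as np

# Hypothetical sphere responses per unit fluence amplitude `a`, and
# simulated readings with Gaussian uncertainties.
response = np.array([1.0, 2.5, 4.0, 3.2])
a_true = 2.0
readings = a_true * response + np.array([0.1, -0.2, 0.15, -0.1])
sigma = np.full(4, 0.2)                      # reading uncertainties

a_grid = np.linspace(0.5, 4.0, 2001)         # flat prior over this range
loglike = np.array([-0.5 * np.sum(((readings - a * response) / sigma) ** 2)
                    for a in a_grid])
post = np.exp(loglike - loglike.max())       # unnormalized posterior
post /= post.sum()

a_mean = np.sum(a_grid * post)                           # posterior mean
a_std = np.sqrt(np.sum((a_grid - a_mean) ** 2 * post))   # posterior std
print(a_mean, a_std)
```

With a multi-parameter spectrum model the grid is replaced by MCMC sampling, but the logic, and the source of the rigorous dose uncertainty, is the same.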

  3. Wave Period and Coastal Bathymetry Estimations from Satellite Images

    NASA Astrophysics Data System (ADS)

    Danilo, Celine; Melgani, Farid

    2016-08-01

    We present an approach for wave period and coastal water depth estimation. The approach, based on wave observations, is entirely independent of ancillary data and can in principle be applied to SAR or optical images. To demonstrate its feasibility, we apply the method to more than 50 Sentinel-1A images of the Hawaiian Islands, well known for their long waves. Six wave buoys are available to compare our results with in-situ measurements. The results show that half of the images were unsuitable for applying the method (no swell, or a wavelength too small to be captured by the SAR). For the other half, 78% of the estimated wave periods are in accordance with the buoy measurements. In addition, we present preliminary results for the estimation of coastal water depth from a Landsat-8 image (with characteristics close to Sentinel-2A). With a squared correlation coefficient of 0.7 against ground-truth measurements, this approach shows promising results for monitoring coastal bathymetry.
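Depth retrieval of this kind typically rests on the linear dispersion relation w^2 = g*k*tanh(k*h): with the wave period known (e.g. from offshore observations) and the local wavelength measured in the image, the relation can be inverted for depth. A sketch under those standard assumptions (not necessarily the paper's exact inversion):

```python
import math

def depth_from_wave(T, L, g=9.81):
    """Invert the linear dispersion relation w^2 = g*k*tanh(k*h) for water
    depth h (m), given wave period T (s) and local wavelength L (m).
    Only valid where w^2/(g*k) < 1, i.e. outside deep-water conditions."""
    w = 2.0 * math.pi / T          # angular frequency
    k = 2.0 * math.pi / L          # wavenumber
    ratio = w * w / (g * k)        # equals tanh(k*h)
    if ratio >= 1.0:
        raise ValueError("deep water: depth not recoverable from L and T")
    return math.atanh(ratio) / k

# Example: 14 s swell whose wavelength has shortened to 150 m near shore.
print(round(depth_from_wave(14.0, 150.0), 1))
```

Because swell shortens as it shoals while its period is conserved, mapping wavelength across an image yields a bathymetric profile.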

  4. Optical Coherence Tomography Based Estimates of Crystalline Lens Volume, Equatorial Diameter, and Plane Position.

    PubMed

    Martinez-Enriquez, Eduardo; Sun, Mengchan; Velasco-Ocana, Miriam; Birkenfeld, Judith; Pérez-Merino, Pablo; Marcos, Susana

    2016-07-01

    Measurement of crystalline lens geometry in vivo is critical to optimize performance of state-of-the-art cataract surgery. We used custom-developed quantitative anterior segment optical coherence tomography (OCT) and developed dedicated algorithms to estimate lens volume (VOL), equatorial diameter (DIA), and equatorial plane position (EPP). The method was validated ex vivo in 27 human donor (19-71 years of age) lenses, which were imaged in three-dimensions by OCT. In vivo conditions were simulated assuming that only the information within a given pupil size (PS) was available. A parametric model was used to estimate the whole lens shape from PS-limited data. The accuracy of the estimated lens VOL, DIA, and EPP was evaluated by comparing estimates from the whole lens data and PS-limited data ex vivo. The method was demonstrated in vivo using 2 young eyes during accommodation and 2 cataract eyes. Crystalline lens VOL was estimated within 96% accuracy (average estimation error across lenses ± standard deviation: 9.30 ± 7.49 mm3). Average estimation errors in EPP were below 40 ± 32 μm, and below 0.26 ± 0.22 mm in DIA. Changes in lens VOL with accommodation were not statistically significant (2-way ANOVA, P = 0.35). In young eyes, DIA decreased and EPP increased statistically significantly with accommodation (P < 0.001) by 0.14 mm and 0.13 mm, respectively, on average across subjects. In cataract eyes, VOL = 205.5 mm3, DIA = 9.57 mm, and EPP = 2.15 mm on average. Quantitative OCT with dedicated image processing algorithms allows estimation of human crystalline lens volume, diameter, and equatorial lens position, as validated from ex vivo measurements, where entire lens images are available.

  5. The modular modality frame model: continuous body state estimation and plausibility-weighted information fusion.

    PubMed

    Ehrenfeld, Stephan; Butz, Martin V

    2013-02-01

    Humans show admirable capabilities in movement planning and execution. They can perform complex tasks in various contexts, using the available sensory information very effectively. Body models and continuous body state estimations appear necessary to realize such capabilities. We introduce the Modular Modality Frame (MMF) model, which maintains a highly distributed, modularized body model, continuously updating modularized probabilistic body state estimations over time. Modularization is realized with respect to modality frames, that is, sensory modalities in particular frames of reference, and with respect to particular body parts. We evaluate MMF performance on a simulated, nine-degree-of-freedom arm in 3D space. The results show that MMF is able to maintain accurate body state estimations despite high sensor and motor noise. Moreover, by comparing the sensory information available in different modality frames, MMF can identify faulty sensory measurements on the fly. In the near future, applications to lightweight robot control should be pursued. Moreover, MMF may be enhanced with neural encodings by introducing neural population codes and learning techniques. Finally, more dexterous goal-directed behavior should be realized by exploiting the available redundant state representations.
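Plausibility-weighted fusion of redundant state estimates can be illustrated by its simplest instance, inverse-variance (precision) weighting; this is an assumption for illustration, not the paper's exact scheme:

```python
import numpy as np

def fuse(estimates, variances):
    """Precision-weighted fusion of redundant state estimates: each
    estimate is weighted by its plausibility (inverse variance)."""
    estimates = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    mean = np.sum(w * estimates) / np.sum(w)
    var = 1.0 / np.sum(w)
    return mean, var

# Hand position along one axis, estimated in three modality frames.
mean, var = fuse([0.52, 0.48, 0.55], [0.01, 0.04, 0.02])
print(mean, var)   # fused estimate is tighter than any single one
```

An implausible (faulty) measurement shows up as a large deviation relative to its claimed variance, which is the basis for detecting faulty sensors on the fly.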

  6. River Runoff Estimates on the Basis of Satellite-Derived Surface Currents and Water Levels

    NASA Astrophysics Data System (ADS)

    Gruenler, S.; Romeiser, R.; Stammer, D.

    2007-12-01

    One promising technique for river runoff estimates from space is the retrieval of surface currents on the basis of synthetic aperture radar along-track interferometry (ATI). The German satellite TerraSAR-X, which was launched in June 2007, permits current measurements by ATI in an experimental mode of operation. Based on numerical simulations, we present first findings of a research project in which the potential of satellite measurements of various parameters with different temporal and spatial sampling characteristics is evaluated and a dedicated data synthesis system for river discharge estimates is developed. We address the achievable accuracy and limitations of such estimates for different local flow conditions at selected test sites. High-resolution three-dimensional current fields in the Elbe river (Germany) from a numerical model of the German Federal Waterways Engineering and Research Institute (BAW) are used as reference data set and input for simulations of a variety of possible measuring and data interpretation strategies to be evaluated. For example, runoff estimates on the basis of measured surface current fields and river widths from TerraSAR-X and water levels from radar altimetry are simulated. Despite the simplicity of some of the applied methods, the results provide quite comprehensive pictures of the Elbe river runoff dynamics. Although the satellite-based river runoff estimates exhibit a lower accuracy in comparison to traditional gauge measurements, the proposed measuring strategies are quite promising for the monitoring of river discharge dynamics in regions where only sparse in-situ measurements are available. We discuss the applicability to a number of major rivers around the world.

  7. Source Term Estimation of Radioxenon Released from the Fukushima Dai-ichi Nuclear Reactors Using Measured Air Concentrations and Atmospheric Transport Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Biegalski, S.; Bowyer, Ted W.

    2014-01-01

    Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout from the Fukushima Daiichi nuclear accident in March 2011. Atmospheric transport modeling (ATM) of plumes of noble gases and particulates was performed soon after the accident to determine plausible detection locations of any radioactive releases to the atmosphere. We combine sampling data from multiple International Monitoring System (IMS) locations in a new way to estimate the magnitude and time sequence of the releases. Dilution factors from the modeled plume at five different detection locations were combined with 57 atmospheric concentration measurements of 133-Xe taken from March 18 to March 23 to estimate the source term. This approach estimates that 59% of the 1.24×10^19 Bq of 133-Xe present in the reactors at the time of the earthquake was released to the atmosphere over a three-day period. Source term estimates from combinations of detection sites have lower spread than estimates based on measurements at single detection sites. Sensitivity cases based on data from four or more detection locations bound the source term between 35% and 255% of the available xenon inventory.
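The inversion described above can be sketched as a linear least-squares problem: each measured concentration is a dilution-factor-weighted sum of the releases in a small number of time intervals. The numbers below are illustrative only; in practice a non-negativity constraint and measurement noise handling would be added.

```python
import numpy as np

# dilution[i, j]: modeled dilution factor linking release interval j
# to air sample i (all values hypothetical).
dilution = np.array([[2.0e-13, 0.0,     0.0],
                     [1.1e-13, 3.0e-13, 0.0],
                     [4.0e-14, 1.5e-13, 2.5e-13],
                     [1.0e-14, 6.0e-14, 1.8e-13],
                     [0.0,     2.0e-14, 9.0e-14]])
release_true = np.array([4.0e18, 2.0e18, 1.0e18])   # Bq per interval
measured = dilution @ release_true                   # noise-free for clarity

# Least-squares inversion recovers the per-interval releases.
release_est, *_ = np.linalg.lstsq(dilution, measured, rcond=None)
print(release_est)
```

Using several detection sites at once overdetermines the system, which is why multi-site combinations spread less than single-site estimates.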

  8. Vision-Based SLAM System for Unmanned Aerial Vehicles

    PubMed Central

    Munguía, Rodrigo; Urzua, Sarquis; Bolea, Yolanda; Grau, Antoni

    2016-01-01

    The present paper describes a vision-based simultaneous localization and mapping system to be applied to Unmanned Aerial Vehicles (UAVs). The main contribution of this work is to propose a novel estimator relying on an Extended Kalman Filter. The estimator is designed to fuse the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle: position and orientation and their first derivatives, as well as the location of the landmarks observed by the camera. The position sensor is used only during the initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks is used to perform fully vision-based navigation when the position sensor is not available. Experimental results obtained with simulations and real data show the benefits of including camera measurements in the system. In this sense, the estimation of the trajectory of the vehicle is considerably improved compared with the estimates obtained using only the measurements from the position sensor, which are typically low-rate and highly noisy. PMID:26999131
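The role of the position sensor during initialization, recovering the metric scale of the monocular map, can be illustrated with a least-squares scale fit between up-to-scale visual positions and GPS positions (synthetic data; the paper embeds this in an EKF rather than solving it separately):

```python
import numpy as np

# Monocular SLAM yields positions only up to an unknown scale factor, so
# during initialization the scale can be estimated against GPS by least
# squares: s = sum(gps . vo) / sum(vo . vo).
rng = np.random.default_rng(2)
true_scale = 2.5
vo = rng.uniform(-10.0, 10.0, (50, 2))                 # up-to-scale positions
gps = true_scale * vo + rng.normal(0.0, 0.3, (50, 2))  # noisy metric positions

scale = np.sum(gps * vo) / np.sum(vo * vo)
print(scale)   # close to 2.5
```

Once the scale is fixed, the landmark map is metric and navigation can continue on camera measurements alone.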

  9. Recharge and groundwater models: An overview

    USGS Publications Warehouse

    Sanford, W.

    2002-01-01

    Recharge is a fundamental component of groundwater systems, and in groundwater-modeling exercises recharge is either measured and specified or estimated during model calibration. The most appropriate way to represent recharge in a groundwater model depends upon both physical factors and study objectives. Where the water table is close to the land surface, as in humid climates or regions with low topographic relief, a constant-head boundary condition is used. Conversely, where the water table is relatively deep, as in drier climates or regions with high relief, a specified-flux boundary condition is used. In most modeling applications, mixed-type conditions are more effective, or a combination of the different types can be used. The relative distribution of recharge can be estimated from water-level data only, but flux observations must be incorporated in order to estimate rates of recharge. Flux measurements are based on either Darcian velocities (e.g., stream base-flow) or seepage velocities (e.g., groundwater age). In order to estimate the effective porosity independently, both types of flux measurements must be available. Recharge is often estimated more efficiently when automated inverse techniques are used. Other important applications are the delineation of areas contributing recharge to wells and the estimation of paleorecharge rates using carbon-14.
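The two flux types mentioned above can be turned into back-of-envelope recharge estimates. The sketch below uses illustrative numbers and the common simplifying assumptions (base flow integrates recharge over the basin; vertical piston flow for the age-based estimate); equating the two estimates is what allows effective porosity to be constrained independently.

```python
def recharge_from_baseflow(baseflow_m3_per_day, basin_area_m2):
    """Darcian-flux estimate: recharge rate (m/day), assuming stream
    base flow integrates recharge over the contributing basin."""
    return baseflow_m3_per_day / basin_area_m2

def recharge_from_age(porosity, depth_m, age_days):
    """Seepage-velocity estimate: recharge rate (m/day) from groundwater
    age at a given depth, assuming vertical piston flow."""
    return porosity * depth_m / age_days

r1 = recharge_from_baseflow(5.0e4, 2.0e8)     # 0.00025 m/day
r2 = recharge_from_age(0.25, 10.0, 10000.0)   # 0.00025 m/day
print(r1, r2)
```

When both flux types agree, as here by construction, the porosity used in the age-based formula is consistent with the Darcian estimate; a mismatch signals that porosity (or the flow assumption) needs revision.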

  10. Use of satellite and modeled soil moisture data for predicting event soil loss at plot scale

    NASA Astrophysics Data System (ADS)

    Todisco, F.; Brocca, L.; Termite, L. F.; Wagner, W.

    2015-09-01

    The potential of coupling soil moisture and a Universal Soil Loss Equation-based (USLE-based) model for event soil loss estimation at plot scale is carefully investigated at the Masse area, in central Italy. The derived model, named Soil Moisture for Erosion (SM4E), is applied by considering the unavailability of in situ soil moisture measurements, using instead the data predicted by a soil water balance model (SWBM) and derived from satellite sensors, i.e., the Advanced SCATterometer (ASCAT). The soil loss estimation accuracy is validated using in situ measurements in which event observations at plot scale are available for the period 2008-2013. The results showed that including soil moisture observations in the event rainfall-runoff erosivity factor of the USLE enhances the capability of the model to account for variations in event soil losses, the soil moisture being an effective alternative to the estimated runoff in the prediction of the event soil loss at Masse. The agreement between observed and estimated soil losses (through SM4E) is fairly satisfactory, with a determination coefficient (log-scale) equal to ~ 0.35 and a root mean square error (RMSE) of ~ 2.8 Mg ha-1. These results are particularly significant for the operational estimation of soil losses. Indeed, soil moisture is currently a relatively simple measurement at the field scale, and remote sensing data are also widely available on a global scale. Through satellite data, there is the potential of applying the SM4E model for large-scale monitoring and quantification of the soil erosion process.
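Schematically (this is NOT the published SM4E formulation, just an assumption-laden sketch of the idea), including antecedent soil moisture in the event erosivity term of a USLE-type product might look like:

```python
# Schematic only: a USLE-type multiplicative model in which the event
# erosivity is modulated by antecedent relative saturation. All factor
# values below are illustrative placeholders.
def event_soil_loss(erosivity, rel_saturation, K, LS, C, P):
    """Event soil loss (Mg/ha): erosivity scaled by antecedent relative
    saturation (0 = dry, 1 = saturated), times the usual USLE factors."""
    effective_erosivity = erosivity * rel_saturation
    return effective_erosivity * K * LS * C * P

# The same storm over wet vs. dry antecedent conditions.
wet = event_soil_loss(250.0, 0.8, K=0.03, LS=1.2, C=0.4, P=1.0)
dry = event_soil_loss(250.0, 0.3, K=0.03, LS=1.2, C=0.4, P=1.0)
print(wet, dry)   # wetter antecedent state -> larger predicted loss
```

The point of the sketch is the substitution itself: satellite or modelled soil moisture stands in for estimated runoff in the event erosivity term.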

  11. Use of satellite and modelled soil moisture data for predicting event soil loss at plot scale

    NASA Astrophysics Data System (ADS)

    Todisco, F.; Brocca, L.; Termite, L. F.; Wagner, W.

    2015-03-01

    The potential of coupling soil moisture and a USLE-based model for event soil loss estimation at plot scale is carefully investigated at the Masse area, in central Italy. The derived model, named Soil Moisture for Erosion (SM4E), is applied by considering the unavailability of in situ soil moisture measurements, using instead the data predicted by a soil water balance model (SWBM) and derived from satellite sensors, i.e. the Advanced SCATterometer (ASCAT). The soil loss estimation accuracy is validated using in situ measurements in which event observations at plot scale are available for the period 2008-2013. The results showed that including soil moisture observations in the event rainfall-runoff erosivity factor of the RUSLE/USLE enhances the capability of the model to account for variations in event soil losses, the soil moisture being an effective alternative to the estimated runoff in the prediction of the event soil loss at Masse. The agreement between observed and estimated soil losses (through SM4E) is fairly satisfactory, with a determination coefficient (log-scale) equal to ~ 0.35 and a root-mean-square error (RMSE) of ~ 2.8 Mg ha-1. These results are particularly significant for the operational estimation of soil losses. Indeed, soil moisture is currently a relatively simple measurement at the field scale, and remote sensing data are also widely available on a global scale. Through satellite data, there is the potential of applying the SM4E model for large-scale monitoring and quantification of the soil erosion process.

  12. A new NDVI measure that overcomes data sparsity in cloud-covered regions predicts annual variation in ground-based estimates of high arctic plant productivity

    NASA Astrophysics Data System (ADS)

    Rune Karlsen, Stein; Anderson, Helen B.; van der Wal, René; Bremset Hansen, Brage

    2018-02-01

    Efforts to estimate plant productivity using satellite data can be frustrated by the presence of cloud cover. We developed a new method to overcome this problem, focussing on the high-arctic archipelago of Svalbard, where extensive cloud cover during the growing season can prevent plant productivity from being estimated over large areas. We used a field-based time series (2000-2009) of live aboveground vascular plant biomass data and a recently processed cloud-free MODIS Normalised Difference Vegetation Index (NDVI) data set (2000-2014) to estimate, on a pixel-by-pixel basis, the onset of plant growth. We then summed NDVI values from onset of spring to the average time of peak NDVI to give an estimate of annual plant productivity. This remotely sensed productivity measure was then compared, at two different spatial scales, with the peak plant biomass field data. At both the local scale, surrounding the field data site, and the larger regional scale, our NDVI measure was found to predict plant biomass (adjusted R2 = 0.51 and 0.44, respectively). The commonly used ‘maximum NDVI’ plant productivity index showed no relationship with plant biomass, likely because some years had very few cloud-free images available during the peak growing season. Thus, we propose this new summed NDVI from onset of spring to time of peak NDVI as a proxy of large-scale plant productivity for regions such as the Arctic, where climatic conditions restrict the availability of cloud-free images.
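The proposed proxy can be sketched directly: find the onset of growth, here with a simple NDVI threshold (the paper estimates onset per pixel from the time series), and sum NDVI from onset to the time of peak NDVI:

```python
import numpy as np

def summed_ndvi(ndvi_series, onset_threshold=0.3):
    """Sum NDVI from onset of growth (first value at or above a simple
    threshold; an illustrative stand-in for the paper's onset detection)
    to the time of peak NDVI."""
    ndvi = np.asarray(ndvi_series, dtype=float)
    above = np.flatnonzero(ndvi >= onset_threshold)
    if above.size == 0:
        return 0.0            # no growth detected this season
    onset = above[0]
    peak = int(np.argmax(ndvi))
    if peak < onset:
        return 0.0
    return float(ndvi[onset:peak + 1].sum())

# One season of hypothetical cloud-free composites for a pixel.
season = [0.12, 0.15, 0.28, 0.41, 0.55, 0.63, 0.60, 0.48, 0.30]
print(summed_ndvi(season))
```

Unlike the maximum-NDVI index, the sum degrades gracefully when a few composites are missing, which is the motivation given in the abstract.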

  13. Estimating big bluestem albedo from directional reflectance measurements

    NASA Technical Reports Server (NTRS)

    Irons, J. R.; Ranson, K. J.; Daughtry, C. S. T.

    1988-01-01

    Multidirectional reflectance factor measurements acquired in the summer of 1986 are used to make estimates of big bluestem grass albedo, evaluating the variation of albedo with changes in solar zenith angle and phenology. On any given day, the albedo was observed to increase by at least 19 percent as the solar zenith angle increased. Changes in albedo were found to correspond to changes in the green leaf area index of the grass canopy. Estimates of albedo made using reflectance data acquired within only one or two azimuthal planes and over a restricted range of view zenith angles were evaluated and compared to 'true' albedos derived from all available reflectance factor data. It was found that even a limited amount of multiple-direction reflectance data was preferable to a single nadir reflectance factor for the estimation of prairie grass albedo.
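Albedo estimation from directional reflectance factors amounts to a cosine-weighted integral over the viewing hemisphere. A sketch with a hypothetical reflectance-factor function; the normalization makes a Lambertian surface return its own reflectance factor, which is why sparse angular sampling biases the estimate for non-Lambertian canopies:

```python
import numpy as np

def albedo(refl_factor, n_theta=90, n_phi=180):
    """Cosine-weighted hemispherical average of a reflectance-factor
    function refl_factor(theta_v, phi_v), midpoint rule in both angles."""
    d_theta = (np.pi / 2) / n_theta
    theta = np.linspace(0.0, np.pi / 2, n_theta, endpoint=False) + d_theta / 2
    phi = np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False)
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    w = np.cos(th) * np.sin(th)          # projected solid-angle weight
    r = refl_factor(th, ph)
    return np.sum(r * w) / np.sum(w)

# Hypothetical non-Lambertian surface: brighter toward large view angles.
print(albedo(lambda th, ph: 0.2 + 0.05 * np.sin(th)))
```

A single nadir reflectance factor corresponds to sampling this integral at one point, which is the limitation the study quantifies.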

  14. Technical note: Using distributed temperature sensing for Bowen ratio evaporation measurements

    NASA Astrophysics Data System (ADS)

    Schilperoort, Bart; Coenders-Gerrits, Miriam; Luxemburg, Willem; Jiménez Rodríguez, César; Cisneros Vaca, César; Savenije, Hubert

    2018-01-01

    Rapid improvements in the precision and spatial resolution of distributed temperature sensing (DTS) technology now allow its use in hydrological and atmospheric sciences. One previously introduced application is the use of DTS for measuring the Bowen ratio (BR-DTS) to estimate the sensible and latent heat flux. The Bowen ratio is derived from DTS-measured vertical profiles of the air temperature and wet-bulb temperature. However, in previous research the measured temperatures were not validated, and the cables were not shielded from solar radiation. Additionally, the BR-DTS method had not been tested above a forest before, where temperature gradients are small and energy storage in the air column becomes important. In this paper the accuracy of the wet-bulb and air temperature measurements of the DTS is verified, and the resulting Bowen ratio and heat fluxes are compared to eddy covariance data. The performance of BR-DTS was tested on a 46 m high tower in a mixed forest in the centre of the Netherlands in August 2016. The average tree height is 26 to 30 m, and the temperatures are measured below, in, and above the canopy. Using the vertical temperature profiles, the storage of latent and sensible heat in the air column was calculated. We found a significant effect of solar radiation on the temperature measurements, leading to deviations of up to 3 K. By installing screens, the error caused by sunlight was reduced to under 1 K. Wind speed appears to have a minimal effect on the measured wet-bulb temperature, both below and above the canopy. After a simple quality control, the Bowen ratio measured by DTS correlates well with eddy covariance (EC) estimates (r2 = 0.59). The average energy balance closure between BR-DTS and EC is good, with a mean underestimation of 3.4 W m-2 by the BR-DTS method. However, during daytime the BR-DTS method overestimates the available energy, and during night-time it estimates the available energy to be more negative. 
This difference could be related to biomass heat storage, which is neglected in this study. The BR-DTS method overestimates the latent heat flux on average by 18.7 W m-2, with RMSE = 90 W m-2. The sensible heat flux is underestimated on average by 10.6 W m-2, with RMSE = 76 W m-2. Estimates from BR-DTS can be improved once the uncertainties in the energy balance are reduced. Alternatively, applying, for example, Monin-Obukhov similarity theory could provide independent estimates of the sensible heat flux, which would make determination of the highly uncertain net available energy redundant.
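The BR-DTS flux partitioning itself is straightforward once two-level dry- and wet-bulb temperatures are available: vapour pressures follow from the psychrometric relation, the Bowen ratio from the temperature and vapour-pressure gradients, and the available energy is split accordingly. A sketch with illustrative readings (Tetens saturation vapour pressure; sea-level psychrometric constant):

```python
import math

GAMMA = 0.066  # psychrometric constant (kPa/K), near sea level

def sat_vp(t_c):
    """Saturation vapour pressure (kPa), Tetens/Magnus form."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def actual_vp(t_c, tw_c):
    """Actual vapour pressure (kPa) from dry- and wet-bulb temperature."""
    return sat_vp(tw_c) - GAMMA * (t_c - tw_c)

def bowen_fluxes(t_low, tw_low, t_high, tw_high, available_energy):
    """Split available energy (Rn - G, W/m^2) into sensible (H) and
    latent (LE) heat flux from two-level dry/wet-bulb measurements."""
    d_t = t_low - t_high                              # temperature gradient
    d_e = actual_vp(t_low, tw_low) - actual_vp(t_high, tw_high)
    beta = GAMMA * d_t / d_e                          # Bowen ratio
    le = available_energy / (1.0 + beta)
    h = available_energy - le
    return h, le, beta

# Illustrative daytime readings at a lower and an upper level (deg C).
h, le, beta = bowen_fluxes(24.0, 19.0, 23.5, 18.5, 400.0)
print(h, le, beta)
```

The sensitivity of beta to the small gradient d_e is why measurement accuracy of the wet-bulb profile, and the net available energy itself, dominate the error budget discussed above.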

  15. Skin Friction at Very High Reynolds Numbers in the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Watson, Ralph D.; Anders, John B.; Hall, Robert M.

    2006-01-01

    Skin friction coefficients were derived from measurements using standard measurement technologies on an axisymmetric cylinder in the NASA Langley National Transonic Facility (NTF) at Mach numbers from 0.2 to 0.85. The pressure gradient was nominally zero, the wall temperature was nominally adiabatic, and the ratio of boundary layer thickness to model diameter within the measurement region was 0.10 to 0.14, varying with distance along the model. Reynolds numbers based on momentum thicknesses ranged from 37,000 to 605,000. The measurements approximately doubled the range of available data for flat plate skin friction coefficients. Three different techniques were used to measure surface shear. The maximum error of Preston tube measurements was estimated to be 2.5 percent, while that of Clauser derived measurements was estimated to be approximately 5 percent. Direct measurements by skin friction balance proved to be subject to large errors and were not considered reliable.

  16. Estimation of Hidden State Variables of the Intracranial System Using Constrained Nonlinear Kalman Filters

    PubMed Central

    Nenov, Valeriy; Bergsneider, Marvin; Glenn, Thomas C.; Vespa, Paul; Martin, Neil

    2007-01-01

    Impeded by the rigid skull, assessment of physiological variables of the intracranial system is difficult. A hidden state estimation approach is used in the present work to facilitate the estimation of unobserved variables from available clinical measurements including intracranial pressure (ICP) and cerebral blood flow velocity (CBFV). The estimation algorithm is based on a modified nonlinear intracranial mathematical model, whose parameters are first identified in an offline stage using a nonlinear optimization paradigm. Following the offline stage, an online filtering process is performed using a nonlinear Kalman filter (KF)-like state estimator that is equipped with a new way of deriving the Kalman gain satisfying the physiological constraints on the state variables. The proposed method is then validated by comparing different state estimation methods and input/output (I/O) configurations using simulated data. It is also applied to a set of CBFV, ICP and arterial blood pressure (ABP) signal segments from brain injury patients. The results indicated that the proposed constrained nonlinear KF achieved the best performance among the evaluated state estimators and that the state estimator combined with the I/O configuration that has ICP as the measured output can potentially be used to estimate CBFV continuously. Finally, the state estimator combined with the I/O configuration that has both ICP and CBFV as outputs can potentially estimate the lumped cerebral arterial radii, which are not measurable in a typical clinical environment. PMID:17281533
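    The constraint-handling idea can be illustrated with a much simpler stand-in than the paper's constrained Kalman gain derivation: perform a standard Kalman measurement update, then project the updated state onto its physiological bounds. A minimal sketch, with all matrices, bounds, and values hypothetical:

```python
import numpy as np

def constrained_kf_update(x_prior, P_prior, z, H, R, lo, hi):
    """One linear KF measurement update, with the estimate clipped to [lo, hi].
    Note: the paper derives a gain that satisfies the constraints directly;
    clipping after the update is a simpler illustrative alternative."""
    S = H @ P_prior @ H.T + R                   # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)    # unconstrained update
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    x_post = np.clip(x_post, lo, hi)            # enforce physiological bounds
    return x_post, P_post

# example: a scalar pressure-like state constrained to be nonnegative,
# updated with an implausible negative measurement
x, P = constrained_kf_update(np.array([1.0]), np.eye(1) * 4.0,
                             z=np.array([-3.0]), H=np.eye(1),
                             R=np.eye(1), lo=0.0, hi=np.inf)
```

Without the projection the update would drive the state negative; with it, the estimate stays in the physiologically admissible region.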

  17. Matgéné: a program to develop job-exposure matrices in the general population in France.

    PubMed

    Févotte, Joëlle; Dananché, Brigitte; Delabre, Laurène; Ducamp, Stephane; Garras, Loïc; Houot, Marie; Luce, Danièle; Orlowski, Ewa; Pilorget, Corinne; Lacourt, Aude; Brochard, Patrick; Goldberg, Marcel; Imbernon, Ellen

    2011-10-01

    Matgéné is a program to develop job-exposure matrices (JEMs) adapted to the general population in France for the period since 1950. The aim is to create retrospective exposure assessment tools for estimating the prevalence of occupational exposure to various agents that can then be correlated to health-related parameters. JEMs were drawn up by a team of six industrial hygienists who based their assessments on available occupational measurement, economic and statistical data, and several thousand job descriptions from epidemiological studies performed in France since 1984. Each JEM is specific to one agent, assessing exposure for a set of homogeneous combinations (occupation × activity × period) according to two occupational classifications (ISCO 1968 and PCS 1994) and one economic activities classification (NAF 2000). The cells of the JEM carry an estimate of the probability and level of exposure. Level is estimated by the duration and intensity of exposure-linked tasks or by description of the tasks when exposure measurement data are lacking for the agent in question. The JEMs were applied to a representative sample of the French population in 2007, and the prevalence of each exposure was estimated in various population groups. All documents and data are available on a dedicated website. By the end of 2010, 18 JEMs had been developed and eight were under development, concerning a variety of chemical agents: organic and mineral dust, mineral fibers, and solvents. Through implementation in the French population, exposure prevalences were calculated at different dates and for complete careers, and attributable risk fractions were estimated for certain pathologies. Some of these results were validated by comparison with those of other programs. Initial Matgéné JEM results are in agreement with the French and international literature, thus validating the methodology. The precision of exposure estimates, however, varies between agents, depending on the amount of exposure measurement data available. These JEMs are important epidemiological tools, and improving their quality will require investment in occupational health data collection, especially in the case of low-level exposures.

  18. A method for predicting service life of zinc rich primers on carbon steel

    NASA Technical Reports Server (NTRS)

    Hoppesch, C. W.

    1986-01-01

    The service life of zinc-rich primers on carbon steel can be estimated by immersing a primer-coated glass slide into an aqueous copper sulfate solution and measuring the amount of zinc that reacts with the copper in 15 minutes. This zinc availability test was used to evaluate eleven commercially available primers for which marine beach exposure data were available from previous programs. Results were evaluated and a correlation between zinc availability and ASTM rust grade was shown.

  19. Survival estimates for Florida manatees from the photo-identification of individuals

    USGS Publications Warehouse

    Langtimm, C.A.; Beck, C.A.; Edwards, H.H.; Fick-Child, K. J.; Ackerman, B.B.; Barton, S.L.; Hartley, W.C.

    2004-01-01

    We estimated adult survival probabilities for the endangered Florida manatee (Trichechus manatus latirostris) in four regional populations using photo-identification data and open-population capture-recapture statistical models. The mean annual adult survival probability over the most recent 10-yr period of available estimates was as follows: Northwest - 0.956 (SE 0.007), Upper St. Johns River - 0.960 (0.011), Atlantic Coast - 0.937 (0.008), and Southwest - 0.908 (0.019). Estimates of temporal variance independent of sampling error, calculated from the survival estimates, indicated constant survival in the Upper St. Johns River, true temporal variability in the Northwest and Atlantic Coast, and large sampling variability obscuring estimates for the Southwest. Calf and subadult survival probabilities were estimated for the Upper St. Johns River from the only available data for known-aged individuals: 0.810 (95% CI 0.727-0.873) for 1st year calves, 0.915 (0.827-0.960) for 2nd year calves, and 0.969 (0.946-0.982) for manatees 3 yr or older. These estimates of survival probabilities and temporal variance, in conjunction with estimates of reproduction probabilities from photo-identification data, can be used to model manatee population dynamics, estimate population growth rates, and provide an integrated measure of regional status.
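    The separation of "temporal variance independent of sampling error" from total variance can be sketched with a simplified method-of-moments calculation: process variance is approximately the variance of the annual estimates minus the mean squared standard error. The study used more formal variance-components methods; the survival values and standard errors below are made up for illustration:

```python
import numpy as np

def process_variance(estimates, standard_errors):
    """Rough temporal (process) variance: total variance across years minus
    the average sampling variance, truncated at zero."""
    total_var = np.var(estimates, ddof=1)            # variance across years
    sampling_var = np.mean(np.square(standard_errors))
    return max(total_var - sampling_var, 0.0)

phi = np.array([0.95, 0.96, 0.93, 0.97, 0.94])       # hypothetical annual survival
se = np.array([0.01, 0.01, 0.02, 0.01, 0.015])       # hypothetical standard errors
sigma2 = process_variance(phi, se)
```

When the sampling variance is as large as the total variance (as the abstract notes for the Southwest), this difference goes to zero and true temporal variability cannot be resolved.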

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bisio, Alessandro; D'Ariano, Giacomo Mauro; Perinotti, Paolo

    We analyze quantum algorithms for cloning of a quantum measurement. Our aim is to mimic two uses of a device performing an unknown von Neumann measurement with a single use of the device. When the unknown device has to be used before the bipartite state to be measured is available we talk about 1→2 learning of the measurement, otherwise the task is called 1→2 cloning of a measurement. We perform the optimization for both learning and cloning for arbitrary dimension d of the Hilbert space. For 1→2 cloning we also propose a simple quantum network that achieves the optimal fidelity. The optimal fidelity for 1→2 learning just slightly outperforms the estimate-and-prepare strategy in which one first estimates the unknown measurement and, depending on the result, suitably prepares the duplicate.

  1. Assessment of reduced-order unscented Kalman filter for parameter identification in 1-dimensional blood flow models using experimental data.

    PubMed

    Caiazzo, A; Caforio, Federica; Montecinos, Gino; Muller, Lucas O; Blanco, Pablo J; Toro, Eleuterio F

    2016-10-25

    This work presents a detailed investigation of a parameter estimation approach based on the reduced-order unscented Kalman filter (ROUKF) in the context of 1-dimensional blood flow models. In particular, the main aims of this study are (1) to investigate the effects of using real measurements versus synthetic data for the estimation procedure (i.e., numerical results of the same in silico model, perturbed with noise) and (2) to identify potential difficulties and limitations of the approach in clinically realistic applications, in order to assess the applicability of the filter to such setups. For these purposes, the present numerical study is based on a recently published in vitro model of the arterial network, for which experimental flow and pressure measurements are available at a few selected locations. To mimic clinically relevant situations, we focus on the estimation of terminal resistances and arterial wall parameters related to vessel mechanics (Young's modulus and wall thickness) using few experimental observations (at most a single pressure or flow measurement per vessel). In all cases, we first perform a theoretical identifiability analysis based on the generalized sensitivity function, and then compare the results obtained with the ROUKF, using either synthetic or experimental data, to results obtained using reference parameters and to the available measurements. Copyright © 2016 John Wiley & Sons, Ltd.

  2. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

  3. Optimization of advanced Wiener estimation methods for Raman reconstruction from narrow-band measurements in the presence of fluorescence background

    PubMed Central

    Chen, Shuo; Ong, Yi Hong; Lin, Xiaoqian; Liu, Quan

    2015-01-01

    Raman spectroscopy has shown great potential in biomedical applications. However, intrinsically weak Raman signals cause slow data acquisition, especially in Raman imaging. This problem can be overcome by narrow-band Raman imaging followed by spectral reconstruction. Our previous study has shown that Raman spectra free of fluorescence background can be reconstructed from narrow-band Raman measurements using traditional Wiener estimation. However, fluorescence-free Raman spectra are only available from sophisticated Raman setups capable of fluorescence suppression. The reconstruction of Raman spectra with fluorescence background from narrow-band measurements is much more challenging due to the significant variation in fluorescence background. In this study, two advanced Wiener estimation methods, i.e., modified Wiener estimation and sequential weighted Wiener estimation, were optimized to achieve this goal. Both spontaneous Raman spectra and surface-enhanced Raman scattering (SERS) spectra were evaluated. Compared with traditional Wiener estimation, the two advanced methods showed significant improvement in the reconstruction of spontaneous Raman spectra. However, traditional Wiener estimation can work as effectively as the advanced methods for SERS spectra while running much faster. The wise selection of these methods would enable accurate Raman reconstruction in a simple Raman setup without the function of fluorescence suppression for fast Raman imaging. PMID:26203387
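    Traditional Wiener estimation, the baseline the advanced methods are compared against, is a linear reconstruction: the full spectrum s is estimated from narrow-band measurements m via W = Cov(s, m) Cov(m)^(-1), with the covariances learned from training pairs. A minimal sketch on synthetic data (band counts, filter responses, and spectra are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, n_channels, n_train = 64, 6, 200
F = rng.random((n_channels, n_bands))        # hypothetical narrow-band filter responses

S_train = rng.random((n_train, n_bands))     # training spectra (one per row)
M_train = S_train @ F.T                      # corresponding narrow-band measurements

# Wiener matrix from sample cross- and auto-covariances of the training set
C = np.cov(S_train.T, M_train.T)             # joint (n_bands+n_channels) covariance
Csm = C[:n_bands, n_bands:]                  # Cov(s, m)
Cmm = C[n_bands:, n_bands:]                  # Cov(m)
W = Csm @ np.linalg.inv(Cmm)

# reconstruct spectra from measurements (means added back explicitly)
s_hat = S_train.mean(axis=0) + (M_train - M_train.mean(axis=0)) @ W.T
```

Because W is the least-squares-optimal linear map on the training set, the reconstruction error is always below that of simply predicting the mean spectrum; the advanced methods in the study refine this baseline to cope with varying fluorescence background.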

  4. Gridding Cloud and Irradiance to Quantify Variability at the ARM Southern Great Plains Site

    NASA Astrophysics Data System (ADS)

    Riihimaki, L.; Long, C. N.; Gaustad, K.

    2017-12-01

    Ground-based radiometers provide the most accurate measurements of surface irradiance. However, geometry differences between surface point measurements and large area climate model grid boxes or satellite-based footprints can cause systematic differences in surface irradiance comparisons. In this work, irradiance measurements from a network of ground stations around Kansas and Oklahoma at the US Department of Energy Atmospheric Radiation Measurement (ARM) Southern Great Plains facility are examined. Upwelling and downwelling broadband shortwave and longwave radiometer measurements are available at each site as well as surface meteorological measurements. In addition to the measured irradiances, clear sky irradiance and cloud fraction estimates are analyzed using well established methods based on empirical fits to measured clear sky irradiances. Measurements are interpolated onto a 0.25 degree latitude and longitude grid using a Gaussian weight scheme in order to provide a more accurate statistical comparison between ground measurements and a larger area such as that used in climate models, plane parallel radiative transfer calculations, and other statistical and climatological research. Validation of the gridded product will be shown, as well as analysis that quantifies the impact of site location, cloud type, and other factors on the resulting surface irradiance estimates. The results of this work are being incorporated into the Surface Cloud Grid operational data product produced by ARM, and will be made publicly available for use by others.
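    The Gaussian-weight interpolation mentioned above assigns each station a weight exp(-d²/(2L²)) based on its distance d to the grid point, then takes the weighted average. A minimal sketch, where the length scale and station coordinates are illustrative and not the ARM configuration:

```python
import numpy as np

def gaussian_grid_value(station_xy, values, grid_xy, length_scale):
    """Gaussian-weighted average of station values at one grid point."""
    d2 = np.sum((station_xy - grid_xy) ** 2, axis=1)   # squared distances
    w = np.exp(-d2 / (2.0 * length_scale ** 2))        # Gaussian weights
    return np.sum(w * values) / np.sum(w)

# three hypothetical stations (coordinates in degrees) and their irradiances
stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
irradiance = np.array([500.0, 520.0, 480.0])
val = gaussian_grid_value(stations, irradiance, np.array([0.0, 0.0]), 0.5)
```

The weighted value always stays within the range of the contributing observations, and the length scale controls how strongly distant stations smooth the local estimate.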

  5. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.

  6. Comparison of Lives Saved Tool model child mortality estimates against measured data from vector control studies in sub-Saharan Africa

    PubMed Central

    2011-01-01

    Background Insecticide-treated mosquito nets (ITNs) and indoor-residual spraying have been scaled-up across sub-Saharan Africa as part of international efforts to control malaria. These interventions have the potential to significantly impact child survival. The Lives Saved Tool (LiST) was developed to provide national and regional estimates of cause-specific mortality based on the extent of intervention coverage scale-up. We compared the percent reduction in all-cause child mortality estimated by LiST against measured reductions in all-cause child mortality from studies assessing the impact of vector control interventions in Africa. Methods We performed a literature search for appropriate studies and compared reductions in all-cause child mortality estimated by LiST to 4 studies that estimated changes in all-cause child mortality following the scale-up of vector control interventions. The following key parameters measured by each study were applied to available country projections: baseline all-cause child mortality rate, proportion of mortality due to malaria, and population coverage of vector control interventions at baseline and follow-up years. Results The percent reduction in all-cause child mortality estimated by the LiST model fell within the confidence intervals around the measured mortality reductions for all 4 studies. Two of the LiST estimates overestimated the mortality reductions by 6.1 and 4.2 percentage points (33% and 35% relative to the measured estimates), while two underestimated the mortality reductions by 4.7 and 6.2 percentage points (22% and 25% relative to the measured estimates). Conclusions The LiST model did not systematically under- or overestimate the impact of ITNs on all-cause child mortality. These results show the LiST model to perform reasonably well at estimating the effect of vector control scale-up on child mortality when compared against measured data from studies across a range of malaria transmission settings. 
The LiST model appears to be a useful tool in estimating the potential mortality reduction achieved from scaling-up malaria control interventions. PMID:21501453
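    The distinction drawn above between absolute (percentage-point) and relative differences can be checked with one of the reported pairs: an overestimate of 6.1 percentage points that is 33% relative to the measured value implies a measured mortality reduction of roughly 6.1 / 0.33, about 18.5 percentage points (a back-calculation, not a figure stated in the abstract):

```python
def relative_difference(modeled, measured):
    """Relative difference of a modeled mortality reduction vs the measured one."""
    return (modeled - measured) / measured

measured = 18.5                  # implied measured reduction, percentage points
modeled = measured + 6.1         # LiST overestimate of 6.1 percentage points
rel = relative_difference(modeled, measured)   # roughly 0.33, i.e. ~33%
```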

  7. Comparing errors in Medicaid reporting across surveys: evidence to date.

    PubMed

    Call, Kathleen T; Davern, Michael E; Klerman, Jacob A; Lynch, Victoria

    2013-04-01

    To synthesize evidence on the accuracy of Medicaid reporting across state and federal surveys. All available validation studies. Compare results from existing research to understand variation in reporting across surveys. Synthesize all available studies validating survey reports of Medicaid coverage. Across all surveys, reporting some type of insurance coverage is better than reporting Medicaid specifically. Therefore, estimates of uninsurance are less biased than estimates of specific sources of coverage. The CPS stands out as being particularly inaccurate. Measuring health insurance coverage is prone to some level of error, yet survey overstatements of uninsurance are modest in most surveys. Accounting for all forms of bias is complex. Researchers should consider adjusting estimates of Medicaid and uninsurance in surveys prone to high levels of misreporting. © Health Research and Educational Trust.

  8. Gathering pipeline methane emissions in Fayetteville shale pipelines and scoping guidelines for future pipeline measurement campaigns

    DOE PAGES

    Zimmerle, Daniel J.; Pickering, Cody K.; Bell, Clay S.; ...

    2017-11-24

    Gathering pipelines, which transport gas from well pads to downstream processing, are a sector of the natural gas supply chain for which little measured methane emissions data are available. This study performed leak detection and measurement on 96 km of gathering pipeline and the associated 56 pigging facilities and 39 block valves. The study found one underground leak accounting for 83% (4.0 kg CH4/hr) of total measured emissions. Methane emissions for the 4684 km of gathering pipeline in the study area were estimated at 402 kg CH4/hr [95 to 1065 kg CH4/hr, 95% CI], or 1% [0.2% to 2.6%] of all methane emissions measured during a prior aircraft study of the same area. Emissions estimated by this study fall within the uncertainty range of emissions estimated using emission factors from EPA's 2015 Greenhouse Inventory and study activity estimates. While EPA's current inventory is based upon emission factors from distribution mains measured in the 1990s, this study indicates that using emission factors from more recent distribution studies could significantly underestimate emissions from gathering pipelines. To guide broader studies of pipeline emissions, we also estimate the fraction of the pipeline length within a basin that must be measured to constrain uncertainty of pipeline emissions estimates to within 1% of total basin emissions. The study provides both substantial insight into the mix of emission sources and guidance for future gathering pipeline studies, but since measurements were made in a single basin, the results are not sufficiently representative to provide methane emission factors at the regional or national level.
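    A back-of-envelope length scaling using only figures from the abstract shows why the statistical treatment matters: the single underground leak (4.0 kg CH4/hr) was 83% of total measured emissions on 96 km of pipeline, and naively scaling that rate to the full 4684 km gives a rough basin total well below the study's own estimate of 402 kg CH4/hr, which accounted for sampling uncertainty rather than a simple ratio:

```python
# total measured emission rate on the 96 km sample, kg CH4/hr
# (the 4.0 kg/hr leak was 83% of the total)
total_measured = 4.0 / 0.83

# naive scaling of the sampled rate to the whole 4684 km study area
naive_basin = total_measured * (4684.0 / 96.0)   # roughly 235 kg CH4/hr
```

The gap between this naive ~235 kg/hr and the study's 402 kg/hr [95 to 1065] illustrates how heavily a single large leak, found or missed, dominates small-sample pipeline surveys.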

  10. ECO-DRIVING MODELING ENVIRONMENT

    DOT National Transportation Integrated Search

    2015-11-01

    This research project aims to examine the eco-driving modeling capabilities of different traffic modeling tools available and to develop a driver-simulator-based eco-driving modeling tool to evaluate driver behavior and to reliably estimate or measur...

  11. Estimated generic prices of cancer medicines deemed cost-ineffective in England: a cost estimation analysis.

    PubMed

    Hill, Andrew; Redd, Christopher; Gotham, Dzintars; Erbacher, Isabelle; Meldrum, Jonathan; Harada, Ryo

    2017-01-20

    The aim of this study was to estimate lowest possible treatment costs for four novel cancer drugs, hypothesising that generic manufacturing could significantly reduce treatment costs. This research was carried out in a non-clinical research setting using secondary data. There were no human participants in the study. Four drugs were selected for the study: bortezomib, dasatinib, everolimus and gefitinib. These medications were selected according to their clinical importance, novel pharmaceutical actions and the availability of generic price data. Target costs for treatment were to be generated for each indication for each treatment. The primary outcome measure was the target cost according to a production cost calculation algorithm. The secondary outcome measure was the target cost as the lowest available generic price; this was necessary where export data were not available to generate an estimate from our cost calculation algorithm. Other outcomes included patent expiry dates and total eligible treatment populations. Target prices were £411 per cycle for bortezomib, £9 per month for dasatinib, £852 per month for everolimus and £10 per month for gefitinib. Compared with current list prices in England, these target prices would represent reductions of 74-99.6%. Patent expiry dates were bortezomib 2014-22, dasatinib 2020-26, everolimus 2019-25 and gefitinib 2017. The total global eligible treatment population in 1 year is 769 736. Our findings demonstrate that affordable drug treatment costs are possible for novel cancer drugs, suggesting that new therapeutic options can be made available to patients and doctors worldwide. Assessing treatment cost estimations alongside cost-effectiveness evaluations is an important area of future research. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  12. Spatial and spectral interpolation of ground-motion intensity measure observations

    USGS Publications Warehouse

    Worden, Charles; Thompson, Eric M.; Baker, Jack W.; Bradley, Brendon A.; Luco, Nicolas; Wilson, David

    2018-01-01

    Following a significant earthquake, ground‐motion observations are available for a limited set of locations and intensity measures (IMs). Typically, however, it is desirable to know the ground motions for additional IMs and at locations where observations are unavailable. Various interpolation methods are available, but because IMs or their logarithms are normally distributed, spatially correlated, and correlated with each other at a given location, it is possible to apply the conditional multivariate normal (MVN) distribution to the problem of estimating unobserved IMs. In this article, we review the MVN and its application to general estimation problems, and then apply the MVN to the specific problem of ground‐motion IM interpolation. In particular, we present (1) a formulation of the MVN for the simultaneous interpolation of IMs across space and IM type (most commonly, spectral response at different oscillator periods) and (2) the inclusion of uncertain observation data in the MVN formulation. These techniques, in combination with modern empirical ground‐motion models and correlation functions, provide a flexible framework for estimating a variety of IMs at arbitrary locations.
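    The conditional MVN update at the heart of this framework can be written down directly: given observed (log) IMs x2, the unobserved IMs x1 have conditional mean mu1 + S12 S22^(-1)(x2 - mu2) and conditional covariance S11 - S12 S22^(-1) S21. A minimal numpy sketch, with made-up covariance values standing in for a real ground-motion correlation model:

```python
import numpy as np

def mvn_conditional(mu1, mu2, S11, S12, S22, x2):
    """Conditional mean and covariance of x1 given observed x2."""
    K = S12 @ np.linalg.inv(S22)      # "regression" matrix S12 S22^-1
    mean = mu1 + K @ (x2 - mu2)
    cov = S11 - K @ S12.T
    return mean, cov

mu1 = np.array([0.0])                 # prior mean log-IM at the target site
mu2 = np.array([0.0, 0.0])           # prior mean log-IMs at two observed sites
S11 = np.array([[1.0]])
S12 = np.array([[0.6, 0.3]])         # correlations of target with observations
S22 = np.array([[1.0, 0.2], [0.2, 1.0]])
mean, cov = mvn_conditional(mu1, mu2, S11, S12, S22, np.array([1.0, 0.5]))
```

Conditioning always shrinks the variance relative to the prior, which is how nearby observations tighten the interpolated IM estimates.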

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    Traditionally power distribution networks are either not observable or only partially observable. This complicates development and implementation of new smart grid technologies, such as those related to demand response, outage detection and management, and improved load-monitoring. In this two-part paper, inspired by proliferation of the metering technology, we discuss estimation problems in structurally loopy but operationally radial distribution grids from measurements, e.g., voltage data, which are either already available or can be made available with a relatively minor investment. In Part I, the objective is to learn the operational layout of the grid. Part II of this paper presents algorithms that estimate load statistics or line parameters in addition to learning the grid structure. Further, Part II discusses the problem of structure estimation for systems with incomplete measurement sets. Our newly suggested algorithms apply to a wide range of realistic scenarios. The algorithms are also computationally efficient (polynomial in time), which is proven theoretically and illustrated computationally on a number of test cases. The technique developed can be applied to detect line failures in real time as well as to understand the scope of possible adversarial attacks on the grid.

  14. Body composition changes in pregnancy: measurement, predictors and outcomes

    PubMed Central

    Widen, EM; Gallagher, D

    2014-01-01

    Prevalence of overweight and obesity has risen in the United States over the past few decades. Concurrent with this rise in obesity has been an increase in pregravid body mass index and gestational weight gain affecting maternal body composition changes in pregnancy. During pregnancy, many of the assumptions inherent in body composition estimation are violated, particularly the hydration of fat-free mass, and available methods are unable to disentangle maternal composition from fetus and supporting tissues; therefore, estimates of maternal body composition during pregnancy are prone to error. Here we review commonly used and available methods for assessing body composition changes in pregnancy, including: (1) anthropometry, (2) total body water, (3) densitometry, (4) imaging, (5) dual-energy X-ray absorptiometry, (6) bioelectrical impedance and (7) ultrasound. Several of these methods can measure regional changes in adipose tissue; however, most of these methods provide only whole-body estimates of fat and fat-free mass. Consideration is given to factors that may influence changes in maternal body composition, as well as long-term maternal and offspring outcomes. Finally, we provide recommendations for future research in this area. PMID:24667754

  15. Measurement error in epidemiologic studies of air pollution based on land-use regression models.

    PubMed

    Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino

    2013-10-15

    Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
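    The bias mechanism described above can be demonstrated with a tiny simulation: when a noisy exposure surrogate that captures only part of the true exposure variability (standing in for a LUR prediction) replaces the true exposure in a health model, the fitted coefficient is attenuated. All parameters below are synthetic and not calibrated to the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
true_exposure = rng.normal(0.0, 1.0, n)

# surrogate "LUR prediction": correlated with the true exposure but with
# classical-type error, so it explains only part of the true variability
predicted = 0.5 * true_exposure + rng.normal(0.0, np.sqrt(0.75), n)

beta = 1.0                                     # true health-effect coefficient
outcome = beta * true_exposure + rng.normal(0.0, 1.0, n)

# least-squares slope of outcome on the predicted exposure
slope = np.cov(predicted, outcome)[0, 1] / np.var(predicted, ddof=1)
```

Here the fitted slope lands near 0.5 rather than the true beta = 1, a substantial bias of the kind the study reports; with Berkson-type error (true = predicted + independent error) the slope would instead be approximately unbiased, which is why the error structure of the LUR predictions matters.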

  16. Estimation of selected flow and water-quality characteristics of Alaskan streams

    USGS Publications Warehouse

    Parks, Bruce; Madison, R.J.

    1985-01-01

    Although hydrologic data are either sparse or nonexistent for large areas of Alaska, the drainage area, area of lakes, glacier and forest cover, and average precipitation in a hydrologic basin of interest can be measured or estimated from existing maps. Application of multiple linear regression techniques indicates that statistically significant correlations exist between properties of basins determined from maps and measured streamflow characteristics. This suggests that corresponding characteristics of ungaged basins can be estimated. Streamflow frequency characteristics can be estimated from regional equations developed for southeast, south-central and Yukon regions. Statewide or modified regional equations must be used, however, for the southwest, northwest, and Arctic Slope regions where there is a paucity of data. Equations developed from basin characteristics are given to estimate suspended-sediment values for glacial streams and, with less reliability, for nonglacial streams. Equations developed from available specific conductance data are given to estimate concentrations of major dissolved inorganic constituents. Suggestions are made for expanding the existing data base and thus improving the ability to estimate hydrologic characteristics for Alaskan streams. (USGS)
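    The regionalization approach above follows a standard pattern: regress a (log-transformed) streamflow characteristic on map-derived basin characteristics at gaged basins, then apply the fitted equation to ungaged basins. A sketch with invented characteristics and coefficients, not the report's actual regional equations:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40  # gaged basins (hypothetical)

# Basin characteristics measurable or estimable from maps
log_area   = rng.normal(2.0, 0.5, n)    # log10 drainage area
log_precip = rng.normal(1.5, 0.2, n)    # log10 average annual precipitation
glacier    = rng.uniform(0.0, 0.3, n)   # glacier-covered fraction

# Hypothetical generating relation for a log10 streamflow characteristic
log_q = 0.1 + 1.0 * log_area + 1.5 * log_precip + 0.8 * glacier \
        + rng.normal(0.0, 0.05, n)

# Fit the regional equation by ordinary least squares
X = np.column_stack([np.ones(n), log_area, log_precip, glacier])
coef, *_ = np.linalg.lstsq(X, log_q, rcond=None)

# Apply it to an ungaged basin whose characteristics come from maps
x_new = np.array([1.0, 2.3, 1.6, 0.1])
q_est = 10.0 ** (x_new @ coef)
```

    In practice the report's point stands out here too: the fitted equation is only as good as the density of gaged basins in the region, which is why statewide or modified equations are needed where data are sparse.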

  17. Finding and estimating chemical property data for environmental assessment.

    PubMed

    Boethling, Robert S; Howard, Philip H; Meylan, William M

    2004-10-01

    The ability to predict the behavior of a chemical substance in a biological or environmental system largely depends on knowledge of the physicochemical properties and reactivity of that substance. We focus here on properties, with the objective of providing practical guidance for finding measured values and using estimation methods when necessary. Because currently available computer software often makes it more convenient to estimate than to retrieve measured values, we try to discourage irrational exuberance for these tools by including comprehensive lists of Internet and hard-copy data resources. Guidance for assessors is presented in the form of a process to obtain data that includes establishment of chemical identity, identification of data sources, assessment of accuracy and reliability, substructure searching for analogs when experimental data are unavailable, and estimation from chemical structure. Regarding property estimation, we cover estimation from close structural analogs in addition to broadly applicable methods requiring only the chemical structure. For the latter, we list and briefly discuss the most widely used methods. Concluding thoughts are offered concerning appropriate directions for future work on estimation methods, again with an emphasis on practical applications.

  18. Measuring specific receptor binding of a PET radioligand in human brain without pharmacological blockade: The genomic plot.

    PubMed

    Veronese, Mattia; Zanotti-Fregonara, Paolo; Rizzo, Gaia; Bertoldo, Alessandra; Innis, Robert B; Turkheimer, Federico E

    2016-04-15

    PET studies allow in vivo imaging of the density of brain receptor species. The PET signal, however, is the sum of the fraction of radioligand that is specifically bound to the target receptor and the non-displaceable fraction (i.e. the non-specifically bound radioligand plus the free ligand in tissue). Therefore, measuring the non-displaceable fraction, which is generally assumed to be constant across the brain, is a necessary step to obtain regional estimates of the specific fractions. The nondisplaceable binding can be directly measured if a reference region, i.e. a region devoid of any specific binding, is available. Many receptors are however widely expressed across the brain, and a true reference region is rarely available. In these cases, the nonspecific binding can be obtained after competitive pharmacological blockade, which is often contraindicated in humans. In this work we introduce the genomic plot for estimating the nondisplaceable fraction using baseline scans only. The genomic plot is a transformation of the Lassen graphical method in which the brain maps of mRNA transcripts of the target receptor obtained from the Allen brain atlas are used as a surrogate measure of the specific binding. Thus, the genomic plot allows the calculation of the specific and nondisplaceable components of radioligand uptake without the need of pharmacological blockade. We first assessed the statistical properties of the method with computer simulations. Then we sought ground-truth validation using human PET datasets of seven different neuroreceptor radioligands, where nonspecific fractions were either obtained separately using drug displacement or available from a true reference region. The population nondisplaceable fractions estimated by the genomic plot were very close to those measured by actual human blocking studies (mean relative difference between 2% and 7%). However, these estimates were valid only when mRNA expressions were predictive of protein levels (i.e. 
there were no significant post-transcriptional changes). This condition can be readily established a priori by assessing the correlation between PET and mRNA expression. Copyright © 2016 Elsevier Inc. All rights reserved.
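    The core idea can be caricatured in a few lines: if regional mRNA expression is linearly predictive of specific binding, then regressing regional total uptake against expression and extrapolating to zero expression recovers the nondisplaceable component. The sketch below is only this linearized caricature with synthetic numbers, not the paper's full Lassen-plot transformation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_regions = 30

vnd_true = 2.0    # nondisplaceable volume of distribution (synthetic)
k = 5.0           # maps expression to specific binding (synthetic)

mrna = rng.uniform(0.1, 1.0, n_regions)   # atlas-style regional expression
vt = vnd_true + k * mrna \
     + rng.normal(0.0, 0.1, n_regions)    # regional total uptake (VT)

# Extrapolate total uptake to zero expression -> nondisplaceable component
slope, intercept = np.polyfit(mrna, vt, 1)
```

    The paper's precondition, that mRNA expression must actually predict protein levels, corresponds here to checking the quality of the linear fit before trusting the intercept.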

  19. Measuring specific receptor binding of a PET radioligand in human brain without pharmacological blockade: The genomic plot

    PubMed Central

    Veronese, Mattia; Zanotti-Fregonara, Paolo; Rizzo, Gaia; Bertoldo, Alessandra; Innis, Robert B.; Turkheimer, Federico E.

    2016-01-01

    PET studies allow in vivo imaging of the density of brain receptor species. The PET signal, however, is the sum of the fraction of radioligand that is specifically bound to the target receptor and the non-displaceable fraction (i.e. the non-specifically bound radioligand plus the free ligand in tissue). Therefore, measuring the non-displaceable fraction, which is generally assumed to be constant across the brain, is a necessary step to obtain regional estimates of the specific fractions. The nondisplaceable binding can be directly measured if a reference region, i.e. a region devoid of any specific binding, is available. Many receptors are however widely expressed across the brain, and a true reference region is rarely available. In these cases, the nonspecific binding can be obtained after competitive pharmacological blockade, which is often contraindicated in humans. In this work we introduce the genomic plot for estimating the nondisplaceable fraction using baseline scans only. The genomic plot is a transformation of the Lassen graphical method in which the brain maps of mRNA transcripts of the target receptor obtained from the Allen brain atlas are used as a surrogate measure of the specific binding. Thus, the genomic plot allows the calculation of the specific and nondisplaceable components of radioligand uptake without the need of pharmacological blockade. We first assessed the statistical properties of the method with computer simulations. Then we sought ground-truth validation using human PET datasets of seven different neuroreceptor radioligands, where nonspecific fractions were either obtained separately using drug displacement or available from a true reference region. The population nondisplaceable fractions estimated by the genomic plot were very close to those measured by actual human blocking studies (mean relative difference between 2% and 7%). However, these estimates were valid only when mRNA expressions were predictive of protein levels (i.e. 
there were no significant post-transcriptional changes). This condition can be readily established a priori by assessing the correlation between PET and mRNA expression. PMID:26850512

  20. Preliminary evaluation of the publicly available Laboratory for Breast Radiodensity Assessment (LIBRA) software tool: comparison of fully automated area and volumetric density measures in a case-control study with digital mammography.

    PubMed

    Keller, Brad M; Chen, Jinbo; Daye, Dania; Conant, Emily F; Kontos, Despina

    2015-08-25

    Breast density, commonly quantified as the percentage of mammographically dense tissue area, is a strong breast cancer risk factor. We investigated associations between breast cancer and fully automated measures of breast density made by a new publicly available software tool, the Laboratory for Individualized Breast Radiodensity Assessment (LIBRA). Digital mammograms from 106 invasive breast cancer cases and 318 age-matched controls were retrospectively analyzed. Density estimates acquired by LIBRA were compared with commercially available software and standard Breast Imaging-Reporting and Data System (BI-RADS) density estimates. Associations between the different density measures and breast cancer were evaluated by using logistic regression after adjustment for Gail risk factors and body mass index (BMI). Area under the curve (AUC) of the receiver operating characteristic (ROC) was used to assess discriminatory capacity, and odds ratios (ORs) for each density measure are provided. All automated density measures had a significant association with breast cancer (OR = 1.47-2.23, AUC = 0.59-0.71, P < 0.01) which was strengthened after adjustment for Gail risk factors and BMI (OR = 1.96-2.64, AUC = 0.82-0.85, P < 0.001). In multivariable analysis, absolute dense area (OR = 1.84, P < 0.001) and absolute dense volume (OR = 1.67, P = 0.003) were jointly associated with breast cancer (AUC = 0.77, P < 0.01), having a larger discriminatory capacity than models considering the Gail risk factors alone (AUC = 0.64, P < 0.001) or the Gail risk factors plus standard area percent density (AUC = 0.68, P = 0.01). After BMI was further adjusted for, absolute dense area retained significance (OR = 2.18, P < 0.001) and volume percent density approached significance (OR = 1.47, P = 0.06). 
This combined area-volume density model also had a significantly (P < 0.001) improved discriminatory capacity (AUC = 0.86) relative to a model considering the Gail risk factors plus BMI (AUC = 0.80). Our study suggests that new automated density measures may ultimately augment the current standard breast cancer risk factors. In addition, the ability to fully automate density estimation with digital mammography, particularly through the use of publicly available breast density estimation software, could accelerate the translation of density reporting in routine breast cancer screening and surveillance protocols and facilitate broader research into the use of breast density as a risk factor for breast cancer.
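    The discriminatory capacities quoted above are areas under the ROC curve. For a single density measure, the AUC can be computed library-free via its Mann-Whitney interpretation; the case/control values below are synthetic, not LIBRA output.

```python
import numpy as np

def roc_auc(case_scores, control_scores):
    """Mann-Whitney estimate of ROC AUC: the probability that a randomly
    chosen case scores higher than a randomly chosen control (ties = 1/2)."""
    wins = 0.0
    for s in case_scores:
        wins += np.sum(s > control_scores) + 0.5 * np.sum(s == control_scores)
    return wins / (len(case_scores) * len(control_scores))

rng = np.random.default_rng(3)
cases = rng.normal(35.0, 10.0, 106)      # synthetic percent density, cases
controls = rng.normal(28.0, 10.0, 318)   # synthetic percent density, controls
auc = roc_auc(cases, controls)
```

    An AUC of 0.5 means no discrimination; the study's multivariable models reach this statistic by scoring each woman with the fitted logistic-regression probability instead of a raw density value.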

  1. Evaluation of the Brix refractometer to estimate immunoglobulin G concentration in bovine colostrum.

    PubMed

    Quigley, J D; Lago, A; Chapman, C; Erickson, P; Polo, J

    2013-02-01

    Refractometry using a Brix refractometer has been proposed as a means to estimate IgG concentration in bovine maternal colostrum (MC). The refractometer has advantages over other methods of estimating IgG concentration in that the Brix refractometer is inexpensive, readily available, less fragile, and less sensitive to variation in colostral temperature, season of the year, and other factors. Samples of first-milking MC were collected from 7 dairy farms in Maine, New Hampshire, Vermont, and Connecticut (n=84) and 1 dairy farm in California (n=99). The MC was milked from the cow at 6.1 ± 5.6 h postparturition and a sample was evaluated for Brix percentage by using an optical refractometer. Two additional samples (30 mL) were collected from the milk bucket, placed in vials, and frozen before analysis of total IgG by radial immunodiffusion (RID) using commercially available plates and by turbidimetric immunoassay (TIA). The second sample was analyzed for total bacterial counts and coliform counts at laboratories in New York (Northeast samples) and California (California samples). The Brix percentage (mean ± SD) was 23.8 ± 3.5, IgG concentration measured by RID was 73.4 ± 26.2 g/L, and IgG concentration measured by TIA was 67.5 ± 25.0 g/L. The Brix percentage was highly correlated (r=0.75) with IgG analyzed by RID. The Brix percentage cut point to define high- or low-quality colostrum (50 g of IgG/L measured by RID) that classified the most samples correctly, given the proportion of high- (86%) and low-quality (14%) samples in this study, was 21%, which is slightly lower than other recent estimates of Brix measurements. At this cut point, the test sensitivity, specificity, positive and negative predictive values, and accuracy were 92.9, 65.5, 93.5, 63.3, and 88.5%, respectively. Measurement of IgG by TIA correlated with Brix (r=0.63) and RID (r=0.87); however, the TIA and RID methods of IgG measurement were not consistent throughout the range of samples tested. 
We conclude that Brix measurement of total solids in fresh MC is an inexpensive, rapid, and satisfactorily accurate method of estimating IgG concentration. A cut point of 21% Brix to identify MC samples containing >50 g of IgG/L was most appropriate for our data. Measurement of IgG in MC by TIA differed from measurement by RID. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
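    The screening-test statistics reported above follow mechanically from a 2x2 table at the chosen cut point. A sketch using the paper's thresholds (21% Brix, 50 g of IgG/L by RID) with a made-up handful of paired measurements:

```python
def screening_metrics(brix, igg_rid, cut=21.0, quality=50.0):
    """Classification metrics for 'Brix >= cut' as a test for
    high-quality colostrum (IgG >= quality g/L by RID)."""
    tp = sum(b >= cut and g >= quality for b, g in zip(brix, igg_rid))
    fp = sum(b >= cut and g < quality for b, g in zip(brix, igg_rid))
    fn = sum(b < cut and g >= quality for b, g in zip(brix, igg_rid))
    tn = sum(b < cut and g < quality for b, g in zip(brix, igg_rid))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / len(brix),
    }

# Toy paired measurements (hypothetical, not the study's samples)
brix = [24.1, 19.5, 22.0, 25.3, 18.0, 21.0]
igg  = [78.0, 41.0, 48.0, 90.0, 35.0, 52.0]
m = screening_metrics(brix, igg)
```

    Lowering the cut point trades specificity for sensitivity, which is why the best cut point depends on the prevalence of high-quality samples in the population tested.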

  2. The limitations of some European healthcare databases for monitoring the effectiveness of pregnancy prevention programmes as risk minimisation measures.

    PubMed

    Charlton, R A; Bettoli, V; Bos, H J; Engeland, A; Garne, E; Gini, R; Hansen, A V; de Jong-van den Berg, L T W; Jordan, S; Klungsøyr, K; Neville, A J; Pierini, A; Puccini, A; Sinclair, M; Thayer, D; Dolk, H

    2018-04-01

    Pregnancy prevention programmes (PPPs) exist for some medicines known to be highly teratogenic. It is increasingly recognised that the impact of these risk minimisation measures requires periodic evaluation. This study aimed to assess the extent to which some of the data needed to monitor the effectiveness of PPPs may be present in European healthcare databases. An inventory was completed for databases contributing to EUROmediCAT capturing pregnancy and prescription data in Denmark, Norway, the Netherlands, Italy (Tuscany/Emilia Romagna), Wales and the rest of the UK, to determine the extent of data collected that could be used to evaluate the impact of PPPs. Data availability varied between databases. All databases could be used to identify the frequency and duration of prescriptions to women of childbearing age from primary care, but there were specific issues with availability of data from secondary care and private care. To estimate the frequency of exposed pregnancies, all databases could be linked to pregnancy data, but the accuracy of timing of the start of pregnancy was variable, and data on pregnancies ending in induced abortions were often not available. Data availability on contraception to estimate compliance with contraception requirements was variable and no data were available on pregnancy tests. Current electronic healthcare databases do not contain all the data necessary to fully monitor the effectiveness of PPP implementation, and thus, special data collection measures need to be instituted.

  3. Comparison of methods to assess change in children’s body composition

    PubMed Central

    Elberg, Jane; McDuffie, Jennifer R; Sebring, Nancy G; Salaita, Christine; Keil, Margaret; Robotham, Delphine; Reynolds, James C; Yanovski, Jack A

    2008-01-01

    Background: Little is known about how simpler and more available methods to measure change in body fatness compare with criterion methods such as dual-energy X-ray absorptiometry (DXA) in children. Objective: Our objective was to determine the ability of air-displacement plethysmography (ADP) and formulas based on triceps skinfold thickness (TSF) and bioelectrical impedance analysis (BIA) to estimate changes in body fat over time in children. Design: Eighty-six nonoverweight and overweight boys (n = 34) and girls (n = 52) with an average age of 11.0 ± 2.4 y underwent ADP, TSF measurement, BIA, and DXA to estimate body fatness at baseline and 1 ± 0.3 y later. Recent equations were used to estimate percentage body fat by TSF measurement (Dezenberg equation) and by BIA (Suprasongsin and Lewy equations). Percentage body fat estimates by ADP, TSF measurement, and BIA were compared with those by DXA. Results: All methods were highly correlated with DXA (P < 0.001). No mean bias for estimates of percentage body fat change was found for ADP (Siri equation) compared with DXA for all subjects examined together, and agreement between body fat estimation by ADP and DXA did not vary with race or sex. Magnitude bias was present for ADP relative to DXA (P < 0.01). Estimates of change in percentage body fat were systematically overestimated by BIA equations (1.37 ± 6.98%; P < 0.001). TSF accounted for only 13% of the variance in percentage body fat change. Conclusion: Compared with DXA, there appears to be no noninvasive and simple method to measure changes in children’s percentage body fat accurately and precisely, but ADP performed better than did TSF or BIA. ADP could prove useful for measuring changes in adiposity in children. PMID:15213029
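    The mean-bias and agreement analysis described above is in the spirit of a Bland-Altman comparison against the criterion method. A minimal version, with invented percentage-body-fat-change values rather than the study's data:

```python
import numpy as np

def bland_altman(method, reference):
    """Mean bias and 95% limits of agreement of `method` vs `reference`
    (e.g. %fat change by a field method vs by DXA)."""
    d = np.asarray(method, float) - np.asarray(reference, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented %body-fat changes for five children
dxa = [1.2, -0.5, 3.1, 0.8, 2.0]   # criterion method
bia = [2.5, 0.9, 4.0, 2.2, 3.6]    # simpler method under evaluation
bias, limits = bland_altman(bia, dxa)
```

    A nonzero mean bias indicates systematic over- or underestimation (as reported for the BIA equations), while wide limits of agreement indicate poor individual-level precision even when the mean bias is small.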

  4. Contribution of soil respiration to the global carbon equation.

    PubMed

    Xu, Ming; Shang, Hua

    2016-09-20

    Soil respiration (Rs) is the second largest carbon flux, next to GPP, between the terrestrial ecosystem (the largest organic carbon pool) and the atmosphere at a global scale. Given its critical role in the global carbon cycle, Rs measurement and modeling issues have been well reviewed in previous studies. In this paper, we briefly review advances in soil organic carbon (SOC) decomposition processes and the factors affecting Rs. We examine the spatial and temporal distribution of Rs measurements available in the literature and find that most of the measurements were conducted in North America, Europe, and East Asia, with major gaps in Africa, East Europe, North Asia, Southeast Asia, and Australia, especially in dry ecosystems. We discuss the potential problems of measuring Rs on slope soils and propose using obliquely-cut soil collars to solve the existing problems. We synthesize previous estimates of global Rs flux and find that the estimates ranged from 50 PgC/yr to 98 PgC/yr and that the error associated with each estimate was also high (4 PgC/yr to 33.2 PgC/yr). Using a newly integrated database of Rs measurements and the MODIS vegetation map, we estimate that the global annual Rs flux is 94.3 PgC/yr with an estimation error of 17.9 PgC/yr at a 95% confidence level. The uneven distribution of Rs measurements limits our ability to improve the accuracy of estimation. Based on the global estimation of Rs flux, we found that Rs is highly correlated with GPP and NPP at the biome level, highlighting the role of Rs in global carbon budgets. Copyright © 2016. Published by Elsevier GmbH.

  5. Constraining the SIF - GPP relationship via estimation of NPQ

    NASA Astrophysics Data System (ADS)

    Silva, C. E.; Yang, X.; Tang, J.; Lee, J. E.; Cushman, K.; Toh Yuan Kun, L.; Kellner, J. R.

    2016-12-01

    Airborne and satellite measurements of solar-induced fluorescence (SIF) have the potential to improve estimates of gross primary production (GPP). Plants dissipate absorbed photosynthetically active radiation (APAR) among three de-excitation pathways: SIF; photochemical quenching (PQ), which results in electron transport and the production of ATP and NADPH consumed during carbon fixation (i.e., GPP); and heat dissipation via conversion of xanthophyll pigments (non-photochemical quenching: NPQ). As a result, the relationship between SIF and GPP is a function of NPQ and may vary temporally and spatially with environmental conditions (e.g., light and water availability) and plant traits (e.g., leaf N content). Accurate estimates of any one of the de-excitation pathways require measurement of the other two. Here we combine half-hourly measurements of canopy APAR and SIF with eddy covariance estimates of GPP at Harvard Forest to close the canopy radiation budget and infer canopy NPQ throughout the 2013 growing season. We use molecular-level photosynthesis equations to compute PQ (umol photons m-2 s-1) from GPP (umol CO2 m-2 s-1) and invert an integrated canopy radiative transfer and leaf-level photosynthesis/fluorescence model (SCOPE) to quantify hemispherically and spectrally integrated SIF emission (umol photons m-2 s-1) from single-band (760 nm) top-of-canopy SIF measurements. We estimate half-hourly NPQ as the residual required to close the radiation budget (NPQ = APAR - SIF - PQ). Our future work will test estimated NPQ against simultaneously acquired measurements of the photochemical reflectance index (PRI), a spectral index sensitive to xanthophyll pigments. By constraining two of the three de-excitation pathways, simultaneous SIF and PRI measurements are likely to improve GPP estimates, which are crucial to the study of climate-carbon cycle interactions.
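    The budget-closure step is simple arithmetic once PQ has been converted to photon units. The sketch below assumes a nominal quantum requirement (photons routed to photochemistry per CO2 fixed); in the study itself this conversion comes from molecular-level photosynthesis equations, not a fixed constant.

```python
def npq_residual(apar, sif, gpp, photons_per_co2=9.0):
    """Infer NPQ by closing the canopy radiation budget:
    NPQ = APAR - SIF - PQ, all in umol photons m-2 s-1.
    PQ is approximated from GPP (umol CO2 m-2 s-1) via an assumed
    quantum requirement of roughly 8-10 photons per CO2 fixed."""
    pq = photons_per_co2 * gpp
    return apar - sif - pq

# Hypothetical half-hourly values
npq = npq_residual(apar=1500.0, sif=15.0, gpp=25.0)
```

    Because NPQ is obtained as a residual, errors in APAR, SIF, or the GPP-to-PQ conversion all propagate directly into it, which is why an independent check against PRI is valuable.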

  6. Comparison of measured versus predicted energy requirements in critically ill cancer patients.

    PubMed

    Pirat, Arash; Tucker, Anne M; Taylor, Kim A; Jinnah, Rashida; Finch, Clarence G; Canada, Todd D; Nates, Joseph L

    2009-04-01

    Accurate determination of caloric requirements is essential to avoid feeding-associated complications in critically ill patients. In critically ill cancer patients, we compared measured and estimated resting energy expenditures. All patients admitted to the oncology intensive care unit between March 2004 and July 2005 were considered for inclusion. For those patients enrolled (n = 34), we measured resting energy expenditure via indirect calorimetry and estimated resting energy expenditure in 2 ways: clinically estimated resting energy expenditure and the Harris-Benedict basal energy expenditure equation. Clinically estimated resting energy expenditure was associated with underfeeding, appropriate feeding, and overfeeding in approximately 15%, 15%, and 71% of the patients, respectively. The Harris-Benedict basal energy expenditure was associated with underfeeding, appropriate feeding, and overfeeding in approximately 29%, 41%, and 29% of the patients, respectively. The mean measured resting energy expenditure (1,623 ± 384 kcal/d) was similar to the mean Harris-Benedict basal energy expenditure without the addition of stress or activity factors (1,613 ± 382 kcal/d, P = .87), and both were significantly lower than the mean clinically estimated resting energy expenditure (1,862 ± 330 kcal/d, P ≤ .003 for both). There was a significant correlation only between mean measured resting energy expenditure and mean Harris-Benedict basal energy expenditure (P < .001), but the correlation coefficient between those values was low (r = 0.587). Underfeeding and overfeeding were common in our critically ill cancer patients when resting energy expenditure was estimated rather than measured. 
Indirect calorimetry is the method of choice for determining caloric need in critically ill cancer patients, but if indirect calorimetry is not available or feasible, the Harris-Benedict equation without added stress and activity factors is more accurate than the clinically estimated resting energy expenditure.
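    For reference, the Harris-Benedict basal energy expenditure recommended above (without stress or activity factors) is a fixed formula; the sketch below uses the commonly cited rounded coefficients of the original equations.

```python
def harris_benedict_bee(weight_kg, height_cm, age_yr, sex):
    """Harris-Benedict basal energy expenditure in kcal/day (original
    equations, commonly rounded coefficients; no stress or activity
    factors applied, per the study's recommendation)."""
    if sex == "male":
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

bee = harris_benedict_bee(weight_kg=70.0, height_cm=175.0, age_yr=60.0, sex="male")
```

    Multiplying this basal value by stress or activity factors is what produced the overfeeding observed with clinical estimates in this population.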

  7. A feasibility study for measuring stratospheric turbulence using metrac positioning system

    NASA Technical Reports Server (NTRS)

    Gage, K. S.; Jasperson, W. H.

    1975-01-01

    The feasibility of obtaining measurements of Lagrangian turbulence at stratospheric altitudes is demonstrated by using the METRAC System to track constant-level balloons. The basis for current estimates of diffusion coefficients is reviewed, and it is pointed out that insufficient data are available upon which to base reliable estimates of vertical diffusion coefficients. It is concluded that diffusion coefficients could be directly obtained from Lagrangian turbulence measurements. The METRAC balloon tracking system is shown to possess the precision necessary to resolve the response of constant-level balloons to turbulence at stratospheric altitudes. A small sample of data recorded from a tropospheric tetroon flight tracked by the METRAC System is analyzed to obtain estimates of small-scale three-dimensional diffusion coefficients. It is recommended that this technique be employed to establish a climatology of diffusion coefficients and to ascertain the variation of these coefficients with altitude, season, and latitude.

  8. Improving the precision of our ecosystem calipers: a modified morphometric technique for estimating marine mammal mass and body composition.

    PubMed

    Shero, Michelle R; Pearson, Linnea E; Costa, Daniel P; Burns, Jennifer M

    2014-01-01

    Mass and body composition are indices of overall animal health and energetic balance and are often used as indicators of resource availability in the environment. This study used morphometric models and isotopic dilution techniques, two commonly used methods in the marine mammal field, to assess body composition of Weddell seals (Leptonychotes weddellii, N = 111). Findings indicated that traditional morphometric models that use a series of circular, truncated cones to calculate marine mammal blubber volume and mass overestimated the animal's measured body mass by 26.9 ± 1.5% SE. However, we developed a new morphometric model that uses elliptical truncated cones, and estimates mass with only -2.8 ± 1.7% error (N = 10). Because this elliptical truncated cone model can estimate body mass without the need for additional correction factors, it has the potential to be a broadly applicable method in marine mammal species. While using elliptical truncated cones yielded significantly smaller blubber mass estimates than circular cones (10.2 ± 0.8% difference; or 3.5 ± 0.3% of total body mass), both truncated cone models significantly underestimated total body lipid content as compared to isotopic dilution results, suggesting that animals have substantial internal lipid stores (N = 76). Multiple linear regressions were used to determine the minimum number of morphometric measurements needed to reliably estimate animal mass and body composition so that future animal handling times could be reduced. Reduced models estimated body mass and lipid mass with reasonable accuracy using fewer than five morphometric measurements (root-mean-square error: 4.91% for body mass, 10.90% for lipid mass, and 10.43% for % lipid). This indicates that when test datasets are available to create calibration coefficients, regression models also offer a way to improve body mass and condition estimates in situations where animal handling times must be short and efficient.
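    The geometric change is small but consequential: each body segment is treated as a truncated cone with elliptical rather than circular cross-sections. A sketch of one segment's volume, assuming the two end ellipses are proportional scalings of each other so the exact conical-frustum formula applies (all measurement values invented):

```python
import math

def elliptical_frustum_volume(a1, b1, a2, b2, h):
    """Volume of a truncated cone of height h whose ends are ellipses
    with semi-axes (a1, b1) and (a2, b2). Exact when the end ellipses
    are proportional scalings of one another:
    V = h/3 * (A1 + A2 + sqrt(A1*A2)), with elliptical area A = pi*a*b."""
    A1 = math.pi * a1 * b1
    A2 = math.pi * a2 * b2
    return h / 3.0 * (A1 + A2 + math.sqrt(A1 * A2))

# Semi-axes (m) derived from girth and width measurements at two stations;
# a circular-cone model would use a single girth-derived radius instead.
v = elliptical_frustum_volume(a1=0.30, b1=0.20, a2=0.25, b2=0.17, h=0.5)
```

    Summing such segment volumes along the body, and multiplying the blubber volume by a blubber density, gives the blubber-mass estimate; a girth-derived circular radius overstates cross-sectional area whenever the true section is elliptical, consistent with the overestimation reported for the circular model.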

  9. Facial convective heat exchange coefficients in cold and windy environments estimated from human experiments

    NASA Astrophysics Data System (ADS)

    Ben Shabat, Yael; Shitzer, Avraham

    2012-07-01

    Facial heat exchange convection coefficients were estimated from experimental data in cold and windy ambient conditions applicable to wind chill calculations. Measured facial temperature datasets made available to this study originated from 3 separate studies involving 18 male and 6 female subjects. Most of these data were for a -10°C ambient environment and wind speeds in the range of 0.2 to 6 m s-1. Additional single experiments were for -5°C, 0°C and 10°C environments and wind speeds in the same range. Convection coefficients were estimated for all these conditions by means of a numerical facial heat exchange model, applying properties of biological tissues and a typical facial diameter of 0.18 m. Estimation was performed by adjusting the guessed convection coefficients in the computed facial temperatures, while comparing them to measured data, to obtain a satisfactory fit (r2 > 0.98 in most cases). In one of the studies, heat flux meters were additionally used. Convection coefficients derived from these meters closely approached the estimated values for only the male subjects. They differed significantly, by about 50%, when compared to the estimated female subjects' data. Regression analysis was performed for just the -10°C ambient temperature and the range of experimental wind speeds, due to the limited availability of data for other ambient temperatures. The regressed equation was assumed to have the form of the equation underlying the "new" wind chill chart. Regressed convection coefficients, which closely duplicated the measured data, were consistently higher than those calculated by this equation, except for one single case. The estimated and currently used convection coefficients are shown to diverge exponentially from each other as wind speed increases. 
This finding casts considerable doubts on the validity of the convection coefficients that are used in the computation of the "new" wind chill chart and their applicability to humans in cold and windy environments.
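    The estimation loop described above (guess a convection coefficient, run the thermal model, compare computed with measured temperatures, adjust) can be illustrated with a deliberately crude lumped-capacitance stand-in for the authors' numerical facial heat exchange model; all parameter values are invented.

```python
import math

def cooled_temp(t, h, t_amb=-10.0, t0=34.0, cap_per_area=9000.0):
    """Lumped-capacitance surface cooling, a gross simplification of a
    tissue heat-transfer model. cap_per_area is heat capacity per unit
    area (J m-2 K-1); h is the convection coefficient (W m-2 K-1)."""
    tau = cap_per_area / h           # cooling time constant, s
    return t_amb + (t0 - t_amb) * math.exp(-t / tau)

def fit_h(times, temps, candidates):
    """Pick the candidate h minimizing the squared misfit to measurements."""
    def sse(h):
        return sum((cooled_temp(t, h) - y) ** 2 for t, y in zip(times, temps))
    return min(candidates, key=sse)

# Synthetic "measurements" generated with h = 25 W m-2 K-1
times = [0.0, 60.0, 120.0, 300.0, 600.0]
temps = [cooled_temp(t, 25.0) for t in times]
h_hat = fit_h(times, temps, [0.5 * k for k in range(10, 101)])  # 5.0 .. 50.0
```

    The real study did the same adjustment against measured facial temperatures, but with a numerical model incorporating biological tissue properties rather than a single time constant.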

  10. Facial convective heat exchange coefficients in cold and windy environments estimated from human experiments.

    PubMed

    Ben Shabat, Yael; Shitzer, Avraham

    2012-07-01

    Facial heat exchange convection coefficients were estimated from experimental data in cold and windy ambient conditions applicable to wind chill calculations. Measured facial temperature datasets made available to this study originated from 3 separate studies involving 18 male and 6 female subjects. Most of these data were for a -10°C ambient environment and wind speeds in the range of 0.2 to 6 m s(-1). Additional single experiments were for -5°C, 0°C and 10°C environments and wind speeds in the same range. Convection coefficients were estimated for all these conditions by means of a numerical facial heat exchange model, applying properties of biological tissues and a typical facial diameter of 0.18 m. Estimation was performed by adjusting the guessed convection coefficients in the computed facial temperatures, while comparing them to measured data, to obtain a satisfactory fit (r(2) > 0.98 in most cases). In one of the studies, heat flux meters were additionally used. Convection coefficients derived from these meters closely approached the estimated values for only the male subjects. They differed significantly, by about 50%, when compared to the estimated female subjects' data. Regression analysis was performed for just the -10°C ambient temperature and the range of experimental wind speeds, due to the limited availability of data for other ambient temperatures. The regressed equation was assumed to have the form of the equation underlying the "new" wind chill chart. Regressed convection coefficients, which closely duplicated the measured data, were consistently higher than those calculated by this equation, except for one single case. The estimated and currently used convection coefficients are shown to diverge exponentially from each other as wind speed increases. 
This finding casts considerable doubts on the validity of the convection coefficients that are used in the computation of the "new" wind chill chart and their applicability to humans in cold and windy environments.

  11. Characterization of the LANDSAT sensors' spatial responses

    NASA Technical Reports Server (NTRS)

    Markham, B. L.

    1984-01-01

    The characteristics of the thematic mapper (TM) and multispectral scanner (MSS) sensors on LANDSATs 4 and 5 affecting their spatial responses are described, and functions defining the response of the system to an arbitrary input spatial pattern, i.e., transfer functions (TFs) and line spread functions (LSFs), are derived. These design LSFs and TFs were modified based on prelaunch component and system measurements to provide improved estimates. Prelaunch estimates of LSFs/TFs are compared to in-orbit estimates. For the MSS instruments, only limited prelaunch scan-direction square-wave response (SWR) data were available. Design estimates were modified by convolving in Gaussian blur until the derived LSFs/TFs produced SWRs comparable to the measurements. The two MSS instruments were comparable at their temperatures of best focus; separate calculations were performed for bands 1 and 3, band 2, and band 4. The pre-sample nadir effective instantaneous fields of view (EIFOVs), based on the 0.5 modulation transfer function (MTF) criterion, vary from 70 to 75 meters in the track direction and 79 to 82 meters in the scan direction. For the TM instruments, more extensive prelaunch measurements were available. Bands 1 to 4, 5 and 7, and 6 were handled separately, as were the two instruments. Derived MTFs indicate nadir pre-sample EIFOVs of 32 to 33 meters track and 36 meters scan (bands 1 to 5 and 7), and 1245 meters track and 141 meters scan (band 6), for both TMs.

  12. Rules of Thumb for Depth of Investigation, Pseudo-Position and Resolution of the Electrical Resistivity Method from Analysis of the Moments of the Sensitivity Function for a Homogeneous Half-Space

    NASA Astrophysics Data System (ADS)

    Butler, S. L.

    2017-12-01

    The electrical resistivity method is now highly developed, with 2D and even 3D surveys routinely performed and fast inversion software available. However, rules of thumb, based on simple mathematical formulas, for important quantities like depth of investigation, horizontal position and resolution have not previously been available and would be useful for survey planning, preliminary interpretation and general education about the method. In this contribution, I will show that the sensitivity function for the resistivity method for a homogeneous half-space can be analyzed in terms of its first and second moments, which yield simple mathematical formulas. The first moment gives the sensitivity-weighted center of an apparent resistivity measurement, with the vertical center being an estimate of the depth of investigation. I will show that this depth of investigation estimate works at least as well as previous estimates based on the peak and median of the depth sensitivity function, which must be calculated numerically for a general four-electrode array. The vertical and horizontal first moments can also be used as pseudopositions when plotting 1D, 2D and 3D pseudosections. The appropriate horizontal plotting point for a pseudosection was not previously obvious for nonsymmetric arrays. The second moments of the sensitivity function give estimates of the spatial extent of the region contributing to an apparent resistivity measurement and hence are measures of resolution. These also have simple mathematical formulas.

  13. Estimators of wheel slip for electric vehicles using torque and encoder measurements

    NASA Astrophysics Data System (ADS)

    Boisvert, M.; Micheau, P.

    2016-08-01

    For the purpose of regenerative braking control in hybrid and electric vehicles, recent studies have suggested controlling the slip ratio of the electric-powered wheel. A slip tracking controller requires an accurate slip estimation over the full range of the slip ratio (from 0 to 1), contrary to the conventional slip limiter (ABS), which calls for an accurate slip estimation in the critical slip area, estimated at around 0.15 in several applications. Considering that it is not possible to directly measure the slip ratio of a wheel, the problem is to estimate it from available online data. To estimate the slip of a wheel, both wheel speed and vehicle speed must be known. Several studies provide algorithms that yield a good estimate of vehicle speed. On the other hand, no algorithm has been proposed for conditioning the wheel speed measurement. Indeed, the noise in the wheel speed measurement reduces the accuracy of the slip estimation, a disturbance that is increasingly significant at low speed and low torque. Herein, two different extended Kalman observers of slip ratio were developed. The first calculates the slip ratio from data provided by an observer of vehicle speed and of driven wheel speed. The second observer uses an original nonlinear model of the slip ratio as a function of the electric motor torque. A sine-tracking algorithm is included in both observers in order to reject harmonic disturbances of the wheel speed measurement. Moreover, mass and road uncertainties can be compensated with a coefficient adapted online by a recursive least-squares (RLS) algorithm. The algorithms were implemented and tested on a three-wheel recreational hybrid vehicle. Experimental results show the effectiveness of both methods.
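    The slip ratio itself is a simple kinematic quantity. A minimal sketch under one common convention (the paper's exact definition may differ):

    ```python
    def slip_ratio(wheel_speed_rad_s, wheel_radius_m, vehicle_speed_m_s):
        """Longitudinal slip ratio in [0, 1].
        Braking: vehicle faster than the wheel surface; traction: the opposite.
        Hypothetical convention for illustration, not the paper's model."""
        v_wheel = wheel_speed_rad_s * wheel_radius_m
        denom = max(v_wheel, vehicle_speed_m_s)
        if denom == 0.0:
            return 0.0
        return abs(vehicle_speed_m_s - v_wheel) / denom
    ```

    A locked wheel during braking gives slip 1; a free-rolling wheel gives slip 0. The denominator choice is what makes the ratio span the full 0-to-1 range the abstract refers to.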

  14. Are cannabis prevalence estimates comparable across countries and regions? A cross-cultural validation using search engine query data.

    PubMed

    Steppan, Martin; Kraus, Ludwig; Piontek, Daniela; Siciliano, Valeria

    2013-01-01

    Prevalence estimation of cannabis use is usually based on self-report data. Although there is evidence on the reliability of this data source, its cross-cultural validity is still a major concern. External objective criteria are needed for this purpose. In this study, cannabis-related search engine query data are used as an external criterion. Data on cannabis use were taken from the 2007 European School Survey Project on Alcohol and Other Drugs (ESPAD). Provincial data came from three Italian nation-wide studies using the same methodology (2006-2008; ESPAD-Italia). Information on cannabis-related search engine query data was based on Google search volume indices (GSI). (1) Reliability analysis was conducted for GSI. (2) Latent measurement models of "true" cannabis prevalence were tested using perceived availability, web-based cannabis searches and self-reported prevalence as indicators. (3) Structure models were set up to test the influences of response tendencies and geographical position (latitude, longitude). In order to test the stability of the models, analyses were conducted on country level (Europe, US) and on provincial level in Italy. Cannabis-related GSI were found to be highly reliable and constant over time. The overall measurement model was highly significant in both data sets. On country level, no significant effects of response bias indicators and geographical position on perceived availability, web-based cannabis searches and self-reported prevalence were found. On provincial level, latitude had a significant positive effect on availability indicating that perceived availability of cannabis in northern Italy was higher than expected from the other indicators. Although GSI showed weaker associations with cannabis use than perceived availability, the findings underline the external validity and usefulness of search engine query data as external criteria. 
The findings suggest an acceptable relative comparability of national (provincial) prevalence estimates of cannabis use that are based on a common survey methodology. Search engine query data alone are too weak an indicator on which to base prevalence estimates, but in combination with other sources (wastewater analysis, sales of cigarette paper) they may provide satisfactory estimates. Copyright © 2012. Published by Elsevier B.V.

  15. A Study about Kalman Filters Applied to Embedded Sensors

    PubMed Central

    Valade, Aurélien; Acco, Pascal; Grabolosa, Pierre; Fourniols, Jean-Yves

    2017-01-01

    Over the last decade, smart sensors have grown in complexity and can now handle multiple measurement sources. This work establishes a methodology for achieving better estimates of physical values by processing raw measurements within a sensor using multi-physical models and Kalman filters for data fusion. With production cost and power consumption as driving constraints, the methodology focuses on algorithmic complexity while meeting real-time constraints and improving both precision and reliability despite the limitations of low-power processors. Consequently, the processing time available for other tasks is maximized. The known problem of estimating a 2D orientation using an inertial measurement unit with automatic gyroscope bias compensation is used to illustrate the proposed methodology, applied to a low-power STM32L053 microcontroller. This application shows promising results, with a processing time of 1.18 ms at 32 MHz and 3.8% CPU usage for computation at a 26 Hz measurement and estimation rate. PMID:29206187
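    The orientation-with-gyro-bias problem can be illustrated by a minimal one-axis tilt Kalman filter: the state is [angle, gyro bias], the gyroscope drives the prediction, and the accelerometer-derived angle provides the update. This is a generic sketch, not the paper's filter; the noise parameters are illustrative assumptions.

    ```python
    import numpy as np

    def kalman_tilt(gyro, acc_angle, dt, q_angle=1e-4, q_bias=1e-6, r_acc=1e-2):
        """Minimal 1-axis tilt Kalman filter with a gyroscope-bias state.
        gyro: measured angular rate (rad/s); acc_angle: angle from the
        accelerometer (rad). Noise values are illustrative, not tuned."""
        x = np.zeros(2)                       # state: [angle, gyro bias]
        P = np.eye(2)
        F = np.array([[1.0, -dt], [0.0, 1.0]])
        Q = np.diag([q_angle, q_bias])
        H = np.array([[1.0, 0.0]])
        out = []
        for w, za in zip(gyro, acc_angle):
            # predict: integrate the bias-corrected rate
            x = np.array([x[0] + dt * (w - x[1]), x[1]])
            P = F @ P @ F.T + Q
            # update with the accelerometer-derived angle
            S = H @ P @ H.T + r_acc
            K = (P @ H.T) / S
            x = x + (K * (za - x[0])).ravel()
            P = (np.eye(2) - K @ H) @ P
            out.append(x[0])
        return np.array(out)

    # e.g. a constant 0.5 rad tilt observed with a 0.1 rad/s gyro bias
    est = kalman_tilt(np.full(2000, 0.1), np.full(2000, 0.5), dt=0.01)
    ```

    Because the bias is a state, the filter learns the constant gyro offset instead of letting the integrated angle drift, which is the "automatic gyroscope bias compensation" the abstract describes.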

  16. Analysis and design of second-order sliding-mode algorithms for quadrotor roll and pitch estimation.

    PubMed

    Chang, Jing; Cieslak, Jérôme; Dávila, Jorge; Zolghadri, Ali; Zhou, Jun

    2017-11-01

    The problem addressed in this paper is that of quadrotor roll and pitch estimation, without any assumption about the knowledge of perturbation bounds, when Inertial Measurement Unit (IMU) data or position measurements are available. A Smooth Sliding Mode (SSM) algorithm is first designed to provide reliable estimation under a smooth disturbance assumption. This assumption is then relaxed with the second proposed Adaptive Sliding Mode (ASM) algorithm, which deals with disturbances of unknown bounds. In addition, the analysis of the observers is extended to the case where measurements are corrupted by bias and noise. The gains of the proposed algorithms are deduced from a Lyapunov function. Furthermore, some useful guidelines are provided for the selection of the observer tuning parameters. The performance of these two approaches is evaluated using a nonlinear simulation model, considering either accelerometer or position measurements. The simulation results demonstrate the benefits of the proposed solutions. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
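    A standard second-order sliding-mode building block, shown here only to illustrate the class of algorithm, is Levant's first-order robust exact differentiator; this is a generic sketch, not the paper's SSM/ASM observers, and the gains below are conventional textbook choices.

    ```python
    import numpy as np

    def levant_differentiator(f, dt, L=2.0, lam0=1.5, lam1=1.1):
        """First-order robust exact differentiator (a second-order sliding-mode
        algorithm). Estimates df/dt from samples f; L must bound |d2f/dt2|.
        Explicit-Euler discretization; gains lam0, lam1 are standard values."""
        z0, z1 = f[0], 0.0
        dhat = []
        for fk in f:
            e = z0 - fk
            v0 = -lam0 * np.sqrt(L) * np.sqrt(abs(e)) * np.sign(e) + z1
            z0 += dt * v0                      # tracks the signal
            z1 += dt * (-lam1 * L * np.sign(e))  # tracks its derivative
            dhat.append(z1)
        return np.array(dhat)

    t = np.arange(0.0, 10.0, 1e-3)
    dhat = levant_differentiator(np.sin(t), 1e-3)  # ~ cos(t) after a transient
    ```

    The discontinuous sign term acts on the derivative of the sliding variable rather than on the estimate itself, which is what gives second-order sliding-mode schemes their finite-time convergence without the chattering of first-order designs.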

  17. Output-feedback control of combined sewer networks through receding horizon control with moving horizon estimation

    NASA Astrophysics Data System (ADS)

    Joseph-Duran, Bernat; Ocampo-Martinez, Carlos; Cembrano, Gabriela

    2015-10-01

    An output-feedback control strategy for pollution mitigation in combined sewer networks is presented. The proposed strategy provides means to apply model-based predictive control to large-scale sewer networks, in spite of the lack of measurements at most of the network sewers. In previous works, the authors presented a hybrid linear control-oriented model for sewer networks, together with the formulation of Optimal Control Problems (OCP) and State Estimation Problems (SEP). By iteratively solving these problems, preliminary Receding Horizon Control with Moving Horizon Estimation (RHC/MHE) results, based on flow measurements, were also obtained. In this work, the RHC/MHE algorithm has been extended to take into account both flow and water level measurements, and the resulting control loop has been extensively simulated to assess the system performance under different measurement availability scenarios and rain events. All simulations were carried out using a detailed physically based model of a real case-study network as a virtual reality.

  18. SU-F-P-44: A Direct Estimate of Peak Skin Dose for Interventional Fluoroscopy Procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weir, V; Zhang, J

    Purpose: There is an increasing demand for medical physicists to calculate peak skin dose (PSD) for interventional fluoroscopy procedures. The dose information (Dose-Area-Product and Air Kerma) displayed on the console cannot be used directly for this purpose. Our clinical experience shows that the use of the existing methods may overestimate or underestimate PSD. This study attempts to develop a direct estimate of PSD from the displayed dose metrics. Methods: An anthropomorphic torso phantom was used for dose measurements for a common fluoroscopic procedure. Entrance skin doses were measured with a Piranha solid state point detector placed on the table surface below the torso phantom. An initial "reference dose rate" (RE) measurement was conducted by comparing the displayed dose rate (mGy/min) to the measured dose rate. The distance from table top to focal spot was taken as the reference distance (RD) at the RE. Table height was then adjusted. The displayed air kerma and DAP were recorded and sent to three physicists to estimate PSD. An inverse square correction was applied to correct the displayed air kerma at various table heights. The PSDs estimated by the physicists and by the proposed method were then compared with the measurements. The estimated DAPs were compared to the displayed DAP readings (mGycm2). Results: The difference between the PSD estimated by the proposed method and direct measurements was less than 5%. For the same set of data, the PSDs estimated by the three physicists differed from the measurements by up to ±52%. The difference between the DAP calculated by the proposed method and the displayed DAP readings was less than 20% at various table heights. Conclusion: PSD may be simply estimated from displayed air kerma or DAP if the distance between the table top and the tube focal spot, or the x-ray beam area on the table top, is available.
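    The inverse-square correction at the heart of the method is a one-liner. A minimal sketch, assuming the displayed air kerma was calibrated at a reference focal-spot-to-table distance; backscatter and table attenuation, which a full PSD estimate would also need, are deliberately ignored here:

    ```python
    def corrected_skin_dose(displayed_air_kerma_mGy, ref_distance_cm, actual_distance_cm):
        """Inverse-square correction of console-displayed air kerma from the
        reference distance (focal spot to table top at calibration) to the
        actual focal-spot-to-skin distance after a table height change.
        Sketch of the abstract's approach, not the authors' exact procedure."""
        return displayed_air_kerma_mGy * (ref_distance_cm / actual_distance_cm) ** 2
    ```

    Raising the table (shortening the focal-spot-to-skin distance) increases the skin dose relative to the displayed value, which is one way manual estimates can go wrong by tens of percent.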

  19. The importance of incorporating functional habitats into conservation planning for highly mobile species in dynamic systems.

    PubMed

    Webb, Matthew H; Terauds, Aleks; Tulloch, Ayesha; Bell, Phil; Stojanovic, Dejan; Heinsohn, Robert

    2017-10-01

    The distribution of mobile species in dynamic systems can vary greatly over time and space. Estimating their population size and geographic range can be problematic and affect the accuracy of conservation assessments. Scarce data on mobile species and the resources they need can also limit the type of analytical approaches available to derive such estimates. We quantified change in availability and use of key ecological resources required for breeding for a critically endangered nomadic habitat specialist, the Swift Parrot (Lathamus discolor). We compared estimates of occupied habitat derived from dynamic presence-background (i.e., presence-only data) climatic models with estimates derived from dynamic occupancy models that included a direct measure of food availability. We then compared estimates that incorporate fine-resolution spatial data on the availability of key ecological resources (i.e., functional habitats) with more common approaches that focus on broader climatic suitability or vegetation cover (due to the absence of fine-resolution data). The occupancy models produced significantly (P < 0.001) smaller (up to an order of magnitude) and more spatially discrete estimates of the total occupied area than climate-based models. The spatial location and extent of the total area occupied with the occupancy models was highly variable between years (131 and 1498 km²). Estimates accounting for the area of functional habitats were significantly smaller (2-58% [SD 16]) than estimates based only on the total area occupied. An increase or decrease in the area of one functional habitat (foraging or nesting) did not necessarily correspond to an increase or decrease in the other. Thus, an increase in the extent of occupied area may not equate to improved habitat quality or function. 
We argue these patterns are typical for mobile resource specialists but often go unnoticed because of limited data over relevant spatial and temporal scales and lack of spatial data on the availability of key resources. Understanding changes in the relative availability of functional habitats is crucial to informing conservation planning and accurately assessing extinction risk for mobile resource specialists. © 2017 Society for Conservation Biology.

  20. MoisturEC: A New R Program for Moisture Content Estimation from Electrical Conductivity Data.

    PubMed

    Terry, Neil; Day-Lewis, Frederick D; Werkema, Dale; Lane, John W

    2018-03-06

    Noninvasive geophysical estimation of soil moisture has potential to improve understanding of flow in the unsaturated zone for problems involving agricultural management, aquifer recharge, and optimization of landfill design and operations. In principle, several geophysical techniques (e.g., electrical resistivity, electromagnetic induction, and nuclear magnetic resonance) offer insight into soil moisture, but data-analysis tools are needed to "translate" geophysical results into estimates of soil moisture, consistent with (1) the uncertainty of this translation and (2) direct measurements of moisture. Although geostatistical frameworks exist for this purpose, straightforward and user-friendly tools are required to fully capitalize on the potential of geophysical information for soil-moisture estimation. Here, we present MoisturEC, a simple R program with a graphical user interface to convert measurements or images of electrical conductivity (EC) to soil moisture. Input includes EC values, point moisture estimates, and definition of either Archie parameters (based on experimental or literature values) or empirical data of moisture vs. EC. The program produces two- and three-dimensional images of moisture based on available EC and direct measurements of moisture, interpolating between measurement locations using a Tikhonov regularization approach. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
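    The Archie-parameter option mentioned above amounts to inverting Archie's law for saturation. A minimal sketch of that conversion, written in Python for illustration (MoisturEC itself is an R program, and adds point conditioning and Tikhonov-regularized interpolation on top); the cementation and saturation exponents below are generic literature-style defaults, not MoisturEC's values:

    ```python
    def moisture_from_ec(sigma_bulk, sigma_water, porosity, m=1.5, n=2.0):
        """Volumetric moisture content from bulk EC via Archie's law:
            sigma_bulk = sigma_water * porosity**m * S**n,  theta = porosity * S.
        m, n are illustrative defaults; site-specific values should be used."""
        S = (sigma_bulk / (sigma_water * porosity ** m)) ** (1.0 / n)
        return porosity * min(S, 1.0)   # clamp saturation at full saturation
    ```

    At full saturation the returned moisture equals the porosity; quartering the bulk EC (with n = 2) halves the inferred saturation.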

  1. Simultaneous estimation of aerosol optical constants and size distribution from angular light-scattering measurement signals

    NASA Astrophysics Data System (ADS)

    He, Zhen-Zong; Liang, Dong; Mao, Jun-Kui; Han, Xing-Si

    2018-05-01

    Not Available

    Project supported by the Jiangsu Provincial Natural Science Foundation, China (Grant Nos. BK20170800 and BK20160794) and the National Natural Science Foundation of China (Grant No. 51606095).

  2. A statistical framework for protein quantitation in bottom-up MS-based proteomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas

    2009-08-15

    Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and confidence measures. Challenges include the presence of low-quality or incorrectly identified peptides and widespread, informative, missing data. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model for protein abundance in terms of peptide peak intensities, applicable to both label-based and label-free quantitation experiments. The model allows for both random and censoring missingness mechanisms and provides naturally for protein-level estimates and confidence measures. The model is also used to derive automated filtering and imputation routines. Three LC-MS datasets are used to illustrate the methods. Availability: The software has been made available in the open-source proteomics platform DAnTE (Polpitiya et al., 2008) (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial-time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. The structure learning algorithm is then extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.

  4. Inertial Measurements for Aero-assisted Navigation (IMAN)

    NASA Technical Reports Server (NTRS)

    Jah, Moriba; Lisano, Michael; Hockney, George

    2007-01-01

    IMAN is a Python tool that provides inertial sensor-based estimates of spacecraft trajectories within an atmospheric influence. It provides Kalman filter-derived spacecraft state estimates based upon data collected onboard, and is shown to perform at a level comparable to the conventional methods of spacecraft navigation in terms of accuracy and at a higher level with regard to the availability of results immediately after completion of an atmospheric drag pass.

  5. Predicting Explosion-Generated Sn and Lg Coda Using Synthetic Seismograms

    DTIC Science & Technology

    2008-09-01

    velocities in the upper crust are based on borehole data, geologic and gravity data, refraction studies and seismic experiments (McLaughlin et al. 1983...realizations of random media. We have estimated the heterogeneity parameters for the NTS using available seismic and geologic data. Lateral correlation...variance and coherence measures between seismic traces are estimated from clusters of nuclear explosions and well-log data. The horizontal von Karman

  6. An Evaluation of Population Density Mapping and Built up Area Estimates in Sri Lanka Using Multiple Methodologies

    NASA Astrophysics Data System (ADS)

    Engstrom, R.; Soundararajan, V.; Newhouse, D.

    2017-12-01

    In this study we examine how well multiple population density and built-up estimates that utilize satellite data compare in Sri Lanka. The population relationship is examined at the Gram Niladhari (GN) level, the lowest administrative unit in Sri Lanka, from the 2011 census. For this study we have two spatial domains: the whole country and a 3,500 km² sub-sample for which we have complete high spatial resolution imagery coverage. For both the entire country and the sub-sample, we examine how consistent the existing publicly available measures of population constructed from satellite imagery are at predicting population density. For the sub-sample alone, we examine how well a suite of values derived from high spatial resolution satellite imagery predicts population density, and how our built-up area estimate compares to other publicly available estimates. Population measures were obtained from the Sri Lankan census and were downloaded from Facebook, WorldPop, GPW, and Landscan. Percentage built-up area at the GN level was calculated from three sources: Facebook, Global Urban Footprint (GUF), and the Global Human Settlement Layer (GHSL). For the sub-sample we derived a variety of indicators from the high spatial resolution imagery using deep-learning convolutional neural networks, an object-oriented approach, and a non-overlapping-block spatial-feature approach. Variables calculated include cars, shadows (a proxy for building height), built-up area, buildings, roof types, roads, type of agriculture, NDVI, Pantex, Histogram of Oriented Gradients (HOG), and others. Results indicate that population estimates are accurate at the higher DS Division level but not necessarily at the GN level. Estimates from Facebook correlated well with census population (GN correlation of 0.91), but measures from GPW and WorldPop are more weakly correlated (0.64 and 0.34). Estimates of built-up area appear to be reliable. In the 32-DSD sub-sample, Facebook's built-up area measure is highly correlated with our built-up measure (correlation of 0.9). Preliminary regression results based on variables selected from Lasso regressions indicate that satellite indicators have exceptionally strong predictive power in predicting GN-level population level and density, with out-of-sample r-squared of 0.75 and 0.72, respectively.
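    The out-of-sample r-squared cited above is computed by fitting on a training split and scoring R² on held-out data. A minimal sketch with a synthetic "satellite indicator" and ordinary least squares (the study's actual models use Lasso-selected features, not this toy regression):

    ```python
    import numpy as np

    def out_of_sample_r2(X_train, y_train, X_test, y_test):
        """Fit OLS on the training split, score R^2 on the held-out split."""
        A = np.column_stack([X_train, np.ones(len(X_train))])    # add intercept
        coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
        A_test = np.column_stack([X_test, np.ones(len(X_test))])
        resid = y_test - A_test @ coef
        ss_res = np.sum(resid ** 2)
        ss_tot = np.sum((y_test - y_test.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    # Synthetic indicator with a noisy linear link to population density
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 400).reshape(-1, 1)
    y = 3.0 * x.ravel() + rng.normal(0, 0.5, 400)
    r2 = out_of_sample_r2(x[:300], y[:300], x[300:], y[300:])
    ```

    Scoring on data the model never saw is what distinguishes the reported 0.75/0.72 figures from in-sample fit statistics.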

  7. Age estimation by using dental radiographs

    PubMed Central

    Limdiwala, Piyush G.; Shah, J. S.

    2013-01-01

    Background: Estimation of age is important in forensic sciences as a way to establish the identity of human remains. Of the various parts of the body used in age estimation, teeth are the least affected by the taphonomic process. Their durability means that they are sometimes the only body part available for study. Several methods of age estimation have been studied using bone and teeth; among them, tooth wear and apposition of secondary dentine are the currently available non-destructive methods. Objectives: The purpose of the study was to determine the age of adults using Kvaal's method, as well as to establish the relationship between chronological age and dental age, with its reliability and limitations, on digital panoramic radiographs. Materials and Methods: The present study was based on panoramic radiographs divided into two groups: 100 orthopantomographs meeting Kvaal's criteria (Group A) and 50 orthopantomographs not meeting Kvaal's criteria (Group B). Various parameters were measured, and the results were analyzed statistically using SPSS 12.0. Result and Conclusion: On the basis of Kvaal's criteria, the difference between estimated age and chronological age was 8.3 years. This suggests that the accuracy of this method depends on the precision of the measurements and on the quality and number of the orthopantomographs. PMID:24255560

  8. Multi-point Measurements of Relativistic Electrons in the Magnetosphere

    NASA Astrophysics Data System (ADS)

    Li, X.; Selesnick, R.; Baker, D. N.; Blake, J. B.; Schiller, Q.; Blum, L. W.; Zhao, H.; Jaynes, A. N.; Kanekal, S.

    2014-12-01

    We take advantage of five different DC electric field measurements in the plasma sheet available from the EFW double probe experiment, the EDI electron drift instrument, the CODIF and HIA ion spectrometers, and the PEACE electron spectrometer on the four Cluster spacecraft. The calibrated observations of the three spectrometers are used to determine the proton and electron velocity moments. The velocity moments can be used to estimate the proton and electron drift velocity and thereby the DC electric field, assuming that the electron and proton velocity perpendicular to the magnetic field is dominated by the ExB drift motion. Naturally, when ions and electrons do not perform a proper drift motion, which can happen in the plasma sheet, the DC electric field estimated from ion and electron motion is not correct. However, surprisingly often the DC electric fields estimated from electron and ion motions are identical, suggesting that this field is a real DC electric field around the measurement point. As the measurement techniques are so different, it is quite plausible that when two different measurements yield the same DC electric field, it is the correct field. All five measurements of the DC electric field are usually not simultaneously available, especially on Cluster 2, where CODIF and HIA are not operational, or on Cluster 4, where EDI is off. In this presentation we investigate the DC electric field in various transient plasma sheet events, such as dipolarization events and bursty bulk flows (BBFs), and how the five measurements agree or disagree. Several important issues are considered, e.g., (1) what kind of DC electric fields exist in such events and what are their spatial scales, (2) do electrons and ions perform ExB drift motions in these events, and (3) how well the instruments have been calibrated.
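    Inferring the DC electric field from a measured drift velocity uses the ExB relation v = (E x B)/|B|², which, for E perpendicular to B, inverts to E = B x v. A minimal sketch with placeholder field values (normalized units, not Cluster data):

    ```python
    import numpy as np

    def efield_from_drift(v_drift, B):
        """DC electric field inferred from a measured E x B drift velocity:
        v = (E x B)/|B|^2 with E perpendicular to B implies E = B x v."""
        return np.cross(B, v_drift)

    def exb_drift(E, B):
        """Drift velocity from the fields, for a round-trip consistency check."""
        return np.cross(E, B) / np.dot(B, B)
    ```

    When the electron- and ion-derived drift velocities give the same E through this inversion, the abstract's argument is that the recovered field is likely the real DC electric field.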

  9. Synergistic Use of SMOS Measurements with SMAP Derived and In-situ Data over Valencia Anchor Station by Using Downscaling Technique

    NASA Astrophysics Data System (ADS)

    Ansari Amoli, Abdolreza; Lopez-Baeza, Ernesto; Mahmoudi, Ali; Mahmoodi, Ali

    2016-07-01

    Soil moisture products from active sensors are not operationally available. Passive remote sensors return more accurate estimates, but their resolution is much coarser. One solution to overcome this problem is the synergy between radar and radiometric data by using disaggregation (downscaling) techniques. Few studies have been conducted to merge high resolution radar and coarse resolution radiometer measurements in order to obtain an intermediate resolution product. In this paper we present an algorithm using combined available SMAP (Soil Moisture Active and Passive) radar and SMOS (Soil Moisture and Ocean Salinity) radiometer measurements to estimate surface soil moisture over the Valencia Anchor Station (VAS), Valencia, Spain. The goal is to combine the respective attributes of the radar and radiometer observations to estimate soil moisture at a resolution of 3 km. The algorithm disaggregates the coarse resolution SMOS (15 km) radiometer brightness temperature product based on the spatial variation of the high resolution SMAP (3 km) radar backscatter. The disaggregation of the radiometer brightness temperature uses the radar backscatter spatial patterns within the radiometer footprint that are inferred from the radar measurements. For this reason the radar measurements within the radiometer footprint are scaled by parameters that are derived from the temporal fluctuations in the radar and radiometer measurements.
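    The disaggregation step described above can be sketched in its simplest form: spread one coarse brightness temperature over the fine radar pixels in its footprint according to their backscatter deviations from the footprint mean. This is a simplified SMAP-style active/passive sketch, not the authors' algorithm; the sensitivity parameter beta (K per dB) would in practice be derived from the temporal radar/radiometer covariation the abstract mentions, and the values below are placeholders.

    ```python
    import numpy as np

    def disaggregate_tb(tb_coarse, sigma_fine_db, beta):
        """Disaggregate one coarse radiometer brightness temperature (K) over
        the fine-resolution radar pixels inside its footprint, using the
        spatial pattern of radar backscatter (dB). Illustrative sketch only."""
        sigma = np.asarray(sigma_fine_db, dtype=float)
        return tb_coarse + beta * (sigma - sigma.mean())

    # Placeholder values: one 15 km TB, four 3 km backscatter pixels
    tb_fine = disaggregate_tb(250.0, [-10.0, -12.0, -8.0, -10.0], beta=-2.5)
    ```

    By construction the fine-scale values average back to the coarse observation, so the downscaling adds spatial detail without altering the radiometer's footprint-mean measurement.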

  10. Surrogate gas prediction model as a proxy for Δ14C-based measurements of fossil fuel-CO2.

    PubMed

    Coakley, Kevin J; Miller, John B; Montzka, Stephen A; Sweeney, Colm; Miller, Ben R

    2016-06-27

    The measured 14C:12C isotopic ratio of atmospheric CO2 (and its associated derived Δ14C value) is an ideal tracer for determination of the fossil fuel derived CO2 enhancement contributing to any atmospheric CO2 measurement (Cff). Given enough such measurements, independent top-down estimation of US fossil fuel-CO2 emissions should be possible. However, the number of Δ14C measurements is presently constrained by cost, available sample volume, and availability of mass spectrometer measurement facilities. Δ14C is therefore measured in just a small fraction of samples obtained by flask air sampling networks around the world. Here, we develop a Projection Pursuit Regression (PPR) model to predict Cff as a function of multiple surrogate gases acquired within the NOAA/ESRL Global Greenhouse Gas Reference Network (GGGRN). The surrogates consist of measured enhancements of various anthropogenic trace gases, including CO, SF6, and halo- and hydrocarbons acquired in vertical airborne sampling profiles near Cape May, NJ and Portsmouth, NH from 2005 through 2010. Model performance for these sites is quantified based on predicted values corresponding to test data excluded from the model building process. Chi-square hypothesis test analysis indicates that these predictions and corresponding observations are consistent given our uncertainty budget, which accounts for random effects and one particular systematic effect. However, quantification of the combined uncertainty of the prediction due to all relevant systematic effects is difficult because of the limited range of the observations and their relatively high fractional uncertainties at the sampling sites considered here. To account for the possibility of additional systematic effects, we incorporate another component of uncertainty into our budget. Expanding the number of Δ14C measurements in the NOAA GGGRN and building new PPR models at additional sites would improve our understanding of uncertainties and potentially increase the number of Cff estimates by approximately a factor of three. Provided that these estimates are of comparable quality to Δ14C-based estimates, we expect an improved determination of fossil fuel-CO2 emissions.

  11. Technical Adequacy of the easyCBM Primary-Level Reading Measures (Grades K-1), 2009-2010 Version. Technical Report #1003

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei; Nese, Joseph F. T.; Jamgochian, Elisa M.; Alonzo, Julie; Tindal, Gerald

    2010-01-01

    In this technical report, we provide the results of a series of studies on the technical adequacy of the early reading measures available on the easyCBM[R] assessment system. The results from the two-level hierarchical linear growth model analyses suggest that the reliability of the slope estimates for the easyCBM[R] reading measures are strong,…

  12. Nonlinear, discrete flood event models, 1. Bayesian estimation of parameters

    NASA Astrophysics Data System (ADS)

    Bates, Bryson C.; Townley, Lloyd R.

    1988-05-01

    In this paper (Part 1), a Bayesian procedure for parameter estimation is applied to discrete flood event models. The essence of the procedure is the minimisation of a sum of squares function for models in which the computed peak discharge is nonlinear in terms of the parameters. This objective function is dependent on the observed and computed peak discharges for several storms on the catchment, information on the structure of observation error, and prior information on parameter values. The posterior covariance matrix gives a measure of the precision of the estimated parameters. The procedure is demonstrated using rainfall and runoff data from seven Australian catchments. It is concluded that the procedure is a powerful alternative to conventional parameter estimation techniques in situations where a number of floods are available for parameter estimation. Parts 2 and 3 (Bates, this volume; Bates and Townley, this volume) will discuss the application of statistical nonlinearity measures and prediction uncertainty analysis to calibrated flood models.
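
The objective described, a weighted data sum of squares plus a Gaussian prior penalty, can be sketched by stacking data and prior residuals and handing them to a least-squares solver, with the posterior covariance approximated from the Jacobian at the optimum. The event model (a hypothetical power law Q = a*P^b) and all numbers below are illustrative assumptions, not the paper's catchment models:

```python
import numpy as np
from scipy.optimize import least_squares

def peak_discharge(theta, rainfall):
    """Hypothetical nonlinear event model: Q = a * P**b."""
    a, b = theta
    return a * rainfall**b

def bayes_residuals(theta, rainfall, q_obs, sigma_obs, theta_prior, sigma_prior):
    """Stack data misfit and prior misfit so that the sum of their squares
    is the Bayesian objective (Gaussian errors, Gaussian prior)."""
    data_part = (q_obs - peak_discharge(theta, rainfall)) / sigma_obs
    prior_part = (theta - theta_prior) / sigma_prior
    return np.concatenate([data_part, prior_part])

rng = np.random.default_rng(1)
rainfall = rng.uniform(10, 100, size=7)        # seven storms on the catchment
q_obs = peak_discharge([0.5, 1.3], rainfall) + rng.normal(0, 2.0, size=7)

fit = least_squares(bayes_residuals, x0=[1.0, 1.0],
                    args=(rainfall, q_obs, 2.0,
                          np.array([0.6, 1.2]), np.array([0.5, 0.5])))
# Gaussian approximation to the posterior covariance at the optimum
J = fit.jac
post_cov = np.linalg.inv(J.T @ J)
print(fit.x, np.sqrt(np.diag(post_cov)))
```

The square roots of the diagonal of `post_cov` play the role of the precision measure the abstract mentions.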

  13. Traveltime and longitudinal dispersion in Illinois streams

    USGS Publications Warehouse

    Graf, Julia B.

    1986-01-01

    Twenty-seven measurements of traveltime and longitudinal dispersion in 10 Illinois streams made from 1975 to 1982 provide data needed for estimating traveltime of peak concentration of a conservative solute, traveltime of the leading edge of a solute cloud, peak concentration resulting from injection of a given quantity of solute, and passage time of solute past a given point on a stream. These four variables can be estimated graphically for each stream from distance of travel and either discharge at the downstream end of the reach or flow-duration frequency. From equations developed from field measurements, the traveltime and dispersion characteristics also can be estimated for other unregulated streams in Illinois that have drainage areas less than about 1,500 square miles. For unmeasured streams, traveltime of peak concentration and of the leading edge of the cloud are related to discharge at the downstream end of the reach and to distance of travel. For both measured and unmeasured streams, peak concentration and passage time are best estimated from the relation of each to traveltime. In measured streams, dispersion efficiency is greater than that predicted by Fickian diffusion theory. The rate of decrease in peak concentration with traveltime is about equal to the rate of increase in passage time. Average velocity in a stream reach, given by the velocity of the center of solute mass in that reach, can be estimated from an equation developed from measured values. The equation relates average reach velocity to discharge at the downstream end of the reach. Average reach velocities computed for 9 of the 10 streams from available equations that are based on hydraulic-geometry relations are high relative to measured values. The estimating equation developed from measured velocities provides estimates of average reach velocity that are closer to measured velocities than are those computed using equations developed from hydraulic-geometry relations.

  14. A demonstration of position angle-only weak lensing shear estimators on the GREAT3 simulations

    NASA Astrophysics Data System (ADS)

    Whittaker, Lee; Brown, Michael L.; Battye, Richard A.

    2015-12-01

    We develop and apply the position angle-only shear estimator of Whittaker, Brown & Battye to realistic galaxy images. This is done by demonstrating the method on the simulations of the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, which include contributions from anisotropic point spread functions (PSFs). We measure the position angles of the galaxies using three distinct methods - the integrated light method, quadrupole moments of surface brightness, and using model-based ellipticity measurements provided by IM3SHAPE. A weighting scheme is adopted to address biases in the position angle measurements which arise in the presence of an anisotropic PSF. Biases on the shear estimates, due to measurement errors on the position angles and correlations between the measurement errors and the true position angles, are corrected for using simulated galaxy images and an iterative procedure. The properties of the simulations are estimated using the deep field images provided as part of the challenge. A method is developed to match the distributions of galaxy fluxes and half-light radii from the deep fields to the corresponding distributions in the field of interest. We recover angle-only shear estimates with a performance close to current well-established model and moments-based methods for all three angle measurement techniques. The Q-values for all three methods are found to be Q ˜ 400. The code is freely available online at http://www.jb.man.ac.uk/mbrown/angle_only_shear/.

  15. Statistical models for incorporating data from routine HIV testing of pregnant women at antenatal clinics into HIV/AIDS epidemic estimates.

    PubMed

    Sheng, Ben; Marsh, Kimberly; Slavkovic, Aleksandra B; Gregson, Simon; Eaton, Jeffrey W; Bao, Le

    2017-04-01

    HIV prevalence data collected from routine HIV testing of pregnant women at antenatal clinics (ANC-RT) are potentially available from all facilities that offer testing services to pregnant women and can be used to improve estimates of national and subnational HIV prevalence trends. We develop methods to incorporate this new data source into the Joint United Nations Programme on HIV/AIDS (UNAIDS) Estimation and Projection Package in Spectrum 2017. We develop a new statistical model for incorporating ANC-RT HIV prevalence data, aggregated either to the health facility level (site-level) or regionally (census-level), to estimate HIV prevalence alongside existing sources of HIV prevalence data from ANC unlinked anonymous testing (ANC-UAT) and household-based national population surveys. Synthetic data are generated to understand how the availability of ANC-RT data affects the accuracy of various parameter estimates. We estimate HIV prevalence and additional parameters using both ANC-RT and other existing data. Fitting HIV prevalence using synthetic data generally gives precise estimates of the underlying trend and other parameters. More years of ANC-RT data should improve prevalence estimates. More ANC-RT sites and continuation with existing ANC-UAT sites may improve the estimate of calibration between ANC-UAT and ANC-RT sites. We have proposed methods to incorporate ANC-RT data into Spectrum to obtain more precise estimates of prevalence and other measures of the epidemic. Many assumptions about the accuracy, consistency, and representativeness of ANC-RT prevalence underlie the use of these data for monitoring HIV epidemic trends and should be tested as more data become available from national ANC-RT programs.

  16. Statistical Models for Incorporating Data from Routine HIV Testing of Pregnant Women at Antenatal Clinics into HIV/AIDS Epidemic Estimates

    PubMed Central

    Sheng, Ben; Marsh, Kimberly; Slavkovic, Aleksandra B.; Gregson, Simon; Eaton, Jeffrey W.; Bao, Le

    2017-01-01

    Objective HIV prevalence data collected from routine HIV testing of pregnant women at antenatal clinics (ANC-RT) are potentially available from all facilities that offer testing services to pregnant women, and can be used to improve estimates of national and sub-national HIV prevalence trends. We develop methods to incorporate this new data source into the UNAIDS Estimation and Projection Package (EPP) in Spectrum 2017. Methods We develop a new statistical model for incorporating ANC-RT HIV prevalence data, aggregated either to the health facility level (‘site-level’) or regionally (‘census-level’), to estimate HIV prevalence alongside existing sources of HIV prevalence data from ANC unlinked anonymous testing (ANC-UAT) and household-based national population surveys. Synthetic data are generated to understand how the availability of ANC-RT data affects the accuracy of various parameter estimates. Results We estimate HIV prevalence and additional parameters using both ANC-RT and other existing data. Fitting HIV prevalence using synthetic data generally gives precise estimates of the underlying trend and other parameters. More years of ANC-RT data should improve prevalence estimates. More ANC-RT sites and continuation with existing ANC-UAT sites may improve the estimate of calibration between ANC-UAT and ANC-RT sites. Conclusion We have proposed methods to incorporate ANC-RT data into Spectrum to obtain more precise estimates of prevalence and other measures of the epidemic. Many assumptions about the accuracy, consistency, and representativeness of ANC-RT prevalence underlie the use of these data for monitoring HIV epidemic trends, and should be tested as more data become available from national ANC-RT programs. PMID:28296804

  17. Database for earthquake strong motion studies in Italy

    USGS Publications Warehouse

    Scasserra, G.; Stewart, J.P.; Kayen, R.E.; Lanzo, G.

    2009-01-01

    We describe an Italian database of strong ground motion recordings and databanks delineating conditions at the instrument sites and characteristics of the seismic sources. The strong motion database consists of 247 corrected recordings from 89 earthquakes and 101 recording stations. Uncorrected recordings were drawn from public web sites and processed on a record-by-record basis using a procedure utilized in the Next-Generation Attenuation (NGA) project to remove instrument resonances, minimize noise effects through low- and high-pass filtering, and apply baseline correction. The number of available uncorrected recordings was reduced by 52% (mostly because of s-triggers) to arrive at the 247 recordings in the database. The site databank includes for every recording site the surface geology, a measurement or estimate of average shear wave velocity in the upper 30 m (Vs30), and information on instrument housing. Of the 89 sites, 39 have on-site velocity measurements (17 of which were performed as part of this study using SASW techniques). For remaining sites, we estimate Vs30 based on measurements on similar geologic conditions where available. Where no local velocity measurements are available, correlations with surface geology are used. Source parameters are drawn from databanks maintained (and recently updated) by Istituto Nazionale di Geofisica e Vulcanologia and include hypocenter location and magnitude for small events (M ≲ 5.5) and finite source parameters for larger events. © 2009 A.S. Elnashai & N.N. Ambraseys.

  18. Methods of Constructing a Blended Performance Function Suitable for Formation Flight

    NASA Technical Reports Server (NTRS)

    Ryan, John J.

    2017-01-01

    This paper presents two methods for constructing an approximate performance function of a desired parameter using correlated parameters. The methods are useful when real-time measurements of a desired performance function are not available to applications such as extremum-seeking control systems. The first method approximates an a priori measured or estimated desired performance function by combining real-time measurements of readily available correlated parameters. The parameters are combined using a weighting vector determined from a minimum-squares optimization to form a blended performance function. The blended performance function better matches the desired performance function minimum than single-measurement performance functions. The second method expands upon the first by replacing the a priori data with near-real-time measurements of the desired performance function. The resulting blended performance function weighting vector is updated when measurements of the desired performance function are available. Both methods are applied to data collected during formation-flight-for-drag-reduction flight experiments.

  19. Epithelium percentage estimation facilitates epithelial quantitative protein measurement in tissue specimens.

    PubMed

    Chen, Jing; Toghi Eshghi, Shadi; Bova, George Steven; Li, Qing Kay; Li, Xingde; Zhang, Hui

    2013-12-01

    The rapid advancement of high-throughput tools for quantitative measurement of proteins has demonstrated the potential for the identification of proteins associated with cancer. However, the quantitative results on cancer tissue specimens are usually confounded by tissue heterogeneity, e.g. regions with cancer usually have significantly higher epithelium content yet lower stromal content. It is therefore necessary to develop a tool to facilitate the interpretation of the results of protein measurements in tissue specimens. Epithelial cell adhesion molecule (EpCAM) and cathepsin L (CTSL) are two epithelial proteins whose expressions in normal and tumorous prostate tissues were confirmed by measuring staining intensity with immunohistochemical staining (IHC). The expressions of these proteins were measured by ELISA in protein extracts from OCT-embedded frozen prostate tissues. To eliminate the influence of tissue heterogeneity on epithelial protein quantification measured by ELISA, a color-based segmentation method was developed in-house for estimation of epithelium content using H&E histology slides from the same prostate tissues, and the estimated epithelium percentage was used to normalize the ELISA results. The epithelium contents of the same slides were also estimated by a pathologist and used to normalize the ELISA results. The computer based results were compared with the pathologist's reading. We found that both EpCAM and CTSL levels, measured by the ELISA assays themselves, were greatly affected by epithelium content in the tissue specimens. Without adjusting for epithelium percentage, both EpCAM and CTSL levels appeared significantly higher in tumor tissues than normal tissues with a p value less than 0.001. However, after normalization by the epithelium percentage, ELISA measurements of both EpCAM and CTSL were in agreement with IHC staining results, showing a significant increase only in EpCAM with no difference in CTSL expression in cancer tissues.
These results were obtained with normalization by both the computer estimated and pathologist estimated epithelium percentage. Our results show that estimation of tissue epithelium percentage using our color-based segmentation method correlates well with pathologists' estimation of tissue epithelium percentages. The epithelium contents estimated by color-based segmentation may be useful in immuno-based analysis or clinical proteomic analysis of tumor proteins. The codes used for epithelium estimation as well as the micrographs with estimated epithelium content are available online.
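
The normalization step described amounts to dividing the bulk-tissue ELISA reading by the estimated epithelial fraction of the specimen. A minimal sketch with made-up numbers, illustrating how a tumor/normal difference in raw readings can vanish after adjustment:

```python
def normalize_by_epithelium(elisa_value, epithelium_fraction):
    """Scale a bulk-tissue ELISA measurement to an epithelium-specific level
    by dividing by the estimated epithelial fraction of the specimen."""
    if not 0.0 < epithelium_fraction <= 1.0:
        raise ValueError("epithelium fraction must be in (0, 1]")
    return elisa_value / epithelium_fraction

# Illustrative values only: the tumor specimen reads higher in bulk ELISA
# mainly because it contains more epithelium, not more protein per cell.
tumor_raw, tumor_epi = 80.0, 0.8       # ng/mL, 80% epithelium
normal_raw, normal_epi = 40.0, 0.4     # ng/mL, 40% epithelium
print(normalize_by_epithelium(tumor_raw, tumor_epi),
      normalize_by_epithelium(normal_raw, normal_epi))
```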

  20. Uncertainty Evaluation of Measurements with Pyranometers and Pyrheliometers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konings, Jorgen; Habte, Aron

    2016-01-03

    Evaluating the performance of photovoltaic (PV) cells, modules, arrays, and systems relies on accurate measurement of the available solar radiation resources. Solar radiation resources are measured using radiometers such as pyranometers (global horizontal irradiance) and pyrheliometers (direct normal irradiance). The accuracy of solar radiation data measured by radiometers depends not only on the specification of the instrument but also on a) the calibration procedure, b) the measurement conditions and maintenance, and c) the environmental conditions. Therefore, statements about the overall measurement uncertainty can only be made on an individual basis, taking all relevant factors into account. This paper provides guidelines and recommended procedures for estimating the uncertainty in measurements by radiometers using the Guide to the Expression of Uncertainty in Measurement (GUM) method. Special attention is paid to the concept of data availability and its link to uncertainty evaluation.
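
The GUM procedure referenced combines uncorrelated uncertainty components in quadrature and applies a coverage factor for the expanded uncertainty. A minimal sketch; the budget entries below are placeholders, not values from the paper:

```python
import math

def combined_standard_uncertainty(components):
    """GUM law of propagation for uncorrelated inputs:
    u_c = sqrt(sum_i (c_i * u_i)**2),
    where c_i is the sensitivity coefficient and u_i the standard uncertainty."""
    return math.sqrt(sum((c * u) ** 2 for c, u in components))

# Illustrative pyranometer budget: (sensitivity coefficient,
# standard uncertainty in W/m^2 equivalent); placeholder values
budget = [
    (1.0, 4.0),   # calibration
    (1.0, 2.0),   # zero offsets
    (1.0, 1.5),   # data logger
]
u_c = combined_standard_uncertainty(budget)
U_95 = 2.0 * u_c   # expanded uncertainty, coverage factor k = 2
print(round(u_c, 2), round(U_95, 2))
```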

  1. Estimating turbidity current conditions from channel morphology: A Froude number approach

    NASA Astrophysics Data System (ADS)

    Sequeiros, Octavio E.

    2012-04-01

    There is a growing need across different disciplines to develop better predictive tools for flow conditions of density and turbidity currents. Apart from resorting to complex numerical modeling or expensive field measurements, little is known about how to estimate gravity flow parameters from scarce available data and how they relate to each other. This study presents a new method to estimate normal flow conditions of gravity flows from channel morphology based on an extensive data set of laboratory and field measurements. The compilation consists of 78 published works containing 1092 combined measurements of velocity and concentration of gravity flows dating as far back as the early 1950s. Because the available data do not span all ranges of the critical parameters, such as bottom slope, a validated Reynolds-averaged Navier-Stokes (RANS) κ-ε numerical model is used to cover the gaps. It is shown that gravity flows fall within a range of Froude numbers spanning 1 order of magnitude centered on unity, as opposed to rivers and open-channel flows which extend to a much wider range. It is also observed that the transition from subcritical to supercritical flow regime occurs around a slope of 1%, with a spread caused by parameters other than the bed slope, like friction and suspended sediment settling velocity. The method is based on a set of equations relating Froude number to bed slope, combined friction, suspended material, and other flow parameters. The applications range from quick estimations of gravity flow conditions to improved numerical modeling and back calculation of missing parameters. A real case scenario of turbidity current estimation from a submarine canyon off the Nigerian coast is provided as an example.
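
The Froude number that organizes this analysis is, for a density or turbidity current, the densimetric form Fr = U / sqrt(g' h), with reduced gravity g' = g (rho_current - rho_ambient) / rho_ambient. A minimal sketch with illustrative values (not taken from the paper):

```python
import math

def densimetric_froude(velocity, layer_depth, rho_current, rho_ambient, g=9.81):
    """Fr = U / sqrt(g' h), with reduced gravity
    g' = g * (rho_current - rho_ambient) / rho_ambient."""
    g_prime = g * (rho_current - rho_ambient) / rho_ambient
    return velocity / math.sqrt(g_prime * layer_depth)

# Illustrative turbidity current: 1.5 m/s, 10 m thick, 2% excess density
fr = densimetric_froude(1.5, 10.0, 1020.0, 1000.0)
print(round(fr, 2))   # Fr slightly above 1: weakly supercritical
```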

  2. NASA MEaSUREs Combined ASTER and MODIS Emissivity over Land (CAMEL) Uncertainty Estimation

    NASA Astrophysics Data System (ADS)

    Feltz, M.; Borbas, E. E.; Knuteson, R. O.; Hulley, G. C.; Hook, S. J.

    2016-12-01

    Under the NASA MEaSUREs project, a new global land surface emissivity database is being made available as part of the Unified and Coherent Land Surface Temperature and Emissivity Earth System Data Record. This new CAMEL emissivity database is created by the merging of the MODIS baseline-fit emissivity database (UWIREMIS) developed at the University of Wisconsin-Madison and the ASTER Global Emissivity Dataset v4 produced at the Jet Propulsion Laboratory. The combined CAMEL product leverages the ability of ASTER's 5 bands to more accurately resolve the TIR (8-12 micron) region and the ability of UWIREMIS to provide information throughout the 3.6-12 micron IR region. It will be made available for 2000 through 2017 at monthly mean, 5 km resolution for 13 bands within the 3.6-14.3 micron region, and will also be extended to 417 infrared spectral channels using a principal component regression approach. Uncertainty estimates of the CAMEL will be provided that combine temporal, spatial, and algorithm variability as part of a total uncertainty estimate for the emissivity product. The spatial and temporal uncertainties are calculated as the standard deviation of the surrounding 5x5 pixels and 3 neighboring months respectively, while the algorithm uncertainty is calculated using a measure of the difference between the two CAMEL emissivity inputs, the ASTER GED and MODIS baseline-fit products. This work describes these uncertainty estimation methods in detail and shows first results. Global, monthly results for different seasons are shown as well as case study examples at locations with different land surface types. Comparisons of the case studies to both lab values and an independent emissivity climatology derived from IASI measurements (Dan Zhou et al., IEEE Trans., 2011) are included.
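
The three components described (standard deviation over the surrounding 5x5 pixels, standard deviation over 3 neighboring months, and the ASTER/MODIS input difference) can be combined in quadrature. This sketch on synthetic data assumes a root-sum-square combination; it is an illustration of the recipe, not the released product code:

```python
import numpy as np

def camel_style_uncertainty(emis_map, emis_months, emis_aster, emis_modis, i, j):
    """Total uncertainty at pixel (i, j) as the root-sum-square of spatial,
    temporal, and algorithm components."""
    window = emis_map[max(i - 2, 0):i + 3, max(j - 2, 0):j + 3]
    u_spatial = np.std(window)                 # surrounding 5x5 pixels
    u_temporal = np.std(emis_months)           # 3 neighboring monthly values
    u_algorithm = abs(emis_aster - emis_modis) # difference of the two inputs
    return np.sqrt(u_spatial**2 + u_temporal**2 + u_algorithm**2)

rng = np.random.default_rng(2)
emis = 0.96 + 0.005 * rng.normal(size=(20, 20))   # synthetic emissivity field
u_total = camel_style_uncertainty(
    emis, np.array([0.958, 0.961, 0.960]), 0.962, 0.959, 10, 10)
print(float(u_total))
```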

  3. Thermal conductivity, bulk properties, and thermal stratigraphy of silicic tuffs from the upper portion of hole USW-G1, Yucca Mountain, Nye County, Nevada

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lappin, A.R.; VanBuskirk, R.G.; Enniss, D.O.

    1982-03-01

    Thermal-conductivity and bulk-property measurements were made on welded and nonwelded silicic tuffs from the upper portion of Hole USW-G1, located near the southwestern margin of the Nevada Test Site. Bulk-property measurements were made by standard techniques. Thermal conductivities were measured at temperatures as high as 280°C, confining pressures to 10 MPa, and pore pressures to 1.5 MPa. Extrapolation of measured saturated conductivities to zero porosity suggests that matrix conductivity of both zeolitized and devitrified tuffs is independent of stratigraphic position, depth, and probably location. This fact allows development of a thermal-conductivity stratigraphy for the upper portion of Hole G1. Estimates of saturated conductivities of zeolitized nonwelded tuffs and devitrified tuffs below the water table appear most reliable. Estimated conductivities of saturated densely welded devitrified tuffs above the water table are less reliable, due to both internal complexity and limited data presently available. Estimation of conductivity of dewatered tuffs requires use of different air thermal conductivities in devitrified and zeolitized samples. Estimated effects of in-situ fracturing generally appear negligible.

  4. Comparison of Vehicle-Broadcasted Fuel Consumption Rates against Precise Fuel Measurements for Medium- and Heavy-Duty Vehicles and Engines

    DOE PAGES

    Pink, Alex; Ragatz, Adam; Wang, Lijuan; ...

    2017-03-28

    Vehicles continuously report real-time fuel consumption estimates over their data bus, known as the controller area network (CAN). However, the accuracy of these fueling estimates is uncertain to researchers who collect these data from any given vehicle. To assess the accuracy of these estimates, CAN-reported fuel consumption data are compared against fuel measurements from precise instrumentation. The data analyzed consisted of eight medium/heavy-duty vehicles and two medium-duty engines. Varying discrepancies between CAN fueling rates and the more accurate measurements emerged, but without a vehicular trend: for some vehicles the CAN under-reported fuel consumption and for others the CAN over-reported it. Furthermore, a qualitative real-time analysis revealed that the operating conditions under which these fueling discrepancies arose varied among vehicles. A drive cycle analysis revealed that, while CAN fueling estimate accuracy differs for individual vehicles, CAN estimates capture the relative fuel consumption differences between drive cycles within 4% for all vehicles and even more accurately for some vehicles. Finally, in situations where only CAN-reported data are available, CAN fueling estimates can provide relative fuel consumption trends but not accurate or precise fuel consumption rates.

  5. Comparison of Vehicle-Broadcasted Fuel Consumption Rates against Precise Fuel Measurements for Medium- and Heavy-Duty Vehicles and Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pink, Alex; Ragatz, Adam; Wang, Lijuan

    Vehicles continuously report real-time fuel consumption estimates over their data bus, known as the controller area network (CAN). However, the accuracy of these fueling estimates is uncertain to researchers who collect these data from any given vehicle. To assess the accuracy of these estimates, CAN-reported fuel consumption data are compared against fuel measurements from precise instrumentation. The data analyzed consisted of eight medium/heavy-duty vehicles and two medium-duty engines. Varying discrepancies between CAN fueling rates and the more accurate measurements emerged, but without a vehicular trend: for some vehicles the CAN under-reported fuel consumption and for others the CAN over-reported it. Furthermore, a qualitative real-time analysis revealed that the operating conditions under which these fueling discrepancies arose varied among vehicles. A drive cycle analysis revealed that, while CAN fueling estimate accuracy differs for individual vehicles, CAN estimates capture the relative fuel consumption differences between drive cycles within 4% for all vehicles and even more accurately for some vehicles. Finally, in situations where only CAN-reported data are available, CAN fueling estimates can provide relative fuel consumption trends but not accurate or precise fuel consumption rates.

  6. Probabilistic estimation of residential air exchange rates for ...

    EPA Pesticide Factsheets

    Residential air exchange rates (AERs) are a key determinant in the infiltration of ambient air pollution indoors. Population-based human exposure models using probabilistic approaches to estimate personal exposure to air pollutants have relied on input distributions from AER measurements. An algorithm for probabilistically estimating AER was developed based on the Lawrence Berkeley National Laboratory Infiltration model utilizing housing characteristics and meteorological data with adjustment for window opening behavior. The algorithm was evaluated by comparing modeled and measured AERs in four US cities (Los Angeles, CA; Detroit, MI; Elizabeth, NJ; and Houston, TX) inputting study-specific data. The impact on the modeled AER of using publicly available housing data representative of the region for each city was also assessed. Finally, modeled AER based on region-specific inputs was compared with those estimated using literature-based distributions. While modeled AERs were similar in magnitude to the measured AER, they were consistently lower for all cities except Houston. AERs estimated using region-specific inputs were lower than those using study-specific inputs due to differences in window opening probabilities. The algorithm produced more spatially and temporally variable AERs compared with literature-based distributions reflecting within- and between-city differences, helping reduce error in estimates of air pollutant exposure. Published in the Journal of
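
The Lawrence Berkeley infiltration model referenced above predicts an infiltration airflow of the form Q = A_L * sqrt(Cs*|dT| + Cw*U^2), which divided by house volume gives an AER. This is a sketch with placeholder stack and wind coefficients; in the real model those coefficients depend on house geometry and wind shielding, and the window-opening adjustment described in the abstract is omitted:

```python
import math

def lbl_infiltration_aer(leakage_area_m2, volume_m3, delta_t_K, wind_m_s,
                         stack_coef=0.000145, wind_coef=0.000104):
    """Sketch of the LBL infiltration model:
    Q = A_L * sqrt(Cs*|dT| + Cw*U**2)  [m^3/s], AER = 3600 * Q / V  [1/h].
    The coefficients are typical-order placeholders, not fitted values."""
    q = leakage_area_m2 * math.sqrt(stack_coef * abs(delta_t_K)
                                    + wind_coef * wind_m_s**2)
    return 3600.0 * q / volume_m3

# Illustrative house: 0.05 m^2 leakage area, 400 m^3, 15 K indoor-outdoor
# temperature difference, 4 m/s wind
aer = lbl_infiltration_aer(0.05, 400.0, 15.0, 4.0)
print(round(aer, 3))
```

A probabilistic version draws the inputs (leakage area, meteorology, window behavior) from distributions and propagates them through this equation.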

  7. Comparing Errors in Medicaid Reporting across Surveys: Evidence to Date

    PubMed Central

    Call, Kathleen T; Davern, Michael E; Klerman, Jacob A; Lynch, Victoria

    2013-01-01

    Objective To synthesize evidence on the accuracy of Medicaid reporting across state and federal surveys. Data Sources All available validation studies. Study Design Compare results from existing research to understand variation in reporting across surveys. Data Collection Methods Synthesize all available studies validating survey reports of Medicaid coverage. Principal Findings Across all surveys, reporting some type of insurance coverage is better than reporting Medicaid specifically. Therefore, estimates of uninsurance are less biased than estimates of specific sources of coverage. The CPS stands out as being particularly inaccurate. Conclusions Measuring health insurance coverage is prone to some level of error, yet survey overstatements of uninsurance are modest in most surveys. Accounting for all forms of bias is complex. Researchers should consider adjusting estimates of Medicaid and uninsurance in surveys prone to high levels of misreporting. PMID:22816493

  8. Rapid assessment of rice seed availability for wildlife in harvested fields

    USGS Publications Warehouse

    Halstead, B.J.; Miller, M.R.; Casazza, Michael L.; Coates, P.S.; Farinha, M.A.; Benjamin, Gustafson K.; Yee, J.L.; Fleskes, J.P.

    2011-01-01

    Rice seed remaining in commercial fields after harvest (waste rice) is a critical food resource for wintering waterfowl in rice-growing regions of North America. Accurate and precise estimates of the seed mass density of waste rice are essential for planning waterfowl wintering habitat extents and management. In the Sacramento Valley of California, USA, the existing method for obtaining estimates of availability of waste rice in harvested fields produces relatively precise estimates, but the labor-, time-, and machinery-intensive process is not practical for routine assessments needed to examine long-term trends in waste rice availability. We tested several experimental methods designed to rapidly derive estimates that would not be burdened with disadvantages of the existing method. We first conducted a simulation study of the efficiency of each method and then conducted field tests. For each approach, methods did not vary in root mean squared error, although some methods did exhibit bias for both simulations and field tests. Methods also varied substantially in the time to conduct each sample and in the number of samples required to detect a standard trend. Overall, modified line-intercept methods performed well for estimating the density of rice seeds. Waste rice in the straw, although not measured directly, can be accounted for by a positive relationship with density of rice on the ground. Rapid assessment of food availability is a useful tool to help waterfowl managers establish and implement wetland restoration and agricultural habitat-enhancement goals for wintering waterfowl. © 2011 The Wildlife Society.

  9. Influence of Gridded Standoff Measurement Resolution on Numerical Bathymetric Inversion

    NASA Astrophysics Data System (ADS)

    Hesser, T.; Farthing, M. W.; Brodie, K.

    2016-02-01

    The bathymetry from the surfzone to the shoreline incurs frequent, active movement due to wave energy interacting with the seafloor. Methodologies to measure bathymetry range from point-source in-situ instruments, vessel-mounted single-beam or multi-beam sonar surveys, and airborne bathymetric lidar to inversion techniques from standoff measurements of wave processes from video or radar imagery. Each type of measurement has unique sources of error and spatial and temporal resolution and availability. Numerical bathymetry estimation frameworks can use these disparate data types in combination with model-based inversion techniques to produce a "best-estimate of bathymetry" at a given time. Understanding how the sources of error and varying spatial or temporal resolution of each data type affect the end result is critical for determining best practices and in turn increasing the accuracy of bathymetry estimation techniques. In this work, we consider an initial step in the development of a complete framework for estimating bathymetry in the nearshore by focusing on gridded standoff measurements and in-situ point observations in model-based inversion at the U.S. Army Corps of Engineers Field Research Facility in Duck, NC. The standoff measurement methods return wave parameters computed using linear wave theory from the direct measurements. These gridded datasets can range in temporal and spatial resolution that do not match the desired model parameters and therefore could lead to a reduction in the accuracy of these methods. Specifically, we investigate the effect of numerical resolution on the accuracy of an Ensemble Kalman Filter bathymetric inversion technique in relation to the spatial and temporal resolution of the gridded standoff measurements. The accuracies of the bathymetric estimates are compared with both high-resolution Real Time Kinematic (RTK) single-beam surveys as well as alternative direct in-situ measurements using sonic altimeters.
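
An ensemble Kalman filter analysis step of the kind used in such bathymetric inversion can be sketched as follows. Here the state is depth at a few grid nodes, the observation operator is a hypothetical fixed linear map to wave-derived observations, and all numbers are synthetic; a real inversion maps depth to wave parameters through wave physics rather than a constant matrix:

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_error_std, rng):
    """Stochastic EnKF analysis step: update an ensemble of bathymetry
    states (n_state x n_ens) using perturbed observations."""
    n_ens = ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)              # obs-space anomalies
    R = obs_error_std**2 * np.eye(len(obs))
    P_HH = HA @ HA.T / (n_ens - 1) + R
    P_XH = A @ HA.T / (n_ens - 1)
    K = P_XH @ np.linalg.inv(P_HH)                        # Kalman gain
    obs_pert = obs[:, None] + obs_error_std * rng.normal(size=(len(obs), n_ens))
    return ensemble + K @ (obs_pert - HX)

rng = np.random.default_rng(3)
truth = np.array([2.0, 3.0, 4.0])                  # depths at three grid nodes
H = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])   # observe nodes 0 and 2
ensemble = truth[:, None] + 1.0 * rng.normal(size=(3, 50))
obs = H @ truth + 0.1 * rng.normal(size=2)
analysis = enkf_update(ensemble, obs, H, 0.1, rng)
print(analysis.mean(axis=1))
```

The analysis ensemble mean moves toward the observed depths and the ensemble spread at observed nodes contracts toward the observation error level.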

  10. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models

    PubMed Central

    2011-01-01

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173
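
At the heart of the UKF is the unscented transform, which represents the state distribution by 2n+1 deterministically chosen sigma points whose weighted moments reproduce the mean and covariance. A minimal sketch of the sigma-point construction; the scaling parameters used in the example call are illustrative choices, not those of the cited study:

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Generate the 2n+1 sigma points and weights of the unscented transform."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)   # matrix square root, columns S[:, i]
    pts = [mean] + [mean + S[:, i] for i in range(n)] \
                 + [mean - S[:, i] for i in range(n)]
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)
    return np.array(pts), Wm, Wc

mu = np.array([1.0, 2.0])
P = np.array([[0.5, 0.1], [0.1, 0.3]])
pts, Wm, Wc = sigma_points(mu, P, alpha=1.0, kappa=1.0)
# The weighted sigma points reproduce the mean exactly
print(Wm @ pts)
```

In the filter, each sigma point is pushed through the kinetic model and the weighted outputs give the predicted mean and covariance without linearization.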

  11. Challenges and benefits on long-term eddy covariance measurements over lakes

    NASA Astrophysics Data System (ADS)

    Vesala, Timo; Golub, Malgorzata; Desai, Ankur; Heiskanen, Jouni; Provenzale, Maria; Rantakari, Miitta; Ojala, Anne; Mammarella, Ivan

    2017-04-01

    Eddy covariance (EC) data on carbon dioxide fluxes are presently available for about 30 lakes, but the time series are mostly short, on the order of one year. Longer EC series, together with chamber measurements and appropriate auxiliary data on the water column, allow more accurate estimates of the aquatic component of the terrestrial carbon balance and analysis of the environmental controls. We discuss the challenges of long-term EC measurements over freshwater ecosystems and demonstrate the benefits of EC data for carbon cycle studies via examples from long-term EC sites in Finland. We also discuss the auxiliary measurements needed and the general design of the whole measurement set-up required to obtain representative information. We address challenges related to CO2 flux partitioning for freshwater ecosystems and introduce a new method to estimate net primary productivity (NPP) from EC data, which is superior to more traditional methods with poor temporal resolution (bottle incubations, the 14C technique). Finally, we collected and analyzed CO2 fluxes from 19 globally distributed lakes and reservoirs representing six climate zones. The mean flux was approximately 0.3 micromoles per square meter per second. Applying a simple upscaling to the direct observations, we arrived at an estimate that is about half of the current emission estimate for lentic systems.
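
    As a rough, back-of-envelope check on the scale of the reported mean flux, ~0.3 micromoles of CO2 per square meter per second can be converted to an annual per-area carbon emission (a global total would additionally require a lentic surface-area estimate, which the abstract does not give):

```python
# Convert a CO2 flux in micromol m^-2 s^-1 to grams of carbon m^-2 yr^-1.
MOLAR_MASS_C = 12.011                 # g of carbon per mol of CO2
SECONDS_PER_YEAR = 365.25 * 24 * 3600

flux_umol = 0.3                        # micromol CO2 m^-2 s^-1
flux_g_c_per_year = flux_umol * 1e-6 * MOLAR_MASS_C * SECONDS_PER_YEAR
print(round(flux_g_c_per_year, 1))     # ~114 g C m^-2 yr^-1
```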

  12. Variability and scaling of hydraulic properties for 200 Area soils, Hanford Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khaleel, R.; Freeman, E.J.

    Over the years, data have been obtained on soil hydraulic properties at the Hanford Site. Most of these data have been obtained as part of recent site characterization activities for the Environmental Restoration Program. The existing data on vadose zone soil properties are, however, fragmented and documented in reports that have not been formally reviewed and released. This study helps to identify, compile, and interpret all available data for the principal soil types in the 200 Areas plateau. Information on particle-size distribution, moisture retention, and saturated hydraulic conductivity (Ks) is available for 183 samples from 12 sites in the 200 Areas. Data on moisture retention and Ks are corrected for gravel content. After the data are corrected and cataloged, hydraulic parameters are determined by fitting the van Genuchten soil-moisture retention model to the data with a nonlinear parameter estimation code, RETC. The unsaturated hydraulic conductivity relationship can subsequently be predicted using the van Genuchten parameters, Mualem's model, and laboratory-measured saturated hydraulic conductivity estimates. Alternatively, provided unsaturated conductivity measurements are available, the moisture retention curve-fitting parameters, Mualem's model, and a single unsaturated conductivity measurement can be used to predict unsaturated conductivities over the desired range of the field moisture regime.
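
    A minimal sketch of the van Genuchten retention fit described above, using synthetic data and SciPy's general nonlinear least squares rather than the RETC code used in the study; the parameter values, bounds, and noise level here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """Water retention theta(h) with the Mualem restriction m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

# Synthetic retention data: suction h in cm, volumetric moisture theta
rng = np.random.default_rng(1)
h = np.logspace(0, 4, 25)
true = (0.05, 0.40, 0.02, 1.6)  # theta_r, theta_s, alpha, n
theta = van_genuchten(h, *true) + rng.normal(0, 0.002, h.size)

popt, _ = curve_fit(van_genuchten, h, theta,
                    p0=(0.1, 0.35, 0.01, 1.5),
                    bounds=([0, 0.2, 1e-4, 1.01], [0.2, 0.6, 1.0, 5.0]))
print(np.round(popt, 3))  # recovers roughly (0.05, 0.40, 0.02, 1.6)
```

    With the fitted parameters in hand, Mualem's model gives the unsaturated conductivity curve, as the abstract outlines.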

  13. Precision estimate for Odin-OSIRIS limb scatter retrievals

    NASA Astrophysics Data System (ADS)

    Bourassa, A. E.; McLinden, C. A.; Bathgate, A. F.; Elash, B. J.; Degenstein, D. A.

    2012-02-01

    The limb scatter measurements made by the Optical Spectrograph and Infrared Imaging System (OSIRIS) instrument on the Odin spacecraft are used to routinely produce vertically resolved trace gas and aerosol extinction profiles. Version 5 of the ozone and stratospheric aerosol extinction retrievals, which is available for download, is performed using a multiplicative algebraic reconstruction technique (MART). The MART inversion is a type of relaxation method, and as such the covariance of the retrieved state must be estimated numerically, which, if done directly, is a computationally heavy task. Here we provide a methodology for deriving a numerical estimate of the covariance matrix of the state retrieved by the MART inversion that is efficient enough to run for each OSIRIS measurement. The resulting precision is compared with the variability in a large set of pairs of OSIRIS measurements that are close in time and space in the tropical stratosphere, where natural atmospheric variability is weak. These results are found to be highly consistent and thus provide confidence in the numerical estimate of the precision of the retrieved profiles.
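
    A toy version of a MART inversion illustrates the multiplicative, row-by-row relaxation the abstract refers to; the relaxation scheme and the small test matrix below are illustrative assumptions, not the OSIRIS retrieval configuration.

```python
import numpy as np

def mart(A, y, n_iter=500, lam=1.0):
    """Multiplicative Algebraic Reconstruction Technique: cycle over the
    measurement rows, rescaling the strictly positive state vector so each
    modelled measurement A[i] @ x approaches the observed y[i]."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            pred = A[i] @ x
            if pred > 0:
                # exponent weighted by the (normalized) row entries
                x *= (y[i] / pred) ** (lam * A[i] / A[i].max())
    return x

# Toy problem: recover a positive profile from overlapping weighted sums
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true
x_est = mart(A, y)
print(np.round(x_est, 2))
```

    Because the update is a relaxation rather than a closed-form inverse, the retrieval covariance has no direct analytic expression, which is why the paper estimates it numerically.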

  14. Estimated maximal and current brain volume predict cognitive ability in old age

    PubMed Central

    Royle, Natalie A.; Booth, Tom; Valdés Hernández, Maria C.; Penke, Lars; Murray, Catherine; Gow, Alan J.; Maniega, Susana Muñoz; Starr, John; Bastin, Mark E.; Deary, Ian J.; Wardlaw, Joanna M.

    2013-01-01

    Brain tissue deterioration is a significant contributor to lower cognitive ability in later life; however, few studies have appropriate data to establish how much influence prior brain volume and prior cognitive performance have on this association. We investigated the associations between structural brain imaging biomarkers, including an estimate of maximal brain volume, and detailed measures of cognitive ability at age 73 years in a large (N = 620), generally healthy, community-dwelling population. Cognitive ability data were available from age 11 years. We found positive associations (r) between general cognitive ability and estimated brain volume in youth (males, 0.28; females, 0.12), and measured brain volume in later life (males, 0.27; females, 0.26). Our findings show that cognitive ability in youth is a strong predictor of estimated prior and measured current brain volume in old age, and that these effects were the same for both white and gray matter. As one of the largest studies of associations between brain volume and cognitive ability with normal aging, this work contributes to the wider understanding of how some early-life factors influence cognitive aging. PMID:23850342

  15. A novel approach for analyzing data on recurrent events with duration to estimate the combined cumulative rate of both variables over time.

    PubMed

    Bhattacharya, Sudipta

    2018-06-01

    Recurrent adverse events in clinical trials, once they occur, often continue for some duration of time; the number of events along with their durations is clinically considered a measure of the severity of the disease under study. While methods are available for analyzing recurrent events, durations, or both side by side, no effort has been made so far to combine them into a single measure. Such a single-valued combined measure may help clinicians assess the overall effect of recurrent incidents, comprising both events and durations. A non-parametric approach is adopted here to develop an estimator of the combined rate of both the recurrence of events and event continuation, that is, the duration per event. The proposed estimator produces a single numerical value, whose interpretation and meaningfulness are discussed through the analysis of a real-life clinical dataset. The algebraic expression of the variance is derived, asymptotic normality of the estimator is noted, and a demonstration is provided of how the estimator can be used in statistical hypothesis testing. Possible further development of the estimator is also noted, to adjust for the dependence of event occurrences on the history of the recurrent-event process through covariates and for the case of dependent censoring.

  16. Evapotranspiration from areas of native vegetation in west-central Florida

    USGS Publications Warehouse

    Bidlake, W.R.; Woodham, W.M.; Lopez, M.A.

    1993-01-01

    A study was made to examine the suitability of three different micrometeorological methods for estimating evapotranspiration from selected areas of native vegetation in west-central Florida and to estimate annual evapotranspiration from those areas. Evapotranspiration was estimated using the energy-balance Bowen ratio and eddy correlation methods. Potential evapotranspiration was computed using the Penman equation. The energy-balance Bowen ratio method was used to estimate diurnal evapotranspiration at unforested sites and yielded reasonable results; however, measurements indicated that the magnitudes of air temperature and vapor-pressure gradients above the forested sites were too small to obtain reliable evapotranspiration measurements with the energy-balance Bowen ratio system. Analysis of the surface energy balance indicated that sensible and latent heat fluxes computed using standard eddy correlation computation methods did not adequately account for available energy. Eddy correlation data were therefore combined with the surface energy-balance equation to yield two additional estimates of evapotranspiration. Daily potential evapotranspiration and evapotranspiration estimated using the energy-balance Bowen ratio method were not correlated at an unforested, dry prairie site, but they were correlated at a marsh site. Estimates of annual evapotranspiration for sites within the four vegetation types, based on energy-balance Bowen ratio and eddy correlation measurements, were 1,010 millimeters for dry prairie sites, 990 millimeters for marsh sites, 1,060 millimeters for pine flatwood sites, and 970 millimeters for a cypress swamp site.

  17. A basin-scale approach to estimating stream temperatures of tributaries to the lower Klamath River, California

    USGS Publications Warehouse

    Flint, L.E.; Flint, A.L.

    2008-01-01

    Stream temperature is an important component of salmonid habitat and is often above levels suitable for fish survival in the Lower Klamath River in northern California. The objective of this study was to provide boundary conditions for models that are assessing stream temperature on the main stem for the purpose of developing strategies to manage stream conditions using Total Maximum Daily Loads. For model input, hourly stream temperatures for 36 tributaries were estimated for 1 Jan. 2001 through 31 Oct. 2004. A basin-scale approach incorporating spatially distributed energy balance data was used to estimate the stream temperatures with measured air temperature and relative humidity data and simulated solar radiation, including topographic shading and corrections for cloudiness. Regression models were developed on the basis of available stream temperature data to predict temperatures for unmeasured periods of time and for unmeasured streams. The most significant factor in matching measured minimum and maximum stream temperatures was the seasonality of the estimate. Adding minimum and maximum air temperature to the regression model improved the estimate, and air temperature data over the region are available and easily distributed spatially. The addition of simulated solar radiation and vapor saturation deficit to the regression model significantly improved predictions of maximum stream temperature but was not required to predict minimum stream temperature. The average SE in estimated maximum daily stream temperature for the individual basins was 0.9 ± 0.6°C at the 95% confidence interval. Copyright © 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.

  18. Junk Food in Schools and Childhood Obesity.

    PubMed

    Datar, Ashlesha; Nicosia, Nancy

    2012-01-01

    Despite limited empirical evidence, there is growing concern that junk food availability in schools has contributed to the childhood obesity epidemic. In this paper, we estimate the effects of junk food availability on BMI, obesity, and related outcomes among a national sample of fifth-graders. Unlike previous studies, we address the endogeneity of the school food environment by controlling for children's BMI at school entry and estimating instrumental variables regressions that leverage variation in the school's grade span. Our main finding is that junk food availability does not significantly increase BMI or obesity among this fifth grade cohort despite the increased likelihood of in-school junk food purchases. The results are robust to alternate measures of junk food availability including school administrator reports of sales during school hours, school administrator reports of competitive food outlets, and children's reports of junk food availability. Moreover, the absence of any effects on overall food consumption and physical activity further support the null findings for BMI and obesity.

  19. Junk Food in Schools and Childhood Obesity

    PubMed Central

    Nicosia, Nancy

    2011-01-01

    Despite limited empirical evidence, there is growing concern that junk food availability in schools has contributed to the childhood obesity epidemic. In this paper, we estimate the effects of junk food availability on BMI, obesity, and related outcomes among a national sample of fifth-graders. Unlike previous studies, we address the endogeneity of the school food environment by controlling for children’s BMI at school entry and estimating instrumental variables regressions that leverage variation in the school’s grade span. Our main finding is that junk food availability does not significantly increase BMI or obesity among this fifth grade cohort despite the increased likelihood of in-school junk food purchases. The results are robust to alternate measures of junk food availability including school administrator reports of sales during school hours, school administrator reports of competitive food outlets, and children’s reports of junk food availability. Moreover, the absence of any effects on overall food consumption and physical activity further support the null findings for BMI and obesity. PMID:23729952

  20. Modeling the Response of Anopheles gambiae (Diptera: Culicidae) Populations in the Kenya Highlands to a Rise in Mean Annual Temperature.

    PubMed

    Wallace, Dorothy; Prosper, Olivia; Savos, Jacob; Dunham, Ann M; Chipman, Jonathan W; Shi, Xun; Ndenga, Bryson; Githeko, Andrew

    2017-03-01

    A dynamical model of Anopheles gambiae larval and adult populations is constructed that matches temperature-dependent maturation times and mortality measured experimentally as well as larval instar and adult mosquito emergence data from field studies in the Kenya Highlands. Spectral classification of high-resolution satellite imagery is used to estimate household density. Indoor resting densities collected over a period of one year combined with predictions of the dynamical model give estimates of both aquatic habitat and total adult mosquito densities. Temperature and precipitation patterns are derived from monthly records. Precipitation patterns are compared with average and extreme habitat estimates to estimate available aquatic habitat in an annual cycle. These estimates are coupled with the original model to produce estimates of adult and larval populations dependent on changing aquatic carrying capacity for larvae and changing maturation and mortality dependent on temperature. This paper offers a general method for estimating the total area of aquatic habitat in a given region, based on larval counts, emergence rates, indoor resting density data, and number of households. Altering the average daily temperature and the average daily rainfall simulates the effect of climate change on annual cycles of prevalence of An. gambiae adults. We show that small increases in average annual temperature have a large impact on adult mosquito density, whether measured at model equilibrium values for a single square meter of habitat or tracked over the course of a year of varying habitat availability and temperature. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Can we achieve Millennium Development Goal 4? New analysis of country trends and forecasts of under-5 mortality to 2015.

    PubMed

    Murray, Christopher J L; Laakso, Thomas; Shibuya, Kenji; Hill, Kenneth; Lopez, Alan D

    2007-09-22

    Global efforts have increased the accuracy and timeliness of estimates of under-5 mortality; however, these estimates fail to use all data available, do not use transparent and reproducible methods, do not distinguish predictions from measurements, and provide no indication of uncertainty around point estimates. We aimed to develop new reproducible methods and reanalyse existing data to elucidate detailed time trends. We merged available databases, added to them when possible, and then applied Loess regression to estimate past trends and forecast to 2015 for 172 countries. We developed uncertainty estimates based on different model specifications and estimated levels and trends in neonatal, post-neonatal, and childhood mortality. Global under-5 mortality has fallen from 110 (109-110) per 1000 in 1980 to 72 (70-74) per 1000 in 2005. Child deaths worldwide have decreased from 13.5 (13.4-13.6) million in 1980 to an estimated 9.7 (9.5-10.0) million in 2005. Global under-5 mortality is expected to decline by 27% from 1990 to 2015, substantially less than the target of Millennium Development Goal 4 (MDG4) of a 67% decrease. Several regions in Latin America, north Africa, the Middle East, Europe, and southeast Asia have had consistent annual rates of decline in excess of 4% over 35 years. Global progress on MDG4 is dominated by slow reductions in sub-Saharan Africa, which also has the slowest rates of decline in fertility. Globally, we are not doing a better job of reducing child mortality now than we were three decades ago. Further improvements in the quality and timeliness of child-mortality measurements should be possible by more fully using existing datasets and applying standard analytical strategies.
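
    Loess regression of the kind used for these trend estimates can be sketched with a minimal tricube-weighted local linear smoother; the series below is synthetic, and the paper's actual model specification, data, and uncertainty analysis are far more elaborate.

```python
import numpy as np

def loess(x, y, frac=0.5):
    """Minimal local linear (Loess-style) smoother with tricube weights:
    for each point, fit a weighted least-squares line through its nearest
    neighbours and evaluate it at that point."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    k = max(2, int(frac * len(x)))
    out = np.empty_like(y)
    for i, x0 in enumerate(x):
        d = np.abs(x - x0)
        idx = np.argsort(d)[:k]
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3  # tricube weights
        b, a = np.polyfit(x[idx], y[idx], 1, w=np.sqrt(w))
        out[i] = a + b * x0
    return out

# Noisy, declining "under-5 mortality"-style series (synthetic)
years = np.arange(1980, 2006)
rate = 110 * np.exp(-0.017 * (years - 1980))
noisy = rate + np.random.default_rng(2).normal(0, 2, years.size)
smooth = loess(years, noisy)
print(round(smooth[-1], 1))  # near the underlying ~72 per 1000 in 2005
```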

  2. Ground-based digital imagery for tree stem analysis

    Treesearch

    Neil Clark; Daniel L. Schmoldt; Randolph H. Wynne; Matthew F. Winn; Philip A. Araman

    2000-01-01

    In the USA, a subset of permanent forest sample plots within each geographic region are intensively measured to obtain estimates of tree volume and products. The detailed field measurements required for this type of sampling are both time consuming and error prone. We are attempting to reduce both of these factors with the aid of a commercially-available solid-state...

  3. Reassessing biases and other uncertainties in sea surface temperature observations measured in situ since 1850: 2. Biases and homogenization

    NASA Astrophysics Data System (ADS)

    Kennedy, J. J.; Rayner, N. A.; Smith, R. O.; Parker, D. E.; Saunby, M.

    2011-07-01

    Changes in instrumentation and data availability have caused time-varying biases in estimates of global and regional average sea surface temperature. The sizes of the biases arising from these changes are estimated and their uncertainties evaluated. The estimated biases and their associated uncertainties are largest during the period immediately following the Second World War, reflecting the rapid and incompletely documented changes in shipping and data availability at the time. Adjustments have been applied to reduce these effects in gridded data sets of sea surface temperature and the results are presented as a set of interchangeable realizations. Uncertainties of estimated trends in global and regional average sea surface temperature due to bias adjustments since the Second World War are found to be larger than uncertainties arising from the choice of analysis technique, indicating that this is an important source of uncertainty in analyses of historical sea surface temperatures. Despite this, trends over the twentieth century remain qualitatively consistent.

  4. Using the NAME Lagrangian Particle Dispersion model, and aircraft measurements to assess the accuracy of trace gas emission inventories from the U.K.

    NASA Astrophysics Data System (ADS)

    O'Sullivan, D. A.; Harrison, M.; Ploson, D.; Oram, D.; Reeves, C.

    2007-12-01

    A top-down approach using a combination of aircraft data and atmospheric dispersion modelling has been used to estimate emissions of 24 halogenated trace gases from the United Kingdom. This has been done using data collected during AMPEP/FLUXEX, a U.K.-based measurement campaign which took place between April and September 2005. The primary objective of this work was to make direct airborne measurements of concentration enhancements within the boundary layer arising from anthropogenic pollution events, and then to use mass balance methods to determine an emission flux. This was done by analysing Whole Air Samples (WAS) collected in the boundary layer upwind and downwind of the UK at frequent intervals around the coast, using gas chromatography mass spectrometry (GCMS). Emissions were then calculated using a simple box-model approach and also using NAME (Numerical Atmospheric-dispersion Modelling Environment), a Lagrangian particle model driven by 3-hourly 3D meteorology fields from the Met Office Unified Model. Such an approach also makes it possible to identify the most likely main source regions in the UK for the compounds measured. Among the trace gases studied are many which, through their effects on stratospheric ozone and their large radiative forcing, have a direct impact on global climate, such as CFCs 11, 12, 113, and 114; HCFCs 21, 22, 141b, and 142b; HFCs 134a and 152a; methyl chloroform, methyl bromide, and carbon tetrachloride. Emissions of some short-lived gases with direct effects on human health, such as tetrachloroethene and trichloroethene, have also been derived. The UK emission estimates calculated from this experimental and modelling work are compared with bottom-up and other top-down emission inventories for the UK and Europe. It was found that the estimates from this study were often higher than those in bottom-up emission inventories derived from industry.
    In addition, for a number of trace gases, for example HCFC-21 and the HFCs, no accurate emission estimates are available because privacy laws in the UK restrict the availability of the production and sales data required to construct bottom-up emission inventories. Therefore, for some of the compounds included in this study, this work provides the first available estimate of UK emissions.

  5. Room-temperature susceptometry predicts biopsy-determined hepatic iron in patients with elevated serum ferritin.

    PubMed

    Maliken, Bryan D; Avrin, William F; Nelson, James E; Mooney, Jody; Kumar, Sankaran; Kowdley, Kris V

    2012-01-01

    There is an ongoing clinical need for novel methods to measure hepatic iron content (HIC) noninvasively. Both magnetic resonance imaging (MRI) and superconducting quantum interference device (SQUID) methods have previously shown promise for estimation of HIC, but these methods can be expensive and are not widely available. Room-temperature susceptometry (RTS) represents an inexpensive alternative and was previously found to be strongly correlated with HIC estimated by SQUID measurements among patients with transfusional iron overload related to thalassemia. The goal of the current study was to examine the relationship between RTS and biochemical HIC measured in liver biopsy specimens in a more varied patient cohort. Susceptometry was performed in a diverse group of patients with hyperferritinemia due to hereditary hemochromatosis (HHC) (n = 2), secondary iron overload (n = 3), nonalcoholic fatty liver disease (NAFLD) (n = 2), and chronic viral hepatitis (n = 3) within one month of liver biopsy in the absence of iron depletion therapy. The correlation coefficient between HIC estimated by susceptometry and by biochemical iron measurement in liver tissue was 0.71 (p = 0.022). Variance between liver iron measurement and susceptometry measurement was primarily related to reliance on the patient's body-mass index (BMI) to estimate the magnetic susceptibility of tissue overlying the liver. We believe RTS holds promise for noninvasive measurement of HIC. Improved measurement techniques, including more accurate overlayer correction, may further improve the accuracy of liver susceptometry in patients with liver disease.

  6. The economic burden of child sexual abuse in the United States.

    PubMed

    Letourneau, Elizabeth J; Brown, Derek S; Fang, Xiangming; Hassan, Ahmed; Mercy, James A

    2018-05-01

    The present study provides an estimate of the U.S. economic impact of child sexual abuse (CSA). Costs of CSA were measured from the societal perspective and include health care costs, productivity losses, child welfare costs, violence/crime costs, special education costs, and suicide death costs. We separately estimated quality-adjusted life year (QALY) losses. For each category, we used the best available secondary data to develop per-case cost estimates. All costs were estimated in U.S. dollars and adjusted to the reference year 2015. Based on an estimated 20 new fatal and 40,387 new substantiated nonfatal cases of CSA in 2015, the lifetime economic burden of CSA is approximately $9.3 billion; the lifetime cost per victim of fatal CSA averages $1,128,334 for females and $1,482,933 for males; and the average lifetime cost for victims of nonfatal CSA is $282,734 per female victim. For male victims of nonfatal CSA, there was insufficient information on productivity losses, contributing to a lower average estimated lifetime cost of $74,691 per male victim. If we included QALYs, these costs would increase by approximately $40,000 per victim. With the exception of male productivity losses, all estimates were based on robust, replicable incidence-based costing methods. The availability of accurate, up-to-date estimates should contribute to policy analysis, facilitate comparisons with other public health problems, and support future economic evaluations of CSA-specific policy and practice. In particular, we hope the availability of credible and contemporary estimates will support increased attention to primary prevention of CSA. Copyright © 2018. Published by Elsevier Ltd.

  7. TV Audience Measurement with Big Data.

    PubMed

    Hill, Shawndra

    2014-06-01

    TV audience measurement involves estimating the number of viewers tuned into a TV show at any given time as well as their demographics. First introduced shortly after commercial television broadcasting began in the late 1940s, audience measurement allowed the business of television to flourish by offering networks a way to quantify the monetary value of TV audiences for advertisers, who pay for the estimated number of eyeballs watching during commercials. The first measurement techniques suffered from multiple limitations because reliable, large-scale data were costly to acquire. Yet despite these limitations, measurement standards remained largely unchanged for decades until devices such as cable boxes, video-on-demand boxes, and cell phones, as well as web apps, Internet browser clicks, web queries, and social media activity, resulted in an explosion of digitally available data. TV viewers now leave digital traces that can be used to track almost every aspect of their daily lives, allowing the potential for large-scale aggregation across data sources for individual users and groups and enabling the tracking of more people on more dimensions for more shows. Data are now more comprehensive, available in real time, and cheaper to acquire, enabling accurate and fine-grained TV audience measurement. In this article, I discuss the evolution of audience measurement and what the recent data explosion means for the TV industry and academic research.

  8. Measuring rainwater content by radar using propagation differential phase shift

    NASA Technical Reports Server (NTRS)

    Jameson, A. R.

    1994-01-01

    While radars measure several quantities closely coupled to the rainfall rate, for frequencies less than 15 GHz, estimates of the rainwater content W are traditionally computed from the radar reflectivity factor Z or the rate of attenuation A, quantities only weakly related to W. Consequently, instantaneous point estimates of W using Z and A are often erroneous. A more natural, alternative parameter for estimating W at these frequencies is the specific polarization propagation differential phase shift φ_DP, which is a measure of the change in the difference between phases of vertically (V) and horizontally (H) polarized waves with increasing distance from a radar. It is now well known that W is nearly linearly related to φ_DP divided by (1 − r̄), where r̄ is the mass-weighted mean axis ratio of the raindrops. Unfortunately, such relations are not widely used, in part because measurements of φ_DP are scarce but also because one must determine r̄. In this work it is shown that this parameter can be estimated using the differential reflectivity (Z_H/Z_V) at 3 GHz. An alternative technique is suggested for higher frequencies, at which the differential reflectivity becomes degraded by attenuation. While theory indicates that it should be possible to estimate W quite accurately using φ_DP, measurement errors increase the uncertainty to ±18%-35% depending on r̄. While far from ideal, these estimates are likely to be considerably more accurate than those deduced using currently available methods.
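
    The near-linear relation between W and the differential phase rate divided by (1 − r̄) can be written as a one-line estimator. The proportionality coefficient below is a placeholder, not the paper's fitted value, so the outputs are in arbitrary units; only the functional form comes from the abstract.

```python
def rainwater_content(phi_dp_rate, r_bar, coeff=1.0):
    """Estimate rainwater content W from the differential phase shift rate
    (deg/km) and the mass-weighted mean drop axis ratio r_bar, using the
    near-linear relation W ∝ phi_DP / (1 - r_bar). The coefficient is a
    hypothetical placeholder, so results are in arbitrary units."""
    return coeff * phi_dp_rate / (1.0 - r_bar)

# Nearly spherical drops (r_bar close to 1) imply more water per degree
# of differential phase than strongly oblate drops (smaller r_bar).
print(rainwater_content(1.0, 0.9))   # 10x coeff when 1 - r_bar = 0.1
print(rainwater_content(1.0, 0.8))   # 5x coeff when 1 - r_bar = 0.2
```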

  9. Generation of common coefficients to estimate global solar radiation over different locations of India

    NASA Astrophysics Data System (ADS)

    Samanta, Suman; Patra, Pulak Kumar; Banerjee, Saon; Narsimhaiah, Lakshmi; Sarath Chandran, M. A.; Vijaya Kumar, P.; Bandyopadhyay, Sanjib

    2018-06-01

    In developing countries like India, global solar radiation (GSR) is measured at very few locations owing to the limited availability of radiation-measuring instruments. To overcome the inadequacy of GSR measurements, scientists have developed many empirical models to estimate GSR location by location. In the present study, three simple forms of the Angstrom equation [Angstrom-Prescott (A-P), Ogelman, and Bahel] were used to estimate GSR at six geographically and climatologically different locations across India, with the objective of finding a set of common constants usable for the whole country. Results showed that GSR values varied from 9.86 to 24.85 MJ m-2 day-1 across stations. It was also observed that the A-P model showed smaller errors than the Ogelman and Bahel models. All the models estimated GSR well, as the 1:1 line between measured and estimated values showed Nash-Sutcliffe efficiency (NSE) values ≥ 0.81 for all locations. Measured GSR data pooled over the six selected locations were analyzed to obtain a new set of constants for the A-P equation applicable throughout the country. The set of constants (a = 0.29 and b = 0.40) was named "One India One Constant (OIOC)," and the model was named "MOIOC." Furthermore, the developed constants were validated statistically for another six locations in India and produced close estimates. High R² values (≥ 76%) along with low mean bias error (MBE), ranging from -0.64 to 0.05 MJ m-2 day-1, revealed that the new constants are able to predict GSR with a low percentage error.
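
    The Angstrom-Prescott form with the pooled OIOC coefficients reported above (a = 0.29, b = 0.40) can be written directly. The extraterrestrial radiation H0 and the sunshine hours in the example are illustrative values, not data from the study.

```python
def angstrom_prescott(h0, n, N, a=0.29, b=0.40):
    """Estimate daily global solar radiation H (same units as h0, e.g.
    MJ m^-2 day^-1) from extraterrestrial radiation h0 and the relative
    sunshine duration n/N: H = h0 * (a + b * n / N). The default a and b
    are the pooled OIOC coefficients from the abstract."""
    return h0 * (a + b * n / N)

# Example (illustrative inputs): h0 = 35 MJ m^-2 day^-1, 8 of 10
# possible sunshine hours -> H = 35 * (0.29 + 0.40 * 0.8)
print(round(angstrom_prescott(35.0, 8.0, 10.0), 2))  # 21.35
```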

  10. Source of parental reports of child height and weight during phone interviews and influence on obesity prevalence estimates among children aged 3-17 years.

    PubMed

    Skinner, Asheley Cockrell; Miles, Donna; Perrin, Eliana M; Coyne-Beasley, Tamera; Ford, Carol

    2013-01-01

    We compared parental reports of children's height and weight when the values were estimated vs. parent-measured to determine how these reports influence the estimated prevalence of childhood obesity. In the 2007 and 2008 North Carolina Child Health Assessment and Monitoring Program surveys, parents reported height and weight for children aged 3-17 years. When parents reported the values were not measured (by doctor, school, or home), they were asked to measure their child and were later called back. We categorized body mass index status using standard CDC definitions, and we used Chi-square tests and the Stuart-Maxwell test of marginal homogeneity to examine reporting differences. About 80% (n=509) of the 638 parents who reported an unmeasured height and/or weight participated in a callback and provided updated measures. Children originally classified as obese were subsequently classified as obese (67%), overweight (13%), and healthy weight (19%). An estimated 28% of younger children (<10 years of age) vs. 6% of older children (aged ≥10 years) were reclassified on callback. Having parents who guessed the height and weight of their children and then reported updated values did not significantly change the overall population estimates of obesity. Our findings demonstrate that using parent-reported height and weight values may be sufficient to provide reasonable estimates of obesity prevalence. Systematically asking the source of height and weight information may help improve how it is applied to research of the prevalence of childhood obesity when gold-standard measurements are not available.

  11. Toward On-line Parameter Estimation of Concentric Tube Robots Using a Mechanics-based Kinematic Model

    PubMed Central

    Jang, Cheongjae; Ha, Junhyoung; Dupont, Pierre E.; Park, Frank Chongwoo

    2017-01-01

    Although existing mechanics-based models of concentric tube robots have been experimentally demonstrated to approximate the actual kinematics, determining accurate estimates of model parameters remains difficult due to the complex relationship between the parameters and available measurements. Further, because the mechanics-based models neglect some phenomena like friction, nonlinear elasticity, and cross section deformation, it is also not clear if model error is due to model simplification or to parameter estimation errors. The parameters of the superelastic materials used in these robots can be slowly time-varying, necessitating periodic re-estimation. This paper proposes a method for estimating the mechanics-based model parameters using an extended Kalman filter as a step toward on-line parameter estimation. Our methodology is validated through both simulation and experiments. PMID:28717554
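    The filtering idea can be illustrated with a deliberately minimal example: treat one slowly drifting model parameter as a random-walk state and track it with a scalar Kalman update. The measurement model y = theta*u + noise below is a hypothetical stand-in for illustration only, not the concentric tube kinematic model or the extended Kalman filter formulation of the paper:

    ```python
    def kf_parameter_track(us, ys, q=1e-4, r=0.01, theta0=0.0, p0=1.0):
        """Track a random-walk parameter theta in y = theta*u + noise.
        q: drift variance per step, r: measurement noise variance."""
        theta, p = theta0, p0
        estimates = []
        for u, y in zip(us, ys):
            p += q                        # predict: parameter may have drifted
            k = p * u / (u * u * p + r)   # gain (u is the measurement Jacobian)
            theta += k * (y - u * theta)  # correct with the innovation
            p *= (1.0 - k * u)
            estimates.append(theta)
        return estimates

    # Noise-free check: constant input u=1, output y=2 -> theta converges to 2
    est = kf_parameter_track([1.0] * 50, [2.0] * 50)
    ```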

  12. Characteristics of DC electric fields in transient plasma sheet events

    NASA Astrophysics Data System (ADS)

    Laakso, H. E.; Escoubet, C. P.; Masson, A.

    2015-12-01

    We take advantage of five different DC electric field measurements in the plasma sheet available from the EFW double-probe experiment, the EDI electron drift instrument, the CODIF and HIA ion spectrometers, and the PEACE electron spectrometer on the four Cluster spacecraft. The calibrated observations of the three spectrometers are used to determine the proton and electron velocity moments. The velocity moments can be used to estimate the proton and electron drift velocities and hence the DC electric field, assuming that the electron and proton velocities perpendicular to the magnetic field are dominated by the ExB drift motion. Naturally, when ions and electrons do not undergo a pure drift motion, which can happen in the plasma sheet, the DC electric field estimated from their motion is not correct. However, surprisingly often the DC electric fields estimated from electron and ion motions are identical, suggesting that this field is the real DC electric field around the measurement point. As the measurement techniques are so different, it is quite plausible that when two different measurements yield the same DC electric field, it is the correct field. All five measurements of the DC electric field are usually not simultaneously available, especially on Cluster 2, where CODIF and HIA are not operational, or on Cluster 4, where EDI is off. In this presentation we investigate the DC electric field in various transient plasma sheet events, such as dipolarization events and bursty bulk flows (BBFs), and examine how the five measurements agree or disagree. Several important issues are considered, e.g., (1) what kinds of DC electric fields exist in such events and what are their spatial scales, (2) do electrons and ions perform ExB drift motions in these events, and (3) how well have the instruments been calibrated.
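    Inverting the ExB relation gives the field estimate described above: if the measured perpendicular velocity is pure ExB drift, v = (E x B)/|B|^2, then E = -v x B. A minimal sketch under that assumption, with illustrative plasma sheet values:

    ```python
    import numpy as np

    def efield_from_drift(v_perp, b):
        """DC electric field (V/m) implied by a perpendicular drift velocity
        (m/s) in magnetic field b (T), assuming pure ExB motion: E = -v x B."""
        return -np.cross(v_perp, b)

    # Illustrative values: 100 km/s drift in a 10 nT field -> 1 mV/m
    e = efield_from_drift([1.0e5, 0.0, 0.0], [0.0, 0.0, 1.0e-8])
    ```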

  13. Comparison of parental estimate of developmental age with measured IQ in children with neurodevelopmental disorders.

    PubMed

    Chandler, S; Howlin, P; Simonoff, E; Kennedy, J; Baird, G

    2016-07-01

    Formal IQ tests are an important part of the diagnostic and needs-based assessment process for children with neurodevelopmental disorders. However, resources for such assessments are not always available. It has been suggested that parental estimates of their child's developmental age could serve as a proxy IQ when formal measures are unavailable. Parental estimates of their child's developmental age were converted to a developmental quotient (DQ) in 197 children with Autism Spectrum Disorder (ASD) aged 4-9 years, and 108 children with ADHD and intellectual disability (ADHD + ID) aged 7-15 years. Formal IQ assessments were then conducted. Parents completed the Social Communication Questionnaire (SCQ; a measure of autism symptomatology) and a demographic questionnaire. In the ASD sample, 58% of parent estimates were within 15 points (i.e. one standard deviation) of the child's measured IQ score. Lower measured IQ and lower SCQ total score predicted higher parental accuracy. In the ADHD + ID sample, 74% of parental estimates were within 15 points of measured IQ. In this group, higher child IQ predicted greater parental accuracy. Parents in the ADHD + ID group were more likely to overestimate children's ability level than parents in the ASD group. In this study, the majority of parents of children with ADHD and ID were able to estimate their child's intellectual ability level with some accuracy. Parents of children with ASD were less accurate, but this may be because these parents were focussing more on children's level of adaptive functioning, which is known to be typically lower than cognitive ability in ASD. © 2016 John Wiley & Sons Ltd.

  14. Sampling based State of Health estimation methodology for Li-ion batteries

    NASA Astrophysics Data System (ADS)

    Camci, Fatih; Ozkurt, Celil; Toker, Onur; Atamuradov, Vepa

    2015-03-01

    Storage and management of energy is an increasingly important problem, especially for electric and hybrid vehicle applications. The Li-ion battery is one of the most important technological alternatives for high-capacity energy storage and related industrial applications. The State of Health (SoH) of Li-ion batteries plays a critical role in their deployment from economic, safety, and availability perspectives. Most, if not all, studies related to SoH estimation focus on the measurement of a new parameter or physical phenomenon related to SoH, or on the development of new statistical/computational methods using several parameters. This paper presents a new approach to SoH estimation for Li-ion battery systems with multiple battery cells: the main idea is a new circuit topology that enables separation of the battery cells into two groups, main and test batteries, whenever a SoH-related measurement is to be conducted. All battery cells are connected to the main battery during the normal mode of operation. When a measurement is needed for SoH estimation, some of the cells are separated from the main battery, and SoH-related measurements are performed on these units. Compared to classical SoH measurement methods, which deal with the whole battery system, the proposed method estimates the SoH of the system by separating a small but representative set of cells. While SoH measurements are conducted on these isolated cells, the remaining cells in the main battery continue to function in normal mode, albeit at slightly reduced performance levels. Preliminary experimental results are quite promising and validate the feasibility of the proposed approach. Technical details of the proposed circuit architecture are also summarized in the paper.
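    The pack-level estimate described above reduces, in its simplest form, to measuring capacity on the isolated test cells and taking capacity fade as the health metric. A sketch under that common capacity-ratio definition of SoH; the cell values are illustrative:

    ```python
    def state_of_health(measured_capacity_ah, rated_capacity_ah):
        # One common SoH definition: deliverable capacity over rated capacity.
        return measured_capacity_ah / rated_capacity_ah

    def pack_soh_from_test_cells(test_capacities_ah, rated_capacity_ah):
        # Estimate pack SoH from the small, representative set of isolated cells.
        avg = sum(test_capacities_ah) / len(test_capacities_ah)
        return state_of_health(avg, rated_capacity_ah)

    # Three 3.0 Ah (rated) test cells now delivering 2.7-2.9 Ah -> SoH ~ 0.93
    soh = pack_soh_from_test_cells([2.7, 2.9, 2.8], 3.0)
    ```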

  15. Utility of Satellite Magnetic Observations for Estimating Near-Surface Magnetic Anomalies

    NASA Technical Reports Server (NTRS)

    Kim, Hyung Rae; vonFrese, Ralph R. B.; Taylor, Patrick T.; Kim, Jeong Woo; Park, Chan Hong

    2003-01-01

    Regional- to continental-scale magnetic anomaly maps are becoming increasingly available from airborne, shipborne, and terrestrial surveys. Satellite data are commonly considered to fill the coverage gaps in regional compilations of these near-surface surveys. For the near-surface Antarctic magnetic anomaly map being produced by the Antarctic Digital Magnetic Anomaly Project (ADMAP), we show that near-surface magnetic anomaly estimation is greatly enhanced by the joint inversion of the near-surface data with the satellite observations, relative to conventional techniques such as minimum curvature. Orsted observations are especially advantageous relative to the Magsat data, which have order-of-magnitude greater measurement errors, albeit at much lower orbital altitudes. CHAMP is observing the geomagnetic field with the same measurement accuracy as the Orsted mission, but at the lower orbital altitudes covered by Magsat. Hence, additional significant improvement in predicting near-surface magnetic anomalies can be expected as these CHAMP data become available. Our analysis also suggests that considerable new insight into the magnetic properties of the lithosphere may be revealed by a further order-of-magnitude improvement in the accuracy of magnetometer measurements at minimum orbital altitude.

  16. The Impact of Measurement Noise in GPA Diagnostic Analysis of a Gas Turbine Engine

    NASA Astrophysics Data System (ADS)

    Ntantis, Efstratios L.; Li, Y. G.

    2013-12-01

    The performance diagnostic analysis of a gas turbine is accomplished by estimating a set of internal engine health parameters from available sensor measurements. No physical measuring instrument, however, can ever completely eliminate measurement uncertainties. Sensor measurements are often distorted by noise and bias, leading to inaccurate estimation results. This paper explores the impact of measurement noise on gas turbine Gas Path Analysis (GPA). The analysis is demonstrated with a test case in which the gas turbine performance simulation and diagnostics code TURBOMATCH is used to build a performance model of an engine similar to the Rolls-Royce Trent 500 turbofan and to carry out the diagnostic analysis in the presence of different levels of measurement noise. Finally, to improve the reliability of the diagnostic results, a statistical analysis of the data scattering caused by sensor uncertainties is made. The diagnostic tool used for the statistical analysis of measurement noise impact is a model-based method utilizing non-linear GPA.

  17. A TRMM-Based System for Real-Time Quasi-Global Merged Precipitation Estimates

    NASA Technical Reports Server (NTRS)

    Starr, David OC. (Technical Monitor); Huffman, G. J.; Adler, R. F.; Stocker, E. F.; Bolvin, D. T.; Nelkin, E. J.

    2002-01-01

    A new processing system has been developed to combine IR and microwave data into 0.25 degree x 0.25 degree gridded precipitation estimates in near-real time over the latitude band plus or minus 50 degrees. Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) precipitation estimates are used to calibrate Special Sensor Microwave/Imager (SSM/I) estimates, and Advanced Microwave Sounding Unit (AMSU) and Advanced Microwave Scanning Radiometer (AMSR) estimates, when available. The merged microwave estimates are then used to create a calibrated IR estimate in a Probability-Matched-Threshold approach for each individual hour. The microwave and IR estimates are combined for each 3-hour interval. Early results will be shown, including typical tropical and extratropical storm evolution and examples of the diurnal cycle. Major issues will be discussed, including the choice of IR algorithm, the approach for merging the IR and microwave estimates, extension to higher latitudes, retrospective processing back to 1999, and extension to the GPCP One-Degree Daily product (for which the authors are responsible). The work described here provides one approach to using data from the future NASA Global Precipitation Measurement program, which is designed to provide full global coverage by low-orbit passive microwave satellites every three hours beginning around 2008.

  18. Multiple marker abundance profiling: combining selected reaction monitoring and data-dependent acquisition for rapid estimation of organelle abundance in subcellular samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooper, Cornelia M.; Stevens, Tim J.; Saukkonen, Anna

    Measuring changes in protein or organelle abundance in the cell is an essential, but challenging aspect of cell biology. Frequently-used methods for determining organelle abundance typically rely on detection of a very few marker proteins, so are unsatisfactory. In silico estimates of protein abundances from publicly available protein spectra can provide useful standard abundance values but contain only data from tissue proteomes, and are not coupled to organelle localization data. A new protein abundance score, the normalized protein abundance scale (NPAS), expands on the number of scored proteins and the scoring accuracy of lower-abundance proteins in Arabidopsis. NPAS was combined with subcellular protein localization data, facilitating quantitative estimations of organelle abundance during routine experimental procedures. A suite of targeted proteomics markers for subcellular compartment markers was developed, enabling independent verification of in silico estimates for relative organelle abundance. Estimation of relative organelle abundance was found to be reproducible and consistent over a range of tissues and growth conditions. In silico abundance estimations and localization data have been combined into an online tool, multiple marker abundance profiling, available in the SUBA4 toolbox (http://suba.live).

  19. Multiple marker abundance profiling: combining selected reaction monitoring and data-dependent acquisition for rapid estimation of organelle abundance in subcellular samples

    DOE PAGES

    Hooper, Cornelia M.; Stevens, Tim J.; Saukkonen, Anna; ...

    2017-10-12

    Measuring changes in protein or organelle abundance in the cell is an essential, but challenging aspect of cell biology. Frequently-used methods for determining organelle abundance typically rely on detection of a very few marker proteins, so are unsatisfactory. In silico estimates of protein abundances from publicly available protein spectra can provide useful standard abundance values but contain only data from tissue proteomes, and are not coupled to organelle localization data. A new protein abundance score, the normalized protein abundance scale (NPAS), expands on the number of scored proteins and the scoring accuracy of lower-abundance proteins in Arabidopsis. NPAS was combined with subcellular protein localization data, facilitating quantitative estimations of organelle abundance during routine experimental procedures. A suite of targeted proteomics markers for subcellular compartment markers was developed, enabling independent verification of in silico estimates for relative organelle abundance. Estimation of relative organelle abundance was found to be reproducible and consistent over a range of tissues and growth conditions. In silico abundance estimations and localization data have been combined into an online tool, multiple marker abundance profiling, available in the SUBA4 toolbox (http://suba.live).

  20. New infrastructure for studies of transmutation and fast systems concepts

    NASA Astrophysics Data System (ADS)

    Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria

    2017-09-01

    In this work we report initial studies on a low power Accelerator-Driven System as a possible experimental facility for the measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.

  1. A low power ADS for transmutation studies in fast systems

    NASA Astrophysics Data System (ADS)

    Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria

    2017-12-01

    In this work, we report studies on a fast low power accelerator driven system model as a possible experimental facility, focusing on its capabilities in terms of measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.

  2. Dynamic state estimation assisted power system monitoring and protection

    NASA Astrophysics Data System (ADS)

    Cui, Yinan

    The advent of phasor measurement units (PMUs) has unlocked several novel methods to monitor, control, and protect bulk electric power systems. This thesis introduces the concept of "Dynamic State Estimation" (DSE), aided by PMUs, for wide-area monitoring and protection of power systems. Unlike traditional state estimation, where algebraic variables are estimated from system measurements, DSE refers to a process to estimate the dynamic states associated with synchronous generators. This thesis first establishes the viability of using particle filtering as a technique to perform DSE in power systems. The utility of DSE for protection and wide-area monitoring is then shown through potential novel applications. The work is presented as a collection of several journal and conference papers. In the first paper, we present a particle filtering approach to dynamically estimate the states of a synchronous generator in a multi-machine setting, considering the excitation and prime mover control systems. The second paper proposes an improved out-of-step detection method for generators by means of angular difference. The generator's rotor angle is estimated with a particle filter-based dynamic state estimator, and the angular separation is then calculated by combining the raw local phasor measurements with this estimate. The third paper introduces a particle filter-based dual estimation method for tracking the dynamic states of a synchronous generator. It considers the situation where field voltage measurements are not readily available. The particle filter is modified to treat the field voltage as an unknown input which is sequentially estimated along with the other dynamic states. The fourth paper proposes a novel framework for event detection based on energy functions. The key idea is that any event in the system will leave a signature in WAMS data sets. It is shown that signatures for four broad classes of disturbance events are buried in the components that constitute the energy function for the system. This establishes a direct correspondence (or mapping) between an event and certain components of the energy function. The last paper considers the dynamic latency effect that arises when measurements and estimated dynamics are transmitted from remote ends to a centralized location over the network.
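    The estimation loop the thesis builds on can be illustrated with a minimal bootstrap particle filter for a toy scalar model x_{t+1} = a*x_t + w, y_t = x_t + v; the generator swing-equation models in the papers are far richer, and the noise levels here are arbitrary:

    ```python
    import math
    import random

    def particle_filter(ys, n=500, a=0.9, q=0.1, r=0.1, seed=1):
        """Bootstrap particle filter for x_{t+1} = a*x_t + N(0, q^2),
        y_t = x_t + N(0, r^2). Returns the posterior-mean estimates."""
        rng = random.Random(seed)
        particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
        estimates = []
        for y in ys:
            # predict: propagate each particle through the process model
            particles = [a * p + rng.gauss(0.0, q) for p in particles]
            # weight by the Gaussian measurement likelihood
            ws = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in particles]
            total = sum(ws)
            ws = [w / total for w in ws]
            estimates.append(sum(w * p for w, p in zip(ws, particles)))
            # multinomial resampling (systematic resampling would be lower-variance)
            particles = rng.choices(particles, weights=ws, k=n)
        return estimates

    # Constant observation y=1: the estimate settles between the model's
    # pull toward zero and the measurements
    est = particle_filter([1.0] * 20)
    ```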

  3. Multisite evaluation of APEX for water quality: II. Regional parameterization

    USDA-ARS?s Scientific Manuscript database

    Phosphorus (P) index assessment requires independent estimates of long-term average annual P loss from multiple locations, management practices, soils, and landscape positions. Because currently available measured data are insufficient, calibrated and validated process-based models have been propos...

  4. Solutions for the diurnally forced advection-diffusion equation to estimate bulk fluid velocity and diffusivity in streambeds from temperature time series

    NASA Astrophysics Data System (ADS)

    Luce, C.; Tonina, D.; Gariglio, F. P.; Applebee, R.

    2012-12-01

    Differences in the diurnal variations of temperature at different depths in streambed sediments are commonly used for estimating vertical fluxes of water in the streambed. We applied spatial and temporal rescaling of the advection-diffusion equation to derive two new relationships that greatly extend the kinds of information that can be derived from streambed temperature measurements. The first equation provides a direct estimate of the Peclet number from the amplitude decay and phase delay information. The analytical equation is explicit (i.e., no numerical root-finding is necessary) and invertible. The thermal front velocity can be estimated from the Peclet number when the thermal diffusivity is known. The second equation allows for an independent estimate of the thermal diffusivity directly from the amplitude decay and phase delay information. Several improvements are available with the new information. The first equation uses a ratio of the amplitude decay and phase delay information; thus Peclet number calculations are independent of depth. The explicit form also makes it faster and easier to calculate estimates from a large number of sensors or multiple positions along one sensor. Where current practice requires a priori estimation of streambed thermal diffusivity, the new approach allows an independent calculation, improving the precision of estimates. Furthermore, when many measurements are made over space and time, expectations of the spatial correlation and temporal invariance of thermal diffusivity are valuable for validating measurements. Finally, the closed-form explicit solution allows direct calculation of the propagation of measurement errors and parameter uncertainties, providing insight into the error expectations for sensors placed at different depths in different environments as a function of surface temperature variation amplitudes. The improvements are expected to increase the utility of temperature measurement methods for studying groundwater-surface water interactions across space and time scales. We discuss the theoretical implications of the new solutions, supported by examples with data for illustration and validation.
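    The inputs to both equations, the diurnal amplitude and phase at each depth, can be extracted by projecting a temperature series onto sine and cosine at the known diurnal frequency (a one-frequency Fourier fit). The sketch below does only that extraction step; the paper's Peclet-number and diffusivity formulas built from the amplitude ratio and phase delay between depths are not reproduced here:

    ```python
    import math

    def diurnal_amp_phase(temps, dt_hours, period_hours=24.0):
        """Amplitude and phase (radians) of the diurnal component of a
        temperature series sampled every dt_hours; best over whole periods,
        since the projection then cancels the mean exactly."""
        w = 2.0 * math.pi / period_hours
        n = len(temps)
        c = sum(t * math.cos(w * i * dt_hours) for i, t in enumerate(temps)) * 2.0 / n
        s = sum(t * math.sin(w * i * dt_hours) for i, t in enumerate(temps)) * 2.0 / n
        return math.hypot(c, s), math.atan2(s, c)
    ```

    Applying this at two depths and taking the amplitude ratio and phase difference yields the quantities the new equations consume.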

  5. Source Term Estimates of Radioxenon Released from the BaTek Medical Isotope Production Facility Using External Measured Air Concentrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eslinger, Paul W.; Cameron, Ian M.; Dumais, Johannes R.

    2015-10-01

    Batan Teknologi (BaTek) operates an isotope production facility in Serpong, Indonesia, that supplies 99mTc for use in medical procedures. Atmospheric releases of Xe-133 in the production process at BaTek are known to influence the measurements taken at the closest stations of the International Monitoring System (IMS). The purpose of the IMS is to detect evidence of nuclear explosions, including atmospheric releases of radionuclides. The xenon isotopes released from BaTek are the same as those produced in a nuclear explosion, but the isotopic ratios are different. Knowledge of the magnitude of releases from the isotope production facility helps inform analysts trying to decide whether a specific measurement result came from a nuclear explosion. A stack monitor deployed at BaTek in 2013 measured releases to the atmosphere for several isotopes. The facility operates on a weekly cycle, and the stack data for June 15-21, 2013 show a release of 1.84E13 Bq of Xe-133. Concentrations of Xe-133 in the air are available at the same time from a xenon sampler located 14 km from BaTek. An optimization process using atmospheric transport modeling and the sampler air concentrations produced a release estimate of 1.88E13 Bq. The same optimization process yielded a release estimate of 1.70E13 Bq for a different week in 2012. The stack release value and the two optimized estimates are all within 10 percent of each other. A weekly release estimate of 1.8E13 Bq and a 40 percent facility operation rate yield a rough annual release estimate of 3.7E13 Bq of Xe-133. This value is consistent with previously published estimates of annual releases for this facility, which are based on measurements at three IMS stations. These multiple lines of evidence cross-validate the stack release estimates and the release estimates from atmospheric samplers.
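    For a linear transport model, the kind of optimization described reduces to scaling the release until modeled and measured concentrations agree in a least-squares sense. A hypothetical sketch only; the c_unit values (concentration per unit release, as produced by atmospheric transport modeling) and the observations below are made up for illustration:

    ```python
    def release_estimate(c_obs, c_unit):
        """Least-squares release (Bq) from observed concentrations and modeled
        concentrations per unit release: the slope through the origin,
        Q = sum(c_obs * c_unit) / sum(c_unit^2)."""
        num = sum(o * u for o, u in zip(c_obs, c_unit))
        den = sum(u * u for u in c_unit)
        return num / den

    # Hypothetical: two sampling periods with modeled dilution factors of
    # 1e-13 and 2e-13 (Bq/m^3 per Bq released) and matching observations.
    q_hat = release_estimate([1.84, 3.68], [1.0e-13, 2.0e-13])  # ~1.84e13 Bq
    ```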

  6. Double-survey estimates of bald eagle populations in Oregon

    USGS Publications Warehouse

    Anthony, R.G.; Garrett, Monte G.; Isaacs, F.B.

    1999-01-01

    The literature on abundance of birds of prey is almost devoid of population estimates with statistical rigor. Therefore, we surveyed bald eagle (Haliaeetus leucocephalus) populations on the Crooked and lower Columbia rivers of Oregon and used the double-survey method to estimate populations and sighting probabilities for different survey methods (aerial, boat, vehicle) and bald eagle ages (adults vs. subadults). Sighting probabilities were consistently 20%. The results revealed variable and negative bias (percent relative bias = -9 to -70%) of direct counts and emphasized the importance of estimating populations where some measure of precision and ability to conduct inference tests are available. We recommend use of the double-survey method to estimate abundance of bald eagle populations and other raptors in open habitats.
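    In its simplest Lincoln-Petersen style form, a double survey with two independent observation platforms yields closed-form estimates of abundance and the sighting probabilities; the counts below are made up, and the paper's actual estimator and variance are more involved:

    ```python
    def double_survey_estimate(n1, n2, m):
        """Two independent surveys sight n1 and n2 individuals, with m seen
        by both. Returns (N_hat, p1, p2): N_hat = n1*n2/m, p1 = m/n2,
        p2 = m/n1 (each survey's sighting probability)."""
        return n1 * n2 / m, m / n2, m / n1

    # Made-up counts: aerial survey sees 50, boat survey sees 40, 20 in common.
    n_hat, p_aerial, p_boat = double_survey_estimate(50, 40, 20)
    ```

    The contrast between n_hat and either raw count is the negative bias of direct counts that the abstract emphasizes.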

  7. Comparison of multiple non-invasive methods of measuring cardiac output during pregnancy reveals marked heterogeneity in the magnitude of cardiac output change between women.

    PubMed

    Petersen, John W; Liu, Jing; Chi, Yueh-Yun; Lingis, Melissa; Williams, R Stan; Rhoton-Vlasak, Alice; Segal, Mark S; Conrad, Kirk P

    2017-04-01

    Various non-invasive methods are available to measure cardiac output (CO) during pregnancy. We compared serial measures of CO using various methods to determine which provided the least variability. Ten patients with spontaneous pregnancies had CO estimated at baseline prior to becoming pregnant and at the end of the first and third trimesters. Echocardiographic data were used to estimate CO using the Teichholz method, Simpson's biplane method, and the Doppler-determined velocity-time integral (VTI) method. In addition, a Bioz Dx device was used to estimate CO by impedance cardiography. CO estimated with the VTI method had the lowest beat-to-beat variability. CO estimated with the VTI method was higher than CO estimated with the 2D-Teichholz method and Simpson's method. The percent change in CO during pregnancy was similar for all echo methods (VTI, Teichholz, and Simpson's biplane). Baseline CO determined with impedance cardiography was higher than CO determined with the VTI method. However, the change in CO during pregnancy was significantly lower when measured with impedance cardiography. There was marked heterogeneity in the degree of rise in CO during the first trimester (-3 to 55%). The wide variation in the gestational rise in CO was unexpected and at least in part secondary to a variable increase in heart rate. We recommend the use of the Doppler-determined VTI method for the estimation of CO in pregnancy. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.
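    The Doppler VTI method recommended above combines the outflow-tract cross-sectional area, the velocity-time integral, and heart rate; a sketch with illustrative values, not patient data from the study:

    ```python
    import math

    def cardiac_output_vti(lvot_diameter_cm, vti_cm, heart_rate_bpm):
        """CO in L/min: stroke volume = LVOT area x VTI (cm^3), CO = SV x HR."""
        csa_cm2 = math.pi * (lvot_diameter_cm / 2.0) ** 2
        return csa_cm2 * vti_cm * heart_rate_bpm / 1000.0  # cm^3/min -> L/min

    # Illustrative: 2.0 cm LVOT diameter, VTI 20 cm, HR 70 bpm -> ~4.4 L/min
    co = cardiac_output_vti(2.0, 20.0, 70.0)
    ```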

  8. Best estimate of luminal cross-sectional area of coronary arteries from angiograms

    NASA Technical Reports Server (NTRS)

    Lee, P. L.; Selzer, R. H.

    1988-01-01

    We have reexamined the problem of estimating the luminal area of an elliptically-shaped coronary artery cross section from two or more radiographic diameter measurements. The expected error is found to be much smaller than the maximum potential error. In the case of two orthogonal views, closed-form expressions have been derived for calculating the area and the uncertainty. Assuming that the underlying ellipse has limited ellipticity (major/minor axis ratio less than five), it is shown that the average uncertainty in the area is less than 14 percent. When more than two views are available, we suggest using a least-squares fit method to extract all available information from the data.
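    When the two orthogonal views happen to align with the ellipse axes, the area follows directly from the two projected diameters; the paper's closed-form expressions also quantify the error when they do not. A minimal sketch of the aligned case:

    ```python
    import math

    def ellipse_area_from_views(d1, d2):
        """Luminal area from two orthogonal projected diameters, assuming the
        views align with the ellipse axes: A = pi * d1 * d2 / 4. For
        misaligned views the abstract bounds the average uncertainty at
        less than 14% for axis ratios below five."""
        return math.pi * d1 * d2 / 4.0

    area = ellipse_area_from_views(3.0, 3.0)  # circular case: pi * 9 / 4
    ```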

  9. Estimation of Radiative Efficiency of Chemicals with Potentially Significant Global Warming Potential

    EPA Pesticide Factsheets

    The set of commercially available chemical substances that may have significant global warming potential (GWP) is not well defined. Although there are currently over 200 chemicals with high GWP reported by the Intergovernmental Panel on Climate Change, World Meteorological Organization, or Environmental Protection Agency, there may be hundreds of additional chemicals that also have significant GWP. Evaluation of various approaches to estimating radiative efficiency (RE) and atmospheric lifetime will help to refine GWP estimates for compounds where no measured IR spectrum is available. This study compares values of RE calculated using computational chemistry techniques for 235 chemical compounds against the best available values. It is important to assess the reliability of the underlying computational methods for computing RE to understand the sources of deviations from the best available values. Computed vibrational frequency data are used to estimate RE values using several Pinnock-type models. The values derived using these models are found to be in reasonable agreement with reported RE values (though significant improvement is obtained through scaling). The effect of varying the computational method and basis set used to calculate the frequency data is also discussed. It is found that the vibrational intensities have a strong dependence on basis set and are largely responsible for differences in computed values of RE in this study. Deviations of

  10. The Usability of CAT System for Assessing the Depressive Level of Japanese-A Study on Psychometric Properties and Response Behavior.

    PubMed

    Iwata, Noboru; Kikuchi, Kenichi; Fujihara, Yuya

    2016-08-01

    An innovative measurement approach using computerized adaptive testing based on item response theory (CAT) has been expanding as a way to measure mental health status. However, little is known about the details of its measurement properties based on empirical data. Moreover, response time (RT) data, which are not available from a paper-and-pencil measurement but are from a computerized one, would be worth investigating to explore response behavior. We aimed to construct a CAT to measure depressive symptomatology in a community population and to explore its measurement properties. We also examined the relationships between RTs, individual item responses, and depressive levels. For constructing the CAT system, responses of 2061 workers and university students to 24 depression-scale items plus four negatively revised positive affect items were subjected to a polytomous IRT analysis. The stopping rule was set as a standard error of estimation < 0.30 or a maximum of 15 items displayed. The CAT and a non-adaptive computer-based test (CBT) were administered to 209 undergraduates, and 168 of them were tested again after 1 week. On average, the CAT converged after 10.4 items. The θ values estimated by CAT and CBT were highly correlated (r = 0.94 and 0.95 for the 1st and 2nd measurements) and correlated highly with the traditional scoring procedures (r's > 0.90). The test-retest reliability was at a satisfactory level (r = 0.86). RTs to some items correlated significantly with the θ estimates. The mean RT varied by item content and wording; i.e., RTs to the positive affect items were 2 s or more longer than those to the other subscale items. The CAT would be a reliable and practical measurement tool for various purposes, including stress checks at the workplace.
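    The stopping rule quoted above (SE of estimation < 0.30 or 15 items) can be sketched as an information-accumulation loop, since in IRT SE(θ) = 1/sqrt(total Fisher information). The per-item information values below are invented for illustration; the real system recomputes them from the polytomous model at the current θ estimate:

    ```python
    import math

    def items_to_convergence(item_informations, se_target=0.30, max_items=15):
        """Administer the most informative items first, accumulating Fisher
        information I; stop when SE = 1/sqrt(I) < se_target or max_items
        have been shown. Returns (items_shown, final_se)."""
        total_info, shown = 0.0, 0
        for info in sorted(item_informations, reverse=True):
            total_info += info
            shown += 1
            if 1.0 / math.sqrt(total_info) < se_target or shown >= max_items:
                break
        return shown, 1.0 / math.sqrt(total_info)

    # With invented, uniformly informative items (I = 1.5 each), the
    # SE < 0.30 criterion is met after 8 items
    shown, se = items_to_convergence([1.5] * 20)
    ```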

  11. An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements

    NASA Astrophysics Data System (ADS)

    Kang, D.

    2015-12-01

    In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of ensemble-mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at those roughness heights are used. In practice, a temporal mean over a large number of samples is used in place of the ensemble mean. In many situations, however, the samples are taken from multiple levels, and it is desirable to derive the boundary layer flux properties using all measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows MOST. An optimized estimate is obtained when the results yield a minimum cost function, defined as a weighted sum of the error variances at each sample altitude. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer from a small boat, using a radiosonde on a tethered balloon with which temperature and relative humidity profiles in the lowest 50 m were made repeatedly in about 30 minutes. We will present the resulting fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over traditional methods will be illustrated, and some limitations of the optimization method will also be discussed. Its application to quantifying the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
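    A minimal sketch of the idea, assuming neutral stability so that the MOST wind profile reduces to the log law u(z) = (u*/κ) ln(z/z0). The inverse-variance-weighted least-squares fit below plays the role of minimizing the weighted cost function over the sample altitudes; the function name and inputs are illustrative, not the paper's:

    ```python
    import math

    KAPPA = 0.4  # von Karman constant

    def fit_neutral_profile(z, u, var):
        """Weighted least-squares fit of u(z) = (u*/kappa) * ln(z/z0) to wind
        samples at several heights; weights are inverse sample variances,
        mirroring a cost function J = sum_i w_i * (u_i - u_hat(z_i))^2."""
        x = [math.log(zi) for zi in z]
        w = [1.0 / v for v in var]
        sw = sum(w)
        xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
        ubar = sum(wi * ui for wi, ui in zip(w, u)) / sw
        b = (sum(wi * (xi - xbar) * (ui - ubar)
                 for wi, xi, ui in zip(w, x, u))
             / sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x)))
        a = ubar - b * xbar
        ustar = b * KAPPA       # slope of u vs ln(z) gives friction velocity
        z0 = math.exp(-a / b)   # intercept gives the roughness length
        return ustar, z0
    ```

    The full method additionally fits the stability-corrected (non-neutral) MOST profiles and temperature/humidity simultaneously, which this sketch omits.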

  12. Improving Video Based Heart Rate Monitoring.

    PubMed

    Lin, Jian; Rozado, David; Duenser, Andreas

    2015-01-01

    Non-contact measurements of cardiac pulse can provide robust measurement of heart rate (HR) without the annoyance of attaching electrodes to the body. In this paper we explore a novel and reliable method to carry out video-based HR estimation and propose various performance improvements over existing approaches. The investigated method uses Independent Component Analysis (ICA) to detect the underlying HR signal from the mixed source signal present in the RGB channels of the image. The original ICA algorithm was implemented, and several modifications were explored in order to determine which could be optimal for accurate HR estimation. Using statistical analysis, we compared the cardiac pulse rate estimates produced by the methods under comparison on the extracted videos against a commercially available oximeter. We found that some of these methods are quite effective and efficient in terms of improving the accuracy and latency of the system. We have made the code of our algorithms openly available to the scientific community so that other researchers can explore how to integrate video-based HR monitoring into novel health technology applications. We conclude by noting that recent advances in video-based HR monitoring permit computers to be aware of a user's psychophysiological status in real time.
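    After ICA separates the mixed RGB sources, the HR readout is typically taken as the dominant spectral peak of the selected pulse component within a plausible cardiac band. A pure-Python sketch of that final step, assuming the pulse component has already been extracted (the band limits here are generic assumptions, not the paper's values):

    ```python
    import math

    def estimate_hr(signal, fps, lo=0.75, hi=4.0):
        """Estimate heart rate (beats/min) as the dominant DFT frequency of a
        de-meaned pulse signal within a plausible cardiac band (lo-hi Hz)."""
        n = len(signal)
        mean = sum(signal) / n
        x = [s - mean for s in signal]
        best_f, best_p = 0.0, -1.0
        for k in range(1, n // 2):
            f = k * fps / n
            if not (lo <= f <= hi):
                continue
            # DFT power at bin k
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            p = re * re + im * im
            if p > best_p:
                best_f, best_p = f, p
        return best_f * 60.0
    ```

    In practice an FFT would replace the explicit DFT loop; the explicit form is kept here only for self-containment.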

  13. Assessment of type II diabetes mellitus using irregularly sampled measurements with missing data.

    PubMed

    Barazandegan, Melissa; Ekram, Fatemeh; Kwok, Ezra; Gopaluni, Bhushan; Tulsyan, Aditya

    2015-04-01

    Diabetes mellitus is one of the leading diseases in the developed world. In order to better regulate blood glucose in a diabetic patient, improved modelling of insulin-glucose dynamics is a key factor in the treatment of diabetes mellitus. In the current work, the insulin-glucose dynamics in type II diabetes mellitus are modelled using a stochastic nonlinear state-space model. Estimating the parameters of such a model is difficult because only a few blood glucose and insulin measurements per day are available in a non-clinical setting. Developing a predictive model of the blood glucose of a person with type II diabetes mellitus is therefore important when the glucose and insulin concentrations are available only at irregular intervals. To overcome these difficulties, we resort to online sequential Monte Carlo (SMC) estimation of the states and parameters of the state-space model for type II diabetic patients under various levels of randomly missing clinical data. Our results show that this method is efficient in monitoring and estimating the dynamics of the peripheral glucose, insulin and incretin concentrations when 10, 25 and 50% of the simulated clinical data were randomly removed.
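    A minimal sketch of the SMC idea for irregularly sampled data, using a toy random-walk state-space model rather than the paper's insulin-glucose model. Observations marked None are simply skipped, so the filter propagates the prior across the gap; all parameter values are illustrative:

    ```python
    import math
    import random

    def particle_filter(obs, n=500, q=0.1, r=0.5, seed=1):
        """Bootstrap SMC for x_t = x_{t-1} + N(0, q), y_t = x_t + N(0, r).
        A missing measurement (None) skips the weighting/resampling step, the
        same idea used to handle randomly missing clinical samples."""
        rng = random.Random(seed)
        parts = [rng.gauss(0.0, 1.0) for _ in range(n)]
        est = []
        for y in obs:
            parts = [p + rng.gauss(0.0, math.sqrt(q)) for p in parts]  # predict
            if y is not None:  # update only when a measurement exists
                w = [math.exp(-0.5 * (y - p) ** 2 / r) for p in parts]
                s = sum(w)
                w = [wi / s for wi in w]
                # systematic resampling
                u0 = rng.random() / n
                c, i, new = w[0], 0, []
                for j in range(n):
                    u = u0 + j / n
                    while u > c and i < n - 1:
                        i += 1
                        c += w[i]
                    new.append(parts[i])
                parts = new
            est.append(sum(parts) / n)  # posterior-mean state estimate
        return est
    ```

    The paper additionally estimates static model parameters online, which is commonly done by augmenting the particle state vector with the parameters.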

  14. Detecting photovoltaic solar panels using hyperspectral imagery and estimating solar power production

    NASA Astrophysics Data System (ADS)

    Czirjak, Daniel

    2017-04-01

    Remote sensing platforms have consistently demonstrated the ability to detect, and in some cases identify, specific targets of interest, and photovoltaic solar panels are shown to have a unique spectral signature that is consistent across multiple manufacturers and construction methods. Solar panels are proven to be detectable in hyperspectral imagery using common statistical target detection methods such as the adaptive cosine estimator, and false alarms can be mitigated through a spectral verification process that eliminates pixels lacking the key spectral features of the photovoltaic solar panel reflectance spectrum. The normalized solar panel index is described and is a key component of the false-alarm mitigation process. After spectral verification, these solar panel arrays are confirmed on openly available literal imagery and can be measured using numerous open-source algorithms and tools. The measurements allow for the assessment of overall solar power generation capacity using an equation that accounts for solar insolation, the area of the solar panels, and the efficiency of the panels' conversion of solar energy to power. Using a known location with readily available information, the methods outlined in this paper estimate the power generation capability to within 6% of the rated power.
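    The capacity equation described above (insolation × panel area × conversion efficiency) can be sketched directly; the default insolation and efficiency values here are illustrative assumptions, not figures from the paper:

    ```python
    def estimated_power_kw(area_m2, insolation_kw_m2=1.0, efficiency=0.18):
        """Capacity estimate: solar insolation (kW/m^2) times measured panel
        area (m^2) times panel conversion efficiency."""
        return area_m2 * insolation_kw_m2 * efficiency
    ```

    For example, 100 m² of panels at 1 kW/m² peak insolation and 18% efficiency gives an estimated 18 kW of generating capacity.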

  15. Estimating groundwater extraction in a data-sparse coal seam gas region, Australia

    NASA Astrophysics Data System (ADS)

    Keir, Greg; Bulovic, Nevenka; McIntyre, Neil

    2017-04-01

    The semi-arid Surat and Bowen Basins in central Queensland, Australia, are groundwater resources of both national and regional significance. Regional towns, agricultural industries and communities depend heavily on the 30 000+ groundwater supply bores for their existence; however, groundwater extraction measurements are rare in this area and primarily limited to small irrigation regions. Accordingly, regional groundwater extraction is not well understood, and this may have implications for regional numerical groundwater modelling and for impact assessments associated with recent coal seam gas developments. Here we present a novel statistical approach to modelling regional groundwater extraction that merges flow measurements and estimates with other, more commonly available spatial datasets that may be of value, such as climate data, pasture data and surface water availability. A three-step modelling approach, combining a property-scale magnitude model, a bore-scale occurrence model, and a proportional distribution model within properties, is used to estimate bore extraction. We describe the process of model development and selection, and present extraction results on an aquifer-by-aquifer basis suitable for numerical groundwater modelling. Lastly, we conclude with recommendations for future research, particularly related to improving the attribution of property-scale water demand and to temporal variability in water usage.

  16. Measurement of absolute lung volumes by imaging techniques.

    PubMed

    Clausen, J

    1997-10-01

    In this paper, the techniques available for estimating total lung capacity from standard chest radiographs in children and infants as well as adults are reviewed. These techniques include manual measurements using ellipsoid and planimetry techniques as well as computerized systems. Techniques are also available for making radiographic lung volume measurements from portable chest radiographs. There are inadequate data in the literature to support recommending one specific technique over another. Although measurements of lung volumes by radiographic, plethysmographic, and gas dilution or washout techniques yield remarkably similar mean results when groups of normal subjects are tested, the results of these different basic measurement techniques can differ significantly in patients with disease. Computed tomographic and magnetic resonance techniques can also be used to measure absolute lung volumes and offer the theoretical advantage that the results in individual subjects are less affected by variations in thoracic shape than are measurements made using conventional chest radiographs.

  17. Determining metal origins and availability in fluvial deposits by analysis of geochemical baselines and solid-solution partitioning measurements and modelling.

    PubMed

    Vijver, Martina G; Spijker, Job; Vink, Jos P M; Posthuma, Leo

    2008-12-01

    Metals in floodplain soils and sediments (deposits) can originate from lithogenic and anthropogenic sources, and their availability for uptake by biota is hypothesized to depend on both origin and local sediment conditions. In criteria-based environmental risk assessments, these issues are often neglected, implying that local risks are often over-estimated. Current problem definitions in river basin management tend to require a refined, site-specific focus, resulting in a need to address both aspects. This paper focuses on determining the local environmental availability of metals in fluvial deposits by addressing both the origins of the metals and their partitioning over the solid and solution phases. The environmental availability of metals is assumed to be a key force influencing exposure levels in field soils and sediments. Anthropogenic enrichments of Cu, Zn and Pb in top layers could be distinguished from lithogenic background concentrations and described using an aluminium proxy. Cd in top layers was almost fully attributed to anthropogenic enrichment. Anthropogenic enrichments for Cu and Zn also appeared to be well represented by cold 2M HNO3 extraction of site samples; for Pb, the extractions over-estimated the enrichments. Metal partitioning was measured, and the measurements were compared to predictions generated by an empirical regression model and by a mechanistic-kinetic model. The partitioning models predicted metal partitioning in floodplain deposits to within about one order of magnitude, though a large inter-sample variability was found for Pb.

  18. Evaluation of a simple, point-scale hydrologic model in simulating soil moisture using the Delaware environmental observing system

    NASA Astrophysics Data System (ADS)

    Legates, David R.; Junghenn, Katherine T.

    2018-04-01

    Many local weather station networks that measure a number of meteorological variables (i.e., mesonetworks) have recently been established, with soil moisture occasionally being part of the suite of measured variables. These mesonetworks provide data from which detailed estimates of various hydrological parameters, such as precipitation and reference evapotranspiration, can be made; coupled with simple surface characteristics available from soil surveys, these can be used to obtain estimates of soil moisture. The question is: can meteorological data be used with a simple hydrologic model to accurately estimate daily soil moisture at a mesonetwork site? Using a state-of-the-art mesonetwork across the US State of Delaware that also includes soil moisture measurements, we assess the efficacy of a simple, modified Thornthwaite/Mather-based daily water balance model, driven by these mesonetwork observations, for estimating site-specific soil moisture. Results suggest that the model works reasonably well for most well-drained sites and provides good qualitative estimates of measured soil moisture, often near the accuracy of the soil moisture instrumentation. The model has particular trouble simulating the slow drainage that occurs in poorly drained soils after heavy rains; interception loss, resulting from grass not being kept short-cropped as expected, also adversely affects the simulation. However, the model could be tuned to accommodate some non-standard siting characteristics.

  19. Urinary C-peptide as a method for monitoring body mass changes in captive bonobos (Pan paniscus).

    PubMed

    Deschner, Tobias; Kratzsch, Jürgen; Hohmann, Gottfried

    2008-11-01

    In recent years methodological improvements have allowed for more precise estimates of nutrient intake in wild primates. However, estimates of energetic condition have remained relatively imprecise due to the difficulties of estimating digestive efficiency and energy expenditure in these animals. In the absence of a reliable intake-expenditure calculation, a method is needed that directly links changes in energetic condition, such as body mass, to physiological changes that can be detected via markers in body excretions such as urine or feces. One promising marker is C-peptide, a metabolic byproduct of insulin synthesis. Here we present the results of a food restriction experiment carried out in a group of captive bonobos (Pan paniscus). We measured changes in food availability and body mass and determined urinary C-peptide levels with the help of a time-resolved fluoroimmunoassay routinely used for measuring C-peptide in human blood. Urinary C-peptide levels decreased during a period of food restriction and increased again when food availability was continuously increased. During this refeeding phase an increase in body mass was significantly correlated with an increase in urinary C-peptide levels. Our results suggest that urinary C-peptide levels are an accurate indicator of individual energy balance. In conclusion, measuring C-peptide in urine is a promising method to quantify the energetic condition of wild apes.

  20. Estimation of cerebral metabolic rate of oxygen consumption using combined multiwavelength photoacoustic microscopy and Doppler microultrasound

    NASA Astrophysics Data System (ADS)

    Jiang, Yan; Zemp, Roger

    2018-01-01

    The metabolic rate of oxygen consumption is an important metric of tissue oxygen metabolism and is especially critical in the brain, yet few methods are available for measuring it. We use a custom combined photoacoustic-microultrasound system and demonstrate cerebral oxygen consumption estimation in vivo. In particular, the cerebral metabolic rate of oxygen consumption was estimated in a murine model during variation of inhaled oxygen from hypoxia to hyperoxia. The hypothesis of brain autoregulation was confirmed with our method even though oxygen saturation and flow in vessels changed.

  1. Estimated hepatitis C prevalence and key population sizes in San Francisco: A foundation for elimination.

    PubMed

    Facente, Shelley N; Grebe, Eduard; Burk, Katie; Morris, Meghan D; Murphy, Edward L; Mirzazadeh, Ali; Smith, Aaron A; Sanchez, Melissa A; Evans, Jennifer L; Nishimura, Amy; Raymond, Henry F

    2018-01-01

    Initiated in 2016, End Hep C SF is a comprehensive initiative to eliminate hepatitis C (HCV) infection in San Francisco. The introduction of direct-acting antivirals to treat and cure HCV provides an opportunity for elimination. To properly measure progress, an estimate of baseline HCV prevalence, and of the number of people in various subpopulations with active HCV infection, is required to target and measure the impact of interventions. Our analysis was designed to incorporate multiple relevant data sources and estimate HCV burden for the San Francisco population as a whole, including specific key populations at higher risk of infection. Our estimates are based on triangulation of data found in case registries, medical records, observational studies, and published literature from 2010 through 2017. We examined subpopulations based on sex, age and/or HCV risk group. When multiple sources of data were available for subpopulation estimates, we calculated a weighted average using inverse variance weighting. Credible ranges (CRs) were derived from 95% confidence intervals of population size and prevalence estimates. We estimate that 21,758 residents of San Francisco are HCV seropositive (CR: 10,274-42,067), representing an overall seroprevalence of 2.5% (CR: 1.2%-4.9%). Of these, 16,408 are estimated to be viremic (CR: 6,505-37,407), though this estimate includes treated cases; up to 12,257 of these (CR: 2,354-33,256) are people who are untreated and infectious. People who injected drugs in the last year represent 67.9% of viremic HCV infections. We estimated approximately 7,400 (51%) more HCV seropositive cases than are included in San Francisco's HCV surveillance case registry. Our estimate provides a useful baseline against which the impact of End Hep C SF can be measured.
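    Inverse variance weighting, as used above to combine multiple sources for a subpopulation estimate, can be sketched as follows (a generic implementation, not the authors' code):

    ```python
    def inverse_variance_mean(estimates, variances):
        """Pool several estimates of the same quantity: each weight is
        1/variance, and the variance of the pooled estimate is 1/sum(weights),
        so more precise sources dominate the average."""
        w = [1.0 / v for v in variances]
        pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
        pooled_var = 1.0 / sum(w)
        return pooled, pooled_var
    ```

    With equal variances this reduces to a simple mean; with unequal variances the pooled variance is always smaller than the smallest input variance.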

  2. The UK wave energy resource

    NASA Astrophysics Data System (ADS)

    Winter, A. J. B.

    1980-10-01

    Previous estimates of wave energy around the United Kingdom have been made by extrapolating measurements from a few sites to the whole UK seaboard. Here directional wave spectra are used from a numerical wave model developed by the Meteorological Office to make estimates which are verified where possible by observation. It is concluded that around 30 GW of power is available for capture by wave energy converters: when estimates of converter spacing and efficiency are considered an average of about 7 GW of electrical power could be supplied. This resource estimate is smaller than previous ones, though consistent with them when factors such as the directional properties of waves and the likelihood that converters will be sited near coasts are included.

  3. Network Reconstruction From High-Dimensional Ordinary Differential Equations.

    PubMed

    Chen, Shizhe; Shojaie, Ali; Witten, Daniela M

    2017-01-01

    We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.

  4. The United States Total DEPosition (TDEP) Project for Sulfur and Nitrogen

    EPA Science Inventory

    Estimates of total sulfur and nitrogen deposition are needed for use in critical loads and other ecosystem assessments. Measurements are available for some chemical species at individual locations for air concentration and wet deposition, but not dry deposition. Modeling provid...

  5. State of balance of the cryosphere

    NASA Technical Reports Server (NTRS)

    Van Der Veen, C. J.

    1991-01-01

    Available observations and mass balance estimates of the cryosphere are summarized. Problems discussed include mountain glaciers, the Greenland ice sheet, the Antarctic ice sheet, conventional glacier measurement techniques, and satellite applications in glacier mass balance studies. It is concluded that the interior part of the Greenland ice sheet is thickening or in near equilibrium. Estimates of the mass balance of the Antarctic ice sheet suggest that it is positive, although the error limits allow for a slightly negative balance.

  6. Acoustic cues available for ranging by humpback whales.

    PubMed

    Mercado, Eduardo; Schneider, Jennifer N; Green, Sean R; Wang, Chong; Rubin, Rachael D; Banks, Patchouly N

    2007-05-01

    Field measurements of sound propagation in a humpback whale habitat were collected to identify cues that a humpback whale might use to estimate its distance from sound sources. The data show that spectral cues are sufficient for estimating the relative distance a sound has traveled in such environments, and that several other cues may also provide useful information. It is suggested that listening humpback whales may use multiple cues in parallel to determine the range to singing whales.

  7. Estimating changes in heat energy stored within a column of wetland surface water and factors controlling their importance in the surface energy budget

    USGS Publications Warehouse

    Shoemaker, W. Barclay; Sumner, David M.; Castillo, Adrian

    2005-01-01

    Changes in heat energy stored within a column of wetland surface water can be a considerable component of the surface energy budget, an attribute that is demonstrated by comparing changes in stored heat energy to net radiation at seven sites in the wetland areas of southern Florida, including the Everglades. The magnitude of changes in stored heat energy approached the magnitude of net radiation more often during the winter dry season than during the summer wet season. Furthermore, the magnitude of changes in stored heat energy in wetland surface water generally decreased as surface energy budgets were upscaled temporally. A new method was developed to estimate changes in stored heat energy that overcomes an important data limitation, namely, the limited spatial and temporal availability of water temperature measurements. The new method is instead based on readily available air temperature measurements and relies on the convolution of air temperature changes with a regression‐defined transfer function to estimate changes in water temperature. The convolution‐computed water temperature changes are used with water depths and heat capacity to estimate changes in stored heat energy within the Everglades wetland areas. These results likely can be adapted to other humid subtropical wetlands characterized by open water, saw grass, and rush vegetation type communities.
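    The convolution step described above can be sketched as a discrete transfer-function filter followed by the heat-capacity conversion. The transfer-function coefficients below are placeholders, not the regression-derived values from the study:

    ```python
    RHO_W = 1000.0  # density of water, kg/m^3
    C_W = 4186.0    # specific heat of water, J/(kg K)

    def stored_heat_change(air_dT, h, depth_m):
        """Convolve air-temperature changes (K per step) with a discrete
        transfer function h to estimate water-temperature changes, then
        convert to heat-storage change per unit area:
        dQ = rho * c * depth * dT_water  (J/m^2)."""
        n, m = len(air_dT), len(h)
        water_dT = [sum(h[k] * air_dT[t - k] for k in range(m) if t - k >= 0)
                    for t in range(n)]
        return [RHO_W * C_W * depth_m * dt for dt in water_dT]
    ```

    In the study, h would be identified by regression against sites where water temperature was actually measured, then applied where only air temperature is available.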

  8. Improving lidar turbulence estimates for wind energy

    NASA Astrophysics Data System (ADS)

    Newman, J. F.; Clifton, A.; Churchfield, M. J.; Klein, P.

    2016-09-01

    Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.
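    Turbulence intensity itself is conventionally defined as the standard deviation of wind speed divided by its mean over an averaging interval; a minimal sketch of that definition (not of the TI error model):

    ```python
    import math

    def turbulence_intensity(u):
        """TI = sigma_u / u_mean over a sample of wind speeds (m/s)."""
        n = len(u)
        mean = sum(u) / n
        var = sum((ui - mean) ** 2 for ui in u) / n
        return math.sqrt(var) / mean
    ```

    The discrepancy discussed above arises because a lidar's volume-averaged, multi-beam sampling yields a different sigma_u than a point anemometer, even when both report the same mean.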

  9. Improving Lidar Turbulence Estimates for Wind Energy: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer; Clifton, Andrew; Churchfield, Matthew

    2016-10-01

    Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.

  10. Use of eddy-covariance methods to "calibrate" simple estimators of evapotranspiration

    USGS Publications Warehouse

    Sumner, David M.; Geurink, Jeffrey S.; Swancar, Amy

    2017-01-01

    Direct measurement of actual evapotranspiration (ET) quantifies this large component of the hydrologic budget but typically requires long periods of record and large instrumentation and labor costs. Simple surrogate methods of estimating ET, if "calibrated" to direct measurements of ET, provide a reliable means to quantify ET. Eddy-covariance measurements of ET were made for 12 years (2004-2015) at an unimproved bahiagrass (Paspalum notatum) pasture in Florida. These measurements were compared to annual rainfall derived from rain gage data and to monthly potential ET (PET) obtained from a long-term (since 1995) U.S. Geological Survey (USGS) statewide, 2-kilometer, daily PET product. The annual ratio of ET to rainfall correlates strongly (r² = 0.86) with annual rainfall, increasing linearly as rainfall decreases. Monthly ET rates correlated closely (r² = 0.84) with the USGS PET product. The results indicate that simple surrogate methods of estimating actual ET show strong potential in the humid Florida climate, given the ready availability of historical rainfall and PET.
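    The "calibration" regressions reported above are ordinary least-squares fits with an r² goodness-of-fit; a generic sketch of that computation (not the authors' code or data):

    ```python
    def linear_fit_r2(x, y):
        """Ordinary least squares y = a + b*x with the coefficient of
        determination r^2 - the kind of calibration regression relating,
        e.g., the annual ET/rainfall ratio to annual rainfall."""
        n = len(x)
        xb, yb = sum(x) / n, sum(y) / n
        b = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
             / sum((xi - xb) ** 2 for xi in x))
        a = yb - b * xb
        ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
        ss_tot = sum((yi - yb) ** 2 for yi in y)
        r2 = 1.0 - ss_res / ss_tot
        return a, b, r2
    ```

    Once a and b are fitted against the eddy-covariance record, the surrogate estimate for a new year or month is simply a + b*x.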

  11. Improving Lidar Turbulence Estimates for Wind Energy

    DOE PAGES

    Newman, Jennifer F.; Clifton, Andrew; Churchfield, Matthew J.; ...

    2016-10-03

    Remote sensing devices (e.g., lidars) are quickly becoming a cost-effective and reliable alternative to meteorological towers for wind energy applications. Although lidars can measure mean wind speeds accurately, these devices measure different values of turbulence intensity (TI) than an instrument on a tower. In response to these issues, a lidar TI error reduction model was recently developed for commercially available lidars. The TI error model first applies physics-based corrections to the lidar measurements, then uses machine-learning techniques to further reduce errors in lidar TI estimates. The model was tested at two sites in the Southern Plains where vertically profiling lidars were collocated with meteorological towers. Results indicate that the model works well under stable conditions but cannot fully mitigate the effects of variance contamination under unstable conditions. To understand how variance contamination affects lidar TI estimates, a new set of equations was derived in previous work to characterize the actual variance measured by a lidar. Terms in these equations were quantified using a lidar simulator and modeled wind field, and the new equations were then implemented into the TI error model.

  12. Economic Analysis of the Return-on-Investment of a Worksite Wellness Program for a Large Multistate Retail Grocery Organization.

    PubMed

    Light, Emily M W; Kline, Allison S; Drosky, Megan A; Chapman, Larry S

    2015-08-01

    The objective of this study is to measure the return on investment (ROI) of the wellness programs available to the Price Chopper/Golub Corporation employee population. Medical claims data, risk level, and the presence of comorbidities such as diabetes and heart disease were compared in a matched retrospective cohort of participants and nonparticipants, with 2008, 2009, and 2010 serving as measurement years. Program costs and estimated savings were used to calculate an ROI of $4.33 for every dollar invested in wellness programs. Reductions in medical costs were observed at several risk and participation levels, with an average savings of $133 per participant and a 3-year savings estimate of $285,706. The positive ROI and savings estimate indicate that the wellness interventions added economic value to the Price Chopper/Golub Corporation.

  13. High resolution time of arrival estimation for a cooperative sensor system

    NASA Astrophysics Data System (ADS)

    Morhart, C.; Biebl, E. M.

    2010-09-01

    The distance resolution of cooperative sensors is limited by the signal bandwidth. For transmission, mainly lower frequency bands are used, which are more narrowband than classical radar frequencies. To compensate for this resolution problem, the combination of a pseudo-noise coded pulse compression system with superresolution time of arrival estimation is proposed. Coded pulse compression allows secure and fast distance measurement in multi-user scenarios and can easily be adapted for data transmission purposes (Morhart and Biebl, 2009). Due to the lack of available signal bandwidth, measurement accuracy degrades, especially in multipath scenarios. Superresolution time of arrival algorithms can improve this behaviour by estimating the channel impulse response from a band-limited view of the channel. For the given test system, the implementation of a MUSIC algorithm permitted a distance resolution twice as good as that of standard pulse compression.
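    The coarse stage of such a system is matched-filter pulse compression: cross-correlate the received signal with the known PN code and take the peak lag as the time of arrival. A sample-level sketch of that stage (the MUSIC superresolution refinement is not shown):

    ```python
    def toa_by_pulse_compression(code, rx):
        """Matched-filter pulse compression: the lag with the peak
        cross-correlation against the known PN code is the sample-level
        time of arrival; superresolution methods then refine this estimate
        below the sample (i.e., bandwidth) limit."""
        n, m = len(rx), len(code)
        best_lag, best_c = 0, float("-inf")
        for lag in range(n - m + 1):
            c = sum(code[k] * rx[lag + k] for k in range(m))
            if c > best_c:
                best_lag, best_c = lag, c
        return best_lag
    ```

    Because the PN code's autocorrelation is sharply peaked, this stage is robust to other users' codes; its resolution, however, remains one sample, which is what motivates the MUSIC stage.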

  14. Students Left Behind: Measuring 10th to 12th Grade Student Persistence Rates in Texas High Schools

    PubMed Central

    Domina, Thurston; Ghosh-Dastidar, Bonnie; Tienda, Marta

    2012-01-01

    The No Child Left Behind Act requires states to publish high school graduation rates for public schools and the U.S. Department of Education is currently considering a mandate to standardize high school graduation rate reporting. However, no consensus exists among researchers or policy-makers about how to measure high school graduation rates. In this paper, we use longitudinal data tracking a cohort of students at 82 Texas public high schools to assess the accuracy and precision of three widely-used high school graduation rate measures: Texas’s official graduation rates, and two competing estimates based on publicly available enrollment data from the Common Core of Data. Our analyses show that these widely-used approaches yield inaccurate and highly imprecise estimates of high school graduation and persistence rates. We propose several guidelines for using existing graduation and persistence rate data and argue that a national effort to track students as they progress through high school is essential to reconcile conflicting estimates. PMID:23077375

  15. Students Left Behind: Measuring 10th to 12th Grade Student Persistence Rates in Texas High Schools.

    PubMed

    Domina, Thurston; Ghosh-Dastidar, Bonnie; Tienda, Marta

    2010-06-01

    The No Child Left Behind Act requires states to publish high school graduation rates for public schools and the U.S. Department of Education is currently considering a mandate to standardize high school graduation rate reporting. However, no consensus exists among researchers or policy-makers about how to measure high school graduation rates. In this paper, we use longitudinal data tracking a cohort of students at 82 Texas public high schools to assess the accuracy and precision of three widely-used high school graduation rate measures: Texas's official graduation rates, and two competing estimates based on publicly available enrollment data from the Common Core of Data. Our analyses show that these widely-used approaches yield inaccurate and highly imprecise estimates of high school graduation and persistence rates. We propose several guidelines for using existing graduation and persistence rate data and argue that a national effort to track students as they progress through high school is essential to reconcile conflicting estimates.
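The gap between a cohort-tracked persistence rate and an enrollment-ratio estimate of the kind computed from Common Core of Data counts can be illustrated with a toy example (hypothetical student IDs, not the paper's data):

```python
# 10th-grade cohort and the 12th-grade roster two years later.
students_grade10 = {"A", "B", "C", "D", "E"}
students_grade12 = {"A", "B", "C", "X", "Y"}   # X, Y moved in from elsewhere

# Cohort tracking: only students from the original cohort count.
tracked = len(students_grade10 & students_grade12) / len(students_grade10)

# Enrollment ratio: total 12th-grade count over 10th-grade count,
# inflated here by the in-migrants X and Y.
ratio = len(students_grade12) / len(students_grade10)

print(tracked, ratio)   # 0.6 vs 1.0
```

In-migration, grade retention, and transfers all move the enrollment ratio away from the true persistence rate, which is why the authors argue for longitudinal student tracking.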

  16. Estimate of Radiosonde Dry Bias From Far-Infrared Measurements on the Antarctic Plateau

    NASA Astrophysics Data System (ADS)

    Rizzi, R.; Maestri, T.; Arosio, C.

    2018-03-01

    The experimental data set of downwelling radiance spectra measured at the ground in clear conditions during 2013 by a Far-Infrared Fourier Transform Spectrometer at Dome-C, Antarctica, presented in Rizzi et al. (2016, https://doi.org/10.1002/2016JD025341), is used to estimate the effect of solar heating of the radiosonde humidity sensor, called dry bias. The effect is quite evident when comparing residuals for the austral summer and winter clear cases and can be modeled by an increase of the water vapor concentration at all levels by about 15%. Such an estimate has become possible only after a new version of the simulation code and spectroscopic data became available, which substantially improved the modeling of water vapor absorption in the far infrared. The negative yearly spectral bias reported in Rizzi et al. (2016, https://doi.org/10.1002/2016JD025341) is in fact greatly reduced when compared to the same measurement data set.

  17. Covariance Partition Priors: A Bayesian Approach to Simultaneous Covariance Estimation for Longitudinal Data.

    PubMed

    Gaskins, J T; Daniels, M J

    2016-01-02

    The estimation of the covariance matrix is a key concern in the analysis of longitudinal data. When data consist of multiple groups, it is often assumed that the covariance matrices are either equal across groups or completely distinct. We seek methodology to allow borrowing of strength across potentially similar groups to improve estimation. To that end, we introduce a covariance partition prior which proposes a partition of the groups at each measurement time. Groups in the same set of the partition share dependence parameters for the distribution of the current measurement given the preceding ones, and the sequence of partitions is modeled as a Markov chain to encourage similar structure at nearby measurement times. This approach additionally encourages a lower-dimensional structure of the covariance matrices by shrinking the parameters of the Cholesky decomposition toward zero. We demonstrate the performance of our model through two simulation studies and the analysis of data from a depression study. This article includes Supplementary Material available online.
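The Cholesky-based dependence parameters referred to in this record can be made concrete with a small sketch. This is the standard modified Cholesky decomposition for longitudinal covariances (hypothetical covariance matrix; not the authors' sampler):

```python
import numpy as np

# Covariance of three repeated measurements (hypothetical values).
sigma = np.array([[1.00, 0.50, 0.25],
                  [0.50, 1.00, 0.50],
                  [0.25, 0.50, 1.00]])

L = np.linalg.cholesky(sigma)
d = np.diag(L)
C = L / d                      # unit lower-triangular factor
D = np.diag(d ** 2)            # innovation (conditional) variances
T = np.linalg.inv(C)           # generalized autoregressive matrix

# T @ sigma @ T.T == D: the off-diagonal entries of T are (negatives
# of) the coefficients regressing each measurement on the preceding
# ones, which is exactly the parametrization shrunk toward zero.
print(np.allclose(T @ sigma @ T.T, D))
```

Shrinking entries of T toward zero pushes the fitted covariance toward a lower-dimensional (more nearly independent-increment) structure, while keeping D positive guarantees positive definiteness.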

  18. Estimation of the interior parameters from Mars nutations and from Doppler measurements

    NASA Astrophysics Data System (ADS)

    Yseboodt, M.; Rivoldini, A.; Le Maistre, S.; Dehant, V. M. A.

    2017-12-01

    The presence of a liquid core inside Mars changes the nutations: the nutation amplitudes can be resonantly amplified because of a free mode, called the free core nutation (FCN). We quantify how the internal structure, in particular the size of the core, affects the nutation amplifications and the Doppler observable between a Martian lander and the Earth. Present-day core size estimates suggest that the effect is largest on the prograde semi-annual and retrograde ter-annual nutations. We solve the inverse problem assuming a given precision on the nutation amplifications provided by an extensive set of geodesy measurements, and we estimate the precision on the core properties. Such measurements will be available in the near future thanks to the geodesy experiments RISE (InSight mission) and LaRa (ExoMars mission). We find that the precision on the core properties is very dependent on the proximity of the FCN period to the ter-annual forcing (-229 days) and on the assumed a priori precision on the nutations.

  19. Methods of Constructing a Blended Performance Function Suitable for Formation Flight

    NASA Technical Reports Server (NTRS)

    Ryan, Jack

    2017-01-01

    Two methods for constructing performance functions for formation flight for drag reduction, suitable for use with an extremum-seeking control system, are presented. The first method approximates an a priori measured or estimated drag-reduction performance function by combining real-time measurements of readily available parameters. The parameters are combined with weightings determined from a least-squares optimization to form a blended performance function.

  20. STX--Fortran-4 program for estimates of tree populations from 3P sample-tree-measurements

    Treesearch

    L. R. Grosenbaugh

    1967-01-01

    Describes how to use an improved and greatly expanded version of an earlier computer program (1964) that converts dendrometer measurements of 3P sample trees to population values in terms of whatever units the user desires. Many new options are available, including that of obtaining a product-yield and appraisal report based on regression coefficients supplied by the user....

  1. Measuring Compartment Size and Gas Solubility in Marine Mammals

    DTIC Science & Technology

    2015-09-30

    ...bends? Effect of diving behaviour and physiology on modelled gas exchange for three species: Ziphius cavirostris, Mesoplodon densirostris and Hyperoodon... Measuring Compartment Size and Gas Solubility in Marine... is to develop methods to estimate marine mammal tissue compartment sizes and tissue gas solubility. We aim to improve the data available for the

  2. Estimating the coverage of mental health programmes: a systematic review.

    PubMed

    De Silva, Mary J; Lee, Lucy; Fuhr, Daniela C; Rathod, Sujit; Chisholm, Dan; Schellenberg, Joanna; Patel, Vikram

    2014-04-01

    The large treatment gap for people suffering from mental disorders has led to initiatives to scale up mental health services. In order to track progress, estimates of programme coverage, and of changes in coverage over time, are needed. We performed a systematic review of mental health programme evaluations that assess coverage, measured either as the proportion of the target population in contact with services (contact coverage) or as the proportion of the target population who receive appropriate and effective care (effective coverage). We searched electronic databases and grey literature up to March 2013 and contacted experts in the field. Methods to estimate the numerator (service utilization) and the denominator (target population) were reviewed to explore methods which could be used in programme evaluations. We identified 15 735 unique records, of which only seven met the inclusion criteria. All studies reported contact coverage. No study explicitly measured effective coverage, but it was possible to estimate this for one study. In six studies the numerator of coverage, service utilization, was estimated using routine clinical information, whereas one study used a national community survey. The methods for estimating the denominator, the population in need of services, were more varied and included national prevalence surveys, case registers, and estimates from the literature. Very few coverage estimates are available. Coverage could be estimated at low cost by combining routine programme data with population prevalence estimates from national surveys.
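The low-cost approach suggested in the record's conclusion, combining routine programme data with survey prevalence, reduces to a simple calculation (all numbers hypothetical):

```python
# Contact coverage: people in contact with services over people in need.
population = 1_000_000
prevalence = 0.02                   # assumed 2% with the target disorder
treated = 6_000                     # from routine service-utilization data

in_need = population * prevalence   # denominator: population in need
contact_coverage = treated / in_need
print(contact_coverage)             # 0.3, i.e. 30% contact coverage
```

Effective coverage would further multiply the numerator by the fraction receiving appropriate and effective care, which is exactly the quantity none of the reviewed studies measured directly.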

  3. Intercomparison of 51 radiometers for determining global horizontal irradiance and direct normal irradiance measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habte, Aron; Sengupta, Manajit; Andreas, Afshin

    Accurate solar radiation measurements require properly installed and maintained radiometers with calibrations traceable to the World Radiometric Reference. This study analyzes the performance of 51 commercially available and prototype radiometers used for measuring global horizontal irradiance or direct normal irradiance. These include pyranometers, pyrheliometers, rotating shadowband radiometers, and a pyranometer with an internal shading mask deployed at the National Renewable Energy Laboratory's (NREL) Solar Radiation Research Laboratory. The radiometers in this study were deployed for one year (from April 1, 2011, through March 31, 2012), and their measurements were compared under clear-sky, partly cloudy, and mostly cloudy conditions to reference values of low estimated measurement uncertainty. The intent of this paper is to present a general overview of each radiometer's performance based on the instrumentation and environmental conditions available at NREL.

  4. What time is it? Deep learning approaches for circadian rhythms

    PubMed Central

    Agostinelli, Forest; Ceglia, Nicholas; Shahbaba, Babak; Sassone-Corsi, Paolo; Baldi, Pierre

    2016-01-01

    Motivation: Circadian rhythms date back to the origins of life, are found in virtually every species and every cell, and play fundamental roles in functions ranging from metabolism to cognition. Modern high-throughput technologies allow the measurement of concentrations of transcripts, metabolites and other species along the circadian cycle, creating novel computational challenges and opportunities, including the problems of inferring whether a given species oscillates in a circadian fashion or not, and inferring the time at which a set of measurements was taken. Results: We first curate several large synthetic and biological time series datasets containing labels for both periodic and aperiodic signals. We then use deep learning methods to develop and train BIO_CYCLE, a system to robustly estimate which signals are periodic in high-throughput circadian experiments, producing estimates of amplitudes, periods, phases, as well as several statistical significance measures. Using the curated data, BIO_CYCLE is compared to other approaches and shown to achieve state-of-the-art performance across multiple metrics. We then use deep learning methods to develop and train BIO_CLOCK to robustly estimate the time at which a particular single-time-point transcriptomic experiment was carried out. In most cases, BIO_CLOCK can reliably predict time, within approximately 1 h, using the expression levels of only a small number of core clock genes. BIO_CLOCK is shown to work reasonably well across tissue types, and often with only small degradation across conditions. BIO_CLOCK is used to annotate most mouse experiments found in the GEO database with an inferred time stamp. Availability and Implementation: All data and software are publicly available on the CircadiOmics web portal: circadiomics.igb.uci.edu/. Contacts: fagostin@uci.edu or pfbaldi@uci.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307647
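As a point of reference for the quantities such tools estimate, a classical cosinor fit recovers the amplitude and phase of a 24 h rhythm by linear least squares (synthetic, noise-free data; BIO_CYCLE itself uses deep networks, not this baseline):

```python
import numpy as np

t = np.arange(0, 48, 2.0)                       # sampling times, hours
y = 5 + 3 * np.cos(2 * np.pi * (t - 8) / 24)    # rhythm peaking at t = 8 h

# Linear model: y = m + a*cos(wt) + b*sin(wt), w = 2*pi/24.
X = np.column_stack([np.ones_like(t),
                     np.cos(2 * np.pi * t / 24),
                     np.sin(2 * np.pi * t / 24)])
m, a, b = np.linalg.lstsq(X, y, rcond=None)[0]

amplitude = np.hypot(a, b)                      # recovered amplitude
phase_h = (np.arctan2(b, a) * 24 / (2 * np.pi)) % 24   # recovered peak time

print(round(amplitude, 3), round(phase_h, 3))   # ~3.0 and ~8.0
```

With noisy, aperiodic, or non-sinusoidal signals this linear baseline degrades, which is the regime where the learned classifiers in the record are reported to help.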

  5. Estimation of Metabolism Characteristics for Heat-Injured Bacteria Using Dielectrophoretic Impedance Measurement Method

    NASA Astrophysics Data System (ADS)

    Amako, Eri; Enjoji, Takaharu; Uchida, Satoshi; Tochikubo, Fumiyoshi

    Constant monitoring and immediate control of fermentation processes are required for advanced quality preservation in the food industry. In the present work, simple estimation of the metabolic state of heat-injured Escherichia coli (E. coli) in a micro-cell was investigated using the dielectrophoretic impedance measurement (DEPIM) method. The temporal change in the conductance across the micro-gap (ΔG) was measured for various heat treatment temperatures. In addition, the dependence of enzyme activity, growth capacity and membrane condition of E. coli on heat treatment temperature was analyzed with conventional biological methods. Consequently, a quantitative correlation between ΔG and those biological properties was obtained. This result suggests that the DEPIM method can serve as an effective monitoring technique for complex changes in various biological states of microorganisms.

  6. Treatment effects model for assessing disease management: measuring outcomes and strengthening program management.

    PubMed

    Wendel, Jeanne; Dumitras, Diana

    2005-06-01

    This paper describes an analytical methodology for obtaining statistically unbiased outcomes estimates for programs in which participation decisions may be correlated with variables that impact outcomes. This methodology is particularly useful for intraorganizational program evaluations conducted for business purposes. In this situation, data is likely to be available for a population of managed care members who are eligible to participate in a disease management (DM) program, with some electing to participate while others eschew the opportunity. The most pragmatic analytical strategy for in-house evaluation of such programs is likely to be the pre-intervention/post-intervention design in which the control group consists of people who were invited to participate in the DM program, but declined the invitation. Regression estimates of program impacts may be statistically biased if factors that impact participation decisions are correlated with outcomes measures. This paper describes an econometric procedure, the Treatment Effects model, developed to produce statistically unbiased estimates of program impacts in this type of situation. Two equations are estimated to (a) estimate the impacts of patient characteristics on decisions to participate in the program, and then (b) use this information to produce a statistically unbiased estimate of the impact of program participation on outcomes. This methodology is well-established in economics and econometrics, but has not been widely applied in the DM outcomes measurement literature; hence, this paper focuses on one illustrative application.
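The selection bias that the Treatment Effects model is designed to correct can be seen in a toy simulation (all numbers hypothetical; this demonstrates the problem, not the two-equation estimator itself):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
severity = rng.standard_normal(n)                   # unobserved illness severity
enroll = (severity + rng.standard_normal(n)) > 0    # sicker members enroll more often
true_effect = -100.0                                # program lowers cost by $100
cost = (1000 + 300 * severity
        + true_effect * enroll
        + 50 * rng.standard_normal(n))

# Naive participant-vs-decliner comparison, as in a simple
# pre/post design with decliners as controls.
naive = cost[enroll].mean() - cost[~enroll].mean()
print(round(naive, 1))   # far from -100: selection masks the benefit
```

Because enrollment is correlated with severity, the naive difference mixes the program effect with the severity gap between groups; the two-equation approach models the participation decision first precisely to remove this confounding.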

  7. Using mixture tuned match filtering to measure changes in subpixel vegetation area in Las Vegas, Nevada

    NASA Astrophysics Data System (ADS)

    Brelsford, Christa; Shepherd, Doug

    2013-09-01

    In desert cities, securing sufficient water supply to meet the needs of both existing population and future growth is a complex problem with few easy solutions. Grass lawns are a major driver of water consumption, and accurate measurements of vegetation area are necessary to understand drivers of changes in household water consumption. Measuring vegetation change in a heterogeneous urban environment requires sub-pixel estimation of vegetation area. Mixture Tuned Match Filtering (MTMF) has been successfully applied to target detection for materials that only cover small portions of a satellite image pixel. There have been few successful applications of MTMF to fractional area estimation, despite theory that suggests feasibility. We use a ground truth dataset over ten times larger than that available for any previous MTMF application to estimate the bias between ground truth data and matched filter results. We find that the MTMF algorithm underestimates the fractional area of vegetation by 5-10%, and calculate that averaging over 20 to 30 pixels is necessary to correct this bias. We conclude that with a large ground truth dataset, using MTMF for fractional area estimation is possible when results can be estimated at a lower spatial resolution than the base image. When this method is applied to estimating vegetation area in Las Vegas, NV, spatial and temporal trends are consistent with expectations from known population growth and policy goals.
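The correct-then-average strategy the record describes can be sketched with simulated per-pixel estimates (bias, noise level, and counts are assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(1)
true_frac = 0.40                       # true vegetation fraction
bias = -0.07                           # assumed systematic underestimate
per_pixel = true_frac + bias + 0.15 * rng.standard_normal(3000)

# Remove the bias (estimated against ground truth), then average
# blocks of 25 pixels to beat down the per-pixel noise.
corrected = per_pixel - bias
blocks = corrected[:2975].reshape(-1, 25).mean(axis=1)

per_pixel_err = np.abs(per_pixel - true_frac).mean()
block_err = np.abs(blocks - true_frac).mean()
print(per_pixel_err, block_err)        # block averages are far closer
```

Averaging n pixels shrinks the random error roughly by a factor of sqrt(n), which is why results become usable only at a coarser spatial resolution than the base image.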

  8. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks.

    PubMed

    Rumschinski, Philipp; Borchers, Steffen; Bosio, Sandro; Weismantel, Robert; Findeisen, Rolf

    2010-05-25

    Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process and to benefit from the analytical tools at hand. In this work we present a set-based framework that makes it possible to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. The practicability of our approach is illustrated with two case studies. The first study shows that our approach can conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation and shows that the proposed method makes it possible to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates.

  9. Set-base dynamical parameter estimation and model invalidation for biochemical reaction networks

    PubMed Central

    2010-01-01

    Background Mathematical modeling and analysis have become, for the study of biological and cellular processes, an important complement to experimental research. However, the structural and quantitative knowledge available for such processes is frequently limited, and measurements are often subject to inherent and possibly large uncertainties. This results in competing model hypotheses, whose kinetic parameters may not be experimentally determinable. Discriminating among these alternatives and estimating their kinetic parameters is crucial to improve the understanding of the considered process and to benefit from the analytical tools at hand. Results In this work we present a set-based framework that makes it possible to discriminate between competing model hypotheses and to provide guaranteed outer estimates on the model parameters that are consistent with the (possibly sparse and uncertain) experimental measurements. This is obtained by means of exact proofs of model invalidity that exploit the polynomial/rational structure of biochemical reaction networks, and by making use of an efficient strategy to balance solution accuracy and computational effort. Conclusions The practicability of our approach is illustrated with two case studies. The first study shows that our approach can conclusively rule out wrong model hypotheses. The second study focuses on parameter estimation and shows that the proposed method makes it possible to evaluate the global influence of measurement sparsity, uncertainty, and prior knowledge on the parameter estimates. This can help in designing further experiments leading to improved parameter estimates. PMID:20500862

  10. Series resistance compensation for whole-cell patch-clamp studies using a membrane state estimator

    PubMed Central

    Sherman, AJ; Shrier, A; Cooper, E

    1999-01-01

    Whole-cell patch-clamp techniques are widely used to measure membrane currents from isolated cells. While suitable for a broad range of ionic currents, the series resistance (R(s)) of the recording pipette limits the bandwidth of the whole-cell configuration, making it difficult to measure rapid ionic currents. To increase bandwidth, it is necessary to compensate for R(s). Most methods of R(s) compensation become unstable at high bandwidth, making them hard to use. We describe a novel method of R(s) compensation that overcomes the stability limitations of standard designs. This method uses a state estimator, implemented with analog computation, to compute the membrane potential, V(m), which is then used in a feedback loop to implement a voltage clamp; we refer to this as state estimator R(s) compensation. To demonstrate the utility of this approach, we built an amplifier incorporating state estimator R(s) compensation. In benchtop tests, our amplifier showed significantly higher bandwidths and improved stability when compared with a commercially available amplifier. We demonstrated that state estimator R(s) compensation works well in practice by recording voltage-gated Na(+) currents under voltage-clamp conditions from dissociated neonatal rat sympathetic neurons. We conclude that state estimator R(s) compensation should make it easier to measure large rapid ionic currents with whole-cell patch-clamp techniques. PMID:10545359

  11. Crowdsourcing urban air temperatures through smartphone battery temperatures in São Paulo, Brazil

    NASA Astrophysics Data System (ADS)

    Droste, Arjan; Pape, Jan-Jaap; Overeem, Aart; Leijnse, Hidde; Steeneveld, Gert-Jan; Van Delden, Aarnout; Uijlenhoet, Remko

    2017-04-01

    Crowdsourcing as a method to obtain and apply vast datasets is rapidly becoming prominent in meteorology, especially for urban areas where traditional measurements are scarce. Earlier studies showed that smartphone battery temperature readings allow for estimating the daily and city-wide air temperature via a straightforward heat transfer model. This study advances these model estimations by studying spatially and temporally smaller scales. The accuracy of temperature retrievals as a function of the number of battery readings is also studied. An extensive dataset of over 10 million battery temperature readings is available for São Paulo (Brazil), for estimating hourly and daily air temperatures. The air temperature estimates are validated with air temperature measurements from a WMO station, an Urban Fluxnet site, and crowdsourced data from the private weather stations of 7 hobby meteorologists. On a daily basis, temperature estimates are good, and we show that they improve by optimizing model parameters at neighbourhood scales as categorized in Local Climate Zones. Temperature differences between Local Climate Zones can be distinguished from smartphone battery temperatures. When validating the model for hourly temperature estimates, initial results are poor, but they are vastly improved by using a diurnally varying parameter function in the heat transfer model rather than one fixed value for the entire day. The obtained results show the potential of large crowdsourced datasets in meteorological studies, and the value of smartphones as a measuring platform when routine observations are lacking.
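A minimal sketch of the inversion step, assuming a simple linear equilibrium T_batt ≈ a·T_air + b with made-up calibration constants (the published heat transfer model and its fitted parameters differ):

```python
import statistics

# Assumed calibration constants, fitted on days with reference data.
a, b = 0.8, 12.0

# Hypothetical daily sample of crowdsourced battery temperatures, deg C.
battery_readings = [31.8, 32.5, 30.9, 32.1, 31.6]

# Average many readings first (the crowd beats down per-phone noise),
# then invert the equilibrium relation to estimate air temperature.
t_batt_mean = statistics.mean(battery_readings)
t_air_est = (t_batt_mean - b) / a
print(round(t_air_est, 2))
```

The record's hourly-scale improvement corresponds to letting a and b vary over the day instead of holding them fixed, since phone usage and ambient coupling both have diurnal cycles.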

  12. Experimental FSO network availability estimation using interactive fog condition monitoring

    NASA Astrophysics Data System (ADS)

    Turán, Ján; Ovseník, Ľuboš

    2016-12-01

    Free Space Optics (FSO) is a license-free line-of-sight (LOS) telecommunication technology which offers full duplex connectivity. FSO uses infrared beams of light to provide optical broadband connections and can be installed literally in a few hours. Data rates range from several hundred Mb/s to several Gb/s, and link range is from several hundred metres up to several km. FSO link advantages: easy connection establishment, license-free communication, no excavation needed, highly secure and safe operation, through-window connectivity and single-customer service, and complementing fiber by accelerating the first and last mile. FSO link disadvantages: the transmission medium is air, weather and climate dependence, attenuation due to rain, snow and fog, scattering and absorption of the laser beam, building motion, and air pollution. In this paper, FSO availability evaluation is based on long-term measured data from a fog sensor developed and installed at the TUKE experimental FSO network on the TUKE campus, Košice, Slovakia. Our experimental FSO network has three links with different physical distances between the FSO heads. Weather conditions have a tremendous impact on FSO operation in terms of availability: FSO link availability is the percentage of time over a year that the FSO link is operational. It is necessary to evaluate the climate and weather at the actual geographical location where an FSO link is going to be mounted, and to determine the impact of light scattering, absorption, turbulence, and received optical power for the particular FSO link. Visibility has one of the most critical influences on the quality of an FSO optical transmission channel, and FSO link availability is usually estimated using visibility information collected from nearby airport weather stations. Raw data from the fog sensor (fog density, relative humidity, and temperature, each measured every ms) are collected and processed by the FSO Simulator software package developed at our Department. Based on these data, the availability of the FSO links and of the experimental FSO network was estimated for the years 2007 to 2015. An average FSO network availability of up to 98.3378% was measured (for a BER of 10^-9). From the experimental data, hybrid RF/FSO link availability was also evaluated. As the weather conditions for the FSO and RF links are complementary (FSO works well in rain and RF works well in fog), the long-term average availability of the hybrid FSO/RF system was much better, up to 99.9986%.
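The availability figures quoted in this record can be reproduced in form (not in value) by a short calculation. Outage hours and the RF availability below are assumptions, and the hybrid formula assumes independent outages, which the complementary weather sensitivity of the two links only approximates:

```python
# Availability: fraction of the year the link meets its BER target.
hours_per_year = 8760
fso_outage_h = 146.0                       # assumed hours below BER threshold
fso_avail = 1 - fso_outage_h / hours_per_year

# Hybrid FSO/RF link: an outage requires both links to fail at once.
rf_avail = 0.9992                          # assumed RF link availability
hybrid_avail = 1 - (1 - fso_avail) * (1 - rf_avail)

print(round(100 * fso_avail, 4), round(100 * hybrid_avail, 4))
```

Because fog (which degrades FSO) and rain (which degrades RF) rarely dominate at the same time, the joint-outage probability is small, which is how the hybrid system reaches the high availabilities the record reports.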

  13. Updated computations and estimates of streamflows tributary to Carson Valley, Douglas County, Nevada, and Alpine County, California, 1990-2002

    USGS Publications Warehouse

    Maurer, Douglas K.; Watkins, Sharon A.; Burrows, Robert L.

    2004-01-01

    Rapid population growth in Carson Valley has caused concern over the continued availability of water resources to sustain future growth. The U.S. Geological Survey, in cooperation with Douglas County, began a study to update estimates of water-budget components in Carson Valley for current climatic conditions. Data collected at 19 sites included 9 continuous records of tributary streamflows, 1 continuous record of outflow from the valley, and 408 measurements of 10 perennially flowing but ungaged drainages. These data were compiled and analyzed to provide updated computations and estimates of streamflows tributary to Carson Valley, 1990-2002. Mean monthly and annual flows were computed from continuous records for the period 1990-2002 for five streams, and for the period available, 1990-97, for four streams. Daily mean flow from ungaged drainages was estimated using multi-variate regressions of individual discharge measurements against measured flow at selected continuous gages. From the estimated daily mean flows, monthly and annual mean flows were calculated from 1990 to 2002. These values were used to compute estimates of mean monthly and annual flows for the ungaged perennial drainages. Using the computed and estimated mean annual flows, annual unit-area runoff was computed for the perennial drainages, which ranged from 0.30 to 2.02 feet. For the period 1990-2002, estimated inflow of perennial streams tributary to Carson Valley totaled about 25,900 acre-feet per year. Inflow computed from gaged perennial drainages totaled 10,300 acre-feet per year, and estimated inflow from ungaged perennial drainages totaled 15,600 acre-feet per year. The annual flow of perennial streams ranges from 4,210 acre-feet at Clear Creek to 450 acre-feet at Stutler Canyon Creek. Differences in unit-area runoff and in the seasonal timing of flow likely are caused by differences in geologic setting, altitude, slope, or aspect of the individual drainages. 
The remaining drainages are ephemeral and supply inflow to the valley floor only during spring runoff in wet years or during large precipitation events. Annual unit-area runoff for the perennial drainages was used to estimate inflow from ephemeral drainages totaling 11,700 acre-feet per year. The total estimate of perennial and ephemeral tributary inflows to Carson Valley is 37,600 acre-feet per year. Gaged perennial inflow is 27 percent of the total, ungaged perennial inflow is 42 percent, and ephemeral inflow is 31 percent. The estimate is from 50 to 60 percent greater than three previous estimates made for a similar area, and close to two other estimates made for larger areas. The combined uncertainty of the estimates totals about 33 percent of the total inflow, or about 12,000 acre-feet per year.
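The unit-area runoff values in this record follow from a one-line division of annual flow by drainage area. The drainage area below is a hypothetical figure; only the Clear Creek annual flow comes from the record:

```python
# Unit-area runoff: depth of water (feet) spread over the drainage.
annual_flow_acre_ft = 4210.0     # Clear Creek mean annual flow, acre-feet
drainage_area_acres = 10_000.0   # assumed drainage area, acres

# acre-feet / acres leaves feet of runoff depth.
unit_area_runoff_ft = annual_flow_acre_ft / drainage_area_acres
print(unit_area_runoff_ft)
```

Applying the same ratio in reverse (multiplying a perennial drainage's unit-area runoff by an ungaged ephemeral drainage's area) is how the ephemeral inflow totals were estimated.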

  14. [How has social status been measured in health research? A review of the international literature].

    PubMed

    Cabieses, Báltica; Zitko, Pedro; Pinedo, Rafael; Espinoza, Manuel; Albor, Christo

    2011-06-01

    Social status (SS) is a multidimensional variable that is used widely in health research. There is no single optimal method for estimating social status. Rather, in each case the measurement may vary depending on the research subject, the base theory considered, the population of interest, the event of interest and, in some cases, the available information. This literature review develops the following topics related to SS measurement, based on the international scientific sources available electronically: i) identification of the role of SS in the context of social epidemiology research; ii) description of the principal indicators and methodological approaches used to measure SS in health research; and iii) analysis of the distinct difficulties of SS measurement in specific populations such as ethnic groups, women, children, and the elderly, and in rural vs. urban contexts. Finally, the review describes some of the implications of SS measurement in Latin American countries.

  15. Flash flood hazard assessment through modelling in small semi-arid watersheds. The example of the Beni Mellal watershed in Morocco

    NASA Astrophysics Data System (ADS)

    Werren, G.; Balin, D.; Reynard, E.; Lane, S. N.

    2012-04-01

    Flood modelling is essential for flood hazard assessment. Modelling becomes a challenge in small, ungauged watersheds prone to flash floods, like the ones draining the town of Beni Mellal (Morocco). Four temporary streams meet in the urban area of Beni Mellal, producing sheet floods every year that are harmful to infrastructure and to people. Here, statistical analysis may not give realistic results, but the study of these repeated real flash flood events may provide a better understanding of watershed-specific hydrology. This study is part of a larger cooperation project between Switzerland and Morocco aimed at knowledge transfer in disaster risk reduction, especially through hazard mapping and land-use planning related to the implementation of hazard maps. Hydrologic and hydraulic modelling was carried out to obtain hazard maps. An important point was to find open-source data and methods that could still produce a realistic model for the area concerned, in order to provide easy-to-use, cost-effective tools for risk management in developing countries like Morocco, where routine data collection is largely lacking. The data used for modelling are the Web-available TRMM 3-hour 0.25-degree rainfall data provided by the Tropical Rainfall Measurement Mission (TRMM). Hydrologic modelling for discharge estimation was undertaken using methods available in the HEC-HMS software provided by the US Army Corps of Engineers (USACE). Several transfer models were used, so as to choose the best-suited method available. As no measured flow data were available, no model calibration was possible; instead, a one-at-a-time sensitivity analysis was performed on the chosen parameters in order to detect their influence on the results.
The most important verification method, however, remained field observation, through post-flood field campaigns aimed at mapping water surfaces and depths in the flooded areas, as well as river-section monitoring, where rough discharge estimates could be obtained using empirical equations. Another information source was local knowledge, as people could give a rough estimate of the time of concentration by describing flood evolution. Finally, hydraulic modelling of the flooded areas in the urban perimeter was performed using the USACE HEC-RAS® software. A specific challenge at this stage was field morphology, as the flooded areas form large alluvial fans, with very different flood behaviour compared to flood plains. Model "calibration" at this stage was undertaken using the mapped water surfaces and depths. Great care was taken in field geometry design, where field observations, measured cross sections and field images were used to improve the existing DTM data. The model included protection dikes already built by local authorities in their flood-fighting effort. Because of flash-flood-specific behaviour, only maximal flooded surfaces and flow velocities were simulated, through steady-flow analysis in HEC-RAS. The discharge estimates obtained for the chosen event were comparable to the 10-year return period estimated by the watershed authorities. Times of concentration correspond to this previous estimation and to local people's descriptions. The modelled water surfaces reflect field reality. Flash-flood modelling demands extensive knowledge of the studied area in order to compensate for data scarcity. However, more precise data, like the radar rainfall estimates available in Morocco, would definitely improve outputs. In this perspective, better data access at the local level and good use of the available methods could benefit the disaster risk reduction effort as a whole.

  16. Estimating the snow water equivalent on a glacierized high elevation site (Forni Glacier, Italy)

    NASA Astrophysics Data System (ADS)

    Senese, Antonella; Maugeri, Maurizio; Meraldi, Eraldo; Verza, Gian Pietro; Azzoni, Roberto Sergio; Compostella, Chiara; Diolaiuti, Guglielmina

    2018-04-01

    We present and compare 11 years of snow data (snow depth and snow water equivalent, SWE) measured by an automatic weather station (AWS) and corroborated by data from field campaigns on the Forni Glacier in Italy. The aim of the analysis is to estimate the SWE of new snowfall and the annual SWE peak based on the average density of the new snow at the site (corresponding to the snowfall during the standard observation period of 24 h) and automated snow depth measurements. The results indicate that the daily SR50 sonic ranger measurements and the available snow pit data can be used to estimate the mean new snow density value at the site, with an error of ±6 kg m-3. Once the new snow density is known, the sonic ranger makes it possible to derive SWE values with an RMSE of 45 mm water equivalent (compared with snow pillow measurements), which is about 8% of the total SWE yearly average. The methodology we present is therefore attractive for remote locations such as glaciers or high alpine regions, as it makes it possible to estimate the total SWE using a relatively inexpensive, low-power, low-maintenance, and reliable instrument such as the sonic ranger.
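The conversion at the heart of this method is a product: new-snow SWE = snow-depth increment × mean new-snow density (in mm water equivalent, since 1 kg m-2 of water equals 1 mm w.e.). A minimal sketch, with an illustrative density value rather than the site-calibrated figure from the paper:

```python
def new_snow_swe(depths_m, density_kg_m3=140.0):
    """Daily new-snow SWE (mm w.e.) from daily snow-depth readings (m).

    Only positive depth increments count as new snowfall; settling or
    ablation (negative changes) contribute zero. density_kg_m3 is an
    illustrative mean new-snow density, not the site-calibrated value.
    """
    swe = []
    for prev, curr in zip(depths_m, depths_m[1:]):
        increment = max(curr - prev, 0.0)          # m of new snow
        swe.append(increment * density_kg_m3)      # m * kg m^-3 = kg m^-2 = mm w.e.
    return swe

daily_depth = [1.20, 1.20, 1.35, 1.33, 1.50]       # hypothetical sonic-ranger readings (m)
print([round(v, 1) for v in new_snow_swe(daily_depth)])  # -> [0.0, 21.0, 0.0, 23.8]
```

Summing the daily values between the season start and the date of the annual maximum gives an estimate of the SWE peak, which is the quantity the paper compares against snow-pillow data.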

  17. Time and temperature dependent modulus of pyrrone and polyimide moldings

    NASA Technical Reports Server (NTRS)

    Lander, L. L.

    1972-01-01

    A method is presented by which the modulus obtained from a stress relaxation test can be used to estimate the modulus that would be obtained from a sonic vibration test. The method was applied to stress relaxation, sonic vibration, and high-speed stress-strain data obtained for a flexible epoxy. The modulus as measured by the three test methods was identical for identical test times, and a change of test temperature was equivalent to a shift along the logarithmic time scale. An estimate was then made of the dynamic modulus of moldings of two Pyrrones and two polyimides, using stress relaxation data and the method of analysis developed for the epoxy. Over the common temperature range (350 to 500 K) in which data from both types of tests were available, the estimated dynamic modulus values differed by only a few percent from the measured values. It is therefore concluded that the estimated dynamic modulus values are also accurate over the 500 to 700 K temperature range.

  18. On the angular error of intensity vector based direction of arrival estimation in reverberant sound fields.

    PubMed

    Levin, Dovid; Habets, Emanuël A P; Gannot, Sharon

    2010-10-01

    An acoustic vector sensor provides measurements of both the pressure and particle velocity of a sound field in which it is placed. These measurements are vectorial in nature and can be used for the purpose of source localization. A straightforward approach towards determining the direction of arrival (DOA) utilizes the acoustic intensity vector, which is the product of pressure and particle velocity. The accuracy of an intensity vector based DOA estimator in the presence of noise has been analyzed previously. In this paper, the effects of reverberation upon the accuracy of such a DOA estimator are examined. It is shown that particular realizations of reverberation differ from an ideal isotropically diffuse field, and induce an estimation bias which is dependent upon the room impulse responses (RIRs). The limited knowledge available pertaining to the RIRs is expressed statistically by employing the diffuse qualities of reverberation to extend Polack's statistical RIR model. Expressions for evaluating the typical bias magnitude as well as its probability distribution are derived.
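To make the intensity-vector approach concrete, here is a simplified 2-D, noise- and reverberation-free sketch (an illustration, not the paper's estimator): the active intensity components are the time averages of the pressure-velocity products, and the azimuth follows from their arctangent. The plane-wave test signal and the sign convention are assumptions for illustration.

```python
import math

def intensity_doa(p, vx, vy):
    """Azimuth (rad) of the time-averaged acoustic intensity vector from
    co-located pressure samples p and particle-velocity samples (vx, vy).

    The active intensity components are mean(p*vx) and mean(p*vy); whether
    this azimuth is the source direction or its opposite depends on the
    velocity sign convention adopted.
    """
    n = len(p)
    ix = sum(pi * vi for pi, vi in zip(p, vx)) / n
    iy = sum(pi * vi for pi, vi in zip(p, vy)) / n
    return math.atan2(iy, ix)

# Synthetic plane wave travelling at azimuth 0.7 rad (velocity in phase with pressure):
theta = 0.7
p = [math.cos(0.05 * n) for n in range(2000)]
vx = [pi * math.cos(theta) for pi in p]
vy = [pi * math.sin(theta) for pi in p]
print(round(intensity_doa(p, vx, vy), 3))  # -> 0.7
```

In the noise-free, anechoic case the estimate is exact; the paper's contribution is characterizing the bias that particular reverberation realizations add to these averaged products.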

  19. Source term estimation of radioxenon released from the Fukushima Dai-ichi nuclear reactors using measured air concentrations and atmospheric transport modeling.

    PubMed

    Eslinger, P W; Biegalski, S R; Bowyer, T W; Cooper, M W; Haas, D A; Hayes, J C; Hoffman, I; Korpach, E; Yi, J; Miley, H S; Rishel, J P; Ungar, K; White, B; Woods, V T

    2014-01-01

    Systems designed to monitor airborne radionuclides released from underground nuclear explosions detected radioactive fallout across the northern hemisphere resulting from the Fukushima Dai-ichi Nuclear Power Plant accident in March 2011. Sampling data from multiple International Monitoring System locations are combined with atmospheric transport modeling to estimate the magnitude and time sequence of releases of ¹³³Xe. Modeled dilution factors at five different detection locations were combined with 57 atmospheric concentration measurements of ¹³³Xe taken from March 18 to March 23 to estimate the source term. This analysis suggests that 92% of the 1.24 × 10¹⁹ Bq of ¹³³Xe present in the three operating reactors at the time of the earthquake was released to the atmosphere over a 3-day period. An uncertainty analysis bounds the release estimates to 54-129% of the available ¹³³Xe inventory. Copyright © 2013 Elsevier Ltd. All rights reserved.
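The inversion step pairs modelled dilution factors with measured concentrations. A toy least-squares version for a single release magnitude (the one-parameter model and all numbers are illustrative assumptions; the study inverted a time sequence of releases across five stations):

```python
def estimate_release(dilution_factors, measured_bq_m3):
    """Least-squares estimate of a single release magnitude Q (Bq), given the
    linear model measured_i ~ dilution_i * Q; minimises the sum of squared
    residuals, giving Q = sum(d*c) / sum(d*d)."""
    num = sum(d * c for d, c in zip(dilution_factors, measured_bq_m3))
    den = sum(d * d for d in dilution_factors)
    return num / den

# Hypothetical modelled dilution factors (m^-3) and synthetic noise-free observations:
true_q = 1.14e19                                   # Bq, illustrative release magnitude
dilution = [2.0e-19, 5.0e-19, 1.2e-18, 8.0e-19]
measured = [d * true_q for d in dilution]          # Bq m^-3
print(estimate_release(dilution, measured))        # recovers true_q (no noise)
```

With noisy data the same formula returns the best-fit magnitude, and resampling the residuals is one simple way to bound the estimate, in the spirit of the uncertainty analysis mentioned above.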

  20. In the Blink of an Eye: Relating Positive-Feedback Sensitivity to Striatal Dopamine D2-Like Receptors through Blink Rate

    PubMed Central

    Groman, Stephanie M.; James, Alex S.; Seu, Emanuele; Tran, Steven; Clark, Taylor A.; Harpster, Sandra N.; Crawford, Maverick; Burtner, Joanna Lee; Feiler, Karen; Roth, Robert H.; Elsworth, John D.; London, Edythe D.

    2014-01-01

    For >30 years, positron emission tomography (PET) has proven to be a powerful approach for measuring aspects of dopaminergic transmission in the living human brain; this technique has revealed important relationships between dopamine D2-like receptors and dimensions of normal behavior, such as human impulsivity, and psychopathology, particularly behavioral addictions. Nevertheless, PET provides an indirect estimate that lacks cellular and functional resolution and, in some cases, is not entirely pharmacologically specific. To identify the relationships between PET estimates of D2-like receptor availability and direct in vitro measures of receptor number, affinity, and function, we conducted neuroimaging and behavioral and molecular pharmacological assessments in a group of adult male vervet monkeys. Data gathered from these studies indicate that variation in D2-like receptor PET measurements is related to reversal-learning performance and sensitivity to positive feedback and is associated with in vitro estimates of the density of functional dopamine D2-like receptors. Furthermore, we report that a simple behavioral measure, eyeblink rate, reveals novel and crucial links between neuroimaging assessments and in vitro measures of dopamine D2 receptors. PMID:25339755

  1. BOREAS HYD-3 Subcanopy Meteorological Measurements

    NASA Technical Reports Server (NTRS)

    Hardy, Janet P.; Hall, Forrest G. (Editor); Knapp, David E. (Editor); Davis, Robert E.; Smith, David E. (Technical Monitor)

    2000-01-01

    The Boreal Ecosystem-Atmosphere Study (BOREAS) Hydrology (HYD)-3 team collected several data sets related to the hydrology of forested areas. This data set includes measurements of wind speed and direction; air temperature; relative humidity; and canopy, trunk, and snow surface temperatures within three forest types. The data were collected in the southern study area at the Old Jack Pine site (SSA-OJP) in 1994, and at the Old Black Spruce (SSA-OBS) and Old Aspen (SSA-OA) sites in 1996. Measurements were taken for three days in 1994 and four days at each site in 1996. These measurements were intended to be short term, to allow the relationship between subcanopy measurements and those collected above the forest canopy to be determined. The subcanopy estimates of wind speed were used in a snowmelt model to help predict the timing of snow ablation. The data are available in tabular ASCII files. The subcanopy meteorological measurement data are available from the Earth Observing System Data and Information System (EOSDIS) Oak Ridge National Laboratory (ORNL) Distributed Active Archive Center (DAAC). The data files are available on a CD-ROM (see document number 20010000884).

  2. Rain attenuation studies from radiometric and rain DSD measurements at two tropical locations

    NASA Astrophysics Data System (ADS)

    Halder, Tuhina; Adhikari, Arpita; Maitra, Animesh

    2018-05-01

    Efficient use of satellite communication in tropical regions demands proper characterization of rain attenuation, particularly in view of the available popular propagation models, which are mostly based on temperate climatic data. Thus rain attenuation at frequencies of 22.234, 23.834 and 31.4/30 GHz over two tropical locations, Kolkata (22.57°N, 88.36°E, India) and Belem (1.45°S, 48.49°W, Brazil), has been estimated for the years 2010 and 2011, respectively. The estimation has been done utilizing ground-based disdrometer observations and radiometric measurements over an Earth-space path. The results show that rain attenuation estimates from radiometric data are reliable only at low rain rates (<30 mm/h). However, the rain attenuation estimates from disdrometer measurements show good agreement with the ITU-R model, even at high rain rates (up to 100 mm/h). Despite significant variability in terms of drop size distribution (DSD), the attenuation values calculated from DSD data (disdrometer measurements) at Kolkata and Belem differ only slightly for rain rates below 30 mm/h. However, the attenuation values obtained from radiometric measurements at the two places show significant deviations, ranging from 0.54 dB to 3.2 dB up to a rain rate of 30 mm/h, on account of the different rain heights, mean atmospheric temperatures and climatology of the two locations.
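Rain-rate-to-attenuation conversion of the kind compared against the ITU-R model above is commonly expressed as a power law, gamma = k * R**alpha (dB/km), with k and alpha depending on frequency and polarization. A minimal sketch; the coefficient values below are illustrative placeholders, not the tabulated ITU-R P.838 values for the frequencies in this study:

```python
def specific_attenuation_db_km(rain_rate_mm_h, k, alpha):
    """Power-law specific attenuation (dB/km) from rain rate (mm/h)."""
    return k * rain_rate_mm_h ** alpha

def path_attenuation_db(rain_rate_mm_h, effective_path_km, k, alpha):
    """Total attenuation (dB) over an effective slant-path length (km)."""
    return specific_attenuation_db_km(rain_rate_mm_h, k, alpha) * effective_path_km

# Illustrative placeholder coefficients for a Ka-band-like frequency:
k, alpha = 0.2, 1.0
print(path_attenuation_db(30.0, 2.0, k, alpha))  # -> 12.0
```

The disdrometer route described in the abstract is finer-grained: it integrates extinction over the measured drop size distribution rather than using rain rate alone, which is why it tracks the ITU-R model well even above 30 mm/h.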

  3. Soil Bulk Density by Soil Type, Land Use and Data Source: Putting the Error in SOC Estimates

    NASA Astrophysics Data System (ADS)

    Wills, S. A.; Rossi, A.; Loecke, T.; Ramcharan, A. M.; Roecker, S.; Mishra, U.; Waltman, S.; Nave, L. E.; Williams, C. O.; Beaudette, D.; Libohova, Z.; Vasilas, L.

    2017-12-01

    An important part of SOC stock and pool assessment is the estimation and application of bulk density. While the concept of bulk density is relatively simple (the mass of soil in a given volume), in practice bulk density can be difficult to measure in soils due to logistical and methodological constraints. While many estimates of SOC pools use legacy data, few concerted efforts have been made to assess the process used to convert laboratory carbon concentration measurements and bulk density collection into volumetrically based SOC estimates. The methodologies used are particularly sensitive in wetlands and organic soils with high amounts of carbon and very low bulk densities. We will present an analysis across four databases: NCSS, the National Cooperative Soil Survey Characterization dataset; RaCA, the Rapid Carbon Assessment sample dataset; NWCA, the National Wetland Condition Assessment; and ISCN, the International Soil Carbon Network. The relationship between bulk density and soil organic carbon will be evaluated by dataset and land use/land cover information. Prediction methods (both regression and machine learning) will be compared and contrasted across datasets and available input information. The assessment and application of bulk density, including modeling, aggregation and error propagation, will be evaluated. Finally, recommendations will be made about both the use of new data in soil survey products (such as SSURGO) and the use of that information as legacy data in SOC pool estimates.
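The conversion the abstract alludes to, from a laboratory carbon concentration and a bulk density to a volumetric SOC stock, is a simple product, which is why bulk-density errors propagate directly (1:1) into SOC estimates. A sketch with illustrative values:

```python
def soc_stock_kg_m2(bulk_density_g_cm3, depth_m, oc_percent):
    """SOC stock per unit area: bulk density x layer depth x carbon mass fraction.

    bulk_density_g_cm3: g/cm^3 (numerically equal to Mg/m^3)
    depth_m:            layer thickness in metres
    oc_percent:         organic carbon, % by mass
    Returns kg C per m^2 for the layer.
    """
    bd_kg_m3 = bulk_density_g_cm3 * 1000.0
    return bd_kg_m3 * depth_m * (oc_percent / 100.0)

# A typical mineral soil vs. a low-BD, high-C wetland soil, both over 0-30 cm:
print(round(soc_stock_kg_m2(1.3, 0.3, 2.0), 2))   # -> 7.8 kg C m^-2
print(round(soc_stock_kg_m2(0.2, 0.3, 40.0), 2))  # -> 24.0 kg C m^-2
```

The second case illustrates the abstract's point about organic soils: with such low bulk densities, a small absolute BD error is a large relative error, and it scales the SOC stock proportionally.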

  4. Decadal variations in atmospheric water vapor time series estimated using GNSS, ERA-Interim, and synoptic data

    NASA Astrophysics Data System (ADS)

    Alshawaf, Fadwa; Dick, Galina; Heise, Stefan; Balidakis, Kyriakos; Schmidt, Torsten; Wickert, Jens

    2017-04-01

    Ground-based GNSS (Global Navigation Satellite Systems) have been used efficiently since the 1990s as a meteorological observing system. Recently, scientists have used GNSS time series of precipitable water vapor (PWV) for climate research, although these series may not yet be sufficiently long. In this work, we compare the trend estimated from GNSS time series with those estimated from European Centre for Medium-Range Weather Forecasts reanalysis (ERA-Interim) data and meteorological measurements. We aim at evaluating climate evolution in Central Europe by monitoring different atmospheric variables such as temperature and PWV. PWV time series were obtained by three methods: 1) estimated from ground-based GNSS observations using the method of precise point positioning, 2) inferred from ERA-Interim data, and 3) determined from daily surface measurements of temperature and relative humidity. The other variables are available from surface meteorological stations or taken from ERA-Interim. The PWV trend component estimated from GNSS data correlates strongly (>70%) with those estimated from the other data sets. The linear trend is estimated by straight-line fitting over 30 years of seasonally adjusted PWV time series obtained from the meteorological measurements. The results show a positive trend in the PWV time series, with an increase of 0.2-0.7 mm/decade and a mean standard deviation of 0.016 mm/decade. In this paper, we present the results at three GNSS stations. The temporal increase of the PWV correlates with the temporal increase in temperature.
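The straight-line fit described above is an ordinary-least-squares slope over the seasonally adjusted series, scaled to mm/decade. A self-contained sketch on a synthetic series (the trend value and years are assumptions for illustration):

```python
def ols_slope(t_years, values):
    """Ordinary least-squares slope of `values` against time (units per year)."""
    n = len(t_years)
    mt = sum(t_years) / n
    mv = sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(t_years, values))
    den = sum((t - mt) ** 2 for t in t_years)
    return num / den

# 30-year synthetic seasonally adjusted PWV series with a 0.05 mm/yr trend:
years = [1986 + i for i in range(30)]
pwv_mm = [14.0 + 0.05 * i for i in range(30)]
trend_per_decade = 10.0 * ols_slope(years, pwv_mm)
print(round(trend_per_decade, 3))  # -> 0.5 (mm/decade)
```

On real data the seasonal cycle must be removed first (e.g. by subtracting monthly climatology), otherwise the annual oscillation inflates the slope's standard error.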

  5. The eGFR-C study: accuracy of glomerular filtration rate (GFR) estimation using creatinine and cystatin C and albuminuria for monitoring disease progression in patients with stage 3 chronic kidney disease--prospective longitudinal study in a multiethnic population.

    PubMed

    Lamb, Edmund J; Brettell, Elizabeth A; Cockwell, Paul; Dalton, Neil; Deeks, Jon J; Harris, Kevin; Higgins, Tracy; Kalra, Philip A; Khunti, Kamlesh; Loud, Fiona; Ottridge, Ryan S; Sharpe, Claire C; Sitch, Alice J; Stevens, Paul E; Sutton, Andrew J; Taal, Maarten W

    2014-01-14

    Uncertainty exists regarding the optimal method to estimate glomerular filtration rate (GFR) for disease detection and monitoring. Widely used GFR estimates have not been validated in British ethnic minority populations. Iohexol-measured GFR will be the reference against which each estimating equation will be compared. The estimating equations will be based upon serum creatinine and/or cystatin C. The eGFR-C study has 5 components: 1) A prospective longitudinal cohort study of 1300 adults with stage 3 chronic kidney disease followed for 3 years with reference (measured) GFR and test (estimated GFR [eGFR] and urinary albumin-to-creatinine ratio) measurements at baseline and 3 years. Test measurements will also be undertaken every 6 months. The study population will include a representative sample of South-Asians and African-Caribbeans. People with diabetes and proteinuria (ACR ≥30 mg/mmol) will comprise 20-30% of the study cohort. 2) A sub-study of patterns of disease progression of 375 people (125 each of Caucasian, Asian and African-Caribbean origin; in each case containing subjects at high and low risk of renal progression). Additional reference GFR measurements will be undertaken after 1 and 2 years to enable a model of disease progression and error to be built. 3) A biological variability study to establish reference change values for reference and test measures. 4) A modelling study of the performance of monitoring strategies on detecting progression, utilising estimates of accuracy, patterns of disease progression and estimates of measurement error from studies 1), 2) and 3). 5) A comprehensive cost database for each diagnostic approach will be developed to enable cost-effectiveness modelling of the optimal strategy. The performance of the estimating equations will be evaluated by assessing bias, precision and accuracy. 
Data will be modelled as a linear function of time, utilising all available (maximum 7) time points, and compared with the difference between baseline and final reference values. The percentage of participants demonstrating large error with the respective estimating equations will be compared. The predictive value of GFR estimates and the albumin-to-creatinine ratio will be compared between subjects who do or do not show progressive kidney function decline. The eGFR-C study will provide evidence to inform the optimal GFR estimate to be used in clinical practice. ISRCTN42955626.

  6. How long do patients with chronic disease expect to live? A systematic review of the literature

    PubMed Central

    Salem, Joseph

    2016-01-01

    Objective To systematically identify and summarise the literature on perceived life expectancy among individuals with non-cancer chronic disease. Setting Published and grey literature up to and including September 2016 where adults with non-cancer chronic disease were asked to estimate their own life expectancy. Participants From 6837 screened titles, 9 articles were identified that met prespecified criteria for inclusion. Studies came from the UK, Netherlands and USA. A total of 729 participants were included (heart failure (HF) 573; chronic obstructive pulmonary disease (COPD) 89; end-stage renal failure 62; chronic kidney disease (CKD) 5). No papers reporting on other lung diseases, neurodegenerative disease or cirrhosis were found. Primary and secondary outcome measures All measures of self-estimated life expectancy were accepted. Self-estimated life expectancy was compared, where available, with observed survival, physician-estimated life expectancy and model-estimated life expectancy. Meta-analysis was not conducted due to the heterogeneity of the patient groups and study methodologies. Results Among patients with HF, median self-estimated life expectancy was 40% longer than predicted by a validated model. Outpatients receiving haemodialysis were more optimistic about prognosis than their nephrologists and overestimated their chances of surviving 5 years. Patients with HF and COPD were approximately three times more likely to die in the next year than they predicted. Data available for patients with CKD were of insufficient quality to draw conclusions. Conclusions Individuals with chronic disease may have unrealistically optimistic expectations of their prognosis. More research is needed to understand how perceived life expectancy affects behaviour. Meanwhile, clinicians should attempt to identify each patient's prognostic preferences and provide information in a way that they can understand and use to inform their decisions. 
Trial registration number CRD42015020732. PMID:28039288

  7. No estuarine intertidal bathymetry? No worries! Estimating intertidal depth contours from readily available GIS data

    EPA Science Inventory

    The importance of littoral elevation to the distribution of intertidal species has long been a cornerstone of estuarine ecology and its historical importance to navigation cannot be understated. However, historically, intertidal elevation measurements have been sparse likely due ...

  8. Measurement of ammonia emissions from tropical seabird colonies

    NASA Astrophysics Data System (ADS)

    Riddick, S. N.; Blackall, T. D.; Dragosits, U.; Daunt, F.; Braban, C. F.; Tang, Y. S.; MacFarlane, W.; Taylor, S.; Wanless, S.; Sutton, M. A.

    2014-06-01

    The excreta (guano) of seabirds at their breeding colonies represents a notable source of ammonia (NH3) emission to the atmosphere, with effects on surrounding ecosystems through nitrogen compounds being thereby transported from sea to land. Previous measurements in temperate UK conditions quantified emission hotspots and allowed preliminary global upscaling. However, thermodynamic processes and water availability limit NH3 formation from guano, which suggests that the proportion of excreted nitrogen that volatilizes as NH3 may potentially be higher at tropical seabird colonies than at similar colonies in temperate or sub-polar regions. To investigate such differences, we measured NH3 concentrations and environmental conditions at two tropical seabird colonies during the breeding season: a colony of 20,000 tern spp. and noddies on Michaelmas Cay, Great Barrier Reef, and a colony of 200,000 Sooty terns on Ascension Island, Atlantic Ocean. At both sites, time-integrated NH3 concentrations and meteorological parameters were measured. In addition, at Ascension Island, semi-continuous hourly NH3 concentrations and micrometeorological parameters were measured throughout the campaign. Ammonia emissions, quantified using a backwards Lagrangian atmospheric dispersion model, were estimated at 21.8 μg m-2 s-1 and 18.9 μg m-2 s-1 from Michaelmas Cay and Ascension Island, respectively. High-temporal-resolution NH3 data at Ascension Island indicated peak hourly emissions of up to 377 μg NH3 m-2 s-1. The estimated percentage of total guano nitrogen volatilized was 67% at Michaelmas Cay and 32% at Ascension Island, with the larger value at the former site attributed to higher water availability. These values are much larger than published data for sub-polar locations, pointing to a substantial climatic dependence of atmospheric NH3 emission from seabird colonies.

  9. Validity and reliability of the abdominal test and evaluation systems tool (ABTEST) to accurately measure abdominal force.

    PubMed

    Glenn, Jordan M; Galey, Madeline; Edwards, Abigail; Rickert, Bradley; Washington, Tyrone A

    2015-07-01

    Ability to generate force from the core musculature is a critical factor for sports and general activities, with insufficiencies predisposing individuals to injury. This study evaluated isometric force production as a valid and reliable method of assessing abdominal force using the abdominal test and evaluation systems tool (ABTEST). A secondary analysis compared the estimated 1-repetition maximum on a commercially available abdominal machine with maximum force and average power on the ABTEST system. Reliability was measured using a test-retest design on ABTEST. Validity was measured via comparison to the estimated 1-repetition maximum on a commercially available abdominal device. Participants applied isometric abdominal force against a transducer, and muscular activation was evaluated by measuring normalized electromyographic activity at the rectus-abdominus, rectus-femoris, and erector-spinae. Test-retest force production on ABTEST was significantly correlated (r=0.84; p<0.001). Mean electromyographic activity for the rectus-abdominus (72.93% and 75.66%), rectus-femoris (6.59% and 6.51%), and erector-spinae (6.82% and 5.48%) was observed for trial-1 and trial-2, respectively. Significant correlations with the estimated 1-repetition maximum were found for average power (r=0.70, p=0.002) and maximum force (r=0.72, p<0.001). The data indicate that ABTEST can accurately measure rectus-abdominus force isolated from hip-flexor involvement. Negligible activation of the erector-spinae substantiates little subjective effort among participants in the lower back. Results suggest ABTEST is a valid and reliable method of evaluating abdominal force. Copyright © 2014 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
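The test-retest and validity coefficients reported (e.g. r = 0.84) are Pearson correlations between paired measurements. A minimal sketch of that computation on hypothetical paired-trial force data (the values are invented for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired measurements (e.g. trial-1 vs trial-2 force)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

trial1 = [310.0, 295.0, 342.0, 280.0, 330.0]  # hypothetical force values (N)
trial2 = [305.0, 300.0, 338.0, 290.0, 325.0]
print(round(pearson_r(trial1, trial2), 2))  # -> 0.98
```

Note that Pearson's r measures association, not agreement; dedicated reliability studies often complement it with intraclass correlation or Bland-Altman limits of agreement.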

  10. Radiation dose estimation for marine mussels following exposure to tritium: Best practice for use of the ERICA tool in ecotoxicological studies.

    PubMed

    Dallas, Lorna J; Devos, Alexandre; Fievet, Bruno; Turner, Andrew; Lyons, Brett P; Jha, Awadhesh N

    2016-05-01

    Accurate dosimetry is critically important for ecotoxicological and radioecological studies on the potential effects of environmentally relevant radionuclides, such as tritium (³H). Previous studies have used basic dosimetric equations to estimate dose from ³H exposure in ecologically important organisms, such as marine mussels. This study compares four different methods of estimating dose to adult mussels exposed to 1 or 15 MBq L⁻¹ tritiated water (HTO) under laboratory conditions. These methods were (1) an equation converting seawater activity concentrations to dose rate with fixed parameters; (2) input into the ERICA tool of seawater activity concentrations only; (3) input into the ERICA tool of estimated whole-organism activity concentrations (woTACs), comprising dry activity plus estimated tissue-free-water tritium (TFWT) activity (TFWT volume × seawater activity concentration); and (4) input into the ERICA tool of measured whole-organism activity concentrations, comprising dry activity plus measured TFWT activity (TFWT volume × TFWT activity concentration). Methods 3 and 4 are recommended for future ecotoxicological experiments as they produce values for individual animals and are not reliant on transfer predictions (estimation of a concentration ratio). Method 1 may be suitable if measured whole-organism concentrations are not available, as it produced results between those of methods 3 and 4. As there are technical complications in accurately measuring TFWT, we recommend that future radiotoxicological studies on mussels or other aquatic invertebrates measure whole-organism activity in non-dried tissues (i.e. incorporating TFWT and dry activity as one, rather than as separate fractions) and input these data into the ERICA tool. Copyright © 2016 Elsevier Ltd. All rights reserved.
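Methods 3 and 4 differ only in whether the tissue-free-water tritium (TFWT) term uses the seawater activity concentration (assumed equilibrium) or a measured TFWT activity concentration. A sketch of that bookkeeping; the units and all values are illustrative assumptions, not figures from the study:

```python
def whole_organism_activity(dry_activity_bq_kg, tfwt_l_per_kg, water_activity_bq_l):
    """Whole-organism activity concentration (Bq per kg wet weight):
    dry-phase activity plus the TFWT contribution, with tissue water
    expressed as litres per kg wet tissue (illustrative bookkeeping)."""
    return dry_activity_bq_kg + tfwt_l_per_kg * water_activity_bq_l

# Method-3 style estimate: TFWT assumed in equilibrium with the exposure water.
# (Method 4 would substitute a measured TFWT activity concentration instead.)
dry = 2.0e4          # Bq/kg, hypothetical dry-phase activity
tfwt_volume = 0.8    # L tissue water per kg wet weight, illustrative
seawater = 1.0e6     # Bq/L, the 1 MBq/L exposure concentration
print(whole_organism_activity(dry, tfwt_volume, seawater))  # -> 820000.0
```

The resulting whole-organism concentration is what would be supplied to the ERICA tool in place of a transfer-model prediction.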

  11. surrosurv: An R package for the evaluation of failure time surrogate endpoints in individual patient data meta-analyses of randomized clinical trials.

    PubMed

    Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan

    2018-03-01

    Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the R²indiv or Kendall's τ at the individual level, and the R²trial at the trial level. We aimed at providing an R implementation of classical and well-established, as well as more recent, statistical methods for surrogacy assessment with failure time endpoints. We also intended to incorporate utilities for model checking and visualization, and the data-generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, a Kendall's τ is estimated as the measure of individual-level surrogacy using a copula model. Then, the R²trial is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via a measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models, with individual random effects to measure the Kendall's τ and treatment-by-trial interactions to measure the R²trial. The most common data simulation models described in the literature are based on copula models, mixed proportional hazard models, and mixtures of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also optionally allows adjusting the second-step linear regression for measurement error. The mixed Poisson approach is implemented with different reduced models in addition to the full model. 
We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Estimates of Soil Moisture Using the Land Information System for Land Surface Water Storage: Case Study for the Western States Water Mission

    NASA Astrophysics Data System (ADS)

    Liu, P. W.; Famiglietti, J. S.; Levoe, S.; Reager, J. T., II; David, C. H.; Kumar, S.; Li, B.; Peters-Lidard, C. D.

    2017-12-01

    Soil moisture is one of the critical factors in terrestrial hydrology. Accurate soil moisture information improves the estimation of terrestrial water storage and fluxes, which is essential for water resource management, including sustainable groundwater pumping and agricultural irrigation practices. It is particularly important during dry periods when water stress is high. The Western States Water Mission (WSWM), a multiyear mission project of NASA's Jet Propulsion Laboratory, aims to understand and estimate water availability in the western United States by integrating observations and measurements from in-situ and remote sensing sensors with hydrological models. WSWM data products have been used to assess and explore the adverse impacts of the California drought (2011-2016) and to provide decision-makers information for water use planning. Although observations are often more accurate, simulations using land surface models can provide water availability estimates at the desired spatio-temporal scales. The Land Information System (LIS), developed by NASA's Goddard Space Flight Center, integrates land surface models with data processing and management tools, enabling measurements and observations from various platforms to be used as forcings in a high-performance computing environment to forecast hydrologic conditions. The goal of this study is to implement the LIS in the western United States for estimates of soil moisture. We will implement the Noah-MP model on the 12 km North American Land Data Assimilation System grid and compare it to other land surface models included in the LIS. Findings will provide insight into the differences between model estimates and the role of model physics. Outputs from a multi-model ensemble from LIS can also be used to enhance the reliability of the estimates and to quantify uncertainty. 
We will compare the LIS-based soil moisture estimates to the SMAP enhanced 9 km soil moisture product to understand the mechanistic differences between the model and observation. These outcomes will contribute to the WSWM for providing robust products.

  13. Energy saving in WWTP: Daily benchmarking under uncertainty and data availability limitations.

    PubMed

    Torregrossa, D; Schutz, G; Cornelissen, A; Hernández-Sancho, F; Hansen, J

    2016-07-01

    Efficient management of Waste Water Treatment Plants (WWTPs) can produce significant environmental and economic benefits. Energy benchmarking can be used to compare WWTPs, identify targets, and use these to improve their performance. Different authors have performed benchmark analyses on a monthly or yearly basis, but their approaches suffer from a time lag between an event, its detection, its interpretation, and potential actions. The availability of on-line measurement data at many WWTPs should theoretically enable a decrease in management response time through daily benchmarking. Unfortunately, this approach is often impossible because of limited data availability. This paper proposes a methodology to perform a daily benchmark analysis under database limitations. The methodology has been applied to the Energy Online System (EOS) developed in the framework of the project "INNERS" (INNovative Energy Recovery Strategies in the urban water cycle). EOS calculates a set of Key Performance Indicators (KPIs) for the evaluation of energy and process performance. In EOS, the energy KPIs take the pollutant load into consideration in order to enable comparison between different plants. For example, EOS does not analyse the raw energy consumption but the energy consumption per unit of pollutant load. This approach enables the comparison of performance for plants with different loads, or for a single plant under different load conditions. The energy consumption is measured by on-line sensors, while the pollutant load is measured in the laboratory approximately every 14 days. Consequently, the unavailability of the water quality parameters is the limiting factor in calculating energy KPIs. In this paper, in order to overcome this limitation, the authors have developed a methodology to estimate the required parameters and manage the uncertainty in the estimation. 
By coupling the parameter estimation with an interval based benchmark approach, the authors propose an effective, fast and reproducible way to manage infrequent inlet measurements. Its use enables benchmarking on a daily basis and prepares the ground for further investigation. Copyright © 2016 Elsevier Inc. All rights reserved.
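The load-normalized KPI and the interval handling of infrequent laboratory measurements can be sketched as follows. This is a minimal illustration, not the EOS implementation; the linear interpolation between lab samples and the use of the bracketing measurements as uncertainty bounds are assumptions made for the example:

```python
# Sketch: a daily energy KPI normalized by pollutant load, reported as an
# interval because the load itself is only measured every ~14 days.
# All names and numbers are illustrative.

def estimate_load(day, day0, load0, day1, load1):
    """Interpolate the pollutant load between two lab measurements.

    Returns (central estimate, lower bound, upper bound) in kg/day.
    """
    w = (day - day0) / (day1 - day0)
    centre = (1 - w) * load0 + w * load1
    return centre, min(load0, load1), max(load0, load1)

def kpi_interval(energy_kwh, load_lo_kg, load_hi_kg):
    """Energy per unit pollutant load as an interval [lo, hi] (kWh/kg)."""
    return energy_kwh / load_hi_kg, energy_kwh / load_lo_kg

# Day 7, bracketed by lab measurements on days 0 and 14:
centre, lo, hi = estimate_load(7, 0, 800.0, 14, 1000.0)
kpi_lo, kpi_hi = kpi_interval(4500.0, lo, hi)
```

Benchmarking can then compare interval-valued KPIs, flagging a plant only when the intervals are clearly separated.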

  14. A simple method for estimating gross carbon budgets for vegetation in forest ecosystems.

    PubMed

    Ryan, Michael G.

    1991-01-01

    Gross carbon budgets for vegetation in forest ecosystems are difficult to construct because of problems in scaling flux measurements made on small samples over short periods of time and in determining belowground carbon allocation. Recently, empirical relationships have been developed to estimate total belowground carbon allocation from litterfall, and maintenance respiration from tissue nitrogen content. I outline a method for estimating gross carbon budgets using these empirical relationships together with data readily available from ecosystem studies (aboveground wood and canopy production, aboveground wood and canopy biomass, litterfall, and tissue nitrogen contents). Estimates generated with this method are compared with annual carbon fixation estimates from the Forest-BGC model for a lodgepole pine (Pinus contorta Dougl.) and a Pacific silver fir (Abies amabilis Dougl.) chronosequence.

  15. Estimation of diastolic intraventricular pressure gradients by Doppler M-mode echocardiography

    NASA Technical Reports Server (NTRS)

    Greenberg, N. L.; Vandervoort, P. M.; Firstenberg, M. S.; Garcia, M. J.; Thomas, J. D.

    2001-01-01

    Previous studies have shown that small intraventricular pressure gradients (IVPG) are important for efficient filling of the left ventricle (LV) and as a sensitive marker for ischemia. Unfortunately, there has previously been no way of measuring these noninvasively, severely limiting their research and clinical utility. Color Doppler M-mode (CMM) echocardiography provides a spatiotemporal velocity distribution along the inflow tract throughout diastole, which we hypothesized would allow direct estimation of IVPG by using the Euler equation. Digital CMM images, obtained simultaneously with intracardiac pressure waveforms in six dogs, were processed by numerical differentiation for the Euler equation, then integrated to estimate IVPG and the total (left atrial to left ventricular apex) pressure drop. CMM-derived estimates agreed well with invasive measurements (IVPG: y = 0.87x + 0.22, r = 0.96, P < 0.001, standard error of the estimate = 0.35 mmHg). Quantitative processing of CMM data allows accurate estimation of IVPG and tracking of changes induced by beta-adrenergic stimulation. This novel approach provides unique information on LV filling dynamics in an entirely noninvasive way that has previously not been available for assessment of diastolic filling and function.
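The processing chain described (numerical differentiation of the velocity field, then spatial integration of the Euler equation) can be sketched numerically. The velocity field below is synthetic, standing in for digitized color M-mode data, and the blood density value is a typical textbook figure, not from the study:

```python
import numpy as np

# 1-D unsteady Euler equation along the inflow direction s:
#   dp/ds = -rho * (dv/dt + v * dv/ds)
# Differentiate the measured v(s, t), then integrate dp/ds over s to get the
# total (atrium-to-apex) pressure drop at each instant.
def pressure_drop(v, ds, dt, rho=1060.0):
    dv_dt = np.gradient(v, dt, axis=1)      # temporal acceleration term
    dv_ds = np.gradient(v, ds, axis=0)      # convective term
    dp_ds = -rho * (dv_dt + v * dv_ds)
    return np.trapz(dp_ds, dx=ds, axis=0)   # Pa, one value per time sample

s = np.linspace(0.0, 0.05, 51)              # 5 cm inflow tract
t = np.linspace(0.0, 0.1, 101)              # 100 ms of diastole
v = np.outer(1.0 - s / 0.05, np.sin(2 * np.pi * 5 * t))  # toy velocity (m/s)
dp = pressure_drop(v, s[1] - s[0], t[1] - t[0])
```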

  16. An Unscented Kalman-Particle Hybrid Filter for Space Object Tracking

    NASA Astrophysics Data System (ADS)

    Raihan A. V, Dilshad; Chakravorty, Suman

    2018-03-01

    Optimal and consistent estimation of the state of space objects is pivotal to surveillance and tracking applications. However, probabilistic estimation of space objects is made difficult by the non-Gaussianity and nonlinearity associated with orbital mechanics. In this paper, we present an unscented Kalman-particle hybrid filtering framework for recursive Bayesian estimation of space objects. The hybrid filtering scheme is designed to provide accurate and consistent estimates when measurements are sparse, without incurring a large computational cost. It employs an unscented Kalman filter (UKF) for estimation when measurements are available. When the target is outside the field of view (FOV) of the sensor, it updates the state probability density function (PDF) via a sequential Monte Carlo method. The hybrid filter addresses the problem of particle depletion through a suitably designed filter transition scheme. To assess the performance of the hybrid filtering approach, we consider two test cases of space objects that are assumed to undergo full three-dimensional orbital motion under the effects of J2 and atmospheric drag perturbations. It is demonstrated that the hybrid filters can furnish fast, accurate and consistent estimates, outperforming standard UKF and particle filter (PF) implementations.

  17. Absolute measures of the completeness of the fossil record

    NASA Technical Reports Server (NTRS)

    Foote, M.; Sepkoski, J. J. Jr; Sepkoski JJ, J. r. (Principal Investigator)

    1999-01-01

    Measuring the completeness of the fossil record is essential to understanding evolution over long timescales, particularly when comparing evolutionary patterns among biological groups with different preservational properties. Completeness measures have been presented for various groups based on gaps in the stratigraphic ranges of fossil taxa and on hypothetical lineages implied by estimated evolutionary trees. Here we present and compare quantitative, widely applicable absolute measures of completeness at two taxonomic levels for a broader sample of higher taxa of marine animals than has previously been available. We provide an estimate of the probability of genus preservation per stratigraphic interval, and determine the proportion of living families with some fossil record. The two completeness measures use very different data and calculations. The probability of genus preservation depends almost entirely on the Palaeozoic and Mesozoic records, whereas the proportion of living families with a fossil record is influenced largely by Cenozoic data. These measurements are nonetheless highly correlated, with outliers quite explicable, and we find that completeness is rather high for many animal groups.

  18. Estimation Method of Center of Inertia Frequency based on Multiple Synchronized Phasor Measurement Data

    NASA Astrophysics Data System (ADS)

    Hashiguchi, Takuhei; Watanabe, Masayuki; Goda, Tadahiro; Mitani, Yasunori; Saeki, Osamu; Hojo, Masahide; Ukai, Hiroyuki

    Open access and deregulation have been introduced in Japan, and some independent power producers (IPP) and power producers and suppliers (PPS) are participating in the power generation business, which may make power system dynamics more complex. To maintain power system conditions under various situations, a real-time, wide-area measurement system is essential. We therefore started a project to construct an original measurement system using phasor measurement units (PMU) in Japan. This paper describes a method for estimating the center of inertia frequency from actual measurement data. Applying this method enables power system oscillations to be extracted from measurement data appropriately. Moreover, an analysis of power system dynamics for oscillations occurring in the western Japan 60-Hz system is shown. These results will help clarify power system dynamics and may make it possible to monitor power system oscillations associated with power system stability.
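As a point of reference, a center-of-inertia frequency is conventionally an inertia-weighted average of individual machine (or area) frequencies. A minimal sketch, with illustrative inertia constants H (s), ratings S (MVA), and measured frequencies:

```python
# Center-of-inertia frequency: weight each measured frequency by the kinetic
# energy proxy H_i * S_i of the corresponding machine or area.
def coi_frequency(freqs_hz, H, S):
    weights = [h * s for h, s in zip(H, S)]
    return sum(w * f for w, f in zip(weights, freqs_hz)) / sum(weights)

f_coi = coi_frequency([59.98, 60.02, 60.00], H=[3.0, 5.0, 4.0], S=[500, 800, 600])
```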

  19. Evaluation of global horizontal irradiance to plane-of-array irradiance models at locations across the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lave, Matthew; Hayes, William; Pohl, Andrew

    2015-02-02

    We report an evaluation of the accuracy of combinations of models that estimate plane-of-array (POA) irradiance from measured global horizontal irradiance (GHI). This estimation involves two steps: 1) decomposition of GHI into direct and diffuse horizontal components and 2) transposition of direct and diffuse horizontal irradiance (DHI) to POA irradiance. Measured GHI and coincident measured POA irradiance from a variety of climates within the United States were used to evaluate combinations of decomposition and transposition models. A few locations also had DHI measurements, allowing for decoupled analysis of either the decomposition or the transposition models alone. Results suggest that decomposition models had mean bias differences (modeled versus measured) that vary with climate. Transposition model mean bias differences depended more on the model than the location. Lastly, when only GHI measurements were available and combinations of decomposition and transposition models were considered, the smallest mean bias differences were typically found for combinations that included the Hay/Davies transposition model.
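For concreteness, the transposition step alone can be illustrated with the simple isotropic-sky model (the Hay/Davies model evaluated in the study adds an anisotropic circumsolar term; the isotropic form is shown here only as a sketch, with illustrative inputs):

```python
import math

# Isotropic transposition: POA irradiance as the sum of a beam term projected
# by the angle of incidence (AOI), an isotropic sky-diffuse term, and a
# ground-reflected term.
def poa_isotropic(dni, dhi, ghi, aoi_deg, tilt_deg, albedo=0.2):
    beam = dni * max(math.cos(math.radians(aoi_deg)), 0.0)
    sky = dhi * (1.0 + math.cos(math.radians(tilt_deg))) / 2.0
    ground = ghi * albedo * (1.0 - math.cos(math.radians(tilt_deg))) / 2.0
    return beam + sky + ground

poa = poa_isotropic(dni=700.0, dhi=120.0, ghi=600.0, aoi_deg=30.0, tilt_deg=25.0)
```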

  20. Precision Orbit Derived Atmospheric Density: Development and Performance

    NASA Astrophysics Data System (ADS)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

    Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities, taking ballistic coefficient estimation results into account. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-00 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits of low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities, in particular catalog maintenance. Generally, density is the largest error source in satellite drag calculations, and current empirical density models such as Jacchia 71 and NRLMSISE-00 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models, and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy, and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available from Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. 
The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.

  1. Phytoplankton growth rates in a light-limited environment, San Francisco Bay

    USGS Publications Warehouse

    Alpine, Andrea E.; Cloern, James E.

    1988-01-01

    This study was motivated by the need for quantitative measures of phytoplankton population growth rate in an estuarine environment, and was designed around the presumption that growth rates can be related empirically to light exposure. We conducted the study in San Francisco Bay (California, USA), which has large horizontal gradients in light availability (Zp:Zm) typical of many coastal plain estuaries, and nutrient concentrations that often exceed those presumed to limit phytoplankton growth (Cloern et al. 1985). We tested the hypothesis that light availability is the primary control of phytoplankton growth, and that previous estimates of growth rate based on the ratio of productivity to biomass (Cloern et al. 1985) are realistic. Specifically, we wanted to verify that growth rate varies spatially along horizontal gradients of light availability indexed as Zp:Zm, such that phytoplankton turnover rate is rapid in shallow clear areas (high Zp:Zm) and slow in deep turbid areas (low Zp:Zm). We used an in situ incubation technique which simulated vertical mixing, and measured both changes in cell number and carbon production as independent estimates of growth rate across a range of Zp:Zm ratios.

  2. Partitioning Evapotranspiration in Semiarid Grassland and Shrubland Ecosystems Using Diurnal Surface Temperature Variation

    NASA Technical Reports Server (NTRS)

    Moran, M. Susan; Scott, Russell L.; Keefer, Timothy O.; Paige, Ginger B.; Emmerich, William E.; Cosh, Michael H.; O'Neill, Peggy E.

    2007-01-01

    The encroachment of woody plants in grasslands across the Western U.S. will affect soil water availability by altering the contributions of evaporation (E) and transpiration (T) to total evapotranspiration (ET). To study this phenomenon, a network of flux stations is in place to measure ET in grass- and shrub-dominated ecosystems throughout the Western U.S. A method is described and tested here to partition the daily measurements of ET into E and T based on diurnal surface temperature variations of the soil and standard energy balance theory. The difference between the mid-afternoon and pre-dawn soil surface temperature, termed Apparent Thermal Inertia (I(sub A)), was used to identify days when E was negligible, and thus, ET=T. For other days, a three-step procedure based on energy balance equations was used to estimate the contributions of daily E and T to total daily ET. The method was tested at Walnut Gulch Experimental Watershed in southeast Arizona based on Bowen ratio estimates of ET and continuous measurements of surface temperature with an infrared thermometer (IRT) from 2004-2005, and a second dataset of Bowen ratio, IRT, and stem-flow gage measurements in 2003. Results showed that reasonable estimates of daily T were obtained for a multi-year period with ease of operation and minimal cost. With known season-long daily T, E and ET, it is possible to determine the soil water availability associated with grass- and shrub-dominated sites and better understand the hydrologic impact of regional woody plant encroachment.
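The screening step based on apparent thermal inertia can be sketched as below. Only the dry-day test follows directly from the description above; the linear weighting used for the remaining days is a placeholder for the paper's three-step energy-balance procedure, and the threshold value is hypothetical:

```python
# Apparent thermal inertia I_A = mid-afternoon minus pre-dawn soil surface
# temperature. A large I_A marks a dry surface, so E ~ 0 and ET = T.
def partition_day(et_mm, t_predawn_c, t_afternoon_c, ia_dry_threshold_c=25.0):
    ia = t_afternoon_c - t_predawn_c
    if ia >= ia_dry_threshold_c:             # dry surface: evaporation negligible
        return 0.0, et_mm                    # (E, T)
    frac_e = 1.0 - ia / ia_dry_threshold_c   # placeholder weighting, not the
    return et_mm * frac_e, et_mm * (1.0 - frac_e)  # paper's energy balance

e_dry, t_dry = partition_day(4.0, 12.0, 42.0)   # I_A = 30 C, above threshold
e_wet, t_wet = partition_day(4.0, 12.0, 22.0)   # I_A = 10 C
```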

  3. Effects of immobilization mask material on surface dose

    PubMed Central

    Hadley, Scott W.; Kelly, Robin; Lam, Kwok

    2005-01-01

    This work investigates the increase in surface dose caused by thermoplastic masks used for patient positioning and immobilization. A thermoplastic mask is custom fit by stretching a heated mask over the patient at the time of treatment simulation. This mask is then used at treatment to increase the reproducibility of the patient position. The skin sparing effect of mega‐voltage X‐ray beams can be reduced when the patient's skin surface is under the mask material. The sheet of thermoplastic mask has holes to reduce this effect and is available from one manufacturer with two different sizes of holes, one larger than the other. This work investigates the increase in surface dose caused by the mask material and quantifies the difference between the two samples of masks available. The change in the dose buildup was measured using an Attix parallel plate chamber by measuring tissue maximum ratios (TMRs) using solid water. Measurements were made with and without the mask material on the surface of the solid water for 6‐MV and 15‐MV X‐ray beams. The effective thickness of equivalent water was estimated from the TMR curves, and the increase in surface dose was estimated. The buildup effect was measured to be equivalent to 2.2 mm to 0.6 mm for masks that have been stretched by different amounts. The surface dose was estimated to change from 16% and 12% for 6 MV and 15 MV, respectively, to 27% to 61% for 6 MV and 18% to 40% for 15 MV with the mask samples. PACS number: 87.53.Dq PMID:15770192

  4. On predicting monitoring system effectiveness

    NASA Astrophysics Data System (ADS)

    Cappello, Carlo; Sigurdardottir, Dorotea; Glisic, Branko; Zonta, Daniele; Pozzi, Matteo

    2015-03-01

    While the objective of structural design is to achieve stability with an appropriate level of reliability, the design of systems for structural health monitoring is performed to identify a configuration that enables acquisition of data with an appropriate level of accuracy in order to understand the performance of a structure or its condition state. However, a rational standardized approach for monitoring system design is not fully available. Hence, when engineers design a monitoring system, their approach is often heuristic with performance evaluation based on experience, rather than on quantitative analysis. In this contribution, we propose a probabilistic model for the estimation of monitoring system effectiveness based on information available in prior condition, i.e. before acquiring empirical data. The presented model is developed considering the analogy between structural design and monitoring system design. We assume that the effectiveness can be evaluated based on the prediction of the posterior variance or covariance matrix of the state parameters, which we assume to be defined in a continuous space. Since the empirical measurements are not available in prior condition, the estimation of the posterior variance or covariance matrix is performed considering the measurements as a stochastic variable. Moreover, the model takes into account the effects of nuisance parameters, which are stochastic parameters that affect the observations but cannot be estimated using monitoring data. Finally, we present an application of the proposed model to a real structure. The results show how the model enables engineers to predict whether a sensor configuration satisfies the required performance.
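The core idea, that effectiveness can be scored from the predicted posterior variance before any data arrive, has a simple closed form in the one-parameter linear-Gaussian case, since there the posterior variance depends only on the design and not on the measured values. This is an illustration of the principle, not the paper's model:

```python
# Posterior variance of a scalar state with Gaussian prior variance prior_var,
# observed by n independent sensors with noise variance meas_var:
#   1/var_post = 1/var_prior + n/var_meas
# The measured values never appear, so candidate designs can be ranked a priori.
def posterior_variance(prior_var, meas_var, n_sensors):
    return 1.0 / (1.0 / prior_var + n_sensors / meas_var)

v1 = posterior_variance(prior_var=4.0, meas_var=1.0, n_sensors=1)
v4 = posterior_variance(prior_var=4.0, meas_var=1.0, n_sensors=4)
```

Adding sensors shrinks the predicted posterior variance, which is the sense in which one configuration is more effective than another.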

  5. Mobile robot self-localization system using single webcam distance measurement technology in indoor environments.

    PubMed

    Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen

    2014-01-27

    A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. First, only one webcam is required for estimating the distance. Second, the set-up of IBDMS and PLDMS is easy: only one rectangular pattern of known dimensions is needed, e.g., a ground tile. Some common and simple image processing techniques, i.e., background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method does not need expensive high-resolution webcams or complicated pattern recognition methods, just a few simple estimation formulas. The experimental results show that the proposed robot localization method is reliable and effective in an indoor environment.
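The single-camera ranging principle can be sketched with the basic pinhole model; this is a generic illustration of why one known-dimension pattern suffices, not the IBDMS/PLDMS formulas themselves, and the calibration numbers are made up:

```python
# Pinhole model: an object of known physical size appears smaller in the image
# in proportion to its distance, so with a calibrated focal length in pixels,
#   distance = focal_px * real_size / image_size_px.
def distance_m(focal_px, real_size_m, image_size_px):
    return focal_px * real_size_m / image_size_px

# A 0.30 m ground tile spanning 60 px with an 800 px focal length:
d = distance_m(focal_px=800.0, real_size_m=0.30, image_size_px=60.0)
```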

  6. Mobile Robot Self-Localization System Using Single Webcam Distance Measurement Technology in Indoor Environments

    PubMed Central

    Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen

    2014-01-01

    A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. First, only one webcam is required for estimating the distance. Second, the set-up of IBDMS and PLDMS is easy: only one rectangular pattern of known dimensions is needed, e.g., a ground tile. Some common and simple image processing techniques, i.e., background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method does not need expensive high-resolution webcams or complicated pattern recognition methods, just a few simple estimation formulas. The experimental results show that the proposed robot localization method is reliable and effective in an indoor environment. PMID:24473282

  7. Techniques for measuring intercepted and absorbed PAR in corn canopies

    NASA Technical Reports Server (NTRS)

    Gallo, K. P.; Daughtry, C. S. T.

    1984-01-01

    The quantity of radiation potentially available for photosynthesis that is captured by the crop is best described as absorbed photosynthetically active radiation (PAR). Absorbed PAR (APAR) is the difference between descending and ascending fluxes. The four components of APAR were measured above and within two planting densities of corn (Zea mays L.) and several methods of measuring and estimating APAR were examined. A line quantum sensor that spatially averages the photosynthetic photon flux density provided a rapid and portable method of measuring APAR. PAR reflectance from the soil (Typic Argiaquoll) surface decreased from 10% to less than 1% of the incoming PAR as the canopy cover increased. PAR reflectance from the canopy decreased to less than 3% at maximum vegetative cover. Intercepted PAR (1 - transmitted PAR) generally overestimated absorbed PAR by less than 4% throughout most of the growing season. Thus intercepted PAR appears to be a reasonable estimate of absorbed PAR.
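The four-component balance described above reduces to a short calculation: APAR is the net downward PAR at the top of the canopy minus the net downward PAR at the soil surface, while intercepted PAR ignores the two reflected terms. The flux values below are illustrative:

```python
# APAR from the four measured components (all in the same flux units):
def apar(par_incident, par_canopy_reflected, par_transmitted, par_soil_reflected):
    net_top = par_incident - par_canopy_reflected
    net_bottom = par_transmitted - par_soil_reflected
    return net_top - net_bottom

# Intercepted PAR (1 - transmitted PAR) neglects both reflected fluxes:
def ipar(par_incident, par_transmitted):
    return par_incident - par_transmitted

a = apar(2000.0, 50.0, 300.0, 20.0)
i = ipar(2000.0, 300.0)   # slightly overestimates APAR, as the abstract notes
```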

  8. Wave height estimates from pressure and velocity data at an intermediate depth in the presence of uniform currents

    NASA Astrophysics Data System (ADS)

    Basu, Biswajit

    2017-12-01

    Bounds on estimates of wave heights (valid for large amplitudes) from pressure and flow measurements at an arbitrary intermediate depth have been provided. Two-dimensional irrotational steady water waves over a flat bed with a finite depth, in the presence of underlying uniform currents, have been considered in the analysis. Five different upper bounds based on a combination of pressure and velocity field measurements have been derived, though only one lower bound on the wave height is available, whether the current speed is greater than or less than the wave speed. This article is part of the theme issue 'Nonlinear water waves'.

  9. HDDM: Hierarchical Bayesian estimation of the Drift-Diffusion Model in Python.

    PubMed

    Wiecki, Thomas V; Sofer, Imri; Frank, Michael J

    2013-01-01

    The diffusion model is a commonly used tool to infer latent psychological processes underlying decision-making, and to link them to neural mechanisms based on response times. Although efficient open source software has been made available to quantitatively fit the model to data, current estimation methods require an abundance of response time measurements to recover meaningful parameters, and only provide point estimates of each parameter. In contrast, hierarchical Bayesian parameter estimation methods are useful for enhancing statistical power, allowing for simultaneous estimation of individual subject parameters and the group distribution that they are drawn from, while also providing measures of uncertainty in these parameters in the posterior distribution. Here, we present a novel Python-based toolbox called HDDM (hierarchical drift diffusion model), which allows fast and flexible estimation of the drift-diffusion model and the related linear ballistic accumulator model. HDDM requires fewer data per subject/condition than non-hierarchical methods, allows for full Bayesian data analysis, and can handle outliers in the data. Finally, HDDM supports the estimation of how trial-by-trial measurements (e.g., fMRI) influence decision-making parameters. This paper will first describe the theoretical background of the drift-diffusion model and Bayesian inference. We then illustrate usage of the toolbox on a real-world data set from our lab. Finally, parameter recovery studies show that HDDM beats alternative fitting methods such as the χ²-quantile method as well as maximum likelihood estimation. The software and documentation can be downloaded at: http://ski.clps.brown.edu/hddm_docs/

  10. Estimated flow-duration curves for selected ungaged sites in Kansas

    USGS Publications Warehouse

    Studley, S.E.

    2001-01-01

    Flow-duration curves for 1968-98 were estimated for 32 ungaged sites in the Missouri, Smoky Hill-Saline, Solomon, Marais des Cygnes, Walnut, Verdigris, and Neosho River Basins in Kansas. Also included from a previous report are estimated flow-duration curves for 16 ungaged sites in the Cimarron and lower Arkansas River Basins in Kansas. The method of estimation used six unique factors of flow duration: (1) mean streamflow and percentage duration of mean streamflow, (2) ratio of 1-percent-duration streamflow to mean streamflow, (3) ratio of 0.1-percent-duration streamflow to 1-percent-duration streamflow, (4) ratio of 50-percent-duration streamflow to mean streamflow, (5) percentage duration of appreciable streamflow (0.10 cubic foot per second), and (6) average slope of the flow-duration curve. These factors were previously developed from a regionalized study of flow-duration curves using streamflow data for 1921-76 from streamflow-gaging stations with drainage areas of 100 to 3,000 square miles. The method was tested on a currently (2001) measured, continuous-record streamflow-gaging station on Salt Creek near Lyndon, Kansas, with a drainage area of 111 square miles and was found to adequately estimate the computed flow-duration curve for the station. The method also was tested on a currently (2001) measured, continuous-record, streamflow-gaging station on Soldier Creek near Circleville, Kansas, with a drainage area of 49.3 square miles. The results of the test on Soldier Creek near Circleville indicated that the method could adequately estimate flow-duration curves for sites with drainage areas of less than 100 square miles. 
The low-flow parts of the estimated flow-duration curves were verified or revised using 137 base-flow discharge measurements made during 1999-2000 at the 32 ungaged sites that were correlated with base-flow measurements and flow-duration analyses performed at nearby, long-term, continuous-record, streamflow-gaging stations (index stations). The method did not adequately estimate the flow-duration curves for two sites in the western one-third of the State because of substantial changes in farming practices (terracing and intensive ground-water withdrawal) that were not accounted for in the two previous studies (Furness, 1959; Jordan, 1983). For these two sites, there was enough historic, continuous-streamflow record available to perform record-extension techniques correlated to their respective index stations for the development of the estimated flow-duration curves. The estimated flow-duration curves at the ungaged sites can be used for projecting future flow frequencies for assessment of total maximum daily loads (TMDLs) or other water-quality constituents, water-availability studies, and for basin-characteristic studies.
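A flow-duration curve for a gaged site is simply the empirical exceedance distribution of the streamflow record. A minimal sketch follows; the Weibull plotting position used here is one common convention, not necessarily the report's:

```python
import numpy as np

# Percentage of time each observed flow is equaled or exceeded.
def flow_duration(q):
    q_sorted = np.sort(np.asarray(q, dtype=float))[::-1]   # descending
    n = len(q_sorted)
    exceed_pct = 100.0 * np.arange(1, n + 1) / (n + 1)     # Weibull position
    return exceed_pct, q_sorted

pct, q = flow_duration([12.0, 3.0, 45.0, 0.5, 7.0, 30.0, 2.0, 18.0, 9.0])
```

For ungaged sites, the report instead reconstructs such a curve from regionalized factors (mean flow, duration ratios, curve slope) rather than from a flow record.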

  11. Transcript copy number estimation using a mouse whole-genome oligonucleotide microarray

    PubMed Central

    Carter, Mark G; Sharov, Alexei A; VanBuren, Vincent; Dudekula, Dawood B; Carmack, Condie E; Nelson, Charlie; Ko, Minoru SH

    2005-01-01

    The ability to quantitatively measure the expression of all genes in a given tissue or cell with a single assay is an exciting promise of gene-expression profiling technology. An in situ-synthesized 60-mer oligonucleotide microarray designed to detect transcripts from all mouse genes was validated, as well as a set of exogenous RNA controls derived from the yeast genome (made freely available without restriction), which allow quantitative estimation of absolute endogenous transcript abundance. PMID:15998450

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y.; Bank, J.; Wan, Y. H.

    The total inertia stored in all rotating masses connected to a power system, such as synchronous generators and induction motors, is an essential force that keeps the system stable after disturbances. To ensure bulk power system stability, there is a need to estimate the equivalent inertia available from a renewable generation plant. An equivalent inertia constant analogous to that of conventional rotating machines can be used to provide a readily understandable metric. This paper explores a method that utilizes synchrophasor measurements to estimate the equivalent inertia that a wind plant provides to the system.
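The swing-equation relationship underlying such an estimate can be sketched as follows; the sign convention and the single-event form are simplifications, since the paper works from synchrophasor time series rather than a single power/frequency pair:

```python
# Swing equation for an equivalent machine of rating S (MVA) at nominal
# frequency f0:  delta_P = -(2 * H * S / f0) * df/dt,
# so an equivalent inertia constant H (seconds) follows from a measured
# active-power change and rate of change of frequency (ROCOF).
def equivalent_inertia(delta_p_mw, rocof_hz_s, s_mva, f0_hz=60.0):
    return -delta_p_mw * f0_hz / (2.0 * s_mva * rocof_hz_s)

# 10 MW injected while frequency falls at 0.25 Hz/s on a 100 MVA plant:
H = equivalent_inertia(delta_p_mw=10.0, rocof_hz_s=-0.25, s_mva=100.0)
```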

  13. Estimated maximal and current brain volume predict cognitive ability in old age.

    PubMed

    Royle, Natalie A; Booth, Tom; Valdés Hernández, Maria C; Penke, Lars; Murray, Catherine; Gow, Alan J; Maniega, Susana Muñoz; Starr, John; Bastin, Mark E; Deary, Ian J; Wardlaw, Joanna M

    2013-12-01

Brain tissue deterioration is a significant contributor to lower cognitive ability in later life; however, few studies have appropriate data to establish how much influence prior brain volume and prior cognitive performance have on this association. We investigated the associations between structural brain imaging biomarkers, including an estimate of maximal brain volume, and detailed measures of cognitive ability at age 73 years in a large (N = 620), generally healthy, community-dwelling population. Cognitive ability data were available from age 11 years. We found positive associations (r) between general cognitive ability and estimated brain volume in youth (males, 0.28; females, 0.12), and with measured brain volume in later life (males, 0.27; females, 0.26). Our findings show that cognitive ability in youth is a strong predictor of estimated prior and measured current brain volume in old age, but that these effects were the same for both white and gray matter. As one of the largest studies of associations between brain volume and cognitive ability in normal aging, this work contributes to the wider understanding of how some early-life factors influence cognitive aging. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Frequency and Angular Variations of Land Surface Microwave Emissivities: Can we Estimate SSM/T and AMSU Emissivities from SSM/I Emissivities?

    NASA Technical Reports Server (NTRS)

    Prigent, Catherine; Wigneron, Jean-Pierre; Rossow, William B.; Pardo-Carrion, Juan R.

    1999-01-01

To retrieve temperature and humidity profiles from SSM/T and AMSU, it is important to quantify the contribution of the Earth surface emission. So far, no global estimates of the land surface emissivities are available at SSM/T and AMSU frequencies and scanning conditions. The land surface emissivities have previously been calculated for the globe from the SSM/I conical scanner between 19 and 85 GHz. To analyze the feasibility of deriving SSM/T and AMSU land surface emissivities from SSM/I emissivities, the spectral and angular variations of the emissivities are studied with the help of ground-based measurements, models, and satellite estimates. Up to 100 GHz, for snow- and ice-free areas, the SSM/T and AMSU emissivities can be derived with useful accuracy from the SSM/I emissivities: the emissivities can be linearly interpolated in frequency. Based on ground-based emissivity measurements of various surface types, a simple model is proposed to estimate SSM/T and AMSU emissivities for all zenith angles knowing only the emissivities for the vertical and horizontal polarizations at 53 deg zenith angle. The method is tested on the SSM/T-2 91.655 GHz channels. The mean difference between the SSM/T-2 and SSM/I-derived emissivities is less than or equal to 0.01 for all zenith angles, with an r.m.s. difference of approximately 0.02. Above 100 GHz, preliminary results are presented at 150 GHz, based on SSM/T-2 observations, and are compared with the very few estimates available in the literature.

  15. Quantification of myocardial infarction: a comparison of single photon-emission computed tomography with pyrophosphate to serial plasma MB-creatine kinase measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jansen, D.E.; Corbett, J.R.; Wolfe, C.L.

    1985-08-01

Single photon-emission computed tomography (SPECT) with 99mTc-pyrophosphate (PPi) has been shown to estimate the size of myocardial infarction accurately in animals. The authors tested the hypothesis that SPECT with 99mTc-PPi and blood pool subtraction can provide prompt and accurate estimates of the size of myocardial infarction in patients. SPECT estimates are potentially available early after the onset of infarction and should correlate with estimates of infarct size calculated from serial measurements of plasma MB-creatine kinase (CK) activity. Thirty-three patients with acute myocardial infarction and 16 control patients without acute myocardial infarction were studied. Eleven of the patients had transmural anterior myocardial infarction, 16 had transmural inferior myocardial infarction, and six had nontransmural myocardial infarction. SPECT was performed with a commercially available rotating gamma camera. Identical projection images of the distribution of 99mTc-PPi and the ungated cardiac blood pool were acquired sequentially over 180 degrees. Reconstructed sections were color coded and superimposed for localization of the infarct. Areas of increased PPi uptake within myocardial infarcts were thresholded at 65% of peak activity. The blood pool was thresholded at 50% and subtracted to determine the endocardial border of the left ventricle. Myocardial infarcts ranged in size from 1 to 126 gram equivalents (geq) MB-CK. The correlation of MB-CK estimates of infarct size with size determined by SPECT (both in geq) was good (r = .89, with a regression line of y = 13.1 + 1.5x).

  16. Peak flow estimation in ungauged basins by means of water level data analysis

    NASA Astrophysics Data System (ADS)

    Corato, G.; Moramarco, T.; Tucciarelli, T.

    2009-04-01

Discharge hydrograph estimation in rivers is usually carried out by means of water level measurements and a water depth-discharge relationship. The water depth-discharge curve is obtained by integrating local velocities measured in a given section at specified water depth values. Building up such a curve is very expensive, and very often the highest points, used for peak flow estimation, are the result of rough extrapolation from points corresponding to much lower water depths. Recently, discharge estimation methodologies have been developed that are based only on the analysis of synchronous water level data recorded in two river sections a few kilometers apart. These methodologies require only the analysis of the water levels, knowledge of the river bed elevations within the two sections, and the use of a diffusive flow routing numerical model. The bed roughness estimation, in terms of the average Manning coefficient, is carried out along with the discharge hydrograph estimation. The 1D flow routing model is given by the following Saint-Venant equations, simplified according to the diffusive hypothesis: ∂A/∂t + ∂q/∂x = 0 (1) and ∂h/∂x + (Sf − S0) = 0 (2), where q(x,t) is the discharge, h(x,t) is the water depth, Sf is the energy slope and S0 is the bed slope. The energy slope is related to the average Manning coefficient n by the Chezy relationship Sf = q²n²/(A²ℜ^(4/3)) (3), where ℜ is the hydraulic radius and A is the wetted area of the river section. The upstream boundary condition of the flow routing model is given by the measured upstream water level hydrograph. The computational domain is extended some kilometers downstream of the second measurement section and the downstream boundary condition is properly approximated. This avoids using the downstream measured data in the solution of the system (1)-(3) and limits the model error even in the case of subcritical flow. 
The optimal average Manning coefficient is obtained by fitting the computed water levels to those available in the downstream measurement section. The optimal discharge hydrograph estimated in the upstream measurement section is given by the function q(0,t) computed in the first section (where x = 0) using the optimal Manning coefficient. Two different fitting quality criteria are compared and their practical implications are discussed: the first is the equality of the computed and measured peak time lags between the first and second measurement sections; the second is the minimization of the total square error between the measured and computed downstream water level hydrographs. The uniqueness and identifiability properties of the associated inverse problem are analyzed, and a model error analysis is carried out addressing the most relevant sources of error arising from the adopted approximations. Three case studies previously used for validation of the proposed methodology are reviewed. The first two are water level hydrographs collected in two sections of the Arno river (Tuscany, Italy) and the Tiber river (Umbria, Italy); in both cases, water level and discharge hydrographs recorded during many storm events were available. The optimal average Manning coefficient was estimated in both cases using the data of a single event, properly selected among all the available ones. In the third case, concerning historical data collected in a small tributary of the Tanagro river (Campania, Italy), three water level hydrographs were measured in three different sections of the channel. This allowed the discharge estimation to be carried out using the data collected in only two of the three sections, with the data of the third used for validation. The results obtained in the three test cases highlight the advantages and the limits of the adopted analysis. 
The advantage is the simplicity of the hardware required for data acquisition, which can easily be performed continuously, even in very bad weather, and controlled remotely. A first limit is the assumption of negligible inflow between the two measurement sections. Because the distance between the two sections must be large enough to measure the time lag between the two hydrographs, this limit can make the selection of the measurement sections difficult. A second limit is the real heterogeneity of the bed roughness, which yields a measured water level hydrograph whose shape differs from the computed one. Preliminary results of a new multiparametric data analysis are finally presented.
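Relation (3) above links the energy slope to the discharge, the roughness, the wetted area, and the hydraulic radius. A minimal sketch evaluating the Manning friction slope for a rectangular channel (hypothetical geometry and roughness values, not the study's river sections):

```python
def friction_slope(q, n, width, depth):
    """Manning friction slope S_f = q^2 n^2 / (A^2 R^(4/3)) for a
    rectangular channel of the given width and water depth."""
    area = width * depth                   # wetted cross-section A
    radius = area / (width + 2.0 * depth)  # hydraulic radius R = A / wetted perimeter
    return (q * n) ** 2 / (area ** 2 * radius ** (4.0 / 3.0))

# 50 m^3/s in a 20 m wide channel, 2 m deep, n = 0.035:
sf = friction_slope(q=50.0, n=0.035, width=20.0, depth=2.0)
```

At uniform flow this friction slope equals the bed slope S0; in the diffusive routing model it is what balances the water-surface gradient in equation (2).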

  17. Nonparametric Discrete Survival Function Estimation with Uncertain Endpoints Using an Internal Validation Subsample

    PubMed Central

    Zee, Jarcy; Xie, Sharon X.

    2015-01-01

Summary: When a true survival endpoint cannot be assessed for some subjects, an alternative endpoint that measures the true endpoint with error may be collected instead, which often occurs when obtaining the true endpoint is too invasive or costly. We develop an estimated likelihood function for the situation where we have both uncertain endpoints for all participants and true endpoints for only a subset of participants. We propose a nonparametric maximum estimated likelihood estimator of the discrete survival function of time to the true endpoint. We show that the proposed estimator is consistent and asymptotically normal. We demonstrate through extensive simulations that the proposed estimator has little bias compared to the naïve Kaplan-Meier survival function estimator, which uses only uncertain endpoints, and is more efficient under moderate missingness than the complete-case Kaplan-Meier survival function estimator, which uses only the available true endpoints. Finally, we apply the proposed method to a dataset from the Alzheimer's Disease Neuroimaging Initiative to estimate the risk of developing Alzheimer's disease. PMID:25916510
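The discrete Kaplan-Meier estimator that serves as the comparison baseline above is S(t) = Π over event times t_j ≤ t of (1 − d_j/n_j), where d_j is the number of events and n_j the number still at risk at t_j. A minimal sketch on toy data (not the ADNI dataset):

```python
from collections import Counter

def kaplan_meier(times, events):
    """Discrete Kaplan-Meier estimator S(t) = prod_{t_j <= t} (1 - d_j / n_j).

    `events[i]` is True for an observed event, False for censoring.
    Returns {t_j: S(t_j)} evaluated at each event time.
    """
    deaths = Counter(t for t, e in zip(times, events) if e)
    surv, s = {}, 1.0
    for t in sorted(deaths):
        at_risk = sum(1 for ti in times if ti >= t)   # n_j: still under observation
        s *= 1.0 - deaths[t] / at_risk                # multiply in (1 - d_j / n_j)
        surv[t] = s
    return surv

# Five subjects; the second observation at t=2 is censored (event=False):
S = kaplan_meier([1, 2, 2, 3, 4], [True, True, False, True, True])
```

With uncertain endpoints, the event indicators themselves are error-prone, which is exactly the bias the paper's estimated-likelihood approach is designed to correct.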

  18. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    PubMed

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
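The accuracy-in-parameter-estimation logic, choosing n so that the expected confidence interval is no wider than some target W, can be illustrated with the simplest case of a normal-approximation interval for a mean (a hedged stand-in; the paper's intervals are for composite reliability coefficients and are more involved):

```python
import math

def n_for_ci_width(sigma, target_width, confidence=0.95):
    """Smallest n such that the expected normal-approximation CI for a mean,
    of full width 2 * z * sigma / sqrt(n), is no wider than target_width.
    Illustrates the AIPE sample-size logic, not the composite-reliability case.
    """
    z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[confidence]  # two-sided z
    return math.ceil((2.0 * z * sigma / target_width) ** 2)

n = n_for_ci_width(sigma=1.0, target_width=0.2)  # n = 385 for a 95% CI
```

The paper's second method (width narrow "with assurance") additionally inflates n so the realized, not just expected, width falls below W with the desired probability.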

  19. Estimation of the two-dimensional presampled modulation transfer function of digital radiography devices using one-dimensional test objects

    PubMed Central

    Wells, Jered R.; Dobbins, James T.

    2012-01-01

    Purpose: The modulation transfer function (MTF) of medical imaging devices is commonly reported in the form of orthogonal one-dimensional (1D) measurements made near the vertical and horizontal axes with a slit or edge test device. A more complete description is found by measuring the two-dimensional (2D) MTF. Some 2D test devices have been proposed, but there are some issues associated with their use: (1) they are not generally available; (2) they may require many images; (3) the results may have diminished accuracy; and (4) their implementation may be particularly cumbersome. This current work proposes the application of commonly available 1D test devices for practical and accurate estimation of the 2D presampled MTF of digital imaging systems. Methods: Theory was developed and applied to ensure adequate fine sampling of the system line spread function for 1D test devices at orientations other than approximately vertical and horizontal. Methods were also derived and tested for slit nonuniformity correction at arbitrary angle. Techniques were validated with experimental measurements at ten angles using an edge test object and three angles using a slit test device on an indirect-detection flat-panel system [GE Revolution XQ/i (GE Healthcare, Waukesha, WI)]. The 2D MTF was estimated through a simple surface fit with interpolation based on Delaunay triangulation of the 1D edge-based MTF measurements. Validation by synthesis was also performed with simulated images from a hypothetical direct-detection flat-panel device. Results: The 2D MTF derived from physical measurements yielded an average relative precision error of 0.26% for frequencies below the cutoff (2.5 mm−1) and approximate circular symmetry at frequencies below 4 mm−1. While slit analysis generally agreed with the results of edge analysis, the two showed subtle differences at frequencies above 4 mm−1. 
Slit measurement near 45° revealed radial asymmetry in the MTF resulting from the square pixel aperture (0.2 mm × 0.2 mm), a characteristic which was not necessarily appreciated with the orthogonal 1D MTF measurements. In simulation experiments, both slit- and edge-based measurements resolved the radial asymmetries in the 2D MTF. The average absolute relative accuracy error in the 2D MTF between the DC and cutoff (2.5 mm−1) frequencies was 0.13% with average relative precision error of 0.11%. Other simulation results were similar to those derived from physical data. Conclusions: Overall, the general availability, acceptance, accuracy, and ease of implementation of 1D test devices for MTF assessment make this a valuable technique for 2D MTF estimation. PMID:23039654
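The surface-fit step can be illustrated by interpolating between 1D radial MTF profiles in polar coordinates. A simplified stand-in for the Delaunay-based fit described above, using bilinear interpolation in frequency and angle (hypothetical profiles; square-pixel symmetry assumed so angles fold into 0-90 degrees):

```python
import math

def _interp(x, xs, ys):
    """Piecewise-linear interpolation of ys(xs) at x (xs sorted ascending)."""
    for j in range(len(xs) - 1):
        if xs[j] <= x <= xs[j + 1]:
            t = (x - xs[j]) / (xs[j + 1] - xs[j])
            return (1 - t) * ys[j] + t * ys[j + 1]
    raise ValueError("frequency outside measured range")

def mtf_2d(fx, fy, angles_deg, profiles, freqs):
    """Estimate the 2D MTF at (fx, fy) from 1D radial MTF profiles measured
    at several angles, by bilinear interpolation in (frequency, angle).
    A simple stand-in for a Delaunay-based surface fit; angles_deg must be
    sorted and cover 0-90 degrees."""
    f = math.hypot(fx, fy)
    theta = math.degrees(math.atan2(abs(fy), abs(fx)))  # fold into 0-90 deg
    for i in range(len(angles_deg) - 1):
        if angles_deg[i] <= theta <= angles_deg[i + 1]:
            w = (theta - angles_deg[i]) / (angles_deg[i + 1] - angles_deg[i])
            lo = _interp(f, freqs, profiles[i])
            hi = _interp(f, freqs, profiles[i + 1])
            return (1 - w) * lo + w * hi
    raise ValueError("angle outside measured range")

# Hypothetical profiles at 0, 45, and 90 degrees, sampled at 0, 1, 2 mm^-1:
angles = [0.0, 45.0, 90.0]
freqs = [0.0, 1.0, 2.0]
profiles = [[1.0, 0.6, 0.2], [1.0, 0.5, 0.1], [1.0, 0.6, 0.2]]
m = mtf_2d(1.0, 0.0, angles, profiles, freqs)  # 0.6: the 0-degree profile at f = 1
```

The lower 45-degree profile in this toy example mimics the radial asymmetry from a square pixel aperture noted in the abstract.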

  20. Estimation of the two-dimensional presampled modulation transfer function of digital radiography devices using one-dimensional test objects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, Jered R.; Dobbins, James T. III; Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, Durham, North Carolina 27705

    2012-10-15

Purpose: The modulation transfer function (MTF) of medical imaging devices is commonly reported in the form of orthogonal one-dimensional (1D) measurements made near the vertical and horizontal axes with a slit or edge test device. A more complete description is found by measuring the two-dimensional (2D) MTF. Some 2D test devices have been proposed, but there are some issues associated with their use: (1) they are not generally available; (2) they may require many images; (3) the results may have diminished accuracy; and (4) their implementation may be particularly cumbersome. This current work proposes the application of commonly available 1D test devices for practical and accurate estimation of the 2D presampled MTF of digital imaging systems. Methods: Theory was developed and applied to ensure adequate fine sampling of the system line spread function for 1D test devices at orientations other than approximately vertical and horizontal. Methods were also derived and tested for slit nonuniformity correction at arbitrary angle. Techniques were validated with experimental measurements at ten angles using an edge test object and three angles using a slit test device on an indirect-detection flat-panel system [GE Revolution XQ/i (GE Healthcare, Waukesha, WI)]. The 2D MTF was estimated through a simple surface fit with interpolation based on Delaunay triangulation of the 1D edge-based MTF measurements. Validation by synthesis was also performed with simulated images from a hypothetical direct-detection flat-panel device. Results: The 2D MTF derived from physical measurements yielded an average relative precision error of 0.26% for frequencies below the cutoff (2.5 mm⁻¹) and approximate circular symmetry at frequencies below 4 mm⁻¹. While slit analysis generally agreed with the results of edge analysis, the two showed subtle differences at frequencies above 4 mm⁻¹. 
Slit measurement near 45° revealed radial asymmetry in the MTF resulting from the square pixel aperture (0.2 mm × 0.2 mm), a characteristic which was not necessarily appreciated with the orthogonal 1D MTF measurements. In simulation experiments, both slit- and edge-based measurements resolved the radial asymmetries in the 2D MTF. The average absolute relative accuracy error in the 2D MTF between the DC and cutoff (2.5 mm⁻¹) frequencies was 0.13% with average relative precision error of 0.11%. Other simulation results were similar to those derived from physical data. Conclusions: Overall, the general availability, acceptance, accuracy, and ease of implementation of 1D test devices for MTF assessment make this a valuable technique for 2D MTF estimation.

  1. Estimation of the two-dimensional presampled modulation transfer function of digital radiography devices using one-dimensional test objects.

    PubMed

    Wells, Jered R; Dobbins, James T

    2012-10-01

    The modulation transfer function (MTF) of medical imaging devices is commonly reported in the form of orthogonal one-dimensional (1D) measurements made near the vertical and horizontal axes with a slit or edge test device. A more complete description is found by measuring the two-dimensional (2D) MTF. Some 2D test devices have been proposed, but there are some issues associated with their use: (1) they are not generally available; (2) they may require many images; (3) the results may have diminished accuracy; and (4) their implementation may be particularly cumbersome. This current work proposes the application of commonly available 1D test devices for practical and accurate estimation of the 2D presampled MTF of digital imaging systems. Theory was developed and applied to ensure adequate fine sampling of the system line spread function for 1D test devices at orientations other than approximately vertical and horizontal. Methods were also derived and tested for slit nonuniformity correction at arbitrary angle. Techniques were validated with experimental measurements at ten angles using an edge test object and three angles using a slit test device on an indirect-detection flat-panel system [GE Revolution XQ∕i (GE Healthcare, Waukesha, WI)]. The 2D MTF was estimated through a simple surface fit with interpolation based on Delaunay triangulation of the 1D edge-based MTF measurements. Validation by synthesis was also performed with simulated images from a hypothetical direct-detection flat-panel device. The 2D MTF derived from physical measurements yielded an average relative precision error of 0.26% for frequencies below the cutoff (2.5 mm(-1)) and approximate circular symmetry at frequencies below 4 mm(-1). While slit analysis generally agreed with the results of edge analysis, the two showed subtle differences at frequencies above 4 mm(-1). 
Slit measurement near 45° revealed radial asymmetry in the MTF resulting from the square pixel aperture (0.2 mm × 0.2 mm), a characteristic which was not necessarily appreciated with the orthogonal 1D MTF measurements. In simulation experiments, both slit- and edge-based measurements resolved the radial asymmetries in the 2D MTF. The average absolute relative accuracy error in the 2D MTF between the DC and cutoff (2.5 mm(-1)) frequencies was 0.13% with average relative precision error of 0.11%. Other simulation results were similar to those derived from physical data. Overall, the general availability, acceptance, accuracy, and ease of implementation of 1D test devices for MTF assessment make this a valuable technique for 2D MTF estimation.

  2. LCDs are better: psychophysical and photometric estimates of the temporal characteristics of CRT and LCD monitors.

    PubMed

    Lagroix, Hayley E P; Yanko, Matthew R; Spalek, Thomas M

    2012-07-01

    Many cognitive and perceptual phenomena, such as iconic memory and temporal integration, require brief displays. A critical requirement is that the image not remain visible after its offset. It is commonly believed that liquid crystal displays (LCD) are unsuitable because of their poor temporal response characteristics relative to cathode-ray-tube (CRT) screens. Remarkably, no psychophysical estimates of visible persistence are available to verify this belief. A series of experiments in which white stimuli on a black background produced discernible persistence on CRT but not on LCD screens, during both dark- and light-adapted viewing, falsified this belief. Similar estimates using black stimuli on a white background produced no visible persistence on either screen. That said, photometric measurements are available that seem to confirm the poor temporal characteristics of LCD screens, but they were obtained before recent advances in LCD technology. Using current LCD screens, we obtained photometric estimates of rise time far shorter (1-6 ms) than earlier estimates (20-150 ms), and approaching those of CRTs (<1 ms). We conclude that LCDs are preferable to CRTs when visible persistence is a concern, except when black-on-white displays are used.

  3. Measuring Compartment Size and Gas Solubility in Marine Mammals

    DTIC Science & Technology

    2014-09-30

The goal of this study is to develop methods to estimate marine mammal tissue compartment sizes and tissue gas solubility. We aim to improve the data available for … Samples are analyzed by gas chromatography; injection of the sample into the gas chromatograph is done using a sample loop to minimize volume-injection error.

  4. Guidance of Autonomous Aerospace Vehicles for Vertical Soft Landing using Nonlinear Control Theory

    DTIC Science & Technology

    2015-08-11

Measured and Kalman filter estimate of the roll attitude of the quad … and faster (Hartley et al. [2013]). With the availability of small, light, high-fidelity sensors (inertial measurement units, IMUs) and processors on board … is a product of the inverse of the rotation matrix and the inertia matrix for the quad frame. Since both matrices are invertible at all times except when roll

  5. Autotune Calibrates Models to Building Use Data

    ScienceCinema

    None

    2018-01-16

    Models of existing buildings are currently unreliable unless calibrated manually by a skilled professional. Autotune, as the name implies, automates this process by calibrating the model of an existing building to measured data, and is now available as open source software. This enables private businesses to incorporate Autotune into their products so that their customers can more effectively estimate cost savings of reduced energy consumption measures in existing buildings.

  6. Mapping poverty using mobile phone and satellite data

    PubMed Central

    Pezzulo, Carla; Bjelland, Johannes; Iqbal, Asif M.; Hadiuzzaman, Khandakar N.; Lu, Xin; Wetter, Erik; Tatem, Andrew J.

    2017-01-01

Poverty is one of the most important determinants of adverse health outcomes globally, a major cause of societal instability and one of the largest causes of lost human potential. Traditional approaches to measuring and targeting poverty rely heavily on census data, which in most low- and middle-income countries (LMICs) are unavailable or out-of-date. Alternate measures are needed to complement and update estimates between censuses. This study demonstrates how public and private data sources that are commonly available for LMICs can be used to provide novel insight into the spatial distribution of poverty. We evaluate the relative value of modelling three traditional poverty measures using aggregate data from mobile operators and widely available geospatial data. Taken together, models combining these data sources provide the best predictive power (highest r² = 0.78) and lowest error, but models employing mobile data alone generally yield comparable results, offering the potential to measure poverty more frequently and at finer granularity. Stratifying models into urban and rural areas highlights the advantage of using mobile data in urban areas and different data in different contexts. The findings indicate the possibility to estimate and continually monitor poverty rates at high spatial resolution in countries with limited capacity to support traditional methods of data collection. PMID:28148765

  7. Mapping poverty using mobile phone and satellite data.

    PubMed

    Steele, Jessica E; Sundsøy, Pål Roe; Pezzulo, Carla; Alegana, Victor A; Bird, Tomas J; Blumenstock, Joshua; Bjelland, Johannes; Engø-Monsen, Kenth; de Montjoye, Yves-Alexandre; Iqbal, Asif M; Hadiuzzaman, Khandakar N; Lu, Xin; Wetter, Erik; Tatem, Andrew J; Bengtsson, Linus

    2017-02-01

Poverty is one of the most important determinants of adverse health outcomes globally, a major cause of societal instability and one of the largest causes of lost human potential. Traditional approaches to measuring and targeting poverty rely heavily on census data, which in most low- and middle-income countries (LMICs) are unavailable or out-of-date. Alternate measures are needed to complement and update estimates between censuses. This study demonstrates how public and private data sources that are commonly available for LMICs can be used to provide novel insight into the spatial distribution of poverty. We evaluate the relative value of modelling three traditional poverty measures using aggregate data from mobile operators and widely available geospatial data. Taken together, models combining these data sources provide the best predictive power (highest r² = 0.78) and lowest error, but models employing mobile data alone generally yield comparable results, offering the potential to measure poverty more frequently and at finer granularity. Stratifying models into urban and rural areas highlights the advantage of using mobile data in urban areas and different data in different contexts. The findings indicate the possibility to estimate and continually monitor poverty rates at high spatial resolution in countries with limited capacity to support traditional methods of data collection. © 2017 The Authors.

  8. Absolute and estimated values of macular pigment optical density in young and aged Asian participants with or without age-related macular degeneration.

    PubMed

    Ozawa, Yoko; Shigeno, Yuta; Nagai, Norihiro; Suzuki, Misa; Kurihara, Toshihide; Minami, Sakiko; Hirano, Eri; Shinoda, Hajime; Kobayashi, Saori; Tsubota, Kazuo

    2017-08-29

Lutein and zeaxanthin are suggested micronutrient supplements to prevent the progression of age-related macular degeneration (AMD), a leading cause of blindness worldwide. To monitor the levels of lutein/zeaxanthin in the macula, macular pigment optical density (MPOD) is measured. A commercially available device (MPSII®, Elektron Technology, Switzerland), using technology based on heterochromatic flicker photometry, can measure both absolute and estimated values of MPOD. However, whether the estimated value is applicable to Asian individuals and/or AMD patients remains to be determined. The absolute and estimated values of MPOD were measured using the MPSII® device in 77 participants with a best-corrected visual acuity (BCVA) > 0.099 (logMAR score). The studied eyes included 17 young (20-29 years) healthy eyes, 26 aged (>50 years) healthy eyes, 18 aged AMD-fellow eyes, and 16 aged AMD eyes. Mean BCVA did not differ significantly among the groups. Both absolute and estimated values were measurable in all eyes of the young healthy group. However, absolute values were measurable in only 57.7%, 66.7%, and 43.8% of the aged healthy, AMD-fellow, and AMD groups, respectively, and in 56.7% of the eyes in the 3 aged groups combined. In contrast, the estimated value was measurable in 84.6%, 88.9%, and 93.8% of these groups, respectively, and in 88.3% of eyes in the pooled aged group. The estimated value was correlated with the absolute value in individuals from all groups by Spearman's correlation coefficient analyses (young healthy: R² = 0.885, P = 0.0001; aged healthy: R² = 0.765, P = 0.001; AMD-fellow: R² = 0.851, P = 0.0001; and AMD: R² = 0.860, P = 0.013). Using the estimated value, significantly lower MPOD values were found in aged AMD-related eyes, which included both AMD-fellow and AMD eyes, compared with aged healthy eyes by Student's t-test (P = 0.02). 
The absolute value, in contrast to the estimated value, was measurable in only a limited number of aged participants; however, it correlated with the estimated value in both young and aged Asian populations, with or without AMD. These results may inform future clinical studies that measure MPOD to understand the role of macular pigments in the pathogenesis of AMD.

  9. Estimation of groundwater consumption by phreatophytes using diurnal water table fluctuations: A saturated‐unsaturated flow assessment

    USGS Publications Warehouse

    Loheide, Steven P.; Butler, James J.; Gorelick, Steven M.

    2005-01-01

    Groundwater consumption by phreatophytes is a difficult‐to‐measure but important component of the water budget in many arid and semiarid environments. Over the past 70 years the consumptive use of groundwater by phreatophytes has been estimated using a method that analyzes diurnal trends in hydrographs from wells that are screened across the water table (White, 1932). The reliability of estimates obtained with this approach has never been rigorously evaluated using saturated‐unsaturated flow simulation. We present such an evaluation for common flow geometries and a range of hydraulic properties. Results indicate that the major source of error in the White method is the uncertainty in the estimate of specific yield. Evapotranspirative consumption of groundwater will often be significantly overpredicted with the White method if the effects of drainage time and the depth to the water table on specific yield are ignored. We utilize the concept of readily available specific yield as the basis for estimation of the specific yield value appropriate for use with the White method. Guidelines are defined for estimating readily available specific yield based on sediment texture. Use of these guidelines with the White method should enable the evapotranspirative consumption of groundwater to be more accurately quantified.
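The White (1932) method evaluated above estimates daily groundwater evapotranspiration as ET = Sy(24r + s), where Sy is the specific yield, r the pre-dawn water-table recovery rate, and s the net change in water-table elevation over the day. A minimal sketch (hypothetical values; sign conventions for s vary across the literature, here positive for a net fall):

```python
def white_method_et(specific_yield, recovery_rate_m_per_hr, net_change_m):
    """White (1932) estimate of daily groundwater evapotranspiration (m/day):
        ET = Sy * (24 * r + s)
    where r is the water-table recovery rate measured in the pre-dawn hours
    (m/hr) and s is the net decline in water-table elevation over the
    24-hour period (m, positive for a net fall)."""
    return specific_yield * (24.0 * recovery_rate_m_per_hr + net_change_m)

# Sy = 0.05 (a "readily available" specific yield), r = 2 mm/hr, net fall 10 mm:
et = white_method_et(0.05, 0.002, 0.010)  # 0.0029 m/day, i.e. 2.9 mm/day
```

As the abstract emphasizes, the result scales linearly with Sy, so an overestimated specific yield directly inflates the estimated evapotranspiration.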

  10. Valuing the Economic Costs of Allergic Rhinitis, Acute Bronchitis, and Asthma from Exposure to Indoor Dampness and Mold in the US

    PubMed Central

    2016-01-01

    Two foundational methods for estimating the total economic burden of disease are cost of illness (COI) and willingness to pay (WTP). WTP measures the full cost to society, but WTP estimates are difficult to compute and rarely available. COI methods are more often used but less likely to reflect full costs. This paper attempts to estimate the full economic cost (2014$) of illnesses resulting from exposure to dampness and mold using COI methods, and WTP where data are available. A limited sensitivity analysis of alternative methods and assumptions demonstrates a wide potential range of estimates. In the final estimates, the total annual cost to society attributable to dampness and mold is estimated to be $3.7 (2.3–4.7) billion for allergic rhinitis, $1.9 (1.1–2.3) billion for acute bronchitis, $15.1 (9.4–20.6) billion for asthma morbidity, and $1.7 (0.4–4.5) billion for asthma mortality. The corresponding costs from all causes, not limited to dampness and mold, using the same approach would be $24.8 billion for allergic rhinitis, $13.5 billion for acute bronchitis, $94.5 billion for asthma morbidity, and $10.8 billion for asthma mortality. PMID:27313630
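    A minimal sketch of the attribution arithmetic implied above: the cost attributable to dampness and mold is the all-cause cost of each disease multiplied by a population attributable fraction (PAF). The PAF values below are hypothetical assumptions chosen only to illustrate the arithmetic; they are not figures reported in the paper.

```python
# Attributable-cost arithmetic sketch. All-cause totals are from the abstract;
# the PAFs are hypothetical illustrations, not the paper's inputs.

def attributable_cost(total_cost_billion, paf):
    """Cost attributable to an exposure, same units as the total cost."""
    if not 0.0 <= paf <= 1.0:
        raise ValueError("PAF must lie in [0, 1]")
    return total_cost_billion * paf

# All-cause annual costs (2014 $billion) with assumed dampness/mold PAFs.
costs = {
    "allergic rhinitis": (24.8, 0.15),
    "acute bronchitis": (13.5, 0.14),
    "asthma morbidity": (94.5, 0.16),
}
by_disease = {d: attributable_cost(c, f) for d, (c, f) in costs.items()}
```

    With these assumed PAFs the products land near the abstract's attributable totals, which shows the scale of fraction involved; the paper's actual derivation is more involved.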

  11. Predicting Grizzly Bear Density in Western North America

    PubMed Central

    Mowat, Garth; Heard, Douglas C.; Schwarz, Carl J.

    2013-01-01

    Conservation of grizzly bears (Ursus arctos) is often controversial, and the disagreement is often focused on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available, but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, of which 14 were currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density, including 2 unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape, and an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than in the latest status review. These predictions can be used to assess population status, set limits for total human-caused mortality, and support conservation planning, but because our predictions are static, they cannot be used to assess population trend. PMID:24367552
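    The kind of regression described above can be sketched as ordinary least squares relating density to ecosystem covariates. The covariate names and all data below are hypothetical; the paper's models are more elaborate and fit separately for coastal and interior ecosystems.

```python
# OLS sketch: density ~ intercept + productivity + human use. Data invented.

def ols(X, y):
    """Solve beta = (X'X)^-1 X'y via the normal equations (small problems)."""
    p = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(p)] for i in range(p)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, p))) / A[r][r]
    return beta

# Rows: [intercept, productivity index, human-use index]; y: bears per 1000 km^2.
X = [[1, 0.2, 0.8], [1, 0.5, 0.6], [1, 0.9, 0.3], [1, 0.7, 0.5], [1, 0.4, 0.9]]
y = [5.0, 12.0, 25.0, 18.0, 8.0]
beta = ols(X, y)  # productivity coefficient positive, human-use negative here
```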

  12. Predicting grizzly bear density in western North America.

    PubMed

    Mowat, Garth; Heard, Douglas C; Schwarz, Carl J

    2013-01-01

    Conservation of grizzly bears (Ursus arctos) is often controversial, and the disagreement is often focused on the estimates of density used to calculate allowable kill. Many recent estimates of grizzly bear density are now available, but field-based estimates will never be available for more than a small portion of hunted populations. Current methods of predicting density in areas of management interest are subjective and untested. Objective methods have been proposed, but these statistical models are so dependent on results from individual study areas that the models do not generalize well. We built regression models to relate grizzly bear density to ultimate measures of ecosystem productivity and mortality for interior and coastal ecosystems in North America. We used 90 measures of grizzly bear density in interior ecosystems, of which 14 were currently known to be unoccupied by grizzly bears. In coastal areas, we used 17 measures of density, including 2 unoccupied areas. Our best model for coastal areas included a negative relationship with tree cover and positive relationships with the proportion of salmon in the diet and topographic ruggedness, which was correlated with precipitation. Our best interior model included 3 variables that indexed terrestrial productivity, 1 describing vegetation cover, 2 indices of human use of the landscape, and an index of topographic ruggedness. We used our models to predict current population sizes across Canada and present these as alternatives to current population estimates. Our models predict fewer grizzly bears in British Columbia but more bears in Canada than in the latest status review. These predictions can be used to assess population status, set limits for total human-caused mortality, and support conservation planning, but because our predictions are static, they cannot be used to assess population trend.

  13. The magnitude of ivacaftor effects on fluid secretion via R117H-CFTR channels: Human in vivo measurements

    PubMed Central

    Char, Jessica E.; Dunn, Colleen; Davies, Zoe; Milla, Carlos; Moss, Richard B.; Wine, Jeffrey J.

    2017-01-01

    We optically measured effects of orally available ivacaftor (Kalydeco®) on sweat rates of identified glands in 3 R117H subjects, each having a unique set of additional mutations, and compared them with 5 healthy control subjects tested contemporaneously. We injected β-adrenergic agonists intradermally to stimulate CFTR-dependent ‘C-sweat’ and methacholine to stimulate ‘M-sweat’, which persists in CF subjects. We focused on an R117H-7T/F508del subject who produced quantifiable C-sweat off ivacaftor and was available for 1 blinded, 3 off-ivacaftor, and 3 on-ivacaftor tests, allowing us to estimate the in vivo fold-increase in sweat rates produced by ivacaftor’s effect on the open probability (PO) of R117H-CFTR. Measured sweat rates must be corrected for sweat losses. With estimated sweat losses of 0.023 to 0.08 nl·gland-1·min-1, ivacaftor increased the average C-sweat rates 3–7 fold, and estimated function, as % of WT, was 4.1–12% off ivacaftor and 21.9–32% on ivacaftor (larger values reflect increased loss estimates). Based on single tests, an R117H-7T/R117H-7T subject showed 6–9% WT function off ivacaftor and 28–43% on ivacaftor. Repeat testing of an R117H-5T/F508del subject detected only a trace response to ivacaftor. We conclude that in vivo, R117H PO is strongly increased by ivacaftor, but channel number, mainly determined by variable deletion of exon 10, has a marked influence on outcomes. PMID:28419121

  14. Exposure measurement error in PM2.5 health effects studies: A pooled analysis of eight personal exposure validation studies

    PubMed Central

    2014-01-01

    Background Exposure measurement error is a concern in long-term PM2.5 health studies using ambient concentrations as exposures. We assessed error magnitude by estimating calibration coefficients as the association between personal PM2.5 exposures from validation studies and typically available surrogate exposures. Methods Daily personal and ambient PM2.5 measurements, and sulfate measurements when available, were compiled from nine cities over 2 to 12 days. True exposure was defined as personal exposure to PM2.5 of ambient origin. Since PM2.5 of ambient origin could only be determined for five cities, personal exposure to total PM2.5 was also considered. Surrogate exposures were estimated as ambient PM2.5 at the nearest monitor or predicted outside subjects’ homes. We estimated calibration coefficients by regressing true on surrogate exposures in random effects models. Results When monthly-averaged personal PM2.5 of ambient origin was used as the true exposure, calibration coefficients equaled 0.31 (95% CI: 0.14, 0.47) for the nearest monitor and 0.54 (95% CI: 0.42, 0.65) for outdoor home predictions. Between-city heterogeneity was not found for outdoor home PM2.5 for either true exposure. Heterogeneity was significant for nearest-monitor PM2.5 for both true exposures, but not after adjusting for city-average motor vehicle number for total personal PM2.5. Conclusions Calibration coefficients were <1, consistent with previously reported chronic health risks using nearest-monitor exposures being underestimated when ambient concentrations are the exposure of interest. Calibration coefficients were closer to 1 for outdoor home predictions, likely reflecting less spatial error. Further research is needed to determine how our findings can be incorporated in future health studies. PMID:24410940
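    A calibration coefficient of the kind described above can be sketched as the slope from regressing "true" personal exposure of ambient origin on a surrogate (nearest-monitor ambient PM2.5). The paper fits random-effects models across cities; a single-city ordinary least squares slope below just illustrates the idea, and the data are hypothetical (monthly averages, µg/m³).

```python
# Calibration-coefficient sketch: OLS slope of true on surrogate exposure.
# Data are hypothetical, not the pooled validation-study measurements.

def calibration_slope(surrogate, true):
    """Slope of the least-squares line of true exposure on surrogate."""
    n = len(surrogate)
    mx = sum(surrogate) / n
    my = sum(true) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(surrogate, true))
    var = sum((x - mx) ** 2 for x in surrogate)
    return cov / var

ambient = [8.0, 10.0, 12.0, 15.0, 18.0, 20.0]   # nearest-monitor PM2.5
personal = [3.1, 3.9, 4.6, 5.5, 6.8, 7.2]       # personal, ambient origin
slope = calibration_slope(ambient, personal)    # well below 1, as in the study
```

    A slope below 1 means ambient concentrations overstate exposure contrasts, so health-risk coefficients estimated against them are biased toward the null.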

  15. LINKING REGIONAL AEROSOL EMISSION CHANGES WITH MULTIPLE IMPACT MEASURES THROUGH DIRECT AND CLOUD-RELATED FORCING ESTIMATES

    EPA Science Inventory

    Outputs expected from this project include improved confidence in direct radiative forcing and cloud radiative forcing, particularly over the United States and with regard to United States emissions; publicly available, documented data sets including emission inventories of siz...

  16. Continuous-time interval model identification of blood glucose dynamics for type 1 diabetes

    NASA Astrophysics Data System (ADS)

    Kirchsteiger, Harald; Johansson, Rolf; Renard, Eric; del Re, Luigi

    2014-07-01

    While good physiological models of the glucose metabolism in type 1 diabetic patients are well known, their parameterisation is difficult. The high intra-patient variability observed is a further major obstacle. This holds for data-based models too, so that no good patient-specific models are available. Against this background, this paper proposes the use of interval models to cover the different metabolic conditions. The control-oriented models contain a carbohydrate and insulin sensitivity factor to be used for insulin bolus calculators directly. Available clinical measurements were sampled on an irregular schedule which prompts the use of continuous-time identification, also for the direct estimation of the clinically interpretable factors mentioned above. An identification method is derived and applied to real data from 28 diabetic patients. Model estimation was done on a clinical data-set, whereas validation results shown were done on an out-of-clinic, everyday life data-set. The results show that the interval model approach allows a much more regular estimation of the parameters and avoids physiologically incompatible parameter estimates.

  17. Nearshore Measurements From a Small UAV.

    NASA Astrophysics Data System (ADS)

    Holman, R. A.; Brodie, K. L.; Spore, N.

    2016-02-01

    Traditional measurements of nearshore hydrodynamics and evolving bathymetry are expensive and dangerous and must be frequently repeated to track the rapid changes of typical ocean beaches. However, extensive research into remote sensing methods using cameras or radars mounted on fixed towers has resulted in increasingly mature algorithms for estimating bathymetry, currents and wave characteristics. This naturally raises questions about how easily and effectively these algorithms can be applied to optical data from low-cost, easily available UAV platforms. This paper will address the characteristics and quality of data taken from a small, low-cost UAV, the DJI Phantom. In particular, we will study the stability of imagery from a vehicle 'parked' at 300 feet altitude, methods to stabilize remaining wander, and the quality of nearshore bathymetry estimates from the resulting image time series, computed using the cBathy algorithm. Estimates will be compared to ground truth surveys collected at the Field Research Facility at Duck, NC.

  18. Hyper-X Post-Flight Trajectory Reconstruction

    NASA Technical Reports Server (NTRS)

    Karlgaard, Christopher D.; Tartabini, Paul V.; Blanchard, RobertC.; Kirsch, Michael; Toniolo, Matthew D.

    2004-01-01

    This paper discusses the formulation and development of a trajectory reconstruction tool for the NASA X-43A/Hyper-X high speed research vehicle, and its implementation for the reconstruction and analysis of flight test data. Extended Kalman filtering techniques are employed to reconstruct the trajectory of the vehicle, based upon numerical integration of inertial measurement data along with redundant measurements of the vehicle state. The equations of motion are formulated in order to include the effects of several systematic error sources, whose values may also be estimated by the filtering routines. Additionally, smoothing algorithms have been implemented in which the final value of the state (or an augmented state that includes other systematic error parameters to be estimated) and covariance are propagated back to the initial time to generate the best-estimated trajectory, based upon all available data. The methods are applied to the problem of reconstructing the trajectory of the Hyper-X vehicle from flight data.

  19. The benefit of using additional hydrological information from earth observations and reanalysis data on water allocation decisions in irrigation districts

    NASA Astrophysics Data System (ADS)

    Kaune, Alexander; López, Patricia; Werner, Micha; de Fraiture, Charlotte

    2017-04-01

    Hydrological information on water availability and demand is vital for sound water allocation decisions in irrigation districts, particularly in times of water scarcity. However, sub-optimal water allocation decisions are often taken with incomplete hydrological information, which may lead to agricultural production loss. In this study we evaluate the benefit of additional hydrological information from earth observations and reanalysis data in supporting decisions in irrigation districts. Current water allocation decisions were emulated through heuristic operational rules for water-scarce and water-abundant conditions in the selected irrigation districts. The Dynamic Water Balance Model based on the Budyko framework was forced with precipitation datasets from interpolated ground measurements, remote sensing and reanalysis data, to determine the water availability for irrigation. Irrigation demands were estimated from potential evapotranspiration and crop coefficients for the crops grown, adjusted with the interpolated precipitation data. Decisions made using both current and additional hydrological information were evaluated through the rate at which sub-optimal decisions were made. Decisions made using an amended set of decision rules that exploit additional information on demand in the districts were also evaluated. Results show that sub-optimal decisions can be reduced in the planning phase through improved estimates of water availability. Where there are reliable observations of water availability through gauging stations, the benefit of the improved precipitation data lies in the improved estimates of demand, equally leading to a reduction of sub-optimal decisions.

  20. Estimating prevalence of coronary heart disease for small areas using collateral indicators of morbidity.

    PubMed

    Congdon, Peter

    2010-01-01

    Different indicators of morbidity for chronic disease may not necessarily be available at a disaggregated spatial scale (e.g., for small areas with populations under 10 thousand). Instead, certain indicators may only be available at a more highly aggregated spatial scale; for example, deaths may be recorded for small areas, but disease prevalence only at a considerably higher spatial scale. Nevertheless, prevalence estimates at small area level are important for assessing health need. An instance is provided by England, where deaths and hospital admissions for coronary heart disease are available for small areas known as wards, but prevalence is only available for relatively large health authority areas. To estimate CHD prevalence at small area level in such a situation, a shared random effect method is proposed that pools information regarding spatial morbidity contrasts over different indicators (deaths, hospitalizations, prevalence). The shared random effect approach also incorporates differences between small areas in known risk factors (e.g., income, ethnic structure). A Poisson-multinomial equivalence may be used to ensure small area prevalence estimates sum to the known higher area total. An illustration is provided by data for London using hospital admissions and CHD deaths at ward level, together with CHD prevalence totals for considerably larger local health authority areas. The shared random effect involved a spatially correlated common factor that accounts for clustering in latent risk factors and also provides a summary measure of small area CHD morbidity.

  1. Estimating Prevalence of Coronary Heart Disease for Small Areas Using Collateral Indicators of Morbidity

    PubMed Central

    Congdon, Peter

    2010-01-01

    Different indicators of morbidity for chronic disease may not necessarily be available at a disaggregated spatial scale (e.g., for small areas with populations under 10 thousand). Instead, certain indicators may only be available at a more highly aggregated spatial scale; for example, deaths may be recorded for small areas, but disease prevalence only at a considerably higher spatial scale. Nevertheless, prevalence estimates at small area level are important for assessing health need. An instance is provided by England, where deaths and hospital admissions for coronary heart disease are available for small areas known as wards, but prevalence is only available for relatively large health authority areas. To estimate CHD prevalence at small area level in such a situation, a shared random effect method is proposed that pools information regarding spatial morbidity contrasts over different indicators (deaths, hospitalizations, prevalence). The shared random effect approach also incorporates differences between small areas in known risk factors (e.g., income, ethnic structure). A Poisson-multinomial equivalence may be used to ensure small area prevalence estimates sum to the known higher area total. An illustration is provided by data for London using hospital admissions and CHD deaths at ward level, together with CHD prevalence totals for considerably larger local health authority areas. The shared random effect involved a spatially correlated common factor that accounts for clustering in latent risk factors and also provides a summary measure of small area CHD morbidity. PMID:20195439

  2. Human casualties in earthquakes: Modelling and mitigation

    USGS Publications Warehouse

    Spence, R.J.S.; So, E.K.M.

    2011-01-01

    Earthquake risk modelling is needed for the planning of post-event emergency operations, for the development of insurance schemes, for the planning of mitigation measures in the existing building stock, and for the development of appropriate building regulations; in all of these applications estimates of casualty numbers are essential. But there are many questions about casualty estimation which are still poorly understood. These questions relate to the causes and nature of the injuries and deaths, and the extent to which they can be quantified. This paper looks at the evidence on these questions from recent studies. It then reviews casualty estimation models available, and finally compares the performance of some casualty models in making rapid post-event casualty estimates in recent earthquakes.

  3. Absolute colorimetric characterization of a DSLR camera

    NASA Astrophysics Data System (ADS)

    Guarnera, Giuseppe Claudio; Bianco, Simone; Schettini, Raimondo

    2014-03-01

    A simple but effective technique for absolute colorimetric camera characterization is proposed. It offers a large dynamic range, requiring just a single off-the-shelf target and a commonly available controllable light source for the characterization. The characterization task is broken down into two modules, devoted respectively to absolute luminance estimation and to colorimetric characterization matrix estimation. The characterized camera can be effectively used as a tele-colorimeter, giving an absolute estimation of the XYZ data in cd/m². The user is only required to vary the f-number of the camera lens or the exposure time t, to better exploit the sensor dynamic range. The estimated absolute tristimulus values closely match the values measured by a professional spectro-radiometer.
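    The core idea of the luminance-estimation module can be sketched with the general photographic exposure relation: for a linear sensor response, scene luminance scales with the pixel value times N²/t (f-number squared over exposure time), which is why varying N or t extends the usable dynamic range. The calibration constant k below is hypothetical; in the paper it would come from the characterization against a reference target.

```python
# Absolute-luminance sketch under the standard exposure relation.
# The constant k is a hypothetical calibration value, not the paper's.

def absolute_luminance(pixel_value, f_number, exposure_time, k):
    """Estimated luminance (cd/m^2) for a linear (RAW) pixel value."""
    if exposure_time <= 0:
        raise ValueError("exposure time must be positive")
    return k * pixel_value * f_number ** 2 / exposure_time

# The same scene point measured at two exposure settings should agree:
L1 = absolute_luminance(pixel_value=0.50, f_number=4.0, exposure_time=1 / 60, k=2.5)
L2 = absolute_luminance(pixel_value=0.25, f_number=4.0, exposure_time=1 / 120, k=2.5)
```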

  4. Validation of Lower Tier Exposure Tools Used for REACH: Comparison of Tools Estimates With Available Exposure Measurements.

    PubMed

    van Tongeren, Martie; Lamb, Judith; Cherrie, John W; MacCalman, Laura; Basinas, Ioannis; Hesse, Susanne

    2017-10-01

    Tier 1 exposure tools recommended for use under REACH are designed to easily identify situations that may pose a risk to health through conservative exposure predictions. However, no comprehensive evaluation of the performance of the lower tier tools has previously been carried out. The ETEAM project aimed to evaluate several lower tier exposure tools (ECETOC TRA, MEASE, and EMKG-EXPO-TOOL) as well as one higher tier tool (STOFFENMANAGER®). This paper describes the results of the external validation of tool estimates using measurement data. Measurement data were collected from a range of providers in both Europe and the United States, together with contextual information. Both individual and aggregated measurement data were obtained. The contextual information was coded into the tools to obtain exposure estimates. Results were expressed as the percentage of measurements exceeding the tool estimates and presented by exposure category (non-volatile liquid, volatile liquid, metal abrasion, metal processing, and powder handling). We also explored tool performance for different process activities as well as different scenario conditions and exposure levels. In total, results from nearly 4000 measurements were obtained, with the majority for the use of volatile liquids and powder handling. The comparisons of measurement results with tool estimates suggest that the tools are generally conservative. However, the tools were more conservative when estimating exposure from powder handling than from volatile liquids and other exposure categories. In addition, results suggested that tool performance varies between process activities and scenario conditions. For example, tools were less conservative when estimating exposure during activities involving tabletting, compression, extrusion, pelletisation, or granulation (common process activity PROC14) and transfer of substance or mixture (charging and discharging) at non-dedicated facilities (PROC8a; powder handling only).
With the exception of STOFFENMANAGER® (for estimating exposure during powder handling), the tools were less conservative for scenarios with lower estimated exposure levels. This is the most comprehensive evaluation of the performance of REACH exposure tools carried out to date. The results show that, although generally conservative, the tools may not always achieve the performance specified in the REACH guidance, i.e. using the 75th or 90th percentile of the exposure distribution for the risk characterisation. Ongoing development, adjustment, and recalibration of the tools with new measurement data are essential to ensure adequate characterisation and control of worker exposure to hazardous substances. © The Author 2017. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  5. Esophageal Manometry and Regional Transpulmonary Pressure in Lung Injury.

    PubMed

    Yoshida, Takeshi; Amato, Marcelo B P; Grieco, Domenico Luca; Chen, Lu; Lima, Cristhiano A S; Roldan, Rollin; Morais, Caio C A; Gomes, Susimeire; Costa, Eduardo L V; Cardoso, Paulo F G; Charbonney, Emmanuel; Richard, Jean-Christophe M; Brochard, Laurent; Kavanagh, Brian P

    2018-04-15

    Esophageal manometry is the clinically available method to estimate pleural pressure, thus enabling calculation of transpulmonary pressure (Pl). However, many concerns make it uncertain in which lung region esophageal manometry reflects local Pl. To determine the accuracy of esophageal pressure (Pes) and in which regions esophageal manometry reflects pleural pressure (Ppl) and Pl; to assess whether lung stress in nondependent regions can be estimated at end-inspiration from Pl. In lung-injured pigs (n = 6) and human cadavers (n = 3), Pes was measured across a range of positive end-expiratory pressure, together with directly measured Ppl in nondependent and dependent pleural regions. All measurements were obtained with minimal nonstressed volumes in the pleural sensors and esophageal balloons. Expiratory and inspiratory Pl was calculated by subtracting local Ppl or Pes from airway pressure; inspiratory Pl was also estimated by subtracting Ppl (calculated from chest wall and respiratory system elastance) from the airway plateau pressure. In pigs and human cadavers, expiratory and inspiratory Pl using Pes closely reflected values in dependent to middle lung (adjacent to the esophagus). Inspiratory Pl estimated from elastance ratio reflected the directly measured nondependent values. These data support the use of esophageal manometry in acute respiratory distress syndrome. Assuming correct calibration, expiratory Pl derived from Pes reflects Pl in dependent to middle lung, where atelectasis usually predominates; inspiratory Pl estimated from elastance ratio may indicate the highest level of lung stress in nondependent "baby" lung, where it is vulnerable to ventilator-induced lung injury.
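    The pressure arithmetic described above can be sketched directly: transpulmonary pressure Pl is airway pressure minus esophageal (pleural-surrogate) pressure, and the elastance-ratio estimate for nondependent lung multiplies plateau pressure by El/Ers (lung elastance over respiratory-system elastance). All values below are hypothetical (cmH2O).

```python
# Transpulmonary-pressure sketch with illustrative, hypothetical values.

def transpulmonary(p_airway, p_esophageal):
    """Directly derived Pl = Paw - Pes (cmH2O)."""
    return p_airway - p_esophageal

def pl_elastance_ratio(p_plateau, e_lung, e_respiratory_system):
    """Inspiratory Pl estimated as Pplat * (El / Ers), for nondependent lung."""
    return p_plateau * e_lung / e_respiratory_system

pl_exp = transpulmonary(p_airway=10.0, p_esophageal=8.0)   # end-expiratory Pl
pl_insp = pl_elastance_ratio(p_plateau=28.0, e_lung=20.0,
                             e_respiratory_system=28.0)    # elastance-ratio Pl
```

    As the abstract argues, the two estimates answer different questions: the direct subtraction reflects dependent-to-middle lung near the esophagus, while the elastance-ratio value tracks the higher stress in nondependent lung.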

  6. Evaluation of leaf wetness duration models for operational use in strawberry disease-warning systems in four US states.

    PubMed

    Montone, Verona O; Fraisse, Clyde W; Peres, Natalia A; Sentelhas, Paulo C; Gleason, Mark; Ellis, Michael; Schnabel, Guido

    2016-11-01

    Leaf wetness duration (LWD) plays a key role in disease development and is often used as an input in disease-warning systems. LWD is often estimated using mathematical models, since measurement by sensors is rarely available and/or reliable. A strawberry disease-warning system called "Strawberry Advisory System" (SAS) is used by growers in Florida, USA, in deciding when to spray their strawberry fields to control anthracnose and Botrytis fruit rot. Currently, SAS is implemented at six locations, where reliable LWD sensors are deployed. A robust LWD model would facilitate SAS expansion from Florida to other regions where reliable LW sensors are not available. The objective of this study was to evaluate the use of mathematical models to estimate LWD and the timing of spray recommendations in comparison to on-site LWD measurements. Specific objectives were to (i) compare model-estimated and observed LWD and the resulting differences in timing and number of fungicide spray recommendations, (ii) evaluate the effect of weather station sensor precision on LWD model performance, and (iii) compare LWD model performance across four states in the USA. The LWD models evaluated were the classification and regression tree (CART), dew point depression (DPD), number of hours with relative humidity equal to or greater than 90% (NHRH ≥90%), and Penman-Monteith (P-M). The P-M model was expected to have the lowest errors, since it is physically based and thus portable. Indeed, the P-M model estimated LWD most accurately (MAE <2 h) at a weather station with high-precision sensors but was the least accurate when lower-precision sensors of relative humidity and estimated net radiation (based on solar radiation and temperature) were used (MAE = 3.7 h).
The CART model was the most robust for estimating LWD and for advising growers on fungicide-spray timing for anthracnose and Botrytis fruit rot control, and is therefore the model we recommend for expanding the strawberry disease-warning system beyond Florida to other locations where weather stations may be deployed with lower-precision sensors and net radiation observations are not available.
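    The simplest of the LWD models compared above, NHRH ≥90%, estimates leaf wetness duration as the count of hours with relative humidity at or above 90%. The hourly RH readings below are hypothetical.

```python
# NHRH >= 90% leaf wetness duration model. Hourly RH values are illustrative.

def lwd_nhrh(hourly_rh, threshold=90.0):
    """Leaf wetness duration (h): hours with RH >= threshold (%)."""
    return sum(1 for rh in hourly_rh if rh >= threshold)

# A hypothetical night-to-morning sequence of hourly RH readings (%):
rh = [78, 82, 86, 90, 93, 95, 96, 97, 95, 92, 88, 80]
hours_wet = lwd_nhrh(rh)   # hours meeting the wetness criterion
```

    Its simplicity is also its weakness: a single RH threshold ignores radiation and wind, which is why the physically based P-M model can do better when sensor quality allows.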

  7. Evaluation of leaf wetness duration models for operational use in strawberry disease-warning systems in four US states

    NASA Astrophysics Data System (ADS)

    Montone, Verona O.; Fraisse, Clyde W.; Peres, Natalia A.; Sentelhas, Paulo C.; Gleason, Mark; Ellis, Michael; Schnabel, Guido

    2016-11-01

    Leaf wetness duration (LWD) plays a key role in disease development and is often used as an input in disease-warning systems. LWD is often estimated using mathematical models, since measurement by sensors is rarely available and/or reliable. A strawberry disease-warning system called "Strawberry Advisory System" (SAS) is used by growers in Florida, USA, in deciding when to spray their strawberry fields to control anthracnose and Botrytis fruit rot. Currently, SAS is implemented at six locations, where reliable LWD sensors are deployed. A robust LWD model would facilitate SAS expansion from Florida to other regions where reliable LW sensors are not available. The objective of this study was to evaluate the use of mathematical models to estimate LWD and time of spray recommendations in comparison to on site LWD measurements. Specific objectives were to (i) compare model estimated and observed LWD and resulting differences in timing and number of fungicide spray recommendations, (ii) evaluate the effects of weather station sensors precision on LWD models performance, and (iii) compare LWD models performance across four states in the USA. The LWD models evaluated were the classification and regression tree (CART), dew point depression (DPD), number of hours with relative humidity equal or greater than 90 % (NHRH ≥90 %), and Penman-Monteith (P-M). P-M model was expected to have the lowest errors, since it is a physically based and thus portable model. Indeed, the P-M model estimated LWD most accurately (MAE <2 h) at a weather station with high precision sensors but was the least accurate when lower precision sensors of relative humidity and estimated net radiation (based on solar radiation and temperature) were used (MAE = 3.7 h). 
The CART model was the most robust for estimating LWD and for advising growers on fungicide-spray timing for anthracnose and Botrytis fruit rot control and is therefore the model we recommend for expanding the strawberry disease warning beyond Florida, to other locations where weather stations may be deployed with lower precision sensors, and net radiation observations are not available.

  8. Fusing metabolomics data sets with heterogeneous measurement errors

    PubMed Central

    Waaijenborg, Sandra; Korobko, Oksana; Willems van Dijk, Ko; Lips, Mirjam; Hankemeier, Thomas; Wilderjans, Tom F.; Smilde, Age K.

    2018-01-01

    Combining different metabolomics platforms can contribute significantly to the discovery of complementary processes expressed under different conditions. However, analysing the fused data might be hampered by differences in their quality. In metabolomics data, one often observes that measurement errors increase with increasing measurement level and that different platforms have different measurement error variance. In this paper we compare three different approaches to correct for measurement error heterogeneity: transformation of the raw data, weighted filtering before modelling, and a modelling approach using a weighted sum of residuals. For an illustration of these approaches we analyse data from healthy obese and diabetic obese individuals, obtained from two metabolomics platforms. In conclusion, the filtering and modelling approaches, which both estimate a model of the measurement error, did not outperform the data transformation approaches for this application. This is probably due to the limited difference in measurement error and the fact that estimation of measurement error models is unstable given the small number of repeats available. A transformation of the data improves the classification of the two groups. PMID:29698490

  9. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting.

    PubMed

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-10-02

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method.
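The conditional entropy measure described above can be computed from any discrete posterior over candidate locations, with no ground truth required. A minimal sketch (not the authors' implementation):

```python
import math

# Sketch: Shannon entropy of a normalized posterior over candidate
# locations. A peaked posterior yields low entropy (confident estimate);
# a flat posterior yields high entropy (uncertain estimate).

def posterior_entropy(unnormalized):
    """Entropy in bits of a discrete posterior given unnormalized weights."""
    z = sum(unnormalized)
    probs = [w / z for w in unnormalized]
    return -sum(p * math.log2(p) for p in probs if p > 0)

confident = posterior_entropy([0.97, 0.01, 0.01, 0.01])  # peaked
uncertain = posterior_entropy([0.25, 0.25, 0.25, 0.25])  # flat
```

A uniform posterior over four cells gives exactly 2 bits, the maximum for four candidates, which matches the paper's point that low entropy should accompany small location error for a consistent estimator.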

  11. Machine Learning on Images: Combining Passive Microwave and Optical Data to Estimate Snow Water Equivalent

    NASA Astrophysics Data System (ADS)

    Dozier, J.; Tolle, K.; Bair, N.

    2014-12-01

    We have a problem that may be a specific example of a generic one. The task is to produce spatiotemporally distributed estimates of snow water equivalent (SWE) in snow-dominated mountain environments, including those that lack on-the-ground measurements. Several independent methods exist, but all are problematic. The remotely sensed date of disappearance of snow from each pixel can be combined with a calculation of melt to reconstruct the accumulated SWE for each day back to the last significant snowfall. Comparison with streamflow measurements in mountain ranges where such data are available shows this method to be accurate, but the big disadvantage is that SWE can only be calculated retroactively, after the snow disappears, and even then only for areas with little accumulation during the melt season. Passive microwave sensors offer real-time global SWE estimates but suffer from several issues, notably signal loss in wet snow or in forests, saturation in deep snow, subpixel variability in the mountains owing to the large (~25 km) pixel size, and SWE overestimation in the presence of large grains such as depth hoar and surface hoar. Throughout the winter and spring, snow-covered area can be measured at sub-km spatial resolution with optical sensors, with accuracy and timeliness improved by interpolating and smoothing across multiple days. So the question is: how can we establish the relationship between reconstruction (available only after the snow goes away) and passive microwave and optical data to accurately estimate SWE during the snow season, when the information can help forecast spring runoff? Linear regression provides one answer, but can modern machine learning techniques (used to persuade people to click on web advertisements) adapt to improve forecasts of floods and droughts in areas where more than one billion people depend on snowmelt for their water resources?
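The reconstruction idea described above can be sketched in a few lines: once snow disappears from a pixel, the daily melt series is summed backwards so that SWE on any earlier day equals the melt still to come. The daily melt values below are hypothetical:

```python
# Illustrative sketch of SWE reconstruction (not the authors' code):
# given daily melt up to the snow-disappearance date, SWE on day d is
# the total melt remaining from day d onward.

def reconstruct_swe(daily_melt_mm):
    """Return SWE (mm) for each day, given melt per day until disappearance."""
    swe = []
    remaining = sum(daily_melt_mm)  # total melt = SWE at the start
    for melt in daily_melt_mm:
        swe.append(remaining)
        remaining -= melt
    return swe

# Hypothetical melt season: melt rates increase toward disappearance.
swe = reconstruct_swe([5.0, 10.0, 15.0, 20.0])
```

This also makes the stated disadvantage concrete: nothing can be computed until the final melt value is known, i.e., until the snow is gone.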

  12. Household availability of ultra-processed foods and obesity in nineteen European countries.

    PubMed

    Monteiro, Carlos Augusto; Moubarac, Jean-Claude; Levy, Renata Bertazzi; Canella, Daniela Silva; Louzada, Maria Laura da Costa; Cannon, Geoffrey

    2018-01-01

    To assess household availability of NOVA food groups in nineteen European countries and to analyse the association between availability of ultra-processed foods and prevalence of obesity. Ecological, cross-sectional study across Europe. Estimates of ultra-processed foods were calculated from national household budget surveys conducted between 1991 and 2008. Estimates of obesity prevalence were obtained from national surveys undertaken near the budget survey time. Across the nineteen countries, median average household availability amounted to 33.9% of total purchased dietary energy for unprocessed or minimally processed foods, 20.3% for processed culinary ingredients, 19.6% for processed foods and 26.4% for ultra-processed foods. The average household availability of ultra-processed foods ranged from 10.2% in Portugal and 13.4% in Italy to 46.2% in Germany and 50.4% in the UK. A significant positive association was found between national household availability of ultra-processed foods and national prevalence of obesity among adults. After adjustment for national income, prevalence of physical inactivity, prevalence of smoking, whether obesity prevalence was measured or self-reported, and the time lag between estimates of household food availability and obesity, each percentage point increase in household availability of ultra-processed foods was associated with an increase of 0.25 percentage points in obesity prevalence. The study contributes to a growing literature showing that the consumption of ultra-processed foods is associated with an increased risk of diet-related non-communicable diseases. Its findings reinforce the need for public policies and actions that promote consumption of unprocessed or minimally processed foods and make ultra-processed foods less available and affordable.
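The reported association is an ecological linear regression of national obesity prevalence on ultra-processed food availability. A minimal sketch of the slope calculation, on made-up country-level numbers rather than the study's data:

```python
# Hedged sketch with hypothetical data: ordinary least squares slope of
# obesity prevalence (%) on ultra-processed food availability (% of
# dietary energy), the kind of ecological regression the study reports.

def ols_slope(x, y):
    """Slope of the least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

upf_availability = [10.0, 20.0, 30.0, 40.0, 50.0]  # hypothetical countries
obesity = [12.0, 15.0, 17.0, 20.0, 22.0]
slope = ols_slope(upf_availability, obesity)  # percentage points per point
```

The study's adjusted estimate (0.25 percentage points of obesity per percentage point of availability) came from a multivariable model, not a simple slope like this one.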

  13. Using environmental tracers to determine the relative importance of travel times in the unsaturated and saturated zones for the delay of nitrate reduction measures

    NASA Astrophysics Data System (ADS)

    Gerber, Christoph; Purtschert, Roland; Hunkeler, Daniel; Hug, Rainer; Sültenfuss, Jürgen

    2018-06-01

    Groundwater quality in many regions with intense agriculture has deteriorated due to the leaching of nitrate and other agricultural pollutants. Modified agricultural practices can reduce the input of nitrate to groundwater bodies, but it is crucial to determine the time span over which these measures become effective at reducing nitrate levels in pumping wells. Such estimates can be obtained from hydrogeological modeling or lumped-parameter models (LPM) in combination with environmental tracer data. Two challenges in such tracer-based estimates are (i) accounting for the different modes of transport in the unsaturated zone (USZ), and (ii) assessing uncertainties. Here we extend a recently published Bayesian inference scheme for simple LPMs to include an explicit USZ model and apply it to the Dünnerngäu aquifer, Switzerland. Compared to a previous estimate of travel times in the aquifer based on a 2D hydrogeological model, our approach provides a more accurate assessment of the dynamics of nitrate concentrations in the aquifer. We find that including tracer measurements (3H/3He, 85Kr, 39Ar, 4He) reduces uncertainty in nitrate predictions if nitrate time series at wells are not available or short, but does not necessarily lead to better predictions if long nitrate time series are available. Additionally, the combination of tracer data with nitrate time series allows for a separation of the travel times in the unsaturated and saturated zone.
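The lumped-parameter models (LPMs) referred to above convolve a tracer's atmospheric input history with a transit-time distribution and radioactive decay. A minimal sketch with an exponential transit-time distribution and hypothetical parameters (this is not the study's calibrated model):

```python
import math

# Minimal lumped-parameter-model sketch: with an exponential transit-time
# distribution g(tau) = exp(-tau/T)/T, the predicted tracer concentration
# at a well is the input history convolved with g and with decay.

def lpm_output(c_in, mean_tt, decay_const, dt=1.0):
    """c_in[k] is the input concentration k*dt years before sampling."""
    total = 0.0
    for k, c in enumerate(c_in):
        tau = k * dt
        weight = math.exp(-tau / mean_tt) / mean_tt   # exponential TTD
        total += c * weight * math.exp(-decay_const * tau) * dt
    return total

# Sanity check: a constant, non-decaying input should emerge essentially
# unchanged; the small excess is discretization error of the integral.
c_out = lpm_output([1.0] * 200, mean_tt=10.0, decay_const=0.0)
```

Fitting the mean transit time (and a shape parameter for fancier distributions) to multiple tracers with different decay constants is what lets 3H/3He, 85Kr, 39Ar and 4He jointly constrain the travel-time distribution.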

  14. Repeated Measurements on Distinct Scales With Censoring—A Bayesian Approach Applied to Microarray Analysis of Maize

    PubMed Central

    Love, Tanzy; Carriquiry, Alicia

    2009-01-01

    We analyze data collected in a somatic embryogenesis experiment carried out on Zea mays at Iowa State University. The main objective of the study was to identify the set of genes in maize that actively participate in embryo development. Embryo tissue was sampled and analyzed at various time periods and under different mediums and light conditions. As is the case in many microarray experiments, the operator scanned each slide multiple times to find the slide-specific ‘optimal’ laser and sensor settings. The multiple readings of each slide are repeated measurements on different scales with differing censoring; they cannot be considered to be replicate measurements in the traditional sense. Yet it has been shown that the choice of reading can have an impact on genetic inference. We propose a hierarchical modeling approach to estimating gene expression that combines all available readings on each spot and accounts for censoring in the observed values. We assess the statistical properties of the proposed expression estimates using a simulation experiment. As expected, combining all available scans using an approach with good statistical properties results in expression estimates with noticeably lower bias and root mean squared error relative to other approaches that have been proposed in the literature. Inferences drawn from the somatic embryogenesis experiment, which motivated this work, changed drastically depending on whether the data were analyzed using the standard approaches or using the methodology we propose. PMID:19960120

  15. Efficacy of denture adhesives in maxillary dentures using gnathodynamometry: a comparative study.

    PubMed

    Polyzois, Gregory; Lagouvardos, Panagiotis; Frangou, Maria; Stefaniotis, Theodoros

    2011-07-01

    The purpose of this study was to investigate the effect of four commercially available denture adhesives on the incisal and premolar dislodgement forces of maxillary complete dentures by using an electronic and a disposable gnathodynamometer, and to compare the measured forces for differences. This study was conducted with 12 complete maxillary denture wearers. Four commercially available denture adhesives, Super Corega(®), Corega Ultra(®), Super Corega Powder(®) and Fittydent Cationic(®), were investigated. The testing protocol and sequence included baseline measurements without adhesives (control) for previous and new dentures, followed by replicate measurements with each of the four adhesives. Maximum dislodgement forces were recorded at two sites, between the central incisors and at the left 2nd premolars, using an electronic and a disposable gnathodynamometer. To estimate the effect of the different adhesives on the dislodgement forces, data were analyzed by 2- and 3-way ANOVA, while Bland-Altman and Mountain plots were used to estimate the agreement between the two devices. ANOVAs indicated significant differences between adhesives (p < 0.05), denture types (p < 0.05) and biting sites (p < 0.05) with both devices. Bland-Altman and Mountain plots indicated poor agreement between the two devices. It was concluded that denture adhesives increase the denture dislodgement forces, but with differences among them. The two devices do not agree closely with each other, but each one alone is useful for estimating dislodgement forces in clinical practice and research.
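The Bland-Altman analysis used above summarizes device agreement by the mean difference (bias) and the 95% limits of agreement, mean ± 1.96 × SD of the paired differences. A sketch on hypothetical paired readings (not the study's measurements):

```python
import statistics

# Sketch of a Bland-Altman agreement summary for two devices measuring
# the same quantity: bias and 95% limits of agreement of the differences.

def bland_altman(device_a, device_b):
    """Return (bias, lower limit, upper limit) for paired readings."""
    diffs = [a - b for a, b in zip(device_a, device_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)          # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

electronic = [12.1, 14.3, 10.8, 15.0, 11.2]   # hypothetical forces (N)
disposable = [11.5, 13.0, 11.2, 13.8, 10.1]
bias, lo, hi = bland_altman(electronic, disposable)
```

Wide limits of agreement relative to the clinically relevant force range are what would lead to a "poor agreement" conclusion like the one reported.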

  16. [Analysis of variance of repeated data measured by water maze with SPSS].

    PubMed

    Qiu, Hong; Jin, Guo-qin; Jin, Ru-feng; Zhao, Wei-kang

    2007-01-01

    To introduce the method of analyzing repeated data measured by water maze with SPSS 11.0, and to offer a reference statistical method to clinical and basic medicine researchers who use repeated-measures designs. The repeated-measures and multivariate analysis of variance (ANOVA) procedures of the general linear model in SPSS were used, with pairwise comparisons among different groups and different measurement times. First, Mauchly's test of sphericity should be used to judge whether there are relations among the repeatedly measured data. If any (P

  17. Scheduling job shop - A case study

    NASA Astrophysics Data System (ADS)

    Abas, M.; Abbas, A.; Khan, W. A.

    2016-08-01

    Scheduling in a job shop is important for efficient utilization of machines in the manufacturing industry. A number of algorithms are available for scheduling jobs, depending on the machine tools, indirect consumables and the jobs to be processed. In this paper a case study is presented for scheduling jobs when parts are processed on the available machines. Through time and motion study, setup time and operation time were measured as total processing time for a variety of products with different manufacturing processes. Based on due dates, different levels of priority were assigned to the jobs, and the jobs were scheduled on the basis of priority. Using the measured processing times, the processing times of some new jobs were estimated, and for efficient utilization of the available machines an algorithm is proposed and validated.
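Due-date-driven priority dispatching of the kind described can be sketched simply: sort jobs by due date, process them in that order, and track each job's completion time against its deadline. The job names, processing times and due dates below are hypothetical:

```python
# Illustrative earliest-due-date dispatching sketch (not the paper's
# algorithm): jobs are sequenced by due date on a single machine, and
# each job's finish time is the running total of processing times.

def schedule_by_due_date(jobs):
    """jobs: list of (name, processing_time, due_date).

    Returns [(name, finish_time, on_time?)] in dispatch order.
    """
    order = sorted(jobs, key=lambda job: job[2])  # earliest due date first
    t, plan = 0, []
    for name, proc, due in order:
        t += proc
        plan.append((name, t, t <= due))
    return plan

plan = schedule_by_due_date([("A", 4, 10), ("B", 2, 3), ("C", 3, 9)])
```

Earliest-due-date sequencing is a classic dispatching rule that minimizes the maximum lateness on a single machine, which is one reason due-date-based priorities are a natural starting point.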

  18. Airborne gamma radiation soil moisture measurements over short flight lines

    NASA Technical Reports Server (NTRS)

    Peck, Eugene L.; Carrol, Thomas R.; Lipinski, Daniel M.

    1990-01-01

    Results are presented on airborne gamma radiation measurements of soil moisture condition, carried out along short flight lines as part of the First International Satellite Land Surface Climatology Project Field Experiment (FIFE). Data were collected over an area in Kansas during the summers of 1987 and 1989. The airborne surveys, together with ground measurements, provide the most comprehensive set of airborne and ground truth data available in the U.S. for calibrating and evaluating airborne gamma flight lines. Analysis showed that, using standard National Weather Service weights for the K, Tl, and Gc radiation windows, the airborne soil moisture estimates for the FIFE lines had a root mean square error of no greater than 3.0 percent soil moisture. The soil moisture estimates for sections having acquisition time of at least 15 sec were found to be reliable.

  19. Event-Based H∞ State Estimation for Time-Varying Stochastic Dynamical Networks With State- and Disturbance-Dependent Noises.

    PubMed

    Sheng, Li; Wang, Zidong; Zou, Lei; Alsaadi, Fuad E

    2017-10-01

    In this paper, the event-based finite-horizon H∞ state estimation problem is investigated for a class of discrete time-varying stochastic dynamical networks with state- and disturbance-dependent noises [also called (x,v)-dependent noises]. An event-triggered scheme is proposed to decrease the frequency of the data transmission between the sensors and the estimator, where the signal is transmitted only when certain conditions are satisfied. The purpose of the problem addressed is to design a time-varying state estimator in order to estimate the network states through available output measurements. By employing the completing-the-square technique and the stochastic analysis approach, sufficient conditions are established to ensure that the error dynamics of the state estimation satisfies a prescribed H∞ performance constraint over a finite horizon. The desired estimator parameters can be designed via solving coupled backward recursive Riccati difference equations. Finally, a numerical example is exploited to demonstrate the effectiveness of the developed state estimation scheme.

  20. Estimating the coverage of mental health programmes: a systematic review

    PubMed Central

    De Silva, Mary J; Lee, Lucy; Fuhr, Daniela C; Rathod, Sujit; Chisholm, Dan; Schellenberg, Joanna; Patel, Vikram

    2014-01-01

    Background The large treatment gap for people suffering from mental disorders has led to initiatives to scale up mental health services. In order to track progress, estimates of programme coverage, and changes in coverage over time, are needed. Methods Systematic review of mental health programme evaluations that assess coverage, measured either as the proportion of the target population in contact with services (contact coverage) or as the proportion of the target population who receive appropriate and effective care (effective coverage). We performed a search of electronic databases and grey literature up to March 2013 and contacted experts in the field. Methods to estimate the numerator (service utilization) and the denominator (target population) were reviewed to explore methods which could be used in programme evaluations. Results We identified 15 735 unique records of which only seven met the inclusion criteria. All studies reported contact coverage. No study explicitly measured effective coverage, but it was possible to estimate this for one study. In six studies the numerator of coverage, service utilization, was estimated using routine clinical information, whereas one study used a national community survey. The methods for estimating the denominator, the population in need of services, were more varied and included national prevalence surveys, case registers, and estimates from the literature. Conclusions Very few coverage estimates are available. Coverage could be estimated at low cost by combining routine programme data with population prevalence estimates from national surveys. PMID:24760874
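The contact-coverage calculation the review describes is a simple ratio: service utilization (e.g. from routine clinical records) over the target population in need (e.g. population times survey prevalence). A sketch with hypothetical numbers:

```python
# Minimal contact-coverage sketch (numbers are hypothetical): the
# numerator is the count of people in contact with services, and the
# denominator is the population in need, estimated from a national
# prevalence survey as population * prevalence.

def contact_coverage(treated, population, prevalence):
    """Proportion of the population in need who contacted services."""
    in_need = population * prevalence
    return treated / in_need

coverage = contact_coverage(treated=4_500,
                            population=1_000_000,
                            prevalence=0.03)
```

This is the low-cost approach the review's conclusion recommends: routine programme data supply the numerator, and an existing prevalence survey supplies the denominator.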

  1. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    USGS Publications Warehouse

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
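The basin water-balance residual described above reduces to a one-line calculation at the annual scale, when net storage change is assumed negligible. The depths below are hypothetical basin averages:

```python
# Sketch of the annual basin water-balance residual: ET is estimated as
# precipitation minus discharge (both as basin-average depths in mm),
# minus any net storage change, which is often assumed to be zero over
# a full year. Values are hypothetical.

def et_water_balance(precip_mm, discharge_mm, storage_change_mm=0.0):
    """Annual basin-scale ET (mm) as a water-balance residual."""
    return precip_mm - discharge_mm - storage_change_mm

et = et_water_balance(precip_mm=900.0, discharge_mm=320.0)
```

Because every error in precipitation or discharge propagates directly into the residual, this estimate is mainly used, as the review notes, to validate the 'direct' remote-sensing ET methods rather than as an operational product.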

  2. Military Participants at U.S. Atmospheric Nuclear Weapons Testing— Methodology for Estimating Dose and Uncertainty

    PubMed Central

    Till, John E.; Beck, Harold L.; Aanenson, Jill W.; Grogan, Helen A.; Mohler, H. Justin; Mohler, S. Shawn; Voillequé, Paul G.

    2014-01-01

    Methods were developed to calculate individual estimates of exposure and dose with associated uncertainties for a sub-cohort (1,857) of 115,329 military veterans who participated in at least one of seven series of atmospheric nuclear weapons tests or the TRINITY shot carried out by the United States. The tests were conducted at the Pacific Proving Grounds and the Nevada Test Site. Dose estimates to specific organs will be used in an epidemiological study to investigate leukemia and male breast cancer. Previous doses had been estimated for the purpose of compensation and were generally high-sided to favor the veteran's claim for compensation in accordance with public law. Recent efforts by the U.S. Department of Defense (DOD) to digitize the historical records supporting the veterans’ compensation assessments make it possible to calculate doses and associated uncertainties. Our approach builds upon available film badge dosimetry and other measurement data recorded at the time of the tests and incorporates detailed scenarios of exposure for each veteran based on personal, unit, and other available historical records. Film badge results were available for approximately 25% of the individuals, and these results assisted greatly in reconstructing doses to unbadged persons and in developing distributions of dose among military units. This article presents the methodology developed to estimate doses for selected cancer cases and a 1% random sample of the total cohort of veterans under study. PMID:24758578

  3. Comparative performance of short-term diffusion barrier charcoal canisters and long-term alpha-track monitors for indoor 222Rn measurements.

    PubMed

    Martz, D E; George, J L; Langner, G H

    1991-04-01

    The accuracy and precision of indoor 222Rn measurements obtained with the use of diffusion barrier charcoal canisters (DBCC) under actual field conditions were determined by comparing the integrated average of 26 successive 7-d exposures of DBCC in each of 16 occupied residences over a 6-mo period with simultaneous measurements using four types of commercially available alpha-track monitors (ATM) and one type of scintillation chamber continuous 222Rn monitor. The results suggest that properly calibrated DBCCs provide very good estimates of the integrated 222Rn concentrations in residential structures over the standard 1-wk exposure period despite the occurrence of large diurnal variations in the actual 222Rn concentrations. The results also suggest that a relatively small number of 1-wk DBCC measurements at selected times throughout the calendar year would provide estimates of the annual average indoor 222Rn concentrations that compare favorably with single long-term ATM measurements.

  4. Estimation of neutron spectrum in the low-level gamma spectroscopy system using unfolding procedure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knežević, D., E-mail: david.knezevic@df.uns.ac.rs; Jovančević, N.; Krmar, M.

    2016-03-25

    The radiation resulting from neutron interactions with Ge nuclei in the active volume of HPGe detectors is one of the main concerns in low-level gamma spectroscopy measurements [1,2]. It is usually not possible to directly measure the spectrum of neutrons that strike the detector. This paper explores the possibility of estimating the neutron spectrum using the measured activities of certain Ge(n,γ) and Ge(n,n’) reactions (obtained from low-level gamma measurements), available ENDF cross-section data and unfolding procedures. In this work an HPGe detector with a passive shield made from commercial low-background lead was used for the measurement. The most important objective of this study was to reconstruct the muon-induced neutron spectrum created in the shield of the HPGe detector. The MAXED [3] and GRAVEL [4] algorithms for neutron spectrum unfolding were used. The results of the two algorithms were compared, and we analyzed the sensitivity of the unfolding procedure to the various input parameters.
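MAXED and GRAVEL are more elaborate than can be shown here, but the core of iterative unfolding can be illustrated with a simple multiplicative (Richardson-Lucy-style) update that adjusts a guessed spectrum until the detector response maps it onto the measured reaction rates. The 2×2 response matrix and rates below are hypothetical:

```python
import numpy as np

# Illustrative unfolding sketch only (not MAXED or GRAVEL themselves):
# given measured rates m_i = sum_j R_ij * phi_j, iteratively correct a
# guessed spectrum phi by the ratio of measured to predicted rates.

def unfold(response, measured, guess, iterations=500):
    phi = np.asarray(guess, dtype=float)
    for _ in range(iterations):
        predicted = response @ phi       # rates implied by current guess
        ratio = measured / predicted     # per-channel correction factors
        phi = phi * ((response.T @ ratio) / response.sum(axis=0))
    return phi

R = np.array([[0.8, 0.2],
              [0.3, 0.7]])              # hypothetical response matrix
true_phi = np.array([10.0, 5.0])        # "true" two-group spectrum
measured = R @ true_phi                 # exact rates for that spectrum
phi = unfold(R, measured, guess=[1.0, 1.0])
```

With exact, noise-free rates the iteration recovers the true spectrum; the sensitivity analysis mentioned in the abstract matters precisely because real data are noisy and the problem is ill-posed for many energy groups.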

  5. Remote Sensing Forage Quality for Browsing Herbivores: A Case Study of Cutting Edge Koala Conservation

    NASA Astrophysics Data System (ADS)

    Youngentob, K. N.; Au, J.; Held, A. A.; Foley, W. J.; Possingham, H. P.

    2014-12-01

    Managing landscapes for conservation requires a capacity to measure habitat quality. Although multiple factors are often responsible for the distribution and abundance of herbivores, spatial variations in the quality and quantity of plant forage are known to be important for many species. While we cannot see the chemical complexity of landscapes with our naked eye, advances in imaging spectroscopy are making it possible to assess the quality of forage on a landscape scale. Much research in this area has focused on the ability to estimate foliar nitrogen (N), because N is believed to be a limiting nutrient for many leaf-eating animals. However, the total quantity of foliar N does not necessarily reflect the amount of N that can be utilized by herbivores. Available nitrogen (AvailN) is an in vitro measure of forage quality that integrates the influence of tannins and fibre on the amount of foliar N that is available for digestion by herbivores. This may be a more meaningful measure of forage quality than total N for the many herbivorous species that are sensitive to the effects of tannins. Our previous research has demonstrated that it is possible to estimate this integrated measure of foliar nutritional quality at an individual tree crown level across multiple tree species using imaging spectroscopy (HyMap). Here we present a case study of how these remote sensing data are being used to help inform landscape management and conservation decisions for an iconic Australian species, the koala (Phascolarctos cinereus). We review the methods involved in developing maps of integrated measures of foliar nutritional quality for browsing herbivores with airborne imaging spectroscopy data and discuss their applications for wildlife management.

  6. Statistical Aspects of Reliability, Maintainability, and Availability.

    DTIC Science & Technology

    1987-10-01

    A total of 33 research reports were issued, and 35 papers were published in scientific journals or are in press. Research topics included optimal assembly of systems, multistate system theory, testing whether new is better than used, nonparametric survival function estimation, measuring information in censored models, and generalizations of total positivity...

  7. Harvesting systems for the northern forest hardwoods

    Treesearch

    Chris B. LeDoux

    2011-01-01

    This monograph is a summary of research results and environmental compliance measures for timber harvesting operations. Data are presented from the Northern Research Station's forest inventory and analysis of 20 states in the northern forest hardwoods. Harvesting systems available in the region today are summarized. Equations for estimating harvesting costs are...

  8. Estimating herbaceous biomass of grassland vegetation using the reference unit method

    Treesearch

    Eric D. Boyda; Jack L. Butler; Lan Xu

    2015-01-01

    Aboveground net primary production provides valuable information on wildlife habitat, fire fuel loads, and forage availability. Aboveground net primary production in herbaceous plant communities is typically measured by clipping aboveground biomass. However, the high costs associated with physically harvesting plant biomass may prevent collecting sufficient...

  9. DESIGN STRATEGY FOR ASSESSING MULTI-PATHWAY EXPOSURE FOR CHILDREN: THE MINNESOTA CHILDREN'S PESTICIDE EXPOSURE STUDY (MNCPES)

    EPA Science Inventory

    Although children are exposed to a variety of environmental hazards, including pesticides, there is a scarcity of information available to estimate exposures realistically. This article reports on one of the first attempts to measure multi-pathway pesticide exposures in a popu...

  10. Evaluation of SWAT for estimating ET in irrigated and dryland cropping systems in the Texas High Plains

    USDA-ARS?s Scientific Manuscript database

    Hydrologic models such as SWAT are used extensively for predicting water availability and water quality responses to alternative management practices. Modeling results have been used by regulatory agencies for developing remedial measures for impaired water bodies and for water planning purposes. Ho...

  11. AN OVERVIEW OF PARTITIONING AND BIOAVAILABILITY OF PAHS IN SEDIMENTS AND SOILS

    EPA Science Inventory

    Understanding and predicting any adverse effects of PAHs depends on generating a reliable measure or estimate of how much PAH is available for uptake. Simply knowing the total amount of PAH in soil, water or sediment is insufficient for determining whether or not these compounds ...

  12. Dengue data and surveillance in Tanzania: a systematic literature review.

    PubMed

    Ward, Tara; Samuel, Moody; Maoz, Dorit; Runge-Ranzinger, Silvia; Boyce, Ross; Toledo, Joao; Velayudhan, Raman; Horstick, Olaf

    2017-08-01

    Although there is evidence that dengue virus is circulating in Tanzania, the country lacks a dengue surveillance system. Consequently, the true dengue seroprevalence, as well as the incidence in the population and the frequency and magnitude of outbreaks, is unknown. This study therefore sought to systematically review available dengue data from Tanzania. The systematic review was conducted and reported using the PRISMA tool. Five databases (PubMed, Embase, Web of Science, WHOLIS and Google Scholar) were searched for articles using various keywords on the illness, data and geographical location. Identified articles were assessed for inclusion based on predefined eligibility criteria. Data were extracted from included articles, analysed and reported. Based on the 10 seroprevalence studies in defined populations with estimates of acute confirmed infections that were included in the review, the estimated seroprevalence of past dengue infection in Tanzania ranged from 50.6% in a health facility-based study to 11% in a population-based study. Acute confirmed infections of dengue were estimated to be as high as 38.2% of suspected cases. Only one study reported on an outbreak. It is evident that dengue needs to become part of regular disease surveillance in Tanzania. Control measures need to be instituted with a focus on building human resource capacity and integrating dengue control measures in ongoing health programmes, for both preventive and curative interventions. Systematic reviews are valuable in assessing health issues when surveillance data are not available.

  13. The Validity and Applicability of Using a Generic Exposure Assessment Model for Occupational Exposure to Nano-Objects and Their Aggregates and Agglomerates.

    PubMed

    Bekker, Cindy; Voogd, Eef; Fransman, Wouter; Vermeulen, Roel

    2016-11-01

Control banding can be used as a first-tier assessment to control worker exposure to nano-objects and their aggregates and agglomerates (NOAA). In a second tier, more advanced modelling approaches are needed to produce quantitative exposure estimates. As no general quantitative nano-specific exposure models are currently available, this study evaluated the validity and applicability of using a generic exposure assessment model (the Advanced REACH Tool, ART) for occupational exposure to NOAA. The predictive capability of ART for occupational exposure to NOAA was tested by calculating the relative bias and correlations (Pearson) between the model estimates and measured concentrations using a dataset of 102 NOAA exposure measurements collected during experimental and workplace exposure studies. Moderate to (very) strong correlations between the ART estimates and measured concentrations were found. Estimates correlated better with measured concentration levels of dust (r = 0.76, P < 0.01) than of liquid aerosols (r = 0.51, P = 0.19). However, ART overestimated the measured NOAA concentrations for both the experimental and field measurements (by a factor of 2-127). Overestimation was highest at low concentrations and decreased with increasing concentration. Correlations seemed better for the individual nanomaterials than for combined scenarios, indicating that nanomaterial-specific characteristics are not well captured within the mechanistic model of the ART. Although ART in its current state is not capable of estimating occupational exposure to NOAA, the strong correlations for the individual nanomaterials indicate that the ART (and potentially other generic exposure models) could be extended or adapted for exposure to NOAA. In the future, studies investigating the potential to estimate exposure to NOAA should incorporate nanomaterial-specific characteristics more explicitly in their models. © The Author 2016. 
Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  14. A method for estimating maternal and newborn lives saved from health-related investments funded by the UK government Department for International Development using the Lives Saved Tool.

    PubMed

    Friberg, Ingrid K; Baschieri, Angela; Abbotts, Jo

    2017-11-07

In 2010, the UK Government Department for International Development (DFID) committed through its 'Framework for results for reproductive, maternal and newborn health (RMNH)' to save 50,000 maternal lives and 250,000 newborn lives by 2015. It also committed to monitoring the performance of this portfolio of investments to demonstrate transparency and accountability. Methods currently available to directly measure lives saved are cost-, time- and labour-intensive. The gold standard for calculating the total number of lives saved would require measuring mortality with large-scale population-based surveys or annual vital-events surveillance; neither is currently available in all low- and middle-income countries. Estimating the independent effect of DFID support relative to all other effects on health would also be challenging. The Lives Saved Tool (LiST) is evidence-based software for modelling the effect of changes in health intervention coverage on reproductive, maternal, newborn and child mortality. A multi-country LiST-based analysis protocol was developed to retrospectively assess the total annual number of maternal and newborn lives saved from DFID aid programming in low- and middle-income countries. Annual LiST analyses using the latest program data from DFID country offices were conducted between 2013 and 2016, estimating the annual number of maternal and neonatal lives saved across 2010-2015. For each country, independent project results were aggregated into health intervention coverage estimates with, and in the absence of, DFID funding. More than 80% of reported projects were suitable for inclusion in the analysis, with 151 projects analysed in the 2016 analysis. Between 2010 and 2014, it is estimated that DFID contributed to saving the lives of 15,000 women in pregnancy and childbirth through health programming and 88,000 through family planning programming. It is estimated that DFID health programming contributed to saving 187,000 newborn lives. 
It is feasible to estimate the overall contribution and impact of DFID's investment in RMNH from currently available information on interventions and coverage from individual country offices. This utilization of LiST, with estimated population coverage based on DFID program inputs, can be applied to similar types of datasets to quantify programme impact. The global data were used to estimate DFID's progress against the Framework for results targets to inform future programming. The identified limitations can also be considered to inform future monitoring and evaluation program design and implementation within DFID.

  15. Load flow and state estimation algorithms for three-phase unbalanced power distribution systems

    NASA Astrophysics Data System (ADS)

    Madvesh, Chiranjeevi

Distribution load flow and state estimation are two important functions in distribution energy management systems (DEMS) and advanced distribution automation (ADA) systems. Distribution load flow analysis is a tool for analyzing the status of a power distribution system under steady-state operating conditions. In this research, an effective and comprehensive load flow algorithm is developed that extensively incorporates the distribution system components. Distribution system state estimation is a mathematical procedure that estimates the operating states of a power distribution system from information collected in real time by available measurement devices. An efficient, computationally tractable state estimation algorithm based on the weighted-least-squares (WLS) method has also been developed in this research. Both algorithms are tested on several IEEE test feeders, and the results are validated.
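    The WLS step at the core of such an estimator can be illustrated for a linear(ized) measurement model z = Hx + e: with weights equal to inverse measurement variances, the estimate solves the normal equations. The sketch below is a toy with hypothetical matrices and noise levels, not the feeder models of this work; real distribution-system estimators iterate this on nonlinear power-flow equations.

    ```python
    import numpy as np

    # Minimal weighted-least-squares (WLS) state estimation sketch for a
    # linear(ized) measurement model z = H x + e.
    def wls_estimate(H, z, sigmas):
        W = np.diag(1.0 / np.asarray(sigmas) ** 2)   # weight = inverse variance
        G = H.T @ W @ H                               # gain matrix
        return np.linalg.solve(G, H.T @ W @ z)        # x_hat = G^-1 H^T W z

    # Toy system: 2 states observed through 3 redundant measurements
    # (hypothetical numbers, e.g. per-unit voltage magnitudes).
    H = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
    x_true = np.array([1.02, 0.97])
    z = H @ x_true                                    # noiseless, so recovery is exact
    x_hat = wls_estimate(H, z, sigmas=[0.01, 0.01, 0.02])
    ```

    Measurement redundancy (more rows in H than states) is what lets the weighting suppress noisy meters in favour of accurate ones.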

  16. Quantum Parameter Estimation: From Experimental Design to Constructive Algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Le; Chen, Xi; Zhang, Ming; Dai, Hong-Yi

    2017-11-01

In this paper we design the following two-step scheme to estimate the model parameter ω0 of the quantum system: first, we utilize the Fisher information with respect to an intermediate variable v = cos(ω0 t) to determine an optimal initial state and to seek optimal parameters of the POVM measurement operators; second, we explore how to estimate ω0 from v by choosing t when a priori knowledge of ω0 is available. Our optimal initial state achieves the maximum quantum Fisher information. The formulation of the optimal time t is obtained and the complete algorithm for parameter estimation is presented. We further explore how the lower bound of the estimation deviation depends on the a priori information of the model. Supported by the National Natural Science Foundation of China under Grant Nos. 61273202, 61673389, and 61134008
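    The role of the measurement time t in the second step can be conveyed with a simple classical delta-method sketch (our own illustration, not the paper's quantum derivation): an error in the intermediate estimate of v = cos(ω0 t) propagates to ω0 with sensitivity dv/dω0 = -t sin(ω0 t), so t should be chosen where that sensitivity is large.

    ```python
    import numpy as np

    # Delta-method sketch: Var(omega_hat) ~ Var(v_hat) / (dv/domega)^2
    # for v = cos(omega0 * t). All numbers are hypothetical.
    def omega_variance(omega0, t, var_v):
        dv_domega = -t * np.sin(omega0 * t)   # sensitivity of v to omega0
        return var_v / dv_domega**2

    omega0, var_v = 1.0, 1e-4
    # Sensitivity peaks where |t sin(omega0 t)| is large, so t near
    # pi/(2*omega0) beats a very short measurement time.
    good = omega_variance(omega0, t=np.pi / 2, var_v=var_v)
    poor = omega_variance(omega0, t=0.1, var_v=var_v)
    ```

    This is why a priori knowledge of ω0 matters: choosing t well requires a rough idea of where sin(ω0 t) is large.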

  17. Assessing the stability of human locomotion: a review of current measures

    PubMed Central

    Bruijn, S. M.; Meijer, O. G.; Beek, P. J.; van Dieën, J. H.

    2013-01-01

    Falling poses a major threat to the steadily growing population of the elderly in modern-day society. A major challenge in the prevention of falls is the identification of individuals who are at risk of falling owing to an unstable gait. At present, several methods are available for estimating gait stability, each with its own advantages and disadvantages. In this paper, we review the currently available measures: the maximum Lyapunov exponent (λS and λL), the maximum Floquet multiplier, variability measures, long-range correlations, extrapolated centre of mass, stabilizing and destabilizing forces, foot placement estimator, gait sensitivity norm and maximum allowable perturbation. We explain what these measures represent and how they are calculated, and we assess their validity, divided up into construct validity, predictive validity in simple models, convergent validity in experimental studies, and predictive validity in observational studies. We conclude that (i) the validity of variability measures and λS is best supported across all levels, (ii) the maximum Floquet multiplier and λL have good construct validity, but negative predictive validity in models, negative convergent validity and (for λL) negative predictive validity in observational studies, (iii) long-range correlations lack construct validity and predictive validity in models and have negative convergent validity, and (iv) measures derived from perturbation experiments have good construct validity, but data are lacking on convergent validity in experimental studies and predictive validity in observational studies. In closing, directions for future research on dynamic gait stability are discussed. PMID:23516062

  18. MEthods of ASsessing blood pressUre: identifying thReshold and target valuEs (MeasureBP): a review & study protocol.

    PubMed

    Blom, Kimberly C; Farina, Sasha; Gomez, Yessica-Haydee; Campbell, Norm R C; Hemmelgarn, Brenda R; Cloutier, Lyne; McKay, Donald W; Dawes, Martin; Tobe, Sheldon W; Bolli, Peter; Gelfer, Mark; McLean, Donna; Bartlett, Gillian; Joseph, Lawrence; Featherstone, Robin; Schiffrin, Ernesto L; Daskalopoulou, Stella S

    2015-04-01

Despite progress in automated blood pressure measurement (BPM) technology, there is limited research linking hard outcomes to automated office BPM (OBPM) treatment targets and thresholds. Equivalences for automated BPM devices have been estimated from approximations of standardized manual measurements of 140/90 mmHg. Until outcome-driven targets and thresholds become available for automated measurement methods, deriving evidence-based equivalences between automated methods and standardized manual OBPM is the next best solution. The MeasureBP study group was initiated by the Canadian Hypertension Education Program to close this critical knowledge gap. MeasureBP aims to define evidence-based equivalent values between standardized manual OBPM and automated BPM methods by synthesizing available evidence using a systematic review and individual subject-level data meta-analyses. This manuscript provides a review of the literature and the MeasureBP study protocol. These results will lay the evidence-based foundation for resolving uncertainties within blood pressure guidelines, which, in turn, will improve the management of hypertension.

  19. The Impact of Acquisition Dose on Quantitative Breast Density Estimation with Digital Mammography: Results from ACRIN PA 4006.

    PubMed

    Chen, Lin; Ray, Shonket; Keller, Brad M; Pertuz, Said; McDonald, Elizabeth S; Conant, Emily F; Kontos, Despina

    2016-09-01

    Purpose To investigate the impact of radiation dose on breast density estimation in digital mammography. Materials and Methods With institutional review board approval and Health Insurance Portability and Accountability Act compliance under waiver of consent, a cohort of women from the American College of Radiology Imaging Network Pennsylvania 4006 trial was retrospectively analyzed. All patients underwent breast screening with a combination of dose protocols, including standard full-field digital mammography, low-dose digital mammography, and digital breast tomosynthesis. A total of 5832 images from 486 women were analyzed with previously validated, fully automated software for quantitative estimation of density. Clinical Breast Imaging Reporting and Data System (BI-RADS) density assessment results were also available from the trial reports. The influence of image acquisition radiation dose on quantitative breast density estimation was investigated with analysis of variance and linear regression. Pairwise comparisons of density estimations at different dose levels were performed with Student t test. Agreement of estimation was evaluated with quartile-weighted Cohen kappa values and Bland-Altman limits of agreement. Results Radiation dose of image acquisition did not significantly affect quantitative density measurements (analysis of variance, P = .37 to P = .75), with percent density demonstrating a high overall correlation between protocols (r = 0.88-0.95; weighted κ = 0.83-0.90). However, differences in breast percent density (1.04% and 3.84%, P < .05) were observed within high BI-RADS density categories, although they were significantly correlated across the different acquisition dose levels (r = 0.76-0.92, P < .05). 
Conclusion Precision and reproducibility of automated breast density measurements with digital mammography are not substantially affected by variations in radiation dose; thus, the use of low-dose techniques for the purpose of density estimation may be feasible. © RSNA, 2016. Online supplemental material is available for this article.

  1. Testing survey-based methods for rapid monitoring of child mortality, with implications for summary birth history data.

    PubMed

    Brady, Eoghan; Hill, Kenneth

    2017-01-01

Under-five mortality estimates are increasingly used in low- and middle-income countries to target interventions and measure performance against global development goals. Two new methods to rapidly estimate under-5 mortality based on Summary Birth Histories (SBH) were described in a previous paper and tested with the data then available. This analysis tests the methods using data appropriate to each method from 5 countries that lack vital registration systems. SBH data are collected across many countries through censuses and surveys, and indirect methods often rely upon their quality to estimate mortality rates. The Birth History Imputation method imputes data from a recent Full Birth History (FBH) onto the birth, death and age distribution of the SBH to produce estimates based on the resulting distribution of child mortality. DHS FBHs and MICS SBHs are used for all five countries. In the implementation, 43 of 70 estimates are within 20% of validation estimates (61%). The mean absolute relative error is 17.7%. 1 of 7 countries produces acceptable estimates. The Cohort Change method considers the differences in births and deaths between repeated Summary Birth Histories at 1- or 2-year intervals to estimate the mortality rate in that period. SBHs are taken from Brazil's PNAD surveys 2004-2011 and validated against IGME estimates. 2 of 10 estimates are within 10% of validation estimates; the mean absolute relative error is greater than 100%. Appropriate testing of these new methods demonstrates that they do not produce sufficiently good estimates from the data available. We conclude this is due to the poor quality of most SBH data included in the study. This has wider implications for the next round of censuses and future household surveys across many low- and middle-income countries.

  2. Influence of vegetation structure on lidar-derived canopy height and fractional cover in forested riparian buffers during leaf-off and leaf-on conditions.

    PubMed

    Wasser, Leah; Day, Rick; Chasmer, Laura; Taylor, Alan

    2013-01-01

Estimates of canopy height (H) and fractional canopy cover (FC) derived from lidar data collected during leaf-on and leaf-off conditions are compared with field measurements from 80 forested riparian buffer plots. The purpose is to determine if existing lidar data flown in leaf-off conditions for applications such as terrain mapping can effectively estimate forested riparian buffer H and FC within a range of riparian vegetation types. Results illustrate that: 1) leaf-off and leaf-on lidar percentile estimates are similar to measured heights in all plots except those dominated by deciduous compound-leaved trees, where lidar underestimates H during leaf-off periods; 2) canopy height models (CHMs) underestimate H by a larger margin compared to percentile methods and are influenced by vegetation type (conifer needle, deciduous simple leaf or deciduous compound leaf) and canopy height variability; 3) lidar estimates of FC are within 10% of plot measurements during leaf-on periods, but are underestimated during leaf-off periods except in mixed and conifer plots; and 4) depth of laser pulse penetration lower in the canopy is more variable compared to top-of-canopy penetration, which may influence within-canopy vegetation structure estimates. This study demonstrates that leaf-off lidar data can be used to estimate forested riparian buffer canopy height within diverse vegetation conditions, and fractional canopy cover within mixed and conifer forests, when leaf-on lidar data are not available.

  4. MEASURING ECONOMIC GROWTH FROM OUTER SPACE.

    PubMed

    Henderson, J Vernon; Storeygard, Adam; Weil, David N

    2012-04-01

GDP growth is often measured poorly for countries and rarely measured at all for cities or subnational regions. We propose a readily available proxy: satellite data on lights at night. We develop a statistical framework that uses lights growth to augment existing income growth measures, under the assumption that measurement error in using observed light as an indicator of income is uncorrelated with measurement error in national income accounts. For countries with good national income accounts data, information on growth of lights is of marginal value in estimating the true growth rate of income, while for countries with the worst national income accounts, the optimal estimate of true income growth is a composite with roughly equal weights. Among poor-data countries, our new estimate of average annual growth differs by as much as 3 percentage points from official data. Lights data also allow for measurement of income growth in sub- and supranational regions. As an application, we examine growth in Sub-Saharan African regions over the last 17 years. We find that real incomes in non-coastal areas have grown faster than those in coastal areas by one-third of an annual percentage point; non-malarial areas have grown faster than malarial ones by one-third to two-thirds of a percentage point per year; and primate-city regions have grown no faster than hinterland areas. Such applications point toward a research program in which "empirical growth" need no longer be synonymous with "national income accounts."
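    The "roughly equal weights" result can be illustrated with a minimal inverse-variance combination of two noisy growth proxies (hypothetical numbers; the paper's statistical framework is richer): with independent measurement errors, the minimum-variance composite weights each series by its precision.

    ```python
    # Illustrative composite of two growth proxies with independent errors:
    # weight on each proxy is proportional to its inverse error variance.
    def composite_growth(g_accounts, var_accounts, g_lights, var_lights):
        w = (1.0 / var_accounts) / (1.0 / var_accounts + 1.0 / var_lights)
        return w * g_accounts + (1.0 - w) * g_lights

    # Good national accounts: lights growth gets little weight.
    g_good_data = composite_growth(0.04, 0.0001, 0.06, 0.01)
    # Poor national accounts, error variance comparable to the lights proxy:
    # roughly equal weights, echoing the abstract.
    g_poor_data = composite_growth(0.04, 0.01, 0.06, 0.01)
    ```

    In the equal-variance case the composite lands halfway between the two measures (0.05 here), while in the good-data case it stays close to the accounts figure.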

  5. Assessment of Space Power Related Measurement Requirements of the Strategic Defense Initiative

    DTIC Science & Technology

    1989-04-01

    calibration techniques are available and estimated uncertainties vary between 5 and 10%. At low rf power levels (~10 mW), NIST maintains standard calibration... bands single or dual six-port automatic network analyzers [24] are used as transfer systems with detectors calibrated using the NIST microcalorimeter... Probable designs for the multimegawatt space reactor program indicate the need to measure neutron fluxes up to 10^16 neutrons/cm²·s (10^19 neutrons

  6. Monitoring Lakes in Africa with Altimetry and GRACE

    NASA Astrophysics Data System (ADS)

    Carabajal, C. C.; Boy, J. P.

    2017-12-01

    Thanks to more than two decades of radar altimetry measurements from TOPEX/POSEIDON, Jason 1, 2 and 3, ENVISAT and others, 18 Ice, Cloud and Land Elevation Satellite (ICESat) laser altimeter measurement campaigns over 6 years, and 15 years of Gravity Recovery And Climate Experiment (GRACE) observations, water-level changes of major lakes and reservoirs can be remotely measured regularly with unprecedented precision, facilitating monitoring of continental water storage variations. Smaller-footprint laser altimeters like ICESat are more suitable for retrieving water-level variations of small inland water bodies, better discriminating water returns where height measurements may be contaminated by land or vegetation. Using imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) contemporaneous with the altimetry data collections, in combination with careful examination of the laser waveforms, one can better isolate returns from the water surface. Combining these observations, we derive and compare water-height estimates for several lakes and reservoirs in Africa from radar and laser altimetry, estimate the surface extent of each individual water body from available MODIS imagery, and derive corresponding estimates of volume variations for each water body. Mass variations from time-variable gravity measurements from the GRACE mission, using the latest one-degree global iterated mascons solution from GSFC, are then transformed into volume changes, assuming a constant density, and compared to the altimetry-plus-imagery estimates. These methods demonstrate the power of combined observations to monitor water resources and facilitate their management. 
Upcoming laser altimetry missions like ICESat-2, with vastly improved coverage and temporal sampling, continuous observations, better measurement techniques, and inland-water products specifically formulated for these applications, will, when combined with SWOT and GRACE Follow-On mission data, help address the need for continuous monitoring of continental water storage variations from space.

  7. Evaluation of the groundwater-flow model for the Ohio River alluvial aquifer near Carrollton, Kentucky, updated to conditions in September 2010

    USGS Publications Warehouse

    Unthank, Michael D.

    2013-01-01

    The Ohio River alluvial aquifer near Carrollton, Ky., is an important water resource for the cities of Carrollton and Ghent, as well as for several industries in the area. The groundwater of the aquifer is the primary source of drinking water in the region and a highly valued natural resource that attracts various water-dependent industries because of its quantity and quality. This report evaluates the performance of a numerical model of the groundwater-flow system in the Ohio River alluvial aquifer near Carrollton, Ky., published by the U.S. Geological Survey in 1999. The original model simulated conditions in November 1995 and was updated to simulate groundwater conditions estimated for September 2010. The files from the calibrated steady-state model of November 1995 conditions were imported into MODFLOW-2005 to update the model to conditions in September 2010. The model input files modified as part of this update were the well and recharge files. The design of the updated model and other input files are the same as the original model. The ability of the updated model to match hydrologic conditions for September 2010 was evaluated by comparing water levels measured in wells to those computed by the model. Water-level measurements were available for 48 wells in September 2010. Overall, the updated model underestimated the water levels at 36 of the 48 measured wells. The average difference between measured water levels and model-computed water levels was 3.4 feet and the maximum difference was 10.9 feet. The root-mean-square error of the simulation was 4.45 for all 48 measured water levels. The updated steady-state model could be improved by introducing more accurate and site-specific estimates of selected field parameters, refined model geometry, and additional numerical methods. 
Collection of field data to better estimate hydraulic parameters, together with continued review of available data and information from area well operators, could provide the model with revised estimates of conductance values for the riverbed and valley wall, hydraulic conductivities for the model layer, and target water levels for future simulations. Additional model layers, a redesigned model grid, and revised boundary conditions could provide a better framework for more accurate simulations. Additional numerical methods would identify possible parameter estimates and determine parameter sensitivities.

  8. Reaction time, inhibition, working memory and ‘delay aversion’ performance: genetic influences and their interpretation

    PubMed Central

    KUNTSI, JONNA; ROGERS, HANNAH; SWINARD, GREER; BÖRGER, NORBERT; van der MEERE, JAAP; RIJSDIJK, FRUHLING; ASHERSON, PHILIP

    2013-01-01

    Background For candidate endophenotypes to be useful for psychiatric genetic research, they first of all need to show significant genetic influences. To address the relative lack of previous data, we set out to investigate the extent of genetic and environmental influences on performance in a set of theoretically driven cognitive-experimental tasks in a large twin sample. We further aimed to illustrate how test–retest reliability of the measures affects the estimates. Method Four hundred 7- to 9-year-old twin pairs were assessed individually on tasks measuring reaction time, inhibition, working memory and ‘delay aversion’ performance. Test–retest reliability data on some of the key measures were available from a previous study. Results Several key measures of reaction time, inhibition and working-memory performance indicated a moderate degree of genetic influence. Combining data across theoretically related tasks increased the heritability estimates, as illustrated by the heritability estimates of 60% for mean reaction time and 50% for reaction-time variability. Psychometric properties (reliability or ceiling effects) had a substantial influence on the estimates for some measures. Conclusions The data support the usefulness of several of the variables for endophenotype studies that aim to link genes to cognitive and motivational processes. Importantly, the data also illustrate specific conditions under which the true extent of genetic influences may be underestimated, and hence the usefulness for genetic mapping studies compromised, and suggest ways to address this. PMID:16882357
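    The flavour of such heritability estimates can be conveyed with the classical Falconer approximation for twin data (a simplified sketch with hypothetical twin correlations; the study itself fits full structural ACE models). Note how unreliability of a measure inflates the unique-environment term, which is exactly the underestimation mechanism the abstract highlights.

    ```python
    # Classical Falconer decomposition from monozygotic (rmz) and
    # dizygotic (rdz) twin correlations. A textbook approximation,
    # not this study's model-fitting procedure.
    def falconer(rmz, rdz):
        h2 = 2.0 * (rmz - rdz)   # additive genetic variance (heritability)
        c2 = 2.0 * rdz - rmz     # shared-environment variance
        e2 = 1.0 - rmz           # unique environment + measurement error
        return h2, c2, e2

    # Hypothetical correlations giving ~60% heritability, as reported
    # for mean reaction time:
    h2, c2, e2 = falconer(rmz=0.70, rdz=0.40)
    ```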

  9. Momentum Flux Determination Using the Multi-beam Poker Flat Incoherent Scatter Radar

    NASA Technical Reports Server (NTRS)

    Nicolls, M. J.; Fritts, D. C.; Janches, Diego; Heinselman, C. J.

    2012-01-01

    In this paper, we develop an estimator for the vertical flux of horizontal momentum applicable to arbitrary but fixed beam pointing with systems such as the Poker Flat Incoherent Scatter Radar (PFISR). This method uses information from all available beams to resolve the variances of the wind field in addition to the vertical flux of both meridional and zonal momentum, targeted at high-frequency wave motions. The estimator utilizes the full covariance of the distributed measurements, which provides a significant reduction in errors over the direct extension of previously developed techniques and allows for the calculation of an error covariance matrix of the estimated quantities. We find that for the PFISR experiment we can construct an unbiased and robust estimator of the momentum flux if sufficient and proper beam orientations are chosen, which can in the future be optimized for the expected frequency distribution of momentum-containing scales. However, there is a potential trade-off between biases and standard errors introduced with the new approach, which must be taken into account when assessing the momentum fluxes. We apply the estimator to PFISR measurements on 23 April 2008 and 21 December 2007, from 60-85 km altitude, and show expected results as compared to mean winds and in relation to the measured vertical velocity variances.
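    The core idea, that each beam's line-of-sight velocity variance is a linear combination of the wind-field covariances (including the momentum flux <u'w'>), can be sketched as a small least-squares problem. The beam geometry and covariance values below are illustrative, not PFISR's actual configuration or the paper's full covariance-weighted estimator.

    ```python
    import numpy as np

    # Each beam with unit vector b measures Var(v_los) = b^T Sigma b, which is
    # linear in the six unique elements of the wind covariance Sigma.
    def solve_covariance(beams, los_vars):
        rows = []
        for bx, by, bz in beams:
            # unknowns ordered: [suu, svv, sww, suv, suw, svw]
            rows.append([bx*bx, by*by, bz*bz, 2*bx*by, 2*bx*bz, 2*by*bz])
        sol, *_ = np.linalg.lstsq(np.array(rows), np.array(los_vars), rcond=None)
        return sol

    def unit(az, ze):
        # east, north, up components of a beam at azimuth az, zenith angle ze
        return (np.sin(ze) * np.sin(az), np.sin(ze) * np.cos(az), np.cos(ze))

    # Seven distinct beam directions make the 6-unknown system overdetermined.
    beams = [unit(np.deg2rad(a), np.deg2rad(z))
             for a, z in [(0, 0), (0, 15), (90, 15), (180, 15),
                          (270, 15), (45, 25), (135, 25)]]
    # Hypothetical covariances (m^2/s^2): suu, svv, sww, suv, suw (<u'w'>), svw
    sigma_true = np.array([100.0, 80.0, 4.0, 10.0, -5.0, 3.0])
    rows = [[bx*bx, by*by, bz*bz, 2*bx*by, 2*bx*bz, 2*by*bz]
            for bx, by, bz in beams]
    los_vars = [float(np.dot(r, sigma_true)) for r in rows]
    sol = solve_covariance(beams, los_vars)
    ```

    With noise-free synthetic variances the covariances, including the flux terms, are recovered exactly; real data add the bias/variance trade-offs the abstract discusses.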

  10. Parameterizing the Spatial Markov Model from Breakthrough Curve Data Alone

    NASA Astrophysics Data System (ADS)

    Sherman, T.; Bolster, D.; Fakhari, A.; Miller, S.; Singha, K.

    2017-12-01

    The spatial Markov model (SMM) uses a correlated random walk and has been shown to effectively capture anomalous transport in porous media systems; in the SMM, particles' future trajectories are correlated to their current velocity. It is common practice to use a priori Lagrangian velocity statistics obtained from high resolution simulations to determine a distribution of transition probabilities (correlation) between velocity classes that govern predicted transport behavior; however, this approach is computationally cumbersome. Here, we introduce a methodology to quantify velocity correlation from breakthrough curve (BTC) data alone; discretizing two measured BTCs into a set of arrival times and reverse engineering the rules of the SMM allows for prediction of velocity correlation, thereby enabling parameterization of the SMM in studies where Lagrangian velocity statistics are not available. The introduced methodology is applied to estimate velocity correlation from BTCs measured in high resolution simulations, thus allowing for a comparison of estimated parameters with known simulated values. Results show (1) estimated transition probabilities agree with simulated values and (2) using the SMM with the estimated parameterization accurately predicts BTCs downstream. Additionally, we include uncertainty measurements by calculating lower and upper estimates of velocity correlation, which allow for prediction of a range of BTCs. The simulated BTCs fall in the range of predicted BTCs. This research proposes a novel method to parameterize the SMM from BTC data alone, thereby reducing the SMM's computational costs and widening its applicability.
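
    The correlated random walk at the heart of the SMM can be sketched as follows; the velocity classes, transition matrix, and spatial step below are illustrative placeholders, not values from the study:

```python
import random

def smm_arrival_times(v_classes, T, n_particles, n_steps, dx, seed=0):
    """Correlated random walk of the spatial Markov model: each particle
    advances a fixed distance dx per step, and its velocity class for the
    next step is drawn from the transition-probability row of its current
    class, so successive velocities are correlated."""
    rng = random.Random(seed)
    n = len(v_classes)
    times = []
    for _ in range(n_particles):
        k = rng.randrange(n)              # initial velocity class
        t = 0.0
        for _ in range(n_steps):
            t += dx / v_classes[k]        # travel time across this step
            u, acc = rng.random(), 0.0    # draw next class from row k of T
            for j in range(n):
                acc += T[k][j]
                if u <= acc:
                    k = j
                    break
        times.append(t)                   # arrival time at distance n_steps*dx
    return times
```

    Strongly diagonal transition rows (persistent velocities) spread the arrival-time distribution out, which is how the model reproduces anomalous breakthrough behavior.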

  11. Near-Infrared (0.67-4.7 microns) Optical Constants Estimated for Montmorillonite

    NASA Technical Reports Server (NTRS)

    Roush, T. L.

    2005-01-01

    Various models of the reflectance from particulate surfaces are used for interpretation of remote sensing data of solar system objects. These models rely upon the real (n) and imaginary (k) refractive indices of the materials. Such values are limited for commonly encountered silicates at visual and near-infrared wavelengths (lambda, 0.4-5 microns). Availability of optical constants for candidate materials allows more thorough modeling of the observations obtained by Earth-based telescopes and spacecraft. Two approaches for determining the absorption coefficient (alpha = 4πk/lambda) from reflectance measurements of particulates have been described; one relies upon Kubelka-Munk theory and the other Hapke theory. Both have been applied to estimate alpha and k for various materials. Neither enables determination of the wavelength dependence of n, n = f(lambda). Thus, a mechanism providing this ability is desirable. Using Hapke theory to estimate k from reflectance measurements requires two additional quantities be known or assumed: 1) n = f(lambda) and 2) d, the sample particle diameter. Typically n is assumed constant (c) or modestly varying with lambda; referred to here as n_0. Assuming n_0, at each lambda an estimate of k is used to calculate the reflectance and is iteratively adjusted until the difference between the modeled and measured reflectance is minimized. The estimated k's (k_1) are the final results, and this concludes the typical analysis.
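
    The iterative adjustment described above amounts to a one-dimensional search at each wavelength. The sketch below illustrates the idea with a toy Beer-Lambert transmission standing in for the full Hapke reflectance function, inverted by bisection; it is a schematic of the procedure, not the authors' code:

```python
import math

def forward_reflectance(k, lam, d):
    """Toy stand-in for a Hapke forward model: Beer-Lambert transmission
    through one grain, r = exp(-alpha*d) with alpha = 4*pi*k/lam."""
    alpha = 4.0 * math.pi * k / lam
    return math.exp(-alpha * d)

def estimate_k(r_meas, lam, d, k_lo=0.0, k_hi=1.0, tol=1e-12):
    """Bisection on k: reflectance decreases monotonically with k, so
    adjust k until modeled and measured reflectances agree."""
    for _ in range(200):
        k_mid = 0.5 * (k_lo + k_hi)
        if forward_reflectance(k_mid, lam, d) > r_meas:
            k_lo = k_mid   # modeled too bright -> need more absorption
        else:
            k_hi = k_mid
        if k_hi - k_lo < tol:
            break
    return 0.5 * (k_lo + k_hi)
```

    The real analysis substitutes the full Hapke reflectance expression (with assumed n_0 and particle diameter d) for the toy forward model, but the inversion loop is the same.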

  12. Groundwater recharge in Wisconsin--Annual estimates for 1970-99 using streamflow data

    USGS Publications Warehouse

    Gebert, Warren A.; Walker, John F.; Hunt, Randall J.

    2011-01-01

    The groundwater component of streamflow is important because it is indicative of the sustained flow of a stream during dry periods, is often of better quality, and has a smaller range of temperatures, than surface contributions to streamflow. All three of these characteristics are important to the health of aquatic life in a stream. If recharge to the aquifers is to be preserved or enhanced, it is important to understand the present partitioning of total streamflow into base flow and stormflow. Additionally, an estimate of groundwater recharge is important for understanding the flows within a groundwater system, information that is important for water availability/sustainability or other assessments. The U.S. Geological Survey operates numerous continuous-record streamflow-gaging stations (Hirsch and Norris, 2001), which can be used to provide estimates of average annual base flow. In addition to these continuous-record sites, Gebert and others (2007) showed that having a few streamflow measurements in a basin can appreciably reduce the error in a base-flow estimate for that basin. Therefore, in addition to the continuous-record gaging stations, a substantial number of low-flow partial-record sites (6 to 15 discharge measurements) and miscellaneous-measurement sites (1 to 3 discharge measurements) that were operated during 1964-90 throughout the State were included in this work to provide additional insight into the spatial distribution of annual base flow and, in turn, groundwater recharge.

  13. Particulate matter air quality assessment over southeast United States using satellite and ground measurements

    NASA Astrophysics Data System (ADS)

    Gupta, Pawan

    Fine particles (PM2.5, particles with aerodynamic diameter less than 2.5 μm) can penetrate deep inside the human lungs, and recent scientific studies have shown that thousands of premature deaths occur each year around the world due to high concentrations of particulate matter. Therefore, monitoring and forecasting of surface-level fine particulate matter air quality is very important. Typically, air quality measurements are made from ground stations. In recent years, linear regression relationships between satellite-derived aerosol optical thickness (AOT) and surface-measured PM2.5 mass concentration have been formed and used to estimate PM2.5 in areas where surface measurements are not available. This type of simple linear relationship varies with region and season, does not provide sufficiently accurate estimates of surface-level pollution, and many studies have shown that AOT alone is not sufficient for PM2.5 mass concentration estimation. Furthermore, AOT represents aerosol loading in the entire column of the atmosphere whereas PM2.5 is measured at the surface; hence, knowledge of the vertical distribution of aerosols coupled with meteorology becomes critical in PM2.5 estimation. In this dissertation, I used three years (2004-2006) of coincident hourly PM2.5, MODerate resolution Imaging Spectroradiometer (MODIS) derived AOT, and Rapid Update Cycle (RUC) analyzed meteorological fields to assess PM2.5 air quality in the Southeast United States. I explored the use of two-variate (TVM), multi-variate (MVM) and artificial neural network (ANN) methods for estimating PM2.5 over 85 stations in the region. First, satellite data were analyzed for sampling biases, quality, and impact of clouds. Results show that MODIS-Terra AOT data were available only about 50% of the days in any given month due to cloud cover and unfavorable surface conditions, but this produced a sampling bias of less than 2 μg/m³. 
Results indicate that there are up to threefold improvements in the correlation coefficients (R) when using the MVM (which includes meteorology) over different regions and seasons compared to the TVM, and further improvements were noticed when the ANN method was applied. The improvement in absolute percentage error of estimation ranges from 5% to 50% over different seasons and regions when compared with TVM models. Overall, ANN models performed better than TVM and MVM models. Based on these results, we recommend using meteorological variables along with satellite observations to improve particulate matter air quality assessment in the region.
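
    The benefit of adding meteorology to the AOT-PM2.5 relationship can be illustrated on synthetic data: if surface PM2.5 scales with column AOT divided by the mixing height, a predictor that folds in the meteorological variable correlates far better than AOT alone. All numbers below are made up for illustration:

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

random.seed(1)
n = 2000
aot = [random.uniform(0.05, 1.0) for _ in range(n)]    # column AOT
mix_h = [random.uniform(0.5, 3.0) for _ in range(n)]   # mixing height, km
# Synthetic "truth": surface PM2.5 scales with column AOT divided by the
# depth it is mixed through, plus measurement noise (units arbitrary).
pm25 = [80.0 * a / h + random.gauss(0.0, 3.0) for a, h in zip(aot, mix_h)]

r_tvm = pearson_r(pm25, aot)                                  # AOT alone
r_mvm = pearson_r(pm25, [a / h for a, h in zip(aot, mix_h)])  # AOT + meteorology
```

    With the mixing height folded in, the correlation approaches the noise ceiling, mirroring the improvement reported when meteorological fields are added.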

  14. Virtual Sensors: Using Data Mining Techniques to Efficiently Estimate Remote Sensing Spectra

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Oza, Nikunj; Stroeve, Julienne

    2004-01-01

    Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding natural phenomena. These instruments are sometimes built in a phased approach, with some measurement capabilities being added in later phases. In other cases, there may not be a planned increase in measurement capability, but technology may mature to the point that it offers new measurement capabilities that were not available before. In still other cases, detailed spectral measurements may be too costly to perform on a large sample. Thus, lower resolution instruments with lower associated cost may be used to take the majority of measurements. Higher resolution instruments, with a higher associated cost may be used to take only a small fraction of the measurements in a given area. Many applied science questions that are relevant to the remote sensing community need to be addressed by analyzing enormous amounts of data that were generated from instruments with disparate measurement capability. This paper addresses this problem by demonstrating methods to produce high accuracy estimates of spectra with an associated measure of uncertainty from data that is perhaps nonlinearly correlated with the spectra. In particular, we demonstrate multi-layer perceptrons (MLPs), Support Vector Machines (SVMs) with Radial Basis Function (RBF) kernels, and SVMs with Mixture Density Mercer Kernels (MDMK). We call this type of an estimator a Virtual Sensor because it predicts, with a measure of uncertainty, unmeasured spectral phenomena.

  15. Assessment of global solar radiation to examine the best locations to install a PV system in Tunisia

    NASA Astrophysics Data System (ADS)

    Belkilani, Kaouther; Ben Othman, Afef; Besbes, Mongi

    2018-02-01

    The study of solar radiation is the starting point of any investigation of a new energy source and of any search for the best location to install a PV system. A very important factor in the assessment of solar potential is the availability of data for global solar radiation that must be coherent and of high quality. In this paper, we analyze the estimation results of the monthly global solar radiation for three different locations, Bizerte in Northern Tunisia, Kairouan in Middle Eastern Tunisia, and Tozeur in Southern Tunisia, measured at the surface by the National Institute of Meteorology, and the meteorological-year irradiation based on the satellite-imagery PVGIS radiation databases. To get accurate estimates with minimum error, we propose a numerical model used to calculate the global solar radiation at the three indicated sites. The results show that the model can estimate the global solar radiation (kWh/m²) at a specific station and over most areas of Tunisia. The model gives a good estimation of solar radiation, with negligible errors between the measured and calculated values.

  16. Rényi continuous entropy of DNA sequences.

    PubMed

    Vinga, Susana; Almeida, Jonas S

    2004-12-07

    Entropy measures of DNA sequences estimate their randomness or, inversely, their repeatability. L-block Shannon discrete entropy accounts for the empirical distribution of all length-L words and has convergence problems for finite sequences. A new entropy measure that extends Shannon's formalism is proposed. Rényi's quadratic entropy, calculated with the Parzen window density estimation method applied to CGR/USM continuous maps of DNA sequences, constitutes a novel technique to evaluate sequence global randomness without some of the former method's drawbacks. The asymptotic behaviour of this new measure was analytically deduced and the calculation of entropies for several synthetic and experimental biological sequences was performed. The results obtained were compared with the distributions of the null model of randomness obtained by simulation. The biological sequences showed different p-values according to the kernel resolution of Parzen's method, which might indicate an unknown level of organization of their patterns. This new technique can be very useful in the study of DNA sequence complexity and provides additional tools for DNA entropy estimation. The main MATLAB applications developed and additional material are available at the webpage. Specialized functions can be obtained from the authors.
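
    For Gaussian kernels, Rényi's quadratic entropy under a Parzen estimate has a closed form: the integral of the squared density reduces to a double sum of Gaussians of variance 2σ² over pairwise distances. A minimal one-dimensional sketch (the paper works on two-dimensional CGR/USM maps):

```python
import math

def renyi_quadratic_entropy(points, sigma):
    """Renyi quadratic entropy H2 = -log(integral of p^2), with p
    estimated by a Parzen window of Gaussian kernels of width sigma.
    The integral of a product of two Gaussians is itself a Gaussian of
    variance 2*sigma^2 evaluated at the pairwise distance, so no
    numerical integration is needed."""
    n = len(points)
    two_var = 2.0 * sigma * sigma
    norm = 1.0 / math.sqrt(2.0 * math.pi * two_var)
    total = 0.0
    for xi in points:
        for xj in points:
            d = xi - xj
            total += norm * math.exp(-d * d / (2.0 * two_var))
    return -math.log(total / (n * n))
```

    Tightly clustered (repetitive) point sets give low entropy; spread-out (random-looking) sets give high entropy, with the kernel width sigma playing the role of the resolution parameter mentioned in the abstract.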

  17. Solute solver 'what if' module for modeling urea kinetics.

    PubMed

    Daugirdas, John T

    2016-11-01

    The publicly available Solute Solver module allows calculation of a variety of two-pool urea kinetic measures of dialysis adequacy using pre- and postdialysis plasma urea and estimated dialyzer clearance or estimated urea distribution volumes as inputs. However, the existing program does not have a 'what if' module, which would estimate the plasma urea values as well as commonly used measures of hemodialysis adequacy for a patient with a given urea distribution volume and urea nitrogen generation rate dialyzed according to a particular dialysis schedule. Conventional variable extracellular volume 2-pool urea kinetic equations were used. A javascript-HTML Web form was created that can be used on any personal computer equipped with internet browsing software, to compute commonly used Kt/V-based measures of hemodialysis adequacy for patients with differing amounts of residual kidney function and following a variety of treatment schedules. The completed Web form calculator may be particularly useful in computing equivalent continuous clearances for incremental hemodialysis strategies. © The Author 2016. Published by Oxford University Press on behalf of ERA-EDTA. All rights reserved.
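
    As a point of reference for the Kt/V-based adequacy measures the module computes, the widely used second-generation Daugirdas single-pool formula can be written in a few lines; note that the Web form itself implements full variable-volume two-pool equations, not this simplification:

```python
import math

def sp_ktv(pre_urea, post_urea, t_hours, uf_liters, weight_kg):
    """Second-generation Daugirdas single-pool Kt/V estimate.
    R is the post/pre dialysis urea ratio; the -0.008*t term corrects
    for urea generation during the session, and the UF/W term for
    convective clearance from ultrafiltration."""
    r = post_urea / pre_urea
    return -math.log(r - 0.008 * t_hours) + (4.0 - 3.5 * r) * uf_liters / weight_kg
```

    For example, a 4-hour session that drops urea nitrogen from 100 to 30 mg/dL with 2 L of ultrafiltration in a 70 kg patient gives a single-pool Kt/V of about 1.4.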

  18. Remote Sensing of Cryosphere: Estimation of Mass Balance Change in Himalayan Glaciers

    NASA Astrophysics Data System (ADS)

    Ambinakudige, Shrinidhi; Joshi, Kabindra

    2012-07-01

    Glacial changes are an important indicator of climate change. Our understanding of mass balance change in Himalayan glaciers is limited. This study estimates the mass balance of some major glaciers in the Sagarmatha National Park (SNP) in Nepal using remote sensing applications. Using remote sensing techniques to measure the mass balance of glaciers is an important methodological advance in the highly rugged Himalayan terrain. This study uses ASTER VNIR 3N (nadir view) and 3B (backward view) bands to generate Digital Elevation Models (DEMs) for the SNP area for the years 2002, 2003, 2004 and 2005. Glacier boundaries were delineated using a combination of boundaries available in the Global Land Ice Measurements from Space (GLIMS) database and various band ratios derived from ASTER images. Elevation differences, glacial area, and ice densities were used to estimate the change in mass balance. The results indicated that the rate of glacier mass balance change was not uniform across glaciers. While some glaciers showed a decrease in mass balance, others showed an increase. This paper discusses how each glacier in the SNP area varied in its annual mass balance measurement during the study period.
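
    The geodetic approach described above (elevation differences, glacier area, and an assumed density) can be sketched as follows; the grid, cell size, and the 850 kg/m³ density conversion are illustrative assumptions, not values from the study:

```python
def geodetic_mass_balance(dh_grid, cell_area_m2, rho_ice=850.0,
                          rho_water=1000.0, years=1.0):
    """Geodetic mass-balance rate from DEM differencing: sum the
    elevation changes over the glacier, convert volume to mass with an
    assumed density, and normalize by glacier area and time span.
    Returns metres water equivalent per year.  rho_ice = 850 kg/m3 is a
    common density-conversion assumption, not a value from the study."""
    cells = [dh for row in dh_grid for dh in row]
    volume_change = sum(cells) * cell_area_m2          # m^3
    area = len(cells) * cell_area_m2                   # m^2
    mass_change = volume_change * rho_ice              # kg
    return mass_change / (rho_water * area * years)    # m w.e. / yr
```

    A uniform 1 m surface lowering in one year thus maps to a balance of about -0.85 m w.e./yr under the assumed density.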

  19. Tritium internal dose estimation from measurements with liquid scintillators.

    PubMed

    Pántya, A; Dálnoki, Á; Imre, A R; Zagyvai, P; Pázmándi, T

    2018-07-01

    Tritium may exist in several chemical and physical forms in workplaces; common occurrences are in vapor or liquid form (as tritiated water) and in organic form (e.g. thymidine), which can enter the body by inhalation or by ingestion. For internal dose assessment it is usually assumed that urine samples for tritium analysis are obtained after the tritium concentration inside the body has reached equilibrium following intake. A comparison was carried out for two types of vials, two efficiency calculation methods and two available liquid scintillation devices to highlight the errors of the measurements. The results were used for dose estimation with the MONDAL-3 software. It has been shown that, concerning the accuracy of the final internal dose assessment, the uncertainties of the assumptions used in the dose assessment (for example the date and route of intake, the physical and chemical form) can be more influential than the errors of the measured data. Therefore, improving the experimental accuracy alone is not the proper way to improve the accuracy of the internal dose estimation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    NASA Astrophysics Data System (ADS)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states in more robust ways compared to that from ensemble-averaged data although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.

  1. Snowpack spatial variability: Towards understanding its effect on remote sensing measurements and snow slope stability

    NASA Astrophysics Data System (ADS)

    Marshall, Hans-Peter

    The distribution of water in the snow-covered areas of the world is an important climate change indicator, and it is a vital component of the water cycle. At local and regional scales, the snow water equivalent (SWE), the amount of liquid water a given area of the snowpack represents, is very important for water resource management, flood forecasting, and prediction of available hydropower energy. Measurements from only a few automatic weather stations, such as the SNOTEL network, or sparse manual snowpack measurements are typically extrapolated for estimating SWE over an entire basin. Widespread spatial variability in the distribution of SWE and snowpack stratigraphy at local scales causes large errors in these basin estimates. Remote sensing measurements offer a promising alternative, due to their large spatial coverage and high temporal resolution. Although snow cover extent can currently be estimated from remote sensing data, accurately quantifying SWE from remote sensing measurements has remained difficult, due to a high sensitivity to variations in grain size and stratigraphy. In alpine snowpacks, the large degree of spatial variability of snowpack properties and geometry, caused by topographic, vegetative, and microclimatic effects, also makes prediction of snow avalanches very difficult. Ground-based radar and penetrometer measurements can quickly and accurately characterize snowpack properties and SWE in the field. A portable lightweight radar was developed, and allows a real-time estimate of SWE to within 10%, as well as measurements of depths of all major density transitions within the snowpack. New analysis techniques developed in this thesis allow accurate estimates of mechanical properties and an index of grain size to be retrieved from the SnowMicroPenetrometer. 
These two tools together allow rapid characterization of the snowpack's geometry, mechanical properties, and SWE, and are used to guide a finite element model to study the stress distribution on a slope. The ability to accurately characterize snowpack properties at much higher resolutions and spatial extent than previously possible will hopefully help lead to a more complete understanding of spatial variability, its effect on remote sensing measurements and snow slope stability, and result in improvements in avalanche prediction and accuracy of SWE estimates from space.

  2. On-Orbit Multi-Field Wavefront Control with a Kalman Filter

    NASA Technical Reports Server (NTRS)

    Lou, John; Sigrist, Norbert; Basinger, Scott; Redding, David

    2008-01-01

    A document describes a multi-field wavefront control (WFC) procedure for the James Webb Space Telescope (JWST) on-orbit optical telescope element (OTE) fine-phasing using wavefront measurements at the NIRCam pupil. The control is applied to JWST primary mirror (PM) segments and the secondary mirror (SM) simultaneously with a carefully selected ordering. Computer simulations show that the multi-field WFC procedure can reduce the initial system wavefront error (WFE), as caused by random initial system misalignments within the JWST fine-phasing error budget, from a few dozen micrometers to below 50 nm across the entire NIRCam field of view, and that the procedure is computationally stable, as Monte-Carlo simulations indicate. With the incorporation of a Kalman filter (KF) as an optical state estimator into the WFC process, the robustness of the JWST OTE alignment process can be further improved. In the presence of some large optical misalignments, the Kalman state estimator can provide a reasonable estimate of the optical state, especially for those degrees of freedom that have a significant impact on the system WFE. The state estimate allows for a few corrections to the optical state to push the system towards its nominal state, and the result is that a large part of the WFE can be eliminated in this step. When the multi-field WFC procedure is applied after Kalman state estimation and correction, the stability of fine-phasing control is much more certain. The Kalman filter has been successfully applied to diverse applications as a robust and optimal state estimator. In the context of space-based optical system alignment based on wavefront measurements, a KF state estimator can combine all available wavefront measurements, past and present, as well as measurement and actuation error statistics to generate a maximum-likelihood optimal state estimate. 
The strength and flexibility of the KF algorithm make it attractive for use in real-time optical system alignment when WFC alone cannot effectively align the system.
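
    The role of the KF as an optimal state estimator that blends all past and present measurements with their error statistics can be illustrated with a minimal scalar example (the JWST application is, of course, multivariate):

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update: blend the prior state
    estimate x (variance p) with a new measurement z (variance r).
    The gain weights whichever of the two is more certain."""
    k = p / (p + r)           # Kalman gain
    x_new = x + k * (z - x)   # corrected state estimate
    p_new = (1.0 - k) * p     # reduced uncertainty
    return x_new, p_new

# Fuse a sequence of noisy measurements of a constant misalignment;
# the estimate tightens as measurements accumulate.
x, p = 0.0, 1e6              # diffuse prior
for z in [4.9, 5.2, 5.0, 4.8, 5.1]:
    x, p = kalman_update(x, p, z, r=0.04)
```

    After five updates the estimate converges to the sample mean of the measurements and its variance shrinks to roughly r/5, illustrating the maximum-likelihood combination of all available data.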

  3. Source term estimates of radioxenon released from the BaTek medical isotope production facility using external measured air concentrations.

    PubMed

    Eslinger, Paul W; Cameron, Ian M; Dumais, Johannes Robert; Imardjoko, Yudi; Marsoem, Pujadi; McIntyre, Justin I; Miley, Harry S; Stoehlker, Ulrich; Widodo, Susilo; Woods, Vincent T

    2015-10-01

    BATAN Teknologi (BaTek) operates an isotope production facility in Serpong, Indonesia that supplies 99mTc for use in medical procedures. Atmospheric releases of 133Xe in the production process at BaTek are known to influence the measurements taken at the closest stations of the radionuclide network of the International Monitoring System (IMS). The purpose of the IMS is to detect evidence of nuclear explosions, including atmospheric releases of radionuclides. The major xenon isotopes released from BaTek are also produced in a nuclear explosion, but the isotopic ratios are different. Knowledge of the magnitude of releases from the isotope production facility helps inform analysts trying to decide if a specific measurement result could have originated from a nuclear explosion. A stack monitor deployed at BaTek in 2013 measured releases to the atmosphere for several isotopes. The facility operates on a weekly cycle, and the stack data for June 15-21, 2013 show a release of 1.84 × 10^13 Bq of 133Xe. Concentrations of 133Xe in the air are available at the same time from a xenon sampler located 14 km from BaTek. An optimization process using atmospheric transport modeling and the sampler air concentrations produced a release estimate of 1.88 × 10^13 Bq. The same optimization process yielded a release estimate of 1.70 × 10^13 Bq for a different week in 2012. The stack release value and the two optimized estimates are all within 10% of each other. Unpublished production data and the release estimate from June 2013 yield a rough annual release estimate of 8 × 10^14 Bq of 133Xe in 2014. These multiple lines of evidence cross-validate the stack release estimates and the release estimates based on atmospheric samplers. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  5. Covariance analysis for evaluating head trackers

    NASA Astrophysics Data System (ADS)

    Kang, Donghoon

    2017-10-01

    Existing methods for evaluating the performance of head trackers usually rely on publicly available face databases, which contain facial images and the ground truths of their corresponding head orientations. However, most of the existing publicly available face databases are constructed by assuming that a frontal head orientation can be determined by compelling the person under examination to look straight ahead at the camera on the first video frame. Since nobody can accurately direct one's head toward the camera, this assumption may be unrealistic. Rather than obtaining estimation errors, we present a method for computing the covariance of estimation error rotations to evaluate the reliability of head trackers. As an uncertainty measure of estimators, the Schatten 2-norm of a square root of error covariance (or the algebraic average of relative error angles) can be used. The merit of the proposed method is that it does not disturb the person under examination by asking him to direct his head toward certain directions. Experimental results using real data validate the usefulness of our method.

  6. Estimation of available global solar radiation using sunshine duration over South Korea

    NASA Astrophysics Data System (ADS)

    Das, Amrita; Park, Jin-ki; Park, Jong-hwa

    2015-11-01

    Accurate insolation data are a key component not only for designing a solar energy system but also for many biological and atmospheric studies. However, solar radiation stations are not widely available due to financial and technical limitations; this insufficient number limits the spatial resolution whenever an attempt is made to construct a solar radiation map. There are several models in the literature for estimating incoming solar radiation using the sunshine fraction. Seventeen such models, of which six are linear and eleven non-linear, have been chosen for studying and estimating solar radiation on a horizontal surface over South Korea. The better performance of a non-linear model indicates that the relationship between sunshine duration and clearness index does not follow a straight line. With such a model, solar radiation at the 79 stations measuring sunshine duration is computed and used as input for spatial interpolation. Finally, monthly solar radiation maps are constructed using the Ordinary Kriging method. The cross-validation results show good agreement between observed and predicted data.
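
    The classic linear member of this model family is the Angström-Prescott relation H/H0 = a + b·(n/N), which maps the sunshine fraction n/N to the clearness index H/H0; its coefficients are fitted by least squares. A minimal sketch with made-up data:

```python
def fit_angstrom(clearness, sunshine_fraction):
    """Least-squares fit of the linear Angstrom-Prescott model
    H/H0 = a + b*(n/N): clearness index regressed on sunshine
    fraction.  Returns the coefficients (a, b)."""
    n = len(clearness)
    mx = sum(sunshine_fraction) / n
    my = sum(clearness) / n
    sxx = sum((x - mx) ** 2 for x in sunshine_fraction)
    sxy = sum((x - mx) * (y - my)
              for x, y in zip(sunshine_fraction, clearness))
    b = sxy / sxx
    a = my - b * mx
    return a, b
```

    The non-linear variants favored in the study replace the straight line with polynomial or exponential forms of n/N, fitted the same way.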

  7. [The health gap in Mexico, measured through child mortality].

    PubMed

    Gutiérrez, Juan Pablo; Bertozzi, Stefano M

    2003-01-01

    To estimate the health gap in Mexico, as evidenced by the difference between the observed 1998 child mortality rate and the rate estimated for the same year from social and economic indicators, using rates from other countries. An econometric model was developed, using the 1998 child mortality rate (CMR) as the dependent variable, and macro-social and economic indicators as independent variables. The model included 70 countries for which complete data were available. The proposed model explained over 90% of the variability in CMR among countries. The expected CMR for Mexico was 22% lower than the observed rate, which represented nearly 20,000 excess deaths. After adjusting for differences in productivity, distribution of wealth, and investment in human capital, the excess child mortality rate suggested efficiency problems in the Mexican health system, at least in relation to services intended to reduce child mortality. The English version of this paper is available at: http://www.insp.mx/salud/index.html.

  8. Hand volume estimates based on a geometric algorithm in comparison to water displacement.

    PubMed

    Mayrovitz, H N; Sims, N; Hill, C J; Hernandez, T; Greenshner, A; Diep, H

    2006-06-01

    Assessing changes in upper extremity limb volume during lymphedema therapy is important for determining treatment efficacy and documenting outcomes. Although arm volumes may be determined by tape measure, the suitability of circumference measurements to estimate hand volumes is questionable because of the deviation of hand shape from circularity. Our aim was to develop an alternative measurement procedure and algorithm for routine use to estimate hand volumes. A caliper was used to measure hand width and depth in 33 subjects (66 hands) and volumes (VE) were calculated using an elliptical frustum model. Using regression analysis and limits of agreement (LOA), VE was compared to volumes determined by water displacement (VW), to volumes calculated from tape-measure determined circumferences (VC), and to a trapezoidal model (VT). VW and VE (mean +/- SD) were similar (363 +/- 98 vs. 362 +/- 100 ml) and highly correlated; VE = 1.01VW - 3.1 ml, r=0.986, p<0.001, with LOA of +/- 33.5 ml and +/- 9.9%. In contrast, VC (480 +/- 138 ml) and VT (432 +/- 122 ml) significantly overestimated volume (p<0.0001). These results indicate that the elliptical algorithm can be a useful alternative to water displacement when hand volumes are needed and the water displacement method is contraindicated, impractical to implement, too time-consuming or not available.
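
    A geometric sketch of an elliptical frustum model of this kind: each caliper station gives an elliptical cross-section of area πwd/4, and consecutive sections are joined by the standard frustum volume formula. The station spacing and measurements below are hypothetical, and the exact segmentation used in the study may differ:

```python
import math

def hand_volume_elliptical(widths_cm, depths_cm, seg_len_cm):
    """Estimate volume by stacking elliptical frustum segments.  At each
    station the cross-section is modeled as an ellipse of area
    pi*w*d/4; consecutive sections are joined by the frustum formula
    V = h/3 * (A1 + A2 + sqrt(A1*A2))."""
    areas = [math.pi * w * d / 4.0 for w, d in zip(widths_cm, depths_cm)]
    vol = 0.0
    for a1, a2 in zip(areas, areas[1:]):
        vol += seg_len_cm / 3.0 * (a1 + a2 + math.sqrt(a1 * a2))
    return vol  # cm^3, i.e. ml
```

    Because the ellipse captures the flattened cross-section of the hand, this avoids the overestimation that a circular (circumference-based) model produces.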

  9. Measuring Fisher Information Accurately in Correlated Neural Populations

    PubMed Central

    Kohn, Adam; Pouget, Alexandre

    2015-01-01

    Neural responses are known to be variable. In order to understand how this neural variability constrains behavioral performance, we need to be able to measure the reliability with which a sensory stimulus is encoded in a given population. However, such measures are challenging for two reasons: First, they must take into account noise correlations, which can have a large influence on reliability. Second, they need to be as efficient as possible, since the number of trials available in a set of neural recordings is usually limited by experimental constraints. Traditionally, cross-validated decoding has been used as a reliability measure, but it only provides a lower bound on reliability and underestimates reliability substantially in small datasets. We show that, if the number of trials per condition is larger than the number of neurons, there is an alternative, direct estimate of reliability which consistently leads to smaller errors and is much faster to compute. The superior performance of the direct estimator is evident both for simulated data and for neuronal population recordings from macaque primary visual cortex. Furthermore, we propose generalizations of the direct estimator which measure changes in stimulus encoding across conditions and the impact of correlations on encoding and decoding, typically denoted I_shuffle and I_diag, respectively. PMID:26030735
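
    For a linear-Gaussian encoding model, the reliability measure in question is the linear Fisher information I = f'ᵀ Σ⁻¹ f', computed from the tuning-curve derivatives f' and the noise covariance Σ. A two-neuron sketch, without the finite-trial bias correction the paper develops:

```python
def linear_fisher_2d(fprime, cov):
    """Direct (bias-uncorrected) linear Fisher information for a pair of
    neurons: I = f'^T C^{-1} f', where f' holds the derivatives of the
    tuning curves with respect to the stimulus and C is the noise
    covariance matrix."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]   # 2x2 inverse
    fx, fy = fprime
    gx = inv[0][0] * fx + inv[0][1] * fy
    gy = inv[1][0] * fx + inv[1][1] * fy
    return fx * gx + fy * gy
```

    With identical tuning, positive noise correlations reduce the information relative to independent noise, which is why correlations must enter the estimator explicitly.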

  10. Measuring snow water equivalent from common-offset GPR records through migration velocity analysis

    NASA Astrophysics Data System (ADS)

    St. Clair, James; Holbrook, W. Steven

    2017-12-01

    Many mountainous regions depend on seasonal snowfall for their water resources. Current methods of predicting the availability of water resources rely on long-term relationships between stream discharge and snowpack monitoring at isolated locations, which are less reliable during abnormal snow years. Ground-penetrating radar (GPR) has been shown to be an effective tool for measuring snow water equivalent (SWE) because of the close relationship between snow density and radar velocity. However, the standard methods of measuring radar velocity can be time-consuming. Here we apply a migration focusing method, originally developed for extracting velocity information from diffracted energy observed in zero-offset seismic sections, to the problem of estimating radar velocities in seasonal snow from common-offset GPR data. Diffractions are isolated by plane-wave-destruction (PWD) filtering, and the optimal migration velocity is chosen based on the varimax norm of the migrated image. We then use the radar velocity to estimate snow density, depth, and SWE. The GPR-derived SWE estimates are within 6% of manual SWE measurements when the GPR antenna is coupled to the snow surface and within 3-21% of the manual measurements when the antenna is mounted on the front of a snowmobile ~0.5 m above the snow surface.
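    The varimax-based velocity selection step can be sketched as follows. The `migrate` routine is a hypothetical placeholder for a user-supplied migration implementation; only the focusing-norm selection is shown.

```python
import numpy as np

def varimax(image):
    """Varimax focusing norm: sum(a^4) / sum(a^2)^2.

    A sharply focused (sparse) migrated image yields a larger value
    than a smeared one, so the norm can rank trial velocities.
    """
    a2 = np.sum(image ** 2)
    return np.sum(image ** 4) / (a2 ** 2)

def best_velocity(migrate, data, velocities):
    """Pick the trial velocity whose migrated image maximizes the
    varimax norm. `migrate(data, v)` is a hypothetical user-supplied
    migration routine returning the migrated image at velocity v."""
    return max(velocities, key=lambda v: varimax(migrate(data, v)))
```

    A single spike (perfectly collapsed diffraction) attains the maximum norm of 1, while a uniformly smeared image of n samples scores 1/n.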

  11. Retrieving air humidity, global solar radiation, and reference evapotranspiration from daily temperatures: development and validation of new methods for Mexico. Part I: humidity

    NASA Astrophysics Data System (ADS)

    Lobit, P.; López Pérez, L.; Lhomme, J. P.; Gómez Tagle, A.

    2017-07-01

    This study evaluates the dew point method (Allen et al. 1998) for estimating atmospheric vapor pressure from minimum temperature, and proposes an improved model that estimates it from maximum and minimum temperature. Both methods were evaluated on 786 weather stations in Mexico. The dew point method introduced a positive bias in dry areas and a negative bias in coastal areas, and its average root mean square error across all evaluated stations was 0.38 kPa. The improved model assumes a bi-linear relation between the estimated vapor pressure deficit (the difference between saturation vapor pressure at minimum and average temperature) and the measured vapor pressure deficit. The parameters of this relation were estimated from historical annual median values of relative humidity. This model removed the bias and achieved a root mean square error of 0.31 kPa. When no historical measurements of relative humidity were available, empirical relations were proposed to estimate it from latitude and altitude, with only a slight degradation in model accuracy (RMSE = 0.33 kPa, bias = -0.07 kPa). The applicability of the method to other environments is discussed.
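    The baseline dew point method being evaluated is the standard FAO-56 formulation: assume the dew point temperature approximates the daily minimum temperature and evaluate the saturation vapor pressure there. A minimal sketch (the improved bi-linear model's fitted parameters are not given in the record and are not reproduced):

```python
import math

def sat_vapor_pressure(t_c):
    """FAO-56 saturation vapor pressure (kPa) at air temperature t_c (deg C):
    e0(T) = 0.6108 * exp(17.27 T / (T + 237.3))."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def actual_vapor_pressure_dewpoint(t_min):
    """Dew point method (Allen et al. 1998): assume T_dew ~= T_min,
    so the actual vapor pressure is e_a = e0(T_min). This is the
    baseline method the study evaluates, not its improved model."""
    return sat_vapor_pressure(t_min)
```

    At 0 °C the formula returns 0.6108 kPa by construction, and the curve rises steeply with temperature, which is why biased T_min assumptions translate into sizable vapor pressure errors.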

  12. A new device to estimate abundance of moist-soil plant seeds

    USGS Publications Warehouse

    Penny, E.J.; Kaminski, R.M.; Reinecke, K.J.

    2006-01-01

    Methods to sample the abundance of moist-soil seeds efficiently and accurately are critical for evaluating management practices and determining food availability. We adapted a portable, gasoline-powered vacuum to estimate the abundance of seeds on the surface of a moist-soil wetland in east-central Mississippi and evaluated the sampler by simulating conditions that researchers and managers may experience when sampling moist-soil areas for seeds. We measured the percent recovery of known masses of seeds by the vacuum sampler in relation to 4 experimentally controlled factors (i.e., seed-size class, sample mass, soil moisture class, and vacuum time) with 2-4 levels per factor. We also measured the processing time of samples in the laboratory. Across all experimental factors, seed recovery averaged 88.4% and varied little (CV = 0.68%, n = 474). Overall, the mean time to process a sample was 30.3 ± 2.5 min (SE, n = 417). Our estimate of the seed recovery rate (88%) may be used to adjust estimates for incomplete seed recovery, or project-specific correction factors may be developed by investigators. Our device was effective for estimating the surface abundance of moist-soil plant seeds after dehiscence and before habitats were flooded.
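    The suggested correction for incomplete recovery is a simple division by the recovery rate; a minimal sketch, with the 88.4% figure taken from the abstract and the function name chosen for illustration:

```python
def adjust_seed_mass(recovered_mass_g, recovery_rate=0.884):
    """Correct a vacuum-sampled seed mass (g) for incomplete recovery.

    The default 88.4% mean recovery rate is the study-wide average
    reported in the abstract; substitute a project-specific correction
    factor when one has been developed.
    """
    if not 0.0 < recovery_rate <= 1.0:
        raise ValueError("recovery_rate must be in (0, 1]")
    return recovered_mass_g / recovery_rate
```

    For example, a sample that yields 88.4 g under an 88.4% recovery rate is adjusted back to an estimated 100 g of seeds originally present.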

  13. Parameterization of clear-sky surface irradiance and its implications for estimation of aerosol direct radiative effect and aerosol optical depth

    PubMed Central

    Xia, Xiangao

    2015-01-01

    Aerosols impact clear-sky surface irradiance through scattering and absorption. Linear or nonlinear relationships between aerosol optical depth (τa) and clear-sky surface irradiance have been established to describe the aerosol direct radiative effect on irradiance (ADRE). However, considerable uncertainties remain in ADRE estimates due to incorrect estimation of the aerosol-free irradiance (the irradiance in the absence of aerosols). Based on data from the Aerosol Robotic Network, the effects of τa, water vapor content (w), and the cosine of the solar zenith angle (μ) on clear-sky irradiance are thoroughly considered, leading to an effective parameterization of irradiance as a nonlinear function of these three quantities. The parameterization is proven able to estimate clear-sky irradiance with a mean bias error of 0.32 W m−2, which is one order of magnitude smaller than that derived using earlier linear or nonlinear functions. Applications of this new parameterization to estimate τa from irradiance, or vice versa, show root-mean-square errors of 0.08 and 10.0 W m−2, respectively. Therefore, this study establishes a straightforward method to derive clear-sky irradiance from τa, or to estimate τa from irradiance measurements, when water vapor measurements are available. PMID:26395310

  14. Advances in Inhalation Dosimetry Models and Methods for Occupational Risk Assessment and Exposure Limit Derivation

    PubMed Central

    Kuempel, Eileen D.; Sweeney, Lisa M.; Morris, John B.; Jarabek, Annie M.

    2015-01-01

    The purpose of this article is to provide an overview and practical guide for occupational health professionals concerning the derivation and use of dose estimates in risk assessment for the development of occupational exposure limits (OELs) for inhaled substances. Dosimetry is the study and practice of measuring or estimating the internal dose of a substance in individuals or a population. Dosimetry thus provides an essential link to understanding the relationship between an external exposure and a biological response. Use of dosimetry principles and tools can improve the accuracy of risk assessment, and reduce its uncertainty, by providing reliable estimates of the internal dose at the target tissue. This is accomplished through specific measurement data or predictive models, when available, or through the use of basic dosimetry principles for broad classes of materials. Accurate dose estimation is essential not only for dose-response assessment but also for interspecies extrapolation and for risk characterization at given exposures. This article focuses on inhalation dosimetry, since inhalation is a major route of exposure in the workplace. Practical examples of dose estimation and OEL derivation are provided for inhaled gases and particulates. PMID:26551218

  15. Estimating the instabilities of N clocks by means of comparison measurements

    NASA Technical Reports Server (NTRS)

    Premoli, Amedeo; Tavella, Patrizia

    1993-01-01

    The estimation of the individual instabilities of N clocks, compared by measuring the differences of their readings, is considered without assuming a priori that the clocks are uncorrelated. The instabilities of the N clocks are described by a complete (non-diagonal) N × N covariance matrix R. Only differences of clock readings are available for estimating R. Statistical processing of these data allows one to calculate the (N-1) × (N-1) covariance matrix S of the differences relative to the N-th (reference) clock. By analyzing the relationships between R and S, several pieces of information can be inferred; in particular, the conditions for the validity of the uncorrelation hypothesis are established. The estimation of R from S is not unique: in any case, R must be positive definite. A theorem states that R is positive definite if and only if its determinant is positive. Nevertheless, infinitely many acceptable choices of R fulfill the condition of positive definiteness. This paper shows that, by increasing the number N of compared clocks, the amount of arbitrariness in estimating R is reduced. The analysis of some experimental data illustrates the capability of the method.
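    In the special case where the uncorrelation hypothesis holds, the classic N-cornered-hat solution recovers the individual clock variances exactly from the pairwise difference variances. A minimal sketch (the paper's contribution is precisely about relaxing this assumption, which is not reproduced here):

```python
import numpy as np

def n_cornered_hat(s):
    """Classic N-cornered-hat solution assuming uncorrelated clocks.

    s: symmetric (N, N) matrix with s[i, j] = variance of the
    difference between clocks i and j (s[i, i] = 0), for N >= 3.
    Under uncorrelation, s[i, j] = sigma_i^2 + sigma_j^2, which this
    routine inverts. Real data can yield negative (non-physical)
    variance estimates; handling that case needs the fuller methods
    the paper discusses.
    """
    s = np.asarray(s, dtype=float)
    n = s.shape[0]
    # Sum of all individual variances: sum over pairs / (N - 1)
    total = s[np.triu_indices(n, k=1)].sum() / (n - 1)
    # sum_j s[i, j] = (N - 2) * sigma_i^2 + total
    return (s.sum(axis=1) - total) / (n - 2)
```

    For N = 3 this reduces to the familiar three-cornered-hat formula, e.g. sigma_1^2 = (s_12 + s_13 - s_23) / 2.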

  16. Comparison of skin dose measurement using nanoDot® dosimeter and machine readings of radiation dose during cardiac catheterization in children

    PubMed Central

    Balaguru, Duraisamy; Rodriguez, Matthew; Leon, Stephanie; Wagner, Louis K; Beasley, Charles W; Sultzer, Andrew; Numan, Mohammed T

    2018-01-01

    Objectives: Direct measurement of the skin dose of radiation for children using optically stimulated luminescence (OSL) technology with nanoDot® dosimeters (Landauer, Glenwood, IL, USA). Background: Radiation dose is estimated as cumulative air kerma (AK) and dose-area product based on standards established for adult-size patients. The body size of pediatric patients who undergo cardiac catheterization for congenital heart disease varies widely from newborn to adolescence. Direct skin-dose measurement applying OSL technology may eliminate errors in the estimate. Materials and Methods: The nanoDot® (a 1 cm × 1 cm flat plastic cassette) was applied to the patient's skin using adhesive tape during cardiac catheterization, and radiation skin doses were read within 24 hrs. nanoDot® values were compared to the cumulative AK values estimated and displayed on the fluoroscopy monitor. Results: A total of 12 children were studied, aged 4 months to 18 years (median 1.1 years) with a weight range of 5.3–86 kg (median 8.4 kg). nanoDot® readings ranged from 2.58 mGy to 424.8 mGy (median 84.1 mGy). Cumulative AK ranged from 16.2 mGy to 571.2 mGy (median 171.1 mGy). A linear correlation was noted between nanoDot® values and AK values (R2 = 0.88, R = 0.94). nanoDot® readings were approximately 65% of the cumulative AK estimated using the International Electrotechnical Commission standards. Conclusions: Application of OSL technology using nanoDot® provides an alternative for directly measuring fluoroscopic skin dose in children during cardiac catheterization. Our data show that the actual skin dose for children is approximately one-third lower than the AK estimated using international standards for adult-size patients. PMID:29440825
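    The reported ~65% ratio between dosimeter readings and displayed cumulative AK suggests a simple first-order adjustment; a sketch only, since a ratio fit on 12 patients is not a calibrated conversion and the function name is illustrative:

```python
def estimated_skin_dose_mgy(cumulative_ak_mgy, ratio=0.65):
    """Rough pediatric skin-dose estimate (mGy) from the cumulative air
    kerma displayed by the fluoroscopy system.

    The default 0.65 ratio is the approximate cohort-wide value from
    this small study (n = 12); it is an assumption for illustration,
    not a validated correction factor.
    """
    return ratio * cumulative_ak_mgy
```

    For example, a displayed cumulative AK of 100 mGy would correspond to an estimated skin dose of about 65 mGy under this ratio.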

  18. A preliminary comparison of Na lidar and meteor radar zonal winds during geomagnetic quiet and disturbed conditions

    NASA Astrophysics Data System (ADS)

    Kishore Kumar, G.; Nesse Tyssøy, H.; Williams, Bifford P.

    2018-03-01

    We investigate the possibility that sufficiently large electric fields and/or ionization during geomagnetically disturbed conditions may invalidate the assumptions applied in the retrieval of neutral horizontal winds from meteor radar and/or lidar measurements. To our knowledge, the resulting errors in wind estimation have not previously been reported. In the present case study, we use co-located meteor radar and sodium resonance lidar zonal wind measurements over Andenes (69.27°N, 16.04°E) during intense substorms in the declining phase of the January 2005 solar proton event (21-22 January 2005). In total, 14 h of measurements are available for the comparison, covering both quiet and disturbed conditions. For the comparison, the lidar zonal wind measurements are averaged over the same time and altitude intervals as the meteor radar wind measurements. High cross correlations (∼0.8) are found in all height regions. The discrepancies can be explained by differences in the observational volumes of the two instruments. We then extend the comparison to address the impact of the electric field and/or ionization on the neutral wind estimation. During periods of low ionization, the neutral winds estimated with the two instruments are quite consistent with each other. During periods of elevated ionization, comparatively large differences are noticed at the highest altitudes, which might be due to the impact of the electric field and/or ionization on the wind estimation. At present, one event is not sufficient to draw a firm conclusion; further study with more co-located measurements is needed to test the statistical significance of the result.

  19. Regression calibration for models with two predictor variables measured with error and their interaction, using instrumental variables and longitudinal data.

    PubMed

    Strand, Matthew; Sillau, Stefan; Grunwald, Gary K; Rabinovitch, Nathan

    2014-02-10

    Regression calibration provides a way to obtain unbiased estimators of fixed effects in regression models when one or more predictors are measured with error. Recent development of measurement error methods has focused on models that include interaction terms between measured-with-error predictors and, separately, on methods for estimation in models that account for correlated data. In this work, we derive explicit and novel forms of regression calibration estimators and associated asymptotic variances for longitudinal models that include interaction terms, when data from instrumental and unbiased surrogate variables are available but the actual predictors of interest are not. The longitudinal data are fit using linear mixed models that contain random intercepts and account for serial correlation and unequally spaced observations. The motivating application involves a longitudinal study of exposure to two pollutants (predictors), outdoor fine particulate matter and cigarette smoke, and their association in interactive form with levels of a biomarker of inflammation, leukotriene E4 (LTE4; outcome), in asthmatic children. Because the exposure concentrations could not be directly observed, we used measurements from a fixed outdoor monitor and urinary cotinine concentrations as instrumental variables, and concentrations of fine ambient particulate matter and cigarette smoke measured with error by personal monitors as unbiased surrogate variables. We applied the derived regression calibration methods to estimate coefficients of the unobserved predictors and their interaction, allowing for direct comparison of the toxicity of the different pollutants. We used simulations to verify the accuracy of inferential methods based on asymptotic theory. Copyright © 2013 John Wiley & Sons, Ltd.
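    The core regression-calibration idea (impute the unobserved predictors from instruments, then fit the outcome model on the imputations) can be sketched in a simplified cross-sectional form. This is not the paper's longitudinal mixed-model estimator; all names are illustrative, and the interaction step relies on conditionally independent measurement errors.

```python
import numpy as np

def regression_calibration(y, w1, w2, z1, z2):
    """Toy two-stage regression-calibration sketch.

    w1, w2: unbiased surrogates of the true (unobserved) predictors;
    z1, z2: instrumental variables; y: outcome. Stage 1 regresses each
    surrogate on the instruments to impute E[X | Z]; stage 2 fits the
    outcome model, including the interaction, on the imputed values.
    Returns (intercept, beta1, beta2, beta_interaction).
    """
    Z = np.column_stack([np.ones_like(z1), z1, z2])
    x1_hat = Z @ np.linalg.lstsq(Z, w1, rcond=None)[0]  # imputed E[X1 | Z]
    x2_hat = Z @ np.linalg.lstsq(Z, w2, rcond=None)[0]  # imputed E[X2 | Z]
    X = np.column_stack([np.ones_like(y), x1_hat, x2_hat, x1_hat * x2_hat])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

    Naively regressing y on the noisy surrogates w1 and w2 would attenuate the coefficients; the two-stage fit recovers them because the imputed predictors are functions of the instruments only.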

  20. A geographic information system tool to solve regression equations and estimate flow-frequency characteristics of Vermont Streams

    USGS Publications Warehouse

    Olson, Scott A.; Tasker, Gary D.; Johnston, Craig M.

    2003-01-01

    Estimates of the magnitude and frequency of streamflow are needed to safely and economically design bridges, culverts, and other structures in or near streams. These estimates also are used for managing floodplains, identifying flood-hazard areas, and establishing flood-insurance rates, but may be required at ungaged sites where no observed flood data are available for streamflow-frequency analysis. This report describes equations for estimating flow-frequency characteristics at ungaged, unregulated streams in Vermont. In the past, regression equations developed to estimate streamflow statistics required users to spend hours manually measuring basin characteristics for the stream site of interest. This report also describes the accompanying customized geographic information system (GIS) tool that automates the measurement of basin characteristics and calculation of corresponding flow statistics. The tool includes software that computes the accuracy of the results and adjustments for expected probability and for streamflow data of a nearby stream-gaging station that is either upstream or downstream and within 50 percent of the drainage area of the site where the flow-frequency characteristics are being estimated. The custom GIS can be linked to the National Flood Frequency program, adding the ability to plot peak-flow-frequency curves and synthetic hydrographs and to compute adjustments for urbanization.
