Sample records for present study estimates

  1. Simulation studies of wide and medium field of view earth radiation data analysis

    NASA Technical Reports Server (NTRS)

    Green, R. N.

    1978-01-01

    A parameter estimation technique is presented to estimate the radiative flux distribution over the earth from radiometer measurements at satellite altitude. The technique analyzes measurements from a wide field of view (WFOV), horizon to horizon, nadir pointing sensor with a mathematical technique to derive the radiative flux estimates at the top of the atmosphere for resolution elements smaller than the sensor field of view. A computer simulation of the data analysis technique is presented for both earth-emitted and reflected radiation. Zonal resolutions are considered as well as the global integration of plane flux. An estimate of the equator-to-pole gradient is obtained from the zonal estimates. Sensitivity studies of the derived flux distribution to directional model errors are also presented. In addition to the WFOV results, medium field of view results are presented.

  2. Black carbon emissions in Russia: A critical review

    NASA Astrophysics Data System (ADS)

    Evans, Meredydd; Kholod, Nazar; Kuklinski, Teresa; Denysenko, Artur; Smith, Steven J.; Staniszewski, Aaron; Hao, Wei Min; Liu, Liang; Bond, Tami C.

    2017-08-01

    This study presents a comprehensive review of estimated black carbon (BC) emissions in Russia from a range of studies. Russia has an important role regarding BC emissions given the extent of its territory above the Arctic Circle, where BC emissions have a particularly pronounced effect on the climate. We assess underlying methodologies and data sources for each major emissions source based on their level of detail, accuracy, and the extent to which they represent current conditions. We then present reference values for each major emissions source. In the case of flaring, the study presents new estimates drawing on data on Russia's associated petroleum gas and the most recent satellite data on flaring. We also present estimates of organic carbon (OC) for each source, either based on the reference studies or from our own calculations. In addition, the study provides uncertainty estimates for each source. Total BC emissions are estimated at 688 Gg in 2014, with an uncertainty range of 401-1,453 Gg, while OC emissions are 9,224 Gg, with uncertainty ranging between 5,596 and 14,736 Gg. Wildfires dominated, contributing about 83% of the total BC emissions; however, the effect on radiative forcing is mitigated in part by OC emissions. We also present an adjusted estimate of Arctic forcing from Russia's BC and OC emissions. In recent years, Russia has pursued policies to reduce flaring and limit particulate emissions from on-road transport, both of which appear to contribute significantly to the lower emissions and forcing values found in this study.

  3. Black carbon emissions in Russia: A critical review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Kholod, Nazar; Kuklinski, Teresa

    Russia has a particularly important role regarding black carbon (BC) emissions given the extent of its territory above the Arctic Circle, where BC emissions have a particularly pronounced effect on the climate. This study presents a comprehensive review of BC estimates from a range of studies. We assess underlying methodologies and data sources for each major emissions source based on their level of detail, accuracy, and the extent to which they represent current conditions. We then present reference values for each major emissions source. In the case of flaring, the study presents new estimates drawing on data on Russian associated petroleum gas and the most recent satellite data on flaring. We also present estimates of organic carbon (OC) for each source, either based on the reference studies or from our own calculations. In addition, the study provides uncertainty estimates for each source. Total BC emissions are estimated at 689 Gg in 2014, with an uncertainty range of 407-1,416 Gg, while OC emissions are 9,228 Gg (with uncertainty between 5,595 and 14,728 Gg). Wildfires dominated, contributing about 83% of the total BC emissions; however, the effect on radiative forcing is mitigated by OC emissions. We also present an adjusted estimate of Arctic forcing from Russian OC and BC emissions. In recent years, Russia has pursued policies to reduce flaring and limit particulate emissions from on-road transport, both of which appear to contribute significantly to the lower emissions and forcing values found in this study.

  4. Numerical Estimation in Children for Both Positive and Negative Numbers

    ERIC Educational Resources Information Center

    Brez, Caitlin C.; Miller, Angela D.; Ramirez, Erin M.

    2016-01-01

    Numerical estimation has been used to study how children mentally represent numbers for many years (e.g., Siegler & Opfer, 2003). However, these studies have always presented children with positive numbers and positive number lines. Children's mental representation of negative numbers has never been addressed. The present study tested children…

  5. The benefits of improved technologies in agricultural aviation

    NASA Technical Reports Server (NTRS)

    Lietzke, K.; Abram, P.; Braen, C.; Givens, S.; Hazelrigg, G. A., Jr.; Fish, R.; Clyne, F.; Sand, F.

    1977-01-01

    The results are presented for a study of the economic benefits attributable to a variety of potential technological improvements in agricultural aviation. Part 1 gives a general description of the ag-air industry and discusses the information used in the data base to estimate the potential benefits from technological improvements. Part 2 presents the benefit estimates and provides a quantitative basis for the estimates in each area studied. Part 3 is a bibliography of references relating to this study.

  6. INVERSE MODEL ESTIMATION AND EVALUATION OF SEASONAL NH3 EMISSIONS

    EPA Science Inventory

    The presentation topic is inverse modeling for the estimation and evaluation of emissions. The case study presented addresses the need for seasonal estimates of NH3 emissions for air quality modeling. The inverse modeling application approach is first described, and then the NH

  7. Black carbon emissions in Russia: A critical review

    DOE PAGES

    Evans, Meredydd; Kholod, Nazar; Kuklinski, Teresa; ...

    2017-05-18

    Here, this study presents a comprehensive review of estimated black carbon (BC) emissions in Russia from a range of studies. Russia has an important role regarding BC emissions given the extent of its territory above the Arctic Circle, where BC emissions have a particularly pronounced effect on the climate. We assess underlying methodologies and data sources for each major emissions source based on their level of detail, accuracy, and the extent to which they represent current conditions. We then present reference values for each major emissions source. In the case of flaring, the study presents new estimates drawing on data on Russia's associated petroleum gas and the most recent satellite data on flaring. We also present estimates of organic carbon (OC) for each source, either based on the reference studies or from our own calculations. In addition, the study provides uncertainty estimates for each source. Total BC emissions are estimated at 688 Gg in 2014, with an uncertainty range of 401-1,453 Gg, while OC emissions are 9,224 Gg, with uncertainty ranging between 5,596 and 14,736 Gg. Wildfires dominated, contributing about 83% of the total BC emissions; however, the effect on radiative forcing is mitigated in part by OC emissions. We also present an adjusted estimate of Arctic forcing from Russia's BC and OC emissions. In recent years, Russia has pursued policies to reduce flaring and limit particulate emissions from on-road transport, both of which appear to contribute significantly to the lower emissions and forcing values found in this study.

  8. Black carbon emissions in Russia: A critical review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Meredydd; Kholod, Nazar; Kuklinski, Teresa

    Here, this study presents a comprehensive review of estimated black carbon (BC) emissions in Russia from a range of studies. Russia has an important role regarding BC emissions given the extent of its territory above the Arctic Circle, where BC emissions have a particularly pronounced effect on the climate. We assess underlying methodologies and data sources for each major emissions source based on their level of detail, accuracy, and the extent to which they represent current conditions. We then present reference values for each major emissions source. In the case of flaring, the study presents new estimates drawing on data on Russia's associated petroleum gas and the most recent satellite data on flaring. We also present estimates of organic carbon (OC) for each source, either based on the reference studies or from our own calculations. In addition, the study provides uncertainty estimates for each source. Total BC emissions are estimated at 688 Gg in 2014, with an uncertainty range of 401-1,453 Gg, while OC emissions are 9,224 Gg, with uncertainty ranging between 5,596 and 14,736 Gg. Wildfires dominated, contributing about 83% of the total BC emissions; however, the effect on radiative forcing is mitigated in part by OC emissions. We also present an adjusted estimate of Arctic forcing from Russia's BC and OC emissions. In recent years, Russia has pursued policies to reduce flaring and limit particulate emissions from on-road transport, both of which appear to contribute significantly to the lower emissions and forcing values found in this study.

  9. Xenia Spacecraft Study Addendum: Spacecraft Cost Estimate

    NASA Technical Reports Server (NTRS)

    Hill, Spencer; Hopkins, Randall

    2009-01-01

    This slide presentation reviews the Xenia spacecraft cost estimates as an addendum for the Xenia Spacecraft study. The NASA/Air Force Cost model (NAFCPOM) was used to derive the cost estimates that are expressed in 2009 dollars.

  10. Mammalian cell culture process for monoclonal antibody production: nonlinear modelling and parameter estimation.

    PubMed

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad; Roman, Monica

    2015-01-01

    Monoclonal antibodies (mAbs) are at present one of the fastest-growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, with improvements achieved through trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. Using a dynamical model of such processes, an optimization-based technique for estimating the kinetic parameters of the model is developed. The estimation is achieved by minimizing an error function with a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed here using a particular model of mammalian cell culture as a case study, but is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
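
The core idea of the abstract, fitting kinetic parameters by minimizing an error function with particle swarm optimization, can be sketched generically. The Monod-type growth model, parameter names, and all numbers below are illustrative assumptions, not the paper's actual bioprocess model:

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise `objective` over the box `bounds` with a basic particle swarm."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "kinetic parameter" fit: recover (mu_max, Ks) of a Monod growth rate
# mu(S) = mu_max * S / (Ks + S) from synthetic, noise-free observations.
true = (0.8, 1.5)
S = [0.5, 1.0, 2.0, 4.0, 8.0]
obs = [true[0] * s / (true[1] + s) for s in S]

def sse(params):
    mu_max, Ks = params
    return sum((mu_max * s / (Ks + s) - o) ** 2 for s, o in zip(S, obs))

best, err = pso(sse, bounds=[(0.0, 2.0), (0.1, 5.0)])
```

With noise-free synthetic data the swarm recovers the generating parameters closely; the same loop applies to any error function over a bounded parameter box.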

  11. Mammalian Cell Culture Process for Monoclonal Antibody Production: Nonlinear Modelling and Parameter Estimation

    PubMed Central

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad

    2015-01-01

    Monoclonal antibodies (mAbs) are at present one of the fastest-growing products of the pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAb production processes is predominantly based on empirical knowledge, with improvements achieved through trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. Using a dynamical model of such processes, an optimization-based technique for estimating the kinetic parameters of the model is developed. The estimation is achieved by minimizing an error function with a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed here using a particular model of mammalian cell culture as a case study, but is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies. PMID:25685797

  12. Hypersonic Research Vehicle (HRV) real-time flight test support feasibility and requirements study. Part 1: Real-time flight experiment support

    NASA Technical Reports Server (NTRS)

    Rediess, Herman A.; Ramnath, Rudrapatna V.; Vrable, Daniel L.; Hirvo, David H.; Mcmillen, Lowell D.; Osofsky, Irving B.

    1991-01-01

    The results of a study to identify potential real-time remote computational applications for supporting the monitoring of HRV flight test experiments are presented, along with definitions of preliminary requirements. A major expansion of the support capability available at Ames-Dryden was considered. The focus is on the use of extensive computation and databases together with real-time flight data to generate and present high-level information to those monitoring the flight. Six examples were considered: (1) boundary layer transition location; (2) shock wave position estimation; (3) performance estimation; (4) surface temperature estimation; (5) critical structural stress estimation; and (6) stability estimation.

  13. Bigger is Better, but at What Cost? Estimating the Economic Value of Incremental Data Assets.

    PubMed

    Dalessandro, Brian; Perlich, Claudia; Raeder, Troy

    2014-06-01

    Many firms depend on third-party vendors to supply data for commercial predictive modeling applications. An issue that has received very little attention in the prior research literature is the estimation of a fair price for purchased data. In this work we present a methodology for estimating the economic value of adding incremental data to predictive modeling applications and present two cases studies. The methodology starts with estimating the effect that incremental data has on model performance in terms of common classification evaluation metrics. This effect is then translated into economic units, which gives an expected economic value that the firm might realize with the acquisition of a particular data asset. With this estimate a firm can then set a data acquisition price that targets a particular return on investment. This article presents the methodology in full detail and illustrates it in the context of two marketing case studies.
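
The valuation logic described in the abstract, translating a measured lift in model performance into economic units and then backing out a price that preserves a target return on investment, reduces to a few lines. All figures below are hypothetical, not taken from the article's case studies:

```python
# Hypothetical campaign numbers (illustrative assumptions, not the paper's data):
n_targets = 100_000          # customers targeted per campaign
value_per_conversion = 50.0  # profit per converted customer
base_rate = 0.020            # conversion rate of targets under the incumbent model
rate_with_data = 0.023       # conversion rate after adding the vendor's data asset

# Step 1: translate the performance lift into expected economic value.
incremental_value = n_targets * (rate_with_data - base_rate) * value_per_conversion

# Step 2: set a data acquisition price that targets a particular ROI.
target_roi = 0.5             # require a 50% return on the data spend
max_price = incremental_value / (1 + target_roi)
```

Here the 0.3-percentage-point lift is worth an expected $15,000 per campaign, so a buyer demanding 50% ROI should pay at most $10,000 for the data.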

  14. Estimation of the measurement uncertainty in magnetic resonance velocimetry based on statistical models

    NASA Astrophysics Data System (ADS)

    Bruschewski, Martin; Freudenhammer, Daniel; Buchenberg, Waltraud B.; Schiffer, Heinz-Peter; Grundmann, Sven

    2016-05-01

    Velocity measurements with magnetic resonance velocimetry offer outstanding possibilities for experimental fluid mechanics. The purpose of this study was to provide practical guidelines for the estimation of the measurement uncertainty in such experiments. Based on various test cases, it is shown that the uncertainty estimate can vary substantially depending on how the uncertainty is obtained. The conventional approach to estimate the uncertainty from the noise in the artifact-free background can lead to wrong results. A deviation of up to -75 % is observed with the presented experiments. In addition, a similarly high deviation is demonstrated with the data from other studies. As a more accurate approach, the uncertainty is estimated directly from the image region with the flow sample. Two possible estimation methods are presented.
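
The contrast the abstract draws, between uncertainty estimated from the artifact-free background and uncertainty measured directly in the flow region, can be illustrated with synthetic repeated scans. The noise levels (and hence the size of the understatement) below are assumptions chosen for illustration, not the paper's measurements:

```python
import random
import statistics

# Synthetic repeated velocity maps: the background noise level understates the
# true uncertainty inside the flow region (e.g. due to flow-dependent artefacts).
rng = random.Random(1)
n_repeats, n_px = 200, 50
sigma_bg, sigma_flow = 0.5, 1.0   # assumed noise levels (illustrative)

background = [[rng.gauss(0.0, sigma_bg) for _ in range(n_px)] for _ in range(n_repeats)]
flow = [[rng.gauss(2.0, sigma_flow) for _ in range(n_px)] for _ in range(n_repeats)]

def roi_uncertainty(repeats):
    """Pixel-wise std over repeated scans, averaged across the region of interest."""
    stds = [statistics.stdev(col) for col in zip(*repeats)]
    return sum(stds) / len(stds)

u_bg = roi_uncertainty(background)    # conventional background-based estimate
u_flow = roi_uncertainty(flow)        # estimate taken in the flow region itself
deviation = (u_bg - u_flow) / u_flow  # negative: background understates uncertainty
```

With these assumed noise levels the background-based estimate understates the flow-region uncertainty by about 50%, mirroring the kind of deviation the study reports.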

  15. An alternative subspace approach to EEG dipole source localization

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.

  16. Methods to Estimate the Variance of Some Indices of the Signal Detection Theory: A Simulation Study

    ERIC Educational Resources Information Center

    Suero, Manuel; Privado, Jesús; Botella, Juan

    2017-01-01

    A simulation study is presented to evaluate and compare three methods to estimate the variance of the estimates of the parameters "d'" and "C" of signal detection theory (SDT). Several methods have been proposed to calculate the variance of their estimators, "d'" and "c." Those methods have been mostly assessed by…
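
The SDT indices in question have standard closed-form point estimators, and the variance of such an estimator can itself be approximated by simulation. A minimal sketch; the log-linear correction and all rates below are illustrative choices, not the methods compared in the article:

```python
import random
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Point estimates of the SDT indices d' and c from a detection table,
    with a log-linear correction (add 0.5 per cell) to avoid infinite z-scores."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(h) - z(f)            # sensitivity
    c = -0.5 * (z(h) + z(f))         # response criterion
    return d_prime, c

d_prime, c = sdt_indices(hits=40, misses=10, false_alarms=10, correct_rejections=40)

# One way to estimate the variance of the d' estimator: Monte Carlo simulation
# over repeated experiments with assumed true hit/false-alarm rates.
rng = random.Random(0)

def simulated_dprime_variance(n_trials=50, p_hit=0.8, p_fa=0.2, reps=500):
    vals = []
    for _ in range(reps):
        hits = sum(rng.random() < p_hit for _ in range(n_trials))
        fas = sum(rng.random() < p_fa for _ in range(n_trials))
        d, _ = sdt_indices(hits, n_trials - hits, fas, n_trials - fas)
        vals.append(d)
    mean = sum(vals) / reps
    return sum((v - mean) ** 2 for v in vals) / (reps - 1)

var_d = simulated_dprime_variance()
```

For the symmetric table above, c comes out at zero and d' around 1.64; the simulated variance gives an empirical baseline against which analytic variance formulas can be checked.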

  17. A comparison of two indices for the intraclass correlation coefficient.

    PubMed

    Shieh, Gwowen

    2012-12-01

    In the present study, we examined the behavior of two indices for measuring the intraclass correlation in the one-way random effects model: the prevailing ICC(1) (Fisher, 1938) and the corrected eta-squared (Bliese & Halverson, 1998). These two procedures differ both in their methods of estimating the variance components that define the intraclass correlation coefficient and in the bias and mean squared error of their estimates of it. In contrast with the natural unbiased principle used to construct ICC(1), the present study analytically shows that the corrected eta-squared estimator is identical to the maximum likelihood estimator and the pairwise estimator under equal group sizes. Moreover, the empirical results obtained from the present Monte Carlo simulation study across various group structures revealed the mutual dominance relationship between their truncated versions for negative values. The corrected eta-squared estimator performs better than the ICC(1) estimator when the underlying population intraclass correlation coefficient is small. Conversely, ICC(1) has a clear advantage over the corrected eta-squared for medium and large magnitudes of the population intraclass correlation coefficient. The conceptual description and numerical investigation provide guidelines to help researchers choose between the two indices for more accurate reliability analysis in multilevel research.
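
For reference, ICC(1) is computed from the mean squares of a one-way random-effects ANOVA. A minimal sketch for equal group sizes; the toy data are illustrative:

```python
def icc1(groups):
    """ICC(1) from a one-way random-effects ANOVA, equal group sizes:
    (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    k = len(groups[0])                       # group size
    n = len(groups)                          # number of groups
    grand = sum(sum(g) for g in groups) / (n * k)
    means = [sum(g) / k for g in groups]
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)   # between-group MS
    msw = sum((x - m) ** 2                                     # within-group MS
              for g, m in zip(groups, means) for x in g) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Three groups of three with strong between-group separation: ICC(1) ~ 0.897
icc = icc1([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```

Note that, unlike the corrected eta-squared discussed in the abstract, this estimator can go negative when MSW exceeds MSB, which is exactly the truncation issue the study examines.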

  18. COMDYN: Software to study the dynamics of animal communities using a capture-recapture approach

    USGS Publications Warehouse

    Hines, J.E.; Boulinier, T.; Nichols, J.D.; Sauer, J.R.; Pollock, K.H.

    1999-01-01

    COMDYN is a set of programs developed for estimation of parameters associated with community dynamics using count data from two locations or time periods. It is Internet-based, allowing remote users either to input their own data, or to use data from the North American Breeding Bird Survey for analysis. COMDYN allows probability of detection to vary among species and among locations and time periods. The basic estimator for species richness underlying all estimators is the jackknife estimator proposed by Burnham and Overton. Estimators are presented for quantities associated with temporal change in species richness, including rate of change in species richness over time, local extinction probability, local species turnover and number of local colonizing species. Estimators are also presented for quantities associated with spatial variation in species richness, including relative richness at two locations and proportion of species present in one location that are also present at a second location. Application of the estimators to species richness estimation has been previously described and justified. The potential applications of these programs are discussed.
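
The Burnham-Overton jackknife underlying COMDYN's estimators, in its first-order form, adds to the observed richness a term driven by species detected on exactly one occasion. A sketch; the detection data are made up:

```python
def jackknife_richness(occasions):
    """First-order jackknife estimator of species richness (Burnham & Overton):
    S_hat = S_obs + f1 * (n - 1) / n, where f1 is the number of species detected
    on exactly one of the n sampling occasions.
    `occasions` is a list of sets of species detected on each occasion."""
    n = len(occasions)
    counts = {}
    for occ in occasions:
        for sp in occ:
            counts[sp] = counts.get(sp, 0) + 1
    s_obs = len(counts)                                  # species ever detected
    f1 = sum(1 for c in counts.values() if c == 1)       # species seen exactly once
    return s_obs + f1 * (n - 1) / n

# Three occasions, five species observed, three of them only once -> estimate 7.0
est = jackknife_richness([{"a", "b", "c"}, {"a", "b", "d"}, {"a", "e"}])
```

Many singletons signal imperfect detection, so the estimate is pushed above the raw count; with every species detected repeatedly, the estimate collapses to the observed richness.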

  19. [Potentials in the regionalization of health indicators using small-area estimation methods : Exemplary results based on the 2009, 2010 and 2012 GEDA studies].

    PubMed

    Kroll, Lars Eric; Schumann, Maria; Müters, Stephan; Lampert, Thomas

    2017-12-01

    Nationwide health surveys can be used to estimate regional differences in health. Using traditional estimation techniques, the spatial depth of these estimates is limited by the constrained sample size; so far, without special refreshment samples, results have only been available for the more populous federal states of Germany. An alternative is regression-based small-area estimation techniques. These models can generate smaller-scale data, but are also subject to greater statistical uncertainties because of their model assumptions. In the present article, exemplary regionalized estimates of the respondents' self-rated health status, based on the studies "Gesundheit in Deutschland aktuell" (GEDA studies) 2009, 2010 and 2012, are compared. The aim of the article is to analyze the range of regional estimates in order to assess the usefulness of the techniques for health reporting more adequately. The results show that the estimated prevalence is relatively stable when using different samples. Important determinants of the variation of the estimates are the achieved sample size at the district level and the type of district (cities vs. rural regions). Overall, the present study shows that small-area modelling of prevalence is associated with additional uncertainties compared to conventional estimates, which should be taken into account when interpreting the corresponding findings.

  20. Large-scale precipitation estimation using Kalpana-1 IR measurements and its validation using GPCP and GPCC data

    NASA Astrophysics Data System (ADS)

    Prakash, Satya; Mahesh, C.; Gairola, Rakesh M.

    2011-12-01

    Large-scale precipitation estimation is very important for climate science because precipitation is a major component of the earth's water and energy cycles. In the present study, the GOES precipitation index technique has been applied to three-hourly Kalpana-1 satellite infrared (IR) images (0000, 0300, 0600, …, 2100 UTC) for rainfall estimation, in preparation for INSAT-3D. Once the temperatures of all the pixels in a grid are known, they are binned into a three-hourly 24-class histogram of brightness temperatures from the IR (10.5-12.5 μm) images for each 1.0° × 1.0° latitude/longitude box. Daily, monthly, and seasonal rainfall have been estimated from these three-hourly rain estimates for the entire south-west monsoon period of 2009. To investigate the potential of these rainfall estimates, the monthly and seasonal estimates have been validated against Global Precipitation Climatology Project and Global Precipitation Climatology Centre data. The validation results show that the present technique works very well for large-scale precipitation estimation, both qualitatively and quantitatively. The results also suggest that this simple IR-based technique can be used to estimate rainfall over tropical areas at larger temporal scales for climatological applications.
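
The GOES precipitation index itself is essentially a one-line rule: mean rain over a grid box equals a fixed conditional rain rate (about 3 mm/h) times the fraction of IR pixels colder than 235 K times the accumulation time. A sketch; the pixel values are illustrative:

```python
def gpi_rain(brightness_temps_k, hours=3.0, t_thresh=235.0, rate_mm_per_h=3.0):
    """GOES Precipitation Index (Arkin-style): mean rain (mm) over a grid box
    = rate x fractional coverage of IR pixels colder than the threshold x time."""
    cold = sum(1 for t in brightness_temps_k if t < t_thresh)
    frac = cold / len(brightness_temps_k)
    return rate_mm_per_h * frac * hours

# One 1.0 x 1.0 deg box over a 3-h window: 40 of 100 pixels are colder than 235 K,
# giving 3 mm/h * 0.4 * 3 h = 3.6 mm of estimated rain.
rain_mm = gpi_rain([220.0] * 40 + [260.0] * 60)
```

Daily, monthly, and seasonal totals then follow by accumulating these three-hourly box estimates, which is what the study does before validating against GPCP and GPCC.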

  1. Photometric redshifts for the next generation of deep radio continuum surveys - II. Gaussian processes and hybrid estimates

    NASA Astrophysics Data System (ADS)

    Duncan, Kenneth J.; Jarvis, Matt J.; Brown, Michael J. I.; Röttgering, Huub J. A.

    2018-07-01

    Building on the first paper in this series (Duncan et al. 2018), we present a study investigating the performance of Gaussian process photometric redshift (photo-z) estimates for galaxies and active galactic nuclei (AGNs) detected in deep radio continuum surveys. A Gaussian process redshift code is used to produce photo-z estimates targeting specific subsets of both the AGN population - infrared (IR), X-ray, and optically selected AGNs - and the general galaxy population. The new estimates for the AGN population are found to perform significantly better at z > 1 than the template-based photo-z estimates presented in our previous study. Our new photo-z estimates are then combined with template estimates through hierarchical Bayesian combination to produce a hybrid consensus estimate that outperforms both of the individual methods across all source types. Photo-z estimates for radio sources that are X-ray sources or optical/IR AGNs are significantly improved in comparison to previous template-only estimates - with outlier fractions and robust scatter reduced by up to a factor of ~4. The ability of our method to combine the strengths of the two input photo-z techniques and the large improvements we observe illustrate its potential for enabling future exploitation of deep radio continuum surveys for both the study of galaxy and black hole coevolution and for cosmological studies.

  2. The Contribution of Scenic Beauty Indicators in Estimating Environmental Welfare Measures: A Case Study

    ERIC Educational Resources Information Center

    Fanariotu, Ioanna; Skuras, Dimitris

    2004-01-01

    Aesthetic indicators of landscapes, expressed as individual scenic beauty estimates, may be used as proxies of individuals' specific aesthetic values, and improve the properties of welfare estimates produced by contingent valuation models. This work presents results from an interdisciplinary study where forest scenic beauty indicators are utilized…

  3. Estimation of population mean in the presence of measurement error and non response under stratified random sampling

    PubMed Central

    Shabbir, Javid

    2018-01-01

    In the present paper we propose an improved class of estimators in the presence of measurement error and non-response under stratified random sampling for estimating the finite population mean. The theoretical and numerical studies reveal that the proposed class of estimators performs better than other existing estimators. PMID:29401519
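
The baseline being improved upon is the classical stratified sample mean, which weights each stratum's sample mean by that stratum's population share. A sketch with made-up stratum sizes and samples (the paper's proposed class of estimators additionally corrects for measurement error and non-response, which is not shown here):

```python
def stratified_mean(strata):
    """Unbiased estimator of the population mean under stratified random sampling:
    ybar_st = sum_h W_h * ybar_h, with stratum weight W_h = N_h / N.
    `strata` is a list of (N_h, sample) pairs: population size and sampled values."""
    N = sum(N_h for N_h, _ in strata)
    return sum((N_h / N) * (sum(sample) / len(sample)) for N_h, sample in strata)

# Two strata with population sizes 600 and 400 (illustrative numbers):
# stratum means 12 and 21, so ybar_st = 0.6 * 12 + 0.4 * 21 = 15.6
est = stratified_mean([(600, [10.0, 12.0, 14.0]), (400, [20.0, 22.0])])
```

Each stratum contributes in proportion to its population size, not its sample size, which is what makes the estimator unbiased under disproportionate allocation.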

  4. Can Dyscalculics Estimate the Results of Arithmetic Problems?

    ERIC Educational Resources Information Center

    Ganor-Stern, Dana

    2017-01-01

    The present study is the first to examine the computation estimation skills of dyscalculics versus controls using the estimation comparison task. In this task, participants judged whether an estimated answer to a multidigit multiplication problem was larger or smaller than a given reference number. While dyscalculics were less accurate than…

  5. DARK MATTER MASS FRACTION IN LENS GALAXIES: NEW ESTIMATES FROM MICROLENSING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiménez-Vicente, J.; Mediavilla, E.; Kochanek, C. S.

    2015-02-01

    We present a joint estimate of the stellar/dark matter mass fraction in lens galaxies and the average size of the accretion disk of lensed quasars based on microlensing measurements of 27 quasar image pairs seen through 19 lens galaxies. The Bayesian estimate for the fraction of the surface mass density in the form of stars is α = 0.21 ± 0.14 near the Einstein radius of the lenses (∼1-2 effective radii). The estimate for the average accretion disk size is R_1/2 = 7.9 (+3.8, −2.6) √(M/0.3 M_⊙) light days. The fraction of mass in stars at these radii is significantly larger than previous estimates from microlensing studies assuming quasars were point-like. The corresponding local dark matter fraction of 79% is in good agreement with other estimates based on strong lensing or kinematics. The size of the accretion disk inferred in the present study is slightly larger than previous estimates.

  6. Structural nested mean models for assessing time-varying effect moderation.

    PubMed

    Almirall, Daniel; Ten Have, Thomas; Murphy, Susan A

    2010-03-01

    This article considers the problem of assessing causal effect moderation in longitudinal settings in which treatment (or exposure) is time varying and so are the covariates said to moderate its effect. Intermediate causal effects that describe time-varying causal effects of treatment conditional on past covariate history are introduced and considered as part of Robins' structural nested mean model. Two estimators of the intermediate causal effects, and their standard errors, are presented and discussed: the first is a proposed two-stage regression estimator; the second is Robins' G-estimator. Results are presented from a small simulation study that begins to shed light on the small- versus large-sample performance of the estimators and on the bias-variance trade-off between them. The methodology is illustrated using longitudinal data from a depression study.

  7. Revisiting the social cost of carbon.

    PubMed

    Nordhaus, William D

    2017-02-14

    The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 (in 2010 US$) for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources.
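
The abstract's central-case numbers imply a simple SCC trajectory: $31 per ton of CO2 in 2015, growing at 3% per year in real terms to 2050. A quick sketch of that path:

```python
def scc(year, scc_2015=31.0, growth=0.03):
    """Central-case SCC path implied by the abstract: $31/tCO2 (2010 US$) in 2015,
    growing at 3% per year in real terms through 2050."""
    return scc_2015 * (1 + growth) ** (year - 2015)

scc_2030 = scc(2030)   # roughly $48/tCO2
scc_2050 = scc(2050)   # roughly $87/tCO2
```

Note this is only the central case; the paper's full analysis varies discount rates and model assumptions, which shift both the level and the growth rate of the SCC.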

  8. Revisiting the social cost of carbon

    NASA Astrophysics Data System (ADS)

    Nordhaus, William D.

    2017-02-01

    The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 (in 2010 US$) for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources.

  9. Estimation of octanol/water partition coefficient and aqueous solubility of environmental chemicals using molecular fingerprints and machine learning methods

    EPA Science Inventory

    Octanol/water partition coefficient (logP) and aqueous solubility (logS) are two important parameters in pharmacology and toxicology studies, and experimental measurements are usually time-consuming and expensive. In the present research, novel methods are presented for the estim...

  10. Corrected score estimation in the proportional hazards model with misclassified discrete covariates

    PubMed Central

    Zucker, David M.; Spiegelman, Donna

    2013-01-01

    We consider Cox proportional hazards regression when the covariate vector includes error-prone discrete covariates along with error-free covariates, which may be discrete or continuous. The misclassification in the discrete error-prone covariates is allowed to be of any specified form. Building on the work of Nakamura and his colleagues, we present a corrected score method for this setting. The method can handle all three major study designs (internal validation design, external validation design, and replicate measures design), both functional and structural error models, and time-dependent covariates satisfying a certain ‘localized error’ condition. We derive the asymptotic properties of the method and indicate how to adjust the covariance matrix of the regression coefficient estimates to account for estimation of the misclassification matrix. We present the results of a finite-sample simulation study under Weibull survival with a single binary covariate having known misclassification rates. The performance of the method described here was similar to that of related methods we have examined in previous works. Specifically, our new estimator performed as well as or, in a few cases, better than the full Weibull maximum likelihood estimator. We also present simulation results for our method for the case where the misclassification probabilities are estimated from an external replicate measures study. Our method generally performed well in these simulations. The new estimator has a broader range of applicability than many other estimators proposed in the literature, including those described in our own earlier work, in that it can handle time-dependent covariates with an arbitrary misclassification structure. We illustrate the method on data from a study of the relationship between dietary calcium intake and distal colon cancer. PMID:18219700

  11. Progress in Turbulence Detection via GNSS Occultation Data

    NASA Technical Reports Server (NTRS)

    Cornman, L. B.; Goodrich, R. K.; Axelrad, P.; Barlow, E.

    2012-01-01

    The increased availability of radio occultation (RO) data offers the ability to detect and study turbulence in the Earth's atmosphere. An analysis of how RO data can be used to determine the strength and location of turbulent regions is presented. This includes the derivation of a model for the power spectrum of the log-amplitude and phase fluctuations of the permittivity (or index of refraction) field. The bulk of the paper is then concerned with the estimation of the model parameters. Parameter estimators are introduced and some of their statistical properties are studied. These estimators are then applied to simulated log-amplitude RO signals. This includes the analysis of global statistics derived from a large number of realizations, as well as case studies that illustrate various specific aspects of the problem. Improvements to the basic estimation methods are discussed, and their beneficial properties are illustrated. The estimation techniques are then applied to real occultation data. Only two cases are presented, but they illustrate some of the salient features inherent in real data.

  12. Unbiased estimation of the calcaneus volume using the Cavalieri principle on computed tomography images.

    PubMed

    Acer, N; Bayar, B; Basaloglu, H; Oner, E; Bayar, K; Sankur, S

    2008-11-20

    The size and shape of tarsal bones are especially relevant when considering some orthopedic diseases such as clubfoot. For this reason, the measurements of the tarsal bones have been the subject of many studies, none of which has used stereological methods to estimate the volume. In the present stereological study, we estimated the volume of calcaneal bone of normal feet and dry bones. We used a combination of the Cavalieri principle and computed tomographic scans taken from eight males and nine dry calcanei to estimate the volumes of calcaneal bones. The mean volume of the dry calcaneal bones was 49.11+/-10.7 cm(3) by the point-counting method and 48.22+/-11.92 cm(3) by the Archimedes principle. A positive correlation was found between anthropometric measurements and the volume of calcaneal bones. The findings of the present study using the stereological methods could provide data for the evaluation of normal and pathological volumes of calcaneal bones.
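    The Cavalieri estimator combined with point counting reduces to a simple product: total volume is slice spacing times the area represented by one grid point times the total number of points hitting the structure across all sections. A minimal sketch, with invented point counts rather than the study's data:

```python
# Hedged sketch of the Cavalieri volume estimate used with CT sections:
# V = t * a_p * sum(P_i), where t is slice spacing (cm), a_p the area
# represented by one grid point (cm^2), and P_i the points hitting the
# bone on section i. The point counts below are invented for
# illustration, not measurements from the study.

def cavalieri_volume(point_counts, slice_spacing_cm, area_per_point_cm2):
    return slice_spacing_cm * area_per_point_cm2 * sum(point_counts)

v = cavalieri_volume([12, 25, 31, 28, 14],
                     slice_spacing_cm=0.5, area_per_point_cm2=0.9)  # cm^3
```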

  13. The Graphical Display of Simulation Results, with Applications to the Comparison of Robust IRT Estimators of Ability.

    ERIC Educational Resources Information Center

    Thissen, David; Wainer, Howard

    Simulation studies of the performance of (potentially) robust statistical estimation produce large quantities of numbers in the form of performance indices of the various estimators under various conditions. This report presents a multivariate graphical display used to aid in the digestion of the plentiful results in a current study of Item…

  14. Investigation of methods for estimating hand bone dimensions using X-ray hand anthropometric data.

    PubMed

    Kong, Yong-Ku; Freivalds, Andris; Kim, Dae-Min; Chang, Joonho

    2017-06-01

    This study examined two conversion methods, M1 and M2, to predict finger/phalange bone lengths based on finger/phalange surface lengths. Forty-one Korean college students (25 males and 16 females) were recruited and their finger/phalange surface lengths, bone lengths and grip strengths were measured using a vernier caliper, an X-ray generator and a double-handle force measurement system, respectively. M1 and M2 were defined as formulas that estimate finger/phalange bone lengths from one dimension (surface hand length) and four finger dimensions (surface finger lengths), respectively. The mean estimation error of M1 was 1.22 mm, smaller than that of M2 (1.29 mm). The bone lengths estimated by M1 (mean r = 0.81) also showed higher correlations with the measured bone lengths than those estimated by M2 (0.79). Thus, the M1 method was recommended in the present study, based on conversion simplicity and accuracy.

  15. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  16. Airport Surface Traffic Control Concept Formulation Study : Volume 4. Estimation of Requirements

    DOT National Transportation Integrated Search

    1975-07-01

    A detailed study of requirements was performed and is presented. This requirements effort provided an estimate of the performance requirements of a surveillance sensor that would be required in a TAGS (Tower Automated Ground Surveillance) system for ...

  17. Using open robust design models to estimate temporary emigration from capture-recapture data.

    PubMed

    Kendall, W L; Bjorkland, R

    2001-12-01

    Capture-recapture studies are crucial in many circumstances for estimating demographic parameters for wildlife and fish populations. Pollock's robust design, involving multiple sampling occasions per period of interest, provides several advantages over classical approaches. This includes the ability to estimate the probability of being present and available for detection, which in some situations is equivalent to breeding probability. We present a model for estimating availability for detection that relaxes two assumptions required in previous approaches. The first is that the sampled population is closed to additions and deletions across samples within a period of interest. The second is that each member of the population has the same probability of being available for detection in a given period. We apply our model to estimate survival and breeding probability in a study of hawksbill sea turtles (Eretmochelys imbricata), where previous approaches are not appropriate.

  18. Using open robust design models to estimate temporary emigration from capture-recapture data

    USGS Publications Warehouse

    Kendall, W.L.; Bjorkland, R.

    2001-01-01

    Capture-recapture studies are crucial in many circumstances for estimating demographic parameters for wildlife and fish populations. Pollock's robust design, involving multiple sampling occasions per period of interest, provides several advantages over classical approaches. This includes the ability to estimate the probability of being present and available for detection, which in some situations is equivalent to breeding probability. We present a model for estimating availability for detection that relaxes two assumptions required in previous approaches. The first is that the sampled population is closed to additions and deletions across samples within a period of interest. The second is that each member of the population has the same probability of being available for detection in a given period. We apply our model to estimate survival and breeding probability in a study of hawksbill sea turtles (Eretmochelys imbricata), where previous approaches are not appropriate.

  19. Precipitation estimates and comparison of satellite rainfall data to in situ rain gauge observations to further develop the watershed-modeling capabilities for the Lower Mekong River Basin

    NASA Astrophysics Data System (ADS)

    Dandridge, C.; Lakshmi, V.; Sutton, J. R. P.; Bolten, J. D.

    2017-12-01

    This study focuses on the lower region of the Mekong River Basin (MRB), an area including Burma, Cambodia, Vietnam, Laos, and Thailand. This region is home to expansive agriculture that relies heavily on annual precipitation over the basin for its prosperity. Annual precipitation amounts are regulated by the global monsoon system and therefore vary throughout the year. This research will lead to improved prediction of floods and management of floodwaters for the MRB. We compare different satellite estimates of precipitation to each other and to in-situ precipitation estimates for the Mekong River Basin. These comparisons will help us determine which satellite precipitation estimates are better at predicting precipitation in the MRB and will help further our understanding of watershed-modeling capabilities for the basin. In this study we use: 1) NOAA's PERSIANN daily 0.25° precipitation estimate Climate Data Record (CDR); 2) NASA's Tropical Rainfall Measuring Mission (TRMM) daily 0.25° estimate; 3) NASA's Global Precipitation Measurement (GPM) daily 0.1° estimate; and 4) daily precipitation estimates from 488 in-situ stations located in the lower MRB. The PERSIANN CDR precipitation estimate provides the longest data record because it is available from 1983 to present. The TRMM precipitation estimate is available from 2000 to present and the GPM precipitation estimates are available from 2015 to present. For this reason, we provide several comparisons between the precipitation estimates. Comparisons were done between each satellite product and the in-situ precipitation estimates based on geographical location and date, using the entire available data record for each satellite product, for daily, monthly, and yearly precipitation estimates. We found that monthly PERSIANN precipitation estimates were able to explain up to 90% of the variability in station precipitation depending on station location.
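    The satellite-versus-gauge comparison reported above boils down to computing, per station, how much of the variance in gauge totals a satellite product explains. A minimal sketch with made-up monthly totals; the study's "up to 90% explained variance" refers to PERSIANN at particular stations, not to these numbers:

```python
# Illustrative sketch of the gauge-vs-satellite comparison: compute the
# coefficient of determination (R^2 = squared Pearson correlation of a
# linear fit) between monthly in-situ and satellite precipitation
# totals. All values below are hypothetical.

def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

gauge = [120.0, 300.5, 410.2, 80.3, 150.0]      # mm/month (hypothetical)
satellite = [115.0, 310.0, 395.0, 95.0, 160.0]  # mm/month (hypothetical)
r2 = r_squared(gauge, satellite)  # fraction of station variance explained
```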

  20. Reliability reporting across studies using the Buss Durkee Hostility Inventory.

    PubMed

    Vassar, Matt; Hale, William

    2009-01-01

    Empirical research on anger and hostility has pervaded the academic literature for more than 50 years. Accurate measurement of anger/hostility and subsequent interpretation of results require that the instruments yield strong psychometric properties. For consistent measurement, reliability estimates must be calculated with each administration, because changes in sample characteristics may alter the scale's ability to generate reliable scores. Therefore, the present study was designed to address reliability reporting practices for a widely used anger assessment, the Buss Durkee Hostility Inventory (BDHI). Of the 250 published articles reviewed, 11.2% calculated and presented reliability estimates for the data at hand, 6.8% cited estimates from a previous study, and 77.1% made no mention of score reliability. Mean alpha estimates of scores for BDHI subscales generally fell below acceptable standards. Additionally, no detectable pattern was found between reporting practices and publication year or journal prestige. Areas for future research are also discussed.
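    The reliability estimate this review tracks is typically coefficient alpha, computed as alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores) for k items. A minimal sketch using an invented item-response matrix, not BDHI data:

```python
# Hedged sketch of Cronbach's alpha, the score reliability statistic
# discussed in the abstract. The 3-item x 4-respondent matrix below is
# invented for illustration.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one inner list of scores per item (same respondents in order)."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # per-respondent total score
    item_var = sum(variance(it) for it in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

alpha = cronbach_alpha([[1, 0, 1, 1], [1, 0, 1, 0], [1, 1, 1, 0]])
```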

  1. Revisiting the social cost of carbon

    PubMed Central

    Nordhaus, William D.

    2017-01-01

    The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 in 2010 US$ for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources. PMID:28143934

  2. Organochlorines in urban soils from Central India: probabilistic health hazard and risk implications to human population.

    PubMed

    Kumar, Bhupander; Mishra, Meenu; Verma, V K; Rai, Premanjali; Kumar, Sanjay

    2018-04-21

    This study presents the distribution of organochlorines (OCs) including HCH, DDT and PCBs in urban soils, and their environmental and human health risk. Forty-eight soil samples were extracted using ultrasonication, cleaned with modified silica gel chromatography and analyzed by GC-ECD. The observed concentrations of ∑HCH, ∑DDT and ∑PCBs in soils ranged between <0.01-2.54, 1.30-27.41 and <0.01-62.8 µg kg(-1), respectively, which were lower than the recommended soil quality guidelines. Human health risk was estimated following recommended guidelines. Lifetime average daily dose (LADD), non-cancer risk or hazard quotient (HQ) and incremental lifetime cancer risk (ILCR) for humans due to individual and total OCs were estimated and presented. Estimated LADDs were lower than the acceptable daily intake and reference dose. Human health risk estimates were lower than the safe limit of non-cancer risk (HQ < 1.0) and within the acceptable range of ILCR (10(-6)-10(-4)). Therefore, this study concluded that present levels of OCs (HCH, DDT and PCBs) in the studied soils were low, and subsequently posed low health risk to the human population in the study area.

  3. Estimation of stature from the foot and its segments in a sub-adult female population of North India

    PubMed Central

    2011-01-01

    Background Establishing personal identity is one of the main concerns in forensic investigations. Estimation of stature forms a basic domain of the investigation process in unknown and co-mingled human remains in forensic anthropology case work. The objective of the present study was to set up standards for estimation of stature from the foot and its segments in a sub-adult female population. Methods The sample for the study constituted 149 young females from the Northern part of India. The participants were aged between 13 and 18 years. Besides stature, seven anthropometric measurements that included length of the foot from each toe (T1, T2, T3, T4, and T5 respectively), foot breadth at ball (BBAL) and foot breadth at heel (BHEL) were measured on both feet in each participant using standard methods and techniques. Results The results indicated that statistically significant differences (p < 0.05) between left and right feet occur in both the foot breadth measurements (BBAL and BHEL). Foot length measurements (T1 to T5 lengths) did not show any statistically significant bilateral asymmetry. The correlation between stature and all the foot measurements was found to be positive and statistically significant (p-value < 0.001). Linear regression models and multiple regression models were derived for estimation of stature from the measurements of the foot. The present study indicates that anthropometric measurements of foot and its segments are valuable in the estimation of stature. Foot length measurements estimate stature with greater accuracy when compared to foot breadth measurements. Conclusions The present study concluded that foot measurements have a strong relationship with stature in the sub-adult female population of North India. Hence, the stature of an individual can be successfully estimated from the foot and its segments using different regression models derived in the study. 
The regression models derived in the study may be applied successfully for the estimation of stature in sub-adult females, whenever foot remains are brought for forensic examination. Stepwise multiple regression models tend to estimate stature more accurately than linear regression models in female sub-adults. PMID:22104433
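    The linear regression models described above have the form stature = a + b × (foot measurement). A minimal ordinary-least-squares sketch with invented measurement pairs, not the study's data or coefficients:

```python
# Hedged sketch of stature estimation by simple linear regression, as
# in the abstract: fit stature = a + b * foot_length by ordinary least
# squares, then predict from a new foot measurement. All data pairs
# below are hypothetical.

def fit_ols(x, y):
    """Return (intercept, slope) of the least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((a - mx) * (c - my) for a, c in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
    return my - b * mx, b

foot_cm = [22.0, 23.5, 24.0, 25.2, 26.0]          # hypothetical
stature_cm = [150.0, 155.5, 157.0, 162.8, 166.0]  # hypothetical
a, b = fit_ols(foot_cm, stature_cm)
estimated = a + b * 24.5  # stature estimate for a 24.5 cm foot
```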

  4. Estimation of stature from the foot and its segments in a sub-adult female population of North India.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Passi, Neelam

    2011-11-21

    Establishing personal identity is one of the main concerns in forensic investigations. Estimation of stature forms a basic domain of the investigation process in unknown and co-mingled human remains in forensic anthropology case work. The objective of the present study was to set up standards for estimation of stature from the foot and its segments in a sub-adult female population. The sample for the study constituted 149 young females from the Northern part of India. The participants were aged between 13 and 18 years. Besides stature, seven anthropometric measurements that included length of the foot from each toe (T1, T2, T3, T4, and T5 respectively), foot breadth at ball (BBAL) and foot breadth at heel (BHEL) were measured on both feet in each participant using standard methods and techniques. The results indicated that statistically significant differences (p < 0.05) between left and right feet occur in both the foot breadth measurements (BBAL and BHEL). Foot length measurements (T1 to T5 lengths) did not show any statistically significant bilateral asymmetry. The correlation between stature and all the foot measurements was found to be positive and statistically significant (p-value < 0.001). Linear regression models and multiple regression models were derived for estimation of stature from the measurements of the foot. The present study indicates that anthropometric measurements of foot and its segments are valuable in the estimation of stature. Foot length measurements estimate stature with greater accuracy when compared to foot breadth measurements. The present study concluded that foot measurements have a strong relationship with stature in the sub-adult female population of North India. Hence, the stature of an individual can be successfully estimated from the foot and its segments using different regression models derived in the study. 
The regression models derived in the study may be applied successfully for the estimation of stature in sub-adult females, whenever foot remains are brought for forensic examination. Stepwise multiple regression models tend to estimate stature more accurately than linear regression models in female sub-adults.

  5. Estimating site occupancy, colonization, and local extinction when a species is detected imperfectly

    USGS Publications Warehouse

    MacKenzie, D.I.; Nichols, J.D.; Hines, J.E.; Knutson, M.G.; Franklin, A.B.

    2003-01-01

    Few species are likely to be so evident that they will always be detected when present. Failing to allow for the possibility that a target species was present, but undetected, at a site will lead to biased estimates of site occupancy, colonization, and local extinction probabilities. These population vital rates are often of interest in long-term monitoring programs and metapopulation studies. We present a model that enables direct estimation of these parameters when the probability of detecting the species is less than 1. The model does not require any assumptions of process stationarity, as do some previous methods, but does require detection/nondetection data to be collected in a manner similar to Pollock's robust design as used in mark?recapture studies. Via simulation, we show that the model provides good estimates of parameters for most scenarios considered. We illustrate the method with data from monitoring programs of Northern Spotted Owls (Strix occidentalis caurina) in northern California and tiger salamanders (Ambystoma tigrinum) in Minnesota, USA.
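    The core problem can be sketched with a simplified special case: if the per-survey detection probability p were known, a site occupied with probability psi is detected at least once in K surveys with probability psi*(1-(1-p)^K), so the naive fraction of sites with detections underestimates psi. The actual model estimates p and psi jointly by maximum likelihood; the closed-form correction below assumes p known and is for illustration only.

```python
# Simplified sketch of why detection < 1 biases occupancy estimates,
# and the correction the occupancy model formalizes. The full model
# estimates p and psi jointly; here p is assumed known.

def naive_occupancy(psi, p, surveys):
    """Expected fraction of sites with >= 1 detection (biased low)."""
    return psi * (1 - (1 - p) ** surveys)

def corrected_occupancy(observed, p, surveys):
    """Invert the detection process to recover true occupancy psi."""
    return observed / (1 - (1 - p) ** surveys)

obs = naive_occupancy(psi=0.6, p=0.3, surveys=3)  # biased low
est = corrected_occupancy(obs, p=0.3, surveys=3)  # recovers 0.6
```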

  7. An improved method for nonlinear parameter estimation: a case study of the Rössler model

    NASA Astrophysics Data System (ADS)

    He, Wen-Ping; Wang, Liu; Jiang, Yun-Di; Wan, Shi-Quan

    2016-08-01

    Parameter estimation is an important research topic in nonlinear dynamics. Based on the evolutionary algorithm (EA), Wang et al. (2014) presented a new scheme for nonlinear parameter estimation, and numerical tests indicate that the estimation precision is satisfactory. However, the convergence rate of the EA is relatively slow when multiple unknown parameters in a multidimensional dynamical system are estimated simultaneously. To solve this problem, an improved method for parameter estimation of nonlinear dynamical equations is provided in the present paper. The main idea of the improved scheme is to use the known time series for all of the components of the dynamical equations to estimate the parameters in a single component one by one, instead of estimating all of the parameters in all of the components simultaneously. Thus, we can estimate all of the parameters stage by stage. The performance of the improved method was tested using a classic chaotic system, the Rössler model. The numerical tests show that the amended parameter estimation scheme greatly improves the searching efficiency and that there is a significant increase in the convergence rate of the EA, particularly for multiparameter estimation in multidimensional dynamical equations. Moreover, the results indicate that the accuracy of parameter estimation and the CPU time consumed by the presented method have no obvious dependence on the sample size.
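    The stage-by-stage idea can be illustrated outside the EA setting: given observed series for all three Rössler components, a parameter appearing in a single equation (here a in y' = x + a*y) can be estimated by least squares on finite-difference derivatives, one component at a time. This is a hedged sketch using synthetic Euler-integrated data, not the paper's evolutionary algorithm.

```python
# Hedged sketch of "one component at a time" parameter estimation for
# the Rössler system x' = -y - z, y' = x + a*y, z' = b + z*(x - c).
# Data are synthetic (Euler integration); the paper's actual scheme
# uses an evolutionary algorithm, not this least-squares shortcut.

def rossler_series(a=0.2, b=0.2, c=5.7, dt=0.01, n=5000):
    x, y, z = 1.0, 1.0, 1.0
    xs, ys, zs = [x], [y], [z]
    for _ in range(n):
        dx, dy, dz = -y - z, x + a * y, b + z * (x - c)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x); ys.append(y); zs.append(z)
    return xs, ys, zs

def estimate_a(xs, ys, dt=0.01):
    # y'(t) ~ (y[i+1]-y[i])/dt = x[i] + a*y[i]  ->  least squares in a
    num = den = 0.0
    for i in range(len(ys) - 1):
        resid = (ys[i + 1] - ys[i]) / dt - xs[i]
        num += resid * ys[i]
        den += ys[i] * ys[i]
    return num / den

xs, ys, zs = rossler_series()
a_hat = estimate_a(xs, ys)  # recovers a from the y-equation alone
```

    Because only one equation's parameters are searched at a time, the dimensionality of each stage stays small, which is the source of the speed-up the abstract reports for the EA variant.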

  8. "Flash" dance: how speed modulates perceived duration in dancers and non-dancers.

    PubMed

    Sgouramani, Helena; Vatakis, Argiro

    2014-03-01

    Speed has been proposed as a modulating factor on duration estimation. However, the different measurement methodologies and experimental designs used have led to inconsistent results across studies, and, thus, the issue of how speed modulates time estimation remains unresolved. Additionally, no studies have looked into the role of expertise on spatiotemporal tasks (tasks requiring high temporal and spatial acuity; e.g., dancing) and susceptibility to modulations of speed in timing judgments. In the present study, therefore, using naturalistic, dynamic dance stimuli, we aimed at defining the role of speed and the interaction of speed and experience on time estimation. We presented videos of a dancer performing identical ballet steps in fast and slow versions, while controlling for the number of changes present. Professional dancers and non-dancers performed duration judgments through a production and a reproduction task. Analysis revealed a significantly larger underestimation of fast videos as compared to slow ones during reproduction. The exact opposite result was true for the production task. Dancers were significantly less variable in their time estimations as compared to non-dancers. Speed and experience, therefore, affect the participants' estimates of time. Results are discussed in relation to the theoretical framework of current models by focusing on the role of attention. © 2013 Elsevier B.V. All rights reserved.

  9. A Simulation Study Comparison of Bayesian Estimation with Conventional Methods for Estimating Unknown Change Points

    ERIC Educational Resources Information Center

    Wang, Lijuan; McArdle, John J.

    2008-01-01

    The main purpose of this research is to evaluate the performance of a Bayesian approach for estimating unknown change points using Monte Carlo simulations. The univariate and bivariate unknown change point mixed models were presented and the basic idea of the Bayesian approach for estimating the models was discussed. The performance of Bayesian…

  10. Unmasking the component-general and component-specific aspects of primary and secondary memory in the immediate free recall task.

    PubMed

    Gibson, Bradley S; Gondoli, Dawn M

    2018-04-01

    The immediate free recall (IFR) task has been commonly used to estimate the capacities of the primary memory (PM) and secondary memory (SM) components of working memory (WM). Using this method, the correlation between estimates of the PM and SM components has hovered around zero, suggesting that PM and SM represent fully distinct and dissociable components of WM. However, this conclusion has conflicted with more recent studies that have observed moderately strong, positive correlations between PM and SM when separate attention and retrieval tasks are used to estimate these capacities, suggesting that PM and SM represent at least some related capacities. The present study attempted to resolve this empirical discrepancy by investigating the extent to which the relation between estimates of PM and SM might be suppressed by a third variable that operates during the recall portion of the IFR task. This third variable was termed "strength of recency" (SOR) in the present study as it reflected differences in the extent to which individuals used the same experimentally-induced recency recall initiation strategy. As predicted, the present findings showed that the positive correlation between estimates of PM and SM grew from small to medium when the indirect effect of SOR was controlled across two separate sets of studies. This finding is important because it provides stronger support for the distinction between "component-general" and "component-specific" aspects of PM and SM; furthermore, a proof is presented that demonstrates a limitation of using regression techniques to differentiate general and specific aspects of these components.

  11. Geostatistics and petroleum geology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hohn, M.E.

    1988-01-01

    This book examines the purpose and use of geostatistics in exploration and development of oil and gas, with an emphasis on appropriate and pertinent case studies. It presents an overview of geostatistics. Topics covered include: The semivariogram; Linear estimation; Multivariate geostatistics; Nonlinear estimation; From indicator variables to nonparametric estimation; and More detail, less certainty: conditional simulation.

  12. Estimating Total-Test Scores from Partial Scores in a Matrix Sampling Design.

    ERIC Educational Resources Information Center

    Sachar, Jane; Suppes, Patrick

    1980-01-01

    The present study compared six methods, two of which utilize the content structure of items, to estimate total-test scores using 450 students and 60 items of the 110-item Stanford Mental Arithmetic Test. Three methods yielded fairly good estimates of the total-test score. (Author/RL)

  13. Estimating Slope and Level Change in N = 1 Designs

    ERIC Educational Resources Information Center

    Solanas, Antonio; Manolov, Rumen; Onghena, Patrick

    2010-01-01

    The current study proposes a new procedure for separately estimating slope change and level change between two adjacent phases in single-case designs. The procedure eliminates baseline trend from the whole data series before assessing treatment effectiveness. The steps necessary to obtain the estimates are presented in detail, explained, and…

  14. Estimating Surface Area of Sponges and Marine Gorgonians as Indicators of Habitat Availability on Caribbean Coral Reefs

    EPA Science Inventory

    Surface area and topographical complexity are fundamental attributes of shallow tropical coral reefs and can be used to estimate habitat for fish and invertebrates. This study presents empirical methods for estimating surface area provided by sponges and gorgonians in the Central...

  15. Estimation after classification using lot quality assurance sampling: corrections for curtailed sampling with application to evaluating polio vaccination campaigns.

    PubMed

    Olives, Casey; Valadez, Joseph J; Pagano, Marcello

    2014-03-01

    To assess the bias incurred when curtailment of Lot Quality Assurance Sampling (LQAS) is ignored, to present unbiased estimators, to consider the impact of cluster sampling by simulation and to apply our method to published polio immunization data from Nigeria. We present estimators of coverage when using two kinds of curtailed LQAS strategies: semicurtailed and curtailed. We study the proposed estimators with independent and clustered data using three field-tested LQAS designs for assessing polio vaccination coverage, with samples of size 60 and decision rules of 9, 21 and 33, and compare them to biased maximum likelihood estimators. Lastly, we present estimates of polio vaccination coverage from previously published data in 20 local government authorities (LGAs) from five Nigerian states. Simulations illustrate substantial bias if one ignores the curtailed sampling design. Proposed estimators show no bias. Clustering does not affect the bias of these estimators. Across simulations, standard errors show signs of inflation as clustering increases. Neither sampling strategy nor LQAS design influences estimates of polio vaccination coverage in 20 Nigerian LGAs. When coverage is low, semicurtailed LQAS strategies considerably reduce the sample size required to make a decision. Curtailed LQAS designs further reduce the sample size when coverage is high. Results presented dispel the misconception that curtailed LQAS data are unsuitable for estimation. These findings augment the utility of LQAS as a tool for monitoring vaccination efforts by demonstrating that unbiased estimation using curtailed designs is not only possible but these designs also reduce the sample size. © 2014 John Wiley & Sons Ltd.
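    The bias at issue can be reproduced in a few lines: under a semicurtailed design that stops as soon as the decision is settled, the naive estimate successes/observations is biased (here upward, since sampling usually stops on the success boundary). The constants n = 60 and decision rule d = 33 follow one of the field-tested designs cited above; the coverage value and the simulation itself are illustrative, and the paper's unbiased estimators are not reproduced here.

```python
# Illustrative simulation of the bias the paper addresses: with
# semicurtailed LQAS (stop once the pass/fail decision is settled),
# the naive proportion successes/observations over-estimates coverage.
# Design n=60, d=33 follows a cited field-tested design; coverage 0.7
# is arbitrary.

import random

def semicurtailed_sample(coverage, n=60, d=33, rng=random):
    """Sample until d successes (pass) or n - d + 1 failures (fail)."""
    successes = failures = 0
    while successes < d and failures < n - d + 1:
        if rng.random() < coverage:
            successes += 1
        else:
            failures += 1
    return successes, successes + failures

def mean_naive_estimate(coverage, trials=20000, seed=1):
    rng = random.Random(seed)
    est = [s / t for s, t in
           (semicurtailed_sample(coverage, rng=rng) for _ in range(trials))]
    return sum(est) / len(est)

bias = mean_naive_estimate(0.7) - 0.7  # small but systematically positive
```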

  16. Estimating associations of mobile phone use and brain tumours taking into account laterality: a comparison and theoretical evaluation of applied methods.

    PubMed

    Frederiksen, Kirsten; Deltour, Isabelle; Schüz, Joachim

    2012-12-10

    Estimating exposure-outcome associations using laterality information on exposure and on outcome is an issue when estimating associations between mobile phone use and brain tumour risk. The exposure is localized; therefore, a potential risk is expected to exist primarily on the side of the head where the phone is usually held (ipsilateral exposure), and to a lesser extent on the opposite side of the head (contralateral exposure). Several measures of the associations with ipsilateral and contralateral exposure, dealing with different sampling designs, have been presented in the literature. This paper presents a general framework for the analysis of such studies using a likelihood-based approach in a competing risks model setting. The approach clarifies the implicit assumptions required for the validity of the presented estimators, in particular that some approaches assume the risk with contralateral exposure to be zero. The performance of the estimators is illustrated in a simulation study, which shows, for instance, that while some scenarios entail a loss of statistical power, others would, in the case of a positive ipsilateral exposure-outcome association, result in a negatively biased estimate of the contralateral exposure parameter, irrespective of any additional recall bias. In conclusion, our theoretical evaluations and the results of the simulation study emphasize the importance of setting up a formal model, which furthermore allows for estimation in more complicated and perhaps more realistic exposure settings, such as taking into account exposure to both sides of the head. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Estimation of Time-Varying Pilot Model Parameters

    NASA Technical Reports Server (NTRS)

    Zaal, Peter M. T.; Sweet, Barbara T.

    2011-01-01

    Human control behavior is rarely completely stationary over time due to fatigue or loss of attention. In addition, there are many control tasks for which human operators need to adapt their control strategy to vehicle dynamics that vary in time. In previous studies on the identification of time-varying pilot control behavior, wavelets were used to estimate time-varying frequency response functions. However, the estimation of time-varying pilot model parameters was not considered. Estimating these parameters can be a valuable tool for quantifying different aspects of time-varying human manual control. This paper presents two methods for the estimation of time-varying pilot model parameters: a two-step method using wavelets, and a windowed maximum likelihood estimation method. The methods are evaluated using simulations of a closed-loop control task with time-varying pilot equalization and vehicle dynamics. Simulations are performed with and without remnant. Both methods give accurate results when no pilot remnant is present. The wavelet transform is very sensitive to measurement noise, resulting in inaccurate parameter estimates when considerable pilot remnant is present. Maximum likelihood estimation is less sensitive to pilot remnant, but cannot detect fast changes in pilot control behavior.

  18. State Estimates of Disability in America. Disability Statistics Report 3.

    ERIC Educational Resources Information Center

    LaPlante, Mitchell P.

    This study presents and discusses existing data on disability by state, from the 1980 and 1990 censuses, the Current Population Survey (CPS), and the National Health Interview Survey (NHIS). The study used direct methods for states with large sample sizes and synthetic estimates for states with low sample sizes. The study's highlighted findings…

  19. Implicit Particle Filter for Power System State Estimation with Large Scale Renewable Power Integration.

    NASA Astrophysics Data System (ADS)

    Uzunoglu, B.; Hussaini, Y.

    2017-12-01

    The implicit particle filter is a sequential Monte Carlo method for data assimilation that guides the particles to the high-probability region by an implicit step. It optimizes a nonlinear cost function which can be inherited from legacy assimilation routines. Dynamic state estimation for near-real-time applications in power systems is becoming increasingly important with the integration of variable wind and solar power generation. New state estimation tools that replace the older generation of state estimators should be able to address legacy software, integrating it into a common mathematical framework, while accommodating the power industry's need for a cautious, evolutionary rather than revolutionary change, and while addressing nonlinearity and non-normal behaviour. This work implements the implicit particle filter as a tool for estimating the states of a power system and presents the first implicit particle filter application study on power system state estimation. The implicit particle filter is introduced into power systems, and simulations are presented for a three-node benchmark power system. The performance of the filter on the presented problem is analyzed and the results are presented.

  20. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  1. Quantum-enhanced metrology for multiple phase estimation with noise

    PubMed Central

    Yue, Jie-Dong; Zhang, Yu-Ran; Fan, Heng

    2014-01-01

    We present a general quantum metrology framework to study the simultaneous estimation of multiple phases in the presence of noise as a discretized model for phase imaging. This approach can lead to nontrivial bounds on the precision of multiphase estimation. Our results show that simultaneous estimation (SE) of multiple phases is always better than individual estimation (IE) of each phase, even in a noisy environment. The utility of the bounds of multiple phase estimation for photon loss channels is exemplified explicitly. When noise is low, these bounds possess the Heisenberg scaling, showing quantum-enhanced precision with the O(d) advantage for SE, where d is the number of phases. However, this O(d) advantage of the SE scheme in the variance of the estimation may disappear asymptotically when photon loss becomes significant, leaving only a constant advantage over the IE scheme. Potential applications of these results are presented. PMID:25090445

  2. Estimating thermal performance curves from repeated field observations

    USGS Publications Warehouse

    Childress, Evan; Letcher, Benjamin H.

    2017-01-01

    Estimating the thermal performance of organisms is critical for understanding population distributions and dynamics and predicting responses to climate change. Typically, performance curves are estimated using laboratory studies to isolate temperature effects, but other abiotic and biotic factors influence temperature-performance relationships in nature, reducing these models' predictive ability. We present a model for estimating thermal performance curves from repeated field observations that includes environmental and individual variation. We fit the model in a Bayesian framework using MCMC sampling, which allowed for estimation of unobserved latent growth while propagating uncertainty. Fitting the model to simulated data varying in sampling design and parameter values demonstrated that the parameter estimates were accurate, precise, and unbiased. Fitting the model to individual growth data from wild trout revealed high out-of-sample predictive ability relative to laboratory-derived models, which produced more biased predictions of field performance. The field-based estimates of thermal maxima were lower than those based on laboratory studies. Under warming temperature scenarios, field-derived performance models predicted stronger declines in body size than laboratory-derived models, suggesting that laboratory-based models may underestimate climate change effects. The presented model estimates true, realized field performance, avoiding the assumptions required to apply laboratory-based models to field performance, which should improve estimates of performance under climate change and advance thermal ecology.
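
    As a sketch of what such a model estimates, the snippet below simulates noisy growth observations from a Gaussian-shaped thermal performance curve (one common functional form, not the paper's parameterization) and recovers the thermal optimum with a crude grid-search fit standing in for the paper's Bayesian MCMC; all parameter values are invented for illustration.

```python
import math, random

def tpc(T, p_max, T_opt, width):
    """Gaussian-shaped thermal performance curve (illustrative form)."""
    return p_max * math.exp(-((T - T_opt) / width) ** 2)

# simulate noisy field observations of growth at temperatures 4..24 C
random.seed(0)
data = [(T, tpc(T, 1.0, 14.0, 5.0) + random.gauss(0.0, 0.05))
        for T in [4 + 0.5 * i for i in range(41)]]

# crude least-squares grid search over (p_max, T_opt, width)
best = min(
    ((pm, to, w)
     for pm in [0.8 + 0.05 * i for i in range(9)]
     for to in [10 + 0.5 * i for i in range(17)]
     for w in [3 + 0.5 * i for i in range(9)]),
    key=lambda th: sum((g - tpc(T, *th)) ** 2 for T, g in data),
)
print("fitted (p_max, T_opt, width):", best)
```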

  3. A stochastic estimation procedure for intermittently-observed semi-Markov multistate models with back transitions.

    PubMed

    Aralis, Hilary; Brookmeyer, Ron

    2017-01-01

    Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
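
    Rejection sampling, the device the authors use to draw from intractable conditional distributions inside their stochastic EM algorithm, can be sketched generically as follows; the Weibull sojourn-time proposal, its parameters, and the truncation point are invented for illustration, not taken from the Nun Study model.

```python
import random

def rejection_sample(draw, accept, max_tries=10000):
    """Draw proposals until one satisfies the acceptance condition."""
    for _ in range(max_tries):
        x = draw()
        if accept(x):
            return x
    raise RuntimeError("acceptance region too small")

random.seed(7)
draw = lambda: random.weibullvariate(2.0, 1.5)  # proposal: Weibull sojourn time
t_min = 1.0                                     # condition: time exceeds t_min
samples = [rejection_sample(draw, lambda t: t > t_min) for _ in range(2000)]
print(f"mean sojourn time given t > {t_min}: {sum(samples) / len(samples):.2f}")
```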

  4. Misconceptions of Astronomical Distances

    ERIC Educational Resources Information Center

    Miller, Brian W.; Brewer, William F.

    2010-01-01

    Previous empirical studies using multiple-choice procedures have suggested that there are misconceptions about the scale of astronomical distances. The present study provides a quantitative estimate of the nature of this misconception among US university students by asking them, in an open-ended response format, to make estimates of the distances…

  5. Modelling the effect on injuries and fatalities when changing mode of transport from car to bicycle.

    PubMed

    Nilsson, Philip; Stigson, Helena; Ohlin, Maria; Strandroth, Johan

    2017-03-01

    Several studies have estimated the health effects of active commuting, where a transport mode shift from car to bicycle reduces the risk of mortality and morbidity. Previous studies mainly assess the negative aspects of bicycling by referring to fatalities or police-reported injuries. However, most bicycle crashes are not reported by the police, and hospital-reported data therefore cover a much higher share of injuries from bicycle crashes. The aim of the present study was to estimate the effect on injuries and fatalities from traffic crashes when shifting mode of transport from car to bicycle, using hospital-reported data. The present study models the change in the number of injuries and fatalities due to a transport mode change, using a given flow change from car to bicycle and current injury and fatality risks per distance for bicyclists and car occupants. Results show that bicyclists have a much higher injury risk (29 times) and fatality risk (10 times) than car occupants. In a scenario where car occupants in Stockholm living close to their workplace shift transport mode to bicycling, injuries, fatalities and health loss expressed in Disability-Adjusted Life Years (DALY) were estimated to increase. The vast majority of the estimated DALY increase was caused by severe injuries and fatalities, and the estimate fluctuates such that the number of severe crashes may exceed it by a large margin. Despite the estimated increase in traffic crashes and DALY, a transport mode shift is seen as a way towards a more sustainable society. Thus, the present study highlights the need for strategic preventive measures in order to minimize the negative impacts of increased bicycling. Copyright © 2016 Elsevier Ltd. All rights reserved.
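
    The underlying accounting is risk-per-distance arithmetic. The sketch below uses the 29-fold injury risk ratio reported in the abstract; the shifted travel volume and the car occupants' baseline risk per kilometre are invented placeholders, not the study's Stockholm figures.

```python
RISK_RATIO_INJURY = 29  # bicyclist vs car-occupant injury risk (from the abstract)

def extra_injuries(shifted_person_km, car_injury_risk_per_km):
    """Expected change in injuries when travel moves from car to bicycle."""
    bike_risk = RISK_RATIO_INJURY * car_injury_risk_per_km
    return shifted_person_km * (bike_risk - car_injury_risk_per_km)

# hypothetical: 10 million person-km shifted, car risk of 1e-7 injuries per km
print(f"additional expected injuries: {extra_injuries(10e6, 1e-7):.0f}")
```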

  6. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine variance difference between maximum likelihood and expected A posteriori estimation methods viewed from number of test items of aptitude test. The variance presents an accuracy generated by both maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  7. Online frequency estimation with applications to engine and generator sets

    NASA Astrophysics Data System (ADS)

    Manngård, Mikael; Böling, Jari M.

    2017-07-01

    Frequency and spectral analysis based on the discrete Fourier transform is a fundamental task in signal processing and machine diagnostics. This paper presents computationally efficient methods for real-time estimation of stationary and time-varying frequency components in signals. A brief survey of the sliding time window discrete Fourier transform and the Goertzel filter is given, and two filter banks, consisting of (i) sliding time window Goertzel filters and (ii) infinite impulse response narrow bandpass filters, are proposed for estimating instantaneous frequencies. The proposed methods show excellent results both in simulation studies and in a case study using angular speed measurements of the crankshaft of a marine diesel engine-generator set.
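
    For reference, the Goertzel recursion mentioned in the abstract can be sketched as below: a single second-order filter that yields the power of one DFT bin, which is what makes monitoring a few known frequencies cheap compared with a full FFT. The tone frequency, sample rate, and block length are arbitrary illustration values, not the engine-generator data.

```python
import math

def goertzel_power(samples, k, n):
    """Power of DFT bin k of an n-sample block via the Goertzel recursion."""
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2  # |X[k]|^2

# a 50 Hz tone sampled at 1 kHz; a 200-sample block puts it in bin k = 10
n, fs, f0 = 200, 1000.0, 50.0
x = [math.sin(2.0 * math.pi * f0 * t / fs) for t in range(n)]
k = round(f0 * n / fs)
print(f"power at bin {k}: {goertzel_power(x, k, n):.1f}")
print(f"power at bin {k + 3}: {goertzel_power(x, k + 3, n):.1f}")
```

    The energy concentrates entirely in the tone's bin; a bank of such filters, one per frequency of interest, is the structure the paper builds on.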

  8. Study of solid rocket motors for a space shuttle booster. Volume 2, book 3: Cost estimating data

    NASA Technical Reports Server (NTRS)

    Vanderesch, A. H.

    1972-01-01

    Cost estimating data for the 156 inch diameter, parallel burn solid rocket propellant engine selected for the space shuttle booster are presented. The costing aspects on the baseline motor are initially considered. From the baseline, sufficient data is obtained to provide cost estimates of alternate approaches.

  9. Individual snag detection using neighborhood attribute filtered airborne lidar data

    Treesearch

    Brian M. Wing; Martin W. Ritchie; Kevin Boston; Warren B. Cohen; Michael J. Olsen

    2015-01-01

    The ability to estimate and monitor standing dead trees (snags) has been difficult due to their irregular and sparse distribution, often requiring intensive sampling methods to obtain statistically significant estimates. This study presents a new method for estimating and monitoring snags using neighborhood attribute filtered airborne discrete-return lidar data. The...

  10. The benefits of transit in the United States : a review and analysis of benefit-cost studies.

    DOT National Transportation Integrated Search

    2015-07-01

    This white paper presents the findings from a review and analysis of the available literature on benefit-cost (b-c) estimates of : existing U.S. transit systems. Following an inventory of the literature, the b-c estimates from each study were organiz...

  11. Estimation of Fat-free Mass at Discharge in Preterm Infants Fed With Optimized Feeding Regimen.

    PubMed

    Larcade, Julie; Pradat, Pierre; Buffin, Rachel; Leick-Courtois, Charline; Jourdes, Emilie; Picaud, Jean-Charles

    2017-01-01

    The purpose of the present study was to validate a previously calculated equation (E1) that estimates infant fat-free mass (FFM) at discharge using data from a population of preterm infants receiving an optimized feeding regimen. Preterm infants born before 33 weeks of gestation between April 2014 and November 2015 in the tertiary care unit of Croix-Rousse Hospital in Lyon, France, were included in the study. At discharge, FFM was assessed by air displacement plethysmography (PEA POD) and was compared with FFM estimated by E1. FFM was estimated using a multiple linear regression model. Data on 155 preterm infants were collected. There was a strong correlation between the FFM estimated by E1 and FFM assessed by the PEA POD (r = 0.939). E1, however, underestimated the FFM (average difference: -197 g), and this underestimation increased as FFM increased. A new, more predictive equation is proposed (r = 0.950, average difference: -12 g). Although previous estimation methods were useful for estimating FFM at discharge, an equation adapted to present populations of preterm infants with "modern" neonatal care and nutritional practices is required for accuracy.

  12. Travtek Evaluation Yoked Driver Study

    DOT National Transportation Integrated Search

    1998-11-01

    The purpose of this paper is to present estimates of potential safety benefits resulting from full implementation of Intelligent Transportation Systems (ITS) in the United States. These estimates were derived by integrating results from a number of d...

  13. Presenting simulation results in a nested loop plot.

    PubMed

    Rücker, Gerta; Schwarzer, Guido

    2014-12-12

    Statisticians investigate new methods in simulations to evaluate their properties for future real-data applications. Results are often presented in a number of figures, e.g., Trellis plots. We conducted a simulation study of six statistical methods for estimating the treatment effect in binary outcome meta-analyses, where selection bias (e.g., publication bias) was suspected because of apparent funnel plot asymmetry. We varied five simulation parameters: true treatment effect, extent of selection, event proportion in the control group, heterogeneity parameter, and number of studies in the meta-analysis. In combination, this yielded a total of 768 scenarios. To present all results using Trellis plots, 12 figures were needed. Choosing bias as the criterion of interest, we present a 'nested loop plot', a diagram type that aims to show all simulation results in one plot. The idea is to bring all scenarios into a lexicographical order and arrange them consecutively on the horizontal axis of a plot, with the treatment effect estimate presented on the vertical axis. The plot illustrates how parameters simultaneously influenced the estimate. It can be combined with a Trellis plot in a so-called hybrid plot. Nested loop plots may also be applied to other criteria, such as the variance of the estimate. The nested loop plot, similar to a time series graph, summarizes all information about the results of a simulation study with respect to a chosen criterion in one picture and provides a suitable alternative or addition to Trellis plots.
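
    The lexicographical arrangement at the heart of the nested loop plot is a nested Cartesian product. A minimal sketch of how the x-axis positions are generated, using three invented parameters rather than the paper's five:

```python
from itertools import product

# the outermost parameter varies slowest, exactly like nested loops
params = {
    "true_effect": [0.0, 0.5, 1.0],
    "selection":   ["none", "moderate", "strong"],
    "n_studies":   [5, 10, 20],
}
scenarios = list(product(*params.values()))
for x_pos, combo in enumerate(scenarios[:4]):
    print(x_pos, dict(zip(params, combo)))
print("total x-axis positions:", len(scenarios))
```

    Plotting the per-scenario criterion (e.g., bias) against x_pos, with step lines underneath indicating each parameter's current value, reproduces the plot's layout.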

  14. Self-Estimation of Blood Alcohol Concentration: A Review

    PubMed Central

    Aston, Elizabeth R.; Liguori, Anthony

    2013-01-01

    This article reviews the history of blood alcohol concentration (BAC) estimation training, which trains drinkers to discriminate distinct BAC levels and thus avoid excessive alcohol consumption. BAC estimation training typically combines education concerning alcohol metabolism with attention to subjective internal cues associated with specific concentrations. Estimation training was originally conceived as a component of controlled drinking programs. However, dependent drinkers were unsuccessful in BAC estimation, likely due to extreme tolerance. In contrast, moderate drinkers successfully acquired this ability. A subsequent line of research translated laboratory estimation studies to naturalistic settings by studying large samples of drinkers in their preferred drinking environments. Thus far, naturalistic studies have provided mixed results regarding the most effective form of BAC feedback. BAC estimation training is important because it imparts an ability to perceive individualized impairment that may be present below the legal limit for driving. Consequently, the training can be a useful component for moderate drinkers in drunk driving prevention programs. PMID:23380489

  15. Estimation of species extinction: what are the consequences when total species number is unknown?

    PubMed

    Chen, Youhua

    2014-12-01

    The species-area relationship (SAR) is known to overestimate species extinction, but the underlying mechanisms remain unclear to a great extent. Here, I show that when the total species number in an area is unknown, the SAR model exaggerates the estimate of species extinction. It is proposed that, to accurately estimate species extinction caused by habitat destruction, one of the principal prerequisites is an accurate count of the species present in the whole study area. One can better evaluate and compare alternative theoretical SAR models on the accurate estimation of species loss only when the exact total species number for the whole area is known. This presents an opportunity for ecologists to stimulate more research on accurately estimating Whittaker's gamma diversity for the purpose of better predicting species loss.
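
    The point is easy to see from the power-law SAR, S = cA^z: habitat loss determines only the fraction of species lost, so any error in the total species count scales the extinction estimate directly. A small sketch with a conventional z = 0.25; the exponent and the candidate species totals are illustrative assumptions, not the paper's values.

```python
def sar_fraction_lost(area_remaining_ratio, z=0.25):
    """Fraction of species lost under S = c * A**z when area shrinks to
    area_remaining_ratio of its original extent (the constant c cancels)."""
    return 1.0 - area_remaining_ratio ** z

frac = sar_fraction_lost(0.5)  # half the habitat destroyed
print(f"fraction of species lost: {frac:.3f}")

# turning the fraction into a count requires the total species number,
# the quantity the abstract argues is often unknown
for total_species in (500, 1000, 2000):  # hypothetical gamma diversity
    print(total_species, "->", round(frac * total_species), "species lost")
```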

  16. Fossils matter: improved estimates of divergence times in Pinus reveal older diversification.

    PubMed

    Saladin, Bianca; Leslie, Andrew B; Wüest, Rafael O; Litsios, Glenn; Conti, Elena; Salamin, Nicolas; Zimmermann, Niklaus E

    2017-04-04

    The taxonomy of pines (genus Pinus) is widely accepted and a robust gene tree based on entire plastome sequences exists. However, there is a large discrepancy in estimated divergence times of major pine clades among existing studies, mainly due to differences in fossil placement and dating methods used. We currently lack a dated molecular phylogeny that makes use of the rich pine fossil record, and this study is the first to estimate the divergence dates of pines based on a large number of fossils (21) evenly distributed across all major clades, using both node and tip dating methods. We present a range of molecular phylogenetic trees of Pinus generated within a Bayesian framework. We find that the origin of crown Pinus is likely up to 30 Myr older (Early Cretaceous) than inferred in most previous studies (Late Cretaceous) and propose generally older divergence times for major clades within Pinus than previously thought. Our age estimates vary significantly between the different dating approaches, but the results generally agree on older divergence times. We present a revised list of 21 fossils that are suitable for use in dating or comparative analyses of pines. Reliable estimates of divergence times in pines are essential if we are to link diversification processes and functional adaptation of this genus to geological events or to changing climates. In addition to older divergence times in Pinus, our results also indicate that node age estimates in pines depend on the dating approach and the specific fossil set used, reflecting inherent differences in the various dating approaches. The sets of dated phylogenetic trees of pines presented here provide a way to account for uncertainties in age estimation when applying comparative phylogenetic methods.

  17. Methods to estimate the between‐study variance and its uncertainty in meta‐analysis†

    PubMed Central

    Jackson, Dan; Viechtbauer, Wolfgang; Bender, Ralf; Bowden, Jack; Knapp, Guido; Kuss, Oliver; Higgins, Julian PT; Langan, Dean; Salanti, Georgia

    2015-01-01

    Meta-analyses are typically used to estimate the overall mean of an outcome of interest. However, inference about between-study variability, which is typically modelled using a between-study variance parameter, is usually an additional aim. The DerSimonian and Laird method, currently used widely by default to estimate the between-study variance, has long been challenged. Our aim is to identify known methods for estimating the between-study variance and its corresponding uncertainty, and to summarise the simulation and empirical evidence that compares them. We identified 16 estimators for the between-study variance, seven methods to calculate confidence intervals, and several comparative studies. Simulation studies suggest that for both dichotomous and continuous data the estimator proposed by Paule and Mandel, and for continuous data the restricted maximum likelihood estimator, are better alternatives for estimating the between-study variance. Based on the scenarios and results presented in the published studies, we recommend the Q-profile method and the alternative approach based on a 'generalised Cochran between-study variance statistic' to compute corresponding confidence intervals around the resulting estimates. Our recommendations are based on a qualitative evaluation of the existing literature and expert consensus. Evidence-based recommendations require an extensive simulation study in which all methods are compared under the same scenarios. © 2015 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd. PMID:26332144
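
    For context, the DerSimonian and Laird default that the review questions is a short moment estimator. A self-contained sketch, with toy effect sizes and within-study variances invented for illustration:

```python
def dersimonian_laird_tau2(effects, variances):
    """DerSimonian-Laird moment estimator of the between-study variance,
    the widely used default the abstract says has long been challenged."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    mu_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - mu_fixed) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    c = sw - sum(wi * wi for wi in w) / sw
    return max(0.0, (q - (k - 1)) / c)  # truncated at zero

# toy meta-analysis: five study effect estimates with known variances
y = [0.10, 0.30, 0.35, 0.65, 0.45]
v = [0.04, 0.02, 0.05, 0.03, 0.04]
print(f"tau^2 (DL): {dersimonian_laird_tau2(y, v):.4f}")
```

    The Paule-Mandel and REML alternatives the review recommends solve for the same parameter but iteratively, rather than from this single moment equation.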

  18. A proposed Kalman filter algorithm for estimation of unmeasured output variables for an F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Alag, Gurbux S.; Gilyard, Glenn B.

    1990-01-01

    To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and be adaptable to varying degrees of modeling errors and other variations in engine behavior over its operational life cycle. This paper presents an approach to estimating unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as from the application of the estimation algorithm to actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.
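
    The bias-augmentation idea generalizes beyond the F100: append the unknown output bias to the state vector and let a standard Kalman filter estimate it alongside the state. Below is a minimal scalar sketch of that pattern; the dynamics, noise levels and bias value are invented, not the engine model.

```python
import random

random.seed(3)
a, q, r = 0.95, 0.01, 0.04  # state transition, process var., measurement var.
true_bias = 0.8             # constant off-nominal output bias to be estimated

x_est = [0.0, 0.0]            # augmented state estimate [x, b]
P = [[1.0, 0.0], [0.0, 1.0]]  # its covariance
x_true = 0.0
for _ in range(2000):
    x_true = a * x_true + random.gauss(0.0, q ** 0.5)
    z = x_true + true_bias + random.gauss(0.0, r ** 0.5)  # biased measurement

    # predict: x -> a*x, bias is constant (F = [[a, 0], [0, 1]])
    x_est = [a * x_est[0], x_est[1]]
    P = [[a * a * P[0][0] + q, a * P[0][1]],
         [a * P[1][0], P[1][1]]]

    # update with measurement model z = x + b + v  (H = [1, 1])
    s = P[0][0] + P[0][1] + P[1][0] + P[1][1] + r
    K = [(P[0][0] + P[0][1]) / s, (P[1][0] + P[1][1]) / s]
    innov = z - (x_est[0] + x_est[1])
    x_est = [x_est[0] + K[0] * innov, x_est[1] + K[1] * innov]
    P = [[(1 - K[0]) * P[0][0] - K[0] * P[1][0],
          (1 - K[0]) * P[0][1] - K[0] * P[1][1]],
         [(1 - K[1]) * P[1][0] - K[1] * P[0][0],
          (1 - K[1]) * P[1][1] - K[1] * P[0][1]]]

print(f"estimated bias: {x_est[1]:.2f} (true {true_bias})")
```

    The pair (x, b) is observable here because the state dynamics are stable and known, so the filter can attribute the persistent offset in z to the bias state.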

  19. A technology-based mass emission factors of gases and aerosol precursor and spatial distribution of emissions from on-road transport sector in India

    NASA Astrophysics Data System (ADS)

    Prakash, Jai; Habib, Gazala

    2018-05-01

    This study presents a new emission estimate of gaseous pollutants, including CO, CO2, and NOX, from the on-road transport sector of India for the base year 2013. For the first time, detailed vintage-wise, on-road measured emission factors were used to reduce uncertainties in the emission estimates. The consumption of diesel, gasoline, and compressed natural gas (CNG) was also estimated at the national level and disaggregated at the state level. The national average use of diesel, gasoline, and CNG and the corresponding 95% confidence intervals were estimated as 52 (39-66), 24 (18-30), and 1.6 (1.2-2.0) MT y-1 for the year 2013. The CO, CO2, and NOX emissions from the on-road transport sector were estimated as 7349 (3220-11477) Gg y-1, 261 (179-343) Tg y-1, and 4052 (2127-5977) Gg y-1, respectively, for the year 2013. New vehicles registered after 2005 emit 70-80% of national CO2 and NOX, while the remaining 20-30% is emitted by old vehicles registered before 2005. Old and new vehicles contributed equally to CO emissions. Superemitters accounted for 14% of total traffic volume but were responsible for 17-57% of total CO2, CO and NOX emissions. The uncertainties in emission estimates were reduced to 48-56%, compared to previous estimates (62-136%). A comparison with recent studies of nationwide emission estimates for 4-wheelers indicated that using emission factors from dynamometer studies can underestimate emissions by 32-92% for various pollutants, while an overestimation of 20-82% was seen with emission factors derived from emission models. Similarly, for the city of Delhi, recent CO and NOx emission estimates for 4-wheelers based on emission factors from dynamometer studies were 23-89% lower than in the present work. The present work reveals the need for a representative vintage-wise emission factor database developed from on-road measurements, and for a more comprehensive assessment of activity data through surveys.
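
    The bottom-up structure of such an inventory is activity times emission factor, summed over vehicle vintages. A toy sketch of that accounting; all activity levels and factors below are invented placeholders, not the study's measured values.

```python
# vintage: (annual activity in vehicle-km, CO emission factor in g/km)
fleet = {
    "pre-2005":  (4.0e10, 5.0),
    "post-2005": (1.2e11, 1.7),
}

total_g = sum(km * ef for km, ef in fleet.values())
co_gg = total_g / 1e9  # grams -> gigagrams
print(f"CO estimate: {co_gg:.0f} Gg/y")
for vintage, (km, ef) in fleet.items():
    print(f"  {vintage}: {km * ef / total_g:.0%} of CO")
```

    With these invented numbers the two vintages split CO roughly evenly, echoing the abstract's finding that old and new vehicles contributed equally to CO.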

  20. Subjective Word Frequency Estimates in L1 and L2.

    ERIC Educational Resources Information Center

    Arnaud, Pierre J. L.

    A study investigated the usefulness of non-native speakers' subjective, relative word frequency estimates as a measure of second language proficiency. In the experiment, two subjective frequency estimate (SFE) tasks, one on French and one on English, were presented to French learners of English (n=126) and American learners of French (n=87).…

  1. Model comparisons for estimating carbon emissions from North American wildland fire

    Treesearch

    Nancy H.F. French; William J. de Groot; Liza K. Jenkins; Brendan M. Rogers; Ernesto Alvarado; Brian Amiro; Bernardus De Jong; Scott Goetz; Elizabeth Hoy; Edward Hyer; Robert Keane; B.E. Law; Donald McKenzie; Steven G. McNulty; Roger Ottmar; Diego R. Perez-Salicrup; James Randerson; Kevin M. Robertson; Merritt Turetsky

    2011-01-01

    Research activities focused on estimating the direct emissions of carbon from wildland fires across North America are reviewed as part of the North American Carbon Program disturbance synthesis. A comparison of methods to estimate the loss of carbon from the terrestrial biosphere to the atmosphere from wildland fires is presented. Published studies on emissions from...

  2. An overall estimation of losses caused by diseases in the Brazilian fish farms.

    PubMed

    Tavares-Dias, Marcos; Martins, Maurício Laterça

    2017-12-01

    Parasitic and infectious diseases are common in finfish, but it is difficult to accurately estimate their economic impact on production in a country of large dimensions like Brazil. The aim of this study was to estimate the costs of the economic losses of finfish due to mortality from diseases in Brazil. A model for estimating the costs related to parasitic and bacterial diseases in farmed fish is presented, together with an estimate of these economic impacts. We used official data on production and mortality of finfish for a rough estimation of economic losses. The losses presented here relate to direct and indirect economic costs for freshwater farmed fish, estimated at US$ 84 million per year. Finally, it was possible to establish for the first time an estimate of overall losses in finfish production in Brazil using available production data. This estimate should help researchers and policy makers to approximate the economic costs of diseases for the fish farming industry, as well as support the development of public policies on disease control measures and priority research lines.

  3. Meta-analysis of the effect of road work zones on crash occurrence.

    PubMed

    Theofilatos, Athanasios; Ziakopoulos, Apostolos; Papadimitriou, Eleonora; Yannis, George; Diamandouros, Konstantinos

    2017-11-01

    There is strong evidence that work zones pose an increased risk of crashes and injuries. The two risk factors most commonly associated with increased crash frequencies are work zone duration and length. However, relevant research on the topic is relatively limited. For that reason, this paper presents formal meta-analyses of studies that have estimated the relationship between the number of crashes and work zone duration and length, in order to provide overall estimates of those effects on crash frequencies. All studies included in this paper are crash prediction models with similar specifications. According to the meta-analyses, and after correcting for publication bias where appropriate, the summary estimates of the regression coefficients were found to be 0.1703 for duration and 0.862 for length. The effect was significant for length but not for duration, although the overall estimate for duration was significant before the publication-bias correction. Separate meta-analyses of the studies examining both duration and length were also carried out to obtain rough estimates of the combined effects: the estimate for duration was 0.953, and for length 0.847. As in the previous meta-analyses, the effect of duration was not significant after correcting for publication bias, while the effect of length was significant at the 95% level. Meta-regression findings indicate that the main factors influencing the overall estimates of the beta coefficients are study year and region for duration, and study year and model specification for length. Copyright © 2017 Elsevier Ltd. All rights reserved.
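    Summary estimates like those above come from pooling per-study regression coefficients. A minimal sketch of standard inverse-variance (fixed-effect) pooling, with hypothetical coefficients and standard errors (not the studies analyzed in the paper):

    ```python
    # Inverse-variance (fixed-effect) pooling of per-study regression
    # coefficients. The coefficients and standard errors below are
    # hypothetical, not taken from the meta-analysis described above.

    def pool_fixed_effect(betas, ses):
        """Return the inverse-variance weighted summary estimate and its SE."""
        weights = [1.0 / se ** 2 for se in ses]
        total_w = sum(weights)
        summary = sum(w * b for w, b in zip(weights, betas)) / total_w
        summary_se = (1.0 / total_w) ** 0.5
        return summary, summary_se

    betas = [0.91, 0.78, 0.85]   # hypothetical coefficients for work-zone length
    ses = [0.20, 0.15, 0.25]     # hypothetical standard errors
    est, se = pool_fixed_effect(betas, ses)
    print(round(est, 3), round(se, 3))   # 0.831 0.108
    ```

    Studies with smaller standard errors receive proportionally larger weights, which is why a single precise study can dominate the summary estimate.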

  4. Some applications of categorical data analysis to epidemiological studies.

    PubMed Central

    Grizzle, J E; Koch, G G

    1979-01-01

    Several examples of categorized data from epidemiological studies are analyzed to illustrate that analyses more informative than tests of independence can be performed by fitting models. All of the analyses fit into a unified conceptual framework and can be carried out by weighted least squares. The methods presented show how to calculate point estimates of parameters, asymptotic variances, and asymptotically valid chi-square tests. The examples presented are: analysis of relative risks estimated from several 2 x 2 tables, analysis of selected features of life tables, construction of synthetic life tables from cross-sectional studies, and analysis of dose-response curves. PMID:540590
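    In the simplest one-parameter case (a common log relative risk across several 2 x 2 tables), the weighted least squares machinery reduces to an inverse-variance weighted mean plus a chi-square lack-of-fit statistic. A sketch with hypothetical inputs:

    ```python
    # One-parameter weighted least squares: estimate a common log relative
    # risk across several 2 x 2 tables, with an asymptotic variance and a
    # chi-square homogeneity statistic. All numbers are hypothetical.

    def wls_mean(y, w):
        """WLS estimate of a common mean (weights = inverse variances)."""
        total_w = sum(w)
        est = sum(wi * yi for wi, yi in zip(w, y)) / total_w
        return est, 1.0 / total_w   # point estimate, asymptotic variance

    log_rr = [0.41, 0.55, 0.38]    # hypothetical log relative risks
    weights = [30.0, 20.0, 50.0]   # hypothetical inverse variances
    est, var = wls_mean(log_rr, weights)
    # lack-of-fit (homogeneity) chi-square with k - 1 = 2 degrees of freedom
    chi2 = sum(wi * (yi - est) ** 2 for wi, yi in zip(weights, log_rr))
    ```

    A large chi-square here would indicate that a single common parameter does not fit all tables, pointing to a richer model.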

  5. SUBTLEX-ESP: Spanish Word Frequencies Based on Film Subtitles

    ERIC Educational Resources Information Center

    Cuetos, Fernando; Glez-Nosti, Maria; Barbon, Analia; Brysbaert, Marc

    2011-01-01

    Recent studies have shown that word frequency estimates obtained from films and television subtitles are better to predict performance in word recognition experiments than the traditional word frequency estimates based on books and newspapers. In this study, we present a subtitle-based word frequency list for Spanish, one of the most widely spoken…

  6. A phylogeny and revised classification of Squamata, including 4161 species of lizards and snakes

    PubMed Central

    2013-01-01

    Background The extant squamates (>9400 known species of lizards and snakes) are one of the most diverse and conspicuous radiations of terrestrial vertebrates, but no studies have attempted to reconstruct a phylogeny for the group with large-scale taxon sampling. Such an estimate is invaluable for comparative evolutionary studies, and to address their classification. Here, we present the first large-scale phylogenetic estimate for Squamata. Results The estimated phylogeny contains 4161 species, representing all currently recognized families and subfamilies. The analysis is based on up to 12896 base pairs of sequence data per species (average = 2497 bp) from 12 genes, including seven nuclear loci (BDNF, c-mos, NT3, PDC, R35, RAG-1, and RAG-2), and five mitochondrial genes (12S, 16S, cytochrome b, ND2, and ND4). The tree provides important confirmation for recent estimates of higher-level squamate phylogeny based on molecular data (but with more limited taxon sampling), estimates that are very different from previous morphology-based hypotheses. The tree also includes many relationships that differ from previous molecular estimates and many that differ from traditional taxonomy. Conclusions We present a new large-scale phylogeny of squamate reptiles that should be a valuable resource for future comparative studies. We also present a revised classification of squamates at the family and subfamily level to bring the taxonomy more in line with the new phylogenetic hypothesis. This classification includes new, resurrected, and modified subfamilies within gymnophthalmid and scincid lizards, and boid, colubrid, and lamprophiid snakes. PMID:23627680

  7. Communications availability: Estimation studies at AMSC

    NASA Technical Reports Server (NTRS)

    Sigler, C. Edward, Jr.

    1994-01-01

    The results of the L-band communications availability work performed to date are presented. Results include an L-band communications availability estimation model and field propagation trials using an INMARSAT-M terminal. American Mobile Satellite Corporation's (AMSC's) primary concern is the availability of intelligible voice communications, with secondary concerns for circuit-switched data and fax. The model estimates for representative terrain/vegetation areas are applied to the contiguous U.S. to obtain overall L-band communications availability estimates.

  8. Orbit/attitude estimation with LANDSAT Landmark data

    NASA Technical Reports Server (NTRS)

    Hall, D. L.; Waligora, S.

    1979-01-01

    The use of LANDSAT landmark data for orbit/attitude and camera bias estimation was studied, and the preliminary results of these investigations are presented. The Goddard Trajectory Determination System (GTDS) error analysis capability was used to perform error analysis studies. A number of questions were addressed, including parameter observability and sensitivity, and the effects on the solve-for parameter errors of data span, density, and distribution, and of a priori covariance weighting. The use of the GTDS differential correction capability with actual landmark data was examined. The rms line and element observation residuals were studied as a function of the solve-for parameter set, a priori covariance weighting, force model, attitude model, and data characteristics. Sample results are presented. Finally, verification and preliminary system evaluation of the LANDSAT NAVPAK system for sequential (extended Kalman filter) estimation of orbit, attitude, and camera bias parameters are given.

  9. Delay and environmental costs of truck crashes

    DOT National Transportation Integrated Search

    2013-03-01

    This report presents estimates of certain categories of costs of truck- and bus-involved crashes. Crash related costs estimated as part of this study include vehicle delay costs, emission costs, and fuel consumption costs. In addition, this report al...

  10. Evaluating Perceived Probability of Threat-Relevant Outcomes and Temporal Orientation in Flying Phobia.

    PubMed

    Mavromoustakos, Elena; Clark, Gavin I; Rock, Adam J

    2016-01-01

    Probability bias regarding threat-relevant outcomes has been demonstrated across anxiety disorders but has not been investigated in flying phobia. Individual temporal orientation (time perspective) may be hypothesised to influence estimates of the likelihood of negative outcomes. The present study investigated whether probability bias could be demonstrated in flying phobia and whether probability estimates of negative flying events were predicted by time perspective. Sixty flying-phobic and fifty-five non-flying-phobic adults were recruited to complete an online questionnaire. Participants completed the Flight Anxiety Scale, the Probability Scale (measuring perceived probability of flying-negative, general-negative and general-positive events) and the Past-Negative, Future and Present-Hedonistic subscales of the Zimbardo Time Perspective Inventory (variables argued to predict mental travel forward and backward in time). The flying-phobic group estimated the probability of flying-negative and general-negative events occurring as significantly higher than the non-flying-phobic group did. Past-Negative scores (positively) and Present-Hedonistic scores (negatively) predicted probability estimates of flying-negative events. The Future Orientation subscale did not significantly predict probability estimates. This study is the first to demonstrate probability bias for threat-relevant outcomes in flying phobia. Results suggest that time perspective may influence the perceived probability of threat-relevant outcomes, but the nature of this relationship remains to be determined.

  12. Estimation of nonpoint source loadings of phosphorus for lakes in the Puget Sound region, Washington

    USGS Publications Warehouse

    Gilliom, Robert J.

    1983-01-01

    Control of eutrophication of lakes in watersheds undergoing development is facilitated by estimates of the amounts of phosphorus (P) that reach the lakes from areas under various types of land use. Using a mass-balance model, the author calculated P loadings from present-day P concentrations measured in lake water and from other easily measured physical characteristics in a total of 28 lakes in drainage basins that contain only forest and residential land. The loadings from background sources (forest-land drainage and bulk precipitation) to each of the lakes were estimated by methods developed in a previous study. Differences between estimated present-day P loadings and loadings from background sources were attributed to changes in land use. The mean increase in annual P yield resulting from conversion of forest to residential land use was 7 kilograms per square kilometer, not including septic tank system contributions. Calculated loadings from septic systems were found to correlate best with the number of near-shore dwellings around each lake in 1940. The regression equation expressing this relationship explained 36 percent of the sample variance. There was no significant correlation between estimated septic tank system P loadings and number of dwellings present in 1960 or 1970. The evidence indicates that older systems might contribute more phosphorus to lakes than newer systems, and that there may be substantial time lags between septic system installation and significant impacts on lake-water P concentrations. For lakes in basins that contain agricultural land, the P loading attributable to agriculture can be calculated as the difference between the estimated total loading and the sum of estimated loadings from nonagricultural sources. A comprehensive system for evaluating errors in all loading estimates is presented. 
The empirical relationships developed allow preliminary approximations of the cumulative impact development has had on P loading and the amounts of P loading from generalized land-use categories for Puget Sound lowland lakes. In addition, the sensitivity of a lake to increased loading can be evaluated using the mass-balance model. The data required are presently available for most lakes. Estimates of P loading are useful in developing water-quality goals, setting priorities for lake studies, and designing studies of individual lakes. The suitability of a method for management of individual lakes will often be limited by relatively high levels of uncertainty, especially if the method is used to evaluate relatively small increases in P loading.
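    The loading-attribution step described above is simple mass-balance bookkeeping: loading attributable to development is estimated present-day loading minus background loading. A sketch with hypothetical numbers, chosen so the derived residential yield matches the 7 kg/km2 figure:

    ```python
    # Mass-balance bookkeeping for attributing phosphorus loading to land-use
    # change, in the spirit of the study above. All numbers are hypothetical.

    def landuse_p_loading(total_kg_yr, background_kg_yr):
        """P loading attributed to development: present-day minus background."""
        return total_kg_yr - background_kg_yr

    def residential_yield_increase(loading_kg_yr, converted_area_km2):
        """Increase in annual P yield per km2 of land converted to residential."""
        return loading_kg_yr / converted_area_km2

    dev_load = landuse_p_loading(total_kg_yr=25.0, background_kg_yr=11.0)        # 14.0 kg/yr
    yield_kg_km2 = residential_yield_increase(dev_load, converted_area_km2=2.0)  # 7.0
    ```

    The same subtraction logic extends to agricultural land: its loading is the total minus the sum of estimated nonagricultural loadings.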

  13. Estimating effect of environmental contaminants on women's subfecundity for the MoBa study data with an outcome-dependent sampling scheme

    PubMed Central

    Ding, Jieli; Zhou, Haibo; Liu, Yanyan; Cai, Jianwen; Longnecker, Matthew P.

    2014-01-01

    Motivated by the need from our on-going environmental study in the Norwegian Mother and Child Cohort (MoBa) study, we consider an outcome-dependent sampling (ODS) scheme for failure-time data with censoring. Like the case-cohort design, the ODS design enriches the observed sample by selectively including certain failure subjects. We present an estimated maximum semiparametric empirical likelihood estimation (EMSELE) under the proportional hazards model framework. The asymptotic properties of the proposed estimator were derived. Simulation studies were conducted to evaluate the small-sample performance of our proposed method. Our analyses show that the proposed estimator and design is more efficient than the current default approach and other competing approaches. Applying the proposed approach with the data set from the MoBa study, we found a significant effect of an environmental contaminant on fecundability. PMID:24812419

  14. Degree of Approximation by a General Cλ-Summability Method

    NASA Astrophysics Data System (ADS)

    Sonker, S.; Munjal, A.

    2018-03-01

    In the present study, two theorems describing the degree of approximation of signals belonging to the class Lip(α, p, w) by a more general Cλ method (summability method) are formulated. Improved estimates are obtained in terms of λ(n), where (λ(n))^(-α) ≤ n^(-α) for 0 < α ≤ 1, as compared with previous studies stated in terms of n. These estimates for infinite matrices are applicable in solid state physics, which further motivates the investigation of perturbations of matrix-valued functions.

  15. Occupancy Estimation and Modeling : Inferring Patterns and Dynamics of Species Occurrence

    USGS Publications Warehouse

    MacKenzie, D.I.; Nichols, J.D.; Royle, J. Andrew; Pollock, K.H.; Bailey, L.L.; Hines, J.E.

    2006-01-01

    This is the first book to examine the latest methods for analyzing presence/absence survey data. Using four classes of models (single-species, single-season; single-species, multiple-season; multiple-species, single-season; and multiple-species, multiple-season), the authors discuss the practical sampling situation, present a likelihood-based model enabling direct estimation of the occupancy-related parameters while allowing for imperfect detectability, and make recommendations for designing studies using these models. The book provides authoritative insights into the latest estimation modeling; discusses multiple models that lay the groundwork for future study designs; addresses the critical issue of imperfect detectability and its effects on estimation; and explores in detail the role of probability in estimation.

  16. Positional estimation techniques for an autonomous mobile robot

    NASA Technical Reports Server (NTRS)

    Nandhakumar, N.; Aggarwal, J. K.

    1990-01-01

    Techniques for positional estimation of a mobile robot navigating in an indoor environment are described. A comprehensive review of the various positional estimation techniques studied in the literature is first presented. The techniques are divided into four types, each of which is discussed briefly. Two kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is assumed to be given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.

  17. Overcoming the winner's curse: estimating penetrance parameters from case-control data.

    PubMed

    Zollner, Sebastian; Pritchard, Jonathan K

    2007-04-01

    Genomewide association studies are now a widely used approach in the search for loci that affect complex traits. After detection of significant association, estimates of penetrance and allele-frequency parameters for the associated variant indicate the importance of that variant and facilitate the planning of replication studies. However, when these estimates are based on the original data used to detect the variant, the results are affected by an ascertainment bias known as the "winner's curse." The actual genetic effect is typically smaller than its estimate. This overestimation of the genetic effect may cause replication studies to fail because the necessary sample size is underestimated. Here, we present an approach that corrects for the ascertainment bias and generates an estimate of the frequency of a variant and its penetrance parameters. The method produces a point estimate and confidence region for the parameter estimates. We study the performance of this method using simulated data sets and show that it is possible to greatly reduce the bias in the parameter estimates, even when the original association study had low power. The uncertainty of the estimate decreases with increasing sample size, independent of the power of the original test for association. Finally, we show that application of the method to case-control data can improve the design of replication studies considerably.
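    The winner's curse itself is easy to reproduce by simulation. The generic sketch below (not the authors' correction method) conditions on reaching significance and shows the resulting inflation of the effect estimate:

    ```python
    # A small simulation of the winner's curse: among studies whose estimate
    # reaches significance, the mean estimate overstates the true effect.
    # This is a generic illustration, not the authors' correction method.
    import random

    random.seed(42)
    true_effect, se, z_crit = 0.2, 0.1, 1.96

    significant = []
    for _ in range(20000):
        est = random.gauss(true_effect, se)   # one study's effect estimate
        if abs(est) / se > z_crit:            # the study "wins"
            significant.append(est)

    mean_sig = sum(significant) / len(significant)
    print(mean_sig > true_effect)   # True: ascertainment inflates the estimate
    ```

    The inflation is worst when power is low, which is exactly when replication studies sized on the original estimate are most likely to fail.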

  18. Estimating survival probabilities by exposure levels: utilizing vital statistics and complex survey data with mortality follow-up.

    PubMed

    Landsman, V; Lou, W Y W; Graubard, B I

    2015-05-20

    We present a two-step approach for estimating hazard rates and, consequently, survival probabilities, by levels of a general categorical exposure. The resulting estimator utilizes three sources of data: vital statistics data and census data are used at the first step to estimate the overall hazard rate for a given combination of gender and age group, and cohort data constructed from a nationally representative complex survey with linked mortality records are used at the second step to divide the overall hazard rate by exposure levels. We present an explicit expression for the resulting estimator and consider two methods for variance estimation that account for the complex multistage sample design: (1) the leave-one-out jackknife method, and (2) the Taylor linearization method, which provides an analytic formula for the variance estimator. The methods are illustrated with smoking and all-cause mortality data from the US National Health Interview Survey Linked Mortality Files, and the proposed estimator is compared with a previously studied crude hazard rate estimator that uses survey data only. The advantages of the two-step approach and possible extensions of the proposed estimator are discussed. Copyright © 2015 John Wiley & Sons, Ltd.
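    The two-step logic can be sketched in miniature (this is a simplification, not the authors' full estimator): step 1 supplies an overall hazard for an age-sex group from vital statistics; step 2 apportions it across exposure levels using survey deaths and person-years. All inputs below are hypothetical.

    ```python
    # Step 1: overall hazard from vital statistics.
    # Step 2: apportion it across exposure levels in proportion to the
    # exposure-specific crude hazards observed in a linked survey cohort.
    # A sketch of the idea with hypothetical numbers.

    def split_hazard(overall_hazard, deaths, person_years):
        """Exposure-specific hazards consistent with the overall (step 1) rate."""
        crude = {k: deaths[k] / person_years[k] for k in deaths}
        survey_overall = sum(deaths.values()) / sum(person_years.values())
        return {k: overall_hazard * crude[k] / survey_overall for k in crude}

    hazards = split_hazard(
        overall_hazard=0.010,                      # step 1: vital statistics
        deaths={"smoker": 30, "non-smoker": 20},   # step 2: linked survey data
        person_years={"smoker": 2000, "non-smoker": 3000},
    )
    ```

    By construction, the person-time-weighted average of the split hazards recovers the step 1 overall rate, so the exposure-specific estimates inherit the accuracy of the vital statistics total.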

  19. Parametric study of modern airship productivity

    NASA Technical Reports Server (NTRS)

    Ardema, M. D.; Flaig, K.

    1980-01-01

    A method for estimating the specific productivity of both hybrid and fully buoyant airships is developed. Various methods of estimating structural weight of deltoid hybrids are discussed and a derived weight estimating relationship is presented. Specific productivity is used as a figure of merit in a parametric study of fully buoyant ellipsoidal and deltoid hybrid semi-buoyant vehicles. The sensitivity of results as a function of assumptions is also determined. No airship configurations were found to have superior specific productivity to transport airplanes.

  20. Bias Correction for the Maximum Likelihood Estimate of Ability. Research Report. ETS RR-05-15

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2005-01-01

    Lord's bias function and the weighted likelihood estimation method are effective in reducing the bias of the maximum likelihood estimate of an examinee's ability under the assumption that the true item parameters are known. This paper presents simulation studies to determine the effectiveness of these two methods in reducing the bias when the item…

  1. An inverse problem for a mathematical model of aquaponic agriculture

    NASA Astrophysics Data System (ADS)

    Bobak, Carly; Kunze, Herb

    2017-01-01

    Aquaponic agriculture is a sustainable ecosystem that relies on a symbiotic relationship between fish and macrophytes. While the practice has been growing in popularity, relatively few mathematical models exist that study the system's processes. In this paper, we present a system of ODEs which aims to mathematically model the population and concentration dynamics present in an aquaponic environment. Values of the parameters in the system are estimated from the literature so that simulated results can be presented to illustrate the nature of the solutions to the system. As well, a brief sensitivity analysis is performed in order to identify redundant parameters and highlight those which may need more reliable estimates. Specifically, an inverse problem with manufactured data for fish and plants is presented to demonstrate the ability of the collage theorem to recover parameter estimates.
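    As a toy illustration of the kind of coupled dynamics such a system of ODEs captures (this is not the authors' model; the equations and rate constants below are invented purely for illustration), a forward-Euler integration of a fish-nutrient-plant loop:

    ```python
    # A toy fish-nutrient-plant loop integrated with forward Euler. NOT the
    # authors' model; equations and rates are invented for illustration only.

    def step(fish, nutrient, plant, dt=0.01):
        d_fish = 0.1 * fish * (1 - fish / 50.0)           # logistic fish growth
        d_nutrient = 0.3 * fish - 0.2 * nutrient * plant  # excretion minus uptake
        d_plant = 0.05 * nutrient * plant - 0.02 * plant  # uptake-driven growth
        return (fish + dt * d_fish,
                nutrient + dt * d_nutrient,
                plant + dt * d_plant)

    state = (10.0, 1.0, 1.0)   # initial fish biomass, nutrient level, plant biomass
    for _ in range(1000):      # integrate 10 time units
        state = step(*state)
    ```

    An inverse problem then runs in the opposite direction: given observed trajectories, recover the rate constants (here the collage theorem plays that role in the paper).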

  2. About an adaptively weighted Kaplan-Meier estimate.

    PubMed

    Plante, Jean-François

    2009-09-01

    The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to infer about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the performances of the weighted Kaplan-Meier estimate on finite samples exceed that of the usual Kaplan-Meier estimate. A case study is also presented.
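    For reference, the ordinary Kaplan-Meier estimate that the weighted version generalizes: a product over death times of (1 - d/n). A minimal implementation for (time, event) pairs, where event = 0 marks right-censoring (data hypothetical):

    ```python
    # The standard (unweighted) Kaplan-Meier estimator that the adaptively
    # weighted estimate above builds on. Input data are hypothetical.

    def kaplan_meier(data):
        """Return [(time, S(t))] at each death time for (time, event) pairs;
        event = 0 marks right-censoring."""
        data = sorted(data)
        at_risk = len(data)
        surv, out, i = 1.0, [], 0
        while i < len(data):
            t = data[i][0]
            tied = [e for tt, e in data if tt == t]   # all records at time t
            deaths = sum(tied)
            if deaths:
                surv *= 1.0 - deaths / at_risk        # KM product step
                out.append((t, surv))
            at_risk -= len(tied)                      # remove deaths and censorings
            i += len(tied)
        return out

    km = kaplan_meier([(2, 1), (3, 0), (4, 1), (5, 1), (6, 0)])
    # survival drops at t=2 (5 at risk), t=4 (3 at risk), t=5 (2 at risk)
    ```

    The weighted variant replaces the single-sample step function with a smoother combination across the m populations.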

  3. The relationship between species detection probability and local extinction probability

    USGS Publications Warehouse

    Alpizar-Jara, R.; Nichols, J.D.; Hines, J.E.; Sauer, J.R.; Pollock, K.H.; Rosenberry, C.S.

    2004-01-01

    In community-level ecological studies, generally not all species present in sampled areas are detected. Many authors have proposed the use of estimation methods that allow detection probabilities that are < 1 and that are heterogeneous among species. These methods can also be used to estimate community-dynamic parameters such as species local extinction probability and turnover rates (Nichols et al. Ecol Appl 8:1213-1225; Conserv Biol 12:1390-1398). Here, we present an ad hoc approach to estimating community-level vital rates in the presence of joint heterogeneity of detection probabilities and vital rates. The method consists of partitioning the number of species into two groups using the detection frequencies and then estimating vital rates (e.g., local extinction probabilities) for each group. Estimators from each group are combined in a weighted estimator of vital rates that accounts for the effect of heterogeneity. Using data from the North American Breeding Bird Survey, we computed such estimates and tested the hypothesis that detection probabilities and local extinction probabilities were negatively related. Our analyses support the hypothesis that species detection probability covaries negatively with local probability of extinction and turnover rates. A simulation study was conducted to assess the performance of vital parameter estimators as well as other estimators relevant to questions about heterogeneity, such as coefficient of variation of detection probabilities and proportion of species in each group. Both the weighted estimator suggested in this paper and the original unweighted estimator for local extinction probability performed fairly well and provided no basis for preferring one to the other.
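    The final combination step described above can be sketched as a weighted average of group-specific estimates; the paper's actual grouping rule and weights may differ, and the numbers below are hypothetical.

    ```python
    # Combine extinction-probability estimates from a low-detectability and a
    # high-detectability species group into one community-level estimate,
    # weighting by the proportion of species in each group. A sketch of the
    # idea only; numbers are hypothetical.

    def combined_vital_rate(est_by_group, prop_by_group):
        """Weighted estimator of a community vital rate across detection groups."""
        assert abs(sum(prop_by_group.values()) - 1.0) < 1e-9
        return sum(est_by_group[g] * prop_by_group[g] for g in est_by_group)

    ext = combined_vital_rate(
        est_by_group={"low_detect": 0.30, "high_detect": 0.10},
        prop_by_group={"low_detect": 0.4, "high_detect": 0.6},
    )
    ```

    Weighting by group size keeps a small, hard-to-detect group from dominating the community-level rate while still letting its higher extinction probability register.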

  4. Estimation of fecundability from survey data.

    PubMed

    Goldman, N; Westoff, C F; Paul, L E

    1985-01-01

    The estimation of fecundability from survey data is plagued by methodological problems such as misreporting of dates of birth and marriage and the occurrence of premarital exposure to the risk of conception. Nevertheless, estimates of fecundability from World Fertility Survey data for women married in recent years appear to be plausible for most of the surveys analyzed here and are quite consistent with estimates reported in earlier studies. The estimates presented in this article are all derived from the first interval, the interval between marriage or consensual union and the first live birth conception.
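    Under the classic homogeneous model that first-interval approaches rely on, the monthly conception probability p makes the waiting time geometric, so p can be estimated as the reciprocal of the mean wait. A minimal sketch (this is the textbook moment estimator, not necessarily the survey-specific method used in the article; the waits are hypothetical):

    ```python
    # Moment estimate of monthly fecundability from first-interval waiting
    # times, assuming a homogeneous geometric model: mean wait = 1/p.
    # The waiting times below are hypothetical.

    def fecundability_from_waits(months_to_conception):
        """Estimate monthly fecundability as 1 / (mean months to conception)."""
        mean_wait = sum(months_to_conception) / len(months_to_conception)
        return 1.0 / mean_wait

    waits = [3, 6, 2, 8, 4, 7]   # months from marriage to first conception
    p_hat = fecundability_from_waits(waits)
    ```

    Misreported marriage dates and premarital exposure, the problems the abstract flags, bias the observed waits and hence this estimate, which is why survey-based estimates need careful screening.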

  5. Do labeled versus unlabeled treatments of alternatives' names influence stated choice outputs? Results from a mode choice study.

    PubMed

    Jin, Wen; Jiang, Hai; Liu, Yimin; Klampfl, Erica

    2017-01-01

    Discrete choice experiments have been widely applied to elicit behavioral preferences in the literature. In many of these experiments, the alternatives are named alternatives, meaning that they are naturally associated with specific names. For example, in a mode choice study, the alternatives can be associated with names such as car, taxi, bus, and subway. A fundamental issue that arises in stated choice experiments is whether to treat the alternatives' names as labels (that is, labeled treatment), or as attributes (that is, unlabeled treatment) in the design as well as the presentation phases of the choice sets. In this research, we investigate the impact of labeled versus unlabeled treatments of alternatives' names on the outcome of stated choice experiments, a question that has not been thoroughly investigated in the literature. Using results from a mode choice study, we find that the labeled or the unlabeled treatment of alternatives' names in either the design or the presentation phase of the choice experiment does not statistically affect the estimates of the coefficient parameters. We then proceed to measure the influence toward the willingness-to-pay (WTP) estimates. By using a random-effects model to relate the conditional WTP estimates to the socioeconomic characteristics of the individuals and the labeled versus unlabeled treatments of alternatives' names, we find that: a) Given the treatment of alternatives' names in the presentation phase, the treatment of alternatives' names in the design phase does not statistically affect the estimates of the WTP measures; and b) Given the treatment of alternatives' names in the design phase, the labeled treatment of alternatives' names in the presentation phase causes the corresponding WTP estimates to be slightly higher.

  6. A Comparison of the Cheater Detection and the Unrelated Question Models: A Randomized Response Survey on Physical and Cognitive Doping in Recreational Triathletes

    PubMed Central

    Schröter, Hannes; Studzinski, Beatrix; Dietz, Pavel; Ulrich, Rolf; Striegel, Heiko; Simon, Perikles

    2016-01-01

    Purpose This study assessed the prevalence of physical and cognitive doping in recreational triathletes with two different randomized response models, that is, the Cheater Detection Model (CDM) and the Unrelated Question Model (UQM). Since both models have been employed in assessing doping, the major objective of this study was to investigate whether the estimates of these two models converge. Material and Methods An anonymous questionnaire was distributed to 2,967 athletes at two triathlon events (Frankfurt and Wiesbaden, Germany). Doping behavior was assessed either with the CDM (Frankfurt sample, one Wiesbaden subsample) or the UQM (one Wiesbaden subsample). A generalized likelihood-ratio test was employed to check whether the prevalence estimates differed significantly between models. In addition, we compared the prevalence rates of the present survey with those of a previous study on a comparable sample. Results After exclusion of incomplete questionnaires and outliers, the data of 2,017 athletes entered the final data analysis. Twelve-month prevalence for physical doping ranged from 4% (Wiesbaden, CDM and UQM) to 12% (Frankfurt CDM), and for cognitive doping from 1% (Wiesbaden, CDM) to 9% (Frankfurt CDM). The generalized likelihood-ratio test indicated no differences in prevalence rates between the two methods. Furthermore, there were no significant differences in prevalences between the present (undertaken in 2014) and the previous survey (undertaken in 2011), although the estimates tended to be smaller in the present survey. Discussion The results suggest that the two models can provide converging prevalence estimates. The high rate of cheaters estimated by the CDM, however, suggests that the present results must be seen as a lower bound and that the true prevalence of doping might be considerably higher. PMID:27218830
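    The Unrelated Question Model mentioned above has a simple textbook point estimator: respondents answer the sensitive question with probability p and an innocuous question with known "yes" probability q otherwise, so the observed "yes" rate can be inverted for the sensitive prevalence. The survey parameters below are hypothetical, not those of this study.

    ```python
    # Textbook Unrelated Question Model estimator: observed yes-rate
    #   lambda = p * pi + (1 - p) * q
    # is solved for the sensitive-behavior prevalence pi.
    # All parameter values below are hypothetical.

    def uqm_prevalence(yes_rate, p_sensitive, q_unrelated):
        """Point estimate of the sensitive-behavior prevalence."""
        return (yes_rate - (1 - p_sensitive) * q_unrelated) / p_sensitive

    # e.g. 19% "yes" answers, a 2/3 chance of receiving the sensitive
    # question, and an unrelated question answered "yes" 25% of the time
    pi_hat = uqm_prevalence(yes_rate=0.19, p_sensitive=2 / 3, q_unrelated=0.25)
    ```

    The randomization protects individual respondents (a "yes" cannot be attributed to the sensitive question), while the known mixing probabilities still identify the prevalence in aggregate; the CDM adds a parameter for respondents who do not follow the instructions.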

  8. Comparative study of age estimation using dentinal translucency by digital and conventional methods.

    PubMed

    Bommannavar, Sushma; Kulkarni, Meena

    2015-01-01

Estimating age using the dentition plays a significant role in identification of the individual in forensic cases. Teeth are one of the most durable and strongest structures in the human body. The morphology and arrangement of teeth vary from person to person and are unique to an individual, as are the fingerprints. Therefore, the use of dentition is the method of choice in the identification of the unknown. Root dentin translucency is considered to be one of the best parameters for dental age estimation. Traditionally, root dentin translucency was measured using calipers. Recently, the use of custom built software programs has been proposed for the same. The present study describes a method to measure root dentin translucency on sectioned teeth using a custom built software program, Adobe Photoshop 7.0 version (Adobe system Inc, Mountain View, California). A total of 50 single rooted teeth were sectioned longitudinally to derive a 0.25 mm uniform thickness, and the root dentin translucency was measured using digital and caliper methods and compared. Gustafson's morphohistologic approach was used in this study. Correlation coefficients of translucency measurements to age were statistically significant for both methods (P < 0.125), and linear regression equations derived from both methods revealed better ability of the digital method to assess age. The custom built software program used in the present study is commercially available and widely used image editing software. Furthermore, this method is easy to use and less time consuming. The measurements obtained using this method are more precise and thus help in more accurate age estimation. Considering these benefits, the present study recommends the use of the digital method to assess translucency for age estimation.

  9. Comparative study of age estimation using dentinal translucency by digital and conventional methods

    PubMed Central

    Bommannavar, Sushma; Kulkarni, Meena

    2015-01-01

Introduction: Estimating age using the dentition plays a significant role in identification of the individual in forensic cases. Teeth are one of the most durable and strongest structures in the human body. The morphology and arrangement of teeth vary from person to person and are unique to an individual, as are the fingerprints. Therefore, the use of dentition is the method of choice in the identification of the unknown. Root dentin translucency is considered to be one of the best parameters for dental age estimation. Traditionally, root dentin translucency was measured using calipers. Recently, the use of custom built software programs has been proposed for the same. Objectives: The present study describes a method to measure root dentin translucency on sectioned teeth using a custom built software program, Adobe Photoshop 7.0 version (Adobe system Inc, Mountain View, California). Materials and Methods: A total of 50 single rooted teeth were sectioned longitudinally to derive a 0.25 mm uniform thickness, and the root dentin translucency was measured using digital and caliper methods and compared. Gustafson's morphohistologic approach was used in this study. Results: Correlation coefficients of translucency measurements to age were statistically significant for both methods (P < 0.125), and linear regression equations derived from both methods revealed better ability of the digital method to assess age. Conclusion: The custom built software program used in the present study is commercially available and widely used image editing software. Furthermore, this method is easy to use and less time consuming. The measurements obtained using this method are more precise and thus help in more accurate age estimation. Considering these benefits, the present study recommends the use of the digital method to assess translucency for age estimation. PMID:25709325

  10. How accurately can we estimate energetic costs in a marine top predator, the king penguin?

    PubMed

    Halsey, Lewis G; Fahlman, Andreas; Handrich, Yves; Schmidt, Alexander; Woakes, Anthony J; Butler, Patrick J

    2007-01-01

King penguins (Aptenodytes patagonicus) are one of the greatest consumers of marine resources. However, while their influence on the marine ecosystem is likely to be significant, only an accurate knowledge of their energy demands will indicate their true food requirements. Energy consumption has been estimated for many marine species using the heart rate-rate of oxygen consumption (f(H) - V(O2)) technique, and the technique has been applied successfully to answer eco-physiological questions. However, previous studies on the energetics of king penguins, based on developing or applying this technique, have raised a number of issues about the degree of validity of the technique for this species. These include the predictive validity of the present f(H) - V(O2) equations across different seasons and individuals and during different modes of locomotion. In many cases, these issues also apply to other species for which the f(H) - V(O2) technique has been applied. In the present study, the accuracy of three prediction equations for king penguins was investigated based on validity studies and on estimates of V(O2) from published, field f(H) data. The major conclusions from the present study are: (1) in contrast to that for walking, the f(H) - V(O2) relationship for swimming king penguins is not affected by body mass; (2) prediction equation (1), log(V(O2)) = -0.279 + 1.24log(f(H)) + 0.0237t - 0.0157log(f(H))t, derived in a previous study, is the most suitable equation presently available for estimating V(O2) in king penguins for all locomotory and nutritional states. A number of possible problems associated with producing an f(H) - V(O2) relationship are discussed in the present study. Finally, a statistical method to include easy-to-measure morphometric characteristics, which may improve the accuracy of f(H) - V(O2) prediction equations, is explained.
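    The prediction equation quoted above can be applied directly. A minimal sketch, assuming base-10 logarithms (as is conventional for such allometric equations) and treating t as the additional covariate defined in the original study (its meaning and units follow that paper):

    ```python
    import math

    def predict_vo2(f_h, t):
        """Estimate rate of oxygen consumption V(O2) from heart rate f(H)
        using prediction equation (1) quoted in the abstract:
            log(V(O2)) = -0.279 + 1.24*log(f(H)) + 0.0237*t - 0.0157*log(f(H))*t
        Assumes base-10 logarithms; t is the covariate defined in the
        original study (not specified in the abstract)."""
        log_fh = math.log10(f_h)
        log_vo2 = -0.279 + 1.24 * log_fh + 0.0237 * t - 0.0157 * log_fh * t
        return 10 ** log_vo2
    ```

    For example, `predict_vo2(100, 0)` evaluates the equation at a heart rate of 100 with the covariate at zero, undoing the log transform at the end.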

  11. Option Price Estimates for Water Quality Improvements: A Contingent Valuation Study for the Monongahela River (1985)

    EPA Pesticide Factsheets

    This paper presents the findings from a contingent valuation survey designed to estimate the option price bids for the improved recreation resulting from enhanced water quality in the Pennsylvania portion of the Monongahela River.

  12. A comparison of abundance estimates from extended batch-marking and Jolly–Seber-type experiments

    PubMed Central

    Cowen, Laura L E; Besbeas, Panagiotis; Morgan, Byron J T; Schwarz, Carl J

    2014-01-01

Little attention has been paid to the use of multi-sample batch-marking studies, as it is generally assumed that an individual's capture history is necessary for fully efficient estimates. Recently, however, Huggins et al. (2010) presented a pseudo-likelihood for a multi-sample batch-marking study where they used estimating equations to solve for survival and capture probabilities and then derived abundance estimates using a Horvitz–Thompson-type estimator. We have developed and maximized the likelihood for batch-marking studies. We use data simulated from a Jolly–Seber-type study and convert this to what would have been obtained from an extended batch-marking study. We compare our abundance estimates obtained from the Crosbie–Manly–Arnason–Schwarz (CMAS) model with those of the extended batch-marking model to determine the efficiency of collecting and analyzing batch-marking data. We found that estimates of abundance were similar for all three estimators: CMAS, Huggins, and our likelihood. Gains are made when using unique identifiers and employing the CMAS model in terms of precision; however, the likelihood typically had lower mean square error than the pseudo-likelihood method of Huggins et al. (2010). When faced with designing a batch-marking study, researchers can be confident in obtaining unbiased abundance estimators. Furthermore, they can design studies in order to reduce mean square error by manipulating capture probabilities and sample size. PMID:24558576
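    The Horvitz–Thompson-type abundance step mentioned above reduces, in its simplest form, to dividing the number of captured animals by the estimated capture probability: each captured animal stands in for 1/p animals in the population. A minimal sketch with hypothetical inputs (the paper estimates the capture probability from the batch-marking likelihood, which is not reproduced here):

    ```python
    def horvitz_thompson_abundance(n_captured, capture_prob):
        """Horvitz-Thompson-type abundance estimate: N_hat = n / p_hat.
        capture_prob is the estimated capture probability for the sampling
        occasion (hypothetical input; in the paper it comes from the
        batch-marking likelihood)."""
        if not 0 < capture_prob <= 1:
            raise ValueError("capture probability must be in (0, 1]")
        return n_captured / capture_prob
    ```

    For instance, 50 animals captured with an estimated capture probability of 0.25 yields an abundance estimate of 200.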

  13. Estimation of Right-Lobe Graft Weight From Computed Tomographic Volumetry for Living Donor Liver Transplantation.

    PubMed

    Yang, X; Chu, C W; Yang, J D; Yang, K H; Yu, H C; Cho, B H; You, H

    2017-03-01

The objective of the study was to establish a right-lobe graft weight (GW) estimation formula for living donor liver transplantation (LDLT) from right-lobe graft volume without veins (GV_w/o_veins), i.e., excluding the portal vein and hepatic vein, measured by computed tomographic (CT) volumetry, and to compare its estimation accuracy with those of existing formulas. Right-lobe GW estimation formulas established with the use of graft volume with veins (GV_w_veins) sacrifice accuracy because GW measured intra-operatively excludes the weight of blood in the veins. Right-lobe GW estimation formulas have been established with the use of right-lobe GV_w/o_veins, but a more accurate formula must be developed. The present study developed right-lobe GW estimation formulas based on GV_w/o_veins as well as GV_w_veins, using 40 cases of Korean donors: GW = 29.1 + 0.943 × GV_w/o_veins (adjusted R² = 0.94) and GW = 74.7 + 0.773 × GV_w_veins (adjusted R² = 0.87). The proposed GW estimation formulas were compared with existing GV_w_veins- and GV_w/o_veins-based models, using 43 cases additionally obtained from two medical centers for cross-validation. The GV_w/o_veins-based formula developed in the present study was most preferred (absolute error = 21.5 ± 16.5 g and percentage of absolute error = 3.0 ± 2.3%). The GV_w/o_veins-based formula is preferred to the GV_w_veins-based formula in GW estimation. Accurate CT volumetry and alignment between planned and actual surgical cutting lines are crucial in the establishment of a better GW estimation formula. Copyright © 2016 Elsevier Inc. All rights reserved.
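    The two regression formulas quoted in the abstract are simple linear maps from CT-measured graft volume to graft weight. A minimal sketch, with volumes in mL and weights in g as in the study:

    ```python
    def graft_weight_without_veins(gv_wo_veins_ml):
        """GW estimate (g) from right-lobe graft volume excluding the
        portal and hepatic veins, per the formula quoted in the abstract
        (adjusted R^2 = 0.94): GW = 29.1 + 0.943 * GV."""
        return 29.1 + 0.943 * gv_wo_veins_ml

    def graft_weight_with_veins(gv_w_veins_ml):
        """GW estimate (g) from graft volume including the veins
        (adjusted R^2 = 0.87): GW = 74.7 + 0.773 * GV."""
        return 74.7 + 0.773 * gv_w_veins_ml
    ```

    A graft volume of 700 mL without veins, for example, maps to an estimated graft weight of about 689 g.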

  14. 3D Tendon Strain Estimation Using High-frequency Volumetric Ultrasound Images: A Feasibility Study.

    PubMed

    Carvalho, Catarina; Slagmolen, Pieter; Bogaerts, Stijn; Scheys, Lennart; D'hooge, Jan; Peers, Koen; Maes, Frederik; Suetens, Paul

    2018-03-01

Estimation of strain in tendons for tendinopathy assessment is a hot topic within the sports medicine community. It is believed that, if accurately estimated, existing treatment and rehabilitation protocols can be improved and presymptomatic abnormalities can be detected earlier. State-of-the-art studies present inaccurate and highly variable strain estimates, leaving this problem without solution. Out-of-plane motion, present when acquiring two-dimensional (2D) ultrasound (US) images, is a known problem and may be responsible for such errors. This work investigates the benefit of high-frequency, three-dimensional (3D) US imaging to reduce errors in tendon strain estimation. Volumetric US images were acquired in silico, in vitro, and ex vivo using an innovative acquisition approach that combines the acquisition of 2D high-frequency US images with a mechanical guided system. An affine image registration method was used to estimate global strain. 3D strain estimates were then compared with ground-truth values and with 2D strain estimates. The obtained results for in silico data showed a mean absolute error (MAE) of 0.07%, 0.05%, and 0.27% for 3D estimates along the axial, lateral, and elevation directions, and a respective MAE of 0.21% and 0.29% for 2D strain estimates. Although 3D could outperform 2D, this does not occur in in vitro and ex vivo settings, likely due to 3D acquisition artifacts. Comparison against the state-of-the-art methods showed competitive results. The proposed work shows that 3D strain estimates are more accurate than 2D estimates, but acquisition of appropriate 3D US images remains a challenge.

  15. Joint coverage probability in a simulation study on Continuous-Time Markov Chain parameter estimation.

    PubMed

    Benoit, Julia S; Chan, Wenyaw; Doody, Rachelle S

    2015-01-01

Parameter dependency within data sets in simulation studies is common, especially in models such as Continuous-Time Markov Chains (CTMC). Additionally, the literature lacks a comprehensive examination of estimation performance for the likelihood-based general multi-state CTMC. Among studies attempting to assess the estimation, none have accounted for dependency among parameter estimates. The purpose of this research is twofold: 1) to develop a multivariate approach for assessing accuracy and precision in simulation studies; and 2) to add to the literature a comprehensive examination of the estimation of a general 3-state CTMC model. Simulation studies are conducted to analyze longitudinal data with a trinomial outcome using a CTMC with and without covariates. Measures of performance including bias, component-wise coverage probabilities, and joint coverage probabilities are calculated. An application is presented using Alzheimer's disease caregiver stress levels. Comparisons of joint and component-wise parameter estimates yield conflicting inferential results in simulations from models with and without covariates. In conclusion, caution should be taken when conducting simulation studies aiming to assess performance, and the choice of inference should properly reflect the purpose of the simulation.
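    The distinction between component-wise and joint coverage probabilities can be illustrated outside the CTMC setting. A minimal simulation sketch with a hypothetical 3-parameter model and Wald-type 95% intervals (not the paper's model; the truth, standard errors, and replicate count are all illustrative assumptions): component-wise coverage asks how often each interval covers its own parameter, while joint coverage asks how often all three cover simultaneously.

    ```python
    import random

    random.seed(0)
    true_theta = [0.5, -1.0, 2.0]   # hypothetical 3-parameter truth
    se = [0.1, 0.2, 0.15]           # assumed standard errors
    n_sims = 2000

    component_hits = [0, 0, 0]
    joint_hits = 0
    for _ in range(n_sims):
        # Simulate one replicate's estimates and its 95% Wald intervals.
        covered = []
        for k in range(3):
            est = random.gauss(true_theta[k], se[k])
            covered.append(est - 1.96 * se[k] <= true_theta[k] <= est + 1.96 * se[k])
        for k in range(3):
            component_hits[k] += covered[k]
        joint_hits += all(covered)   # joint: every component covered at once

    componentwise = [h / n_sims for h in component_hits]  # per-parameter coverage
    joint = joint_hits / n_sims                           # simultaneous coverage
    ```

    Joint coverage is necessarily no larger than the smallest component-wise coverage, which is why the two criteria can lead to conflicting inferential conclusions.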

  16. Selection bias and patterns of confounding in cohort studies: the case of the NINFEA web-based birth cohort.

    PubMed

    Pizzi, Costanza; De Stavola, Bianca L; Pearce, Neil; Lazzarato, Fulvio; Ghiotti, Paola; Merletti, Franco; Richiardi, Lorenzo

    2012-11-01

Several studies have examined the effects of sample selection on the exposure-outcome association estimates in cohort studies, but the reasons why this selection may induce bias have not been fully explored. The aim of this study was to investigate how sample selection of the web-based NINFEA birth cohort may change the confounding patterns present in the source population. The characteristics of the NINFEA participants (n=1105) were compared with those of the wider source population, the Piedmont Birth Registry (PBR) (n=36 092), and the association of two exposures (parity and educational level) with two outcomes (low birth weight and birth by caesarean section), while controlling for other risk factors, was studied. Specifically, the associations among measured risk factors within each dataset were examined and the exposure-outcome estimates compared in terms of relative ORs. The associations of educational level with the other risk factors (alcohol consumption, folic acid intake, maternal age, pregnancy weight gain, previous miscarriages) partly differed between PBR and NINFEA. This was not observed for parity. Overall, the exposure-outcome estimates derived from NINFEA differed only moderately from those obtained in PBR, with relative ORs ranging between 0.74 and 1.03. Sample selection in cohort studies may alter the confounding patterns originally present in the general population. However, this does not necessarily introduce selection bias in the exposure-outcome estimates, as sample selection may reduce some of the residual confounding present in the general population.

  17. Estimation of excitation forces for wave energy converters control using pressure measurements

    NASA Astrophysics Data System (ADS)

    Abdelkhalik, O.; Zou, S.; Robinett, R.; Bacelli, G.; Wilson, D.

    2017-08-01

    Most control algorithms of wave energy converters require prediction of wave elevation or excitation force for a short future horizon, to compute the control in an optimal sense. This paper presents an approach that requires the estimation of the excitation force and its derivatives at present time with no need for prediction. An extended Kalman filter is implemented to estimate the excitation force. The measurements in this approach are selected to be the pressures at discrete points on the buoy surface, in addition to the buoy heave position. The pressures on the buoy surface are more directly related to the excitation force on the buoy as opposed to wave elevation in front of the buoy. These pressure measurements are also more accurate and easier to obtain. A singular arc control is implemented to compute the steady-state control using the estimated excitation force. The estimated excitation force is expressed in the Laplace domain and substituted in the control, before the latter is transformed to the time domain. Numerical simulations are presented for a Bretschneider wave case study.

  18. The areal reduction factor: A new analytical expression for the Lazio Region in central Italy

    NASA Astrophysics Data System (ADS)

    Mineo, C.; Ridolfi, E.; Napolitano, F.; Russo, F.

    2018-05-01

For the study and modeling of hydrological phenomena, both in urban and rural areas, a proper estimation of the areal reduction factor (ARF) is crucial. In this paper, we estimated the ARF from observed rainfall data as the ratio between the average rainfall occurring in a specific area and the point rainfall. Then, we compared the obtained ARF values with some of the most widespread empirical approaches in the literature, which are used when rainfall observations are not available. Results highlight that the literature formulations can lead to a substantial over- or underestimation of the ARF estimated from observed data. These findings can have severe consequences, especially in the design of hydraulic structures where empirical formulations are extensively applied. The aim of this paper is to present a new analytical relationship with an explicit dependence on rainfall duration and area that can better represent the ARF-area trend over the study area. The analytical curve presented here can find an important application in estimating ARF values for design purposes. The test study area is the Lazio Region (central Italy).
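    The ARF definition used in the paper (areal average rainfall divided by point rainfall) can be sketched directly. The gauge values below are hypothetical, and the point rainfall is taken here as the maximum gauge reading for the event, one common convention:

    ```python
    def areal_reduction_factor(gauge_values, point_value):
        """ARF as defined in the abstract: the ratio of the average
        rainfall over the area (approximated by the mean of the gauges
        covering it) to the point rainfall for the same duration."""
        return sum(gauge_values) / len(gauge_values) / point_value

    # Hypothetical event over four gauges (mm): ARF = mean / max = 30 / 40
    gauges = [40.0, 32.0, 28.0, 20.0]
    arf = areal_reduction_factor(gauges, max(gauges))
    ```

    By construction the ARF is at most 1 under this convention, and it decreases as the area (and hence the spatial variability captured by the gauges) grows.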

  19. A review of findings of a study of rocket based combined cycle engines applied to extensively axisymmetric single stage to orbit vehicles

    NASA Technical Reports Server (NTRS)

    Foster, Richard W.

    1992-01-01

    Extensively axisymmetric and non-axisymmetric Single Stage To Orbit (SSTO) vehicles are considered. The information is presented in viewgraph form and the following topics are presented: payload comparisons; payload as a percent of dry weight - a system hardware cost indicator; life cycle cost estimations; operations and support costs estimation; selected engine type; and rocket engine specific impulse calculation.

  20. Cerebral perfusion computed tomography deconvolution via structure tensor total variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, Dong; Zhang, Xinyu; Bian, Zhaoying, E-mail: zybian@smu.edu.cn, E-mail: jhma@smu.edu.cn

Purpose: Cerebral perfusion computed tomography (PCT) imaging as an accurate and fast acute ischemic stroke examination has been widely used in clinic. Meanwhile, a major drawback of PCT imaging is the high radiation dose due to its dynamic scan protocol. The purpose of this work is to develop a robust perfusion deconvolution approach via structure tensor total variation (STV) regularization (PD-STV) for estimating an accurate residue function in PCT imaging with the low-milliampere-seconds (low-mAs) data acquisition. Methods: Besides modeling the spatio-temporal structure information of PCT data, the STV regularization of the present PD-STV approach can utilize the higher order derivatives of the residue function to enhance denoising performance. To minimize the objective function, the authors propose an effective iterative algorithm with a shrinkage/thresholding scheme. A simulation study on a digital brain perfusion phantom and a clinical study on an old infarction patient were conducted to validate and evaluate the performance of the present PD-STV approach. Results: In the digital phantom study, visual inspection and quantitative metrics (i.e., the normalized mean square error, the peak signal-to-noise ratio, and the universal quality index) assessments demonstrated that the PD-STV approach outperformed other existing approaches in terms of the performance of noise-induced artifacts reduction and accurate perfusion hemodynamic maps (PHM) estimation. In the patient data study, the present PD-STV approach could yield accurate PHM estimation with several noticeable gains over other existing approaches in terms of visual inspection and correlation analysis. Conclusions: This study demonstrated the feasibility and efficacy of the present PD-STV approach in utilizing STV regularization to improve the accuracy of residue function estimation of cerebral PCT imaging in the case of low-mAs.

  1. A study of fault prediction and reliability assessment in the SEL environment

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Patnaik, Debabrata

    1986-01-01

    An empirical study on estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory environment (SEL) is presented. Fault estimation using empirical relationships and fault prediction using curve fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided, in order to make an early estimate of future debugging effort. This study concludes with the fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during development of software.

  2. An estimator for the standard deviation of a natural frequency. II.

    NASA Technical Reports Server (NTRS)

    Schiff, A. J.; Bogdanoff, J. L.

    1971-01-01

    A method has been presented for estimating the variability of a system's natural frequencies arising from the variability of the system's parameters. The only information required to obtain the estimates is the member variability, in the form of second-order properties, and the natural frequencies and mode shapes of the mean system. It has also been established for the systems studied by means of Monte Carlo estimates that the specification of second-order properties is an adequate description of member variability.

  3. Estimate of standard deviation for a log-transformed variable using arithmetic means and standard deviations.

    PubMed

    Quan, Hui; Zhang, Ji

    2003-09-15

Analyses of study variables are frequently based on log transformations. To calculate the power for detecting the between-treatment difference on the log scale, we need an estimate of the standard deviation of the log-transformed variable. However, in many situations a literature search only provides the arithmetic means and the corresponding standard deviations. Without individual log-transformed data to directly calculate the sample standard deviation, we need alternative methods to estimate it. This paper presents methods for estimating and constructing confidence intervals for the standard deviation of a log-transformed variable given the mean and standard deviation of the untransformed variable. It also presents methods for estimating the standard deviation of change from baseline on the log scale given the means and standard deviations of the untransformed baseline value, on-treatment value and change from baseline. Simulations and examples are provided to assess the performance of these estimates. Copyright 2003 John Wiley & Sons, Ltd.
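    Under a lognormal assumption, the standard deviation of the log-transformed variable follows from the arithmetic mean and standard deviation via a standard method-of-moments identity. A minimal sketch of that identity (one natural-log-scale estimate of the kind the paper evaluates; the paper's own estimators and confidence intervals are not reproduced here):

    ```python
    import math

    def lognormal_log_sd(mean, sd):
        """Standard deviation of ln(X) given the arithmetic mean and SD
        of X, assuming X is lognormal (method-of-moments identity):
            sd_log = sqrt(ln(1 + (sd/mean)^2))
        Hypothetical inputs; the identity depends only on the
        coefficient of variation sd/mean."""
        return math.sqrt(math.log(1.0 + (sd / mean) ** 2))
    ```

    For example, an arithmetic mean of 10 with standard deviation 5 (coefficient of variation 0.5) gives a log-scale standard deviation of sqrt(ln 1.25), roughly 0.47.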

  4. Estimating discharge measurement uncertainty using the interpolated variance estimator

    USGS Publications Warehouse

    Cohn, T.; Kiang, J.; Mason, R.

    2012-01-01

    Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.

  5. Space tug economic analysis study. Volume 3: Cost estimates

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Cost estimates for the space tug operation are presented. The subjects discussed are: (1) research and development costs, (2) investment costs, (3) operations costs, and (4) funding requirements. The emphasis is placed on the single stage tug configuration using various types of liquid propellants.

  6. Predicting Costs of Eastern National Forest Wildernesses.

    ERIC Educational Resources Information Center

    Guldin, Richard W.

    1981-01-01

    A method for estimating the total direct social costs for proposed wilderness areas is presented. A cost framework is constructed and equations are developed for cost components. To illustrate the study's method, social costs are estimated for a proposed wilderness area in New England. (Author/JN)

  7. Estimation of factors from natural and anthropogenic radioactivity present in the surface soil and comparison with DCF values.

    PubMed

    Ranade, A K; Pandey, M; Datta, D

    2013-01-01

A study was conducted to evaluate the absorbed rate coefficient of (238)U, (232)Th, (40)K and (137)Cs present in soil. A total of 31 soil samples and the corresponding terrestrial dose rates at 1 m from different locations were taken around the Anushaktinagar region, where the lithology is dominated by red soil. A linear regression model was developed for the estimation of these factors. The estimated coefficients (nGy h(-1) per Bq kg(-1)) were 0.454, 0.586, 0.035 and 0.392, respectively. The factors calculated were in good agreement with the literature values.
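    With the estimated coefficients, the terrestrial dose rate implied by a soil sample's activity concentrations is a simple linear combination. A minimal sketch using the coefficients reported above, with hypothetical activity values:

    ```python
    def terrestrial_dose_rate(act_u238, act_th232, act_k40, act_cs137):
        """Absorbed dose rate in air (nGy/h) at 1 m from soil activity
        concentrations (Bq/kg), using the coefficients estimated in the
        abstract: 0.454, 0.586, 0.035 and 0.392 nGy/h per Bq/kg for
        238U, 232Th, 40K and 137Cs respectively."""
        return (0.454 * act_u238 + 0.586 * act_th232
                + 0.035 * act_k40 + 0.392 * act_cs137)
    ```

    For instance, a soil with 10 Bq/kg each of 238U and 232Th, 100 Bq/kg of 40K and 1 Bq/kg of 137Cs implies about 14.3 nGy/h.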

  8. Preliminary Findings of Serum Creatinine and Estimated Glomerular Filtration Rate (eGFR) in Adolescents with Intellectual Disabilities

    ERIC Educational Resources Information Center

    Lin, Jin-Ding; Lin, Lan-Ping; Hsieh, Molly; Lin, Pei-Ying

    2010-01-01

The present study aimed to describe the kidney function profile--serum creatinine and estimated glomerular filtration rate (eGFR), and to examine the relationships of predisposing factors to abnormal serum creatinine in people with intellectual disabilities (ID). Data were collected by a cross-sectional study of 827 adolescents aged 15-18 years…

  9. Regression analysis of longitudinal data with correlated censoring and observation times.

    PubMed

    Li, Yang; He, Xin; Wang, Haiying; Sun, Jianguo

    2016-07-01

Longitudinal data occur in many fields such as medical follow-up studies that involve repeated measurements. For their analysis, most existing approaches assume that the observation or follow-up times are independent of the response process, either completely or given some covariates. In practice, it is apparent that this may not be true. In this paper, we present a joint analysis approach that allows for possible mutual correlations that can be characterized by time-dependent random effects. Estimating equations are developed for the parameter estimation, and the resulting estimators are shown to be consistent and asymptotically normal. The finite sample performance of the proposed estimators is assessed through a simulation study, and an illustrative example from a skin cancer study is provided.

  10. Resilient Distributed Estimation Through Adversary Detection

    NASA Astrophysics Data System (ADS)

    Chen, Yuan; Kar, Soummya; Moura, Jose M. F.

    2018-05-01

This paper studies resilient multi-agent distributed estimation of an unknown vector parameter when a subset of the agents is adversarial. We present and analyze a Flag Raising Distributed Estimator ($\mathcal{FRDE}$) that allows the agents under attack to perform accurate parameter estimation and detect the adversarial agents. The $\mathcal{FRDE}$ algorithm is a consensus+innovations estimator in which agents combine estimates of neighboring agents (consensus) with local sensing information (innovations). We establish that, under $\mathcal{FRDE}$, either the uncompromised agents' estimates are almost surely consistent or the uncompromised agents detect compromised agents if and only if the network of uncompromised agents is connected and globally observable. Numerical examples illustrate the performance of $\mathcal{FRDE}$.

  11. Regional Input-Output Tables and Trade Flows: an Integrated and Interregional Non-survey Approach

    DOE PAGES

    Boero, Riccardo; Edwards, Brian Keith; Rivera, Michael Kelly

    2017-03-20

Regional input–output tables and trade flows: an integrated and interregional non-survey approach. Regional Studies. Regional analyses require detailed and accurate information about dynamics happening within and between regional economies. However, regional input–output tables and trade flows are rarely observed and they must be estimated using up-to-date information. Common estimation approaches vary widely but consider tables and flows independently. Here, by using commonly used economic assumptions and available economic information, this paper presents a method that integrates the estimation of regional input–output tables and trade flows across regions. Examples of the method implementation are presented and compared with other approaches, suggesting that the integrated approach provides advantages in terms of estimation accuracy and analytical capabilities.

  12. Classification image analysis: estimation and statistical inference for two-alternative forced-choice experiments

    NASA Technical Reports Server (NTRS)

    Abbey, Craig K.; Eckstein, Miguel P.

    2002-01-01

    We consider estimation and statistical hypothesis testing on classification images obtained from the two-alternative forced-choice experimental paradigm. We begin with a probabilistic model of task performance for simple forced-choice detection and discrimination tasks. Particular attention is paid to general linear filter models because these models lead to a direct interpretation of the classification image as an estimate of the filter weights. We then describe an estimation procedure for obtaining classification images from observer data. A number of statistical tests are presented for testing various hypotheses from classification images based on some more compact set of features derived from them. As an example of how the methods we describe can be used, we present a case study investigating detection of a Gaussian bump profile.

  13. Regional Input-Output Tables and Trade Flows: an Integrated and Interregional Non-survey Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boero, Riccardo; Edwards, Brian Keith; Rivera, Michael Kelly

Regional input–output tables and trade flows: an integrated and interregional non-survey approach. Regional Studies. Regional analyses require detailed and accurate information about dynamics happening within and between regional economies. However, regional input–output tables and trade flows are rarely observed and they must be estimated using up-to-date information. Common estimation approaches vary widely but consider tables and flows independently. Here, by using commonly used economic assumptions and available economic information, this paper presents a method that integrates the estimation of regional input–output tables and trade flows across regions. Examples of the method implementation are presented and compared with other approaches, suggesting that the integrated approach provides advantages in terms of estimation accuracy and analytical capabilities.

  14. Private health insurance: New measures of a complex and changing industry

    PubMed Central

    Arnett, Ross H.; Trapnell, Gordon R.

    1984-01-01

    Private health insurance benefit payments are an integral component of estimates of national health expenditures. Recent analyses indicate that the insurance industry has undergone significant changes since the mid-1970's. As a result of these study findings and corresponding changes to estimating techniques, private health insurance estimates have been revised upward. This has had a major impact on national health expenditure estimates. This article describes the changes that have occurred in the industry, discusses some of the implications of those changes, presents a new methodology to measure private health insurance and the resulting estimate levels, and then examines concepts that underpin these estimates. PMID:10310950

  15. See food diet? Cultural differences in estimating fullness and intake as a function of plate size.

    PubMed

    Peng, Mei; Adam, Sarah; Hautus, Michael J; Shin, Myoungju; Duizer, Lisa M; Yan, Huiquan

    2017-10-01

    Previous research has suggested that manipulations of plate size can have a direct impact on perception of food intake, measured by estimated fullness and intake. The present study, involving 570 individuals across Canada, China, Korea, and New Zealand, is the first empirical study to investigate cultural influences on perception of food portion as a function of plate size. The respondents viewed photographs of ten culturally diverse dishes presented on large (27 cm) and small (23 cm) plates, and then rated their estimated usual intake and expected fullness after consuming the dish, using 100-point visual analog scales. The data were analysed with a mixed-model ANCOVA controlling for individual BMI, liking and familiarity of the presented food. The results showed clear cultural differences: (1) manipulations of the plate size had no effect on the expected fullness or the estimated intake of the Chinese and Korean respondents, as opposed to significant effects in Canadians and New Zealanders (p < 0.05); (2) Canadians (88.91 ± 0.42) and New Zealanders (90.37 ± 0.41) reported significantly higher estimated intake ratings than Chinese (80.80 ± 0.38) or Korean respondents (81.69 ± 0.44; p < 0.05), even though the estimated fullness ratings from the Western respondents were comparable to or even higher than those from the Asian respondents. Overall, these findings, from a cultural perspective, support the notion that estimation of fullness and intake is learned through dining experiences, and highlight the importance of considering eating environments and contexts when assessing individual behaviours relating to food intake. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Empirical estimation of present-day Antarctic glacial isostatic adjustment and ice mass change

    NASA Astrophysics Data System (ADS)

    Gunter, B. C.; Didova, O.; Riva, R. E. M.; Ligtenberg, S. R. M.; Lenaerts, J. T. M.; King, M. A.; van den Broeke, M. R.; Urban, T.

    2014-04-01

    This study explores an approach that simultaneously estimates Antarctic mass balance and glacial isostatic adjustment (GIA) through the combination of satellite gravity and altimetry data sets. The results improve upon previous efforts by incorporating a firn densification model to account for firn compaction and surface processes as well as reprocessed data sets over a slightly longer period of time. A range of different Gravity Recovery and Climate Experiment (GRACE) gravity models were evaluated and a new Ice, Cloud, and Land Elevation Satellite (ICESat) surface height trend map computed using an overlapping footprint approach. When the GIA models created from the combination approach were compared to in situ GPS ground station displacements, the vertical rates estimated showed consistently better agreement than recent conventional GIA models. The new empirically derived GIA rates suggest the presence of strong uplift in the Amundsen Sea sector in West Antarctica (WA) and the Philippi/Denman sectors, as well as subsidence in large parts of East Antarctica (EA). The total GIA-related mass change estimates for the entire Antarctic ice sheet ranged from 53 to 103 Gt yr-1, depending on the GRACE solution used, with an estimated uncertainty of ±40 Gt yr-1. Over the time frame February 2003-October 2009, the corresponding ice mass change showed an average value of -100 ± 44 Gt yr-1 (EA: 5 ± 38, WA: -105 ± 22), consistent with other recent estimates in the literature, with regional mass loss mostly concentrated in WA. The refined approach presented in this study shows the contribution that such data combinations can make towards improving estimates of present-day GIA and ice mass change, particularly with respect to determining more reliable uncertainties.

  17. Psychometric considerations in the measurement of event-related brain potentials: Guidelines for measurement and reporting.

    PubMed

    Clayson, Peter E; Miller, Gregory A

    2017-01-01

    Failing to consider psychometric issues related to reliability and validity, differential deficits, and statistical power potentially undermines the conclusions of a study. In research using event-related brain potentials (ERPs), numerous contextual factors (population sampled, task, data recording, analysis pipeline, etc.) can impact the reliability of ERP scores. The present review considers the contextual factors that influence ERP score reliability and the downstream effects that reliability has on statistical analyses. Given the context-dependent nature of ERPs, it is recommended that ERP score reliability be formally assessed on a study-by-study basis. Recommended guidelines for ERP studies include 1) reporting the threshold of acceptable reliability and reliability estimates for observed scores, 2) specifying the approach used to estimate reliability, and 3) justifying how trial-count minima were chosen. A reliability threshold for internal consistency of at least 0.70 is recommended, and a threshold of 0.80 is preferred. The review also advocates the use of generalizability theory for estimating score dependability (the generalizability theory analog to reliability) as an improvement on classical test theory reliability estimates, suggesting that the latter is less well suited to ERP research. To facilitate the calculation and reporting of dependability estimates, an open-source Matlab program, the ERP Reliability Analysis Toolbox, is presented. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Telomerecat: A ploidy-agnostic method for estimating telomere length from whole genome sequencing data.

    PubMed

    Farmery, James H R; Smith, Mike L; Lynch, Andy G

    2018-01-22

    Telomere length is a risk factor in disease and the dynamics of telomere length are crucial to our understanding of cell replication and vitality. The proliferation of whole genome sequencing represents an unprecedented opportunity to glean new insights into telomere biology on a previously unimaginable scale. To this end, a number of approaches for estimating telomere length from whole-genome sequencing data have been proposed. Here we present Telomerecat, a novel approach to the estimation of telomere length. Previous methods have been dependent on the number of telomeres present in a cell being known, which may be problematic when analysing aneuploid cancer data and non-human samples. Telomerecat is designed to be agnostic to the number of telomeres present, making it well suited to estimating telomere length in cancer studies. Telomerecat also accounts for interstitial telomeric reads and presents a novel approach to dealing with sequencing errors. We show that Telomerecat performs well at telomere length estimation when compared to leading experimental and computational methods. Furthermore, we show that it detects expected patterns in longitudinal data, repeated measurements, and cross-species comparisons. We also apply the method to cancer cell data, uncovering an interesting relationship with the underlying telomerase genotype.

  19. Cost of stroke in Australia from a societal perspective: results from the North East Melbourne Stroke Incidence Study (NEMESIS).

    PubMed

    Dewey, H M; Thrift, A G; Mihalopoulos, C; Carter, R; Macdonell, R A; McNeil, J J; Donnan, G A

    2001-10-01

    Accurate information about resource use and costs of stroke is necessary for informed health service planning. The purpose of this study was to determine the patterns of resource use among stroke patients and to estimate the total costs (direct service use and indirect production losses) of stroke (excluding SAH) in Australia for 1997. An incidence-based cost-of-illness model was developed, incorporating data obtained from the North East Melbourne Stroke Incidence Study (NEMESIS). The costs of stroke during the first year after stroke and the present value of total lifetime costs of stroke were estimated. The total first-year costs of all first-ever-in-a-lifetime strokes (SAH excluded) that occurred in Australia during 1997 were estimated to be A$555 million (US$420 million), and the present value of lifetime costs was estimated to be A$1.3 billion (US$985 million). The average cost per case during the first 12 months and over a lifetime was A$18 956 (US$14 361) and A$44 428 (US$33 658), respectively. The most important categories of cost during the first year were acute hospitalization (A$154 million), inpatient rehabilitation (A$150 million), and nursing home care (A$63 million). The present value of lifetime indirect costs was estimated to be A$34 million. Similar to other studies, hospital and nursing home costs contributed most to the total cost of stroke (excluding SAH) in Australia. Inpatient rehabilitation accounted for approximately 27% of total first-year costs. Given the magnitude of these costs, investigation of the cost-effectiveness of rehabilitation services should become a priority in this community.

  20. Extension of the thermal porosimetry method to high gas pressure for nanoporosimetry estimation

    NASA Astrophysics Data System (ADS)

    Jannot, Y.; Degiovanni, A.; Camus, M.

    2018-04-01

    Standard pore size determination methods like mercury porosimetry, nitrogen sorption, microscopy, or X-ray tomography are not suited to highly porous, low density, and thus very fragile materials. For such materials, a method based on thermal characterization was developed in a previous study. This method has been used with air pressure varying from 10⁻¹ Pa to 10⁵ Pa for materials having a thermal conductivity below 0.05 W m-1 K-1 at atmospheric pressure. It enables the estimation of pore size distribution between 100 nm and 1 mm. In this paper, we present a new experimental device enabling thermal conductivity measurement under gas pressure up to 10⁶ Pa, which allows estimation of the volume fraction of pores having a 10 nm diameter. It is also demonstrated that the main thermal conductivity models (parallel, series, Maxwell, Bruggeman, self-consistent) lead to the same estimation of the pore size distribution as the extended parallel model (EPM) presented in this paper and then used to process the experimental data. Three materials with thermal conductivities at atmospheric pressure ranging from 0.014 W m-1 K-1 to 0.04 W m-1 K-1 are studied. The thermal conductivity measurement results obtained with the three materials are presented, and the corresponding pore size distributions between 10 nm and 1 mm are presented and discussed.

  1. Statistical Methodology for Assigning Emissions to Industries in the United States, Revised Estimates: 1970 to 1997 (2001)

    EPA Pesticide Factsheets

    This report presents the results of a study that develops a methodology to assign emissions to the manufacturing and nonmanufacturing industries that comprise the industrial sector of the EPA’s national emission estimates for 1970 to 1997.

  2. Program review presentation to Level 1, Interagency Coordination Committee

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Progress in the development of crop inventory technology is reported. Specific topics include the results of a thematic mapper analysis, variable selection studies/early season estimator improvements, the agricultural information system simulator, large unit proportion estimation, and development of common features for multi-satellite information extraction.

  3. The pack size effect: Influence on consumer perceptions of portion sizes.

    PubMed

    Hieke, Sophie; Palascha, Aikaterini; Jola, Corinne; Wills, Josephine; Raats, Monique M

    2016-01-01

    Larger portions as well as larger packs can lead to larger prospective consumption estimates, larger servings and increased consumption, described as 'portion-size effects' and 'pack size effects'. Although related, the effects of pack sizes on portion estimates have received less attention. While it is not possible to generalize consumer behaviour across cultures, external cues taken from pack size may affect us all. We thus examined whether pack sizes influence portion size estimates across cultures, leading to a general 'pack size effect'. We compared portion size estimates based on digital presentations of different product pack sizes of solid and liquid products. The study with 13,177 participants across six European countries consisted of three parts. Parts 1 and 2 asked participants to indicate the number of portions present in a combined photographic and text-based description of different pack sizes. The estimated portion size was calculated as the quotient of the content weight or volume of the food presented and the number of stated portions. In Part 3, participants stated the number of food items that make up a portion when presented with packs of food containing either a small or a large number of items. The estimated portion size was calculated as the item weight times the item number. For all three parts and across all countries, we found that participants' portion estimates were based on larger portions for larger packs compared to smaller packs (Parts 1 and 2) as well as more items to make up a portion (Part 3); hence, portions were stated to be larger in all cases. Considering that the larger estimated portions are likely to be consumed, there are implications for energy intake and weight status. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Multiplication factor versus regression analysis in stature estimation from hand and foot dimensions.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Sharma, Abhilasha

    2012-05-01

    Estimation of stature is an important parameter in identification of human remains in forensic examinations. The present study aims to compare the reliability and accuracy of stature estimation and to demonstrate the variability between estimated stature and actual stature using multiplication factor and regression analysis methods. The study is based on a sample of 246 subjects (123 males and 123 females) from North India aged between 17 and 20 years. Four anthropometric measurements (hand length, hand breadth, foot length and foot breadth), taken on the left side in each subject, were included in the study. Stature was measured using standard anthropometric techniques. Multiplication factors were calculated and linear regression models were derived for estimation of stature from hand and foot dimensions. The derived multiplication factors and regression formulae were applied to the hand and foot measurements in the study sample. The estimated stature from the multiplication factors and regression analysis was compared with the actual stature to find the error in estimated stature. The results indicate that the range of error in estimation of stature from the regression analysis method is less than that of the multiplication factor method, thus confirming that regression analysis is better than multiplication factor analysis for stature estimation. Copyright © 2012 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
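
    The contrast the record above draws can be illustrated on synthetic data (invented here, not the North Indian sample): the multiplication-factor method scales the measurement by the mean stature-to-measurement ratio, while regression also fits an intercept, so the ratio method is biased whenever the true intercept is nonzero.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # hypothetical foot lengths (cm) and statures (cm) with a nonzero intercept
    foot = rng.normal(24.0, 1.2, 200)
    stature = 60.0 + 4.2 * foot + rng.normal(0.0, 3.0, 200)

    # multiplication-factor method: stature ~ mean(stature / foot) * foot
    mf = np.mean(stature / foot)
    est_mf = mf * foot

    # linear regression method: stature ~ a + b * foot
    b, a = np.polyfit(foot, stature, 1)
    est_reg = a + b * foot

    err_mf = np.mean(np.abs(stature - est_mf))    # mean absolute error, ratio method
    err_reg = np.mean(np.abs(stature - est_reg))  # mean absolute error, regression
    ```

    On data like these the regression error comes out smaller, mirroring the study's conclusion that regression analysis outperforms multiplication factors for stature estimation.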

  5. Estimating Total-test Scores from Partial Scores in a Matrix Sampling Design.

    ERIC Educational Resources Information Center

    Sachar, Jane; Suppes, Patrick

    It is sometimes desirable to obtain an estimated total-test score for an individual who was administered only a subset of the items in a total test. The present study compared six methods, two of which utilize the content structure of items, to estimate total-test scores using 450 students in grades 3-5 and 60 items of the 110-item Stanford Mental…
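
    The simplest estimator of this kind just scales the observed proportion correct up to the full test length. The helper below is an illustrative baseline only; the six methods compared in the study, including the two that use item content structure, are more elaborate.

    ```python
    def proportional_total_score(partial_score, items_taken, items_total):
        """Estimate a total-test score by scaling the observed proportion
        correct to the full test length (a naive baseline estimator)."""
        if items_taken <= 0:
            raise ValueError("no items administered")
        return partial_score / items_taken * items_total

    # e.g. 40 items correct on a 60-item subset of a 110-item test
    est = proportional_total_score(40, 60, 110)   # ~ 73.3
    ```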

  6. A hierarchical model for spatial capture-recapture data

    USGS Publications Warehouse

    Royle, J. Andrew; Young, K.V.

    2008-01-01

    Estimating density is a fundamental objective of many animal population studies. Application of methods for estimating population size from ostensibly closed populations is widespread, but ineffective for estimating absolute density because most populations are subject to short-term movements or so-called temporary emigration. This phenomenon invalidates the resulting estimates because the effective sample area is unknown. A number of methods involving the adjustment of estimates based on heuristic considerations are in widespread use. In this paper, a hierarchical model of spatially indexed capture-recapture data is proposed for sampling based on area searches of spatial sample units subject to uniform sampling intensity. The hierarchical model contains explicit models for the distribution of individuals and their movements, in addition to an observation model that is conditional on the location of individuals during sampling. Bayesian analysis of the hierarchical model is achieved by the use of data augmentation, which allows for a straightforward implementation in the freely available software WinBUGS. We present results of a simulation study that was carried out to evaluate the operating characteristics of the Bayesian estimator under variable densities and movement patterns of individuals. An application of the model is presented for survey data on the flat-tailed horned lizard (Phrynosoma mcallii) in Arizona, USA.

  7. Last menstrual period provides the best estimate of gestation length for women in rural Guatemala.

    PubMed

    Neufeld, Lynnette M; Haas, Jere D; Grajéda, Ruben; Martorell, Reynaldo

    2006-07-01

    The accurate estimation of gestational age in field studies in rural areas of developing countries continues to present difficulties for researchers. Our objective was to determine the best method for gestational age estimation in rural Guatemala. Women of childbearing age from four communities in rural Guatemala were invited to participate in a longitudinal study. Gestational age at birth was determined by an early second trimester measure of biparietal diameter, last menstrual period (LMP), the Capurro neonatal examination and symphysis-fundus height (SFH) for 171 mother-infant pairs. Regression modelling was used to determine which method provided the best estimate of gestational age using ultrasound as the reference. Gestational age estimated by LMP was within ±14 days of the ultrasound estimate for 94% of the sample. LMP-estimated gestational age explained 46% of the variance in gestational age estimated by ultrasound whereas the neonatal examination explained only 20%. The results of this study suggest that, when trained field personnel assist women to recall their date of LMP, this date provides the best estimate of gestational age. SFH measured during the second trimester may provide a reasonable alternative when LMP is unavailable.

  8. Global optimization for motion estimation with applications to ultrasound videos of carotid artery plaques

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.

    2010-03-01

    Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound videos (US), where speckle noise levels can be significant. Motion estimation using optical flow models requires the modification of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.

  9. Magnitude and Frequency of Floods on Nontidal Streams in Delaware

    USGS Publications Warehouse

    Ries, Kernell G.; Dillow, Jonathan J.A.

    2006-01-01

    Reliable estimates of the magnitude and frequency of annual peak flows are required for the economical and safe design of transportation and water-conveyance structures. This report, done in cooperation with the Delaware Department of Transportation (DelDOT) and the Delaware Geological Survey (DGS), presents methods for estimating the magnitude and frequency of floods on nontidal streams in Delaware at locations where streamgaging stations monitor streamflow continuously and at ungaged sites. Methods are presented for estimating the magnitude of floods for recurrence intervals ranging from 2 through 500 years. These methods are applicable to watersheds exhibiting a full range of urban development conditions. The report also describes StreamStats, a web application that makes it easy to obtain flood-frequency estimates for user-selected locations on Delaware streams. Flood-frequency estimates for ungaged sites are obtained through a process known as regionalization, using statistical regression analysis, where information determined for a group of streamgaging stations within a region forms the basis for estimates for ungaged sites within the region. One hundred and sixteen streamgaging stations in and near Delaware with at least 10 years of non-regulated annual peak-flow data available were used in the regional analysis. Estimates for gaged sites are obtained by combining the station peak-flow statistics (mean, standard deviation, and skew) and peak-flow estimates with regional estimates of skew and flood-frequency magnitudes. Example flood-frequency estimate calculations using the methods presented in the report are given for: (1) ungaged sites, (2) gaged locations, (3) sites upstream or downstream from a gaged location, and (4) sites between gaged locations. Regional regression equations applicable to ungaged sites in the Piedmont and Coastal Plain Physiographic Provinces of Delaware are presented.
The equations incorporate drainage area, forest cover, impervious area, basin storage, housing density, soil type A, and mean basin slope as explanatory variables, and have average standard errors of prediction ranging from 28 to 72 percent. Additional regression equations that incorporate drainage area and housing density as explanatory variables are presented for use in defining the effects of urbanization on peak-flow estimates throughout Delaware for the 2-year through 500-year recurrence intervals, along with suggestions for their appropriate use in predicting development-affected peak flows. Additional topics associated with the analyses performed during the study are also discussed, including: (1) the availability and description of more than 30 basin and climatic characteristics considered during the development of the regional regression equations; (2) the treatment of increasing trends in the annual peak-flow series identified at 18 gaged sites, with respect to their relations with maximum 24-hour precipitation and housing density, and their use in the regional analysis; (3) calculation of the 90-percent confidence interval associated with peak-flow estimates from the regional regression equations; and (4) a comparison of flood-frequency estimates at gages used in a previous study, highlighting the effects of various improved analytical techniques.
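
    Regional regression equations of this kind are typically log-linear in the basin characteristics, so a peak-flow estimate is evaluated as a product of power terms. The sketch below uses invented coefficients and only two of the explanatory variables purely to show the form; the report's fitted equations differ.

    ```python
    def peak_flow_estimate(drainage_area, impervious_pct, a, b, c):
        """Evaluate a log-linear regional regression of the form
        Q = a * A**b * (IA + 1)**c (hypothetical coefficients, not the
        report's fitted values)."""
        return a * drainage_area ** b * (impervious_pct + 1.0) ** c

    # hypothetical 100-year peak flow for a 25 mi^2 basin, 10% impervious
    q100 = peak_flow_estimate(25.0, 10.0, a=120.0, b=0.65, c=0.4)
    ```

    Fitting in log space keeps predictions positive and makes the prediction error multiplicative, which is why reports of this kind quote average standard errors of prediction in percent.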

  10. Perceived risk of tamoxifen side effects: a study of the use of absolute frequencies or frequency bands, with or without verbal descriptors.

    PubMed

    Knapp, Peter; Gardner, Peter H; Raynor, David K; Woolf, Elizabeth; McMillan, Brian

    2010-05-01

    To investigate the effectiveness of presenting medicine side effect risk information in different forms, including that proposed by UK guidelines [1: Medicines and Healthcare products Regulatory Agency. Always read the leaflet: getting the best information with every medicine. (Report of the Committee on Safety of Medicines Working Group on Patient Information). London: The Stationery Office, 2005]. 134 Cancer Research UK (CRUK) website users were recruited via a 'pop-up'. Using a 2×2 factorial design, participants were randomly allocated to one of four conditions and asked to: imagine they had to take tamoxifen, estimate the risks of 4 side effects, and indicate a presentation mode preference. Those presented with absolute frequencies demonstrated greater accuracy in estimating 2 of 4 side effects, and of any side effect occurring, than those presented with frequency bands. Those presented with combined descriptors were more accurate at estimating the risk of pulmonary embolism than those presented with numeric descriptors only. Absolute frequencies outperform frequency bands when presenting side effect risk information. However, presenting such exact frequencies for every side effect may be much less digestible than all side effects listed under 5 frequency bands. Combined numerical and verbal descriptors may be better than numeric-only descriptors when describing infrequent side effects. Information about side effects should be presented in ways that patients prefer, and which result in the most accurate risk estimates. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.

  11. Hazard Function Estimation with Cause-of-Death Data Missing at Random.

    PubMed

    Wang, Qihua; Dinse, Gregg E; Liu, Chunling

    2012-04-01

    Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.
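
    For intuition, a complete-data version of such a kernel hazard estimator smooths the Nelson-Aalen increments (events divided by the number at risk). The sketch below simulates exponentially distributed deaths with random censoring and evaluates the smoothed hazard at one point; it is a baseline only, since the paper's three estimators further handle censoring indicators that are missing at random.

    ```python
    import numpy as np

    def kernel_hazard(t, times, events, bandwidth):
        """Kernel-smoothed hazard estimate: sum of Epanechnikov-weighted
        Nelson-Aalen increments d_i / Y_i, divided by the bandwidth."""
        times = np.asarray(times, dtype=float)
        events = np.asarray(events, dtype=int)
        order = np.argsort(times)
        times, events = times[order], events[order]
        at_risk = len(times) - np.arange(len(times))   # Y_i at each sorted time
        u = (t - times) / bandwidth
        weights = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
        return np.sum(weights * events / at_risk) / bandwidth

    rng = np.random.default_rng(1)
    death = rng.exponential(2.0, 500)       # true constant hazard 0.5
    censor = rng.exponential(6.0, 500)      # independent random censoring
    observed = np.minimum(death, censor)
    delta = (death <= censor).astype(int)   # 1 = death observed
    lam = kernel_hazard(1.0, observed, delta, bandwidth=0.5)
    ```

    With a constant true hazard of 0.5, the estimate at t = 1 should land near that value; the estimators in the paper replace the fully observed indicator delta with regression-surrogate, imputation, or inverse-probability-weighted versions.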

  12. Medical care at the Sweetwaters Music Festival.

    PubMed

    Yates, K M; Hazell, W C; Schweder, L

    2001-04-13

    To describe medical cover and medical presentations at the four-day 1999 Sweetwaters Music Festival, and make comparisons with other festivals. All medical contacts were counted, and patients presenting to the medical tent were included in the study. Case records were studied to determine demographic data, nature of complaint, treatment and disposition. A Medline literature search was performed to obtain information on other festivals. There were 2,231 medical contacts overall (8.9% of estimated attendees) and 217 presentations to the medical tent (0.9% of estimated attendees). 53% of patients presenting to the medical tent were men and the mean patient age was 25 years. Lacerations (16%), intoxication (13%), local infections (12%) and soft tissue injuries (9%) were the most common problems. There were no deaths or cardiac arrests. Problems encountered were similar to other music festivals, with minor injuries predominant.

  13. The effect of osteoarthritis definition on prevalence and incidence estimates: a systematic review.

    PubMed

    Pereira, D; Peleteiro, B; Araújo, J; Branco, J; Santos, R A; Ramos, E

    2011-11-01

    To understand the differences in prevalence and incidence estimates of osteoarthritis (OA), according to case definition, in knee, hip and hand joints. A systematic review was carried out in the PUBMED and SCOPUS databases covering the publication period from January 1995 to February 2011. We attempted to summarise data on the incidence and prevalence of OA according to different methods of assessment: self-reported, radiographic and symptomatic OA (clinical plus radiographic). Prevalence estimates were combined through meta-analysis and between-study heterogeneity was quantified. Seventy-two papers were reviewed (nine on incidence and 63 on prevalence). Higher OA prevalences were seen for all age groups when a radiographic OA definition was used. Prevalence meta-analysis showed high heterogeneity between studies even within each specific joint and using the same OA definition. Although the knee is the most studied joint, the highest OA prevalence estimates were found in hand joints. OA of the knee tends to be more prevalent in women than in men independently of the OA definition used, but no gender differences were found in hip and hand OA. Insufficient data from incidence studies did not allow any comparison according to joint site or OA definition. The radiographic case definition of OA presented the highest prevalences. Within each joint site, self-reported and symptomatic OA definitions appear to present similar estimates. The high heterogeneity found in the studies limited further conclusions. Copyright © 2011 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  14. Constrained low-rank matrix estimation: phase transitions, approximate message passing and applications

    NASA Astrophysics Data System (ADS)

    Lesieur, Thibault; Krzakala, Florent; Zdeborová, Lenka

    2017-07-01

    This article is an extended version of previous work of Lesieur et al (2015 IEEE Int. Symp. on Information Theory Proc. pp 1635-9 and 2015 53rd Annual Allerton Conf. on Communication, Control and Computing (IEEE) pp 680-7) on low-rank matrix estimation in the presence of constraints on the factors into which the matrix is factorized. Low-rank matrix factorization is one of the basic methods used in data analysis for unsupervised learning of relevant features and other types of dimensionality reduction. We present a framework to study constrained low-rank matrix estimation for a general prior on the factors and a general output channel through which the matrix is observed. We draw a parallel with the study of vector-spin glass models, presenting a unifying way to study a number of problems considered previously in separate statistical physics works. We present a number of applications of the problem in data analysis. We derive in detail a general form of the low-rank approximate message passing (Low-RAMP) algorithm, which is known in statistical physics as the TAP equations. We thus unify the derivation of the TAP equations for models as different as the Sherrington-Kirkpatrick model, the restricted Boltzmann machine, the Hopfield model or vector (XY, Heisenberg and other) spin glasses. The state evolution of the Low-RAMP algorithm is also derived, and is equivalent to the replica symmetric solution for the large class of vector-spin glass models. In the section devoted to results, we study in detail the phase diagrams and phase transitions for Bayes-optimal inference in low-rank matrix estimation. We present a typology of phase transitions and their relation to the performance of algorithms such as Low-RAMP or commonly used spectral methods.

  15. Ultra wide-band localization and SLAM: a comparative study for mobile robot navigation.

    PubMed

    Segura, Marcelo J; Auat Cheein, Fernando A; Toibero, Juan M; Mut, Vicente; Carelli, Ricardo

    2011-01-01

In this work, a comparative study between an Ultra Wide-Band (UWB) localization system and a Simultaneous Localization and Mapping (SLAM) algorithm is presented. Due to its high bandwidth and short pulse lengths, UWB potentially allows great accuracy in range measurements based on Time of Arrival (TOA) estimation. SLAM algorithms recursively estimate the map of an environment and the pose (position and orientation) of a mobile robot within that environment. The comparative study presented here involves the performance analysis of implementing in parallel a UWB localization-based system and a SLAM algorithm on a mobile robot navigating within an environment. Real-time results as well as error analysis are also shown in this work.
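The TOA-based ranging mentioned above can be sketched as follows: TOA measurements are converted to ranges via the speed of light, and a position is recovered by linearized trilateration. The anchor layout and the linearization are illustrative, not the paper's setup:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def trilaterate(anchors, toas):
    """anchors: (k, 2) known positions; toas: (k,) one-way TOA in seconds."""
    r = C * np.asarray(toas)                    # range estimates from TOA
    a = np.asarray(anchors, dtype=float)
    # Subtract the first range equation to linearize:
    # 2 (a_i - a_0) . x = (r_0^2 - r_i^2) + (|a_i|^2 - |a_0|^2)
    A = 2.0 * (a[1:] - a[0])
    b = (r[0] ** 2 - r[1:] ** 2) + (np.sum(a[1:] ** 2, axis=1) - np.sum(a[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
toas = [np.linalg.norm(true_pos - np.array(p)) / C for p in anchors]
print(trilaterate(anchors, toas))   # ≈ [3. 4.]
```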

  16. A comparison of underwater hearing sensitivity in bottlenose dolphins (Tursiops truncatus) determined by electrophysiological and behavioral methods.

    PubMed

    Houser, Dorian S; Finneran, James J

    2006-09-01

    Variable stimulus presentation methods are used in auditory evoked potential (AEP) estimates of cetacean hearing sensitivity, each of which might affect stimulus reception and hearing threshold estimates. This study quantifies differences in underwater hearing thresholds obtained by AEP and behavioral means. For AEP estimates, a transducer embedded in a suction cup (jawphone) was coupled to the dolphin's lower jaw for stimulus presentation. Underwater AEP thresholds were obtained for three dolphins in San Diego Bay and for one dolphin in a quiet pool. Thresholds were estimated from the envelope following response at carrier frequencies ranging from 10 to 150 kHz. One animal, with an atypical audiogram, demonstrated significantly greater hearing loss in the right ear than in the left. Across test conditions, the range and average difference between AEP and behavioral threshold estimates were consistent with published comparisons between underwater behavioral and in-air AEP thresholds. AEP thresholds for one animal obtained in-air and in a quiet pool demonstrated a range of differences of -10 to 9 dB (mean = 3 dB). Results suggest that for the frequencies tested, the presentation of sound stimuli through a jawphone, underwater and in-air, results in acceptable differences to AEP threshold estimates.

  17. Comparison of volume estimation methods for pancreatic islet cells

    NASA Astrophysics Data System (ADS)

Dvořák, Jiří; Švihlík, Jan; Habart, David; Kybic, Jan

    2016-03-01

In this contribution we study different methods of automatic volume estimation for pancreatic islets, which can be used in the quality control step prior to islet transplantation. The total islet volume is an important criterion in the quality control. The individual islet volume distribution is also of interest, as it has been indicated that smaller islets can be more effective. A 2D image of a microscopy slice containing the islets is acquired. The inputs to the volume estimation methods are segmented images of individual islets; the segmentation step is not discussed here. We consider simple methods of volume estimation assuming that the islets have spherical or ellipsoidal shape. We also consider a local stereological method, namely the nucleator. The nucleator does not rely on any shape assumptions and provides unbiased estimates if isotropic sections through the islets are observed. We present a simulation study comparing the performance of the volume estimation methods in different scenarios and an experimental study comparing the methods on a real dataset.
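The sphere-shape assumption above amounts to converting each segmented islet's 2D area to an equivalent-circle diameter and treating the islet as a sphere of that diameter. A minimal sketch, with a made-up pixel size:

```python
import numpy as np

def sphere_volume_from_area(area_px, pixel_size_um=2.0):
    """Sphere-model volume estimate from a segmented 2D islet area.

    The pixel size is an illustrative value, not a calibration from the paper.
    """
    area_um2 = area_px * pixel_size_um ** 2
    d = 2.0 * np.sqrt(area_um2 / np.pi)       # equivalent-circle diameter
    return (np.pi / 6.0) * d ** 3             # sphere volume, um^3

# Sanity check: a circle of radius 50 um should yield the volume of a
# 50-um-radius sphere, (4/3) * pi * 50^3.
area_px = np.pi * 50.0 ** 2 / 2.0 ** 2        # circle area in 2-um pixels
vol = sphere_volume_from_area(area_px)
print(vol, 4.0 / 3.0 * np.pi * 50.0 ** 3)
```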

  18. Estimates of population change in selected species of tropical birds using mark-recapture data

    USGS Publications Warehouse

    Brawn, J.; Nichols, J.D.; Hines, J.E.; Nesbitt, J.

    2000-01-01

The population biology of tropical birds is known for only a small sample of species, especially in the Neotropics. Robust estimates of parameters such as survival rate and the finite rate of population change (λ) are crucial for conservation purposes and useful for studies of avian life histories. We used methods developed by Pradel (1996, Biometrics 52:703-709) to estimate λ for 10 species of tropical lowland forest birds using data from a long-term (> 20 yr) banding study in Panama. These species constitute an ecologically and phylogenetically diverse sample. We present these estimates and explore whether they are consistent with what we know from selected studies of banded birds and from 5 yr of estimating nesting success (i.e., an important component of λ). A major goal of these analyses is to assess whether the mark-recapture methods generate more reliable and reasonably precise estimates of population change than traditional methods that require more sampling effort.

  19. Estimating Upper Bounds for Occupancy and Number of Manatees in Areas Potentially Affected by Oil from the Deepwater Horizon Oil Spill

    PubMed Central

    Martin, Julien; Edwards, Holly H.; Bled, Florent; Fonnesbeck, Christopher J.; Dupuis, Jérôme A.; Gardner, Beth; Koslovsky, Stacie M.; Aven, Allen M.; Ward-Geiger, Leslie I.; Carmichael, Ruth H.; Fagan, Daniel E.; Ross, Monica A.; Reinert, Thomas R.

    2014-01-01

The explosion of the Deepwater Horizon drilling platform created the largest marine oil spill in U.S. history. As part of the Natural Resource Damage Assessment process, we applied an innovative modeling approach to obtain upper estimates for occupancy and for number of manatees in areas potentially affected by the oil spill. Our data consisted of aerial survey counts in waters of the Florida Panhandle, Alabama and Mississippi. Our method, which uses a Bayesian approach, allows for the propagation of uncertainty associated with estimates from empirical data and from the published literature. We illustrate that it is possible to derive estimates of occupancy rate and upper estimates of the number of manatees present at the time of sampling, even when no manatees were observed in our sampled plots during surveys. We estimated that fewer than 2.4% of potentially affected manatee habitat in our Florida study area may have been occupied by manatees. The upper estimate for the number of manatees present in potentially impacted areas (within our study area) was estimated with our model to be 74 (95% CI 46 to 107). This upper estimate for the number of manatees was conditioned on the upper 95% CI value of the occupancy rate. In other words, based on our estimates, it is highly probable that there were 107 or fewer manatees in our study area during the time of our surveys. Because our analyses apply to habitats considered likely manatee habitats, our inference is restricted to these sites and to the time frame of our surveys. Given that manatees may be hard to see during aerial surveys, it was important to account for imperfect detection. The approach that we described can be useful for determining the best allocation of resources for monitoring and conservation. PMID:24670971
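The key point above, that an occupancy upper bound is obtainable even with zero detections, can be illustrated with a much simpler model than the paper's hierarchical one. All numbers below are invented, and the per-survey detection probability is assumed known:

```python
import numpy as np

# Hedged illustration, not the paper's full model: a Bayesian upper bound
# on the occupancy rate when no animals are detected, accounting for
# imperfect detection. J plots are each surveyed k times.
psi = np.linspace(0.0, 1.0, 10_001)        # occupancy-rate grid, flat prior
J, k, p = 40, 3, 0.5                       # plots, surveys per plot, detection prob

# Likelihood of "no detections in k surveys" at one plot:
# either occupied but missed every time, or truly unoccupied.
lik_one = psi * (1.0 - p) ** k + (1.0 - psi)
post = lik_one ** J                        # all J plots had zero detections
post /= post.sum()

upper95 = psi[np.searchsorted(np.cumsum(post), 0.95)]
print(f"95% upper bound on occupancy rate: {upper95:.3f}")
```

Even with no sightings at all, the posterior rules out high occupancy rates, because forty plots of repeated misses would be improbable if occupancy were common; this is the same logic the paper propagates through its hierarchical model.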

  20. Hazardous materials incident costs : estimating the costs of the March 25, 2004, tanker truck crash in Bridgeport, Connecticut

    DOT National Transportation Integrated Search

    2004-08-01

    Significant variations in the reporting of hazardous materials incident costs are illustrated using a case study of the March 2004 crash of a fuel tanker truck on Interstate 95 in Bridgeport, Connecticut. Three separate cost estimates are presented, ...

  1. The Nexus between the Above-Average Effect and Cooperative Learning in the Classroom

    ERIC Educational Resources Information Center

    Breneiser, Jennifer E.; Monetti, David M.; Adams, Katharine S.

    2012-01-01

    The present study examines the above-average effect (Chambers & Windschitl, 2004; Moore & Small, 2007) in assessments of task performance. Participants completed self-estimates of performance and group estimates of performance, before and after completing a task. Participants completed a task individually and in groups. Groups were…

  2. Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation

    ERIC Educational Resources Information Center

    Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann

    2017-01-01

    This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
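The distinction the study builds on, variance as a descriptive statistic versus as an unbiased inferential estimator, is just a divisor of n versus n - 1:

```python
import statistics

# Descriptive (population) variance divides by n; the unbiased sample
# estimator divides by n - 1 to correct for estimating the mean.
data = [2, 4, 4, 4, 5, 5, 7, 9]
n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)

var_descriptive = ss / n          # describes the spread of these data
var_unbiased = ss / (n - 1)       # estimates the variance of the population

print(var_descriptive, var_unbiased)
print(statistics.pvariance(data), statistics.variance(data))  # same pair
```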

  3. Estimating forest characteristics using NAIP imagery and ArcObjects

    Treesearch

    John S Hogland; Nathaniel M. Anderson; Woodam Chung; Lucas Wells

    2014-01-01

    Detailed, accurate, efficient, and inexpensive methods of estimating basal area, trees, and aboveground biomass per acre across broad extents are needed to effectively manage forests. In this study we present such a methodology using readily available National Agriculture Imagery Program imagery, Forest Inventory Analysis samples, a two stage classification and...

  4. Parameter Estimates in Differential Equation Models for Population Growth

    ERIC Educational Resources Information Center

    Winkel, Brian J.

    2011-01-01

    We estimate the parameters present in several differential equation models of population growth, specifically logistic growth models and two-species competition models. We discuss student-evolved strategies and offer "Mathematica" code for a gradient search approach. We use historical (1930s) data from microbial studies of the Russian biologist,…
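The abstract offers Mathematica code for a gradient search; the same idea can be sketched in NumPy on synthetic data (true r = 0.5, K = 10, noise-free), fitting the closed-form logistic solution by finite-difference gradient descent. The learning rate, iteration count, and starting point are illustrative choices:

```python
import numpy as np

P0 = 1.0
t = np.arange(0.0, 11.0)

def logistic(t, r, K):
    # Closed-form solution of dP/dt = r P (1 - P/K) with P(0) = P0.
    return K / (1.0 + ((K - P0) / P0) * np.exp(-r * t))

y = logistic(t, 0.5, 10.0)                     # synthetic observations

def sse(theta):
    r, K = theta
    return np.sum((logistic(t, r, K) - y) ** 2)

theta = np.array([0.4, 9.0])                   # initial guess (r, K)
lr, eps = 4e-4, 1e-6
for _ in range(15000):                          # finite-difference gradient search
    grad = np.array([
        (sse(theta + [eps, 0]) - sse(theta - [eps, 0])) / (2 * eps),
        (sse(theta + [0, eps]) - sse(theta - [0, eps])) / (2 * eps),
    ])
    theta -= lr * grad
r_hat, K_hat = theta
print(round(r_hat, 3), round(K_hat, 3))        # ≈ 0.5 and 10.0
```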

  5. Modeling environmental noise exceedances using non-homogeneous Poisson processes.

    PubMed

    Guarnaccia, Claudio; Quartieri, Joseph; Barrios, Juan M; Rodrigues, Eliane R

    2014-10-01

In this work a non-homogeneous Poisson model is considered to study noise exposure. The Poisson process, counting the number of times that a sound level surpasses a threshold, is used to estimate the probability that a population is exposed to high levels of noise a certain number of times in a given time interval. The rate function of the Poisson process is assumed to be of Weibull type. The presented model is applied to community noise data from Messina, Sicily (Italy). Four sets of data are used to estimate the parameters involved in the model. Once the estimation and tuning are done, we present a way of estimating the probability that an environmental noise threshold is exceeded a certain number of times in a given time interval. This estimation can be very useful in the study of noise exposure of a population and also for predicting, given the current behavior of the data, the probability of occurrence of high levels of noise in the near future. One of the most important features of the model is that it implicitly takes into account different noise sources, which would need to be treated separately in the usual models.
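The counting model above can be sketched directly: for a non-homogeneous Poisson process whose Weibull-type rate integrates to a mean function Lambda(t) = (t / beta) ** alpha, the number of exceedances in (0, t] is Poisson with mean Lambda(t). The parameter values below are illustrative, not the fitted Messina values:

```python
import math

def prob_k_exceedances(k, t, alpha, beta):
    """P(exactly k threshold exceedances in (0, t]) under the NHPP."""
    lam = (t / beta) ** alpha                  # expected exceedances in (0, t]
    # Poisson pmf via logs to stay numerically stable for large k.
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

alpha, beta = 1.2, 2.0
probs = [prob_k_exceedances(k, 24.0, alpha, beta) for k in range(200)]
mean_count = sum(k * p for k, p in enumerate(probs))
print(round(mean_count, 3))                    # equals (24 / 2) ** 1.2
```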

  6. Nonparametric change point estimation for survival distributions with a partially constant hazard rate.

    PubMed

    Brazzale, Alessandra R; Küchenhoff, Helmut; Krügel, Stefanie; Schiergens, Tobias S; Trentzsch, Heiko; Hartl, Wolfgang

    2018-04-05

    We present a new method for estimating a change point in the hazard function of a survival distribution assuming a constant hazard rate after the change point and a decreasing hazard rate before the change point. Our method is based on fitting a stump regression to p values for testing hazard rates in small time intervals. We present three real data examples describing survival patterns of severely ill patients, whose excess mortality rates are known to persist far beyond hospital discharge. For designing survival studies in these patients and for the definition of hospital performance metrics (e.g. mortality), it is essential to define adequate and objective end points. The reliable estimation of a change point will help researchers to identify such end points. By precisely knowing this change point, clinicians can distinguish between the acute phase with high hazard (time elapsed after admission and before the change point was reached), and the chronic phase (time elapsed after the change point) in which hazard is fairly constant. We show in an extensive simulation study that maximum likelihood estimation is not robust in this setting, and we evaluate our new estimation strategy including bootstrap confidence intervals and finite sample bias correction.
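A toy version of the stump-regression idea described above: fit a two-level step function to a sequence of interval-wise p values and take the best split as the change-point estimate. The data are simulated, not the clinical series from the paper:

```python
import numpy as np

def stump_changepoint(t, p):
    """Return the split time minimizing squared error of a two-level fit."""
    best = (np.inf, None)
    for i in range(1, len(t)):                 # candidate split between i-1 and i
        left, right = p[:i], p[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, t[i])
    return best[1]

rng = np.random.default_rng(1)
t = np.arange(60.0)                            # e.g. days since admission
p = np.where(t < 25, 0.01, 0.6) + rng.normal(0, 0.05, t.size)
cp = stump_changepoint(t, p)
print(cp)                                      # ≈ 25
```

Small p values before the split correspond to the acute phase where the hazard differs from the later constant level; the paper's actual method adds bootstrap confidence intervals and bias correction on top of this fit.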

  7. The Burden of Tick-Borne Encephalitis in Disability-Adjusted Life Years (DALYs) for Slovenia

    PubMed Central

    Šmit, Renata; Postma, Maarten J.

    2015-01-01

Background Tick-borne encephalitis (TBE) presents an increasing burden in many parts of Europe, Asian Russia, Siberia, Asian former USSR and Far East. Incidence can be considered as one way to express the burden. A more comprehensive measure concerns disability-adjusted life years (DALYs), better characterizing the full burden of TBE. TBE burden in DALYs has not yet been estimated, nor has it been specified by the Global Burden of Disease (GBD) studies. Objective The purpose of the present study is to estimate the burden of TBE in Slovenia, expressed in DALYs, both from the population and individual perspectives. We discuss the impact of TBE burden on public health and potential strategies to reduce this burden in Slovenia. Methods The burden of TBE is estimated by using the updated DALYs' methodology first introduced in the GBD project. The DALYs' calculations are based on the health outcomes of the natural course of the disease being modelled. Corrections for under-reporting and under-ascertainment are applied. The impact of uncertainty in parameters in the model was assessed using sensitivity analyses. Results From the population perspective, total DALYs amount to 3,450 (167.8 per 100,000 population), while from the individual perspective they amount to 3.1 per case in 2011. Notably, the consequences of TBE present a larger burden than TBE itself. Conclusions TBE presents a relatively high burden expressed in DALYs compared with estimates for other infectious diseases from the GBD 2010 study for Slovenia. Raising awareness and increasing vaccination coverage are needed to reduce TBE and its consequences. PMID:26672751
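The DALY arithmetic underlying such estimates is a simple sum of years of life lost (YLL) and years lived with disability (YLD). All numbers below are invented for illustration, not the Slovenian TBE inputs:

```python
# DALY = YLL + YLD, where
#   YLL = deaths x standard life expectancy at the age of death,
#   YLD = cases x disability weight x average duration of disability.
deaths, life_expectancy_at_death = 2, 30.0           # years lost per death
cases, disability_weight, duration = 300, 0.13, 0.5  # illustrative inputs

yll = deaths * life_expectancy_at_death
yld = cases * disability_weight * duration
dalys = yll + yld
print(yll, yld, dalys)
```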

  8. Estimating flow duration curve in the humid tropics: a disaggregation approach in Hawaiian catchments

    NASA Astrophysics Data System (ADS)

    Chris, Leong; Yoshiyuki, Yokoo

    2017-04-01

Islands, which are concentrated in developing countries, often have poor hydrological research data, which contributes to stress on hydrological resources through unmonitored human influence and neglect. As island hydrology is a relatively young field, there is a need to understand these stresses and influences through building-block research specifically targeting islands. The flow duration curve (FDC) is a simple starting hydrological tool that can be used in initial studies of islands. This study disaggregates the FDC into three sections (top, middle and bottom), and in each section runoff is estimated with simple hydrological models. The study is based on the Hawaiian Islands, toward estimating runoff in ungauged island catchments in the humid tropics. Runoff estimation in the top and middle sections uses the Curve Number (CN) method and the Regime Curve (RC), respectively. The bottom section is presented in a separate study. The results showed that for the majority of the catchments the RC can be used for estimations in the middle section of the FDC. They also showed that in order for the CN method to make stable estimations, it had to be calibrated. This study identifies simple methodologies that can be useful for making runoff estimations in ungauged island catchments.
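The FDC itself is straightforward to construct from gauged flows: sort the observations in descending order and attach an exceedance probability to each, here with the Weibull plotting position m / (n + 1). The flow values are invented:

```python
import numpy as np

def flow_duration_curve(q):
    """Return (exceedance probability, sorted flows) for observed flows q."""
    q = np.sort(np.asarray(q, dtype=float))[::-1]   # largest flow first
    n = q.size
    exceedance = np.arange(1, n + 1) / (n + 1)      # P(flow >= q[m])
    return exceedance, q

flows = [12.0, 3.0, 7.0, 1.0, 30.0, 5.0, 2.0, 9.0, 4.0]  # invented daily flows
p, q = flow_duration_curve(flows)
print(q[0], p[0])   # highest flow and its exceedance probability
```

The top, middle, and bottom sections the study models separately are simply ranges of this exceedance axis (low, intermediate, and high exceedance probabilities).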

  9. A Pilot Study to Evaluate California's Fossil Fuel CO2 Emissions Using Atmospheric Observations

    NASA Astrophysics Data System (ADS)

    Graven, H. D.; Fischer, M. L.; Lueker, T.; Guilderson, T.; Brophy, K. J.; Keeling, R. F.; Arnold, T.; Bambha, R.; Callahan, W.; Campbell, J. E.; Cui, X.; Frankenberg, C.; Hsu, Y.; Iraci, L. T.; Jeong, S.; Kim, J.; LaFranchi, B. W.; Lehman, S.; Manning, A.; Michelsen, H. A.; Miller, J. B.; Newman, S.; Paplawsky, B.; Parazoo, N.; Sloop, C.; Walker, S.; Whelan, M.; Wunch, D.

    2016-12-01

    Atmospheric CO2 concentration is influenced by human activities and by natural exchanges. Studies of CO2 fluxes using atmospheric CO2 measurements typically focus on natural exchanges and assume that CO2 emissions by fossil fuel combustion and cement production are well-known from inventory estimates. However, atmospheric observation-based or "top-down" studies could potentially provide independent methods for evaluating fossil fuel CO2 emissions, in support of policies to reduce greenhouse gas emissions and mitigate climate change. Observation-based estimates of fossil fuel-derived CO2 may also improve estimates of biospheric CO2 exchange, which could help to characterize carbon storage and climate change mitigation by terrestrial ecosystems. We have been developing a top-down framework for estimating fossil fuel CO2 emissions in California that uses atmospheric observations and modeling. California is implementing the "Global Warming Solutions Act of 2006" to reduce total greenhouse gas emissions to 1990 levels by 2020, and it has a diverse array of ecosystems that may serve as CO2 sources or sinks. We performed three month-long field campaigns in different seasons in 2014-15 to collect flask samples from a state-wide network of 10 towers. Using measurements of radiocarbon in CO2, we estimate the fossil fuel-derived CO2 present in the flask samples, relative to marine background air observed at coastal sites. Radiocarbon (14C) is not present in fossil fuel-derived CO2 because of radioactive decay over millions of years, so fossil fuel emissions cause a measurable decrease in the 14C/C ratio in atmospheric CO2. We compare the observations of fossil fuel-derived CO2 to simulations based on atmospheric modeling and published fossil fuel flux estimates, and adjust the fossil fuel flux estimates in a statistical inversion that takes account of several uncertainties. 
We will present the results of the top-down technique to estimate fossil fuel emissions for our field campaigns in California, and we will give an outlook for future development of the technique in California.
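The radiocarbon mass balance described above is often written in a simplified form in which the fossil end-member has Delta14C = -1000 permil (no 14C at all). This is a hedged sketch of that form; the numbers are illustrative, not campaign data:

```python
DELTA_FF = -1000.0   # permil: fossil carbon contains no radiocarbon

def ffco2(co2_obs_ppm, d14c_obs, d14c_background):
    """Fossil fuel-derived CO2 (ppm) from observed and background Delta14C."""
    return co2_obs_ppm * (d14c_background - d14c_obs) / (d14c_background - DELTA_FF)

# A ~2.8 permil depletion relative to background corresponds to roughly
# 1 ppm of fossil CO2 at ~400 ppm total.
print(round(ffco2(400.0, 17.2, 20.0), 2))
```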

  10. An Integrated Approach for Aircraft Engine Performance Estimation and Fault Diagnostics

    NASA Technical Reports Server (NTRS)

Simon, Donald L.; Armstrong, Jeffrey B.

    2012-01-01

    A Kalman filter-based approach for integrated on-line aircraft engine performance estimation and gas path fault diagnostics is presented. This technique is specifically designed for underdetermined estimation problems where there are more unknown system parameters representing deterioration and faults than available sensor measurements. A previously developed methodology is applied to optimally design a Kalman filter to estimate a vector of tuning parameters, appropriately sized to enable estimation. The estimated tuning parameters can then be transformed into a larger vector of health parameters representing system performance deterioration and fault effects. The results of this study show that basing fault isolation decisions solely on the estimated health parameter vector does not provide ideal results. Furthermore, expanding the number of the health parameters to address additional gas path faults causes a decrease in the estimation accuracy of those health parameters representative of turbomachinery performance deterioration. However, improved fault isolation performance is demonstrated through direct analysis of the estimated tuning parameters produced by the Kalman filter. This was found to provide equivalent or superior accuracy compared to the conventional fault isolation approach based on the analysis of sensed engine outputs, while simplifying online implementation requirements. Results from the application of these techniques to an aircraft engine simulation are presented and discussed.
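The building block of the approach above is the standard Kalman measurement update; the sketch below mimics the underdetermined shape of the problem (more unknown parameters than sensors) with invented matrices, not the engine model from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n_params, n_sensors = 5, 3                  # more unknowns than measurements
x = np.zeros(n_params)                      # estimated tuning parameters
P = np.eye(n_params)                        # parameter covariance (prior)
H = rng.normal(size=(n_sensors, n_params))  # sensor sensitivity matrix (invented)
R = 0.1 * np.eye(n_sensors)                 # measurement noise covariance

z = rng.normal(size=n_sensors)              # sensed engine outputs (synthetic)
S = H @ P @ H.T + R                         # innovation covariance
K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
x = x + K @ (z - H @ x)                     # updated parameter estimate
P = (np.eye(n_params) - K @ H) @ P          # updated covariance
print(np.trace(P))                          # strictly below the prior trace of 5
```

Because the system is underdetermined, the update reduces uncertainty only in the directions the sensors observe; the paper's contribution is choosing a reduced tuning-parameter vector so that this update is well posed, then mapping it back to the larger health-parameter space.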

  11. A Hierarchical Model for Simultaneous Detection and Estimation in Multi-subject fMRI Studies

    PubMed Central

    Degras, David; Lindquist, Martin A.

    2014-01-01

In this paper we introduce a new hierarchical model for the simultaneous detection of brain activation and estimation of the shape of the hemodynamic response in multi-subject fMRI studies. The proposed approach circumvents a major stumbling block in standard multi-subject fMRI data analysis, in that it allows the shape of the hemodynamic response function to vary across regions and subjects, while still providing a straightforward way to estimate population-level activation. An efficient estimation algorithm is presented, as is an inferential framework that allows not only for tests of activation, but also for tests of deviations from some canonical shape. The model is validated through simulations and application to a multi-subject fMRI study of thermal pain. PMID:24793829

  12. Soil Moisture or Groundwater?

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Lawrence, D. M.

    2017-12-01

Partitioning the vertically integrated water storage variations estimated from GRACE satellite data into their component parts requires independent information. Land surface models, which simulate the transfer and storage of moisture and energy at the land surface, are often used to estimate the water storage variability of snow, surface water, and soil moisture. To obtain an estimate of changes in groundwater, the estimates of these storage components are removed from the GRACE data. Biases in the modeled water storage components are therefore present in the residual groundwater estimate. In this study, we examine how soil moisture variability, estimated using the Community Land Model (CLM), depends on the vertical structure of the model. We then explore the implications of this uncertainty in the context of estimating groundwater variations using GRACE data.
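The residual bookkeeping described above fits in one line: subtract the modeled storage components from the GRACE total water storage anomaly, and whatever remains is attributed to groundwater. The anomaly values (cm of equivalent water height) are invented:

```python
import numpy as np

tws = np.array([2.0, 1.5, -0.5, -2.0])            # GRACE total water storage anomaly
snow = np.array([1.0, 0.2, 0.0, 0.0])             # modeled snow water equivalent
surface_water = np.array([0.2, 0.1, 0.0, -0.1])   # modeled surface water
soil_moisture = np.array([0.5, 0.8, -0.2, -1.2])  # modeled soil moisture (e.g. CLM)

# Any bias in the modeled components maps directly into this residual,
# which is the sensitivity the study examines.
groundwater = tws - snow - surface_water - soil_moisture
print(groundwater)
```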

  13. Publication Bias in Meta-Analysis: Confidence Intervals for Rosenthal's Fail-Safe Number.

    PubMed

    Fragkos, Konstantinos C; Tsagris, Michail; Frangos, Christos C

    2014-01-01

The purpose of the present paper is to assess the efficacy of confidence intervals for Rosenthal's fail-safe number. Although Rosenthal's estimator is widely used by researchers, its statistical properties are largely unexplored. First, we develop statistical theory that allows us to produce confidence intervals for Rosenthal's fail-safe number, by discerning whether the number of studies analysed in a meta-analysis is fixed or random. Each case produces different variance estimators. For a given number of studies and a given distribution, we provide five variance estimators. Confidence intervals are examined with a normal approximation and a nonparametric bootstrap. The accuracy of the different confidence interval estimates is then tested by simulation under different distributional assumptions. The half-normal distribution variance estimator has the best probability coverage. Finally, we provide a table of lower confidence intervals for Rosenthal's estimator.
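Rosenthal's point estimator, the quantity the paper builds confidence intervals around, counts the unpublished null studies needed to drag a meta-analysis below significance at alpha = .05 one-tailed (z = 1.645). The per-study z values below are invented:

```python
import math

def fail_safe_n(z_values, z_alpha=1.645):
    """Rosenthal's fail-safe number: (sum z)^2 / z_alpha^2 - k."""
    k = len(z_values)
    return (sum(z_values) ** 2) / z_alpha ** 2 - k

z = [1.9, 2.4, 2.1, 1.7, 2.8]     # standard normal deviates from k = 5 studies
print(round(fail_safe_n(z), 1))
```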

  15. Detailed Project Report. Small Beach Erosion Control Project. Broadkill Beach, Delaware.

    DTIC Science & Technology

    1972-02-01

this study. TABLE 3: ESTIMATED PROPERTY VALUES IN BROADKILL BEACH (July 1971), present fair value: Beach Front Property (excluding beach area) $1,221,000; Entire Community $2,866,000. ...between the 14th and 50th year reflect only the land, houses and utilities (minus salvage value estimated at 25% of the fair value) that are located... The water entering Delaware Bay from Delaware River is polluted, but the degree of

  16. Machine detector interface studies: Layout and synchrotron radiation estimate in the future circular collider interaction region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boscolo, Manuela; Burkhardt, Helmut; Sullivan, Michael

The interaction region layout for the e+e− future circular collider FCC-ee is presented together with a preliminary estimate of the synchrotron radiation that affects this region. We describe in this paper the main guidelines of this design and the estimate of synchrotron radiation coming from the last bending magnets and from the final focus quadrupoles, with the software tools developed for this purpose. Here, the design follows the asymmetric optics layout as far as incoming bend radiation is concerned, with the maximum foreseen beam energy of 175 GeV, and we present a feasible initial layout with an indication of tolerable synchrotron radiation.

  18. Flood Scenario Simulation and Disaster Estimation of Ba-Ma Creek Watershed in Nantou County, Taiwan

    NASA Astrophysics Data System (ADS)

    Peng, S. H.; Hsu, Y. K.

    2018-04-01

The present study proposes several scenario simulations of flood disaster based on a historical flood event and planning requirements in the Ba-Ma Creek Watershed, located in Nantou County, Taiwan. The simulations were made using FLO-2D, a numerical model which can compute the velocity and depth of a flood over two-dimensional terrain. The calculated data were then utilized to estimate the possible damage incurred by the flood disaster, and the results can serve as references for disaster prevention. Moreover, the simulated results could be employed for flood disaster estimation using the method suggested by the Water Resources Agency of Taiwan. Finally, conclusions and perspectives are presented.

  19. Two Birds With One Stone: Estimating Population Vaccination Coverage From a Test-negative Vaccine Effectiveness Case-control Study.

    PubMed

    Doll, Margaret K; Morrison, Kathryn T; Buckeridge, David L; Quach, Caroline

    2016-10-15

Vaccination program evaluation includes assessment of vaccine uptake and direct vaccine effectiveness (VE). Although these are often examined separately, we propose a design to estimate rotavirus vaccination coverage using controls from a rotavirus VE test-negative case-control study, and we examine coverage following implementation of the Quebec, Canada, rotavirus vaccination program. We present our assumptions for using these data as a proxy for coverage in the general population, explore the effects of diagnostic accuracy on coverage estimates via simulations, and validate the estimates against an external source. We found 79.0% (95% confidence interval, 74.3%, 83.0%) ≥2-dose rotavirus coverage among participants eligible for publicly funded vaccination. No differences were detected between study and external coverage estimates. Simulations revealed minimal bias in estimates with high diagnostic sensitivity and specificity. We conclude that controls from a VE case-control study may be a valuable resource of coverage information when reasonable assumptions can be made about estimate generalizability; high rotavirus coverage demonstrates the success of the Quebec program.
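The core calculation above: test-negative controls stand in for the source population, so coverage is simply the vaccinated fraction of controls, here with a normal-approximation 95% CI. The counts are invented to illustrate the arithmetic, not the study's data:

```python
import math

vaccinated, controls = 395, 500       # invented counts among test-negative controls
p = vaccinated / controls             # coverage point estimate
se = math.sqrt(p * (1 - p) / controls)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"coverage {p:.1%} (95% CI {lo:.1%}, {hi:.1%})")
```

The validity of reading this fraction as population coverage rests on the assumptions the paper spells out, chiefly that test-negative controls are representative of the vaccination experience of the source population.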

  20. Estimating Rates of Motor Vehicle Crashes Using Medical Encounter Data: A Feasibility Study

    DTIC Science & Technology

    2015-11-05

used to develop more detailed predictive risk models as well as strategies for preventing specific types of MVCs. Systematic Review of Evidence... used to estimate rates of accident-related injuries more generally, but not with specific reference to MVCs. For the present report, rates of... precise rate estimates based on person-years rather than active duty strength, (e) multivariable effects of specific risk/protective factors after

  1. Using airborne lidar as a sampling tool for estimating forest biomass resources in the upper Tanana Valley of interior Alaska

    Treesearch

    Hans-Erik Andersen; Jacob Strunk; Hailemariam Temesgen

    2011-01-01

    Airborne laser scanning, collected in a sampling mode, has the potential to be a valuable tool for estimating the biomass resources available to support bioenergy production in rural communities of interior Alaska. In this study, we present a methodology for estimating forest biomass over a 201,226-ha area (of which 163,913 ha are forested) in the upper Tanana valley...

  2. Doubly robust nonparametric inference on the average treatment effect.

    PubMed

Benkeser, D; Carone, M; van der Laan, M J; Gilbert, P B

    2017-12-01

    Doubly robust estimators are widely used to draw inference about the average effect of a treatment. Such estimators are consistent for the effect of interest if either one of two nuisance parameters is consistently estimated. However, if flexible, data-adaptive estimators of these nuisance parameters are used, double robustness does not readily extend to inference. We present a general theoretical study of the behaviour of doubly robust estimators of an average treatment effect when one of the nuisance parameters is inconsistently estimated. We contrast different methods for constructing such estimators and investigate the extent to which they may be modified to also allow doubly robust inference. We find that while targeted minimum loss-based estimation can be used to solve this problem very naturally, common alternative frameworks appear to be inappropriate for this purpose. We provide a theoretical study and a numerical evaluation of the alternatives considered. Our simulations highlight the need for and usefulness of these approaches in practice, while our theoretical developments have broad implications for the construction of estimators that permit doubly robust inference in other problems.
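A minimal augmented-inverse-probability-weighted (AIPW) estimating equation, the canonical doubly robust estimator of the average treatment effect. For brevity the true nuisance functions stand in for fitted ones on simulated data; this shows the estimator's form, not the paper's inferential corrections:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-x))                  # propensity score P(A=1 | X)
a = rng.binomial(1, e)                         # treatment assignment
y = 2.0 * a + x + rng.normal(size=n)           # outcome; true ATE = 2

# Outcome regressions E[Y | A=1, X] and E[Y | A=0, X] (true forms here;
# in practice these would be fitted, possibly data-adaptively).
m1, m0 = 2.0 + x, x

# AIPW: consistent if either the outcome model or the propensity model
# is correct -- the double robustness property discussed above.
ate = np.mean(m1 - m0
              + a * (y - m1) / e
              - (1 - a) * (y - m0) / (1 - e))
print(round(ate, 2))                           # ≈ 2
```

The paper's point is precisely that when m1, m0, or e are estimated flexibly and one of them is inconsistent, valid confidence intervals around this point estimate require extra care, e.g. via targeted minimum loss-based estimation.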

  3. Incorporating partially identified sample segments into acreage estimation procedures: Estimates using only observations from the current year

    NASA Technical Reports Server (NTRS)

    Sielken, R. L., Jr. (Principal Investigator)

    1981-01-01

    Several methods of estimating individual crop acreages using a mixture of completely identified and partially identified (generic) segments from a single growing year are derived and discussed. A small Monte Carlo study of eight estimators is presented. The relative empirical behavior of these estimators is discussed, as are the effects of segment sample size and amount of partial identification. The principal recommendations are (1) not to exclude, but rather to incorporate, partially identified sample segments into the estimation procedure; (2) to avoid having a large percentage (say 80%) of only partially identified segments in the sample; and (3) to use the maximum likelihood estimator, although the weighted least squares estimator and the least squares ratio estimator both perform almost as well. Sets of spring small grains (North Dakota) data were used.

  4. Sequential Bayesian Filters for Estimating Time Series of Wrapped and Unwrapped Angles with Hyperparameter Estimation

    NASA Astrophysics Data System (ADS)

    Umehara, Hiroaki; Okada, Masato; Naruse, Yasushi

    2018-03-01

    The estimation of angular time series data is a widespread issue relating to various situations involving rotational motion and moving objects. There are two kinds of problem settings: the estimation of wrapped angles, which are principal values in a circular coordinate system (e.g., the direction of an object), and the estimation of unwrapped angles in an unbounded coordinate system, such as for the positioning and tracking of moving objects measured by the signal-wave phase. Wrapped angles have been estimated in previous studies by sequential Bayesian filtering; however, the hyperparameters that control the properties of the estimation model were given a priori rather than estimated. The present study establishes a procedure for estimating the hyperparameters from the observed angle data alone, working entirely within the framework of Bayesian inference via maximum likelihood estimation. Moreover, the filter model is modified to estimate unwrapped angles. It is proved that, without noise, our model reduces to the existing algorithm of Itoh's unwrapping transform, and it is numerically confirmed that our model extends that unwrapping estimation to the noisy case.
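    Itoh's unwrapping transform, to which the model reduces in the noise-free case, is simple to state: wrap each successive difference back to its principal value, then integrate from the first sample. A minimal sketch (the function name is ours):

    ```python
    import numpy as np

    def itoh_unwrap(wrapped):
        """Itoh's unwrapping transform: wrap successive differences into
        the principal interval, then cumulatively sum from the first sample."""
        d = np.diff(wrapped)
        d = (d + np.pi) % (2 * np.pi) - np.pi   # principal value of each step
        return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d)))

    # Noise-free ramp whose range exceeds 2*pi, so wrapping occurs
    t = np.linspace(0, 4 * np.pi, 200)
    true_angle = 1.5 * t
    wrapped = (true_angle + np.pi) % (2 * np.pi) - np.pi
    recovered = itoh_unwrap(wrapped)            # matches true_angle exactly
    ```

    Recovery is exact whenever every true step is smaller than π in magnitude; with noise that condition can fail, which is the regime the paper's Bayesian filter addresses.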

  5. Cost and schedule estimation study report

    NASA Technical Reports Server (NTRS)

    Condon, Steve; Regardie, Myrna; Stark, Mike; Waligora, Sharon

    1993-01-01

    This report describes the analysis performed and the findings of a study of the software development cost and schedule estimation models used by the Flight Dynamics Division (FDD), Goddard Space Flight Center. The study analyzes typical FDD projects, focusing primarily on those developed since 1982. The study reconfirms the standard SEL effort estimation model that is based on size adjusted for reuse; however, guidelines for the productivity and growth parameters in the baseline effort model have been updated. The study also produced a schedule prediction model based on empirical data that varies depending on application type. Models for the distribution of effort and schedule by life-cycle phase are also presented. Finally, this report explains how to use these models to plan SEL projects.

  6. Combining band recovery data and Pollock's robust design to model temporary and permanent emigration

    USGS Publications Warehouse

    Lindberg, M.S.; Kendall, W.L.; Hines, J.E.; Anderson, M.G.

    2001-01-01

    Capture-recapture models are widely used to estimate demographic parameters of marked populations. Recently, this statistical theory has been extended to modeling dispersal of open populations. Multistate models can be used to estimate movement probabilities among subdivided populations if multiple sites are sampled. Frequently, however, sampling is limited to a single site. Models described by Burnham (1993, in Marked Individuals in the Study of Bird Populations, 199-213), which combined open population capture-recapture and band-recovery models, can be used to estimate permanent emigration when sampling is limited to a single population. Similarly, Kendall, Nichols, and Hines (1997, Ecology 51, 563-578) developed models to estimate temporary emigration under Pollock's (1982, Journal of Wildlife Management 46, 757-760) robust design. We describe a likelihood-based approach to simultaneously estimate temporary and permanent emigration when sampling is limited to a single population. We use a sampling design that combines the robust design and recoveries of individuals obtained immediately following each sampling period. We present a general form for our model where temporary emigration is a first-order Markov process, and we discuss more restrictive models. We illustrate these models with analysis of data on marked Canvasback ducks. Our analysis indicates that probability of permanent emigration for adult female Canvasbacks was 0.193 (SE = 0.082) and that birds that were present at the study area in year i - 1 had a higher probability of presence in year i than birds that were not present in year i - 1.

  7. An integrated uncertainty analysis and data assimilation approach for improved streamflow predictions

    NASA Astrophysics Data System (ADS)

    Hogue, T. S.; He, M.; Franz, K. J.; Margulis, S. A.; Vrugt, J. A.

    2010-12-01

    The current study presents an integrated uncertainty analysis and data assimilation approach to improve streamflow predictions while simultaneously providing meaningful estimates of the associated uncertainty. Study models include the National Weather Service (NWS) operational snow model (SNOW17) and rainfall-runoff model (SAC-SMA). The proposed approach uses the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) to simultaneously estimate uncertainties in model parameters, forcing, and observations. An ensemble Kalman filter (EnKF) is configured with the DREAM-identified uncertainty structure and applied to assimilating snow water equivalent data into the SNOW17 model for improved snowmelt simulations. Snowmelt estimates then serve as input to the SAC-SMA model to provide streamflow predictions at the basin outlet. The robustness and usefulness of the approach are evaluated for a snow-dominated watershed in the northern Sierra Mountains. This presentation describes the implementation of DREAM and EnKF into the coupled SNOW17 and SAC-SMA models and summarizes study results and findings.
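    A generic perturbed-observation EnKF analysis step of the kind used to assimilate snow water equivalent can be sketched as follows. This is the textbook form, not the operational SNOW17 coupling; all shapes, names, and values are illustrative.

    ```python
    import numpy as np

    def enkf_update(X, y, H, R, rng):
        """Perturbed-observation EnKF analysis step.
        X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
        H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error cov."""
        n_ens = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
        HA = H @ A
        P_yy = HA @ HA.T / (n_ens - 1) + R              # innovation covariance
        K = (A @ HA.T / (n_ens - 1)) @ np.linalg.inv(P_yy)   # Kalman gain
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
        return X + K @ (Y - H @ X)                      # analysis ensemble

    # Toy usage: 500-member ensemble of a scalar state, one direct observation
    rng = np.random.default_rng(0)
    X = 2.0 + rng.standard_normal((1, 500))             # forecast ensemble
    Xa = enkf_update(X, np.array([1.0]), np.array([[1.0]]), np.array([[0.01]]), rng)
    # the analysis mean is pulled close to the observation (near 1.0)
    ```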

  8. Hazard Function Estimation with Cause-of-Death Data Missing at Random

    PubMed Central

    Wang, Qihua; Dinse, Gregg E.; Liu, Chunling

    2010-01-01

    Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data. PMID:22267874
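    An inverse-probability-weighted kernel hazard estimator in the spirit of the third proposal can be sketched as follows. This is our simplified rendering, not the authors' exact estimator: each death from the cause of interest whose indicator is observed contributes a Nelson-Aalen-type jump reweighted by the estimated observation probability, and the jumps are kernel-smoothed.

    ```python
    import numpy as np

    def epanechnikov(u):
        return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

    def ipw_kernel_hazard(t_grid, times, cause_death, observed, pi_hat, h):
        """times: event/censoring times; cause_death: 1 if death from the
        cause of interest; observed: 1 if the cause indicator was observed;
        pi_hat: estimated P(indicator observed | data); h: bandwidth."""
        order = np.argsort(times)
        times, cause_death = times[order], cause_death[order]
        observed, pi_hat = observed[order], pi_hat[order]
        n = len(times)
        at_risk = n - np.arange(n)                  # risk-set size at each T_(i)
        w = observed * cause_death / pi_hat         # IPW-corrected jump sizes
        return np.array([(epanechnikov((t - times) / h) / h * w / at_risk).sum()
                         for t in t_grid])

    # Sanity check: exponential deaths (true hazard 0.5), all causes observed
    rng = np.random.default_rng(2)
    T = rng.exponential(2.0, 5000)
    lam = ipw_kernel_hazard(np.array([1.0]), T,
                            np.ones(5000), np.ones(5000), np.ones(5000), 0.5)
    # lam[0] should be near the true hazard of 0.5
    ```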

  9. Estimation of source location and ground impedance using a hybrid multiple signal classification and Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung

    2016-07-01

    A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating ground impedance is examined in detail in the present study. A non-linear least square approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give the initial estimation of the source location, while the technique of forward backward spatial smoothing is adopted as a pre-processer of the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence reduces drastically the accuracy in estimating the source height. The further application of Levenberg-Marquardt method with the results from MUSIC as the initial inputs improves significantly the accuracy of source height estimation. The present proposed method provides effective and robust estimation of the ground surface impedance.
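    The MUSIC step that provides the initial source-location estimate can be sketched for a one-source uniform linear array. The array geometry, SNR, and search grid below are illustrative, and the forward-backward spatial-smoothing pre-processor is omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    M, N, d = 8, 400, 0.5                  # sensors, snapshots, spacing (wavelengths)
    theta_true = 20.0                      # source bearing, degrees

    def steering(theta_deg):
        """Narrowband far-field steering vector for a uniform linear array."""
        k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
        return np.exp(1j * k * np.arange(M))

    # Simulated snapshots: one complex Gaussian source plus sensor noise
    s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
    X = np.outer(steering(theta_true), s) + noise

    # MUSIC: steering vectors nearly orthogonal to the noise subspace give peaks
    R = X @ X.conj().T / N
    eigvals, eigvecs = np.linalg.eigh(R)   # ascending eigenvalues
    En = eigvecs[:, :-1]                   # noise subspace (one source assumed)
    grid = np.arange(-90, 90.25, 0.25)
    P = [1 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
    theta_hat = grid[int(np.argmax(P))]    # peak of the pseudospectrum
    ```

    In the paper this estimate seeds the Levenberg-Marquardt refinement, which handles the unknown ground impedance and the poorly resolved source height.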

  10. Low acuity and general practice-type presentations to emergency departments: a rural perspective.

    PubMed

    Allen, Penny; Cheek, Colleen; Foster, Simon; Ruigrok, Marielle; Wilson, Deborah; Shires, Lizzi

    2015-04-01

    To estimate the number of general practice (GP)-type patients attending a rural ED and provide a comparative rural estimate to a metropolitan study. Analysis of presentations to the two EDs in Northwest Tasmania from 1 January 2009 to 31 December 2013 using the Diagnosis, Sprivulis, Australian College of Emergency Medicine (ACEM) and the Australian Institute of Health and Welfare (AIHW) methods to estimate the number of GP-type presentations. There were 255,365 ED presentations in Northwest Tasmania during the study period. There were 86,973 GP-type presentations using the ACEM method, 142,006 using the AIHW method, 174,748 using the Diagnosis method and 28,922 low acuity patients identified using the Sprivulis method. The proportion of GP-type presentations identified using the four methods ranged from 15% to 69%. The results suggest that triage status and self-referral are not reliable indicators of low acuity in this rural area. In rural areas with a shortage of GPs, it is likely that many people appropriately self-refer to ED because they cannot access a GP. The results indicate that the ACEM method might be most useful for identifying GP-type patients in rural ED. However, this requires validation in other regions of Australia. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  11. Estimating the relative contributions of human withdrawals and climate variability to changes in groundwater

    NASA Astrophysics Data System (ADS)

    Swenson, S. C.; Lawrence, D. M.

    2014-12-01

    Estimating the relative contributions of human withdrawals and climate variability to changes in groundwater is a challenging task at present. One method that has been used recently is a model-data synthesis combining GRACE total water storage estimates with simulated water storage estimates from land surface models. In this method, water storage changes due to natural climate variations simulated by a model are removed from total water storage changes observed by GRACE; the residual is then interpreted as anthropogenic groundwater change. If the modeled water storage estimate contains systematic errors, these errors will also be present in the residual groundwater estimate. For example, simulations performed with the Community Land Model (CLM; the land component of the Community Earth System Model) generally show a weak (as much as 50% smaller) seasonal cycle of water storage in semi-arid regions when compared to GRACE satellite water storage estimates. This bias propagates into GRACE-CLM anthropogenic groundwater change estimates, which then exhibit unphysical seasonal variability. The CLM bias can be traced to the parameterization of soil evaporative resistance. Incorporating a new soil resistance parameterization in CLM greatly reduces the seasonal bias with respect to GRACE. In this study, we compare the improved CLM water storage estimates to GRACE and discuss the implications for estimates of anthropogenic groundwater withdrawal, showing examples for the Middle East and Southwestern United States.

  12. Wind power error estimation in resource assessments.

    PubMed

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves that were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the time for return on investment. The implementation of this method increases the reliability of techno-economic resource assessment studies.
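    The mechanism of first-order error propagation through a power curve can be illustrated with a toy curve (not one of the study's 28 Lagrange-fitted curves): dP ≈ P′(v)·dv, so the relative power error is the speed error scaled by the curve's local log-slope.

    ```python
    import numpy as np

    # Toy power curve (kW), for illustration only: cubic between cut-in
    # (3 m/s) and rated speed (12 m/s), flat at 2000 kW up to cut-out (25 m/s).
    def power(v):
        p = np.where((v >= 3) & (v < 12), 2000 * ((v - 3) / 9) ** 3, 0.0)
        return np.where((v >= 12) & (v <= 25), 2000.0, p)

    # First-order propagation of a wind-speed error dv through the curve:
    # dP ~ P'(v) * dv, with P'(v) from a central finite difference.
    v = np.array([5.0, 8.0, 10.0])
    dv = 0.10 * v                                   # 10% speed measurement error
    dPdv = (power(v + 1e-4) - power(v - 1e-4)) / 2e-4
    rel_err = dPdv * dv / power(v)                  # relative power error
    ```

    Pointwise sensitivities vary strongly along the curve (largest just above cut-in, zero on the rated plateau); an aggregate figure like the study's 5% reflects integration over the measured speed distribution and the flat rated-power region.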

  13. Wind Power Error Estimation in Resource Assessments

    PubMed Central

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves that were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the time for return on investment. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444

  14. Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals

    PubMed Central

    Zhao, Ziyue; Liu, Congfeng

    2014-01-01

    In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. First, an array signal model for multicomponent chirp signals is presented, and array processing is then applied in time-frequency analysis to mitigate cross-terms. Based on the results of the array processing, a Hough transform is performed and the estimate of the time-frequency signature is obtained. Subsequently, a subspace method for DOA estimation based on the STFD matrix is applied. Simulation results demonstrate the validity of the proposed method. PMID:27382610

  15. Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals.

    PubMed

    Zhao, Ziyue; Liu, Congfeng

    2014-01-01

    In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. First, an array signal model for multicomponent chirp signals is presented, and array processing is then applied in time-frequency analysis to mitigate cross-terms. Based on the results of the array processing, a Hough transform is performed and the estimate of the time-frequency signature is obtained. Subsequently, a subspace method for DOA estimation based on the STFD matrix is applied. Simulation results demonstrate the validity of the proposed method.

  16. A Study toward the Evaluation of ALOS Images for LAI Estimation in Rice Fields

    NASA Astrophysics Data System (ADS)

    Sharifi Hashjin, Sh.; Darvishzadeh, R.; Khandan, R.

    2013-10-01

    Satellite data play a key role in providing the information needed to expand and manage agricultural resources, including plant factors such as the Leaf Area Index (LAI). This paper studies the potential of spectral indices for estimating rice canopy LAI in Amol, one of the main centers of rice production in Iran. Rice was chosen for study because of its importance in the provision of food and calories for a major portion of the population. A field campaign was conducted when rice was at its maximum growth stage (late June), and two ALOS-AVNIR-2 satellite images, acquired concurrently with the field work, were used to derive vegetation indices. Regressions between the measured data and vegetation indices computed from different band combinations were evaluated to identify suitable indices, and the statistics and calculations for a suitable model are presented. After examination of the models, the results showed that RDVI and SAVI2, with determination coefficients of 0.12-0.59 and RMSEs of 0.24-0.62, are the more accurate indices for LAI estimation. The results of the present study demonstrate the potential of ALOS images for LAI estimation and their significant role in monitoring and managing the rice crop.
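    RDVI, one of the two indices the study found most accurate, is computed directly from red and near-infrared reflectances (the Roujean-Bréon form); the reflectance values below are illustrative.

    ```python
    import numpy as np

    # Renormalized Difference Vegetation Index: RDVI = (NIR - R) / sqrt(NIR + R)
    def rdvi(nir, red):
        nir, red = np.asarray(nir, float), np.asarray(red, float)
        return (nir - red) / np.sqrt(nir + red)

    # Typical vegetated-pixel reflectances (illustrative)
    index = rdvi(0.5, 0.1)
    ```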

  17. Probabilistic evaluation of earthquake detection and location capability for Illinois, Indiana, Kentucky, Ohio, and West Virginia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mauk, F.J.; Christensen, D.H.

    1980-09-01

    Probabilistic estimations of earthquake detection and location capabilities for the states of Illinois, Indiana, Kentucky, Ohio and West Virginia are presented in this document. The algorithm used in these epicentrality and minimum-magnitude estimations is a version of the program NETWORTH by Wirth, Blandford, and Husted (DARPA Order No. 2551, 1978) which was modified for local array evaluation at the University of Michigan Seismological Observatory. Estimations of earthquake detection capability for the years 1970 and 1980 are presented in four regional minimum m_b magnitude contour maps. Regional 90% confidence error ellipsoids are included for m_b magnitude events from 2.0 through 5.0 at 0.5 m_b unit increments. The close agreement between these predicted epicentral 90% confidence estimates and the calculated error ellipses associated with actual earthquakes within the studied region suggests that these error determinations can be used to estimate the reliability of epicenter location. 8 refs., 14 figs., 2 tabs.

  18. A study of methods to estimate debris flow velocity

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

    Debris flow velocities are commonly back-calculated from superelevation events which require subjective estimates of radii of curvature of bends in the debris flow channel or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities. © 2008 Springer-Verlag.
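    The superelevation back-calculation the authors critique is commonly the forced-vortex equation, v = sqrt(k·g·Rc·Δh/b), where Rc is the subjectively estimated bend radius and Δh the banking of the flow surface across the channel width b. The values below are illustrative and the correction factor k is taken as 1.

    ```python
    import math

    # Forced-vortex superelevation back-calculation (illustrative values):
    g = 9.81    # gravitational acceleration, m/s^2
    Rc = 30.0   # estimated radius of curvature of the bend, m (subjective)
    dh = 1.2    # superelevation of the flow surface across the bend, m
    b = 8.0     # flow width, m
    v = math.sqrt(g * Rc * dh / b)   # back-calculated velocity, m/s
    ```

    The sensitivity of v to Rc is exactly the difficulty the paper documents: halving or doubling the estimated radius changes the velocity by a factor of sqrt(2).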

  19. Emergency Physician Estimation of Blood Loss

    PubMed Central

    Ashburn, Jeffery C.; Harrison, Tamara; Ham, James J.; Strote, Jared

    2012-01-01

    Introduction: Emergency physicians (EP) frequently estimate blood loss, which can have implications for clinical care. The objectives of this study were to examine EP accuracy in estimating blood loss on different surfaces and compare attending physician and resident performance. Methods: A sample of 56 emergency department (ED) physicians (30 attending physicians and 26 residents) were asked to estimate the amount of moulage blood present in 4 scenarios: 500 mL spilled onto an ED cot; 25 mL spilled onto a 10-pack of 4 × 4-inch gauze; 100 mL on a T-shirt; and 150 mL in a commode filled with water. Standard estimate error (the absolute value of (estimated volume − actual volume)/actual volume × 100) was calculated for each estimate. Results: The mean standard error for all estimates was 116% with a range of 0% to 1233%. Only 8% of estimates were within 20% of the true value. Estimates were most accurate for the sheet scenario and worst for the commode scenario. Residents and attending physicians did not perform significantly differently (P > 0.05). Conclusion: Emergency department physicians do not estimate blood loss well in a variety of scenarios. Such estimates could potentially be misleading if used in clinical decision making. Clinical experience does not appear to improve estimation ability in this limited study. PMID:22942938
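    The study's error metric is straightforward to apply; the guessed volume below is hypothetical.

    ```python
    # Standard estimate error as defined in the study:
    # |estimated - actual| / actual * 100
    def estimate_error(estimated_ml, actual_ml):
        return abs(estimated_ml - actual_ml) / actual_ml * 100

    # e.g., a hypothetical guess of 800 mL for the 500 mL cot scenario
    err = estimate_error(800, 500)   # 60.0 percent
    ```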

  20. An adaptive displacement estimation algorithm for improved reconstruction of thermal strain.

    PubMed

    Ding, Xuan; Dutta, Debaditya; Mahmoud, Ahmed M; Tillman, Bryan; Leers, Steven A; Kim, Kang

    2015-01-01

    Thermal strain imaging (TSI) can be used to differentiate between lipid and water-based tissues in atherosclerotic arteries. However, detecting small lipid pools in vivo requires accurate and robust displacement estimation over a wide range of displacement magnitudes. Phase-shift estimators such as Loupas' estimator and time-shift estimators such as normalized cross-correlation (NXcorr) are commonly used to track tissue displacements. However, Loupas' estimator is limited by phase-wrapping and NXcorr performs poorly when the SNR is low. In this paper, we present an adaptive displacement estimation algorithm that combines both Loupas' estimator and NXcorr. We evaluated this algorithm using computer simulations and an ex vivo human tissue sample. Using 1-D simulation studies, we showed that when the displacement magnitude induced by thermal strain was >λ/8 and the electronic system SNR was >25.5 dB, the NXcorr displacement estimate was less biased than the estimate found using Loupas' estimator. On the other hand, when the displacement magnitude was ≤λ/4 and the electronic system SNR was ≤25.5 dB, Loupas' estimator had less variance than NXcorr. We used these findings to design an adaptive displacement estimation algorithm. Computer simulations of TSI showed that the adaptive displacement estimator was less biased than either Loupas' estimator or NXcorr. Strain reconstructed from the adaptive displacement estimates improved the strain SNR by 43.7 to 350% and the spatial accuracy by 1.2 to 23.0% (P < 0.001). An ex vivo human tissue study provided results that were comparable to computer simulations. The results of this study showed that a novel displacement estimation algorithm, which combines two different displacement estimators, yielded improved displacement estimation and resulted in improved strain reconstruction.
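    The regime-selection logic implied by the simulation findings can be sketched as follows. Only the λ/8 and 25.5 dB thresholds come from the abstract; the actual adaptive algorithm combines the two estimators in a more involved way, and this function is a hypothetical illustration.

    ```python
    # Sketch of the selection rule suggested by the 1-D simulation findings:
    # prefer NXcorr for large displacements at high SNR (less bias),
    # fall back to Loupas' estimator otherwise (less variance).
    def choose_estimate(loupas_disp, nxcorr_disp, wavelength, snr_db):
        if abs(nxcorr_disp) > wavelength / 8 and snr_db > 25.5:
            return nxcorr_disp   # NXcorr's less-biased regime
        return loupas_disp       # Loupas' lower-variance regime
    ```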

  1. An Adaptive Displacement Estimation Algorithm for Improved Reconstruction of Thermal Strain

    PubMed Central

    Ding, Xuan; Dutta, Debaditya; Mahmoud, Ahmed M.; Tillman, Bryan; Leers, Steven A.; Kim, Kang

    2014-01-01

    Thermal strain imaging (TSI) can be used to differentiate between lipid and water-based tissues in atherosclerotic arteries. However, detecting small lipid pools in vivo requires accurate and robust displacement estimation over a wide range of displacement magnitudes. Phase-shift estimators such as Loupas’ estimator and time-shift estimators like normalized cross-correlation (NXcorr) are commonly used to track tissue displacements. However, Loupas’ estimator is limited by phase-wrapping and NXcorr performs poorly when the signal-to-noise ratio (SNR) is low. In this paper, we present an adaptive displacement estimation algorithm that combines both Loupas’ estimator and NXcorr. We evaluated this algorithm using computer simulations and an ex-vivo human tissue sample. Using 1-D simulation studies, we showed that when the displacement magnitude induced by thermal strain was >λ/8 and the electronic system SNR was >25.5 dB, the NXcorr displacement estimate was less biased than the estimate found using Loupas’ estimator. On the other hand, when the displacement magnitude was ≤λ/4 and the electronic system SNR was ≤25.5 dB, Loupas’ estimator had less variance than NXcorr. We used these findings to design an adaptive displacement estimation algorithm. Computer simulations of TSI using Field II showed that the adaptive displacement estimator was less biased than either Loupas’ estimator or NXcorr. Strain reconstructed from the adaptive displacement estimates improved the strain SNR by 43.7–350% and the spatial accuracy by 1.2–23.0% (p < 0.001). An ex-vivo human tissue study provided results that were comparable to computer simulations. The results of this study showed that a novel displacement estimation algorithm, which combines two different displacement estimators, yielded improved displacement estimation and resulted in improved strain reconstruction. PMID:25585398

  2. Estimating stage-specific daily survival probabilities of nests when nest age is unknown

    USGS Publications Warehouse

    Stanley, T.R.

    2004-01-01

    Estimation of daily survival probabilities of nests is common in studies of avian populations. Since the introduction of Mayfield's (1961, 1975) estimator, numerous models have been developed to relax Mayfield's assumptions and account for biologically important sources of variation. Stanley (2000) presented a model for estimating stage-specific (e.g. incubation stage, nestling stage) daily survival probabilities of nests that conditions on “nest type” and requires that nests be aged when they are found. Because aging nests typically requires handling the eggs, there may be situations where nests can not or should not be aged and the Stanley (2000) model will be inapplicable. Here, I present a model for estimating stage-specific daily survival probabilities that conditions on nest stage for active nests, thereby obviating the need to age nests when they are found. Specifically, I derive the maximum likelihood function for the model, evaluate the model's performance using Monte Carlo simulations, and provide software for estimating parameters (along with an example). For sample sizes as low as 50 nests, bias was small and confidence interval coverage was close to the nominal rate, especially when a reduced-parameter model was used for estimation.
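    The classic Mayfield estimator that these models generalize is worth keeping in view: the daily survival rate (DSR) is one minus losses per exposure-day, and period survival is the DSR raised to the period length. The numbers below are illustrative, not from any study.

    ```python
    # Classic Mayfield (1961, 1975) daily survival rate (illustrative data):
    exposure_days = 450.0      # summed days nests were under observation
    failures = 18              # nests lost during those exposure-days
    dsr = 1 - failures / exposure_days   # daily survival rate = 0.96
    survival_28d = dsr ** 28   # probability a nest survives a 28-day period
    ```

    Stage-specific models like Stanley's replace the single DSR with one rate per stage and fit them by maximum likelihood, which is what makes aging (or, in this paper, staging) the nests necessary.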

  3. Spatial Estimation of Sub-Hour Global Horizontal Irradiance Based on Official Observations and Remote Sensors

    PubMed Central

    Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús

    2014-01-01

    This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations). PMID:24732102

  4. Spatial estimation of sub-hour Global Horizontal Irradiance based on official observations and remote sensors.

    PubMed

    Gutierrez-Corea, Federico-Vladimir; Manso-Callejo, Miguel-Angel; Moreno-Regidor, María-Pilar; Velasco-Gómez, Jesús

    2014-04-11

    This study was motivated by the need to improve densification of Global Horizontal Irradiance (GHI) observations, increasing the number of surface weather stations that observe it, using sensors with a sub-hour periodicity and examining the methods of spatial GHI estimation (by interpolation) with that periodicity in other locations. The aim of the present research project is to analyze the goodness of 15-minute GHI spatial estimations for five methods in the territory of Spain (three geo-statistical interpolation methods, one deterministic method and the HelioSat2 method, which is based on satellite images). The research concludes that, when the work area has adequate station density, the best method for estimating GHI every 15 min is Regression Kriging interpolation using GHI estimated from satellite images as one of the input variables. On the contrary, when station density is low, the best method is estimating GHI directly from satellite images. A comparison between the GHI observed by volunteer stations and the estimation model applied concludes that 67% of the volunteer stations analyzed present values within the margin of error (average of ±2 standard deviations).
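    The structure of regression kriging — a regression on an auxiliary predictor (here, satellite-estimated GHI) plus spatial interpolation of the residuals — can be sketched with synthetic data. For brevity the residual interpolation below is inverse-distance weighting, a stand-in for the kriging step; all coordinates and values are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    xy = rng.uniform(0, 100, (30, 2))      # station coordinates (km)
    sat = rng.uniform(200, 800, 30)        # satellite-estimated GHI (W/m^2)
    truth = 0.9 * sat + 20                 # synthetic ground truth
    obs = truth + rng.normal(0, 5, 30)     # station observations

    # Step 1: regression trend of observed GHI on the satellite estimate
    beta = np.polyfit(sat, obs, 1)
    resid = obs - np.polyval(beta, sat)

    # Step 2: interpolate residuals spatially (IDW here; kriging in the paper)
    def predict(p, sat_p):
        d = np.linalg.norm(xy - p, axis=1) + 1e-9
        w = 1 / d ** 2
        return np.polyval(beta, sat_p) + (w * resid).sum() / w.sum()
    ```

    At a station location the prediction reproduces the observation (the interpolator is exact there), which mirrors the paper's finding that the satellite covariate carries most of the signal when station density is low.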

  5. A Latent Class Approach to Estimating Test-Score Reliability

    ERIC Educational Resources Information Center

    van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas

    2011-01-01

    This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…

  6. Alternative Methods To Estimate the Number of Homeless Children and Youth. Final Report. Research Paper.

    ERIC Educational Resources Information Center

    Burt, Martha R.

    This report presents the results of a federally mandated study done to determine the best means of identifying, locating, and counting homeless children and youth, for the purpose of facilitating their successful participation in school and other educational activities. Several alternative approaches to obtaining consistent national estimates of…

  7. Strategic Flexibility in Computational Estimation for Chinese- and Canadian-Educated Adults

    ERIC Educational Resources Information Center

    Xu, Chang; Wells, Emma; LeFevre, Jo-Anne; Imbo, Ineke

    2014-01-01

    The purpose of the present study was to examine factors that influence strategic flexibility in computational estimation for Chinese- and Canadian-educated adults. Strategic flexibility was operationalized as the percentage of trials on which participants chose the problem-based procedure that best balanced proximity to the correct answer with…

  8. Is the Professor In? Faculty Presence during Office Hours

    ERIC Educational Resources Information Center

    Pfund, Rory A.; Rogan, Jessica D.; Burnham, Bryan R.; Norcross, John C.

    2013-01-01

    Two studies were conducted on the availability of full-time faculty during their posted office hours. In the first, we surveyed students and faculty at a single university on their estimates of the percentage of faculty present during office hours. Students ("N" = 380) and faculty ("N" = 176) estimated that 77% and 83% of…

  9. Forest Stand Canopy Structure Attribute Estimation from High Resolution Digital Airborne Imagery

    Treesearch

    Demetrios Gatziolis

    2006-01-01

    A study of forest stand canopy variable assessment using digital, airborne, multispectral imagery is presented. Variable estimation involves stem density, canopy closure, and mean crown diameter, and it is based on quantification of spatial autocorrelation among pixel digital numbers (DN) using variogram analysis and an alternative, non-parametric approach known as...

  10. An In Vitro Assessment of Bioaccessibility of Arsenicals in Rice and the Use of this Estimate within a Probabilistic Exposure Model

    EPA Science Inventory

    In this study, an in vitro synthetic gastrointestinal extraction protocol was used to estimate bioaccessibility of different arsenicals present in seventeen rice samples of various grain types that were collected across the US. The across matrix average for total arsenic was 209...

  11. Market projections of cellulose nanomaterial-enabled products- Part 1: Applications

    Treesearch

    Jo Anne Shatkin; Theodore H. Wegner; E.M. (Ted) Bilek; John Cowie

    2014-01-01

    Nanocellulose provides a new materials platform for the sustainable production of high-performance nano-enabled products in an array of applications. In this paper, potential applications for cellulose nanomaterials are identified as the first step toward estimating market volume. The overall study, presented in two parts, estimates market volume on the basis of...

  12. Permeability Estimation Directly From Logging-While-Drilling Induced Polarization Data

    NASA Astrophysics Data System (ADS)

    Fiandaca, G.; Maurya, P. K.; Balbarini, N.; Hördt, A.; Christiansen, A. V.; Foged, N.; Bjerg, P. L.; Auken, E.

    2018-04-01

In this study, we present the prediction of permeability from time domain spectral induced polarization (IP) data, measured in boreholes on undisturbed formations using the El-log logging-while-drilling technique. We collected El-log data and hydraulic properties on unconsolidated Quaternary and Miocene deposits in boreholes at three locations at a field site in Denmark, characterized by different electrical water conductivity and chemistry. The high vertical resolution of the El-log technique matches the lithological variability at the site, minimizing ambiguity in the interpretation originating from resolution issues. The permeability values were computed from IP data using a laboratory-derived empirical relationship presented in a recent study for saturated unconsolidated sediments, without any further calibration. A very good correlation, within 1 order of magnitude, was found between the IP-derived permeability estimates and those derived using grain size analyses and slug tests, with similar depth trends and permeability contrasts. Furthermore, the effect of water conductivity on the IP-derived permeability estimations was found to be negligible in comparison to the permeability uncertainties estimated from the inversion and the laboratory-derived empirical relationship.

  13. Comparison of haemoglobin estimates using direct & indirect cyanmethaemoglobin methods.

    PubMed

    Bansal, Priyanka Gupta; Toteja, Gurudayal Singh; Bhatia, Neena; Gupta, Sanjeev; Kaur, Manpreet; Adhikari, Tulsi; Garg, Ashok Kumar

    2016-10-01

Estimation of haemoglobin is the most widely used method to assess anaemia. Although the direct cyanmethaemoglobin method is the recommended method for estimating haemoglobin, it may not be feasible under field conditions. Hence, the present study was undertaken to compare the indirect cyanmethaemoglobin method against the conventional direct method for haemoglobin estimation. Haemoglobin levels were estimated for 888 adolescent girls aged 11-18 yr residing in an urban slum in Delhi by both direct and indirect cyanmethaemoglobin methods, and the results were compared. The mean haemoglobin levels for 888 whole blood samples estimated by the direct and indirect cyanmethaemoglobin methods were 116.1 ± 12.7 and 110.5 ± 12.5 g/l, respectively, with a mean difference of 5.67 g/l (95% confidence interval: 5.45 to 5.90, P<0.001), which is equivalent to 0.567 g%. The prevalence of anaemia was reported as 59.6 and 78.2 per cent by the direct and indirect methods, respectively. Sensitivity and specificity of the indirect cyanmethaemoglobin method were 99.2 and 56.4 per cent, respectively. Using regression analysis, a prediction equation was developed for indirect haemoglobin values. The present findings revealed that the indirect cyanmethaemoglobin method overestimated the prevalence of anaemia as compared to the direct method. However, if a correction factor is applied, the indirect method could be successfully used for estimating true haemoglobin levels. More studies should be undertaken to establish the agreement and correction factor between direct and indirect cyanmethaemoglobin methods.
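The comparison statistics reported above can be sketched on simulated paired readings (the simulated means and difference mimic the reported values but the data are synthetic, and the 120 g/l anaemia cutoff is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical paired haemoglobin values (g/l); the real study used n = 888
direct = rng.normal(116.1, 12.7, 888)
indirect = direct - 5.67 + rng.normal(0, 3.0, 888)   # indirect reads lower

mean_diff = (direct - indirect).mean()               # systematic difference

cutoff = 120.0                                       # illustrative anaemia threshold
anaemic_direct = direct < cutoff
anaemic_indirect = indirect < cutoff
sensitivity = (anaemic_indirect & anaemic_direct).sum() / anaemic_direct.sum()
specificity = (~anaemic_indirect & ~anaemic_direct).sum() / (~anaemic_direct).sum()

# Regression-based correction: predict direct values from indirect readings
slope, intercept = np.polyfit(indirect, direct, 1)
corrected = slope * indirect + intercept
```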

  14. Mean size estimation yields left-side bias: Role of attention on perceptual averaging.

    PubMed

    Li, Kuei-An; Yeh, Su-Ling

    2017-11-01

The human visual system can estimate the mean size of a set of items effectively; however, little is known about whether information from each visual field contributes equally to mean size estimation. In this study, we examined whether a left-side bias (LSB), the tendency of perceptual judgments to depend more heavily on the left visual field's inputs, affects mean size estimation. Participants were instructed to estimate the mean size of 16 spots. In half of the trials, the mean size of the spots on the left side was larger than that on the right side (the left-larger condition) and vice versa (the right-larger condition). Our results illustrated an LSB: A larger estimated mean size was found in the left-larger condition than in the right-larger condition (Experiment 1), and the LSB vanished when participants' attention was effectively cued to the right side (Experiment 2b). Furthermore, the magnitude of the LSB increased with stimulus-onset asynchrony (SOA) when spots on the left side were presented earlier than the right side. In contrast, the LSB vanished and then reversed with SOA when spots on the right side were presented earlier (Experiment 3). This study offers the first piece of evidence suggesting that the LSB does have a significant influence on mean size estimation of a group of items, induced by a leftward attentional bias that enhances the prior entry effect on the left side.

  15. Sampling and estimating recreational use.

    Treesearch

    Timothy G. Gregoire; Gregory J. Buhyoff

    1999-01-01

Probability sampling methods applicable to estimating recreational use are presented. Both single- and multiple-access recreation sites are considered, and one- and two-stage sampling methods are described. Estimation of recreational use is illustrated in a series of examples.
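A single-access, single-stage version of such a probability sampling estimate can be sketched as a simple expansion estimator (season length, visit counts, and sample size below are invented):

```python
import numpy as np

rng = np.random.default_rng(7)
season_days = 120
daily_visits = rng.poisson(40, season_days)          # hypothetical true daily use

n = 20                                               # days sampled at random
sampled = rng.choice(season_days, size=n, replace=False)
mean_daily = daily_visits[sampled].mean()
total_estimate = season_days * mean_daily            # expansion estimator
true_total = daily_visits.sum()
```

A two-stage design, as covered in the paper, would additionally subsample time periods within each selected day and expand at both stages.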

  16. Estimation of Stormwater Interception Rate for various LID Facilities

    NASA Astrophysics Data System (ADS)

    Kim, S.; Lee, O.; Choi, J.

    2017-12-01

In this study, the stormwater interception rate is proposed for use in the design of LID facilities. For this purpose, EPA-SWMM is built for some areas of the Noksan National Industrial Complex, where long-term observed stormwater data were monitored, and stormwater interception rates for various design capacities of various LID facilities are estimated. While the sensitivity of the stormwater interception rate to the design specifications of bio-retention and infiltration trench facilities is not large, its sensitivity to local rainfall characteristics is relatively large. A comparison of the rainfall interception rate estimation method currently in official use in Korea with the one proposed in this study shows that the present method is highly likely to overestimate the performance of bio-retention and infiltration trench facilities. Finally, new stormwater interception rate formulas for bio-retention and infiltration trench LID facilities are proposed. Acknowledgement This research was supported by a grant (2016000200002) from the Public Welfare Technology Development Program funded by the Ministry of Environment of the Korean government.

  17. Potential radiological impact of tornadoes on the safety of Nuclear Fuel Services' West Valley Fuel Reprocessing Plant. 2. Reentrainment and discharge of radioactive materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, W Jr

    1981-07-01

This report describes results of a parametric study of quantities of radioactive materials that might be discharged by a tornado-generated depressurization of contaminated process cells within the presently inoperative Nuclear Fuel Services' (NFS) fuel reprocessing facility near West Valley, New York. The study involved the following tasks: determining approximate quantities of radioactive materials in the cells and characterizing particle-size distribution; estimating the degree of mass reentrainment from particle-size distribution and from air speed data presented in Part 1; and estimating the quantities of radioactive material (source term) released from the cells to the atmosphere. The study has shown that improperly sealed manipulator ports in the Process Mechanical Cell (PMC) present the most likely pathway for release of substantial quantities of radioactive material into the atmosphere under tornado accident conditions at the facility.

  18. New robust statistical procedures for the polytomous logistic regression models.

    PubMed

    Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro

    2018-05-17

This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real-life examples are presented to justify the requirement of suitable robust statistical procedures in place of the likelihood based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article is further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.

  19. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
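The kind of coverage check the study argues for can be illustrated with a toy simulation: nominal 95% z-intervals for a normal mean undercover slightly at small n because the sample standard deviation is used (a generic illustration, not the ordinal CFA setting):

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps, true_mean = 20, 2000, 0.0
cover = 0
for _ in range(reps):
    x = rng.normal(true_mean, 1.0, n)
    half = 1.96 * x.std(ddof=1) / np.sqrt(n)     # z-interval half-width
    cover += (x.mean() - half <= true_mean <= x.mean() + half)
coverage = cover / reps                          # empirical coverage vs nominal 0.95
```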

  1. Methods for estimating peak-flow frequencies at ungaged sites in Montana based on data through water year 2011: Chapter F in Montana StreamStats

    USGS Publications Warehouse

    Sando, Roy; Sando, Steven K.; McCarthy, Peter M.; Dutton, DeAnn M.

    2016-04-05

The U.S. Geological Survey (USGS), in cooperation with the Montana Department of Natural Resources and Conservation, completed a study to update methods for estimating peak-flow frequencies at ungaged sites in Montana based on peak-flow data at streamflow-gaging stations through water year 2011. The methods allow estimation of peak-flow frequencies (that is, peak-flow magnitudes, in cubic feet per second, associated with annual exceedance probabilities of 66.7, 50, 42.9, 20, 10, 4, 2, 1, 0.5, and 0.2 percent) at ungaged sites. The annual exceedance probabilities correspond to 1.5-, 2-, 2.33-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence intervals, respectively. Regional regression analysis is a primary focus of Chapter F of this Scientific Investigations Report, and regression equations for estimating peak-flow frequencies at ungaged sites in eight hydrologic regions in Montana are presented. The regression equations are based on analysis of peak-flow frequencies and basin characteristics at 537 streamflow-gaging stations in or near Montana and were developed using generalized least squares regression or weighted least squares regression. All of the data used in calculating basin characteristics that were included as explanatory variables in the regression equations were developed for and are available through the USGS StreamStats application (http://water.usgs.gov/osw/streamstats/) for Montana. StreamStats is a Web-based geographic information system application that was created by the USGS to provide users with access to an assortment of analytical tools that are useful for water-resource planning and management. The primary purpose of the Montana StreamStats application is to provide estimates of basin characteristics and streamflow characteristics for user-selected ungaged sites on Montana streams. 
The regional regression equations presented in this report chapter can be conveniently solved using the Montana StreamStats application. Selected results from this study were compared with results of previous studies. For most hydrologic regions, the regression equations reported for this study had lower mean standard errors of prediction (in percent) than the previously reported regression equations for Montana. The equations presented for this study are considered to be an improvement on the previously reported equations primarily because this study (1) included 13 more years of peak-flow data; (2) included 35 more streamflow-gaging stations than previous studies; (3) used a detailed geographic information system (GIS)-based definition of the regulation status of streamflow-gaging stations, which allowed better determination of the unregulated peak-flow records that are appropriate for use in the regional regression analysis; (4) included advancements in GIS and remote-sensing technologies, which allowed more convenient calculation of basin characteristics and investigation of many more candidate basin characteristics; and (5) included advancements in computational and analytical methods, which allowed more thorough and consistent data analysis. This report chapter also presents other methods for estimating peak-flow frequencies at ungaged sites. Two methods for estimating peak-flow frequencies at ungaged sites located on the same streams as streamflow-gaging stations are described. Additionally, envelope curves relating maximum recorded annual peak flows to contributing drainage area for each of the eight hydrologic regions in Montana are presented and compared to a national envelope curve. In addition to providing general information on characteristics of large peak flows, the regional envelope curves can be used to assess the reasonableness of peak-flow frequency estimates determined using the regression equations.
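The general form of such regional regression equations can be sketched as a power-law fit of peak flow on drainage area (the station data below are invented for illustration; the actual Montana equations use several basin characteristics and generalized or weighted least squares rather than ordinary least squares):

```python
import numpy as np

# Hypothetical station data: drainage area (mi^2) and 1%-AEP peak flow (ft^3/s)
area = np.array([12.0, 55.0, 130.0, 420.0, 980.0, 2300.0])
q100 = np.array([850.0, 2600.0, 4800.0, 11000.0, 19000.0, 35000.0])

# Regional equations typically take the form Q = a * A^b, fit in log space
b, log_a = np.polyfit(np.log10(area), np.log10(q100), 1)
a = 10**log_a

def q100_ungaged(drainage_area):
    """Peak-flow estimate at an ungaged site from drainage area alone."""
    return a * drainage_area**b

est = q100_ungaged(200.0)
```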

  2. Assessing removals for North Central forest inventories.

    Treesearch

    W. Brad Smith

    1991-01-01

Discusses the method used by the Forest Inventory and Analysis Unit for estimating timber removals, and presents the relationship of timber utilization studies, primary lumber mill studies, and forest inventory data.

  3. Ultra Wide-Band Localization and SLAM: A Comparative Study for Mobile Robot Navigation

    PubMed Central

    Segura, Marcelo J.; Auat Cheein, Fernando A.; Toibero, Juan M.; Mut, Vicente; Carelli, Ricardo

    2011-01-01

In this work, a comparative study between an Ultra Wide-Band (UWB) localization system and a Simultaneous Localization and Mapping (SLAM) algorithm is presented. Due to its high bandwidth and short pulse length, UWB potentially allows great accuracy in range measurements based on Time of Arrival (TOA) estimation. SLAM algorithms recursively estimate the map of an environment and the pose (position and orientation) of a mobile robot within that environment. The comparative study presented here involves the performance analysis of implementing in parallel an UWB localization based system and a SLAM algorithm on a mobile robot navigating within an environment. Real-time results as well as error analysis are also shown in this work. PMID:22319397
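The TOA-based ranging that UWB localization relies on can be sketched together with a linearized least-squares position fix (the anchor layout is invented, and this is a generic trilateration sketch, not the paper's implementation):

```python
import numpy as np

C = 299_792_458.0                       # speed of light, m/s

def toa_range(toa_seconds):
    """Range implied by a time-of-arrival measurement."""
    return C * toa_seconds

def trilaterate(anchors, ranges):
    """Linearized least-squares 2D position fix from >= 3 anchor ranges.
    Subtracting the first range equation from the others removes the
    quadratic terms, leaving a linear system in (x, y)."""
    x0, y0 = anchors[0]
    A, rhs = [], []
    for (xi, yi), ri in zip(anchors[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        rhs.append(ranges[0]**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)
    return sol

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = np.array([3.0, 4.0])
ranges = [np.hypot(*(true_pos - np.array(a))) for a in anchors]
pos = trilaterate(anchors, ranges)      # recovers (3, 4) from noise-free ranges
```

With noisy TOA measurements the same least-squares fix applies, which is where UWB's short pulses (fine time resolution) pay off.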

  4. Using airborne light detection and ranging as a sampling tool for estimating forest biomass resources in the upper Tanana Valley of interior Alaska

    Treesearch

    Hans-Erik Andersen; Jacob Strunk; Hailemariam Temesgen

    2011-01-01

    Airborne laser scanning, collected in a sampling mode, has the potential to be a valuable tool for estimating the biomass resources available to support bioenergy production in rural communities of interior Alaska. In this study, we present a methodology for estimating forest biomass over a 201,226-ha area (of which 163,913 ha are forested) in the upper Tanana valley...

  5. Gravity-darkening exponents in semi-detached binary systems from their photometric observations. II.

    NASA Astrophysics Data System (ADS)

    Djurašević, G.; Rovithis-Livaniou, H.; Rovithis, P.; Georgiades, N.; Erkapić, S.; Pavlović, R.

    2006-01-01

This second part of our study concerning gravity-darkening presents the results for 8 semi-detached close binary systems. From the light-curve analysis of these systems the exponent of the gravity-darkening (GDE) for the Roche lobe filling components has been empirically derived. The method used for the light-curve analysis is based on Roche geometry, and enables simultaneous estimation of the systems' parameters and the gravity-darkening exponents. Our analysis is restricted to the black-body approximation, which can influence the parameter estimation to some degree. The results of our analysis are: 1) For four of the systems, namely TX UMa, β Per, AW Cam and TW Cas, there is a very good agreement between empirically estimated and theoretically predicted values for purely convective envelopes. 2) For the AI Dra system, the estimated value of the gravity-darkening exponent is greater, and for UX Her, TW And and XZ Pup smaller, than the corresponding theoretical predictions, but for all the mentioned systems the obtained values of the gravity-darkening exponent are quite close to the theoretically expected values. 3) Our analysis has shown generally that, with the correction of the previously estimated mass ratios of the components within some of the analysed systems, the theoretical predictions of the gravity-darkening exponents for stars with convective envelopes are highly reliable. The anomalous values of the GDE found in some earlier studies of these systems can be considered a consequence of the inappropriate method used to estimate the GDE. 4) The empirical estimations of the GDE given in Paper I and in the present study indicate that in light-curve analysis one can apply the recent theoretical predictions of the GDE with high confidence for stars with both convective and radiative envelopes.

  6. Estimation of the object orientation and location with the use of MEMS sensors

    NASA Astrophysics Data System (ADS)

    Sawicki, Aleksander; Walendziuk, Wojciech; Idzkowski, Adam

    2015-09-01

The article presents the implementation of algorithms for estimating the orientation of an object in 3D space and its displacement in 2D space. Moreover, general methods of storing orientation using Euler angles, a quaternion and a rotation matrix are presented. The experimental part presents the results of the complementary filter implementation. In the study, an experimental microprocessor module based on the STM32F4 Discovery system and a myRIO hardware platform equipped with an FPGA were used. The attempts to track an object in two-dimensional space, which are shown in the final part of this article, were made with the use of the equipment mentioned above.
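A minimal version of the complementary filter used in the experimental part can be sketched as follows (the sample rate, gyro bias, and accelerometer noise levels are invented for illustration):

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse integrated gyro rate (smooth but drifting) with the noisy
    accelerometer tilt angle; alpha sets the crossover between the two."""
    angle = accel_angles[0]
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
    return angle

# Static object at a true tilt of 10 degrees
n, dt = 500, 0.01
gyro = [0.5] * n                        # deg/s bias: pure integration would drift 2.5 deg
accel = [10.0 + (-1) ** i * 0.5 for i in range(n)]   # +/-0.5 deg measurement noise
est = complementary_filter(gyro, accel, dt)          # stays near 10 degrees
```

The gyro term dominates at short time scales while the accelerometer term slowly pulls the estimate back, suppressing both drift and noise.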

  7. Synthetic aperture radar target detection, feature extraction, and image formation techniques

    NASA Technical Reports Server (NTRS)

    Li, Jian

    1994-01-01

    This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.

  8. Estimation of the Proportion of Underachieving Students in Compulsory Secondary Education in Spain: An Application of the Rasch Model

    PubMed Central

    Veas, Alejandro; Gilar, Raquel; Miñano, Pablo; Castejón, Juan-Luis

    2016-01-01

There are very few studies in Spain that treat underachievement rigorously, and those that do are typically related to gifted students. The present study examined the proportion of underachieving students using the Rasch measurement model. A sample of 643 first-year high school students (mean age = 12.09; SD = 0.47) from 8 schools in the province of Alicante (Spain) completed the Battery of Differential and General Skills (Badyg), and these students' grade point averages (GPAs) were obtained from teachers. Dichotomous and partial-credit Rasch models were fitted. After adjusting the measurement instruments, the individual underachievement index yielded a total sample of 181 underachieving students, or 28.14% of the total sample across the ability levels. This study confirms that the Rasch measurement model can accurately estimate the construct validity of both the intelligence test and the academic grades for the calculation of underachieving students. Furthermore, the present study constitutes a pioneer framework for the estimation of the prevalence of underachievement in Spain. PMID:26973586
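The dichotomous Rasch model underlying such analyses can be sketched as an item response function (a generic illustration with invented item difficulties; the study fitted both dichotomous and partial-credit models):

```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability of a correct response for a
    person of ability theta on an item of difficulty b (logit scale)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

difficulties = np.array([-1.0, 0.0, 1.5])    # hypothetical item difficulties

def expected_score(theta):
    """Expected test score: sum of the item response probabilities."""
    return rasch_prob(theta, difficulties).sum()
```

Comparing a student's observed score against the expected score at their estimated ability is the kind of discrepancy an underachievement index formalizes.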

  9. Estimating disease prevalence from two-phase surveys with non-response at the second phase

    PubMed Central

    Gao, Sujuan; Hui, Siu L.; Hall, Kathleen S.; Hendrie, Hugh C.

    2010-01-01

In this paper we compare several methods for estimating population disease prevalence from data collected by two-phase sampling when there is non-response at the second phase. The traditional weighting type estimator requires the missing completely at random assumption and may yield biased estimates if the assumption does not hold. We review two approaches and propose one new approach to adjust for non-response, assuming that the non-response depends on a set of covariates collected at the first phase: an adjusted weighting type estimator using estimated response probability from a response model; a modelling type estimator using predicted disease probability from a disease model; and a regression type estimator combining the adjusted weighting type estimator and the modelling type estimator. These estimators are illustrated using data from an Alzheimer's disease study in two populations. Simulation results are presented to investigate the performance of the proposed estimators under various situations. PMID:10931514
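The adjusted weighting type estimator can be sketched with inverse-probability weighting on a simulated two-phase survey (the covariate, disease, and response models below are invented; for clarity the sketch weights by the true response probabilities, whereas the paper estimates them from a response model fitted to phase-1 covariates):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
age = rng.normal(75, 6, n)                          # phase-1 covariate
p_disease = 1 / (1 + np.exp(-(-6 + 0.07 * age)))    # prevalence rises with age
disease = rng.random(n) < p_disease

# Phase-2 response falls with age, so responders are not a random subset
p_respond = 1 / (1 + np.exp(-(12 - 0.15 * age)))
respond = rng.random(n) < p_respond

naive = disease[respond].mean()                     # biased if MCAR fails

# Adjusted weighting: weight each responder by 1 / response probability
w = 1 / p_respond[respond]
adjusted = np.sum(w * disease[respond]) / np.sum(w)
true_prev = disease.mean()
```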

  10. Measuring agreement of multivariate discrete survival times using a modified weighted kappa coefficient.

    PubMed

    Guo, Ying; Manatunga, Amita K

    2009-03-01

Assessing agreement is often of interest in clinical studies to evaluate the similarity of measurements produced by different raters or methods on the same subjects. We present a modified weighted kappa coefficient to measure agreement between bivariate discrete survival times. The proposed kappa coefficient accommodates censoring by redistributing the mass of censored observations within the grid where the unobserved events may potentially happen. A generalized modified weighted kappa is proposed for multivariate discrete survival times. We estimate the modified kappa coefficients nonparametrically through a multivariate survival function estimator. The asymptotic properties of the kappa estimators are established and the performance of the estimators is examined through simulation studies of bivariate and trivariate survival times. We illustrate the application of the modified kappa coefficient in the presence of censored observations with data from a prostate cancer study.
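The uncensored core of a weighted kappa can be sketched as follows (the ratings are invented, and the paper's actual contribution, redistributing the mass of censored observations, is not shown):

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weight="linear"):
    """Weighted kappa for two raters over ordinal categories 0..n_cat-1."""
    O = np.zeros((n_cat, n_cat))            # observed joint proportions
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()
    p1, p2 = O.sum(axis=1), O.sum(axis=0)
    E = np.outer(p1, p2)                    # chance-expected proportions
    i, j = np.indices((n_cat, n_cat))
    d = np.abs(i - j) / (n_cat - 1)
    W = 1 - (d if weight == "linear" else d**2)   # partial credit for near misses
    po, pe = (W * O).sum(), (W * E).sum()
    return (po - pe) / (1 - pe)

r1 = [0, 0, 1, 1, 2, 2, 3, 3, 1, 2]
r2 = [0, 1, 1, 1, 2, 3, 3, 3, 1, 2]
kappa = weighted_kappa(r1, r2, 4)
```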

  11. Critical elements on fitting the Bayesian multivariate Poisson Lognormal model

    NASA Astrophysics Data System (ADS)

    Zamzuri, Zamira Hasanah binti

    2015-10-01

Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, hyperparameters and tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we have shown that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrated the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.
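The third critical element, tuning parameters tied to the acceptance rate, can be illustrated with a generic random-walk Metropolis step-size adaptation (a standard-normal target density stands in for the MPL posterior, and the 0.3 target acceptance rate is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(9)

def tune_step(log_post, x0, step=1.0, target=0.3, batches=50, batch_size=100):
    """Adapt a random-walk Metropolis step size toward a target acceptance rate."""
    x = x0
    for _ in range(batches):
        accepted = 0
        for _ in range(batch_size):
            prop = x + step * rng.normal()
            if np.log(rng.random()) < log_post(prop) - log_post(x):
                x, accepted = prop, accepted + 1
        rate = accepted / batch_size
        step *= 1.1 if rate > target else 0.9    # crude multiplicative adaptation
    return step, rate

# Standard normal log-density as a stand-in target
step, rate = tune_step(lambda v: -0.5 * v**2, 0.0)
```

In a full sampler the adaptation would be frozen (or made vanishing) before the iterations used for inference, so that the chain's stationary distribution is preserved.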

  12. Method development estimating ambient mercury concentration from monitored mercury wet deposition

    NASA Astrophysics Data System (ADS)

    Chen, S. M.; Qiu, X.; Zhang, L.; Yang, F.; Blanchard, P.

    2013-05-01

Speciated atmospheric mercury data have recently been monitored at multiple locations in North America, but the spatial coverage is far less than that of the long-established mercury wet deposition network. The present study describes a first attempt at linking ambient concentration with wet deposition using Beta distribution fitting of a ratio estimate. The mean, median, mode, standard deviation, and skewness of the fitted Beta distribution parameters were generated using data collected in 2009 at 11 monitoring stations. Comparing the normalized histogram and the fitted density function, the empirical and fitted Beta distributions of the ratio show a close fit. The estimated ambient mercury concentration was further partitioned into reactive gaseous mercury and particulate bound mercury using the linear regression model developed by Amos et al. (2012). The method presented here can be used to roughly estimate ambient mercury concentration at locations and/or times where such measurement is not available but where wet deposition is monitored.
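The Beta-distribution fitting of a ratio can be sketched with a method-of-moments fit (the Beta(2, 5) sample below is synthetic and merely stands in for the station ratio data; the study's summary statistics come from the fitted parameters in the same way):

```python
import numpy as np

def fit_beta_moments(x):
    """Method-of-moments fit of a Beta(a, b) distribution to data on (0, 1)."""
    m, v = x.mean(), x.var(ddof=1)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

rng = np.random.default_rng(5)
ratios = rng.beta(2.0, 5.0, 2000)        # synthetic stand-in for the ratio data
a_hat, b_hat = fit_beta_moments(ratios)
mean_hat = a_hat / (a_hat + b_hat)                 # matches the sample mean
mode_hat = (a_hat - 1) / (a_hat + b_hat - 2)       # valid when a_hat, b_hat > 1
```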

  13. Valuing improved wetland quality using choice modeling

    NASA Astrophysics Data System (ADS)

    Morrison, Mark; Bennett, Jeff; Blamey, Russell

    1999-09-01

    The main stated preference technique used for estimating environmental values is the contingent valuation method. In this paper the results of an application of an alternative technique, choice modeling, are reported. Choice modeling was developed in marketing and transport applications but has been used in only a handful of environmental applications, most of which have focused on use values. The case study presented here involves the estimation of the nonuse environmental values provided by the Macquarie Marshes, a major wetland in New South Wales, Australia. Estimates of the nonuse value the community places on preventing job losses are also presented. The reported models are robust, having high explanatory power and variables that are statistically significant and consistent with expectations. These results support the hypothesis that choice modeling can be used to estimate nonuse values for both environmental and social consequences of resource use changes.

  14. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.

  15. Performance and Weight Estimates for an Advanced Open Rotor Engine

    NASA Technical Reports Server (NTRS)

    Hendricks, Eric S.; Tong, Michael T.

    2012-01-01

    NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies which may enable dramatic reductions in the environmental impact of future generations of subsonic aircraft. The open rotor concept (also historically referred to as an unducted fan or advanced turboprop) may allow this objective to be achieved by reducing engine fuel consumption. To evaluate the potential impact of open rotor engines, cycle modeling and engine weight estimation capabilities have been developed. The initial development of the cycle modeling capabilities in the Numerical Propulsion System Simulation (NPSS) tool was presented in a previous paper. Following that initial development, further advancements have been made to the cycle modeling and weight estimation capabilities for open rotor engines and are presented in this paper. The developed modeling capabilities are used to predict the performance of an advanced open rotor concept using modern counter-rotating propeller designs. Finally, performance and weight estimates for this engine are presented and compared to results from a previous NASA study of advanced geared and direct-drive turbofans.

  16. Probability judgments under ambiguity and conflict

    PubMed Central

    Smithson, Michael

    2015-01-01

    Whether conflict and ambiguity are distinct kinds of uncertainty remains an open question, as does their joint impact on judgments of overall uncertainty. This paper reviews recent advances in our understanding of human judgment and decision making when both ambiguity and conflict are present, and presents two types of testable models of judgments under conflict and ambiguity. The first type concerns estimate-pooling to arrive at “best” probability estimates. The second type is models of subjective assessments of conflict and ambiguity. These models are developed for dealing with both described and experienced information. A framework for testing these models in the described-information setting is presented, including a reanalysis of a multi-nation data-set to test best-estimate models, and a study of participants' assessments of conflict, ambiguity, and overall uncertainty reported by Smithson (2013). A framework for research in the experienced-information setting is then developed, that differs substantially from extant paradigms in the literature. This framework yields new models of “best” estimates and perceived conflict. The paper concludes with specific suggestions for future research on judgment and decision making under conflict and ambiguity. PMID:26042081

  17. Probability judgments under ambiguity and conflict.

    PubMed

    Smithson, Michael

    2015-01-01

    Whether conflict and ambiguity are distinct kinds of uncertainty remains an open question, as does their joint impact on judgments of overall uncertainty. This paper reviews recent advances in our understanding of human judgment and decision making when both ambiguity and conflict are present, and presents two types of testable models of judgments under conflict and ambiguity. The first type concerns estimate-pooling to arrive at "best" probability estimates. The second type is models of subjective assessments of conflict and ambiguity. These models are developed for dealing with both described and experienced information. A framework for testing these models in the described-information setting is presented, including a reanalysis of a multi-nation data-set to test best-estimate models, and a study of participants' assessments of conflict, ambiguity, and overall uncertainty reported by Smithson (2013). A framework for research in the experienced-information setting is then developed, that differs substantially from extant paradigms in the literature. This framework yields new models of "best" estimates and perceived conflict. The paper concludes with specific suggestions for future research on judgment and decision making under conflict and ambiguity.

  18. Ionospheric modelling to boost the PPP-RTK positioning and navigation in Australia

    NASA Astrophysics Data System (ADS)

    Arsov, Kirco; Terkildsen, Michael; Olivares, German

    2017-04-01

    This paper deals with the implementation of a 3-D ionospheric model to support GNSS positioning and navigation activities in Australia. We introduce two strategies for Slant Total Electron Content (STEC) estimation from GNSS CORS sites in Australia. In the first, STEC is estimated in the PPP-RTK network processing: the ionosphere is estimated together with other GNSS network parameters, such as satellite clocks and satellite phase biases. In the second, STEC is estimated on a station-by-station basis by taking advantage of the already-known station positions and the relations between different satellite ambiguities. Accuracy studies and considerations are presented and discussed. Furthermore, based on this STEC, 3-D ionosphere modelling is performed; we present simple interpolation, 3-D tomography and bi-cubic splines as modelling techniques. To assess these models, a (user) PPP-RTK test bed is established and a sensitivity matrix is introduced and analyzed based on time to first fix (TTFF) of ambiguities, positioning accuracy, PPP-RTK solution convergence time, etc. Different spatial configurations and constellations are presented and assessed.

  19. Impulsivity in College Students with and without ADHD

    ERIC Educational Resources Information Center

    Miller, Jessica A.

    2010-01-01

    Impulsivity is the cardinal symptom of ADHD. It is estimated that ADHD is present in eighteen percent of children and in four percent of adults. The present study repeats and extends a previous study (Gray, Breier, Foorman, & Fletcher, 2002) that measured impulsivity in adolescents with and without ADHD, which found higher false alarm rates…

  20. Byers Auto Group: A Case Study Into The Economics, Zoning, and Overall Process of Installing Small Wind Turbines at Two Automotive Dealerships in Ohio (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinclair, K.; Oteri, F.

    This presentation provides the talking points about a case study on the installation of a $600,000 small wind project, the installation process, estimated annual energy production and percentage of energy needs met by the turbines.

  1. Tree STEM and Canopy Biomass Estimates from Terrestrial Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Olofsson, K.; Holmgren, J.

    2017-10-01

    In this study an automatic method for estimating both tree stem and tree canopy biomass is presented. The point-cloud tree extraction techniques operate on TLS data and model the biomass using the estimated stem and canopy volumes as independent variables. The regression model fit error is of the order of less than 5 kg, which gives a relative model error of about 5% for the stem estimate and 10-15% for the spruce and pine canopy biomass estimates. The canopy biomass estimate was improved by separating the models by tree species, which indicates that the method is allometry-dependent and that the regression models need to be recomputed for areas with different climate and vegetation.

  2. A novel approach to estimation of the time to biomarker threshold: applications to HIV.

    PubMed

    Reddy, Tarylee; Molenberghs, Geert; Njagi, Edmund Njeru; Aerts, Marc

    2016-11-01

    In longitudinal studies of biomarkers, an outcome of interest is the time at which a biomarker reaches a particular threshold. The CD4 count is a widely used marker of human immunodeficiency virus progression. Because of the inherent variability of this marker, a single CD4 count below a relevant threshold should be interpreted with caution. Several studies have applied persistence criteria, designating the outcome as the time to the occurrence of two consecutive measurements less than the threshold. In this paper, we propose a method to estimate the time to attainment of two consecutive CD4 counts less than a meaningful threshold, which takes into account the patient-specific trajectory and measurement error. An expression for the expected time to threshold is presented, which is a function of the fixed effects, random effects and residual variance. We present an application to human immunodeficiency virus-positive individuals from a seroprevalent cohort in Durban, South Africa. Two thresholds are examined, and 95% bootstrap confidence intervals are presented for the estimated time to threshold. Sensitivity analysis revealed that results are robust to truncation of the series and variation in the number of visits considered for most patients. Caution should be exercised when interpreting the estimated times for patients who exhibit very slow rates of decline and patients who have less than three measurements. We also discuss the relevance of the methodology to the study of other diseases and present such applications. We demonstrate that the method proposed is computationally efficient and offers more flexibility than existing frameworks. Copyright © 2016 John Wiley & Sons, Ltd.
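The "two consecutive measurements below threshold" outcome can be illustrated with a Monte Carlo sketch for a single hypothetical patient with a linear CD4 trajectory plus measurement error. All numbers are illustrative, and this simulation is a stand-in for intuition, not the authors' closed-form expression.

```python
import random

def time_to_two_below(intercept, slope, threshold, visits, sigma,
                      sims=2000, rng=random):
    """Monte Carlo estimate of the first visit time at which two consecutive
    noisy measurements of a linear trajectory fall below a threshold."""
    times = []
    for _ in range(sims):
        prev_below = False
        for t in visits:
            y = intercept + slope * t + rng.gauss(0.0, sigma)
            below = y < threshold
            if below and prev_below:
                times.append(t)
                break
            prev_below = below
        else:
            times.append(visits[-1])  # censored at the last visit
    return sum(times) / len(times)

random.seed(42)
# hypothetical patient: CD4 of 600 declining 50 cells/year, threshold 350,
# semi-annual visits over 10 years, measurement SD of 60 cells
t_est = time_to_two_below(600.0, -50.0, 350.0,
                          visits=[i * 0.5 for i in range(1, 21)], sigma=60.0)
```

The noiseless trajectory crosses the threshold at 5 years; the estimate hovers near that, shifted by the persistence criterion and the measurement error.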

  3. Born iterative reconstruction using perturbed-phase field estimates.

    PubMed

    Astheimer, Jeffrey P; Waag, Robert C

    2008-10-01

    A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements.

  4. Training the East German Labour Force. Microeconometric Evaluations of Continuous Vocational Training after Unification. Studies in Contemporary Economics.

    ERIC Educational Resources Information Center

    Lechner, Michael

    This book presents empirical evaluations of the effects of different types of training programs in East Germany. Chapter 1 presents an overview of labor, the study objectives and results, and discussion of causality and the identification problem in evaluation studies. Chapter 2 examines point estimates of the effects of two types of continuous…

  5. Reinforcing flood-risk estimation.

    PubMed

    Reed, Duncan W

    2002-07-15

    Flood-frequency estimation is inherently uncertain. The practitioner applies a combination of gauged data, scientific method and hydrological judgement to derive a flood-frequency curve for a particular site. The resulting estimate can be thought fully satisfactory only if it is broadly consistent with all that is reliably known about the flood-frequency behaviour of the river. The paper takes as its main theme the search for information to strengthen a flood-risk estimate made from peak flows alone. Extra information comes in many forms, including documentary and monumental records of historical floods, and palaeological markers. Meteorological information is also useful, although rainfall rarity is difficult to assess objectively and can be a notoriously unreliable indicator of flood rarity. On highly permeable catchments, groundwater levels present additional data. Other types of information are relevant to judging hydrological similarity when the flood-frequency estimate derives from data pooled across several catchments. After highlighting information sources, the paper explores a second theme: that of consistency in flood-risk estimates. Following publication of the Flood estimation handbook, studies of flood risk are now using digital catchment data. Automated calculation methods allow estimates by standard methods to be mapped basin-wide, revealing anomalies at special sites such as river confluences. Such mapping presents collateral information of a new character. Can this be used to achieve flood-risk estimates that are coherent throughout a river basin?

  6. Intraclass correlation estimates for cancer screening outcomes: estimates and applications in the design of group-randomized cancer screening studies.

    PubMed

    Hade, Erinn M; Murray, David M; Pennell, Michael L; Rhoda, Dale; Paskett, Electra D; Champion, Victoria L; Crabtree, Benjamin F; Dietrich, Allen; Dignan, Mark B; Farmer, Melissa; Fenton, Joshua J; Flocke, Susan; Hiatt, Robert A; Hudson, Shawna V; Mitchell, Michael; Monahan, Patrick; Shariff-Marco, Salma; Slone, Stacey L; Stange, Kurt; Stewart, Susan L; Strickland, Pamela A Ohman

    2010-01-01

    Screening has become one of our best tools for early detection and prevention of cancer. The group-randomized trial is the most rigorous experimental design for evaluating multilevel interventions. However, identifying the proper sample size for a group-randomized trial requires reliable estimates of intraclass correlation (ICC) for screening outcomes, which are not available to researchers. We present crude and adjusted ICC estimates for cancer screening outcomes for various levels of aggregation (physician, clinic, and county) and provide an example of how these ICC estimates may be used in the design of a future trial. Investigators working in the area of cancer screening were contacted and asked to provide crude and adjusted ICC estimates using the analysis of variance method estimator. Of the 29 investigators identified, estimates were obtained from 10 investigators who had relevant data. ICC estimates were calculated from 13 different studies, with more than half of the studies collecting information on colorectal screening. In the majority of cases, ICC estimates could be adjusted for age, education, and other demographic characteristics, leading to a reduction in the ICC. ICC estimates varied considerably by cancer site and level of aggregation of the groups. Previously, only two articles had published ICCs for cancer screening outcomes. We have compiled more than 130 crude and adjusted ICC estimates covering breast, cervical, colon, and prostate screening and have detailed them by level of aggregation, screening measure, and study characteristics. We have also demonstrated their use in planning a future trial and the need to evaluate the proposed interval estimator for binary outcomes under conditions typically seen in GRTs.
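As an illustration of how such ICC estimates feed into trial planning, the standard design-effect calculation can be sketched. The formula is a generic textbook approximation for a two-arm comparison of proportions (not the paper's worked example), and all numbers are hypothetical.

```python
import math

def groups_per_arm(p1, p2, m, icc, z_alpha=1.96, z_power=0.84):
    """Approximate number of groups per arm for a two-arm group-randomized
    trial with a binary outcome: the usual two-proportion sample size,
    inflated by the design effect 1 + (m - 1) * ICC."""
    deff = 1.0 + (m - 1) * icc                    # variance inflation from clustering
    pbar = (p1 + p2) / 2.0
    n_individual = ((z_alpha + z_power) ** 2 * 2.0 * pbar * (1.0 - pbar)) / (p1 - p2) ** 2
    return math.ceil(n_individual * deff / m)

# hypothetical screening trial: 50% vs 60% screened, 100 patients per clinic,
# clinic-level ICC of 0.02 (the kind of value this paper tabulates)
g = groups_per_arm(0.50, 0.60, 100, 0.02)
```

With these inputs the design effect is 2.98, turning roughly 389 patients per arm into 12 clinics of 100 patients per arm, which shows why a reliable ICC is the linchpin of the calculation.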

  7. Comparison of several methods for estimating low speed stability derivatives

    NASA Technical Reports Server (NTRS)

    Fletcher, H. S.

    1971-01-01

    Methods presented in five different publications were used to estimate the low-speed stability derivatives of two unpowered airplane configurations. One configuration had unswept lifting surfaces; the other was the D-558-II swept-wing research airplane. The results of the computations were compared with each other, with existing wind-tunnel data, and with flight-test data for the D-558-II configuration to assess the relative merits of the methods for estimating derivatives. The results of the study indicated that, in general, for low subsonic speeds, no single publication appeared consistently better for estimating all derivatives.

  8. Estimation of metallic structure durability for a known law of stress variation

    NASA Astrophysics Data System (ADS)

    Mironov, V. I.; Lukashuk, O. A.; Ogorelkov, D. A.

    2017-12-01

    Overloading of machines working in transient operational modes leads to stresses in their load-bearing metallic structures that considerably exceed the endurance limit. Estimating fatigue damage by linear summation offers a more accurate prediction of machine durability. The paper presents an alternative approach to estimating the factors of the cyclic degradation of a material. Free damped vibrations of the bridge girder of an overhead crane, which follow a known logarithmic decrement, are studied. It is shown that taking cyclic degradation into account substantially decreases the estimated durability of a product.

  9. Recent results of nonlinear estimators applied to hereditary systems.

    NASA Technical Reports Server (NTRS)

    Schiess, J. R.; Roland, V. R.; Wells, W. R.

    1972-01-01

    An application of the extended Kalman filter to delayed systems to estimate the state and time delay is presented. Two nonlinear estimators are discussed and the results compared with those of the Kalman filter. For all the filters considered, the hereditary system was treated with the delay in the pure form and by using Pade approximations of the delay. A summary of the convergence properties of the filters studied is given. The results indicate that the linear filter applied to the delayed system performs inadequately while the nonlinear filters provide reasonable estimates of both the state and the parameters.

  10. Planetary spacecraft cost modeling utilizing labor estimating relationships

    NASA Technical Reports Server (NTRS)

    Williams, Raymond

    1990-01-01

    A basic computerized technique is presented for estimating labor hours and cost of unmanned planetary and lunar programs. The user-friendly methodology, designated Labor Estimating Relationship/Cost Estimating Relationship (LERCER), organizes the forecasting process according to vehicle subsystem levels. The level of input variables required by the model in predicting cost is consistent with pre-Phase A mission analysis. Twenty-one program categories were used in the modeling. To develop the model, numerous LER and CER studies were surveyed and modified when required. The results of the research, along with components of the LERCER program, are reported.

  11. Energy Measurement Studies for CO2 Measurement with a Coherent Doppler Lidar System

    NASA Technical Reports Server (NTRS)

    Beyon, Jeffrey Y.; Koch, Grady J.; Vanvalkenburg, Randal L.; Yu, Jirong; Singh, Upendra N.; Kavaya, Michael J.

    2010-01-01

    The accurate measurement of energy in the application of lidar systems for CO2 measurement is critical. Different techniques for estimating the energy of the online and offline pulses are investigated for post-processing of lidar returns. The cornerstone of these techniques is the accurate estimation of the spectrum of the lidar signal and background noise. Since the background noise is not ideal white Gaussian noise, a simple average noise-level estimate is not well suited to estimating the energy of the lidar signal and noise. A brief review of the methods is presented in this paper.
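One robust alternative to a simple average noise level can be sketched as follows: estimate the noise floor with the median of the spectrum (insensitive to a narrowband signal peak), then integrate the peak's energy above that floor. This is a generic illustration of the idea, not the authors' algorithm.

```python
import statistics

def peak_energy_above_noise(spectrum):
    """Estimate the energy of the strongest spectral peak above a robust
    (median-based) noise floor, rather than a mean noise level."""
    floor = statistics.median(spectrum)   # robust to the signal peak itself
    k = max(range(len(spectrum)), key=spectrum.__getitem__)
    # integrate contiguous bins around the peak that rise above the floor
    energy, i = 0.0, k
    while i >= 0 and spectrum[i] > floor:
        energy += spectrum[i] - floor
        i -= 1
    i = k + 1
    while i < len(spectrum) and spectrum[i] > floor:
        energy += spectrum[i] - floor
        i += 1
    return energy

# synthetic periodogram: flat noise at 1.0 with a small peak
spec = [1.0] * 20
spec[8], spec[9], spec[10] = 3.0, 6.0, 3.0
energy = peak_energy_above_noise(spec)
```

Because the median ignores the few peak bins, the floor stays at the noise level even when the signal is strong, which a mean-based estimate would not.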

  12. A note on variance estimation in random effects meta-regression.

    PubMed

    Sidik, Kurex; Jonkman, Jeffrey N

    2005-01-01

    For random effects meta-regression inference, variance estimation for the parameter estimates is discussed. Because estimated weights are used for meta-regression analysis in practice, the assumed or estimated covariance matrix used in meta-regression is not strictly correct, due to possible errors in estimating the weights. Therefore, this note investigates the use of a robust variance estimation approach for obtaining variances of the parameter estimates in random effects meta-regression inference. This method treats the assumed covariance matrix of the effect measure variables as a working covariance matrix. Using an example of meta-analysis data from clinical trials of a vaccine, the robust variance estimation approach is illustrated in comparison with two other methods of variance estimation. A simulation study is presented, comparing the three methods of variance estimation in terms of bias and coverage probability. We find that, despite the seeming suitability of the robust estimator for random effects meta-regression, the improved variance estimator of Knapp and Hartung (2003) yields the best performance among the three estimators, and thus may provide the best protection against errors in the estimated weights.
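The working-covariance idea can be sketched for a single-moderator meta-regression: fit by weighted least squares with the assumed inverse-variance weights, then compute a sandwich variance that is robust to errors in those weights. The 2x2 pure-Python algebra and the toy data below are illustrative; this is a generic sandwich estimator, not the exact estimator compared in the paper.

```python
def wls_meta_regression(y, x, w):
    """WLS fit of y ~ a + b*x with weights w, plus a sandwich (robust)
    variance for the slope that treats w as a working covariance."""
    n = len(y)
    X = [(1.0, xi) for xi in x]
    # normal equations: (X'WX) beta = X'Wy
    sxx = [[sum(w[i] * X[i][r] * X[i][c] for i in range(n)) for c in range(2)]
           for r in range(2)]
    sxy = [sum(w[i] * X[i][r] * y[i] for i in range(n)) for r in range(2)]
    det = sxx[0][0] * sxx[1][1] - sxx[0][1] * sxx[1][0]
    inv = [[sxx[1][1] / det, -sxx[0][1] / det],
           [-sxx[1][0] / det, sxx[0][0] / det]]
    beta = [inv[r][0] * sxy[0] + inv[r][1] * sxy[1] for r in range(2)]
    resid = [y[i] - beta[0] - beta[1] * x[i] for i in range(n)]
    # "meat" of the sandwich: weighted squared residuals
    meat = [[sum((w[i] ** 2) * (resid[i] ** 2) * X[i][r] * X[i][c]
                 for i in range(n)) for c in range(2)] for r in range(2)]
    # robust covariance = inv * meat * inv; report the slope variance
    im = [[sum(inv[r][k] * meat[k][c] for k in range(2)) for c in range(2)]
          for r in range(2)]
    cov = [[sum(im[r][k] * inv[k][c] for k in range(2)) for c in range(2)]
           for r in range(2)]
    return beta, cov[1][1]

# toy meta-analytic data: effect sizes, one moderator, inverse-variance weights
y = [0.2, 0.3, 0.5, 0.4, 0.6]
x = [0.0, 1.0, 2.0, 3.0, 4.0]
w = [10.0, 8.0, 12.0, 9.0, 11.0]
beta, var_slope = wls_meta_regression(y, x, w)
```

The point estimate is the ordinary WLS slope; only its variance changes, which is exactly the note's comparison of variance estimators.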

  13. Development and validation of a MRgHIFU non-invasive tissue acoustic property estimation technique.

    PubMed

    Johnson, Sara L; Dillon, Christopher; Odéen, Henrik; Parker, Dennis; Christensen, Douglas; Payne, Allison

    2016-11-01

    MR-guided high-intensity focussed ultrasound (MRgHIFU) non-invasive ablative surgeries have advanced into clinical trials for treating many pathologies and cancers. A remaining challenge of these surgeries is accurately planning and monitoring tissue heating in the face of patient-specific and dynamic acoustic properties of tissues. Currently, non-invasive measurements of acoustic properties have not been implemented in MRgHIFU treatment planning and monitoring procedures. This methods-driven study presents a technique using MR temperature imaging (MRTI) during low-temperature HIFU sonications to non-invasively estimate sample-specific acoustic absorption and speed of sound values in tissue-mimicking phantoms. Using measured thermal properties, specific absorption rate (SAR) patterns are calculated from the MRTI data and compared to simulated SAR patterns iteratively generated via the Hybrid Angular Spectrum (HAS) method. Once the error between the simulated and measured patterns is minimised, the estimated acoustic property values are compared to the true phantom values obtained via an independent technique. The estimated values are then used to simulate temperature profiles in the phantoms, and compared to experimental temperature profiles. This study demonstrates that trends in acoustic absorption and speed of sound can be non-invasively estimated with average errors of 21% and 1%, respectively. Additionally, temperature predictions using the estimated properties on average match within 1.2 °C of the experimental peak temperature rises in the phantoms. The positive results achieved in tissue-mimicking phantoms presented in this study indicate that this technique may be extended to in vivo applications, improving HIFU sonication temperature rise predictions and treatment assessment.

  14. Bayesian dose-response analysis for epidemiological studies with complex uncertainty in dose estimation.

    PubMed

    Kwon, Deukwoo; Hoffman, F Owen; Moroz, Brian E; Simon, Steven L

    2016-02-10

    Most conventional risk analysis methods rely on a single best estimate of exposure per person, which does not allow for adjustment for exposure-related uncertainty. Here, we propose a Bayesian model averaging method to properly quantify the relationship between radiation dose and disease outcomes by accounting for shared and unshared uncertainty in estimated dose. Our Bayesian risk analysis method utilizes multiple realizations of sets (vectors) of doses generated by a two-dimensional Monte Carlo simulation method that properly separates shared and unshared errors in dose estimation. The exposure model used in this work is taken from a study of the risk of thyroid nodules among a cohort of 2376 subjects who were exposed to fallout from nuclear testing in Kazakhstan. We assessed the performance of our method through an extensive series of simulations and comparisons against conventional regression risk analysis methods. When the estimated doses contain relatively small amounts of uncertainty, the Bayesian method using multiple a priori plausible draws of dose vectors gave similar results to the conventional regression-based methods of dose-response analysis. However, when large and complex mixtures of shared and unshared uncertainties are present, the Bayesian method using multiple dose vectors had significantly lower relative bias than conventional regression-based risk analysis methods and better coverage, that is, a markedly increased capability to include the true risk coefficient within the 95% credible interval of the Bayesian-based risk estimate. An evaluation of the dose-response using our method is presented for an epidemiological study of thyroid disease following radiation exposure. Copyright © 2015 John Wiley & Sons, Ltd.

  15. Investigations of potential bias in the estimation of lambda using Pradel's (1996) model for capture-recapture data

    USGS Publications Warehouse

    Hines, James E.; Nichols, James D.

    2002-01-01

    Pradel's (1996) temporal symmetry model, permitting direct estimation and modelling of the population growth rate λ_i, provides a potentially useful tool for the study of population dynamics using marked animals. Because of its recent publication date, the approach has not seen much use, and there have been virtually no investigations directed at the robustness of the resulting estimators. Here we consider several potential sources of bias, all motivated by specific uses of this estimation approach. We consider sampling situations in which the study area expands with time and present an analytic expression for the bias in λ_i. We next consider trap response in capture probabilities and heterogeneous capture probabilities and compute large-sample and simulation-based approximations of the resulting bias in λ_i. These approximations indicate that trap response is an especially important assumption violation that can produce substantial bias. Finally, we consider losses on capture and emphasize the importance of selecting the estimator for λ_i that is appropriate to the question being addressed. For studies based on only sighting and resighting data, Pradel's (1996) λ_i' is the appropriate estimator.

  16. Multielemental analyses of isomorphous Indian garnet gemstones by XRD and external PIXE techniques.

    PubMed

    Venkateswarulu, P; Srinivasa Rao, K; Kasipathi, C; Ramakrishna, Y

    2012-12-01

    Garnet gemstones were collected from parts of the Eastern Ghats geological formations of Andhra Pradesh, India, and their gemological properties were studied. Their chemistry cannot be studied directly because the stones are isomorphous mixtures, and no individual specimen exhibits an independent chemistry. Hence, the non-destructive instrumental methodology of the external PIXE technique was employed to determine their chemistry and identity. A 3 MeV proton beam was employed to excite the samples. In the present study the geochemical characteristics of the garnet gemstones were studied by proton-induced X-ray emission; by chemical content, the almandine variety of garnet was found to be the most abundant. The crystal structure and lattice parameters were estimated using X-ray diffraction (XRD), the trace and minor elements were estimated using the PIXE technique, and the major compositional elements were confirmed by XRD. The technique proved very useful in characterizing the garnet gemstones. The present work thus establishes the usefulness and versatility of the external-beam PIXE technique for geo-scientific research. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Development of advanced methods for analysis of experimental data in diffusion

    NASA Astrophysics Data System (ADS)

    Jaques, Alonso V.

    There are numerous experimental configurations and data analysis techniques for the characterization of diffusion phenomena. However, the mathematical methods for estimating diffusivities traditionally do not take into account the effects of experimental errors in the data, and often require smooth, noiseless data sets to perform the necessary analysis steps. The current methods used for data smoothing require strong assumptions which can introduce numerical "artifacts" into the data, affecting confidence in the estimated parameters. The Boltzmann-Matano method is used extensively in the determination of concentration - dependent diffusivities, D(C), in alloys. In the course of analyzing experimental data, numerical integrations and differentiations of the concentration profile are performed. These methods require smoothing of the data prior to analysis. We present here an approach to the Boltzmann-Matano method that is based on a regularization method to estimate a differentiation operation on the data, i.e., estimate the concentration gradient term, which is important in the analysis process for determining the diffusivity. This approach, therefore, has the potential to be less subjective, and in numerical simulations shows an increased accuracy in the estimated diffusion coefficients. We present a regression approach to estimate linear multicomponent diffusion coefficients that eliminates the need pre-treat or pre-condition the concentration profile. This approach fits the data to a functional form of the mathematical expression for the concentration profile, and allows us to determine the diffusivity matrix directly from the fitted parameters. Reformulation of the equation for the analytical solution is done in order to reduce the size of the problem and accelerate the convergence. 
The objective function for the regression can incorporate point estimations for error in the concentration, improving the statistical confidence in the estimated diffusivity matrix. Case studies are presented to demonstrate the reliability and the stability of the method. To the best of our knowledge there is no published analysis of the effects of experimental errors on the reliability of the estimates for the diffusivities. For the case of linear multicomponent diffusion, we analyze the effects of the instrument analytical spot size, positioning uncertainty, and concentration uncertainty on the resulting values of the diffusivities. These effects are studied using Monte Carlo method on simulated experimental data. Several useful scaling relationships were identified which allow more rigorous and quantitative estimates of the errors in the measured data, and are valuable for experimental design. To further analyze anomalous diffusion processes, where traditional diffusional transport equations do not hold, we explore the use of fractional calculus in analytically representing these processes is proposed. We use the fractional calculus approach for anomalous diffusion processes occurring through a finite plane sheet with one face held at a fixed concentration, the other held at zero, and the initial concentration within the sheet equal to zero. This problem is related to cases in nature where diffusion is enhanced relative to the classical process, and the order of differentiation is not necessarily a second--order differential equation. That is, differentiation is of fractional order alpha, where 1 ≤ alpha < 2. For alpha = 2, the presented solutions reduce to the classical second-order diffusion solution for the conditions studied. The solution obtained allows the analysis of permeation experiments. Frequently, hydrogen diffusion is analyzed using electrochemical permeation methods using the traditional, Fickian-based theory. 
Experimental evidence shows that the latter analytical approach is not always appropriate, because reported data show qualitative (and quantitative) deviations from its theoretical scaling predictions. Preliminary analysis of data shows better agreement with fractional diffusion analysis than with the traditional square-root scaling. Although there is a large body of work on the estimation of diffusivity from experimental data, reported studies typically present only the analytical description of the diffusivity, without quantifying scatter. However, because these studies do not consider effects produced by the measuring instrument, their direct applicability is limited. We propose alternatives to address these issues and evaluate their influence on the final diffusivity values.
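The regularized-differentiation idea described above can be sketched as follows. This is a minimal illustration using a Whittaker-style (second-difference) Tikhonov smoother followed by finite differencing; the penalty operator, the λ value, and the synthetic tanh-shaped profile are our assumptions, not the authors' exact formulation.

```python
import numpy as np

def regularized_gradient(x, y, lam=50.0):
    """Estimate dy/dx from noisy samples: Tikhonov (Whittaker) smoothing
    with a second-difference penalty, then central differencing.
    `lam` controls the smoothness of the fitted profile."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)      # second-difference operator
    y_smooth = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    return np.gradient(y_smooth, x)

# Synthetic concentration profile (tanh step) with additive noise
x = np.linspace(-1.0, 1.0, 201)
true_profile = 0.5 * (1.0 - np.tanh(5.0 * x))
rng = np.random.default_rng(0)
y = true_profile + rng.normal(0.0, 0.01, x.size)
g = regularized_gradient(x, y, lam=50.0)     # estimated concentration gradient
```

A direct finite difference of the raw data would be dominated by amplified noise; the regularized estimate recovers the gradient peak near the step, which is the quantity the Boltzmann-Matano analysis needs.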

  18. Estimation of the Operating Characteristics when the Test Information of the Old Test is not Constant II: Simple Sum Procedure of the Conditional P.D.F. Approach/Normal Approach Method Using Three Subtests of the Old Test. Research Report 80-4.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    The rationale behind the method of estimating the operating characteristics of discrete item responses when the test information of the Old Test is not constant was presented previously. In the present study, two subtests of the Old Test, i.e., Subtests 1 and 2, each of which has a different non-constant test information function, are used in…

  19. A Bayesian network model for predicting pregnancy after in vitro fertilization.

    PubMed

    Corani, G; Magli, C; Giusti, A; Gianaroli, L; Gambardella, L M

    2013-11-01

    We present a Bayesian network model for predicting the outcome of in vitro fertilization (IVF). The problem is characterized by a particular missingness process; we propose a simple but effective averaging approach which improves parameter estimates compared to the traditional MAP estimation. We present results with generated data and the analysis of a real data set. Moreover, we assess by means of a simulation study the effectiveness of the model in supporting the selection of the embryos to be transferred. © 2013 Elsevier Ltd. All rights reserved.

  20. Line transect estimation of population size: the exponential case with grouped data

    USGS Publications Warehouse

    Anderson, D.R.; Burnham, K.P.; Crain, B.R.

    1979-01-01

    Gates, Marshall, and Olson (1968) investigated the line transect method of estimating grouse population densities in the case where sighting probabilities are exponential. This work is followed by a simulation study in Gates (1969). A general overview of line transect analysis is presented by Burnham and Anderson (1976). These articles all deal with the ungrouped data case. In the present article, an analysis of line transect data is formulated under the Gates framework of exponential sighting probabilities and in the context of grouped data.
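Under the Gates framework of exponential sighting probabilities, the grouped-data fit can be sketched as a one-parameter multinomial MLE over distance bins, with density then obtained from the standard line-transect relation D = n·f(0)/(2L). The cut points, counts, grid search, and transect length below are illustrative assumptions; the cited articles derive the estimators more rigorously.

```python
import math

def cell_probs(lam, cuts):
    """Multinomial cell probabilities for a truncated exponential
    perpendicular-distance density with scale `lam`."""
    w = cuts[-1]
    norm = 1.0 - math.exp(-w / lam)
    return [(math.exp(-cuts[j] / lam) - math.exp(-cuts[j + 1] / lam)) / norm
            for j in range(len(cuts) - 1)]

def fit_lambda(counts, cuts):
    """Grid-search MLE of the exponential scale from grouped distances."""
    def loglik(lam):
        return sum(n * math.log(p) for n, p in zip(counts, cell_probs(lam, cuts)))
    grid = [0.05 * k for k in range(1, 2001)]   # coarse grid over lambda
    return max(grid, key=loglik)

def density_estimate(counts, cuts, L):
    """D-hat = n * f(0) / (2L) under the truncated exponential model."""
    n = sum(counts)
    lam = fit_lambda(counts, cuts)
    f0 = (1.0 / lam) / (1.0 - math.exp(-cuts[-1] / lam))
    return n * f0 / (2.0 * L), lam

# Hypothetical survey: 4 distance bins (m), grouped counts, 5000 m of transect
cuts = [0.0, 10.0, 20.0, 30.0, 40.0]
counts = [40, 25, 15, 10]
D, lam_hat = density_estimate(counts, cuts, L=5000.0)
```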

  1. On the applicability of integrated circuit technology to general aviation orientation estimation

    NASA Technical Reports Server (NTRS)

    Debra, D. B.; Tashker, M. G.

    1976-01-01

    The criteria for the significant value of the panel instruments used in general aviation were examined, and kinematic equations were added for comparison. An instrument survey was performed to establish the present state of the art in linear and angular accelerometers, pressure transducers, and magnetometers. A preliminary evaluation was made of the computers available for data evaluation and estimator mechanization. The mathematical model of a light twin aircraft employed in the evaluation was documented, and the results of the sensor survey and the design studies were presented.

  2. Maximum likelihood estimation for Cox's regression model under nested case-control sampling.

    PubMed

    Scheike, Thomas H; Juul, Anders

    2004-04-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.

  3. Freezability genetics in rabbit semen.

    PubMed

    Lavara, R; Mocé, E; Baselga, M; Vicente, J S

    2017-10-15

    The aim of this study was to estimate the heritability of semen freezability and the genetic correlation between frozen-thawed sperm traits and growth rate in a paternal rabbit line. Estimated heritabilities showed that frozen-thawed semen traits are heritable (ranging between 0.08 and 0.15). The estimated heritability of Live-FT (percentage of viable sperm after freezing) is the highest, suggesting the possibility of effective selection. The genetic correlations suggest that daily weight gain (DG) is negatively correlated with sperm freezability, but no firm conclusions could be drawn due to the wide HPD95% intervals. More data should be included in order to obtain better accuracy for these genetic correlation estimates. If the results obtained in the present study were confirmed, it would imply that selection for DG could alter sperm cell membranes or seminal plasma composition, both components related to sperm cryoresistance. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Criterion-Related Validity of Sit-and-Reach Tests for Estimating Hamstring and Lumbar Extensibility: a Meta-Analysis

    PubMed Central

    Mayorga-Vega, Daniel; Merino-Marban, Rafael; Viciana, Jesús

    2014-01-01

    The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose, relevant studies were searched from seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson's zero-order correlation coefficients (r) between sit-and-reach tests and hamstring and/or lumbar extensibility criterion measures. Then, from the included studies, the Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of sit-and-reach tests. First, the corrected correlation mean (rp), unaffected by statistical artefacts (i.e., sampling error and measurement error), was calculated separately for each sit-and-reach test. Subsequently, three potential moderator variables (sex of participants, age of participants, and level of hamstring extensibility) were examined by a partially hierarchical analysis. Of the 34 studies included in the present meta-analysis, 99 correlation values across eight sit-and-reach tests and 51 across seven sit-and-reach tests were retrieved for hamstring and lumbar extensibility, respectively. The overall results showed that all sit-and-reach tests had a moderate mean criterion-related validity for estimating hamstring extensibility (rp = 0.46-0.67), but a low mean validity for estimating lumbar extensibility (rp = 0.16-0.35). Generally, females, adults, and participants with high levels of hamstring extensibility tended to have greater mean values of criterion-related validity for estimating hamstring extensibility. When the use of angular tests is limited, such as in a school setting or in large-scale studies, scientists and practitioners could use the sit-and-reach tests as a useful alternative for estimating hamstring extensibility, but not lumbar extensibility.
Key Points: Overall, sit-and-reach tests have a moderate mean criterion-related validity for estimating hamstring extensibility, but a low mean validity for estimating lumbar extensibility. Among all the sit-and-reach test protocols, the Classic sit-and-reach test seems to be the best option for estimating hamstring extensibility. End scores (e.g., the Classic sit-and-reach test) are a better indicator of hamstring extensibility than the modifications that incorporate fingers-to-box distance (e.g., the Modified sit-and-reach test). When angular tests such as straight leg raise or knee extension tests cannot be used, sit-and-reach tests seem to be a useful field-test alternative for estimating hamstring extensibility, but not lumbar extensibility. PMID:24570599
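A bare-bones sketch of the Hunter-Schmidt aggregation step (sample-size-weighted mean correlation and a sampling-error-corrected variance) may clarify the "corrected correlation mean" mentioned above. The per-study correlations and sample sizes are hypothetical, and the full approach also corrects for measurement error, which is omitted here.

```python
def hunter_schmidt(rs, ns):
    """Bare-bones Hunter-Schmidt aggregation: sample-size-weighted mean
    correlation, plus observed variance minus expected sampling-error
    variance (no measurement-error artifact corrections)."""
    N = sum(ns)
    r_bar = sum(r * n for r, n in zip(rs, ns)) / N
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / N
    k = len(rs)
    var_err = (1 - r_bar ** 2) ** 2 * k / N   # expected sampling-error variance
    var_rho = max(var_obs - var_err, 0.0)     # residual "true" variance
    return r_bar, var_rho

# Hypothetical correlations between a sit-and-reach test and a criterion
rs = [0.55, 0.62, 0.48, 0.70]
ns = [40, 60, 55, 45]
r_bar, var_rho = hunter_schmidt(rs, ns)
```

When the residual variance is near zero, the between-study spread is attributable to sampling error alone, which is the rationale for then probing moderator variables only where variance remains.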

  5. Formative evaluation of a mobile liquid portion size estimation interface for people with varying literacy skills.

    PubMed

    Chaudry, Beenish Moalla; Connelly, Kay; Siek, Katie A; Welch, Janet L

    2013-12-01

    Chronically ill people, especially those with low literacy skills, often have difficulty estimating portion sizes of liquids to help them stay within their recommended fluid limits. There is a plethora of mobile applications that can help people monitor their nutritional intake, but unfortunately these applications require the user to have high literacy and numeracy skills for portion size recording. In this paper, we present two studies in which the low- and high-fidelity versions of a portion size estimation interface, designed using the cognitive strategies adults employ for portion size estimation during diet recall studies, were evaluated by a chronically ill population with varying literacy skills. The low-fidelity interface was evaluated by ten patients, who were all able to accurately estimate portion sizes of various liquids with the interface. Eighteen participants did an in situ evaluation of the high-fidelity version, incorporated in a diet and fluid monitoring mobile application, for 6 weeks. Although the accuracy of the estimates could not be confirmed in the second study, the participants who actively interacted with the interface showed better health outcomes by the end of the study. Based on these findings, we provide recommendations for designing the next iteration of an accurate and low-literacy-accessible liquid portion size estimation mobile interface.

  6. Verification bias: an under-recognized source of error in assessing the efficacy of MRI of the menisci.

    PubMed

    Richardson, Michael L; Petscavage, Jonelle M

    2011-11-01

    The sensitivity and specificity of magnetic resonance imaging (MRI) for diagnosis of meniscal tears has been studied extensively, with tears usually verified by surgery. However, surgically unverified cases are often not considered in these studies, leading to verification bias, which can falsely increase the sensitivity and decrease the specificity estimates. Our study suggests that such bias may be very common in the meniscal MRI literature, and illustrates techniques to detect and correct for such bias. PubMed was searched for articles estimating sensitivity and specificity of MRI for meniscal tears. These were assessed for verification bias, deemed potentially present if a study included any patients whose MRI findings were not surgically verified. Retrospective global sensitivity analysis (GSA) was performed when possible. Thirty-nine of the 314 studies retrieved from PubMed specifically dealt with meniscal tears. All 39 included unverified patients, and hence, potential verification bias. Only seven articles included sufficient information to perform GSA. Of these, one showed definite verification bias, two showed no bias, and four others showed bias within certain ranges of disease prevalence. Only 9 of 39 acknowledged the possibility of verification bias. Verification bias is underrecognized and potentially common in published estimates of the sensitivity and specificity of MRI for the diagnosis of meniscal tears. When possible, it should be avoided by proper study design. If unavoidable, it should be acknowledged. Investigators should tabulate unverified as well as verified data. Finally, verification bias should be estimated; if present, corrected estimates of sensitivity and specificity should be used. Our online web-based calculator makes this process relatively easy. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
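One standard correction for verification bias, when verification is assumed to depend only on the test result, is the Begg-Greenes adjustment: estimate the disease probabilities from the verified subset and re-weight by the full test-result totals. The abstract does not name the correction the authors use, so the method choice and all counts below are purely illustrative.

```python
def begg_greenes(tp, fp, fn, tn, n_pos_total, n_neg_total):
    """Correct sensitivity/specificity for verification bias, assuming
    verification depends only on the test (MRI) result.
    tp, fp, fn, tn: counts among surgically verified patients;
    n_pos_total / n_neg_total: all patients with positive / negative MRI."""
    p_dis_pos = tp / (tp + fp)    # P(tear | MRI+), from verified subset
    p_dis_neg = fn / (fn + tn)    # P(tear | MRI-), from verified subset
    se = (p_dis_pos * n_pos_total /
          (p_dis_pos * n_pos_total + p_dis_neg * n_neg_total))
    sp = ((1 - p_dis_neg) * n_neg_total /
          ((1 - p_dis_neg) * n_neg_total + (1 - p_dis_pos) * n_pos_total))
    return se, sp

# Hypothetical study: 200 positive MRIs (180 verified), 300 negative (60 verified)
se, sp = begg_greenes(tp=150, fp=30, fn=6, tn=54,
                      n_pos_total=200, n_neg_total=300)
```

Computing naive sensitivity from the verified subset alone (150/156 ≈ 0.96) and comparing with the corrected value shows the direction of the bias the abstract describes: unverified negatives inflate apparent sensitivity.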

  7. Breeding chorus indices are weakly related to estimated abundance of boreal chorus frogs

    Treesearch

    Paul Stephen Corn; Erin Muths; Amanda M. Kissel; Rick D. Scherer

    2011-01-01

    Call surveys used to monitor breeding choruses of anuran amphibians generate index values that are frequently used to represent the number of male frogs present, but few studies have quantified this relationship. We compared abundance of male Boreal Chorus Frogs (Pseudacris maculata), estimated using capture-recapture methods in two populations in Colorado, to call...

  8. Cost Study of Educational Media Systems and Their Equipment Components. Volume I, Guidelines for Determining Costs of Media Systems. Final Report.

    ERIC Educational Resources Information Center

    General Learning Corp., Washington, DC.

    Objective cost estimates for planning and operating systems should be made after an assessment of administrative factors (school environment) and instructional factors (learning objectives, type of presentation). Specification of appropriate sensory stimuli and the design of alternative systems also precede cost estimations for production,…

  9. Estimating past diameters of Douglas-fir trees.

    Treesearch

    Floyd A. Johnson

    1955-01-01

    Estimates of breast-height diameter outside bark for trees as of some previous date are required in certain kinds of forest growth studies. These past diameters may be found by subtracting total diameter growth from known present diameters, where total diameter growth is the sum of wood growth and bark growth. Wood growth is readily obtained by...

  10. Student Estimates of Public Speaking Competency: The Meaning Extraction Helper and Video Self-Evaluation

    ERIC Educational Resources Information Center

    LeFebvre, Luke; LeFebvre, Leah; Blackburn, Kate; Boyd, Ryan

    2015-01-01

    Video continues to be used in many basic communication courses as a way for students to self-evaluate speechmaking. In this study, students (N = 71) presented speeches, viewed the video recordings, and produced self-generated feedback. Comparing students' self-estimated grades from the self-evaluation against earned grades resulted in composite…

  11. Markov Chain Monte Carlo Estimation of Item Parameters for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    de la Torre, Jimmy; Stark, Stephen; Chernyshenko, Oleksandr S.

    2006-01-01

    The authors present a Markov Chain Monte Carlo (MCMC) parameter estimation procedure for the generalized graded unfolding model (GGUM) and compare it to the marginal maximum likelihood (MML) approach implemented in the GGUM2000 computer program, using simulated and real personality data. In the simulation study, test length, number of response…

  12. EEG Estimates of Cognitive Workload and Engagement Predict Math Problem Solving Outcomes

    ERIC Educational Resources Information Center

    Beal, Carole R.; Galan, Federico Cirett

    2012-01-01

    In the present study, the authors focused on the use of electroencephalography (EEG) data about cognitive workload and sustained attention to predict math problem solving outcomes. EEG data were recorded as students solved a series of easy and difficult math problems. Sequences of attention and cognitive workload estimates derived from the EEG…

  13. Three Methods of Estimating a Model of Group Effects: A Comparison with Reference to School Effect Studies.

    ERIC Educational Resources Information Center

    Igra, Amnon

    1980-01-01

    Three methods of estimating a model of school effects are compared: ordinary least squares; an approach based on the analysis of covariance; and a residualized input-output approach. Results are presented using a matrix algebra formulation, and advantages of the first two methods are considered. (Author/GK)

  14. A Regression Study of Demand, Cost and Pricing Public Library Circulation Services.

    ERIC Educational Resources Information Center

    Stratton, Peter J.

    This paper examines three aspects of the public library's circulation service: (1) a demand function for the service is estimated; (2) a long-run unit circulation cost curve is developed; and (3) using the economist's notion of "efficiency," a general model for the pricing of the circulation service is presented. The estimated demand…

  15. Developing an Efficient Computational Method that Estimates the Ability of Students in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2012-01-01

    This paper presents a computational method that can efficiently estimate the ability of students from the log files of a Web-based learning environment capturing their problem solving processes. The computational method developed in this study approximates the posterior distribution of the student's ability obtained from the conventional Bayes…

  16. Performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data.

    PubMed

    Yelland, Lisa N; Salter, Amy B; Ryan, Philip

    2011-10-15

    Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
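A minimal sketch of the modified Poisson idea follows: log-link Poisson IRLS on a binary outcome (so the exponentiated coefficient is a relative risk) with a cluster-robust sandwich variance. This corresponds to an independence working correlation, a simple special case of the GEE formulation used in the article; the simulated data and all parameter values are assumptions for illustration.

```python
import numpy as np

def modified_poisson(X, y, cluster, n_iter=50):
    """Log-link Poisson regression on binary y (relative-risk scale),
    with a cluster-robust sandwich variance estimate."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):                  # IRLS / Fisher scoring
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu         # working response
        beta = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    mu = np.exp(X @ beta)
    A = X.T @ (mu[:, None] * X)              # "bread" of the sandwich
    B = np.zeros((p, p))                     # "meat": sum over clusters
    for g in np.unique(cluster):
        idx = cluster == g
        s = X[idx].T @ (y[idx] - mu[idx])
        B += np.outer(s, s)
    Ainv = np.linalg.inv(A)
    cov = Ainv @ B @ Ainv
    return beta, np.sqrt(np.diag(cov))

# Simulated clustered binary data with a treatment effect (true RR = e^0.5)
rng = np.random.default_rng(1)
clusters = np.repeat(np.arange(40), 10)
treat = rng.integers(0, 2, 400)
u = np.repeat(rng.normal(0.0, 0.2, 40), 10)            # cluster effect
p_event = np.clip(0.2 * np.exp(0.5 * treat + u), 0.0, 0.95)
y = rng.binomial(1, p_event)
X = np.column_stack([np.ones(400), treat])
beta, se = modified_poisson(X, y, clusters)
rr = float(np.exp(beta[1]))                            # estimated relative risk
```

Unlike log binomial regression, this fit cannot fail to converge for fitted probabilities near 1, which is the practical advantage the abstract highlights.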

  17. Estimation for bilinear stochastic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.; Marcus, S. I.

    1974-01-01

    Three techniques for the solution of bilinear estimation problems are presented. First, finite-dimensional optimal nonlinear estimators are presented for certain bilinear systems evolving on solvable and nilpotent Lie groups. Then the use of harmonic analysis for estimation problems evolving on spheres and other compact manifolds is investigated. Finally, an approximate estimation technique utilizing cumulants is discussed.

  18. Simplification of an MCNP model designed for dose rate estimation

    NASA Astrophysics Data System (ADS)

    Laptev, Alexander; Perry, Robert

    2017-09-01

    A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.

  19. Interval Estimation of Seismic Hazard Parameters

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2017-03-01

    The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation, based on smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real-dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product exceeds 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
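The two hazard functions named above have simple point estimates under the Poisson occurrence model with an (unbounded) Gutenberg-Richter magnitude distribution, which is what the interval procedure wraps uncertainty around. The rate, b-value, and magnitude threshold below are hypothetical, and the paper's actual contribution, the interval estimation, is not reproduced here.

```python
import math

def gr_cdf(m, m0, b):
    """Unbounded Gutenberg-Richter magnitude CDF above threshold m0."""
    return 1.0 - 10.0 ** (-b * (m - m0))

def exceedance_prob(m, T, rate, m0, b):
    """P(at least one event with magnitude >= m within time T),
    assuming Poisson occurrence with mean activity rate `rate` above m0."""
    lam_m = rate * (1.0 - gr_cdf(m, m0, b))  # rate of events with magnitude >= m
    return 1.0 - math.exp(-lam_m * T)

def mean_return_period(m, rate, m0, b):
    return 1.0 / (rate * (1.0 - gr_cdf(m, m0, b)))

# Hypothetical catalogue: 5 events/year above m0 = 2.0, b-value = 1.0
R = exceedance_prob(m=4.0, T=10.0, rate=5.0, m0=2.0, b=1.0)
Tr = mean_return_period(m=4.0, rate=5.0, m0=2.0, b=1.0)
```

Interval estimates would then propagate the sampling distributions of `rate` and of the fitted CDF through these same two formulas, e.g., by the bootstrap schemes the abstract describes.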

  20. System technology analysis of aeroassisted orbital transfer vehicles: Moderate lift/drag (0.75-1.5). Volume 3: Cost estimates and work breakdown structure/dictionary, phase 1 and 2

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Technology payoffs of representative ground based (Phase 1) and space based (Phase 2) mid lift/drag ratio aeroassisted orbit transfer vehicles (AOTV) were assessed and prioritized. A narrative summary of the cost estimates and work breakdown structure/dictionary for both study phases is presented. Costs were estimated using the Grumman Space Programs Algorithm for Cost Estimating (SPACE) computer program and results are given for four AOTV configurations. The work breakdown structure follows the standard of the joint government/industry Space Systems Cost Analysis Group (SSCAG). A table is provided which shows cost estimates for each work breakdown structure element.

  1. Overview of Ongoing NRMRL GI Research

    EPA Science Inventory

    This presentation is an overview of ongoing NRMRL Green Infrastructure research and addresses the question: What do we need to know to present a cogent estimate of the value of Green Infrastructure? Discussions included are: stormwater well study, rain gardens and permeable su...

  2. Measurement of acoustic velocity components in a turbulent flow using LDV and high-repetition rate PIV

    NASA Astrophysics Data System (ADS)

    Léon, Olivier; Piot, Estelle; Sebbane, Delphine; Simon, Frank

    2017-06-01

    The present study provides theoretical details and experimental validation results for the approach proposed by Minotti et al. (Aerosp Sci Technol 12(5):398-407, 2008) for measuring the amplitudes and phases of acoustic velocity components (AVC), i.e., the waveform parameters of each velocity component induced by an acoustic wave, in fully turbulent duct flows carrying multi-tone acoustic waves. Theoretical results support that the proposed turbulence rejection method, based on the estimation of cross power spectra between velocity measurements and a reference signal such as a wall pressure measurement, provides asymptotically efficient estimators with respect to the number of samples. Furthermore, it is shown that the estimator uncertainties can be simply estimated, accounting for the characteristics of the measured flow turbulence spectra. Two laser-based measurement campaigns were conducted in order to validate the acoustic velocity estimation approach and the derived uncertainty estimates. While in previous studies estimates were obtained using laser Doppler velocimetry (LDV), it is demonstrated that high-repetition-rate particle image velocimetry (PIV) can also be successfully employed. The two measurement techniques provide very similar acoustic velocity amplitude and phase estimates for the cases investigated, which are of practical interest for acoustic liner studies. In a broader sense, this approach may be beneficial for non-intrusive sound emission studies in wind tunnel testing.
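The cross-power-spectrum turbulence rejection can be sketched as follows: with a unit-amplitude reference at the tone frequency, the segment-averaged cross spectrum between velocity and reference yields the acoustic amplitude and phase, while broadband turbulence averages out. The signal model, tone frequency, and segment count are our illustrative assumptions, not the campaign's parameters.

```python
import numpy as np

fs, f0, T = 5000.0, 200.0, 10.0          # sample rate (Hz), tone (Hz), duration (s)
t = np.arange(0.0, T, 1.0 / fs)
ref = np.cos(2 * np.pi * f0 * t)         # unit-amplitude reference (e.g. wall pressure)
rng = np.random.default_rng(2)
turb = rng.normal(0.0, 1.0, t.size)      # broadband "turbulence", much stronger than the tone
u = 0.3 * np.cos(2 * np.pi * f0 * t - 0.8) + turb   # velocity: buried tone + noise

def tone_estimate(u, ref, fs, f0, nseg=50):
    """Segment-averaged cross-spectrum estimate of the acoustic velocity
    amplitude and phase at f0, relative to the unit-amplitude reference."""
    N = u.size // nseg
    k = int(round(f0 * N / fs))          # FFT bin of the tone
    Sur, Srr = 0j, 0.0
    for i in range(nseg):
        U = np.fft.rfft(u[i * N:(i + 1) * N])
        R = np.fft.rfft(ref[i * N:(i + 1) * N])
        Sur += U[k] * np.conj(R[k])      # cross spectrum at the tone
        Srr += abs(R[k]) ** 2            # reference auto spectrum
    H = Sur / Srr                        # transfer function velocity/reference
    return abs(H), np.angle(H)

amp, phase = tone_estimate(u, ref, fs, f0)   # expect roughly 0.3 and -0.8 rad
```

Averaging over more segments shrinks the estimator variance in proportion to the turbulence spectral level at the tone, which is the asymptotic-efficiency property the abstract refers to.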

  3. The structural identifiability and parameter estimation of a multispecies model for the transmission of mastitis in dairy cows with postmilking teat disinfection.

    PubMed

    White, L J; Evans, N D; Lam, T J G M; Schukken, Y H; Medley, G F; Godfrey, K R; Chappell, M J

    2002-01-01

    A mathematical model for the transmission of two interacting classes of mastitis causing bacterial pathogens in a herd of dairy cows is presented and applied to a specific data set. The data were derived from a field trial of a specific measure used in the control of these pathogens, where half the individuals were subjected to the control and in the others the treatment was discontinued. The resultant mathematical model (eight non-linear simultaneous ordinary differential equations) therefore incorporates heterogeneity in the host as well as the infectious agent and consequently the effects of control are intrinsic in the model structure. A structural identifiability analysis of the model is presented demonstrating that the scope of the novel method used allows application to high order non-linear systems. The results of a simultaneous estimation of six unknown system parameters are presented. Previous work has only estimated a subset of these either simultaneously or individually. Therefore not only are new estimates provided for the parameters relating to the transmission and control of the classes of pathogens under study, but also information about the relationships between them. We exploit the close link between mathematical modelling, structural identifiability analysis, and parameter estimation to obtain biological insights into the system modelled.

  4. Robust Portfolio Optimization Using Pseudodistances.

    PubMed

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.

  5. Robust Portfolio Optimization Using Pseudodistances

    PubMed Central

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948

  6. Aggregation of estimated numbers of undiscovered deposits: an R-script with an example from the Chu Sarysu Basin, Kazakhstan: Chapter B in Global mineral resource assessment

    USGS Publications Warehouse

    Schuenemeyer, John H.; Zientek, Michael L.; Box, Stephen E.

    2011-01-01

    Mineral resource assessments completed by the U.S. Geological Survey during the past three decades express geologically based estimates of numbers of undiscovered mineral deposits as probability distributions. Numbers of undiscovered deposits of a given type are estimated in geologically defined regions. Using Monte Carlo simulations, these undiscovered deposit estimates are combined with tonnage and grade models to derive a probability distribution describing amounts of commodities and rock that could be present in undiscovered deposits within a study area. In some situations, it is desirable to aggregate the assessment results from several study areas. This report provides a script developed in open-source statistical software, R, that aggregates undiscovered deposit estimates of a given type, assuming independence, total dependence, or some degree of correlation among aggregated areas, given a user-specified correlation matrix.
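Although the report's script is written in R, the aggregation logic under the two limiting dependence assumptions can be sketched in Python: independent aggregation sums raw Monte Carlo draws, while total dependence sums draws at matched quantiles (comonotonicity). The Poisson deposit-number distributions below are stand-ins for the assessment's elicited distributions, not real assessment data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
# Hypothetical per-area undiscovered-deposit-number distributions
area1 = rng.poisson(3.0, n)
area2 = rng.poisson(5.0, n)

# Independence: add raw draws from each study area
indep = area1 + area2
# Total dependence: add draws at the same quantile (sorted = comonotonic)
dep = np.sort(area1) + np.sort(area2)

mean_i, mean_d = indep.mean(), dep.mean()
var_i, var_d = indep.var(), dep.var()
```

The aggregate mean is identical under both assumptions, but total dependence widens the distribution (larger variance), which is why the choice of correlation structure matters when combining study areas; intermediate correlations interpolate between the two.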

  7. Intra-class correlation estimates for assessment of vitamin A intake in children.

    PubMed

    Agarwal, Girdhar G; Awasthi, Shally; Walter, Stephen D

    2005-03-01

    In many community-based surveys, multi-level sampling is inherent in the design. In designing these studies, especially when calculating the appropriate sample size, investigators need good estimates of the intra-class correlation coefficient (ICC), along with the cluster size, to adjust for variance inflation due to clustering at each level. The present study used data on the assessment of clinical vitamin A deficiency and intake of vitamin A-rich food in children in a district in India. For the survey, 16 households were sampled from 200 villages nested within eight randomly-selected blocks of the district. ICCs and components of variance were estimated from a three-level hierarchical random-effects analysis of variance model. Estimates of ICCs and variance components were obtained at the village and block levels. Between-cluster variation was evident at each level of clustering. ICCs were inversely related to cluster size, but the design effect could be substantial for large clusters. At the block level, most ICC estimates were below 0.07. At the village level, many ICC estimates ranged from 0.014 to 0.45. These estimates may provide useful information for the design of epidemiological studies in which the sampled (or allocated) units range in size from households to large administrative zones.
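For a single level of clustering with equal cluster sizes, the one-way ANOVA ICC estimator can be sketched as below; the village data are fabricated for illustration, and the study's actual three-level hierarchical model is more elaborate.

```python
def icc_oneway(groups):
    """One-way ANOVA intraclass correlation for equal-sized clusters:
    ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), with k the cluster size."""
    k = len(groups[0])                     # observations per cluster
    g = len(groups)                        # number of clusters
    grand = sum(sum(c) for c in groups) / (g * k)
    means = [sum(c) / k for c in groups]
    msb = k * sum((m - grand) ** 2 for m in means) / (g - 1)          # between
    msw = sum((x - m) ** 2
              for c, m in zip(groups, means) for x in c) / (g * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical vitamin A intake scores: 4 villages x 5 children
villages = [[3, 4, 3, 5, 4], [6, 7, 6, 5, 7], [2, 3, 2, 2, 3], [5, 5, 6, 4, 5]]
icc = icc_oneway(villages)
```

The resulting ICC then feeds the design effect 1 + (k - 1)·ICC used to inflate sample sizes for clustered surveys.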

  8. Study of the Phototransference in GR-200 Dosimetric Material and its Convenience for Dose Re-estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baly, L.; Otazo, M. R.; Molina, D.

    2006-09-08

A study of the phototransference of charges from deep to dosimetric traps in GR-200 material is presented, and its convenience for dose re-estimation in the dose range between 2 and 100 mSv is also analyzed. The recovering coefficient (RC), defined as the ratio between the phototransferred thermoluminescence (PTTL) and the original thermoluminescence (TL) of the dosimetric trap, was used to evaluate the ratio of phototransferred charges from deep traps to the original charges in the dosimetric traps. The results show the convenience of this method for dose re-estimation for this material in the selected range of doses.
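The recovering coefficient is a simple ratio; a toy sketch of how it supports re-estimation (all readout values are hypothetical, and linearity over the 2-100 mSv range is assumed):

```python
def recovering_coefficient(pttl, tl):
    """RC = phototransferred TL / original TL of the dosimetric trap."""
    return pttl / tl

# Calibrate RC where both the original TL and the PTTL are known.
rc = recovering_coefficient(pttl=12.0, tl=100.0)   # RC = 0.12

# Later, a dosimeter whose original TL signal is gone still holds charge in
# deep traps; dividing its PTTL readout by RC recovers the original TL level.
tl_equivalent = 6.0 / rc
```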

  9. Statistical study of generalized nonlinear phase step estimation methods in phase-shifting interferometry.

    PubMed

    Langoju, Rajesh; Patil, Abhijit; Rastogi, Pramod

    2007-11-20

Signal processing methods based on maximum-likelihood theory, discrete chirp Fourier transform, and spectral estimation methods have enabled accurate measurement of phase in phase-shifting interferometry in the presence of nonlinear response of the piezoelectric transducer to the applied voltage. We present the statistical study of these generalized nonlinear phase step estimation methods to identify the best method by deriving the Cramér-Rao bound. We also address important aspects of these methods for implementation in practical applications and compare the performance of the best-identified method with other benchmarking algorithms in the presence of harmonics and noise.

  10. Estimating subcatchment runoff coefficients using weather radar and a downstream runoff sensor.

    PubMed

    Ahm, Malte; Thorndahl, Søren; Rasmussen, Michael R; Bassø, Lene

    2013-01-01

    This paper presents a method for estimating runoff coefficients of urban drainage subcatchments based on a combination of high resolution weather radar data and flow measurements from a downstream runoff sensor. By utilising the spatial variability of the precipitation it is possible to estimate the runoff coefficients of the separate subcatchments. The method is demonstrated through a case study of an urban drainage catchment (678 ha) located in the city of Aarhus, Denmark. The study has proven that it is possible to use corresponding measurements of the relative rainfall distribution over the catchment and downstream runoff measurements to identify the runoff coefficients at subcatchment level.
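The identification idea can be sketched as a least-squares problem: downstream flow is an area-weighted sum of subcatchment rainfall, so spatial rainfall variability makes the runoff coefficients identifiable. All values below are hypothetical except the 678 ha total catchment area, and measurement noise is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
areas = np.array([120.0, 250.0, 308.0])       # subcatchment areas, ha (678 total)
c_true = np.array([0.60, 0.30, 0.45])         # runoff coefficients to recover
rain = rng.uniform(0.0, 10.0, size=(200, 3))  # radar rainfall per time step

# Downstream runoff sensor signal: area- and coefficient-weighted rainfall.
flow = (rain * areas * c_true).sum(axis=1)

# Least squares over the regressors (area-weighted rainfall) recovers C_i.
X = rain * areas
c_hat, *_ = np.linalg.lstsq(X, flow, rcond=None)
```

With spatially uniform rainfall the columns of `X` become proportional and only the lumped coefficient is identifiable, which is why the method leans on the radar's spatial resolution.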

  11. Endogenous pain modulation in chronic orofacial pain: a systematic review and meta-analysis.

    PubMed

    Moana-Filho, Estephan J; Herrero Babiloni, Alberto; Theis-Mahon, Nicole R

    2018-06-15

Abnormal endogenous pain modulation was suggested as a potential mechanism for chronic pain, i.e., increased pain facilitation and/or impaired pain inhibition underlying symptom manifestation. Endogenous pain modulation function can be tested using psychophysical methods such as temporal summation of pain (TSP) and conditioned pain modulation (CPM), which assess pain facilitation and inhibition, respectively. Several studies have investigated endogenous pain modulation function in patients with nonparoxysmal orofacial pain (OFP) and reported mixed results. This study aimed to provide, through a qualitative and quantitative synthesis of the available literature, overall estimates for TSP/CPM responses in patients with OFP relative to controls. MEDLINE, Embase, and the Cochrane databases were searched, and references were screened independently by 2 raters. Twenty-six studies were included for qualitative review, and 22 studies were included for meta-analysis. Traditional meta-analysis and robust variance estimation were used to synthesize overall estimates for standardized mean difference. The overall standardized estimate for TSP was 0.30 (95% confidence interval: 0.11-0.49; P = 0.002), with moderate between-study heterogeneity (Q [df = 17] = 41.8, P = 0.001; I² = 70.2%). Conditioned pain modulation's estimated overall effect size was large but above the significance threshold (estimate = 1.36; 95% confidence interval: -0.09 to 2.81; P = 0.066), with very large heterogeneity (Q [df = 8] = 108.3, P < 0.001; I² = 98.0%). Sensitivity analyses did not affect the overall estimate for TSP; for CPM, the overall estimate became significant if specific random-effect models were used or if the most influential study was removed. Publication bias was not present for TSP studies, whereas it substantially influenced CPM's overall estimate. These results suggest increased pain facilitation and a trend toward impaired pain inhibition in patients with nonparoxysmal OFP.
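The random-effects pooling the authors describe can be sketched with the standard DerSimonian-Laird estimator (the paper additionally uses robust variance estimation; the per-study effect sizes and variances below are hypothetical):

```python
import numpy as np

yi = np.array([0.20, 0.50, 0.10, 0.40, 0.35])    # per-study standardized mean diffs
vi = np.array([0.02, 0.05, 0.03, 0.04, 0.02])    # their sampling variances

wi = 1.0 / vi
fixed = (wi * yi).sum() / wi.sum()               # fixed-effect pooled mean
q = (wi * (yi - fixed) ** 2).sum()               # Cochran's Q
df = len(yi) - 1
c = wi.sum() - (wi ** 2).sum() / wi.sum()
tau2 = max(0.0, (q - df) / c)                    # between-study variance
i2 = max(0.0, (q - df) / q) * 100                # I² heterogeneity, in %

wstar = 1.0 / (vi + tau2)                        # random-effects weights
est = (wstar * yi).sum() / wstar.sum()           # overall estimate
se = np.sqrt(1.0 / wstar.sum())
ci = (est - 1.96 * se, est + 1.96 * se)          # 95% confidence interval
```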

  12. Comparison study on disturbance estimation techniques in precise slow motion control

    NASA Astrophysics Data System (ADS)

    Fan, S.; Nagamune, R.; Altintas, Y.; Fan, D.; Zhang, Z.

    2010-08-01

Precise low-speed motion control is important for industrial applications such as micro-milling machine tool feed drives and electro-optical tracking servo systems. It calls for precise measurement of position and instantaneous velocity, and for estimation of disturbances involving direct-drive motor force ripple, guideway friction, cutting forces, etc. This paper presents a comparison study of the dynamic response and noise rejection performance of three existing disturbance estimation techniques: time-delayed estimators, state-augmented Kalman filters, and conventional disturbance observers. The design essentials of these three disturbance estimators are introduced. For the design of time-delayed estimators, it is proposed to substitute a Kalman filter for the Luenberger state observer to improve noise suppression performance. The results show that the noise rejection performance of the state-augmented Kalman filters and the time-delayed estimators is much better than that of the conventional disturbance observers. These two estimators provide not only an estimate of the disturbance but also low-noise estimates of position and instantaneous velocity. The bandwidth of the state-augmented Kalman filters is wider than that of the time-delayed estimators. In addition, the state-augmented Kalman filters can give unbiased estimates of the slowly varying disturbance and the instantaneous velocity, while the time-delayed estimators cannot. Simulation and experimental results obtained on the X axis of a 2.5-axis prototype micro-milling machine are provided.
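A minimal state-augmented Kalman filter in the spirit of the method compared above: the disturbance is appended to the state and estimated alongside position and velocity. The plant, noise levels, and disturbance value are all hypothetical, not taken from the paper's machine:

```python
import numpy as np

dt, m = 0.01, 1.0
A = np.array([[1.0, dt, 0.0],
              [0.0, 1.0, dt / m],   # disturbance enters like an input force
              [0.0, 0.0, 1.0]])     # disturbance modelled as a random walk
H = np.array([[1.0, 0.0, 0.0]])     # only position is measured

Q = np.diag([1e-9, 1e-9, 1e-4])     # small process noise; lets d drift
R = np.array([[1e-6]])              # position sensor noise variance

rng = np.random.default_rng(2)
x_true = np.array([0.0, 0.0, 0.5])  # true (constant) disturbance = 0.5
x_hat, P = np.zeros(3), np.eye(3)

for _ in range(2000):
    x_true = A @ x_true
    y = H @ x_true + rng.normal(0.0, 1e-3, size=1)
    # Predict.
    x_hat = A @ x_hat
    P = A @ P @ A.T + Q
    # Update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K[:, 0] * (y[0] - (H @ x_hat)[0])
    P = (np.eye(3) - K @ H) @ P

d_hat = x_hat[2]                    # converges toward the true disturbance
```

The augmented disturbance state is observable through the position channel (it integrates twice into position), which is what lets the filter separate it from sensor noise.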

  13. Assessing the sensitivity of bovine tuberculosis surveillance in Canada's cattle population, 2009-2013.

    PubMed

    El Allaki, Farouk; Harrington, Noel; Howden, Krista

    2016-11-01

The objectives of this study were (1) to estimate the annual sensitivity of Canada's bTB surveillance system and its three system components (slaughter surveillance, export testing and disease investigation) using a scenario tree modelling approach, and (2) to identify key model parameters that influence the estimates of the surveillance system sensitivity (SSSe). To achieve these objectives, we designed stochastic scenario tree models for the three surveillance system components included in the analysis. Demographic data, slaughter data, export testing data, and disease investigation data from 2009 to 2013 were extracted for input into the scenario trees. Sensitivity analysis was conducted to identify key influential parameters on SSSe estimates. The median annual SSSe estimates generated from the study were very high, ranging from 0.95 (95% probability interval [PI]: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). Median annual sensitivity estimates for the slaughter surveillance component ranged from 0.95 (95% PI: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). This shows slaughter surveillance to be the major contributor to overall surveillance system sensitivity, with a high probability of detecting M. bovis infection if present at a prevalence of 0.00028% or greater during the study period. The export testing and disease investigation components had extremely low component sensitivity estimates: the maximum median sensitivity estimates were 0.02 (95% PI: 0.014-0.023) and 0.0061 (95% PI: 0.0056-0.0066), respectively. The three most influential input parameters on the model's output (SSSe) were the probability of a granuloma being detected at slaughter inspection, the probability of a granuloma being present in older animals (≥12 months of age), and the probability of a granuloma sample being submitted to the laboratory.
Additional studies are required to reduce the levels of uncertainty and variability associated with these three parameters influencing the surveillance system sensitivity. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
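Assuming approximately independent components, as in standard scenario-tree surveillance analysis, the component sensitivities combine as SSe = 1 − Π(1 − CSe). A one-line check with illustrative mid-range values from the reported intervals shows why slaughter surveillance dominates:

```python
# Illustrative mid-range component sensitivities (slaughter value chosen
# inside the reported 0.95-0.97 band; others are the reported maxima).
se_slaughter, se_export, se_invest = 0.96, 0.02, 0.0061

# System sensitivity under component independence.
sse = 1.0 - (1.0 - se_slaughter) * (1.0 - se_export) * (1.0 - se_invest)
```

The export and investigation components raise the system estimate by only about a tenth of a percentage point over slaughter surveillance alone.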

  14. Estimation of some transducer parameters in a broadband piezoelectric transmitter by using an artificial intelligence technique.

    PubMed

    Ruíz, A; Ramos, A; San Emeterio, J L

    2004-04-01

An estimation procedure to efficiently find approximate values of internal parameters in ultrasonic transducers intended for broadband operation would be a valuable tool to discover internal construction data. This information is necessary in the modelling and simulation of acoustic and electrical behaviour related to ultrasonic systems containing commercial transducers. There is no general solution to this generic problem of parameter estimation in the case of broadband piezoelectric probes. In this paper, this general problem is briefly analysed for broadband conditions. The viability of applying, in this field, an artificial intelligence technique based on modelling of the transducer's internal components is studied. A genetic algorithm (GA) procedure is presented and applied to the estimation of different parameters related to two transducers working as pulsed transmitters. The efficiency of this GA technique is studied, considering the influence of the number and variation range of the estimated parameters. Estimation results are experimentally ratified.

  15. Experiments with central-limit properties of spatial samples from locally covariant random fields

    USGS Publications Warehouse

    Barringer, T.H.; Smith, T.E.

    1992-01-01

When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
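The spatial-kernel idea can be sketched in a simplified one-dimensional form: keep cross-products only for pairs of observations within distance tau of each other. This is a sketch of the idea, not the paper's exact estimator:

```python
import numpy as np

def tau_se(x, coords, tau):
    """Kernel estimate of the sample-mean standard error for locally
    dependent samples: cross-products are kept only for pairs whose
    locations are within distance tau."""
    n = len(x)
    r = x - x.mean()
    d = np.abs(coords[:, None] - coords[None, :])
    w = (d <= tau).astype(float)
    v = (r[:, None] * r[None, :] * w).sum() / n**2
    return np.sqrt(max(v, 0.0))

rng = np.random.default_rng(5)
x = rng.standard_normal(100)
coords = np.arange(100.0)
se_classical = tau_se(x, coords, tau=0.0)   # tau = 0: classical estimator
se_local = tau_se(x, coords, tau=3.0)       # tau = 3: local-dependence kernel
```

With tau = 0 (and distinct coordinates) only the diagonal survives, recovering the classical independent-sample formula.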

  16. Food waste quantification in primary production - The Nordic countries as a case study.

    PubMed

    Hartikainen, Hanna; Mogensen, Lisbeth; Svanes, Erik; Franke, Ulrika

    2018-01-01

    Our understanding of food waste in the food supply chain has increased, but very few studies have been published on food waste in primary production. The overall aims of this study were to quantify the total amount of food waste in primary production in Finland, Sweden, Norway and Denmark, and to create a framework for how to define and quantify food waste in primary production. The quantification of food waste was based on case studies conducted in the present study and estimates published in scientific literature. The chosen scope of the study was to quantify the amount of edible food (excluding inedible parts like peels and bones) produced for human consumption that did not end up as food. As a result, the quantification was different from the existing guidelines. One of the main differences is that food that ends up as animal feed is included in the present study, whereas this is not the case for the recently launched food waste definition of the FUSIONS project. To distinguish the 'food waste' definition of the present study from the existing definitions and to avoid confusion with established usage of the term, a new term 'side flow' (SF) was introduced as a synonym for food waste in primary production. A rough estimate of the total amount of food waste in primary production in Finland, Sweden, Norway and Denmark was made using SF and 'FUSIONS Food Waste' (FFW) definitions. The SFs in primary production in the four Nordic countries were an estimated 800,000 tonnes per year with an additional 100,000 tonnes per year from the rearing phase of animals. The 900,000 tonnes per year of SF corresponds to 3.7% of the total production of 24,000,000 tonnes per year of edible primary products. When using the FFW definition proposed by the FUSIONS project, the FFW amount was estimated at 330,000 tonnes per year, or 1% of the total production. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Revisiting typhoid fever surveillance in low and middle income countries: lessons from systematic literature review of population-based longitudinal studies.

    PubMed

    Mogasale, Vittal; Mogasale, Vijayalaxmi V; Ramani, Enusa; Lee, Jung Seok; Park, Ju Yeon; Lee, Kang Sung; Wierzba, Thomas F

    2016-01-29

Because the control of typhoid fever is an important public health concern in low and middle income countries, improving typhoid surveillance will help in planning and implementing typhoid control activities such as deployment of new-generation Vi conjugate typhoid vaccines. We conducted a systematic literature review of longitudinal population-based blood culture-confirmed typhoid fever studies from low and middle income countries published from 1st January 1990 to 31st December 2013. We quantitatively summarized typhoid fever incidence rates and qualitatively reviewed study methodology that could have influenced rate estimates. We used a meta-analysis approach based on a random effects model in summarizing the hospitalization rates. Twenty-two papers presented longitudinal population-based and blood culture-confirmed typhoid fever incidence estimates from 20 distinct sites in low and middle income countries. The reported incidence and hospitalization rates, as well as the study methodology, were heterogeneous across the sites. We elucidated how the incidence rates were underestimated in published studies. We summarized six categories of under-estimation biases observed in these studies and presented potential solutions. Published longitudinal typhoid fever studies in low and middle income countries are geographically clustered and the methodology employed has a potential for underestimation. Future studies should account for these limitations.

  18. Accuracy of the visual estimation method as a predictor of food intake in Alzheimer's patients provided with different types of food.

    PubMed

    Amano, Nobuko; Nakamura, Tomiyo

    2018-02-01

The visual estimation method is commonly used in hospitals and other care facilities to evaluate food intake through estimation of plate waste. In Japan, no previous studies have investigated the validity and reliability of this method under the routine conditions of a hospital setting. The present study aimed to evaluate the validity and reliability of the visual estimation method in long-term inpatients with different levels of eating disability caused by Alzheimer's disease. The patients were provided different therapeutic diets presented in various food types. This study was performed between February and April 2013, and 82 patients with Alzheimer's disease were included. Plate waste was evaluated for the 3 main daily meals, for a total of 21 days, 7 consecutive days during each of the 3 months, yielding a total of 4851 meals, of which 3984 were included. Plate waste was measured by the nurses through the visual estimation method, and by the hospital's registered dietitians through the actual measurement method. The actual measurement method was first validated to serve as a reference, and the level of agreement between both methods was then determined. The month, time of day, type of food provided, and patients' physical characteristics were considered for analysis. For the 3984 meals included in the analysis, the level of agreement between the measurement methods was 78.4%. Disagreement of measurements consisted of 3.8% of underestimation and 17.8% of overestimation. Cronbach's α (0.60, P < 0.001) indicated that the reliability of the visual estimation method was within the acceptable range. The visual estimation method was found to be a valid and reliable method for estimating food intake in patients with different levels of eating impairment. The successful implementation and use of the method depends upon adequate training and motivation of the nurses and care staff involved.
Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.

  19. Properties of model-averaged BMDLs: a study of model averaging in dichotomous response risk estimation.

    PubMed

    Wheeler, Matthew W; Bailer, A John

    2007-06-01

Model averaging (MA) has been proposed as a method of accounting for model uncertainty in benchmark dose (BMD) estimation. The technique has been used to average BMD estimates derived from dichotomous dose-response experiments, microbial dose-response experiments, as well as observational epidemiological studies. While MA is a promising tool for the risk assessor, a previous study suggested that the simple strategy of averaging individual models' BMD lower limits did not yield interval estimators that met nominal coverage levels in certain situations, and this performance was very sensitive to the underlying model space chosen. We present a different, more computationally intensive, approach in which the BMD is estimated using the average dose-response model and the corresponding benchmark dose lower bound (BMDL) is computed by bootstrapping. This method is illustrated with TiO₂ dose-response rat lung cancer data, and then systematically studied through an extensive Monte Carlo simulation. The results of this study suggest that the MA-BMD, estimated using this technique, performs better, in terms of bias and coverage, than the previous MA methodology. Further, the MA-BMDL achieves nominal coverage in most cases, and is superior to picking the "best fitting model" when estimating the benchmark dose. Although these results show utility of MA for benchmark dose risk estimation, they continue to highlight the importance of choosing an adequate model space as well as proper model fit diagnostics.
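The "average dose-response model" step can be sketched with Akaike weights over a few fitted models; this is a toy with hypothetical fits and one-parameter exponential risk curves, not the paper's model space or its bootstrap BMDL procedure:

```python
import numpy as np

# Hypothetical fitted models: log-likelihoods and parameter counts.
loglik = np.array([-52.3, -51.9, -53.8])
n_params = np.array([2, 3, 2])
aic = 2 * n_params - 2 * loglik

# Akaike weights for the model average.
delta = aic - aic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()

# Toy extra-risk curves on a dose grid; the averaged curve defines the BMD.
doses = np.linspace(0.0, 1.0, 101)
curves = np.array([1 - np.exp(-b * doses) for b in (1.2, 0.9, 1.5)])
avg_curve = w @ curves

# BMD at a benchmark response of 0.1: first grid dose reaching that risk.
bmd = doses[np.argmax(avg_curve >= 0.1)]
```

Averaging the curves and then reading off the BMD, rather than averaging per-model BMDLs directly, is the distinction the abstract draws.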

  20. Estimating abundance in the presence of species uncertainty

    USGS Publications Warehouse

    Chambert, Thierry A.; Hossack, Blake R.; Fishback, LeeAnn; Davenport, Jon M.

    2016-01-01

1. N-mixture models have become a popular method for estimating abundance of free-ranging animals that are not marked or identified individually. These models have been used on count data for single species that can be identified with certainty. However, co-occurring species often look similar during one or more life stages, making it difficult to assign species for all recorded captures. This uncertainty creates problems for estimating species-specific abundance and it can often limit life stages to which we can make inference. 2. We present a new extension of N-mixture models that accounts for species uncertainty. In addition to estimating site-specific abundances and detection probabilities, this model allows estimating the probability of correct assignment of species identity. We implement this hierarchical model in a Bayesian framework and provide all code for running the model in BUGS-language programs. 3. We present an application of the model on count data from two sympatric freshwater fishes, the brook stickleback (Culaea inconstans) and the ninespine stickleback (Pungitius pungitius), and illustrate implementation of covariate effects (habitat characteristics). In addition, we used a simulation study to validate the model and illustrate potential sample size issues. We also compared, for both real and simulated data, estimates provided by our model to those obtained by a simple N-mixture model when captures of unknown species identification were discarded. In the latter case, abundance estimates appeared highly biased and very imprecise, while our new model provided unbiased estimates with higher precision. 4. This extension of the N-mixture model should be useful for a wide variety of studies and taxa, as species uncertainty is a common issue. It should notably help improve investigation of abundance and vital rate characteristics of organisms’ early life stages, which are sometimes more difficult to identify than adults.

  1. Evaluation of a photographic food atlas as a tool for quantifying food portion size in the United Arab Emirates

    PubMed Central

    Platat, Carine; El Mesmoudi, Najoua; El Sadig, Mohamed; Tewfik, Ihab

    2018-01-01

Although the United Arab Emirates (UAE) has one of the highest prevalences of overweight, obesity and type 2 diabetes in the world, validated dietary assessment aids to estimate food intake of individuals and populations in the UAE are currently lacking. We conducted two observational studies to evaluate the accuracy of a photographic food atlas which was developed as a tool for food portion size estimation in the UAE. The UAE Food Atlas presents eight portion sizes for each food. Study 1 involved portion size estimations of 13 food items consumed during the previous day. Study 2 involved portion size estimations of nine food items immediately after consumption. Differences between the food portion sizes estimated from the photographs and the weighed food portions (estimation error), as well as the percentage differences relative to the weighed food portion for each tested food item were calculated. Four of the evaluated food items were underestimated (by -8.9% to -18.4%), while nine were overestimated (by 9.5% to 90.9%) in Study 1. Moreover, there were significant differences between estimated and eaten food portions for eight food items (P<0.05). In Study 2, one food item was underestimated (-8.1%) while eight were overestimated (range 2.52% to 82.1%). Furthermore, there were significant differences between estimated and eaten food portions (P<0.05) for six food items. The limits of agreement between the estimated and consumed food portion size were wide, indicating a large variability in food portion estimation errors. These reported findings highlight the need for further development of the UAE Food Atlas to improve the accuracy of food portion size intake estimations in dietary assessments. Additionally, recalling food portions from the previous day did not seem to increase food portion estimation errors in this study. PMID:29698434

  2. A UNIFIED FRAMEWORK FOR VARIANCE COMPONENT ESTIMATION WITH SUMMARY STATISTICS IN GENOME-WIDE ASSOCIATION STUDIES.

    PubMed

    Zhou, Xiang

    2017-12-01

Linear mixed models (LMMs) are among the most commonly used tools for genetic association studies. However, the standard method for estimating variance components in LMMs, the restricted maximum likelihood estimation method (REML), suffers from several important drawbacks: REML requires individual-level genotypes and phenotypes from all samples in the study, is computationally slow, and produces downward-biased estimates in case-control studies. To remedy these drawbacks, we present an alternative framework for variance component estimation, which we refer to as MQS. MQS is based on the method of moments (MoM) and the minimal norm quadratic unbiased estimation (MINQUE) criterion, and brings two seemingly unrelated methods, the renowned Haseman-Elston (HE) regression and the recent LD score regression (LDSC), into the same unified statistical framework. With this new framework, we provide an alternative but mathematically equivalent form of HE that allows for the use of summary statistics. We provide an exact estimation form of LDSC to yield unbiased and statistically more efficient estimates. A key feature of our method is its ability to pair marginal z-scores computed using all samples with SNP correlation information computed using a small random subset of individuals (or individuals from a proper reference panel), while being capable of producing estimates that can be almost as accurate as if both quantities were computed using the full data. As a result, our method produces unbiased and statistically efficient estimates, and makes use of summary statistics, while it is computationally efficient for large data sets. Using simulations and applications to 37 phenotypes from 8 real data sets, we illustrate the benefits of our method for estimating and partitioning SNP heritability in population studies as well as for heritability estimation in family studies. Our method is implemented in the GEMMA software package, freely available at www.xzlab.org/software.html.
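A toy sketch of the HE-regression moment idea that MQS builds on: regress off-diagonal phenotype cross-products on kinship, and the slope estimates SNP heritability. Simulated data, hypothetical parameters, and nothing here is the GEMMA implementation:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 500, 1000
G = rng.binomial(2, 0.3, size=(n, p)).astype(float)
G = (G - G.mean(axis=0)) / G.std(axis=0)     # standardized genotypes
K = G @ G.T / p                              # genetic relatedness matrix

# Simulate a phenotype with SNP heritability 0.5.
h2_true = 0.5
beta = rng.normal(0.0, np.sqrt(h2_true / p), size=p)
y = G @ beta + rng.normal(0.0, np.sqrt(1.0 - h2_true), size=n)
y = (y - y.mean()) / y.std()

# HE regression: slope of y_i * y_j on K_ij over off-diagonal pairs.
iu = np.triu_indices(n, k=1)
yy = (y[:, None] * y[None, :])[iu]
h2_hat = (K[iu] * yy).sum() / (K[iu] ** 2).sum()
```

At this sample size the moment estimate is noisy, which is part of why the paper emphasizes statistical efficiency alongside the use of summary statistics.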

  3. Influence of hypo- and hyperthermia on death time estimation - A simulation study.

    PubMed

    Muggenthaler, H; Hubig, M; Schenkl, S; Mall, G

    2017-09-01

Numerous physiological and pathological mechanisms can cause elevated or lowered body core temperatures. Deviations from the physiological level of about 37°C can influence temperature-based death time estimations. However, it has not been investigated by means of thermodynamics to what extent hypo- and hyperthermia bias death time estimates. Using numerical simulation, the present study investigates the errors inherent in temperature-based death time estimation in case of elevated or lowered body core temperatures before death. The largest errors with respect to the normothermic model occur in the first few hours post-mortem. With decreasing body core temperature and increasing post-mortem time the error diminishes and stagnates at a nearly constant level. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. From reading numbers to seeing ratios: a benefit of icons for risk comprehension.

    PubMed

    Tubau, Elisabet; Rodríguez-Ferreiro, Javier; Barberia, Itxaso; Colomé, Àngels

    2018-06-21

    Promoting a better understanding of statistical data is becoming increasingly important for improving risk comprehension and decision-making. In this regard, previous studies on Bayesian problem solving have shown that iconic representations help infer frequencies in sets and subsets. Nevertheless, the mechanisms by which icons enhance performance remain unclear. Here, we tested the hypothesis that the benefit offered by icon arrays lies in a better alignment between presented and requested relationships, which should facilitate the comprehension of the requested ratio beyond the represented quantities. To this end, we analyzed individual risk estimates based on data presented either in standard verbal presentations (percentages and natural frequency formats) or as icon arrays. Compared to the other formats, icons led to estimates that were more accurate, and importantly, promoted the use of equivalent expressions for the requested probability. Furthermore, whereas the accuracy of the estimates based on verbal formats depended on their alignment with the text, all the estimates based on icons were equally accurate. Therefore, these results support the proposal that icons enhance the comprehension of the ratio and its mapping onto the requested probability and point to relational misalignment as potential interference for text-based Bayesian reasoning. The present findings also argue against an intrinsic difficulty with understanding single-event probabilities.

  5. Extension of the simulated drinking game procedure to multiple drinking games.

    PubMed

    Cameron, Jennifer M; Leon, Matthew R; Correia, Christopher J

    2011-08-01

The present study extended the Simulated Drinking Game Procedure (SDGP) to obtain information about different types of drinking games. Phase I participants (N = 545) completed online screening questionnaires assessing substance use and drinking game participation. Participants who met the selection criteria for Phase II (N = 92) participated in laboratory sessions that consisted of three different periods of drinking game play. Sixty-two percent (N = 57) of the sample was female. Data from these sessions were used to estimate the peak Blood Alcohol Concentration (BAC) a participant would achieve if they consumed alcohol while participating in the SDGP. Total consumption and estimated BAC varied as a function of game type. The total consumption and estimated BAC obtained while playing Beer Pong and Memory varied significantly as a function of group. Total ounces consumed while playing Three Man varied significantly as a function of group; however, the variation in estimated BAC obtained while playing Three Man was not significant. Results indicated that estimated BACs were higher for female participants across game type. Previous experience playing the three drinking games had no impact on total drink consumption or estimated BAC obtained while participating in the SDGP. The present study demonstrated that the SDGP can be used to generate estimates of how much alcohol is consumed and the associated obtained BAC during multiple types of drinking games. In order to fully examine whether previous experience factors into overall alcohol consumption and BAC, future research should extend the SDGP to incorporate laboratory administration of alcohol during drinking game participation. © 2011 APA, all rights reserved.

  6. Coal gasification systems engineering and analysis. Appendix D: Cost and economic studies

    NASA Technical Reports Server (NTRS)

    1980-01-01

The detailed cost estimate documentation for the designs prepared in this study is presented. These include: (1) Koppers-Totzek, (2) Texaco, (3) Babcock and Wilcox, (4) BGC-Lurgi, and (5) Lurgi. The alternate product cost estimates include: (1) Koppers-Totzek and Texaco single-product facilities (methane, methanol, gasoline, hydrogen), (2) Koppers-Totzek SNG and MBG, (3) Koppers-Totzek and Texaco SNG and MBG, and (4) Lurgi-methane and Lurgi-methane and methanol.

  7. Connection equation and shaly-sand correction for electrical resistivity

    USGS Publications Warehouse

    Lee, Myung W.

    2011-01-01

Estimating the amount of conductive and nonconductive constituents in the pore space of sediments by using electrical resistivity logs generally loses accuracy where clays are present in the reservoir. Many different methods and clay models have been proposed to account for the conductivity of clay (termed the shaly-sand correction). In this study, the connectivity equation (CE), which is a new approach to model non-Archie rocks, is used to correct for the clay effect and is compared with results using the Waxman and Smits method. The CE presented here requires no parameters other than an adjustable constant, which can be derived from the resistivity of water-saturated sediments. The new approach was applied to estimate water saturation from laboratory data and to estimate gas hydrate saturations at the Mount Elbert well on the Alaska North Slope. Although not as accurate as the Waxman and Smits method at estimating water saturations for the laboratory measurements, gas hydrate saturations estimated at the Mount Elbert well using the proposed CE are comparable to estimates from the Waxman and Smits method. Considering its simplicity, it has high potential to be used to account for the clay effect on electrical resistivity measurement in other systems.

  8. Theoretical and experimental study of DOA estimation using AML algorithm for an isotropic and non-isotropic 3D array

    NASA Astrophysics Data System (ADS)

    Asgari, Shadnaz; Ali, Andreas M.; Collier, Travis C.; Yao, Yuan; Hudson, Ralph E.; Yao, Kung; Taylor, Charles E.

    2007-09-01

    Most direction-of-arrival (DOA) estimation work has addressed a two-dimensional (2D) scenario in which only the azimuth angle needs to be estimated, but many practical situations require a three-dimensional treatment. Estimating both azimuth and elevation angles with high accuracy and low complexity is therefore of interest. We present the theoretical and practical issues of DOA estimation using the Approximate-Maximum-Likelihood (AML) algorithm in a 3D scenario. We show that the performance of the proposed 3D AML algorithm converges to the Cramer-Rao Bound. We use the concept of an isotropic array to reduce the complexity of the proposed algorithm by advocating a decoupled 3D version. We also explore a modified version of the decoupled 3D AML algorithm which can be used for DOA estimation with non-isotropic arrays. Various numerical results are presented. We use two acoustic arrays, each consisting of 8 microphones, for field measurements. The processing of the measured data from the acoustic arrays for different azimuth and elevation angles confirms the effectiveness of the proposed methods.
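    As a minimal numerical sketch of the underlying principle (not the authors' AML implementation): for a single narrowband source in white noise, maximum-likelihood DOA estimation reduces to finding the azimuth/elevation pair whose steering vector best matches the array snapshot. The 3D array geometry, frequency, and angles below are all illustrative.

```python
import numpy as np

c, f = 343.0, 1000.0           # speed of sound (m/s), source frequency (Hz)
k = 2 * np.pi * f / c          # wavenumber

# Small 3D microphone array (positions in meters; illustrative geometry,
# element spacing below half a wavelength to avoid ambiguities)
P = np.array([[0.0, 0.0, 0.0],
              [0.1, 0.0, 0.0],
              [0.0, 0.1, 0.0],
              [0.0, 0.0, 0.1]])

def steer(az, el):
    """Narrowband steering vector for a plane wave from (azimuth, elevation)."""
    u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
    return np.exp(-1j * k * (P @ u))

az_true, el_true = np.deg2rad(40.0), np.deg2rad(20.0)
x = steer(az_true, el_true)    # noise-free single snapshot

# ML for one source in white noise reduces to maximizing |a(az,el)^H x|
grid = [(a, e)
        for a in np.deg2rad(np.arange(0.0, 360.0, 2.0))
        for e in np.deg2rad(np.arange(-88.0, 90.0, 2.0))]
az_hat, el_hat = max(grid, key=lambda ae: abs(np.conj(steer(*ae)) @ x))
```

    Decoupling azimuth and elevation, as the paper's isotropic-array version does, replaces this joint 2D grid search with two much cheaper 1D searches.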

  9. Efficient multidimensional regularization for Volterra series estimation

    NASA Astrophysics Data System (ADS)

    Birpoutsoukis, Georgios; Csurcsia, Péter Zoltán; Schoukens, Johan

    2018-05-01

    This paper presents an efficient nonparametric time domain nonlinear system identification method. It is shown how truncated Volterra series models can be efficiently estimated without the need for long, transient-free measurements. The method is a novel extension of the regularization methods that have been developed for impulse response estimates of linear time invariant systems. To avoid excessive memory requirements in the case of long measurements or a large number of estimated parameters, a practical gradient-based estimation method is also provided, leading to the same numerical results as the proposed Volterra estimation method. Moreover, the transient effects in the simulated output are removed by a special regularization method based on the novel ideas of transient removal for Linear Time-Varying (LTV) systems. Combining the proposed methodologies, the nonparametric Volterra models of the cascaded water tanks benchmark are presented in this paper. The results for different scenarios, varying from a simple Finite Impulse Response (FIR) model to a 3rd degree Volterra series with and without transient removal, are compared and studied. It is clear that the obtained models capture the system dynamics when tested on a validation dataset, and their performance is comparable with the white-box (physical) models.
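    The regularization idea the paper extends from linear impulse responses can be sketched for the degree-1 (FIR) special case of a Volterra model: a kernel encoding decay and smoothness is used as a prior in regularized least squares. All names and values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# True FIR system: a smooth, exponentially decaying impulse response
n_taps = 30
g_true = 0.8 ** np.arange(n_taps)

N = 200
u = rng.standard_normal(N)                               # input signal
y = np.convolve(u, g_true)[:N] + 0.05 * rng.standard_normal(N)

# Regression matrix: row t holds u[t], u[t-1], ..., u[t-n_taps+1]
X = np.zeros((N, n_taps))
for j in range(n_taps):
    X[j:, j] = u[:N - j]

# TC ("tuned/correlated") kernel: encodes decay and smoothness of the IR
alpha, lam = 0.8, 1.0
idx = np.arange(n_taps)
K = alpha ** np.maximum.outer(idx, idx)

# Regularized least squares: g = argmin ||y - X g||^2 + lam * g' K^-1 g
g_hat = np.linalg.solve(X.T @ X + lam * np.linalg.inv(K), X.T @ y)
```

    For a degree-2 Volterra term the same construction applies with a multidimensional kernel over pairs of lags, which is where the paper's efficient multidimensional regularization comes in.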

  10. Measuring global monopole velocities, one by one

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Eiguren, Asier; Urrestilla, Jon; Achúcarro, Ana, E-mail: asier.lopez@ehu.eus, E-mail: jon.urrestilla@ehu.eus, E-mail: achucar@lorentz.leidenuniv.nl

    We present an estimation of the average velocity of a network of global monopoles in a cosmological setting using large numerical simulations. In order to obtain the value of the velocity, we improve some already known methods and present a new one. This new method estimates individual global monopole velocities in a network by detecting each monopole position in the lattice and following the path described by each one of them. Using our new estimate we can settle an open question previously posed in the literature: velocity-dependent one-scale (VOS) models for global monopoles predict two branches of scaling solutions, one with monopoles moving at subluminal speeds and one with monopoles moving at luminal speeds. Previous attempts to estimate monopole velocities had large uncertainties and were not able to settle that question. Our simulations find no evidence of a luminal branch. We also estimate the values of the parameters of the VOS model. With our new method we can also study the microphysics of the complicated dynamics of individual monopoles. Finally, we use our large simulation volume to compare the results from the different estimator methods, as well as to assess the validity of the numerical approximations made.

  11. Estimating numbers of females with cubs-of-the-year in the Yellowstone grizzly bear population

    USGS Publications Warehouse

    Keating, K.A.; Schwartz, C.C.; Haroldson, M.A.; Moody, D.

    2001-01-01

    For grizzly bears (Ursus arctos horribilis) in the Greater Yellowstone Ecosystem (GYE), minimum population size and allowable numbers of human-caused mortalities have been calculated as a function of the number of unique females with cubs-of-the-year (FCUB) seen during a 3-year period. This approach underestimates the total number of FCUB, thereby biasing estimates of population size and sustainable mortality. Also, it does not permit calculation of valid confidence bounds. Many statistical methods can resolve or mitigate these problems, but there is no universal best method. Instead, relative performances of different methods can vary with population size, sample size, and degree of heterogeneity among sighting probabilities for individual animals. We compared 7 nonparametric estimators, using Monte Carlo techniques to assess performances over the range of sampling conditions deemed plausible for the Yellowstone population. Our goal was to estimate the number of FCUB present in the population each year. Our evaluation differed from previous comparisons of such estimators by including sample coverage methods and by treating individual sightings, rather than sample periods, as the sample unit. Consequently, our conclusions also differ from earlier studies. Recommendations regarding estimators and necessary sample sizes are presented, together with estimates of annual numbers of FCUB in the Yellowstone population with bootstrap confidence bounds.
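    As an illustration of the class of nonparametric estimators compared in such studies, Chao's lower-bound estimator infers the number of unseen animals from the counts of individuals sighted exactly once and exactly twice. This is a generic sketch with hypothetical sighting counts, not necessarily one of the seven estimators the authors evaluated.

```python
def chao1(sightings_per_female):
    """Chao's lower-bound estimator of total population size:
    N_hat = S_obs + f1^2 / (2 * f2), where f1 and f2 are the numbers of
    individuals seen exactly once and exactly twice."""
    f1 = sum(1 for c in sightings_per_female if c == 1)
    f2 = sum(1 for c in sightings_per_female if c == 2)
    s_obs = len(sightings_per_female)            # distinct females observed
    return s_obs + f1 * f1 / (2.0 * max(f2, 1))  # guard against f2 == 0

# Hypothetical sighting counts for 8 identified females with cubs
counts = [1, 1, 1, 2, 2, 3, 4, 1]
n_hat = chao1(counts)
```

    With many singletons relative to doubletons, the estimate rises well above the observed count, which is exactly the correction the abstract says the raw tally of unique females lacks.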

  12. Trajectory prediction for ballistic missiles based on boost-phase LOS measurements

    NASA Astrophysics Data System (ADS)

    Yeddanapudi, Murali; Bar-Shalom, Yaakov

    1997-10-01

    This paper addresses the problem of the estimation of the trajectory of a tactical ballistic missile using line of sight (LOS) measurements from one or more passive sensors (typically satellites). The major difficulties of this problem include: the estimation of the unknown time of launch, incorporation of (inaccurate) target thrust profiles to model the target dynamics during the boost phase and an overall ill-conditioning of the estimation problem due to poor observability of the target motion via the LOS measurements. We present a robust estimation procedure based on the Levenberg-Marquardt algorithm that provides both the target state estimate and error covariance taking into consideration the complications mentioned above. An important consideration in the defense against tactical ballistic missiles is the determination of the target position and error covariance at the acquisition range of a surveillance radar in the vicinity of the impact point. We present a systematic procedure to propagate the target state and covariance to a nominal time, when it is within the detection range of a surveillance radar to obtain a cueing volume. Monte Carlo simulation studies on typical single and two sensor scenarios indicate that the proposed algorithms are accurate in terms of the estimates and the estimator-calculated covariances are consistent with the errors.
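    The Levenberg-Marquardt idea behind such robust estimation can be sketched on a toy nonlinear least-squares problem; the damping term interpolates between Gauss-Newton and gradient descent, which is what makes it robust to ill-conditioning. The problem and all values below are illustrative, not the paper's LOS residuals.

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, n_iter=50, lam=1e-3):
    """Minimal LM loop with Marquardt's diagonal scaling."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(x), jac(x)
        H = J.T @ J
        step = np.linalg.solve(H + lam * np.diag(np.diag(H)), -J.T @ r)
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5   # accept: trust Gauss-Newton more
        else:
            lam *= 2.0                     # reject: damp toward gradient descent
    return x

# Toy problem: fit y = a * exp(b * t) with true (a, b) = (2, -1.5)
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])

p_hat = levenberg_marquardt(res, jac, [1.0, -1.0])
```

    In the paper's setting the residuals are the LOS measurement errors and the Jacobian comes from the boost-phase dynamics, but the damping-and-accept/reject loop is the same.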

  13. Analyzing self-controlled case series data when case confirmation rates are estimated from an internal validation sample.

    PubMed

    Xu, Stanley; Clarke, Christina L; Newcomer, Sophia R; Daley, Matthew F; Glanz, Jason M

    2018-05-16

    Vaccine safety studies are often electronic health record (EHR)-based observational studies. These studies often face significant methodological challenges, including confounding and misclassification of adverse events. Vaccine safety researchers use the self-controlled case series (SCCS) study design to handle confounding and employ medical chart review to ascertain cases that are identified using EHR data. However, for common adverse events, limited resources often make it impossible to adjudicate all adverse events observed in electronic data. In this paper, we considered four approaches for analyzing SCCS data with confirmation rates estimated from an internal validation sample: (1) observed cases, (2) confirmed cases only, (3) known confirmation rate, and (4) multiple imputation (MI). We conducted a simulation study to evaluate these four approaches using type I error rates, percent bias, and empirical power. Our simulation results suggest that when misclassification of adverse events is present, the observed cases, confirmed cases only, and known confirmation rate approaches may inflate the type I error, yield biased point estimates, and affect statistical power. The multiple imputation approach accounts for the uncertainty of confirmation rates estimated from an internal validation sample and yields a proper type I error rate, a largely unbiased point estimate, a proper variance estimate, and adequate statistical power. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Comparing Bayesian estimates of genetic differentiation of molecular markers and quantitative traits: an application to Pinus sylvestris.

    PubMed

    Waldmann, P; García-Gil, M R; Sillanpää, M J

    2005-06-01

    Comparison of the level of differentiation at neutral molecular markers (estimated as F(ST) or G(ST)) with the level of differentiation at quantitative traits (estimated as Q(ST)) has become a standard tool for inferring that there is differential selection between populations. We estimated Q(ST) of timing of bud set from a latitudinal cline of Pinus sylvestris with a Bayesian hierarchical variance component method utilizing the information on the pre-estimated population structure from neutral molecular markers. Unfortunately, the between-family variances differed substantially between populations, which resulted in a bimodal posterior of Q(ST) that could not be compared in any sensible way with the unimodal posterior of the microsatellite F(ST). In order to avoid publishing studies with flawed Q(ST) estimates, we recommend that future studies should present heritability estimates for each trait and population. Moreover, to detect variance heterogeneity in frequentist methods (ANOVA and REML), it is of essential importance to check also that the residuals are normally distributed and do not follow any systematically deviating trends.
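    For reference, the quantitative-trait analogue of F(ST) used in such comparisons is conventionally defined from the additive genetic variance components between and within populations:

```latex
Q_{ST} = \frac{\sigma^{2}_{B}}{\sigma^{2}_{B} + 2\,\sigma^{2}_{W}}
```

    where $\sigma^{2}_{B}$ is the between-population and $\sigma^{2}_{W}$ the within-population additive genetic variance. Differential selection is inferred when Q(ST) clearly exceeds the neutral-marker F(ST); a bimodal Q(ST) posterior like the one reported above is what makes that comparison break down.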

  15. Validity of photographs for food portion estimation in a rural West African setting.

    PubMed

    Huybregts, L; Roberfroid, D; Lachat, C; Van Camp, J; Kolsteren, P

    2008-06-01

    To validate food photographs for food portion size estimation of frequently consumed dishes, to be used in a 24-hour recall food consumption study of pregnant women in a rural environment in Burkina Faso. This food intake study is part of an intervention evaluating the efficacy of prenatal micronutrient supplementation on birth outcomes. Women of childbearing age (15-45 years). A food photograph album containing four photographs of food portions per food item was compiled for eight selected food items. Subjects were presented two food items each in the morning and two in the afternoon. These foods were weighed to the exact weight of a food depicted in one of the photographs and were in the same receptacles. The next day, another fieldworker presented the food photographs to the subjects to test their ability to choose the correct photograph. The correct photograph out of the four proposed was chosen in 55% of 1028 estimations. For each food, proportions of underestimating and overestimating participants were balanced, except for rice and couscous. At the group level, mean differences between served and estimated portion sizes were between -8.4% and 6.3%. Subjects who attended school were almost twice as likely to choose the correct photograph. The portion size served (smallest vs. largest sizes) had a significant influence on the portion estimation ability. The results from this study indicate that in a West African rural setting, food photographs can be a valuable tool for the quantification of food portion size at the group level.

  16. Predicting muscle forces during the propulsion phase of single leg triple hop test.

    PubMed

    Alvim, Felipe Costa; Lucareli, Paulo Roberto Garcia; Menegaldo, Luciano Luporini

    2018-01-01

    Functional biomechanical tests allow the assessment of musculoskeletal system impairments in a simple way. Muscle force synergies associated with movement can provide additional information for diagnosis. However, such forces cannot be directly measured noninvasively. This study aims to estimate muscle activations and forces exerted during the preparation phase of the single leg triple hop test. Two different approaches were tested: static optimization (SO) and computed muscle control (CMC). As an indirect validation, model-estimated muscle activations were compared with surface electromyography (EMG) of selected hip and thigh muscles. Ten physically healthy active women performed a series of jumps, and ground reaction forces, kinematics and EMG data were recorded. An existing OpenSim model with 92 musculotendon actuators was used to estimate muscle forces. Reflective markers data were processed using the OpenSim Inverse Kinematics tool. Residual Reduction Algorithm (RRA) was applied recursively before running the SO and CMC. For both, the same adjusted kinematics were used as inputs. Both approaches presented similar residuals amplitudes. SO showed a closer agreement between the estimated activations and the EMGs of some muscles. Due to inherent EMG methodological limitations, the superiority of SO in relation to CMC can be only hypothesized. It should be confirmed by conducting further studies comparing joint contact forces. The workflow presented in this study can be used to estimate muscle forces during the preparation phase of the single leg triple hop test and allows investigating muscle activation and coordination. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Spacelab user implementation assessment study. Volume 3: Resource requirements development

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The resources requirements for the integration and checkout of spacelab payloads are presented in three categories: mission-unique, sustaining, and non-recurring. The requirements are identified by concept and by center. Cost estimates for the resource requirements are also presented.

  18. Estimation of multiple accelerated motions using chirp-Fourier transform and clustering.

    PubMed

    Alexiadis, Dimitrios S; Sergiadis, George D

    2007-01-01

    Motion estimation in the spatiotemporal domain has been extensively studied and many methodologies have been proposed, which, however, cannot handle both time-varying and multiple motions. Extending previously published ideas, we present an efficient method for estimating multiple, linearly time-varying motions. It is shown that the estimation of accelerated motions is equivalent to the parameter estimation of superposed chirp signals. From this viewpoint, one can exploit established signal processing tools such as the chirp-Fourier transform. It is shown that accelerated motion results in energy concentration along planes in the 4-D space: spatial frequencies-temporal frequency-chirp rate. Using fuzzy c-planes clustering, we estimate the plane/motion parameters. The effectiveness of our method is verified on both synthetic and real sequences and its advantages are highlighted.
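    The core equivalence, accelerated motion maps to chirp-rate estimation, can be sketched in one dimension: dechirp with candidate rates and keep the rate that concentrates the spectrum into a single FFT bin. Signal parameters below are illustrative.

```python
import numpy as np

fs, T = 1000.0, 1.0
t = np.arange(0.0, T, 1.0 / fs)
f0, c_true = 50.0, 80.0        # start frequency (Hz) and chirp rate (Hz/s)
sig = np.exp(2j * np.pi * (f0 * t + 0.5 * c_true * t ** 2))

# Chirp-Fourier search: the matching rate turns the chirp into a pure tone,
# so its FFT peak is maximal; mismatched rates smear the energy
rates = np.arange(0.0, 201.0, 5.0)
peak = lambda c: np.max(np.abs(np.fft.fft(sig * np.exp(-1j * np.pi * c * t ** 2))))
c_hat = max(rates, key=peak)
```

    In the paper this search runs jointly over spatial frequency, temporal frequency, and chirp rate, and fuzzy c-planes clustering then groups the detected energy concentrations into individual motions.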

  19. Big Numbers about Small Children: Estimating the Economic Benefits of Addressing Undernutrition.

    PubMed

    Alderman, Harold; Behrman, Jere R; Puett, Chloe

    2017-02-01

    Different approaches have been used to estimate the economic benefits of reducing undernutrition and to estimate the costs of investing in such programs on a global scale. While many of these studies are ultimately based on evidence from well-designed efficacy trials, all require a number of assumptions to project the impact of such trials to larger populations and to translate the value of the expected improvement in nutritional status into economic terms. This paper provides a short critique of some approaches to estimating the benefits of investments in child nutrition and then presents an alternative set of estimates based on different core data. These new estimates reinforce the basic conclusions of the existing literature: the economic value from reducing undernutrition in undernourished populations is likely to be substantial.

  20. Estimating reproduction numbers for adults and children from case data

    PubMed Central

    Glass, K.; Mercer, G. N.; Nishiura, H.; McBryde, E. S.; Becker, N. G.

    2011-01-01

    We present a method for estimating reproduction numbers for adults and children from daily onset data, using pandemic influenza A(H1N1) data as a case study. We investigate the impact of different underlying transmission assumptions on our estimates, and identify that asymmetric reproduction matrices are often appropriate. Under-reporting of cases can bias estimates of the reproduction numbers if reporting rates are not equal across the two age groups. However, we demonstrate that the estimate of the higher reproduction number is robust to disproportionate data-thinning. Applying the method to 2009 pandemic influenza H1N1 data from Japan, we demonstrate that the reproduction number for children was considerably higher than that of adults, and that our estimates are insensitive to our choice of reproduction matrix. PMID:21345858
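    The role of an asymmetric reproduction matrix can be sketched numerically: the overall reproduction number is the dominant eigenvalue of the two-group next-generation matrix. The entries below are hypothetical, not the paper's estimates for Japan.

```python
import numpy as np

# Hypothetical asymmetric reproduction matrix: entry (i, j) is the mean
# number of new cases in group i caused by one case in group j
# (group 0 = children, group 1 = adults)
K = np.array([[1.8, 0.4],
              [0.6, 0.9]])

# Overall reproduction number = dominant eigenvalue of K
R0 = float(max(abs(np.linalg.eigvals(K))))
```

    Note the asymmetry (children infect children far more than adults infect children), which is the kind of structure the abstract argues is often appropriate; a symmetric matrix would force equal cross-group terms.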

  1. Surface acoustic impediography: a new technology for fingerprint mapping and biometric identification: a numerical study

    NASA Astrophysics Data System (ADS)

    Schmitt, Rainer M.; Scott, W. Guy; Irving, Richard D.; Arnold, Joe; Bardons, Charles; Halpert, Daniel; Parker, Lawrence

    2004-09-01

    A new type of fingerprint sensor is presented. The sensor maps the acoustic impedance of the fingerprint pattern by estimating the electrical impedance of its sensor elements. The sensor substrate, made of 1-3 piezo-ceramic, which is fabricated inexpensively at large scales, can provide a resolution up to 50 μm over an area of 20 x 25 mm2. Using FE modeling, the paper presents the numerical validation of the basic principle. It evaluates an optimized pillar aspect ratio, and estimates the spatial resolution and the point spread function for a 100 μm and a 50 μm pitch model. In addition, first fingerprints obtained with the prototype sensor are presented.

  2. Studies of the net surface radiative flux from satellite radiances during FIFE

    NASA Technical Reports Server (NTRS)

    Frouin, Robert

    1993-01-01

    Studies of the net surface radiative flux from satellite radiances during First ISLSCP Field Experiment (FIFE) are presented. Topics covered include: radiative transfer model validation; calibration of VISSR and AVHRR solar channels; development and refinement of algorithms to estimate downward solar and terrestrial irradiances at the surface, including photosynthetically available radiation (PAR) and surface albedo; verification of these algorithms using in situ measurements; production of maps of shortwave irradiance, surface albedo, and related products; analysis of the temporal variability of shortwave irradiance over the FIFE site; development of a spectroscopy technique to estimate atmospheric total water vapor amount; and study of optimum linear combinations of visible and near-infrared reflectances for estimating the fraction of PAR absorbed by plants.

  3. Fear, anger, and risk.

    PubMed

    Lerner, J S; Keltner, D

    2001-07-01

    Drawing on an appraisal-tendency framework (J. S. Lerner & D. Keltner, 2000), the authors predicted and found that fear and anger have opposite effects on risk perception. Whereas fearful people expressed pessimistic risk estimates and risk-averse choices, angry people expressed optimistic risk estimates and risk-seeking choices. These opposing patterns emerged for naturally occurring and experimentally induced fear and anger. Moreover, estimates of angry people more closely resembled those of happy people than those of fearful people. Consistent with predictions, appraisal tendencies accounted for these effects: Appraisals of certainty and control moderated and (in the case of control) mediated the emotion effects. As a complement to studies that link affective valence to judgment outcomes, the present studies highlight multiple benefits of studying specific emotions.

  4. Regional and longitudinal estimation of product lifespan distribution: a case study for automobiles and a simplified estimation method.

    PubMed

    Oguchi, Masahiro; Fuse, Masaaki

    2015-02-03

    Product lifespan estimates are important information for understanding progress toward sustainable consumption and estimating the stocks and end-of-life flows of products. Published studies have reported actual product lifespans; however, quantitative data are still limited for many countries and years. This study presents regional and longitudinal estimation of lifespan distribution of consumer durables, taking passenger cars as an example, and proposes a simplified method for estimating product lifespan distribution. We estimated lifespan distribution parameters for 17 countries based on the age profile of in-use cars. Sensitivity analysis demonstrated that the shape parameter of the lifespan distribution can be replaced by a constant value for all the countries and years. This enabled a simplified estimation that does not require detailed data on the age profile. Applying the simplified method, we estimated the trend in average lifespans of passenger cars from 2000 to 2009 for 20 countries. Average lifespan differed greatly between countries (9-23 years) and was increasing in many countries. This suggests consumer behavior differs greatly among countries and has changed over time, even in developed countries. The results suggest that inappropriate assumptions of average lifespan may cause significant inaccuracy in estimating the stocks and end-of-life flows of products.
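    The simplified estimation described above, holding the shape parameter constant and fitting only the scale to an observed age profile, can be sketched on synthetic survival data. All parameter values below are illustrative, not the paper's country estimates.

```python
import numpy as np
from math import gamma

# Synthetic age profile: Weibull survival S(t) = exp(-(t/scale)^shape)
# gives the fraction of cars still in use at age t (years)
ages = np.arange(1, 26)
surv_obs = np.exp(-(ages / 14.0) ** 2.7)

# Simplified method: fix the shape at a constant, grid-search the scale only
SHAPE = 2.7
scales = np.linspace(5.0, 25.0, 401)
sse = [np.sum((np.exp(-(ages / s) ** SHAPE) - surv_obs) ** 2) for s in scales]
scale_hat = float(scales[int(np.argmin(sse))])

# Average lifespan from the Weibull mean: scale * Gamma(1 + 1/shape)
mean_life = scale_hat * gamma(1.0 + 1.0 / SHAPE)
```

    With the shape fixed, only a coarse age profile (or even a single summary statistic) is needed to pin down the scale, which is what makes the simplified method applicable to countries without detailed registration data.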

  5. How the 2SLS/IV estimator can handle equality constraints in structural equation models: a system-of-equations approach.

    PubMed

    Nestler, Steffen

    2014-05-01

    Parameters in structural equation models are typically estimated using the maximum likelihood (ML) approach. Bollen (1996) proposed an alternative non-iterative, equation-by-equation estimator that uses instrumental variables. Although this two-stage least squares/instrumental variables (2SLS/IV) estimator has good statistical properties, one problem with its application is that parameter equality constraints cannot be imposed. This paper presents a mathematical solution to this problem that is based on an extension of the 2SLS/IV approach to a system of equations. We present an example in which our approach was used to examine strong longitudinal measurement invariance. We also investigated the new approach in a simulation study that compared it with ML in the examination of the equality of two latent regression coefficients and strong measurement invariance. Overall, the results show that the suggested approach is a useful extension of the original 2SLS/IV estimator and allows for the effective handling of equality constraints in structural equation models. © 2013 The British Psychological Society.
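    The two-stage logic of the 2SLS/IV estimator can be sketched on simulated data: project the endogenous regressor onto the instrument, then regress the outcome on the fitted values. Variable names and coefficients below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Simulated endogeneity: z is a valid instrument - it moves x but is
# independent of the structural error e
z = rng.standard_normal(n)
e = rng.standard_normal(n)
x = 0.9 * z + 0.5 * e + 0.3 * rng.standard_normal(n)
y = 2.0 * x + e                      # true structural slope = 2

# Stage 1: regress x on z and form fitted values
Z = np.column_stack([np.ones(n), z])
x_fit = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2: regress y on the fitted values -> consistent slope estimate
beta_2sls = np.linalg.lstsq(np.column_stack([np.ones(n), x_fit]), y, rcond=None)[0]

# OLS for comparison: biased upward because x correlates with e
beta_ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0]
```

    Bollen's estimator applies this equation by equation to a structural equation model; the paper's contribution is to stack the equations into one system so that cross-equation equality constraints can be imposed.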

  6. Estimating Planetary Boundary Layer Heights from NOAA Profiler Network Wind Profiler Data

    NASA Technical Reports Server (NTRS)

    Molod, Andrea M.; Salmun, H.; Dempsey, M

    2015-01-01

    An algorithm was developed to estimate planetary boundary layer (PBL) heights from hourly archived wind profiler data from the NOAA Profiler Network (NPN) sites located throughout the central United States. Unlike previous studies, the present algorithm has been applied to a long record of publicly available wind profiler signal backscatter data. Under clear conditions, summertime averaged hourly time series of PBL heights compare well with Richardson-number based estimates at the few NPN stations with hourly temperature measurements. Comparisons with clear sky reanalysis based estimates show that the wind profiler PBL heights are lower by approximately 250-500 m. The geographical distribution of daily maximum PBL heights corresponds well with the expected distribution based on patterns of surface temperature and soil moisture. Wind profiler PBL heights were also estimated under mostly cloudy conditions, and are generally higher than both the Richardson number based and reanalysis PBL heights, resulting in a smaller clear-cloudy condition difference. The algorithm presented here was shown to provide a reliable summertime climatology of daytime hourly PBL heights throughout the central United States.
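    A common ingredient of such profiler-based algorithms (a generic sketch, not the authors' exact method) is the gradient criterion: the PBL top is taken at the height where signal backscatter decreases most sharply.

```python
import numpy as np

# Synthetic profile: backscatter high inside the mixed layer, dropping
# sharply across the PBL top at 1500 m (illustrative, noise-free values)
z = np.arange(100.0, 4001.0, 100.0)                # range gates (m)
snr = 10.0 / (1.0 + np.exp((z - 1500.0) / 100.0))  # sigmoid decrease

# Gradient method: PBL height = range gate of steepest backscatter decrease
pbl_hat = float(z[int(np.argmin(np.gradient(snr, z)))])
```

    Real profiles require smoothing, cloud screening, and quality control before this criterion is reliable, which is much of what an operational algorithm like the one above has to add.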

  7. Estimation of Salivary Glucose and Glycogen Content in Exfoliated Buccal Mucosal Cells of Patients with Type II Diabetes Mellitus

    PubMed Central

    Gopinathan, Deepa Moothedathu; Sukumaran, Sunil

    2015-01-01

    Background: Diabetes mellitus is a common metabolic disorder with an increasing incidence worldwide. Constant monitoring of blood glucose in diabetic patients is required, which involves painful invasive techniques. Saliva is gaining acceptance as a diagnostic tool for various systemic diseases; it can be collected noninvasively and by individuals with limited training. Aim: The aim of the present study was to analyse the possibility of using salivary glucose and the glycogen content of buccal mucosal cells as diagnostic markers in Type II diabetes mellitus patients, as an adjuvant diagnostic tool to the gold standards. Materials and Methods: The sample consisted of 30 subjects in the study group and 30 in the control group. Saliva was collected by the passive drool method. Intravenous blood samples were collected for glucose estimation. Exfoliated buccal mucosal cells were collected from apparently normal buccal mucosa, smeared on a dry glass slide and stained with PAS. Blood and salivary glucose were estimated by the glucose oxidase endpoint method. For glycogen estimation, the number of PAS-positive cells among fifty unfolded cells was analysed. Results: The results of the present study revealed a significant increase in the salivary glucose level and the number of PAS-positive buccal mucosal cells in the diabetics compared with the controls. The correlations between fasting serum glucose and fasting salivary glucose, and between fasting serum glucose and PAS-positive cells, were statistically significant, but the correlation between staining intensity and fasting serum glucose was not. Conclusion: The results of the present study reveal that salivary glucose and PAS-positive cells are increased in diabetics, and these measures can be considered an adjuvant diagnostic tool for diabetes mellitus. PMID:26155572

  8. Training of carbohydrate estimation for people with diabetes using mobile augmented reality.

    PubMed

    Domhardt, Michael; Tiefengrabner, Martin; Dinic, Radomir; Fötschl, Ulrike; Oostingh, Gertie J; Stütz, Thomas; Stechemesser, Lars; Weitgasser, Raimund; Ginzinger, Simon W

    2015-05-01

    Imprecise carbohydrate counting as a measure to guide the treatment of diabetes may be a source of errors resulting in problems in glycemic control. Exact measurements can be tedious, leading most patients to estimate their carbohydrate intake. In the pilot study presented here, a smartphone application (BE(AR)) that guides the estimation of carbohydrate amounts was used by a group of diabetic patients. Eight adult patients with diabetes mellitus type 1 were recruited for the study. At the beginning of the study, patients were introduced to BE(AR) in sessions lasting 45 minutes per patient. Patients redrew the real food in 3D on the smartphone screen. Based on a selected food type and the 3D form created using BE(AR), an estimation of carbohydrate content is calculated. Patients were supplied with the application on their personal smartphone or a loaner device and were instructed to use the application in real-world context during the study period. For evaluation purposes, a test measuring carbohydrate estimation quality was designed and performed at the beginning and the end of the study. In 44% of the estimations performed at the end of the study, the error was reduced by at least 6 grams of carbohydrate. This improvement occurred although several problems with the usage of BE(AR) were reported. Despite user interaction problems in this group of patients, the provided intervention resulted in a reduction in the absolute error of carbohydrate estimation. Intervention with smartphone applications to assist carbohydrate counting apparently results in more accurate estimations. © 2015 Diabetes Technology Society.

  9. Integrating acoustic telemetry into mark-recapture models to improve the precision of apparent survival and abundance estimates.

    PubMed

    Dudgeon, Christine L; Pollock, Kenneth H; Braccini, J Matias; Semmens, Jayson M; Barnett, Adam

    2015-07-01

    Capture-mark-recapture models are useful tools for estimating demographic parameters but often result in low precision when recapture rates are low. Low recapture rates are typical in many study systems including fishing-based studies. Incorporating auxiliary data into the models can improve precision and in some cases enable parameter estimation. Here, we present a novel application of acoustic telemetry for the estimation of apparent survival and abundance within capture-mark-recapture analysis using open population models. Our case study is based on simultaneously collecting longline fishing and acoustic telemetry data for a large mobile apex predator, the broadnose sevengill shark (Notorhynchus cepedianus), at a coastal site in Tasmania, Australia. Cormack-Jolly-Seber models showed that longline data alone had very low recapture rates while acoustic telemetry data for the same time period resulted in at least tenfold higher recapture rates. The apparent survival estimates were similar for the two datasets but the acoustic telemetry data showed much greater precision and enabled apparent survival parameter estimation for one dataset, which was inestimable using fishing data alone. Combined acoustic telemetry and longline data were incorporated into Jolly-Seber models using a Monte Carlo simulation approach. Abundance estimates were comparable to those with longline data only; however, the inclusion of acoustic telemetry data increased precision in the estimates. We conclude that acoustic telemetry is a useful tool for incorporating in capture-mark-recapture studies in the marine environment. Future studies should consider the application of acoustic telemetry within this framework when setting up the study design and sampling program.

  10. The cost of vision loss in Canada. 1. Methodology.

    PubMed

    Gordon, Keith D; Cruess, Alan F; Bellan, Lorne; Mitchell, Scott; Pezzullo, M Lynne

    2011-08-01

    This paper outlines the methodology used to estimate the cost of vision loss in Canada. The results of this study will be presented in a second paper. The cost of vision loss (VL) in Canada was estimated using a prevalence-based approach. This was done by estimating the number of people with VL in a base period (2007) and the costs associated with treating them. The cost estimates included direct health system expenditures on eye conditions that cause VL, as well as other indirect financial costs such as productivity losses. Estimates were also made of the value of the loss of healthy life, measured in Disability-Adjusted Life Years (DALYs). To estimate the number of cases of VL in the population, epidemiological data on prevalence rates were applied to population data. The number of cases of VL was stratified by gender, age, ethnicity, severity and cause. The following sources were used for estimating prevalence: population-based eye studies; Canadian surveys; Canadian journal articles and research studies; and international population-based eye studies. Direct health costs were obtained primarily from Health Canada and Canadian Institute for Health Information (CIHI) sources, while costs associated with productivity losses were based on employment information compiled by Statistics Canada and on economic theory of productivity loss. Costs related to vision rehabilitation (VR) were obtained from Canadian VR organizations. This study shows that it is possible to estimate the costs for VL for a country in the absence of ongoing local epidemiological studies. Copyright © 2011 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
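    The prevalence-based arithmetic itself is simple; a stylized sketch (all figures hypothetical, not the study's Canadian estimates):

```python
# Prevalence-based approach: cases in the base period times per-case costs
population = 1_000_000            # hypothetical population
prevalence_rate = 0.02            # hypothetical prevalence of vision loss
direct_cost_per_case = 4_000.0    # health system expenditure per case
indirect_cost_per_case = 2_500.0  # productivity and other financial losses

cases = prevalence_rate * population
total_cost = cases * (direct_cost_per_case + indirect_cost_per_case)
```

    The methodological work of the paper is in sourcing each of these inputs (stratified prevalence rates, direct and indirect unit costs, and the DALY valuation) rather than in the multiplication itself.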

  11. Mixture distributions of wind speed in the UAE

    NASA Astrophysics Data System (ADS)

    Shin, J.; Ouarda, T.; Lee, T. S.

    2013-12-01

    Wind speed probability distributions are commonly used to estimate potential wind energy. The 2-parameter Weibull distribution has been the most widely used to characterize the distribution of wind speed. However, it cannot properly model wind speed regimes whose distributions present bimodal or kurtotic shapes. Several studies have concluded that the Weibull distribution should not be used for frequency analysis of wind speed without first investigating the shape of the wind speed distribution. Given these mixture distributional characteristics of wind speed data, the application of mixture distributions deserves further investigation in the frequency analysis of wind speed. A number of studies have investigated the potential wind energy in different parts of the Arabian Peninsula, and some of them detected mixture distributional characteristics of wind speed. Nevertheless, mixture distributions have not been employed for wind speed modeling in the Arabian Peninsula. To improve our understanding of wind energy potential in the Arabian Peninsula, mixture distributions should be tested in the frequency analysis of wind speed. The aim of the current study is to assess the suitability of mixture distributions for the frequency analysis of wind speed in the UAE. Hourly mean wind speed data at 10-m height from 7 stations were used. The Weibull and Kappa distributions were employed as representatives of conventional non-mixture distributions. Ten mixture distributions were constructed by pairing four component distributions: Normal, Gamma, Weibull and Extreme Value type-one (EV-1). Three parameter estimation methods, the Expectation-Maximization (EM) algorithm, the Least Squares method and Meta-Heuristic Maximum Likelihood (MHML), were employed to estimate the parameters of the mixture distributions. To compare the goodness-of-fit of the tested distributions and parameter estimation methods for the sample wind data, the adjusted coefficient of determination, the Bayesian Information Criterion (BIC) and the Chi-squared statistic were computed. Results indicate that MHML performs best in parameter estimation for the mixture distributions used. At most of the 7 stations, mixture distributions give the best fit. When a wind speed regime shows mixture distributional characteristics, most such regimes also present the kurtotic characteristic, and applications of mixture distributions at these stations show a significant improvement in explaining the whole wind speed regime. In particular, the Weibull-Weibull mixture distribution presents the best fit for the wind speed data in the UAE.
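    The model comparison described above can be illustrated with a small sketch: fit a single Weibull and a Weibull-Weibull mixture to synthetic bimodal data and compare BIC. Direct numerical MLE is used here in place of the EM/MHML estimators employed in the study, and all sample sizes and parameter values are invented:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
# Synthetic bimodal "wind speed" sample (m/s); all parameters are invented
x = np.concatenate([
    stats.weibull_min.rvs(2.0, scale=3.0, size=600, random_state=rng),
    stats.weibull_min.rvs(3.0, scale=9.0, size=400, random_state=rng),
])
n = x.size

# Single 2-parameter Weibull fit (location fixed at zero)
shape, _, scale = stats.weibull_min.fit(x, floc=0)
ll_single = stats.weibull_min.logpdf(x, shape, scale=scale).sum()

# Weibull-Weibull mixture fitted by direct numerical MLE
def negll(theta):
    w, k1, c1, k2, c2 = theta
    pdf = (w * stats.weibull_min.pdf(x, k1, scale=c1)
           + (1 - w) * stats.weibull_min.pdf(x, k2, scale=c2))
    return -np.log(pdf + 1e-300).sum()

x0 = [0.5, 2.0, np.percentile(x, 25), 2.0, np.percentile(x, 75)]
res = optimize.minimize(negll, x0,
                        bounds=[(0.01, 0.99), (0.2, 10), (0.1, 30), (0.2, 10), (0.1, 30)])
ll_mix = -res.fun

def bic(ll, n_params):
    """BIC = k ln(n) - 2 ln(L); lower is better."""
    return n_params * np.log(n) - 2 * ll

print(bic(ll_single, 2), bic(ll_mix, 5))
```

    On clearly bimodal data the mixture's likelihood gain easily outweighs its three extra parameters, which is the pattern the study reports for stations with mixture-type regimes.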

  12. The effect of the geomagnetic field on cosmic ray energy estimates and large scale anisotropy searches on data from the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Pierre Auger Collaboration; Abreu, P.; Aglietta, M.; Ahn, E. J.; Albuquerque, I. F. M.; Allard, D.; Allekotte, I.; Allen, J.; Allison, P.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Antičić, T.; Anzalone, A.; Aramo, C.; Arganda, E.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Bäcker, T.; Balzer, M.; Barber, K. B.; Barbosa, A. F.; Bardenet, R.; Barroso, S. L. C.; Baughman, B.; Bäuml, J.; Beatty, J. J.; Becker, B. R.; Becker, K. H.; Bellétoile, A.; Bellido, J. A.; BenZvi, S.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brogueira, P.; Brown, W. C.; Bruijn, R.; Buchholz, P.; Bueno, A.; Burton, R. E.; Caballero-Mora, K. S.; Caramete, L.; Caruso, R.; Castellina, A.; Catalano, O.; Cataldi, G.; Cazon, L.; Cester, R.; Chauvin, J.; Cheng, S. H.; Chiavassa, A.; Chinellato, J. A.; Chou, A.; Chudoba, J.; Clay, R. W.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cook, H.; Cooper, M. J.; Coppens, J.; Cordier, A.; Coutu, S.; Covault, C. E.; Creusot, A.; Criss, A.; Cronin, J.; Curutiu, A.; Dagoret-Campagne, S.; Dallier, R.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; De Donato, C.; de Jong, S. J.; De La Vega, G.; de Mello Junior, W. J. M.; de Mello Neto, J. R. T.; De Mitri, I.; de Souza, V.; de Vries, K. D.; Decerprit, G.; del Peral, L.; del Río, M.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Diaz, J. C.; Díaz Castro, M. L.; Diep, P. N.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; dos Anjos, J. C.; Dova, M. T.; D'Urso, D.; Dutan, I.; Ebr, J.; Engel, R.; Erdmann, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Fajardo Tapia, I.; Falcke, H.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. 
P.; Ferrero, A.; Fick, B.; Filevich, A.; Filipčič, A.; Fliescher, S.; Fracchiolla, C. E.; Fraenkel, E. D.; Fröhlich, U.; Fuchs, B.; Gaior, R.; Gamarra, R. F.; Gambetta, S.; García, B.; García Gámez, D.; Garcia-Pinto, D.; Gascon, A.; Gemmeke, H.; Gesterling, K.; Ghia, P. L.; Giaccari, U.; Giller, M.; Glass, H.; Gold, M. S.; Golup, G.; Gomez Albarracin, F.; Gómez Berisso, M.; Gonçalves, P.; Gonzalez, D.; Gonzalez, J. G.; Gookin, B.; Góra, D.; Gorgi, A.; Gouffon, P.; Gozzini, S. R.; Grashorn, E.; Grebe, S.; Griffith, N.; Grigat, M.; Grillo, A. F.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Guzman, A.; Hague, J. D.; Hansen, P.; Harari, D.; Harmsma, S.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Herve, A. E.; Hojvat, C.; Hollon, N.; Holmes, V. C.; Homola, P.; Hörandel, J. R.; Horneffer, A.; Horvath, P.; Hrabovský, M.; Huege, T.; Insolia, A.; Ionita, F.; Italiano, A.; Jarne, C.; Jiraskova, S.; Josebachuili, M.; Kadija, K.; Kampert, K. H.; Karhan, P.; Kasper, P.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kelley, J. L.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Knapp, J.; Koang, D.-H.; Kotera, K.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuehn, F.; Kuempel, D.; Kulbartz, J. K.; Kunka, N.; La Rosa, G.; Lachaud, C.; Lautridou, P.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Lemiere, A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Lyberis, H.; Maccarone, M. C.; Macolino, C.; Maldera, S.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, J.; Marin, V.; Maris, I. C.; Marquez Falcon, H. R.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Mathes, H. J.; Matthews, J.; Matthews, J. A. J.; Matthiae, G.; Maurizio, D.; Mazur, P. O.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Mertsch, P.; Meurer, C.; Mićanović, S.; Micheletti, M. 
I.; Miller, W.; Miramonti, L.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morales, B.; Morello, C.; Moreno, E.; Moreno, J. C.; Morris, C.; Mostafá, M.; Moura, C. A.; Mueller, S.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navarro, J. L.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Nhung, P. T.; Niemietz, L.; Nierstenhoefer, N.; Nitz, D.; Nosek, D.; Nožka, L.; Nyklicek, M.; Oehlschläger, J.; Olinto, A.; Oliva, P.; Olmos-Gilbaja, V. M.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Parente, G.; Parizot, E.; Parra, A.; Parsons, R. D.; Pastor, S.; Paul, T.; Pech, M.; Pękala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Petrera, S.; Petrinca, P.; Petrolini, A.; Petrov, Y.; Petrovic, J.; Pfendner, C.; Phan, N.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Ponce, V. H.; Pontz, M.; Privitera, P.; Prouza, M.; Quel, E. J.; Querchfeld, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rivera, H.; Rizi, V.; Roberts, J.; Robledo, C.; Rodrigues de Carvalho, W.; Rodriguez, G.; Rodriguez Martino, J.; Rodriguez Rojo, J.; Rodriguez-Cabo, I.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Rouillé-d'Orfeuil, B.; Roulet, E.; Rovero, A. C.; Rühle, C.; Salamida, F.; Salazar, H.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarkar, S.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Schmidt, F.; Scholten, O.; Schoorlemmer, H.; Schovancova, J.; Schovánek, P.; Schröder, F.; Schulte, S.; Schuster, D.; Sciutto, S. J.; Scuderi, M.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Silva Lopez, H. H.; Śmiałkowski, A.; Šmída, R.; Snow, G. 
R.; Sommers, P.; Sorokin, J.; Spinka, H.; Squartini, R.; Stanic, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Strazzeri, E.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Šuša, T.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Tamashiro, A.; Tapia, A.; Tartare, M.; Taşąu, O.; Tavera Ruiz, C. G.; Tcaciuc, R.; Tegolo, D.; Thao, N. T.; Thomas, D.; Tiffenberg, J.; Timmermans, C.; Tiwari, D. K.; Tkaczyk, W.; Todero Peixoto, C. J.; Tomé, B.; Tonachini, A.; Travnicek, P.; Tridapalli, D. B.; Tristram, G.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van den Berg, A. M.; Varela, E.; Vargas Cárdenas, B.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Wahlberg, H.; Wahrlich, P.; Wainberg, O.; Walz, D.; Warner, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Westerhoff, S.; Whelan, B. J.; Wieczorek, G.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Winnick, M. G.; Wommer, M.; Wundheiler, B.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zimbres Silva, M.; Ziolkowski, M.

    2011-11-01

    We present a comprehensive study of the influence of the geomagnetic field on the energy estimation of extensive air showers with a zenith angle smaller than 60°, detected at the Pierre Auger Observatory. The geomagnetic field induces an azimuthal modulation of the estimated energy of cosmic rays up to the ~ 2% level at large zenith angles. We present a method to account for this modulation of the reconstructed energy. We analyse the effect of the modulation on large scale anisotropy searches in the arrival direction distributions of cosmic rays. At a given energy, the geomagnetic effect is shown to induce a pseudo-dipolar pattern at the percent level in the declination distribution that needs to be accounted for.

  13. Geostatistical risk estimation at waste disposal sites in the presence of hot spots.

    PubMed

    Komnitsas, Kostas; Modis, Kostas

    2009-05-30

    The present paper aims to estimate risk by using geostatistics at the wider coal mining/waste disposal site of Belkovskaya, Tula region, in Russia. In this area the presence of hot spots causes a spatial trend in the mean value of the random field and a non-Gaussian data distribution. Prior to application of geostatistics, subtraction of trend and appropriate smoothing and transformation of the data into a Gaussian form were carried out; risk maps were then generated for the wider study area in order to assess the probability of exceeding risk thresholds. Finally, the present paper discusses the need for homogenization of soil risk thresholds regarding hazardous elements that will enhance reliability of risk estimation and enable application of appropriate rehabilitation actions in contaminated areas.
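    Once the data have been detrended and transformed to a Gaussian form, the probability of exceeding a risk threshold at each grid node follows directly from the kriging mean and kriging standard deviation. A minimal sketch with hypothetical values (not the Belkovskaya data):

```python
from scipy.stats import norm

def exceedance_prob(krige_mean, krige_std, threshold):
    """P[Z > threshold] at a node of a Gaussian (normal-score transformed) field."""
    return norm.sf((threshold - krige_mean) / krige_std)

# Hypothetical node: kriged value 80 mg/kg, kriging std 20 mg/kg, threshold 100 mg/kg
p = exceedance_prob(80.0, 20.0, 100.0)
print(round(p, 4))  # 0.1587
```

    Mapping this probability over all nodes yields the risk map; the threshold itself is exactly the quantity whose harmonization across jurisdictions the paper argues for.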

  14. Born iterative reconstruction using perturbed-phase field estimates

    PubMed Central

    Astheimer, Jeffrey P.; Waag, Robert C.

    2008-01-01

    A method of image reconstruction from scattering measurements for use in ultrasonic imaging is presented. The method employs distorted-wave Born iteration but does not require using a forward-problem solver or solving large systems of equations. These calculations are avoided by limiting intermediate estimates of medium variations to smooth functions in which the propagated fields can be approximated by phase perturbations derived from variations in a geometric path along rays. The reconstruction itself is formed by a modification of the filtered-backpropagation formula that includes correction terms to account for propagation through an estimated background. Numerical studies that validate the method for parameter ranges of interest in medical applications are presented. The efficiency of this method offers the possibility of real-time imaging from scattering measurements. PMID:19062873

  15. Application of cokriging techniques for the estimation of hail size

    NASA Astrophysics Data System (ADS)

    Farnell, Carme; Rigo, Tomeu; Martin-Vide, Javier

    2018-01-01

    There are primarily two ways of estimating hail size: direct interpolation of point observations, and transformation of remote sensing fields into measurements of hail properties. Both techniques have advantages and limitations for generating the resulting map of hail damage. This paper presents a new methodology that combines the two techniques, minimising their limitations while retaining the benefits of interpolation and remote sensing data. The methodology was tested on several episodes, and good results were obtained for the estimation of hail size at practically all the points analysed. The study area has a large database of hail episodes and for this reason constitutes an optimal test bench.

  16. Descriptive epidemiology of cervical dystonia.

    PubMed

    Defazio, Giovanni; Jankovic, Joseph; Giel, Jennifer L; Papapetropoulos, Spyridon

    2013-01-01

    Cervical dystonia (CD), the most common form of adult-onset focal dystonia, has a heterogeneous clinical presentation with variable clinical features, leading to difficulties and delays in diagnosis. Owing to the lack of reviews specifically focusing on the frequency of primary CD in the general population, we performed a systematic literature search to examine its prevalence and incidence and to analyze methodological differences among studies. Sixteen articles met our methodological criteria. Because the reported prevalence estimates varied widely across studies, we analyzed methodological differences and other factors to determine whether true differences in prevalence exist among geographic areas (and by gender and age distributions), and to facilitate recommendations for future studies. Prevalence estimates ranged from 20 to 4,100 cases/million. Generally, studies that relied on service-based and record-linkage system data likely underestimated the prevalence of CD, whereas population-based studies suffered from over-ascertainment. The more methodologically robust studies yielded estimates of 28 to 183 cases/million. Despite the varying prevalence estimates, an approximate 2:1 female:male ratio was consistent across many studies. Three studies estimated incidence, ranging from 8 to 12 cases/million person-years. Although several studies have attempted to estimate the prevalence and incidence of CD, there is a need for additional well-designed epidemiological studies on primary CD that include large populations; use defined CD diagnostic criteria; and stratify for factors such as age, gender, and ethnicity.

  17. DISABILITIES IN OKLAHOMA--ESTIMATES AND PROJECTIONS, REPORT OF THE OKLAHOMA SURVEY OF DISABILITIES.

    ERIC Educational Resources Information Center

    BOHLEBER, MICHAEL E.

    THE PURPOSE OF THE STUDY WAS TO PROVIDE REASONABLY ACCURATE ESTIMATES OF THE NUMBER AND TYPES OF DISABLED PERSONS AND THEIR NEEDS AS A BASIS FOR BOTH PRESENT AND FUTURE PLANNING. PERSONAL INTERVIEWS WERE CONDUCTED WITH ADULT RESPONDENTS IN 3,000 HOUSEHOLDS IN OKLAHOMA, A RANDOM SAMPLE STRATIFIED ON THE RURAL-URBAN DIMENSION. DATA FROM 2,058…

  18. Toward robust estimation of the components of forest population change: simulation results

    Treesearch

    Francis A. Roesch

    2014-01-01

    This report presents the full simulation results of the work described in Roesch (2014), in which multiple levels of simulation were used to test the robustness of estimators for the components of forest change. In that study, a variety of spatial-temporal populations were created based on, but more variable than, an actual forest monitoring dataset, and then those...

  19. IDENTIFICATION OF OFF-FARM AGRICULTURAL OCCUPATIONS AND THE EDUCATION NEEDED FOR EMPLOYMENT IN THESE OCCUPATIONS IN DELAWARE.

    ERIC Educational Resources Information Center

    BARWICK, RALPH P.

    THE PURPOSES OF THE STUDY WERE TO (1) IDENTIFY PRESENT AND EMERGING OFF-FARM AGRICULTURAL OCCUPATIONS, (2) ESTIMATE THE NUMBER EMPLOYED, (3) ESTIMATE THE NUMBER TO BE EMPLOYED IN THE FUTURE, AND (4) DETERMINE COMPETENCIES NEEDED IN SELECTED OCCUPATIONAL FAMILIES. A DISPROPORTIONATE RANDOM SAMPLE OF 267 BUSINESSES OR SERVICES WAS DRAWN FROM A LIST…

  20. Those Who Are Left behind: An Estimate of the Number of Family Members of Suicide Victims in Japan

    ERIC Educational Resources Information Center

    Chen, Joe; Choi, Yun Jeong; Mori, Kohta; Sawada, Yasuyuki; Sugano, Saki

    2009-01-01

    This paper contributes to the suicide research literature by presenting a procedure for estimating the number of family members who lose a loved one to suicide, together with the resulting estimates. Using Japanese aggregate-level data, three main findings emerge: first, there are approximately five bereaved family members per suicide; second, in 2006, there were about…

  1. Consequences of Ignoring Guessing when Estimating the Latent Density in Item Response Theory

    ERIC Educational Resources Information Center

    Woods, Carol M.

    2008-01-01

    In Ramsay-curve item response theory (RC-IRT), the latent variable distribution is estimated simultaneously with the item parameters. In extant Monte Carlo evaluations of RC-IRT, the item response function (IRF) used to fit the data is the same one used to generate the data. The present simulation study examines RC-IRT when the IRF is imperfectly…

  2. Highlights of Trends in Pregnancies and Pregnancy Rates by Outcome: Estimates for the United States, 1976-96.

    ERIC Educational Resources Information Center

    Ventura, Stephanie J.; Mosher, William D.; Curtin, Sally C.; Abma, Joyce C.; Henshaw, Stanley

    1999-01-01

    This report presents key findings from a comprehensive report on pregnancies and pregnancy rates for U.S. women. The study incorporates birth, abortion, and fetal loss data to compile national estimates of pregnancy rates according to a variety of characteristics, including age, race, Hispanic origin, and marital status. Data from the National…

  3. Near-Miss Effects on Response Latencies and Win Estimations of Slot Machine Players

    ERIC Educational Resources Information Center

    Dixon, Mark R.; Schreiber, James E.

    2004-01-01

    The present study examined the degree to which slot machine near-miss trials, or trials that displayed 2 of 3 winning symbols on the payoff line, affected response times and win estimations of 12 recreational slot machine players. Participants played a commercial slot machine in a casino-like laboratory for course extra-credit points. Videotaped…

  4. Improved soil water deficit estimation through the integration of canopy temperature measurements into a soil water balance model

    USDA-ARS?s Scientific Manuscript database

    Correct prediction of the dynamics of total available water in the root zone (TAWr) is critical for irrigation management as shown in the soil water balance model presented in FAO paper 56 (Allen et al., 1998). In this study, we propose a framework to improve TAWr estimation by incorporating the cro...

  5. Morphological study on the prediction of the site of surface slides

    Treesearch

    Hiromasa Hiura

    1991-01-01

    The annual occurrence of surface slides in the basin was estimated by modifying Yoshimatsu's estimation formula. The Weibull distribution function proved useful for representing the state and transition of surface slides in the basin. The three parameters of the Weibull function were found to be linear functions of the area ratio a/A. The...

  6. The Economic Burden of Child Maltreatment in the United States and Implications for Prevention

    ERIC Educational Resources Information Center

    Fang, Xiangming; Brown, Derek S.; Florence, Curtis S.; Mercy, James A.

    2012-01-01

    Objectives: To present new estimates of the average lifetime costs per child maltreatment victim and aggregate lifetime costs for all new child maltreatment cases incurred in 2008 using an incidence-based approach. Methods: This study used the best available secondary data to develop cost per case estimates. For each cost category, the paper used…

  7. On the Utility of National Datasets and Resource Cost Models for Estimating Faculty Instructional Costs in Higher Education

    ERIC Educational Resources Information Center

    Morphew, Christopher; Baker, Bruce

    2007-01-01

    In this article, the authors present the results of a research study in which they used two national datasets to construct and examine a model that estimates relative faculty instructional costs for specific undergraduate degree programs and also identifies differences in these costs by region and institutional type. They conducted this research…

  8. Pressure injury in Australian public hospitals: a cost-of-illness study.

    PubMed

    Nguyen, Kim-Huong; Chaboyer, Wendy; Whitty, Jennifer A

    2015-06-01

    Pressure injuries (PI) are largely preventable and can be viewed as an adverse outcome of a healthcare admission, yet they affect millions of people and consume billions of dollars in healthcare spending. The existing literature in Australia presents a patchy picture of the economic burden of PI on society and the health system. The aim of the present study was to provide a more comprehensive and updated picture of PI by state and severity using publicly available data. A cost-of-illness analysis was conducted using a prevalence approach and a 1-year time horizon, based on data from the existing literature extrapolated using simulation methods to estimate the costs by PI severity and state subgroups. The treatment cost across all states and severities in 2012-13 was estimated to be A$983 million per annum, representing approximately 1.9% of all public hospital expenditure or 0.6% of public recurrent health expenditure. The opportunity cost was valued at an additional A$820 million per annum. These estimates correspond to a total of 121 645 PI cases and 524 661 bed days lost in 2012-13. The costs estimated in the present study highlight the economic waste for the Australian health system associated with a largely avoidable injury. Wastage can also be reduced by preventing moderate injuries (Stage I and II) from developing into severe cases (Stage III and IV), because severe cases accounted for 12% of cases but 30% of the total cost.

  9. Manpower studies for the United States. Part II. Demand for eye care. A public opinion poll based upon a Gallup poll survey.

    PubMed

    Reinecke, R D; Steinberg, T

    1981-04-01

    This is the second in the series of Ophthalmology Manpower Studies. Part I presented estimates of disease prevalence and incidence, the average amount of time required to care for such conditions, and based on that information, the total hours of ophthalmological services required to care for all the projected need in the population. Using different estimates of the average number of hours worked per year per ophthalmologist (based on a 35, 40 and 48 hours/week in patient care), estimates of the total number of ophthalmologists required were calculated. This method is basically similar to the method later adopted by the Graduate Medical Education National Advisory Committee (GMENAC) to arrive at estimates of hours of ophthalmological services required for 1990. However, instead of using all the need present in the population, the GMENAC panel chose to use an "adjusted-needs based" model as a compromise between total need and actual utilization, the former being an overestimation and the latter being an underestimation since it is in part a function of the barriers to medical care. Since some of these barriers to medical care include informational factors, as well as availability and accessibility, this study was undertaken to assess the utilization of these services and the adequacy of present ophthalmological manpower in the opinion of the consumer. Also, since the consumer's choice or behavior depends on being informed about the differences between optometrists and ophthalmologists, such knowledge was assessed and the responses further evaluated after explanatory statements were made to the responders.

  10. Detection of soil erosion within pinyon-juniper woodlands using Thematic Mapper (TM) data

    NASA Technical Reports Server (NTRS)

    Price, Kevin P.

    1993-01-01

    Multispectral measurements collected by Landsat Thematic Mapper (TM) were correlated with field measurements, direct soil loss estimates, and Universal Soil Loss Equation (USLE) estimates to determine the sensitivity of TM data to varying degrees of soil erosion in pinyon-juniper woodland in central Utah. TM data were also evaluated as a predictor of the USLE Crop Management C factor for pinyon-juniper woodlands. TM spectral data were consistently better predictors of soil erosion factors than any combination of field factors. TM data were more sensitive to vegetation variations than the USLE C factor. USLE estimates showed low annual rates of erosion which varied little among the study sites. Direct measurements of rate of soil loss using the SEDIMENT (Soil Erosion DIrect measureMENT) technique, indicated high and varying rates of soil loss among the sites since tree establishment. Erosion estimates from the USLE and SEDIMENT methods suggest that erosion rates have been severe in the past, but because significant amounts of soil have already been eroded, and the surface is now armored by rock debris, present erosion rates are lower. Indicators of accelerated erosion were still present on all sites, however, suggesting that the USLE underestimated erosion within the study area.
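    The USLE referenced above is a simple product of factors (A = R · K · LS · C · P), with the C factor being the quantity the TM data were evaluated as predicting. The sketch below uses entirely invented factor values purely to show the arithmetic:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P."""
    return R * K * LS * C * P

# Entirely hypothetical factor values for a pinyon-juniper hillslope
A = usle_soil_loss(R=50.0,   # rainfall erosivity
                   K=0.3,    # soil erodibility
                   LS=1.2,   # slope length-steepness
                   C=0.05,   # cover management (the factor predicted from TM data)
                   P=1.0)    # support practice (none)
print(A)
```

    Because the equation is multiplicative, a remotely sensed C factor that is too low scales the whole annual estimate down proportionally, which is one way the underestimation discussed above can arise.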

  11. Categorical Working Memory Representations are used in Delayed Estimation of Continuous Colors

    PubMed Central

    Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J

    2016-01-01

    In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In two experiments we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. PMID:27797548

  12. Beneficial effect of daidzin in dry eye rat model through the suppression of inflammation and oxidative stress in the cornea.

    PubMed

    Xiao, Fan; Cui, Hua; Zhong, Xiao

    2018-05-01

    The present investigation evaluates the effect of daidzin in a dry eye rat model through the suppression of inflammation and oxidative stress in the cornea. Briefly, electron spin resonance was used to estimate the radical scavenging activity of daidzin, and a COX Fluorescent Activity Assay Kit was used to estimate PGS activity. The dry eye rat model was developed by removing the lacrimal gland, and the effect of daidzin was evaluated by estimating the fluorescein score, tear volume and the expression of heme oxygenase (HO-1), TNF-α, interleukin 6 (IL-6), matrix metallopeptidase 9 (MMP-9) and PGS-2. The results suggest that daidzin possesses tyrosyl radical scavenging activity and thereby decreases oxidative stress. PGS activity, which increases significantly in dry eye, was inhibited by daidzin treatment through competitive inhibition of PGS. Daidzin also recovered the tear volume in the dry eye rat model in which the lacrimal gland was removed, and corneal erosion was improved. The present study thus concludes that treatment with daidzin protects the cornea in the dry eye rat model by suppressing inflammation and oxidative stress.

  13. Categorical working memory representations are used in delayed estimation of continuous colors.

    PubMed

    Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J

    2017-01-01

    In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember, and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work, we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In 2 experiments, we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
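    The "standard mathematical model" that the authors extend is commonly implemented as a two-component mixture of a von Mises memory distribution and uniform guessing, fitted by maximum likelihood. Below is a minimal sketch of that baseline model on simulated response errors; the categorical component the paper adds is omitted, and all parameter values (30% guess rate, precision κ = 8) are invented:

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(7)
n = 2000
# Simulated response errors (radians): 30% uniform guesses, 70% von Mises memory
is_guess = rng.random(n) < 0.3
err = np.where(is_guess,
               rng.uniform(-np.pi, np.pi, n),
               stats.vonmises.rvs(8.0, size=n, random_state=rng))

def negll(theta):
    g, kappa = theta                     # guess rate and memory precision
    pdf = g / (2 * np.pi) + (1 - g) * stats.vonmises.pdf(err, kappa)
    return -np.log(pdf).sum()

res = optimize.minimize(negll, x0=[0.5, 4.0], bounds=[(1e-3, 0.999), (0.1, 100)])
g_hat, kappa_hat = res.x
print(g_hat, kappa_hat)
```

    The paper's argument is that this continuous-only formulation misattributes responses based on coarse color categories; their extended model adds a categorical response component to the same mixture likelihood.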

  14. Does exposure to simulated patient cases improve accuracy of clinicians' predictive value estimates of diagnostic test results? A within-subjects experiment at St Michael's Hospital, Toronto, Canada.

    PubMed

    Armstrong, Bonnie; Spaniol, Julia; Persaud, Nav

    2018-02-13

    Clinicians often overestimate the probability of a disease given a positive test result (positive predictive value; PPV) and the probability of no disease given a negative test result (negative predictive value; NPV). The purpose of this study was to investigate whether experiencing simulated patient cases (ie, an 'experience format') would promote more accurate PPV and NPV estimates compared with a numerical format. Participants were presented with information about three diagnostic tests for the same fictitious disease and were asked to estimate the PPV and NPV of each test. Tests varied with respect to sensitivity and specificity. Information about each test was presented once in the numerical format and once in the experience format. The study used a 2 (format: numerical vs experience) × 3 (diagnostic test: gold standard vs low sensitivity vs low specificity) within-subjects design. The study was completed online, via Qualtrics (Provo, Utah, USA). 50 physicians (12 clinicians and 38 residents) from the Department of Family and Community Medicine at St Michael's Hospital in Toronto, Canada, completed the study. All participants had completed at least 1 year of residency. Estimation accuracy was quantified by the mean absolute error (MAE; absolute difference between estimate and true predictive value). PPV estimation errors were larger in the numerical format (MAE=32.6%, 95% CI 26.8% to 38.4%) compared with the experience format (MAE=15.9%, 95% CI 11.8% to 20.0%, d =0.697, P<0.001). Likewise, NPV estimation errors were larger in the numerical format (MAE=24.4%, 95% CI 14.5% to 34.3%) than in the experience format (MAE=11.0%, 95% CI 6.5% to 15.5%, d =0.303, P=0.015). Exposure to simulated patient cases promotes accurate estimation of predictive values in clinicians. This finding carries implications for diagnostic training and practice. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. 
No commercial use is permitted unless otherwise expressly granted.
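    The predictive values the participants were asked to estimate follow directly from Bayes' theorem. As a minimal sketch (the test characteristics and prevalence below are illustrative, not those of the study's three fictitious tests):

```python
# Sketch: computing PPV and NPV from sensitivity, specificity and prevalence
# via Bayes' theorem. Numbers below are illustrative, not the study's tests.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test with the given characteristics."""
    tp = sensitivity * prevalence                # true positive mass
    fp = (1 - specificity) * (1 - prevalence)    # false positive mass
    tn = specificity * (1 - prevalence)          # true negative mass
    fn = (1 - sensitivity) * prevalence          # false negative mass
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# A fairly accurate test applied at 10% prevalence: the PPV is only 50%,
# which is exactly the kind of value clinicians tend to overestimate.
ppv, npv = predictive_values(sensitivity=0.9, specificity=0.9, prevalence=0.1)
```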

  15. Transcriptome-derived stromal and immune scores infer clinical outcomes of patients with cancer.

    PubMed

    Liu, Wei; Ye, Hua; Liu, Ying-Fu; Xu, Chao-Qun; Zhong, Yue-Xian; Tian, Tian; Ma, Shi-Wei; Tao, Huan; Li, Ling; Xue, Li-Chun; He, Hua-Qin

    2018-04-01

    The stromal and immune cells that form the tumor microenvironment serve a key role in the aggressiveness of tumors. Current tumor-centric interpretations of cancer transcriptome data ignore the roles of stromal and immune cells. The aim of the present study was to investigate the clinical utility of stromal and immune cells in tissue-based transcriptome data. The 'Estimation of STromal and Immune cells in MAlignant Tumor tissues using Expression data' (ESTIMATE) algorithm was used to probe diverse cancer datasets, and the fraction of stromal and immune cells in tumor tissues was scored. The association between the ESTIMATE scores and patient survival data was assessed, indicating that the two scores have implications for patient survival, metastasis and recurrence. Analysis of a colorectal cancer progression dataset revealed that decreased levels of immune cells could serve an important role in cancer progression. The results of the present study indicated that transcriptome-derived stromal and immune scores may be a useful indicator of cancer prognosis.

  16. The worldwide costs of dementia 2015 and comparisons with 2010.

    PubMed

    Wimo, Anders; Guerchet, Maëlenn; Ali, Gemma-Claire; Wu, Yu-Tzu; Prina, A Matthew; Winblad, Bengt; Jönsson, Linus; Liu, Zhaorui; Prince, Martin

    2017-01-01

    In 2010, Alzheimer's Disease International presented estimates of the global cost of illness (COI) of dementia. Since then, new studies have been conducted, and the number of people with dementia has increased. Here, we present an update of the global cost estimates. This is a societal, prevalence-based global COI study. The worldwide costs of dementia were estimated at United States (US) $818 billion in 2015, an increase of 35% since 2010; 86% of the costs occur in high-income countries. Costs of informal care and the direct costs of social care still contribute similar proportions of total costs, whereas the costs in the medical sector are much lower. The threshold of US $1 trillion will be crossed by 2018. Worldwide costs of dementia are enormous and still inequitably distributed. The increase in costs arises from increases both in the number of people with dementia and in per-person costs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Estimates of emergency operating capacity in US manufacturing and nonmanufacturing industries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belzer, D.B.; Serot, D.E.; Kellogg, M.A.

    1991-03-01

    Development of integrated mobilization preparedness policies requires planning estimates of available productive capacity during national emergency conditions. Such estimates must be developed in a manner that allows evaluation of current trends in capacity and the consideration of uncertainties in various data inputs and in engineering assumptions. This study, conducted by Pacific Northwest Laboratory (PNL), developed estimates of emergency operating capacity (EOC) for 446 manufacturing industries at the 4-digit Standard Industrial Classification (SIC) level of aggregation and for 24 key non-manufacturing sectors. This volume presents tabular and graphical results of the historical analysis and projections for each SIC industry. (JF)

  18. Estimating procedure times for surgeries by determining location parameters for the lognormal model.

    PubMed

    Spangler, William E; Strum, David P; Vargas, Luis G; May, Jerrold H

    2004-05-01

    We present an empirical study of methods for estimating the location parameter of the lognormal distribution. Our results identify the best order statistic to use, and indicate that using the best order statistic instead of the median may lead to less frequent incorrect rejection of the lognormal model, more accurate critical value estimates, and higher goodness-of-fit. Using simulation data, we constructed and compared two models for identifying the best order statistic, one based on conventional nonlinear regression and the other using a data mining/machine learning technique. Better surgical procedure time estimates may lead to improved surgical operations.
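    The location (shift) parameter problem can be sketched as follows. The study models which order statistic works best; as a simplified illustration we anchor the shift on the lowest order statistic (the sample minimum), then fit the remaining lognormal parameters on the shifted logs. All data are simulated; this is not the authors' fitted procedure.

```python
# Minimal sketch of shifted (three-parameter) lognormal fitting: take a low
# order statistic as the location estimate, then estimate mu/sigma from the
# shifted logs. Using the sample minimum is purely illustrative; the paper
# identifies which order statistic performs best.
import numpy as np

rng = np.random.default_rng(0)
true_loc = 30.0                       # e.g. a fixed setup time, in minutes
x = true_loc + rng.lognormal(mean=3.0, sigma=0.5, size=5000)

loc_hat = np.min(x) - 1e-6            # order-statistic-based location estimate
logs = np.log(x - loc_hat)            # offset keeps all logs finite
mu_hat, sigma_hat = logs.mean(), logs.std()
```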

  19. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

    Inverse probability weighting estimation has been widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
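    For reference, the basic inverse probability weighting estimator of the average treatment effect, before any measurement error correction, can be sketched on simulated data (the propensity model and effect size below are invented for illustration):

```python
# Sketch of inverse probability weighting (IPW) for the average treatment
# effect (ATE), with a correctly measured outcome. Simulated data; the
# confounder, propensity model and true effect are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x = rng.normal(size=n)                    # confounder
p = 1 / (1 + np.exp(-x))                  # true propensity score P(T=1 | x)
t = rng.binomial(1, p)                    # treatment assignment
y = 2.0 * t + x + rng.normal(size=n)      # outcome; true ATE = 2

# Horvitz-Thompson style IPW estimator using the true propensities:
ate_hat = np.mean(t * y / p) - np.mean((1 - t) * y / (1 - p))
```

A naive analysis would apply the same formula to a mismeasured version of y; the paper characterizes when that still works (additive error, continuous y) and when it does not (binary y).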

  20. Adaptive Anchoring Model: How Static and Dynamic Presentations of Time Series Influence Judgments and Predictions.

    PubMed

    Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick

    2018-01-01

    When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can be viewed retrospectively and simultaneously (static mode), rather than experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment but with different consequences; hence, dynamic presentation improved prediction accuracy, but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model-the adaptive anchoring model (ADAM)-to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's model predictions for the forecasting and judgment tasks fit better with the response data than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task. Copyright © 2017 The Authors. 
Cognitive Science published by Wiley Periodicals, Inc. on behalf of Cognitive Science Society.
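    The anchoring-with-insufficient-adjustment account of trend damping can be illustrated with a toy forecaster. This is a deliberate simplification, not ADAM itself; the anchor window and adjustment weight are invented for illustration:

```python
# Toy anchor-and-adjust forecaster: the prediction anchors on the mean of the
# k most recent values and adjusts only partially toward the last observation.
# Parameter values (k, adjustment) are illustrative, not ADAM's.

def anchored_forecast(series, k=3, adjustment=0.5):
    recent = series[-k:]
    anchor = sum(recent) / len(recent)
    return anchor + adjustment * (series[-1] - anchor)

upward = [10, 12, 14, 16, 18]
# For an upward trend the forecast (17.0) falls below the trend
# continuation (20), reproducing trend damping.
forecast = anchored_forecast(upward)
```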

  1. Cogeneration technology alternatives study. Volume 4: Heat Sources, balance of plant and auxiliary systems

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Data and information established for heat sources balance of plant items, thermal energy storage, and heat pumps are presented. Design case descriptions are given along with projected performance values. Capital cost estimates for representative cogeneration plants are also presented.

  2. Female genital mutilation/cutting in Italy: an enhanced estimation for first generation migrant women based on 2016 survey data.

    PubMed

    Ortensi, Livia Elisa; Farina, Patrizia; Leye, Els

    2018-01-12

    Migration flows of women from Female Genital Mutilation/Cutting practicing countries have generated a need for data on women potentially affected by Female Genital Mutilation/Cutting. This paper presents enhanced estimates for foreign-born women and asylum seekers in Italy in 2016, with the aim of supporting resource planning and policy making, and advancing the methodological debate on estimation methods. The estimates build on the most recent methodological developments in Female Genital Mutilation/Cutting direct and indirect estimation for non-practicing countries. Direct estimation of prevalence was performed for 9 communities using the results of the survey FGM-Prev, held in Italy in 2016. Prevalence for communities not involved in the FGM-Prev survey was estimated using the 'extrapolation-of-FGM/C countries prevalence data' method, with corrections according to the selection hypothesis. It is estimated that 60 to 80 thousand foreign-born women aged 15 and over with Female Genital Mutilation/Cutting were present in Italy in 2016. We also estimated the presence of around 11 to 13 thousand cut women aged 15 and over among asylum seekers to Italy in 2014-2016. Due to the long-established presence of female migrants from some practicing communities, Female Genital Mutilation/Cutting is emerging as an issue also among women aged 60 and over from selected communities. Female Genital Mutilation/Cutting is an additional source of concern for slightly more than 60% of women seeking asylum. Reliable estimates on Female Genital Mutilation/Cutting at country level are important for evidence-based policy making and service planning. This study suggests that indirect estimations cannot fully replace direct estimations, even if corrections for migrant socioeconomic selection can be implemented to reduce the bias.

  3. Apprentices & Trainees: Early Trend Estimates. December Quarter, 2014

    ERIC Educational Resources Information Center

    National Centre for Vocational Education Research (NCVER), 2015

    2015-01-01

    This publication presents early estimates of apprentice and trainee commencements for the December quarter 2014. Indicative information about this quarter is presented here; the most recent figures are estimated, taking into account reporting lags that occur at the time of data collection. The early trend estimates are derived from the National…

  4. Apprentices & Trainees: Early Trend Estimates. March Quarter, 2012

    ERIC Educational Resources Information Center

    National Centre for Vocational Education Research (NCVER), 2012

    2012-01-01

    This publication presents early estimates of apprentice and trainee commencements for the March quarter 2012. Indicative information about this quarter is presented here; the most recent figures are estimated, taking into account reporting lags that occur at the time of data collection. The early trend estimates are derived from the National…

  5. Estimating Uncertainty in Long Term Total Ozone Records from Multiple Sources

    NASA Technical Reports Server (NTRS)

    Frith, Stacey M.; Stolarski, Richard S.; Kramarova, Natalya; McPeters, Richard D.

    2014-01-01

    Total ozone measurements derived from the TOMS and SBUV backscattered solar UV instrument series cover the period from late 1978 to the present. As the SBUV series of instruments comes to an end, we look to the 10 years of data from the AURA Ozone Monitoring Instrument (OMI) and two years of data from the Ozone Mapping Profiler Suite (OMPS) on board the Suomi National Polar-orbiting Partnership satellite to continue the record. When combining these records to construct a single long-term data set for analysis we must estimate the uncertainty in the record resulting from potential biases and drifts in the individual measurement records. In this study we present a Monte Carlo analysis used to estimate uncertainties in the Merged Ozone Dataset (MOD), constructed from the Version 8.6 SBUV2 series of instruments. We extend this analysis to incorporate OMI and OMPS total ozone data into the record and investigate the impact of multiple overlapping measurements on the estimated error. We also present an updated column ozone trend analysis and compare the size of statistical error (error from variability not explained by our linear regression model) to that from instrument uncertainty.
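    The Monte Carlo idea can be sketched as follows: perturb the merged record with random calibration drifts, refit the trend each time, and take the spread of fitted trends as the instrument-related uncertainty. Record length, true trend, and drift magnitudes below are invented for illustration, not MOD values:

```python
# Sketch of Monte Carlo trend-uncertainty estimation for a merged record:
# repeatedly add a random linear instrument drift, refit the linear trend,
# and summarize the spread of fitted trends. All magnitudes are illustrative.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1979, 2014, dtype=float)
true_trend = -0.1                                  # DU/yr, illustrative
record = true_trend * (years - years[0]) + rng.normal(0, 0.5, years.size)

trends = []
for _ in range(500):
    # candidate calibration drift error, linear in time (sd 0.02 DU/yr):
    drift = rng.normal(0, 0.02) * (years - years.mean())
    slope, intercept = np.polyfit(years, record + drift, 1)
    trends.append(slope)

trend_sigma = np.std(trends)    # uncertainty from potential instrument drift
```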

  6. Developmental Differences in Frequency Judgments of Words and Pictures.

    ERIC Educational Resources Information Center

    Ghatala, Elizabeth S.; Levin, Joel R.

    Children in kindergarten, third grade, and fifth grade were presented a list of either pictures or words (with items presented varying numbers of times on the study trail). In both picture and word conditions, half of the subjects estimated how many times each item had been presented (absolute judgments) and the other half judged which of two…

  7. Study of plasma environments for the integrated Space Station electromagnetic analysis system

    NASA Technical Reports Server (NTRS)

    Singh, Nagendra

    1992-01-01

    The final report includes an analysis of various plasma effects on the electromagnetic environment of the Space Station Freedom. Effects of arcing are presented. Concerns of control of arcing by a plasma contactor are highlighted. Generation of waves by contaminant ions are studied and amplitude levels of the waves are estimated. Generation of electromagnetic waves by currents in the structure of the space station, driven by motional EMF, is analyzed and the radiation level is estimated.

  8. A potential-energy surface study of the 2A1 and low-lying dissociative states of the methoxy radical

    NASA Technical Reports Server (NTRS)

    Jackels, C. F.

    1985-01-01

    Accurate, ab initio quantum chemical techniques are applied in the present study of low-lying bound and dissociative states of the methoxy radical at C3v conformations, using a double-zeta quality basis set that is augmented with polarization and diffuse functions. Excitation energy estimates are obtained for vertical excitation, vertical deexcitation, and system origin. The rate of methoxy photolysis is estimated to be too small to warrant its inclusion in atmospheric models.

  9. The estimation of galactic cosmic ray penetration and dose rates

    NASA Technical Reports Server (NTRS)

    Burrell, M. O.; Wright, J. J.

    1972-01-01

    This study is concerned with approximation methods that can be readily applied to estimate the absorbed dose rate from cosmic rays in rads - tissue or rems inside simple geometries of aluminum. The present work is limited to finding the dose rate at the center of spherical shells or behind plane slabs. The dose rate is calculated at tissue-point detectors or for thin layers of tissue. This study considers cosmic-rays dose rates for both free-space and earth-orbiting missions.

  10. The Flight Optimization System Weights Estimation Method

    NASA Technical Reports Server (NTRS)

    Wells, Douglas P.; Horvath, Bryce L.; McCullers, Linwood A.

    2017-01-01

    FLOPS has been the primary aircraft synthesis software used by the Aeronautics Systems Analysis Branch at NASA Langley Research Center. It was created for rapid conceptual aircraft design and advanced technology impact assessments. FLOPS is a single computer program that includes weights estimation, aerodynamics estimation, engine cycle analysis, propulsion data scaling and interpolation, detailed mission performance analysis, takeoff and landing performance analysis, noise footprint estimation, and cost analysis. It is well known as a baseline and common denominator for aircraft design studies. FLOPS is capable of calibrating a model to known aircraft data, making it useful for new aircraft and modifications to existing aircraft. The weight estimation method in FLOPS is known to be of high fidelity for conventional tube with wing aircraft and a substantial amount of effort went into its development. This report serves as a comprehensive documentation of the FLOPS weight estimation method. The development process is presented with the weight estimation process.

  11. Estimating Power System Dynamic States Using Extended Kalman Filter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu; Schneider, Kevin P.; Nieplocha, Jaroslaw

    2014-10-31

    The state estimation tools currently deployed in power system control rooms are based on a steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available, and their accuracy is compromised. This paper investigates the application of Extended Kalman Filtering techniques for estimating dynamic states in the state estimation process. The newly formulated "dynamic state estimation" includes true system dynamics reflected in differential equations, unlike previously proposed "dynamic state estimation," which only considers time-variant snapshots based on steady-state modeling. This new dynamic state estimation using the Extended Kalman Filter has been successfully tested on a multi-machine system. Sensitivity studies with respect to noise levels, sampling rates, model errors, and parameter errors are presented as well to illustrate the robust performance of the developed dynamic state estimation process.
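    The Extended Kalman Filter recursion for dynamic states can be sketched on a simple nonlinear system. The example below tracks a pendulum (angle and angular velocity) from noisy angle measurements; it illustrates the generic EKF predict/update cycle, not the paper's multi-machine power system model, and all parameter values are invented:

```python
# Generic EKF sketch: estimate the dynamic states (angle, angular velocity)
# of a pendulum from noisy angle measurements. Illustrative parameters only.
import numpy as np

dt, g_l = 0.01, 9.81          # time step; g over pendulum length (invented)
rng = np.random.default_rng(3)

def f(x):                     # nonlinear process model (Euler-discretized)
    return np.array([x[0] + dt * x[1], x[1] - dt * g_l * np.sin(x[0])])

def F(x):                     # Jacobian of f, evaluated at x
    return np.array([[1.0, dt], [-dt * g_l * np.cos(x[0]), 1.0]])

H = np.array([[1.0, 0.0]])    # we measure the angle only
Q = np.diag([1e-6, 1e-4])     # process noise covariance
R = np.array([[1e-2]])        # measurement noise covariance

x_true = np.array([0.5, 0.0])
x_hat, P = np.array([0.0, 0.0]), np.eye(2)

for _ in range(2000):
    x_true = f(x_true)
    z = x_true[0] + rng.normal(0.0, 0.1)          # noisy angle measurement
    Fk = F(x_hat)                                  # linearize at the estimate
    x_hat = f(x_hat)                               # EKF predict
    P = Fk @ P @ Fk.T + Q
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    x_hat = x_hat + K @ (np.array([z]) - H @ x_hat)  # EKF update
    P = (np.eye(2) - K @ H) @ P

angle_error = abs(x_hat[0] - x_true[0])
```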

  12. Online Updating of Statistical Inference in the Big Data Setting.

    PubMed

    Schifano, Elizabeth D; Wu, Jing; Wang, Chun; Yan, Jun; Chen, Ming-Hui

    2016-01-01

    We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
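    The core of online updating for linear models can be sketched by accumulating the sufficient statistics X'X and X'y chunk by chunk, so the OLS estimate is refreshed as data stream in without storing history. This is a simplified illustration on simulated data, not the authors' full inference framework:

```python
# Sketch of online updating for a linear model: each arriving chunk updates
# the sufficient statistics X'X and X'y; the historical data need not be kept.
# Simulated data; the design and coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(4)
beta_true = np.array([1.0, -2.0, 0.5])

XtX = np.zeros((3, 3))
Xty = np.zeros(3)
for _ in range(100):                        # 100 arriving chunks
    X = rng.normal(size=(50, 3))            # new chunk of 50 observations
    y = X @ beta_true + rng.normal(size=50)
    XtX += X.T @ X                          # O(p^2) memory, independent of n
    Xty += X.T @ y

beta_hat = np.linalg.solve(XtX, Xty)        # current OLS estimate
```

Because X'X and X'y are sums, this estimate is identical to fitting OLS on all 5000 rows at once.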

  13. Survival analysis for the missing censoring indicator model using kernel density estimation techniques

    PubMed Central

    Subramanian, Sundarraman

    2008-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented. PMID:18953423

  14. Survival analysis for the missing censoring indicator model using kernel density estimation techniques.

    PubMed

    Subramanian, Sundarraman

    2006-01-01

    This article concerns asymptotic theory for a new estimator of a survival function in the missing censoring indicator model of random censorship. Specifically, the large sample results for an inverse probability-of-non-missingness weighted estimator of the cumulative hazard function, so far not available, are derived, including an almost sure representation with rate for a remainder term, and uniform strong consistency with rate of convergence. The estimator is based on a kernel estimate for the conditional probability of non-missingness of the censoring indicator. Expressions for its bias and variance, in turn leading to an expression for the mean squared error as a function of the bandwidth, are also obtained. The corresponding estimator of the survival function, whose weak convergence is derived, is asymptotically efficient. A numerical study, comparing the performances of the proposed and two other currently existing efficient estimators, is presented.

  15. Indirectly estimated absolute lung cancer mortality rates by smoking status and histological type based on a systematic review

    PubMed Central

    2013-01-01

    Background National smoking-specific lung cancer mortality rates are unavailable, and studies presenting estimates are limited, particularly by histology. This hinders interpretation. We attempted to rectify this by deriving estimates indirectly, combining data from national rates and epidemiological studies. Methods We estimated study-specific absolute mortality rates and variances by histology and smoking habit (never/ever/current/former) based on relative risk estimates derived from studies published in the 20th century, coupled with WHO mortality data for age 70–74 for the relevant country and period. Studies with populations grossly unrepresentative nationally were excluded. 70–74 was chosen based on analyses of large cohort studies presenting rates by smoking and age. Variations by sex, period and region were assessed by meta-analysis and meta-regression. Results 148 studies provided estimates (Europe 59, America 54, China 22, other Asia 13), 54 providing estimates by histology (squamous cell carcinoma, adenocarcinoma). For all smoking habits and lung cancer types, mortality rates were higher in males, the excess less evident for never smokers. Never smoker rates were clearly highest in China, and showed some increasing time trend, particularly for adenocarcinoma. Ever smoker rates were higher in parts of Europe and America than in China, with the time trend very clear, especially for adenocarcinoma. Variations by time trend and continent were clear for current smokers (rates being higher in Europe and America than Asia), but less clear for former smokers. Models involving continent and trend explained much variability, but non-linearity was sometimes seen (with rates lower in 1991–99 than 1981–90), and there was regional variation within continent (with rates in Europe often high in UK and low in Scandinavia, and higher in North than South America). 
Conclusions The indirect method may be questioned, because of variations in definition of smoking and lung cancer type in the epidemiological database, changes over time in diagnosis of lung cancer types, lack of national representativeness of some studies, and regional variation in smoking misclassification. However, the results seem consistent with the literature, and provide additional information on variability by time and region, including evidence of a rise in never smoker adenocarcinoma rates relative to squamous cell carcinoma rates. PMID:23570286

  16. Proceedings of the 1997 Northeastern Recreation Research Symposium

    Treesearch

    Hans G. Vogelsong [Editor]

    1998-01-01

    Contains articles presented at the 1997 Northeastern Recreation Research Symposium. Contents cover recreation; protected areas and social science; water based recreation management studies; forest recreation management studies; outdoor recreation management studies; estimation of economic impact of recreation and tourism; place meaning and attachment; tourism studies;...

  17. A study of parameter identification

    NASA Technical Reports Server (NTRS)

    Herget, C. J.; Patterson, R. E., III

    1978-01-01

    A set of definitions for deterministic parameter identifiability was proposed. Deterministic parameter identifiability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability was defined in terms of the existence of an estimation sequence for the unknown parameters which is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.

  18. Adjusting survival estimates for premature transmitter failure: A case study from the Sacramento-San Joaquin Delta

    USGS Publications Warehouse

    Holbrook, Christopher M.; Perry, Russell W.; Brandes, Patricia L.; Adams, Noah S.

    2013-01-01

    In telemetry studies, premature tag failure causes negative bias in fish survival estimates because tag failure is interpreted as fish mortality. We used mark-recapture modeling to adjust estimates of fish survival for a previous study where premature tag failure was documented. High rates of tag failure occurred during the Vernalis Adaptive Management Plan’s (VAMP) 2008 study to estimate survival of fall-run Chinook salmon (Oncorhynchus tshawytscha) during migration through the San Joaquin River and Sacramento-San Joaquin Delta, California. Due to a high rate of tag failure, the observed travel time distribution was likely negatively biased, resulting in an underestimate of tag survival probability in this study. Consequently, the bias-adjustment method resulted in only a small increase in estimated fish survival when the observed travel time distribution was used to estimate the probability of tag survival. Since the bias-adjustment failed to remove bias, we used historical travel time data and conducted a sensitivity analysis to examine how fish survival might have varied across a range of tag survival probabilities. Our analysis suggested that fish survival estimates were low (95% confidence bounds range from 0.052 to 0.227) over a wide range of plausible tag survival probabilities (0.48–1.00), and this finding is consistent with other studies in this system. When tags fail at a high rate, available methods to adjust for the bias may perform poorly. Our example highlights the importance of evaluating the tag life assumption during survival studies, and presents a simple framework for evaluating adjusted survival estimates when auxiliary travel time data are available.
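    The basic form of the bias adjustment can be sketched simply: apparent survival from telemetry confounds fish survival with tag survival, so dividing by the probability that a tag is still functioning over the study period recovers fish survival. The numbers below are illustrative, not the VAMP study's estimates:

```python
# Sketch of the tag-failure bias adjustment: apparent survival is the product
# of fish survival and tag survival, so fish survival is recovered by division.
# Illustrative numbers only, not the study's estimates.

def adjusted_survival(apparent_survival, tag_survival_prob):
    if not 0 < tag_survival_prob <= 1:
        raise ValueError("tag survival probability must be in (0, 1]")
    return apparent_survival / tag_survival_prob

# If 10% of fish appear (via telemetry) to survive but only 60% of tags last
# through the migration, adjusted fish survival is about 0.167:
s = adjusted_survival(0.10, 0.60)
```

The hard part, as the abstract notes, is estimating the tag survival probability itself, which depends on the (possibly biased) observed travel time distribution.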

  19. Nonlinear finite element model updating for damage identification of civil structures using batch Bayesian estimation

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Hamed; Astroza, Rodrigo; Conte, Joel P.; de Callafon, Raymond A.

    2017-02-01

    This paper presents a framework for structural health monitoring (SHM) and damage identification of civil structures. This framework integrates advanced mechanics-based nonlinear finite element (FE) modeling and analysis techniques with a batch Bayesian estimation approach to estimate time-invariant model parameters used in the FE model of the structure of interest. The framework uses input excitation and dynamic response of the structure and updates a nonlinear FE model of the structure to minimize the discrepancies between predicted and measured response time histories. The updated FE model can then be interrogated to detect, localize, classify, and quantify the state of damage and predict the remaining useful life of the structure. As opposed to recursive estimation methods, in the batch Bayesian estimation approach, the entire time history of the input excitation and output response of the structure are used as a batch of data to estimate the FE model parameters through a number of iterations. In the case of non-informative prior, the batch Bayesian method leads to an extended maximum likelihood (ML) estimation method to estimate jointly time-invariant model parameters and the measurement noise amplitude. The extended ML estimation problem is solved efficiently using a gradient-based interior-point optimization algorithm. Gradient-based optimization algorithms require the FE response sensitivities with respect to the model parameters to be identified. The FE response sensitivities are computed accurately and efficiently using the direct differentiation method (DDM). The estimation uncertainties are evaluated based on the Cramer-Rao lower bound (CRLB) theorem by computing the exact Fisher Information matrix using the FE response sensitivities with respect to the model parameters. The accuracy of the proposed uncertainty quantification approach is verified using a sampling approach based on the unscented transformation. 
Two validation studies, based on realistic structural FE models of a bridge pier and a moment resisting steel frame, are performed to validate the performance and accuracy of the presented nonlinear FE model updating approach and demonstrate its application to SHM. These validation studies show the excellent performance of the proposed framework for SHM and damage identification even in the presence of high measurement noise and/or inaccurate initial estimates of the model parameters. Furthermore, the detrimental effects of the input measurement noise on the performance of the proposed framework are illustrated and quantified through one of the validation studies.

  20. Nuclear electric propulsion mission engineering study development program and costs estimates, Phase 2 review

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The results are presented of the second six-month performance period of the Nuclear Electric Propulsion Mission Engineering Study. A brief overview of the program, identifying the study objectives and approach, and a discussion of the program status and schedule are presented. The program results are reviewed and key conclusions to date are summarized. Planned effort for the remainder of the program is reviewed.

  1. Methods for estimating missing human skeletal element osteometric dimensions employed in the revised Fully technique for estimating stature.

    PubMed

    Auerbach, Benjamin M

    2011-05-01

    One of the greatest limitations to the application of the revised Fully anatomical stature estimation method is the inability to measure some of the skeletal elements required in its calculation. These element dimensions cannot be obtained due to taphonomic factors, incomplete excavation, or disease processes, and result in missing data. This study examines methods of imputing these missing dimensions using observable Fully measurements from the skeleton, and the accuracy of incorporating these estimated dimensions into anatomical stature reconstruction. These are further assessed against stature estimations obtained from mathematical regression formulae for the lower limb bones (femur and tibia). Two thousand seven hundred and seventeen North and South American indigenous skeletons were measured, and subsets of these with observable Fully dimensions were used to simulate missing elements and create estimation methods and equations. Comparisons were made directly between anatomically reconstructed statures and mathematically derived statures, as well as with anatomically derived statures with imputed missing dimensions. These analyses demonstrate that, while mathematical stature estimations are more accurate, anatomical statures incorporating missing dimensions are not appreciably less accurate and are more precise. The anatomical stature estimation method using imputed missing dimensions is supported. Missing element estimation, however, is limited to the vertebral column (only when lumbar vertebrae are present) and to talocalcaneal height (only when femora and tibiae are present). Crania, entire vertebral columns, and femoral or tibial lengths cannot be reliably estimated. The applicability of these methods is discussed further. Copyright © 2011 Wiley-Liss, Inc.
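    The imputation step can be sketched as a regression of a sometimes-missing dimension on observable ones. All measurements below are simulated and the element relationships invented for illustration; they are not the study's fitted equations:

```python
# Sketch: impute a missing skeletal dimension by regressing it on observable
# dimensions in complete skeletons, then predicting for an incomplete one.
# Simulated, invented data; not the study's reference sample or equations.
import numpy as np

rng = np.random.default_rng(5)
n = 300
femur = rng.normal(45.0, 2.0, n)                     # cm, invented
tibia = 0.8 * femur + rng.normal(0.0, 0.8, n)        # correlated element
talocalcaneal = 0.15 * femur + rng.normal(0.0, 0.3, n)

# Fit the imputation regression on skeletons with all dimensions observable:
A = np.column_stack([np.ones(n), femur, tibia])
coef, *_ = np.linalg.lstsq(A, talocalcaneal, rcond=None)

# Impute for a new skeleton missing the talocalcaneal height:
new_skeleton = np.array([1.0, 46.0, 37.0])           # [intercept, femur, tibia]
imputed = new_skeleton @ coef
```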

  2. Estimation of future outflows of e-waste in India.

    PubMed

    Dwivedy, Maheshwar; Mittal, R K

    2010-03-01

    The purpose of this study is to construct an approach and a methodology to estimate the future outflows of electronic waste (e-waste) in India. Consequently, the study utilizes a time-series multiple lifespan end-of-life model proposed by Peralta and Fontanos for estimating the current and future quantities of e-waste in India. The model estimates future e-waste generation quantities by modeling their usage and disposal. The present work considers two scenarios for the approximation of e-waste generation based on user preferences to store or to recycle the e-waste. This model will help formal recyclers in India to make strategic decisions in planning for appropriate recycling infrastructure and institutional capacity building. Also an extension of the model proposed by Peralta and Fontanos is developed with the objective of helping decision makers to conduct WEEE estimates under a variety of assumptions to suit their region of study. During 2007-2011, the total WEEE estimates will be around 2.5 million metric tons which include waste from personal computers (PC), television, refrigerators and washing machines. During the said period, the waste from PC will account for 30% of total units of WEEE generated. Copyright 2009 Elsevier Ltd. All rights reserved.

  3. Investigations of potential bias in the estimation of lambda using Pradel's (1996) model for capture-recapture data

    USGS Publications Warehouse

    Hines, J.E.; Nichols, J.D.

    2002-01-01

    Pradel's (1996) temporal symmetry model, which permits direct estimation and modelling of the population growth rate λ_i, provides a potentially useful tool for the study of population dynamics using marked animals. Because of its recent publication date, the approach has not seen much use, and there have been virtually no investigations directed at the robustness of the resulting estimators. Here we consider several potential sources of bias, all motivated by specific uses of this estimation approach. We consider sampling situations in which the study area expands with time and present an analytic expression for the bias in λ̂_i. We next consider trap response in capture probabilities and heterogeneous capture probabilities and compute large-sample and simulation-based approximations of the resulting bias in λ̂_i. These approximations indicate that trap response is an especially important assumption violation that can produce substantial bias. Finally, we consider losses on capture and emphasize the importance of selecting the estimator for λ_i that is appropriate to the question being addressed. For studies based on only sighting and resighting data, Pradel's (1996) λ̂′_i is the appropriate estimator.

  4. Estimation of future outflows of e-waste in India

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dwivedy, Maheshwar, E-mail: dwivedy_m@bits-pilani.ac.i; Mittal, R.K.

    2010-03-15

    The purpose of this study is to construct an approach and a methodology to estimate the future outflows of electronic waste (e-waste) in India. Consequently, the study utilizes a time-series multiple lifespan end-of-life model proposed by Peralta and Fontanos for estimating the current and future quantities of e-waste in India. The model estimates future e-waste generation quantities by modeling their usage and disposal. The present work considers two scenarios for the approximation of e-waste generation based on user preferences to store or to recycle the e-waste. This model will help formal recyclers in India to make strategic decisions in planning for appropriate recycling infrastructure and institutional capacity building. Also an extension of the model proposed by Peralta and Fontanos is developed with the objective of helping decision makers to conduct WEEE estimates under a variety of assumptions to suit their region of study. During 2007-2011, the total WEEE estimates will be around 2.5 million metric tons which include waste from personal computers (PC), television, refrigerators and washing machines. During the said period, the waste from PC will account for 30% of total units of WEEE generated.

  5. Age estimation using exfoliative cytology and radiovisiography: A comparative study

    PubMed Central

    Nallamala, Shilpa; Guttikonda, Venkateswara Rao; Manchikatla, Praveen Kumar; Taneeru, Sravya

    2017-01-01

    Introduction: Age estimation is one of the essential factors in establishing the identity of an individual. Among various methods, exfoliative cytology (EC) is a unique, noninvasive technique involving simple and pain-free collection of intact cells from the oral cavity for microscopic examination. Objective: The study was undertaken with an aim to estimate the age of an individual from the average cell size of their buccal smears, calculated using image analysis morphometric software, and from the pulp–tooth area ratio in the mandibular canine of the same individual using radiovisiography (RVG). Materials and Methods: Buccal smears were collected from 100 apparently healthy individuals. After fixation in 95% alcohol, the smears were stained using Papanicolaou stain. The average cell size was measured using image analysis software (Image-Pro Insight 8.0). The RVG images of mandibular canines were obtained, pulp and tooth areas were traced using AutoCAD 2010 software, and the area ratio was calculated. The estimated age was then calculated using regression analysis. Results: The paired t-test between chronological age and age estimated by cell size and by pulp–tooth area ratio was statistically nonsignificant (P > 0.05). Conclusion: In the present study, age estimated by pulp–tooth area ratio and EC yielded good results. PMID:29657491

  6. Age estimation using exfoliative cytology and radiovisiography: A comparative study.

    PubMed

    Nallamala, Shilpa; Guttikonda, Venkateswara Rao; Manchikatla, Praveen Kumar; Taneeru, Sravya

    2017-01-01

    Age estimation is one of the essential factors in establishing the identity of an individual. Among various methods, exfoliative cytology (EC) is a unique, noninvasive technique involving simple and pain-free collection of intact cells from the oral cavity for microscopic examination. The study was undertaken with an aim to estimate the age of an individual from the average cell size of their buccal smears, calculated using image analysis morphometric software, and from the pulp-tooth area ratio in the mandibular canine of the same individual using radiovisiography (RVG). Buccal smears were collected from 100 apparently healthy individuals. After fixation in 95% alcohol, the smears were stained using Papanicolaou stain. The average cell size was measured using image analysis software (Image-Pro Insight 8.0). The RVG images of mandibular canines were obtained, pulp and tooth areas were traced using AutoCAD 2010 software, and the area ratio was calculated. The estimated age was then calculated using regression analysis. The paired t-test between chronological age and age estimated by cell size and by pulp-tooth area ratio was statistically nonsignificant (P > 0.05). In the present study, age estimated by pulp-tooth area ratio and EC yielded good results.

  7. Near surface water content estimation using GPR data: investigations within California vineyards

    NASA Astrophysics Data System (ADS)

    Hubbard, S.; Grote, K.; Lunt, I.; Rubin, Y.

    2003-04-01

    Detailed estimates of water content are necessary for a variety of hydrogeological investigations. In viticulture applications, this information is particularly useful for assisting the design of both vineyard layout and efficient irrigation/agrochemical application. However, it is difficult to obtain sufficient information about the spatial variation of water content within the root zone using conventional point or wellbore measurements. We have investigated the applicability of ground penetrating radar (GPR) methods to estimate near surface water content within two California vineyard study sites: the Robert Mondavi Vineyard in Napa County and the Dehlinger Vineyard in Sonoma County. Our research at the winery study sites involves assessing the feasibility of obtaining accurate, non-invasive and dense estimates of water content and the changes in water content over space and time using both groundwave and reflected GPR events. We will present the spatial and temporal estimates of water content obtained from the GPR data at both sites. We will compare our estimates with conventional measurements of water content (obtained using gravimetric, TDR, and neutron probe techniques) as well as with soil texture and plant vigor measurements. Through these comparisons, we will illustrate the potential of GPR for providing reliable and spatially dense water content estimates, and the linkages between water content, soil properties and ecosystem responses at the two study sites.

  8. Objectivity and validity of EMG method in estimating anaerobic threshold.

    PubMed

    Kang, S-K; Kim, J; Kwon, M; Eom, H

    2014-08-01

    The purposes of this study were to verify and compare the performance of anaerobic threshold (AT) point estimates among different filtering intervals (9, 15, 20, 25, 30 s) and to investigate the interrelationships of AT point estimates obtained by ventilatory threshold (VT) and by muscle fatigue thresholds using electromyographic (EMG) activity during incremental exercise on a cycle ergometer. 69 untrained male university students who nonetheless exercised regularly participated voluntarily in this study. The incremental exercise protocol applied a consistent stepwise increase in power output of 20 watts per minute until exhaustion. The AT point was also estimated in the same manner using the V-slope program with gas exchange parameters. In general, the estimated values of AT point-time computed by the EMG method were more consistent across the 5 filtering intervals and demonstrated higher correlations among themselves when compared with those values obtained by the VT method. The results of the present study suggest that EMG signals could be used as an alternative or a new option for estimating the AT point. Also, the proposed computing procedure implemented in Matlab for the analysis of EMG signals appeared to be valid and reliable, as it produced nearly identical values and high correlations with VT estimates. © Georg Thieme Verlag KG Stuttgart · New York.

  9. Use of cryostat sections from snap-frozen nervous tissue for combining stereological estimates with histological, cellular, or molecular analyses on adjacent sections.

    PubMed

    Schmitz, C; Dafotakis, M; Heinsen, H; Mugrauer, K; Niesel, A; Popken, G J; Stephan, M; Van de Berg, W D; von Hörsten, S; Korr, H

    2000-10-01

    Adequate tissue preparation is essential for both modern stereological and immunohistochemical investigations. However, combining these methodologies in a single study presents a number of obstacles pertaining to optimal histological preparation. Tissue shrinkage and loss of nuclei/nucleoli from the unprotected section surfaces of unembedded tissue used for immunohistochemistry may be problematic with regard to adequate stereological design. In this study, frozen cryostat sections from hippocampal and cerebellar regions of two rat strains and cerebellar and cerebral regions from a human brain were analyzed to determine the potential impact of these factors on estimates of neuron number obtained using the optical disector. Neuronal nuclei and nucleoli were clearly present in thin sections of snap-frozen rat (3 microm) and human (6 microm) tissue, indicating that neuronal nuclei/nucleoli are not unavoidably lost from unprotected section surfaces of unembedded tissue. In order to quantify the potential impact of any nuclear loss, optical fractionator estimates of rat hippocampal pyramidal cells in areas CA1-3 and cerebellar granule and Purkinje cells were made using minimal (1 microm) upper guard zones. Estimates did not differ from data reported previously in the literature. These data indicate that cryostat sections of snap-frozen nervous tissue may successfully be used for estimating total neuronal numbers using optical disectors.

  10. Comparing U.S. Injury Death Estimates from GBD 2015 and CDC WONDER.

    PubMed

    Wu, Yue; Cheng, Xunjie; Ning, Peishan; Cheng, Peixia; Schwebel, David C; Hu, Guoqing

    2018-01-07

    Objective: The purpose of the present study was to examine consistency in injury death statistics from the United States CDC Wide-ranging Online Data for Epidemiologic Research (CDC WONDER) with those from GBD 2015 estimates. Methods: Differences in deaths and the percent difference in deaths between GBD 2015 and CDC WONDER were assessed, as were changes in deaths between 2000 and 2015 for the two datasets. Results: From 2000 to 2015, GBD 2015 estimates for the U.S. injury deaths were somewhat higher than CDC WONDER estimates in most categories, with the exception of deaths from falls and from forces of nature, war, and legal intervention in 2015. Encouragingly, the difference in total injury deaths between the two data sources narrowed from 44,897 (percent difference in deaths = 41%) in 2000 to 34,877 (percent difference in deaths = 25%) in 2015. Differences in deaths and percent difference in deaths between the two data sources varied greatly across injury cause and over the assessment years. The two data sources present consistent changes in direction from 2000 to 2015 for all injury causes except for forces of nature, war, and legal intervention, and adverse effects of medical treatment. Conclusions: We conclude that further studies are warranted to interpret the inconsistencies in data and develop estimation approaches that increase the consistency of the two datasets.

  11. Peak-flow frequency for tributaries of the Colorado River downstream of Austin, Texas

    USGS Publications Warehouse

    Asquith, William H.

    1998-01-01

    Peak-flow frequency for 38 stations with at least 8 years of data in natural (unregulated and nonurbanized) basins was estimated on the basis of annual peak-streamflow data through water year 1995. Peak-flow frequency represents the peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, 250, and 500 years. The peak-flow frequency and drainage basin characteristics for the stations were used to develop two sets of regression equations to estimate peak-flow frequency for tributaries of the Colorado River in the study area. One set of equations was developed for contributing drainage areas less than 32 square miles, and another set was developed for contributing drainage areas greater than 32 square miles. A procedure is presented to estimate the peak discharge at sites where both sets of equations are considered applicable. Additionally, procedures are presented to compute the 50-, 67-, and 90-percent prediction intervals for any estimate from the equations.
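
The report's actual regression equations are not reproduced in this summary. As a rough illustration only, regional peak-flow regression equations of this kind typically take a log-linear form, log10(Q_T) = c0 + c1·log10(A), where Q_T is the T-year peak discharge and A is the contributing drainage area. The sketch below fits that form to invented station data; none of the numbers are the study's values.

```python
import math

def fit_loglog(areas_mi2, peaks_cfs):
    """Least-squares fit of log10(Q) = c0 + c1 * log10(A)."""
    xs = [math.log10(a) for a in areas_mi2]
    ys = [math.log10(q) for q in peaks_cfs]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    c1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    c0 = my - c1 * mx
    return c0, c1

def predict_peak(c0, c1, area_mi2):
    """Back-transform the log-linear fit to a discharge estimate (cfs)."""
    return 10 ** (c0 + c1 * math.log10(area_mi2))

# Hypothetical stations: (drainage area in sq mi, 100-yr peak in cfs)
areas = [5.0, 12.0, 25.0, 60.0, 150.0]
peaks = [2100.0, 4200.0, 7600.0, 15000.0, 30000.0]
c0, c1 = fit_loglog(areas, peaks)
print(round(predict_peak(c0, c1, 32.0)))  # estimate at the 32-sq-mi break
```

In practice two such fits (one per drainage-area range) would be combined at sites near the 32-square-mile break, as the record describes.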

  12. Estimating one's own and one's relatives' multiple intelligence: a cross-cultural study from East Timor and Portugal.

    PubMed

    Neto, Félix; Furnham, Adrian; Pinto, Maria da Conceição

    2009-11-01

    This study examined students' estimates of their own and their parents' general and multiple intelligences. Three hundred and twenty-three students from East Timor and one hundred and eighty-three students from Portugal estimated their own and their parents' IQ scores on each of Gardner's ten multiple intelligences. Men believed they were more intelligent than were women on mathematical (logical), spatial, and naturalistic intelligence. There were consistent and clear culture differences. Portuguese participants gave higher self- and family ratings than Timorese participants, as expected. Participants of both cultures rated the overall intelligence of their father higher than that of their mother. Implications of these results for education and self-presentations are considered.

  13. A cost-effectiveness comparison of existing and Landsat-aided snow water content estimation systems

    NASA Technical Reports Server (NTRS)

    Sharp, J. M.; Thomas, R. W.

    1975-01-01

    This study describes how Landsat imagery can be cost-effectively employed to augment an operational hydrologic model. Attention is directed toward the estimation of snow water content, a major predictor variable in the volumetric runoff forecasting model presently used by the California Department of Water Resources. A stratified double sampling scheme is supplemented with qualitative and quantitative analyses of existing operations to develop a comparison between the existing and satellite-aided approaches to snow water content estimation. Results show a decided advantage for the Landsat-aided approach.

  14. Tug fleet and ground operations schedules and controls. Volume 3: Program cost estimates

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Cost data for the tug DDT&E and operations phases are presented. Option 6, the recommended option selected from the seven options considered, was used as the basis for the ground processing estimates. Option 6 provides for processing the tug in a factory-clean environment in the low bay area of the VAB, with subsequent cleaning to visibly clean. The basis and results of the trade study to select the Option 6 processing plan are included. A cost estimating methodology, a work breakdown structure, and a dictionary of WBS definitions are also provided.

  15. [Estimation of infant and child mortality in the eastern provinces of Cuba].

    PubMed

    Gonzalez, G; Herrera, L

    1986-01-01

    An estimate of infant and child mortality in the eastern provinces of Cuba is presented using the Brass method as adapted by Trussell. "Estimations by urban and rural zones are also performed within the provinces studied, and results are compared with those possible to obtain by continuous statistics. Results obtained show that in the eastern [part] of the country Holguin and Guantanamo are the provinces with highest infantile mortality rates, and the lowest rates correspond to Granma, followed by Santiago de Cuba." (SUMMARY IN ENG AND FRE) excerpt

  16. Real-Time Radar-Based Tracking and State Estimation of Multiple Non-Conformant Aircraft

    NASA Technical Reports Server (NTRS)

    Cook, Brandon; Arnett, Timothy; Macmann, Owen; Kumar, Manish

    2017-01-01

    In this study, a novel solution for the automated tracking of multiple unknown aircraft is proposed. Many current methods use transponders to self-report state information and augment track identification. While conformant aircraft typically report transponder information to alert surrounding aircraft of their state, vehicles may exist in the airspace that are non-compliant and need to be accurately tracked using alternative methods. In this study, a multi-agent tracking solution is presented that solely utilizes primary surveillance radar data to estimate aircraft state information. The main research challenges include state estimation, track management, data association, and establishing persistent track validity. To address these challenges, techniques such as Maximum a Posteriori estimation, Kalman filtering, degree-of-membership data association, and Nearest Neighbor Spanning Tree clustering are implemented for this application.
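
Of the techniques listed, the Kalman filtering step is the most self-contained to sketch. Below is a minimal constant-velocity Kalman filter for position-only radar returns on a single target; the motion model, noise covariances, and measurements are all assumptions for illustration, not the paper's implementation.

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # radar measures position only
Q = 0.01 * np.eye(4)                        # process noise (tuning assumption)
R = 25.0 * np.eye(2)                        # measurement noise (tuning assumption)

def kalman_step(x, P, z):
    """One predict/update cycle for state x = [px, py, vx, vy]."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.zeros(4)
P = 100.0 * np.eye(4)                   # diffuse initial uncertainty
# Hypothetical returns from a target moving ~10 units/s along the x axis
for z in [(10.2, 0.3), (19.8, -0.4), (30.1, 0.2), (40.3, 0.1)]:
    x, P = kalman_step(x, P, np.array(z))
print(x)  # filtered position and velocity estimate
```

A full multi-target tracker would wrap this step in the data-association and track-management logic the abstract mentions, feeding each associated return to its own filter.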

  17. Modelling maximum river flow by using Bayesian Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Cheong, R. Y.; Gabda, D.

    2017-09-01

    Analysis of flood trends is vital since flooding threatens human life in financial, environmental, and security terms. The annual maximum river flows in Sabah were fitted to a generalized extreme value (GEV) distribution. The maximum likelihood estimator (MLE) arises naturally when working with the GEV distribution. However, previous research has shown that MLE provides unstable results, especially for small sample sizes. In this study, we used different Bayesian Markov Chain Monte Carlo (MCMC) methods based on the Metropolis-Hastings algorithm to estimate the GEV parameters. Bayesian MCMC is a statistical inference method that estimates parameters from the posterior distribution obtained via Bayes' theorem. The Metropolis-Hastings algorithm is used to overcome the high-dimensional state space faced in the Monte Carlo method. This approach also accounts for more of the uncertainty in parameter estimation, which yields better predictions of maximum river flow in Sabah.
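
A minimal random-walk Metropolis sampler for GEV parameters can be sketched as follows. The flat priors, proposal step sizes, and synthetic annual-maximum data are all assumptions for illustration; this is not the authors' implementation, and the data are not the Sabah series.

```python
import numpy as np

rng = np.random.default_rng(0)

def gev_loglik(params, x):
    """GEV log-likelihood for (mu, sigma, xi), shape xi != 0 branch."""
    mu, sigma, xi = params
    if sigma <= 0:
        return -np.inf
    t = 1.0 + xi * (x - mu) / sigma
    if np.any(t <= 0):                 # outside the distribution's support
        return -np.inf
    return np.sum(-np.log(sigma) - (1.0 + 1.0 / xi) * np.log(t) - t ** (-1.0 / xi))

def metropolis_hastings(x, start, n_iter=5000, step=(0.5, 0.2, 0.05)):
    """Random-walk Metropolis over (mu, sigma, xi) with flat priors."""
    chain = np.empty((n_iter, 3))
    cur = np.array(start, dtype=float)
    cur_ll = gev_loglik(cur, x)
    for i in range(n_iter):
        prop = cur + rng.normal(0.0, step)            # symmetric proposal
        prop_ll = gev_loglik(prop, x)
        if np.log(rng.uniform()) < prop_ll - cur_ll:  # accept/reject
            cur, cur_ll = prop, prop_ll
        chain[i] = cur
    return chain

# Synthetic annual-maximum series via the GEV inverse CDF (xi_true = 0.1)
mu_true, sigma_true, xi_true = 100.0, 15.0, 0.1
u = rng.uniform(size=200)
data = mu_true + sigma_true * ((-np.log(u)) ** (-xi_true) - 1.0) / xi_true

chain = metropolis_hastings(data, start=(np.mean(data), np.std(data), 0.1))
burned = chain[1000:]                  # discard burn-in
print(burned.mean(axis=0))             # posterior means of (mu, sigma, xi)
```

Return-level predictions (e.g. the 100-year flow) would then be computed per posterior draw, so the prediction carries the parameter uncertainty the abstract emphasises.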

  18. COSTS OF URBAN STORMWATER CONTROL

    EPA Science Inventory

    This report presents information on the cost of stormwater pollution control facilities in urban areas, including collection, control, and treatment systems. Information on prior cost studies of control technologies and cost estimating models used in these studies was collected,...

  20. New alternatives for reference evapotranspiration estimation in West Africa using limited weather data and ancillary data supply strategies.

    NASA Astrophysics Data System (ADS)

    Landeras, Gorka; Bekoe, Emmanuel; Ampofo, Joseph; Logah, Frederick; Diop, Mbaye; Cisse, Madiama; Shiri, Jalal

    2018-05-01

    Accurate estimation of reference evapotranspiration (ET0) is essential for the computation of crop water requirements, irrigation scheduling, and water resources management. In this context, having a battery of locally calibrated alternative ET0 estimation methods is of great interest for any irrigation advisory service. The development of irrigation advisory services would be a major breakthrough for West African agriculture. In the case of many West African countries, the high number of meteorological inputs required by the Penman-Monteith equation has been identified as a constraint. The present paper investigates, for the first time in Ghana, the estimation ability of artificial intelligence-based models (Artificial Neural Networks (ANNs) and Gene Expression Programming (GEPs)) and of ancillary/external approaches for modeling reference evapotranspiration (ET0) using limited weather data. According to the results of this study, GEPs have emerged as a very interesting alternative for ET0 estimation at all the locations in Ghana evaluated in this study under different scenarios of meteorological data availability. The adoption of ancillary/external approaches has also been successful, especially at the southern locations. The interesting results obtained in this study using GEPs and some ancillary approaches could serve as a reference for future studies of ET0 estimation in West Africa.

  1. A New Method for Assessing How Sensitivity and Specificity of Linkage Studies Affects Estimation

    PubMed Central

    Moore, Cecilia L.; Amin, Janaki; Gidding, Heather F.; Law, Matthew G.

    2014-01-01

    Background While the importance of record linkage is widely recognised, few studies have attempted to quantify how linkage errors may have impacted on their own findings and outcomes. Even where authors of linkage studies have attempted to estimate sensitivity and specificity based on subjects with known status, the effects of false negatives and positives on event rates and estimates of effect are not often described. Methods We present quantification of the effect of sensitivity and specificity of the linkage process on event rates and incidence, as well as the resultant effect on relative risks. Formulae to estimate the true number of events and estimated relative risk adjusted for given linkage sensitivity and specificity are then derived and applied to data from a prisoner mortality study. The implications of false positive and false negative matches are also discussed. Discussion Comparisons of the effect of sensitivity and specificity on incidence and relative risks indicate that it is more important for linkages to be highly specific than sensitive, particularly if true incidence rates are low. We would recommend that, where possible, some quantitative estimates of the sensitivity and specificity of the linkage process be performed, allowing the effect of these quantities on observed results to be assessed. PMID:25068293
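
The authors' exact formulae are not given in the abstract, but the standard misclassification algebra behind this kind of adjustment can be sketched: if a linkage with sensitivity sens and specificity spec observes O = sens·T + (1 − spec)·(N − T) links among N records, the true event count T can be recovered by inverting that relation. All the numbers below are hypothetical.

```python
def true_events(observed_links, n_records, sensitivity, specificity):
    """Invert observed = sens*T + (1-spec)*(N-T) for the true count T."""
    false_pos_rate = 1.0 - specificity
    return (observed_links - false_pos_rate * n_records) / (sensitivity - false_pos_rate)

def adjusted_relative_risk(obs_a, n_a, obs_b, n_b, sens, spec):
    """Relative risk after adjusting each group's event count for linkage error."""
    rate_a = true_events(obs_a, n_a, sens, spec) / n_a
    rate_b = true_events(obs_b, n_b, sens, spec) / n_b
    return rate_a / rate_b

# Hypothetical prisoner-mortality-style numbers: with a low true event rate,
# even 99.9% specificity (10 false positives per 10,000 records) visibly
# inflates the naive count, illustrating why specificity matters most here.
print(true_events(130, 10_000, sensitivity=0.95, specificity=0.999))
print(adjusted_relative_risk(130, 10_000, 60, 10_000, sens=0.95, spec=0.999))
```

Note how the false-positive term scales with N rather than with T, which is why the abstract finds high specificity more important than high sensitivity when true incidence is low.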

  2. Individual differences in non-symbolic numerical abilities predict mathematical achievements but contradict ATOM.

    PubMed

    Agrillo, Christian; Piffer, Laura; Adriano, Andrea

    2013-07-01

    A significant debate surrounds the nature of the cognitive mechanisms involved in non-symbolic number estimation. Several studies have suggested the existence of the same cognitive system for estimation of time, space, and number, called "a theory of magnitude" (ATOM). In addition, researchers have proposed the theory that non-symbolic number abilities might support our mathematical skills. Despite the large number of studies carried out, no firm conclusions can be drawn on either topic. In the present study, we correlated the performance of adults on non-symbolic magnitude estimations and symbolic numerical tasks. Non-symbolic magnitude abilities were assessed by asking participants to estimate which auditory tone lasted longer (time), which line was longer (space), and which group of dots was more numerous (number). To assess symbolic numerical abilities, participants were required to perform mental calculations and mathematical reasoning. We found a positive correlation between non-symbolic and symbolic numerical abilities. On the other hand, no correlation was found among non-symbolic estimations of time, space, and number. Our study supports the idea that mathematical abilities rely on rudimentary numerical skills that predate verbal language. By contrast, the lack of correlation among non-symbolic estimations of time, space, and number is incompatible with the idea that these magnitudes are entirely processed by the same cognitive system.

  3. Estimating Vehicle Fuel Consumption and Emissions Using GPS Big Data

    PubMed Central

    Kan, Zihan; Zhang, Xia

    2018-01-01

    The energy consumption and emissions from vehicles adversely affect human health and urban sustainability. Analysis of GPS big data collected from vehicles can provide useful insights about the quantity and distribution of such energy consumption and emissions. Previous studies, which estimated fuel consumption/emissions from traffic based on GPS sampled data, have not sufficiently considered vehicle activities and may have led to erroneous estimations. By adopting the analytical construct of the space-time path in time geography, this study proposes methods that more accurately estimate and visualize vehicle energy consumption/emissions based on analysis of vehicles’ mobile activities (MA) and stationary activities (SA). First, we build space-time paths of individual vehicles, extract moving parameters, and identify MA and SA from each space-time path segment (STPS). Then we present an N-Dimensional framework for estimating and visualizing fuel consumption/emissions. For each STPS, fuel consumption, hot emissions, and cold start emissions are estimated based on activity type, i.e., MA, SA with engine-on and SA with engine-off. In the case study, fuel consumption and emissions of a single vehicle and a road network are estimated and visualized with GPS data. The estimation accuracy of the proposed approach is 88.6%. We also analyze the types of activities that produced fuel consumption on each road segment to explore the patterns and mechanisms of fuel consumption in the study area. The results not only show the effectiveness of the proposed approaches in estimating fuel consumption/emissions but also indicate their advantages for uncovering the relationships between fuel consumption and vehicles’ activities in road networks. PMID:29561813

  4. Estimating Vehicle Fuel Consumption and Emissions Using GPS Big Data.

    PubMed

    Kan, Zihan; Tang, Luliang; Kwan, Mei-Po; Zhang, Xia

    2018-03-21

    The energy consumption and emissions from vehicles adversely affect human health and urban sustainability. Analysis of GPS big data collected from vehicles can provide useful insights about the quantity and distribution of such energy consumption and emissions. Previous studies, which estimated fuel consumption/emissions from traffic based on GPS sampled data, have not sufficiently considered vehicle activities and may have led to erroneous estimations. By adopting the analytical construct of the space-time path in time geography, this study proposes methods that more accurately estimate and visualize vehicle energy consumption/emissions based on analysis of vehicles' mobile activities (MA) and stationary activities (SA). First, we build space-time paths of individual vehicles, extract moving parameters, and identify MA and SA from each space-time path segment (STPS). Then we present an N-Dimensional framework for estimating and visualizing fuel consumption/emissions. For each STPS, fuel consumption, hot emissions, and cold start emissions are estimated based on activity type, i.e., MA, SA with engine-on and SA with engine-off. In the case study, fuel consumption and emissions of a single vehicle and a road network are estimated and visualized with GPS data. The estimation accuracy of the proposed approach is 88.6%. We also analyze the types of activities that produced fuel consumption on each road segment to explore the patterns and mechanisms of fuel consumption in the study area. The results not only show the effectiveness of the proposed approaches in estimating fuel consumption/emissions but also indicate their advantages for uncovering the relationships between fuel consumption and vehicles' activities in road networks.
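
The paper's estimation models are not given in the abstract; the sketch below is only a toy version of the activity-classification idea (MA vs. SA with engine on). The piecewise fuel-rate model, the haversine segmentation, and the 0.5 m/s stillness threshold are all invented for illustration.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6_371_000 * 2 * math.asin(math.sqrt(a))

def fuel_litres(track, idle_lph=0.8, moving_l_per_100km=8.0, still_mps=0.5):
    """Sum fuel over consecutive GPS fixes; track rows are (t_seconds, lat, lon)."""
    total = 0.0
    for (t1, *p1), (t2, *p2) in zip(track, track[1:]):
        dist = haversine_m(p1, p2)
        speed = dist / (t2 - t1)
        if speed < still_mps:       # SA with engine on: time-based idling burn
            total += idle_lph * (t2 - t1) / 3600.0
        else:                       # MA: distance-based burn
            total += moving_l_per_100km * dist / 100_000.0
    return total

# Hypothetical track: 60 s stationary, then ~1.1 km of driving over 120 s
track = [(0, 31.2304, 121.4737), (60, 31.2304, 121.4737), (180, 31.2404, 121.4737)]
print(round(fuel_litres(track), 3))
```

The abstract's framework additionally distinguishes SA with the engine off (no burn) and separates hot from cold-start emissions per segment; those refinements would slot into the same per-segment loop.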

  5. Maximum likelihood estimation of correction for dilution bias in simple linear regression using replicates from subjects with extreme first measurements.

    PubMed

    Berglund, Lars; Garmo, Hans; Lindbäck, Johan; Svärdsudd, Kurt; Zethelius, Björn

    2008-09-30

    The least-squares estimator of the slope in a simple linear regression model is biased towards zero when the predictor is measured with random error. A corrected slope may be estimated by adding data from a reliability study, which comprises a subset of subjects from the main study. The precision of this corrected slope depends on the design of the reliability study and the choice of estimator. Previous work has assumed that the reliability study constitutes a random sample from the main study. A more efficient design is to use subjects with extreme values on their first measurement. Previously, we published a variance formula for the corrected slope, where the correction factor is the slope in the regression of the second measurement on the first. In this paper we show that both designs are improved by maximum likelihood estimation (MLE). The precision gain is explained by the inclusion of data from all subjects in the estimation of the predictor's variance and by the use of the second measurement in estimating the covariance between response and predictor. The gain from MLE increases with a stronger true relationship between response and predictor and with lower precision in the predictor measurements. We present a real data example on the relationship between fasting insulin, a surrogate marker, and true insulin sensitivity measured by a gold-standard euglycaemic insulin clamp, together with simulations examining the behavior of profile-likelihood-based confidence intervals. MLE was shown to be a robust estimator for non-normal distributions and efficient in small-sample situations. Copyright (c) 2008 John Wiley & Sons, Ltd.
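
The abstract's simpler, moment-based correction (the baseline the MLE improves on) divides the naive slope by a reliability ratio estimated as the slope of the second measurement regressed on the first. A self-contained simulation of that correction, with invented parameters:

```python
import random

random.seed(1)

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

true_slope = 2.0
truth = [random.gauss(0, 1) for _ in range(2000)]           # error-free predictor
y = [true_slope * t + random.gauss(0, 0.5) for t in truth]  # response
x1 = [t + random.gauss(0, 1) for t in truth]                # error-prone measurement
x2 = [t + random.gauss(0, 1) for t in truth]                # replicate (reliability study)

naive = ols_slope(x1, y)         # attenuated towards zero by measurement error
reliability = ols_slope(x1, x2)  # correction factor: second measurement on first
corrected = naive / reliability  # approximately recovers the true slope
print(naive, corrected)
```

With these parameters the reliability ratio is 0.5 in expectation, so the naive slope is attenuated to about half the true value, and dividing by the estimated ratio largely undoes the dilution; the paper's MLE refines exactly this estimator.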

  6. Estimation of two-dimensional motion velocity using ultrasonic signals beamformed in Cartesian coordinate for measurement of cardiac dynamics

    NASA Astrophysics Data System (ADS)

    Kaburaki, Kaori; Mozumi, Michiya; Hasegawa, Hideyuki

    2018-07-01

    Methods for the estimation of two-dimensional (2D) velocity and displacement of physiological tissues are necessary for quantitative diagnosis. In echocardiography with a phased-array probe, the accuracy in the estimation of the lateral motion is lower than that of the axial motion. To improve the accuracy of lateral motion estimation, in the present study the coordinate system for ultrasonic beamforming was changed from the conventional polar coordinate system to the Cartesian coordinate system. In a basic experiment, the motion velocity of a phantom, which was moved at a constant speed, was estimated by the conventional and proposed methods. The proposed method reduced the bias error and standard deviation in the estimated motion velocities. In an in vivo measurement, intracardiac blood flow was analyzed by the proposed method.

  7. Apprentices and Trainees: Early Trend Estimates. December Quarter 2010

    ERIC Educational Resources Information Center

    National Centre for Vocational Education Research (NCVER), 2011

    2011-01-01

    This publication presents early estimates of apprentice and trainee commencements for the December quarter 2010. Indicative information about this quarter is presented here; the most recent figures are estimated, taking into account reporting lags that occur at the time of data collection. The early trend estimates are derived from the National…

  8. [The estimation of clinical efficacy of eurespal in the postoperative period following surgical intervention on paranasal sinuses].

    PubMed

    2011-01-01

    The objective of the present study was to estimate the influence of Eurespal (Fenspiride) on changes in the fibrous coating of the nasal cavity, the transport function of the ciliated epithelium, and the dynamics of the electrochemical properties of the nasal secretion by means of direct joulemetry following surgical intervention on the paranasal sinuses (PNS). The study included 30 patients aged from 18 to 65 years presenting with chronic purulent sinusitis in the phase of exacerbation, polypous rhinosinusitis, and an acute pyogenic process in the PNS. The results of the study indicate that the use of Eurespal significantly accelerates the recovery of the transport function of the intranasal mucociliary epithelium, which results in a faster regression of the fibrous coat on the intranasal mucosa and of mucosal oedema. These changes lead to improved drainage and aeration of the paranasal sinuses.

  9. Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

    NASA Astrophysics Data System (ADS)

    Gao, Yun; Kontoyiannis, Ioannis; Bienenstock, Elie

    2008-06-01

    Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: the plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced; two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent, that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method. (iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
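    The plug-in method discussed above is simple enough to sketch: estimate the empirical distribution of k-symbol words and take the normalized block entropy. The word length and test sequences below are illustrative.

```python
import math
import random
from collections import Counter

def plugin_entropy_rate(bits, k):
    """Plug-in estimate of the entropy rate (bits/symbol) from the
    empirical frequencies of overlapping k-blocks of a binary sequence."""
    words = ["".join(map(str, bits[i:i + k])) for i in range(len(bits) - k + 1)]
    counts = Counter(words)
    n = sum(counts.values())
    h_block = -sum(c / n * math.log2(c / n) for c in counts.values())
    return h_block / k

# i.i.d. fair coin flips should give close to 1 bit/symbol
random.seed(1)
bits = [random.randint(0, 1) for _ in range(10000)]
print(plugin_entropy_rate(bits, 4))
```

    The undersampling drawback noted in (iv) is visible here: with word length k, there are 2^k empirical frequencies to estimate, so k cannot grow much beyond log2 of the sequence length before the block distribution is starved of counts.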

  10. Estimation of monthly water yields and flows for 1951-2012 for the United States portion of the Great Lakes Basin with AFINCH

    USGS Publications Warehouse

    Luukkonen, Carol L.; Holtschlag, David J.; Reeves, Howard W.; Hoard, Christopher J.; Fuller, Lori M.

    2015-01-01

    Monthly water yields from 105,829 catchments and corresponding flows in 107,691 stream segments were estimated for water years 1951–2012 in the Great Lakes Basin in the United States. Both sets of estimates were computed by using the Analysis of Flows In Networks of CHannels (AFINCH) application within the NHDPlus geospatial data framework. AFINCH provides an environment to develop constrained regression models to integrate monthly streamflow and water-use data with monthly climatic data and fixed basin characteristics data available within NHDPlus or supplied by the user. For this study, the U.S. Great Lakes Basin was partitioned into seven study areas by grouping selected hydrologic subregions and adjoining cataloguing units. This report documents the regression models and data used to estimate monthly water yields and flows in each study area. Estimates of monthly water yields and flows are presented in a Web-based mapper application. Monthly flow time series for individual stream segments can be retrieved from the Web application and used to approximate monthly flow-duration characteristics and to identify possible trends.

  11. Active life expectancy from annual follow-up data with missing responses.

    PubMed

    Izmirlian, G; Brock, D; Ferrucci, L; Phillips, C

    2000-03-01

    Active life expectancy (ALE) at a given age is defined as the expected remaining years free of disability. In this study, three categories of health status are defined according to the ability to perform activities of daily living independently. Several studies have used increment-decrement life tables to estimate ALE, without error analysis, from only a baseline and one follow-up interview. The present work conducts an individual-level covariate analysis using a three-state Markov chain model for multiple follow-up data. Using a logistic link, the model estimates single-year transition probabilities among states of health, accounting for missing interviews. This approach has the advantages of smoothing subsequent estimates and increased power by using all follow-ups. We compute ALE and total life expectancy from these estimated single-year transition probabilities. Variance estimates are computed using the delta method. Data from the Iowa Established Population for the Epidemiologic Study of the Elderly are used to test the effects of smoking on ALE in all 5-year age groups past 65 years, controlling for sex and education.
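    The step from single-year transition probabilities to ALE can be sketched as follows. The transition matrix is invented for illustration, the sketch counts expected years after the baseline interview, and the study's actual model additionally handles covariates and missing interviews.

```python
import numpy as np

# Annual transition probabilities among three states:
# 0 = active (no disability), 1 = disabled, 2 = dead (absorbing)
P = np.array([[0.90, 0.07, 0.03],
              [0.15, 0.70, 0.15],
              [0.00, 0.00, 1.00]])

def expectancies(P, max_years=500):
    """Expected future years spent in each state for a subject who is
    currently active, summed over yearly steps of the chain."""
    state = np.array([1.0, 0.0, 0.0])  # start in the active state
    years = np.zeros(3)
    for _ in range(max_years):
        state = state @ P
        years += state
    return years

yrs = expectancies(P)
ale, total_le = yrs[0], yrs[0] + yrs[1]
print(round(ale, 2), round(total_le, 2))
```

    The loop is a truncated version of the fundamental-matrix calculation (I - Q)^-1 for the transient states; with these illustrative probabilities it gives an ALE of about 14.4 years out of a total life expectancy of about 18 years.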

  12. Estimation of postmortem interval through albumin in CSF by simple dye binding method.

    PubMed

    Parmar, Ankita K; Menon, Shobhana K

    2015-12-01

    Estimation of the postmortem interval is a very important question in some medicolegal investigations, and precise estimation requires a method that gives accurate results. Bromocresol green (BCG) is a simple dye-binding method widely used in routine practice, and its application in forensic practice may bring revolutionary changes. In this study, cerebrospinal fluid was aspirated by cisternal puncture from 100 autopsies, and the concentration of albumin was studied with respect to postmortem interval. After death, albumin present in CSF undergoes changes: by 72 h after death the concentration of albumin had fallen to 0.012 mM, and this decrease was linear from 2 h to 72 h. An important relationship was found between albumin concentration and postmortem interval, with an error of ±1-4 h. The study concludes that CSF albumin can be a useful and significant parameter in the estimation of postmortem interval. Copyright © 2015 The Chartered Society of Forensic Sciences. Published by Elsevier Ireland Ltd. All rights reserved.
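    A linear decline like the one described above suggests a calibrate-then-invert sketch: fit a line to albumin versus time, then invert it to read off the postmortem interval from a new measurement. The calibration pairs below are synthetic (chosen only so the line reaches 0.012 mM at 72 h) and do not reproduce the study's data.

```python
import numpy as np

# Illustrative calibration pairs: hours since death vs. CSF albumin (mM).
t = np.array([2, 12, 24, 36, 48, 60, 72], dtype=float)
alb = 0.102 - 0.00125 * t  # assumed linear decay, 0.012 mM at 72 h

# Least-squares line: albumin = slope * t + intercept
slope, intercept = np.polyfit(t, alb, 1)

def estimate_pmi(albumin_mM):
    """Invert the calibration line to estimate the postmortem interval (h)."""
    return (albumin_mM - intercept) / slope

print(round(estimate_pmi(0.05), 1))
```

    In practice the inversion would only be trusted inside the calibrated 2-72 h window, and the reported ±1-4 h error band would be attached to the estimate.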

  13. Dropout from cognitive-behavioral therapy for eating disorders: A meta-analysis of randomized, controlled trials.

    PubMed

    Linardon, Jake; Hindle, Annemarie; Brennan, Leah

    2018-05-01

    Cognitive-behavioral therapy (CBT) is efficacious for a range of eating disorder presentations, yet premature dropout is one factor that might limit CBT's effectiveness. Improved understanding of dropout from CBT for eating disorders is important. This meta-analysis aimed to study dropout from CBT for eating disorders in randomized controlled trials (RCTs) by (a) identifying the types of dropout definitions applied, (b) providing estimates of dropout, (c) comparing dropout rates from CBT to non-CBT interventions for eating disorders, and (d) testing moderators of dropout. RCTs of CBT for eating disorders that reported rates of dropout were searched. Ninety-nine RCTs (131 CBT conditions) were included. Dropout definitions varied widely across studies. The overall dropout estimate was 24% (95% CI = 22-27%). Diagnostic type, type of dropout definition, baseline symptom severity, study quality, and sample age did not moderate this estimate. Dropout was highest among studies that delivered internet-based CBT and was lowest in studies that delivered transdiagnostic enhanced CBT. There was some evidence that longer treatment protocols were associated with lower dropout. No significant differences in dropout rates were observed between CBT and non-CBT interventions for all eating disorder subtypes. The present dropout estimates are hampered by the disparate dropout definitions applied across studies. This meta-analysis highlights the urgency for RCTs to utilize a standardized dropout definition and to report as much information on patient dropout as possible, so that strategies designed to minimize dropout can be developed and factors predictive of CBT dropout can be more easily identified. © 2018 Wiley Periodicals, Inc.
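    A pooled proportion of the kind reported above can be sketched with inverse-variance weighting on the logit scale. This is a fixed-effect toy with hypothetical trial counts; meta-analyses like this one typically fit random-effects models that also estimate between-study heterogeneity.

```python
import math

def pool_proportions(events, totals):
    """Inverse-variance pooled proportion on the logit scale
    (fixed-effect sketch), back-transformed to a proportion."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        p = e / n
        var = 1.0 / e + 1.0 / (n - e)   # approximate variance of the logit
        logits.append(math.log(p / (1 - p)))
        weights.append(1.0 / var)
    pooled = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))  # inverse logit

# three hypothetical trials: dropout counts out of randomized patients
print(round(pool_proportions([24, 30, 18], [100, 120, 80]), 3))
```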

  14. A new method for predicting response in complex linear systems. II. [under random or deterministic steady state excitation

    NASA Technical Reports Server (NTRS)

    Bogdanoff, J. L.; Kayser, K.; Krieger, W.

    1977-01-01

    The paper describes convergence and response studies in the low frequency range of complex systems, particularly with low values of damping of different distributions, and reports on the modification of the relaxation procedure required under these conditions. A new method is presented for response estimation in complex lumped parameter linear systems under random or deterministic steady state excitation. The essence of the method is the use of relaxation procedures with a suitable error function to find the estimated response; natural frequencies and normal modes are not computed. For a 45 degree of freedom system, and two relaxation procedures, convergence studies and frequency response estimates were performed. The low frequency studies are considered in the framework of earlier studies (Kayser and Bogdanoff, 1975) involving the mid to high frequency range.

  15. A Longitudinal Study of Voice before and after Phonosurgery for Removal of a Polyp

    ERIC Educational Resources Information Center

    Stajner-Katusic, Smiljka; Horga, Damir; Zrinski, Karolina Vrban

    2008-01-01

    The aim of the present investigation was to evaluate the acoustic parameters, perceptual estimation, and self-estimation of voice before, 1 month after, and 6 years after surgical removal of a vocal fold polyp. Subjects were five male patients who came to the Phoniatric Clinic because of breathiness. For all patients, a polyp of one vocal fold was…

  16. Forest growth of Mississippi's north unit - A case study of the Southern Forest surveys growth estimation procedures

    Treesearch

    Dennis M. May

    1988-01-01

    This report presents the procedures by which the Southern Forest Inventory and Analysis unit estimates forest growth from permanent horizontal point samples. Inventory data from the 1977-87 survey of Mississippi's north unit were used to demonstrate how trees on the horizontal point samples are classified into one of eight components of growth and, in turn, how...

  17. Private Returns to Vocational Education and Training Qualifications. A National Vocational Education and Training Research and Evaluation Program Report

    ERIC Educational Resources Information Center

    Long, Michael; Shah, Chandra

    2008-01-01

    This report presents estimates of the private rates of return for students studying for vocational education and training (VET) qualifications in Australia. Estimates of rates of return are commonly used by governments, businesses and others to compare the merits of different forms of investment where costs or benefits or both are distributed over…

  18. Projections of emissions from burning of biomass for use in studies of global climate and atmospheric chemistry

    Treesearch

    Darold E. Ward; Weimin Hao

    1991-01-01

    Emissions of trace gases and particulate matter from burning of biomass are generally factored into global climate models. Models for improving the estimates of the global annual release of emissions from biomass fires are presented. Estimates of total biomass consumed on a global basis range from 2 to 10 Pg (1 petagram = 10^15 g) per year. New...

  19. Spatial Statistical and Modeling Strategy for Inventorying and Monitoring Ecosystem Resources at Multiple Scales and Resolution Levels

    Treesearch

    Robin M. Reich; C. Aguirre-Bravo; M.S. Williams

    2006-01-01

    A statistical strategy for spatial estimation and modeling of natural and environmental resource variables and indicators is presented. This strategy is part of an inventory and monitoring pilot study that is being carried out in the Mexican states of Jalisco and Colima. Fine spatial resolution estimates of key variables and indicators are outputs that will allow the...

  20. Estimation of strength parameters of small-bore metal-polymer pipes

    NASA Astrophysics Data System (ADS)

    Shaydakov, V. V.; Chernova, K. V.; Penzin, A. V.

    2018-03-01

    The paper presents results from a set of laboratory studies of strength parameters of small-bore metal-polymer pipes of type TG-5/15. A wave method was used to estimate the provisional modulus of elasticity of the metal-polymer material of the pipes. Longitudinal deformation, transverse deformation and leak-off pressure were determined experimentally, with considerations for mechanical damage and pipe bend.

  1. HIV, HCV, HBV, and syphilis among transgender women from Brazil

    PubMed Central

    Bastos, Francisco I.; Bastos, Leonardo Soares; Coutinho, Carolina; Toledo, Lidiane; Mota, Jurema Corrêa; Velasco-de-Castro, Carlos Augusto; Sperandei, Sandro; Brignol, Sandra; Travassos, Tamiris Severino; dos Santos, Camila Mattos; Malta, Monica Siqueira

    2018-01-01

    Different sampling strategies, analytic alternatives, and estimators have been proposed to better assess the characteristics of different hard-to-reach populations and their respective infection rates (as well as their sociodemographic characteristics, associated harms, and needs) in the context of studies based on respondent-driven sampling (RDS). Despite several methodological advances and hundreds of empirical studies implemented worldwide, some inchoate findings and methodological challenges remain. The in-depth assessment of the local structure of networks and the performance of the available estimators are particularly relevant when the target populations are sparse and highly stigmatized. In such populations, bottlenecks as well as other sources of biases (for instance, due to homophily and/or too sparse or fragmented groups of individuals) may be frequent, affecting the estimates. In the present study, data were derived from a cross-sectional, multicity RDS study, carried out in 12 Brazilian cities with transgender women (TGW). Overall, infection rates for HIV and syphilis were very high, with some variation between different cities. Notwithstanding, findings are of great concern, considering the fact that female TGW are not only very hard-to-reach but also face deeply-entrenched prejudice and have been out of the reach of most therapeutic and preventive programs and projects. We cross-compared findings adjusted using 2 estimators (the classic estimator usually known as estimator II, originally proposed by Volz and Heckathorn) and a brand-new strategy to adjust data generated by RDS, partially based on Bayesian statistics and called, for the purposes of this paper, the RDS-B estimator. Adjusted prevalence was cross-compared with estimates generated by non-weighted analyses, using what we have called a naïve estimator, or rough estimates. PMID:29794601

  2. HIV, HCV, HBV, and syphilis among transgender women from Brazil: Assessing different methods to adjust infection rates of a hard-to-reach, sparse population.

    PubMed

    Bastos, Francisco I; Bastos, Leonardo Soares; Coutinho, Carolina; Toledo, Lidiane; Mota, Jurema Corrêa; Velasco-de-Castro, Carlos Augusto; Sperandei, Sandro; Brignol, Sandra; Travassos, Tamiris Severino; Dos Santos, Camila Mattos; Malta, Monica Siqueira

    2018-05-01

    Different sampling strategies, analytic alternatives, and estimators have been proposed to better assess the characteristics of different hard-to-reach populations and their respective infection rates (as well as their sociodemographic characteristics, associated harms, and needs) in the context of studies based on respondent-driven sampling (RDS). Despite several methodological advances and hundreds of empirical studies implemented worldwide, some inchoate findings and methodological challenges remain. The in-depth assessment of the local structure of networks and the performance of the available estimators are particularly relevant when the target populations are sparse and highly stigmatized. In such populations, bottlenecks as well as other sources of biases (for instance, due to homophily and/or too sparse or fragmented groups of individuals) may be frequent, affecting the estimates. In the present study, data were derived from a cross-sectional, multicity RDS study, carried out in 12 Brazilian cities with transgender women (TGW). Overall, infection rates for HIV and syphilis were very high, with some variation between different cities. Notwithstanding, findings are of great concern, considering the fact that female TGW are not only very hard-to-reach but also face deeply-entrenched prejudice and have been out of the reach of most therapeutic and preventive programs and projects. We cross-compared findings adjusted using 2 estimators (the classic estimator usually known as estimator II, originally proposed by Volz and Heckathorn) and a brand-new strategy to adjust data generated by RDS, partially based on Bayesian statistics and called, for the purposes of this paper, the RDS-B estimator. Adjusted prevalence was cross-compared with estimates generated by non-weighted analyses, using what we have called a naïve estimator, or rough estimates.
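    The classic RDS-II (Volz-Heckathorn) estimator mentioned above weights each respondent by the inverse of their reported network degree, on the reasoning that high-degree people are over-recruited through the referral chains. A minimal sketch with hypothetical respondents:

```python
def rds_ii_prevalence(outcomes, degrees):
    """Volz-Heckathorn (RDS-II) prevalence estimate: each respondent's
    binary outcome is weighted by the inverse of their network degree."""
    num = sum(y / d for y, d in zip(outcomes, degrees))
    den = sum(1.0 / d for d in degrees)
    return num / den

# hypothetical respondents: infection status and self-reported degree
y = [1, 0, 1, 1, 0, 0]
d = [10, 2, 5, 20, 4, 8]
naive = sum(y) / len(y)               # unweighted ("naive") estimate
adjusted = rds_ii_prevalence(y, d)    # down-weights high-degree respondents
print(round(naive, 3), round(adjusted, 3))
```

    In this toy sample the infected respondents happen to have high degrees, so the adjusted prevalence falls below the naive one; the Bayesian RDS-B adjustment compared in the paper is a separate, more elaborate procedure.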

  3. Lidar-derived estimate and uncertainty of carbon sink in successional phases of woody encroachment

    NASA Astrophysics Data System (ADS)

    Sankey, Temuulen; Shrestha, Rupesh; Sankey, Joel B.; Hardegree, Stuart; Strand, Eva

    2013-07-01

    Woody encroachment is a globally occurring phenomenon that contributes to the global carbon sink. The magnitude of this contribution needs to be estimated at regional and local scales to address uncertainties present in the global- and continental-scale estimates, and guide regional policy and management in balancing restoration activities, including removal of woody plants, with greenhouse gas mitigation goals. The objective of this study was to estimate carbon stored in various successional phases of woody encroachment. Using lidar measurements of individual trees, we present high-resolution estimates of aboveground carbon storage in juniper woodlands. Segmentation analysis of lidar point cloud data identified a total of 60,628 juniper tree crowns across four watersheds. Tree heights, canopy cover, and density derived from lidar were strongly correlated with field measurements of 2613 juniper stems measured in 85 plots (30 × 30 m). Aboveground total biomass of individual trees was estimated using a regression model with lidar-derived height and crown area as predictors (Adj. R2 = 0.76, p < 0.001, RMSE = 0.58 kg). The predicted mean aboveground woody carbon storage for the study area was 677 g/m2. Uncertainty in carbon storage estimates was examined with a Monte Carlo approach that addressed major error sources. The ranges predicted by the uncertainty analysis for individual-tree aboveground woody C and its associated standard deviation were 0.35-143.6 kg and 0.5-1.25 kg, respectively. Later successional phases of woody encroachment had, on average, twice the aboveground carbon relative to earlier phases. Woody encroachment might be more successfully managed and balanced with carbon storage goals by identifying priority areas in earlier phases of encroachment where intensive treatments are most effective.
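    The regression step, biomass predicted from lidar-derived height and crown area, can be sketched with ordinary least squares on synthetic trees. The coefficients and noise level below are invented for illustration and are not the study's fitted model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
height = rng.uniform(1.0, 8.0, n)        # lidar-derived tree height (m)
crown = rng.uniform(0.5, 12.0, n)        # lidar-derived crown area (m^2)
# toy "true" biomass relation (kg) plus measurement noise
biomass = 0.4 * height + 0.25 * crown + rng.normal(0, 0.3, n)

# least-squares fit: biomass ~ b0 + b1*height + b2*crown
X = np.column_stack([np.ones(n), height, crown])
coef, *_ = np.linalg.lstsq(X, biomass, rcond=None)

predicted = X @ coef
rmse = float(np.sqrt(np.mean((predicted - biomass) ** 2)))
print(np.round(coef, 2), round(rmse, 2))
```

    A Monte Carlo uncertainty analysis like the study's would then repeatedly perturb the inputs and refit, propagating the error sources through to the per-tree carbon estimates.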

  4. Preliminary Upper Estimate of Peak Currents in Transcranial Magnetic Stimulation at Distant Locations from a TMS Coil

    PubMed Central

    Makarov, Sergey N.; Yanamadala, Janakinadh; Piazza, Matthew W.; Helderman, Alex M.; Thang, Niang S.; Burnham, Edward H.; Pascual-Leone, Alvaro

    2016-01-01

    Goals: Transcranial magnetic stimulation (TMS) is increasingly used as a diagnostic and therapeutic tool for numerous neuropsychiatric disorders. The use of TMS might cause whole-body exposure to undesired induced currents in patients and TMS operators. The aim of the present study is to test and justify a previously known simple analytical model, which may be helpful as an upper estimate of eddy current density at a particular distant observation point for any body composition and any coil setup. Methods: We compare the analytical solution with comprehensive adaptive mesh refinement-based FEM simulations of a detailed full-body human model, two coil types, five coil positions, about 100,000 observation points, and two distinct pulse rise times, thus providing a representative number of different data sets for comparison, while also using other numerical data. Results: Our simulations reveal that, after a certain modification, the analytical model provides an upper estimate for the eddy current density at any location within the body. In particular, it overestimates the peak eddy currents at distant locations from a TMS coil by a factor of 10 on average. Conclusion: The simple analytical model tested in the present study may be valuable as a rapid method to safely estimate levels of TMS currents at different locations within a human body. Significance: At present, safe limits of general exposure to TMS electric and magnetic fields are an open subject, including fetal exposure for pregnant women. PMID:26685221

  5. A Meta-analytical Synthesis and Examination of Pathological and Problem Gambling Rates and Associated Moderators Among College Students, 1987-2016.

    PubMed

    Nowak, Donald E

    2018-06-01

    The problem of gambling addiction is especially noteworthy among college students, many of whom have the resources, proximity, free time, and desire to become involved in the myriad options of gambling now available. Although limited attention has been paid specifically to college student gambling in the body of literature, there have been three published meta-analyses estimating the prevalence of probable pathological gambling among college students. The research presented here is the largest and most comprehensive to date, reporting an up-to-date proportion of students worldwide exhibiting gambling pathology as assessed by the South Oaks Gambling Screen, and it is the first to include estimates of sub-clinical problem gambling. A thorough literature review and coding procedure resulted in 124 independent data estimates retrieved from 72 studies conducted between 1987 and the present, surveying 41,989 university students worldwide. The estimated proportion of probable pathological gamblers among students was computed at 6.13%, while the rate of problem gambling was computed at 10.23%. Statistical significance was found in the influence of the percentage of non-white students on pathological gambling rates. The implications of this and other moderator analyses such as age and year of studies, as well as recommendations for future practice in dealing with college students and gambling disorder on campus, are outlined and described in detail. Suggestions and rationales for future avenues of research in the area are also described.

  6. Ice Age Sea Level Change on a Dynamic Earth

    NASA Astrophysics Data System (ADS)

    Austermann, J.; Mitrovica, J. X.; Latychev, K.; Rovere, A.; Moucha, R.

    2014-12-01

    Changes in global mean sea level (GMSL) are a sensitive indicator of climate variability during the current ice age. Reconstructions are largely based on local sea level records, and the mapping to GMSL is computed from simulations of glacial isostatic adjustment (GIA) on 1-D Earth models. We argue, using two case studies, that resolving important, outstanding issues in ice age paleoclimate requires a more sophisticated consideration of mantle structure and dynamics. First, we consider the coral record from Barbados, which is widely used to constrain global ice volume changes since the Last Glacial Maximum (LGM, ~21 ka). Analyses of the record using 1-D viscoelastic Earth models have estimated a GMSL change since LGM of ~120 m, a value at odds with analyses of other far field records, which range from 130-135 m. We revisit the Barbados case using a GIA model that includes laterally varying Earth structure (Austermann et al., Nature Geo., 2013) and demonstrate that neglecting this structure, in particular the high-viscosity slab in the mantle linked to the subduction of the South American plate, has biased (low) previous estimates of GMSL change since LGM by ~10 m. Our analysis brings the Barbados estimate into accord with studies from other far-field sites. Second, we revisit estimates of GMSL during the mid-Pliocene warm period (MPWP, ~3 Ma), which was characterized by temperatures 2-3°C higher than present. The ice volume deficit during this period is a source of contention, with estimates ranging from 0-40 m GMSL equivalent. We argue that refining estimates of ice volume during MPWP requires a correction for mantle flow induced dynamic topography (DT; Rowley et al., Science, 2013), a signal neglected in previous studies of ice age sea level change. We present estimates of GIA- and DT-corrected elevations of MPWP shorelines from the U.S. east coast, Australia and South Africa in an attempt to reconcile these records with a single GMSL value.

  7. Satellite-based high-resolution mapping of rainfall over southern Africa

    NASA Astrophysics Data System (ADS)

    Meyer, Hanna; Drönner, Johannes; Nauss, Thomas

    2017-06-01

    A spatially explicit mapping of rainfall over southern Africa is necessary for eco-climatological studies and nowcasting, but accurate estimation is still a challenging task. This study presents a method to estimate hourly rainfall based on data from the Meteosat Second Generation (MSG) Spinning Enhanced Visible and Infrared Imager (SEVIRI). Rainfall measurements from about 350 weather stations from 2010-2014 served as ground truth for calibration and validation. SEVIRI and weather station data were used to train neural networks that allowed the estimation of rainfall area and rainfall quantities over all times of the day. The results revealed that 60 % of recorded rainfall events were correctly classified by the model (probability of detection, POD). However, the false alarm ratio (FAR) was high (0.80), leading to a Heidke skill score (HSS) of 0.18. Hourly rainfall quantities were estimated with an average hourly correlation of ρ = 0.33 and a root mean square error (RMSE) of 0.72. The correlation increased with temporal aggregation to 0.52 (daily), 0.67 (weekly) and 0.71 (monthly). The main weakness was the overestimation of rainfall events. The model results were compared to the Integrated Multi-satellitE Retrievals for GPM (IMERG) of the Global Precipitation Measurement (GPM) mission. Despite being a comparably simple approach, the presented MSG-based rainfall retrieval outperformed GPM IMERG in terms of rainfall area detection: GPM IMERG had a considerably lower POD. The HSS was not significantly different compared to the MSG-based retrieval due to a lower FAR of GPM IMERG. There were no further significant differences between the MSG-based retrieval and GPM IMERG in terms of correlation with the observed rainfall quantities. The MSG-based retrieval, however, provides rainfall in a higher spatial resolution. Though estimating rainfall from satellite data remains challenging, especially at high temporal resolutions, this study showed promising results towards improved spatio-temporal estimates of rainfall over southern Africa.
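    The POD, FAR and HSS values quoted above all follow from a rain/no-rain contingency table. The sketch below uses one hypothetical table that happens to reproduce the reported scores (many tables would):

```python
def verification_scores(hits, false_alarms, misses, correct_negatives):
    """Categorical verification scores from a 2x2 rain / no-rain
    contingency table."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    n = a + b + c + d
    pod = a / (a + c)                  # probability of detection
    far = b / (a + b)                  # false alarm ratio
    # Heidke skill score: accuracy relative to random chance
    expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n
    hss = (a + d - expected) / (n - expected)
    return pod, far, hss

pod, far, hss = verification_scores(hits=60, false_alarms=240, misses=40,
                                    correct_negatives=660)
print(round(pod, 2), round(far, 2), round(hss, 2))
```

    With 60 hits, 40 misses and 240 false alarms this table yields POD 0.60, FAR 0.80 and HSS about 0.18, illustrating how a high POD can coexist with low overall skill when false alarms dominate.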

  8. The diagnostic accuracy of screening questionnaires for the identification of adults with epilepsy: a systematic review.

    PubMed

    Keezer, Mark R; Bouma, Hanni K; Wolfson, Christina

    2014-11-01

    To describe the diagnostic accuracy of screening questionnaires to identify epilepsy in adults, we performed a systematic review of diagnostic studies that assessed the sensitivity and specificity of such screening questionnaires as compared to a physician's clinical assessment. We searched Ovid MEDLINE (1946 to present) and Ovid EMBASE (1947 to present) for studies that estimated the sensitivity and specificity of nonphysician administered screening questionnaires for adults with epilepsy. Both telephone and in-person administered questionnaires were included, whether applied to population or hospital/clinic-based cohorts. The risk of bias was assessed using the Quality Assessment of Diagnostic Studies-2 (QUADAS-2) tool. Our initial search strategy resulted in 917 records. We found nine studies eligible for inclusion. The estimated sensitivity and specificity of the questionnaires used to identify persons with a lifetime history of epilepsy ranged from 81.5% to 100% and 65.6% to 99.2%, respectively. The sensitivity and specificity of these questionnaires in identifying persons with active epilepsy ranged from 48.6% to 100% and 73.9% to 99.9%, respectively. Overall we found a high risk of bias in patient selection and study flow in the majority of studies. We identified nine validation studies of epilepsy screening questionnaires, summarized their study characteristics, presented their results, and performed a rigorous quality assessment. This review serves as a basis for future studies by providing a systematic review of existing work. Future research addressing previous limitations will ultimately allow us to more accurately estimate the burden and risk of epilepsy in the general population. Wiley Periodicals, Inc. © 2014 International League Against Epilepsy.

  9. Estimating outcomes and cost effectiveness using a single-arm clinical trial: ofatumumab for double-refractory chronic lymphocytic leukemia.

    PubMed

    Hatswell, Anthony J; Thompson, Gwilym J; Maroudas, Penny A; Sofrygin, Oleg; Delea, Thomas E

    2017-01-01

    Ofatumumab (Arzerra®, Novartis) is a treatment for chronic lymphocytic leukemia refractory to fludarabine and alemtuzumab [double refractory (DR-CLL)]. Ofatumumab was licensed on the basis of an uncontrolled Phase II study, Hx-CD20-406, in which patients receiving ofatumumab survived for a median of 13.9 months. However, the lack of an internal control arm presents an obstacle to the estimation of comparative effectiveness. The objective of the study was to present a method to estimate the cost effectiveness of ofatumumab in the treatment of DR-CLL. As no suitable historical control was available for modelling, the outcomes from non-responders to ofatumumab were used to model the effect of best supportive care (BSC). This was done via a Cox regression to control for differences in baseline characteristics between groups. This analysis was included in a partitioned survival model built in Microsoft® Excel with utilities and costs taken from published sources; costs and quality-adjusted life years (QALYs) were discounted at a rate of 3.5% per annum. Using the outcomes seen in non-responders, ofatumumab is expected to add approximately 0.62 life years (1.50 vs. 0.88). Using published utility values this translates to an additional 0.30 QALYs (0.77 vs. 0.47). At the list price, ofatumumab had a cost per QALY of £130,563, and a cost per life year of £63,542. The model was sensitive to changes in assumptions regarding overall survival estimates and utility values. This study demonstrates the potential of using data for non-responders to model outcomes for BSC in cost-effectiveness evaluations based on single-arm trials. Further research is needed on the estimation of comparative effectiveness using uncontrolled clinical studies.
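    The partitioned-survival bookkeeping described above can be sketched with invented curves: discounted life-years are the area under overall survival, pre-/post-progression time is split by the PFS curve, and state utilities weight each slice. Only the 3.5% per-annum discount rate is taken from the abstract; the curves and utilities below are invented.

```python
# Toy partitioned survival model (invented curves and utilities).
import math

dt = 1 / 12                        # monthly steps, in years
horizon = 10                       # modelling horizon in years
discount = lambda t: 1.035 ** -t   # 3.5% per-annum discounting

def discounted_area(curve):
    """Discounted area under a survival curve (left Riemann sum)."""
    return sum(curve(i * dt) * discount(i * dt) * dt
               for i in range(int(horizon / dt)))

# Invented exponential survival curves for illustration only.
os_curve = lambda t: math.exp(-t / 1.5)    # overall survival
pfs_curve = lambda t: math.exp(-t / 0.7)   # progression-free survival

pre = discounted_area(pfs_curve)   # discounted time pre-progression
ly = discounted_area(os_curve)     # total discounted life-years
post = ly - pre                    # discounted time post-progression
qalys = 0.65 * pre + 0.47 * post   # invented state utilities
print(f"life-years = {ly:.2f}, QALYs = {qalys:.2f}")
```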

  10. Estimation and Spatiotemporal Analysis of Methane Emissions from Agriculture in China

    NASA Astrophysics Data System (ADS)

    Fu, Chao; Yu, Guirui

    2010-10-01

    Estimating and analyzing the temporal and spatial patterns of methane emissions from agriculture (MEA) will help China formulate mitigation and adaptation strategies for the nation’s agricultural sector. Based on the Tier 2 method presented in the 2006 guidelines of the Intergovernmental Panel on Climate Change (IPCC) and on existing reports, this article presents a systematic estimation of MEA in China from 1990 to 2006, with a particular emphasis on trends and spatial distribution. Results from our study indicate that China’s MEA rose from 16.37 Tg yr-1 in 1990 to 19.31 Tg yr-1 in 2006, with an average annual increase of 1.04%. Over the study period, while emissions from field burning of crop residues remained rather low, those from rice cultivation and from livestock typically decreased and increased, respectively, showing opposite trends that chiefly resulted from changes in the cultivated areas for different rice seasons and changes in the populations of different animal species. Over the study period, China’s high-MEA regions shifted generally northward, chiefly as a result of reduced emissions from rice cultivation in most of China’s southern provinces and a substantial growth in emissions from livestock enteric fermentation in most of China’s northern, northeastern, and northwestern provinces. While this article provides significant information on estimates of MEA in China, uncertainties remain in the emission estimates for each source category. We conclude that China’s MEA will likely continue to increase in the future and recommend a demonstration study on MEA mitigation along the middle and lower reaches of the Yellow River. We further recommend enhanced data monitoring and statistical analysis, which will be essential for preparation of the national greenhouse gas (GHG) inventory.

  11. Do bacterial cell numbers follow a theoretical Poisson distribution? Comparison of experimentally obtained numbers of single cells with random number generation via computer simulation.

    PubMed

    Koyama, Kento; Hokunan, Hidekazu; Hasegawa, Mayumi; Kawamura, Shuso; Koseki, Shigenobu

    2016-12-01

    We investigated a bacterial sample preparation procedure for single-cell studies. In the present study, we examined whether single bacterial cells obtained via 10-fold dilution followed a theoretical Poisson distribution. Four serotypes of Salmonella enterica, three serotypes of enterohaemorrhagic Escherichia coli and one serotype of Listeria monocytogenes were used as sample bacteria. An inoculum of each serotype was prepared via a 10-fold dilution series to obtain bacterial cell counts with mean values of one or two. To determine whether the experimentally obtained bacterial cell counts follow a theoretical Poisson distribution, a likelihood ratio test was conducted between the experimentally obtained cell counts and a Poisson distribution whose parameter was estimated by maximum likelihood estimation (MLE). The bacterial cell counts of each serotype sufficiently followed a Poisson distribution. Furthermore, to examine the validity of the parameters of the Poisson distribution from experimentally obtained bacterial cell counts, we compared these with the parameters of a Poisson distribution that were estimated using random number generation via computer simulation. The Poisson distribution parameters experimentally obtained from bacterial cell counts were within the range of the parameters estimated using a computer simulation. These results demonstrate that the bacterial cell counts of each serotype obtained via 10-fold dilution followed a Poisson distribution. The fact that the frequency of bacterial cell counts follows a Poisson distribution at low numbers can be applied to single-cell studies with a few bacterial cells. In particular, the procedure presented in this study enables us to develop an inactivation model at the single-cell level that can estimate the variability of surviving bacterial numbers during the bacterial death process. Copyright © 2016 Elsevier Ltd. All rights reserved.
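    The dilution-and-count check can be mimicked in a few lines. A sketch (not the authors' code; well counts, trial numbers and the seed are invented): simulate cells landing in wells, take the sample mean as the Poisson MLE, and note that the MLE maximizes the log-likelihood, which is the baseline a likelihood-ratio comparison is built on.

```python
# Sketch: simulated single-cell counts from serial dilution, with the
# Poisson parameter estimated by MLE (the sample mean). All numbers
# are invented for illustration.
import math
import random

def poisson_pmf(k, lam):
    """Poisson probability mass function."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def loglik(counts, lam):
    """Poisson log-likelihood of the observed counts."""
    return sum(math.log(poisson_pmf(k, lam)) for k in counts)

random.seed(1)
# Simulate dilution to ~1 cell per aliquot: each of 1000 cells lands
# independently in one of 1000 wells; record the count in well 0.
counts = [sum(1 for _ in range(1000) if random.randrange(1000) == 0)
          for _ in range(200)]

lam_mle = sum(counts) / len(counts)  # the Poisson MLE is the sample mean
print(f"MLE rate = {lam_mle:.2f}")
```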

  12. A spatially explicit representation of conservation agriculture for application in global change studies.

    PubMed

    Prestele, Reinhard; Hirsch, Annette L; Davin, Edouard L; Seneviratne, Sonia I; Verburg, Peter H

    2018-05-10

    Conservation agriculture (CA) is widely promoted as a sustainable agricultural management strategy with the potential to alleviate some of the adverse effects of modern, industrial agriculture such as large-scale soil erosion, nutrient leaching and overexploitation of water resources. Moreover, agricultural land managed under CA is proposed to contribute to climate change mitigation and adaptation through reduced emission of greenhouse gases, increased solar radiation reflection, and the sustainable use of soil and water resources. Due to the lack of official reporting schemes, the amount of agricultural land managed under CA systems is uncertain and spatially explicit information about the distribution of CA required for various modeling studies is missing. Here, we present an approach to downscale present-day national-level estimates of CA to a 5 arcminute regular grid, based on multicriteria analysis. We provide a best estimate of CA distribution and an uncertainty range in the form of a low and high estimate of CA distribution, reflecting the inconsistency in CA definitions. We also design two scenarios of the potential future development of CA combining present-day data and an assessment of the potential for implementation using biophysical and socioeconomic factors. By our estimates, 122-215 Mha or 9%-15% of global arable land is currently managed under CA systems. The lower end of the range represents CA as an integrated system of permanent no-tillage, crop residue management and crop rotations, while the high estimate includes a wider range of areas primarily devoted to temporary no-tillage or reduced tillage operations. Our scenario analysis suggests a future potential of CA in the range of 533-1130 Mha (38%-81% of global arable land). Our estimates can be used in various ecosystem modeling applications and are expected to help identify more realistic climate mitigation and adaptation potentials of agricultural practices. © 2018 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
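    Downscaling a national total to grid cells by multicriteria suitability can be sketched as a proportional allocation. All areas, scores and the national total below are invented, and the paper's actual multicriteria analysis is more elaborate than this two-step allocate-and-cap rule:

```python
# Toy proportional downscaling of a national CA total (invented data).
national_ca_mha = 4.0   # invented national CA total, Mha
# cell id -> (arable area in Mha, multicriteria suitability in [0, 1])
cells = {"c1": (3.0, 0.9), "c2": (5.0, 0.4), "c3": (2.0, 0.1)}

weights = {c: area * s for c, (area, s) in cells.items()}
total_w = sum(weights.values())
alloc = {c: national_ca_mha * w / total_w for c, w in weights.items()}
# Cap each cell at its arable area (no cell is exceeded with these numbers).
alloc = {c: min(a, cells[c][0]) for c, a in alloc.items()}
print({c: round(a, 3) for c, a in alloc.items()})
```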

  13. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study.

    PubMed

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-21

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed 'MPD-AwTTV'. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization are from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function, we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.

  14. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study

    NASA Astrophysics Data System (ADS)

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization are from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function, we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.

  15. Exploration of the utility of ancestry informative markers for genetic association studies of African Americans with type 2 diabetes and end stage renal disease.

    PubMed

    Keene, Keith L; Mychaleckyj, Josyf C; Leak, Tennille S; Smith, Shelly G; Perlegas, Peter S; Divers, Jasmin; Langefeld, Carl D; Freedman, Barry I; Bowden, Donald W; Sale, Michèle M

    2008-09-01

    Admixture and population stratification are major concerns in genetic association studies. We wished to evaluate the impact of admixture using empirically derived data from genetic association studies of African Americans (AA) with type 2 diabetes (T2DM) and end-stage renal disease (ESRD). Seventy ancestry informative markers (AIMs) were genotyped in 577 AA with T2DM-ESRD, 596 AA controls, 44 Yoruba Nigerian (YRI) and 39 European American (EA) controls. Genotypic data and association results for eight T2DM candidate gene studies in our AA population were included. Ancestral estimates were calculated using FRAPPE, ADMIXMAP and STRUCTURE for all AA samples, using varying numbers of AIMs (25, 50, and 70). Ancestry estimates varied significantly across all three programs with the highest estimates obtained using STRUCTURE, followed by ADMIXMAP; while FRAPPE estimates were the lowest. FRAPPE estimates were similar using varying numbers of AIMs, while STRUCTURE estimates using 25 AIMs differed from estimates using 50 and 70 AIMs. Female T2DM-ESRD cases showed higher mean African proportions as compared to female controls, male cases, and male controls. Age showed a weak but significant correlation with individual ancestral estimates in AA cases (r2 = 0.101; P = 0.019) and in the combined set (r2 = 0.131; P = 3.57 × 10−5). The absolute difference between frequencies in parental populations, absolute delta, was correlated with admixture impact for dominant, additive, and recessive genotypic models of association. This study presents exploratory analyses of the impact of admixture on studies of AA with T2DM-ESRD and supports the use of ancestral proportions as a means of reducing confounding effects due to admixture.

  16. Exploration of the utility of ancestry informative markers for genetic association studies of African Americans with type 2 diabetes and end stage renal disease

    PubMed Central

    Keene, Keith L.; Mychaleckyj, Josyf C.; Leak, Tennille S.; Smith, Shelly G.; Perlegas, Peter S.; Divers, Jasmin; Langefeld, Carl D.; Freedman, Barry I.; Bowden, Donald W.; Sale, Michèle M.

    2009-01-01

    Admixture and population stratification are major concerns in genetic association studies. We wished to evaluate the impact of admixture using empirically derived data from genetic association studies of African Americans (AA) with type 2 diabetes (T2DM) and end-stage renal disease (ESRD). Seventy ancestry informative markers (AIMs) were genotyped in 577 AA with T2DM-ESRD, 596 AA controls, 44 Yoruba Nigerian (YRI) and 39 European American (EA) controls. Genotypic data and association results for eight T2DM candidate gene studies in our AA population were included. Ancestral estimates were calculated using FRAPPE, ADMIXMAP and STRUCTURE for all AA samples, using varying numbers of AIMs (25, 50, and 70). Ancestry estimates varied significantly across all three programs with the highest estimates obtained using STRUCTURE, followed by ADMIXMAP; while FRAPPE estimates were the lowest. FRAPPE estimates were similar using varying numbers of AIMs, while STRUCTURE estimates using 25 AIMs differed from estimates using 50 and 70 AIMs. Female T2DM-ESRD cases showed higher mean African proportions as compared to female controls, male cases, and male controls. Age showed a weak but significant correlation with individual ancestral estimates in AA cases (r2=0.101; P=0.019) and in the combined set (r2=0.131; P=3.57×10−5). The absolute difference between frequencies in parental populations, absolute δ, was correlated with admixture impact for dominant, additive, and recessive genotypic models of association. This study presents exploratory analyses of the impact of admixture on studies of AA with T2DM-ESRD and supports the use of ancestral proportions as a means of reducing confounding effects due to admixture. PMID:18654799

  17. Techniques for estimating health care costs with censored data: an overview for the health services researcher

    PubMed Central

    Wijeysundera, Harindra C; Wang, Xuesong; Tomlinson, George; Ko, Dennis T; Krahn, Murray D

    2012-01-01

    Objective The aim of this study was to review statistical techniques for estimating the mean population cost using health care cost data that, because of the inability to achieve complete follow-up until death, are right censored. The target audience is health service researchers without an advanced statistical background. Methods Data were sourced from longitudinal heart failure costs from Ontario, Canada, and administrative databases were used for estimating costs. The dataset consisted of 43,888 patients, with follow-up periods ranging from 1 to 1538 days (mean 576 days). Mean health care costs over 1080 days of follow-up were calculated using naïve estimators, such as the full-sample and uncensored-case estimators. Reweighted estimators – specifically, the inverse probability weighted estimator – were calculated, as was phase-based costing. Costs were adjusted to 2008 Canadian dollars using the Bank of Canada consumer price index (http://www.bankofcanada.ca/en/cpi.html). Results Over the restricted follow-up of 1080 days, 32% of patients were censored. The full-sample estimator was found to underestimate mean cost ($30,420) compared with the reweighted estimators ($36,490). The phase-based costing estimate of $37,237 was similar to that of the simple reweighted estimator. Conclusion The authors recommend against the use of full-sample or uncensored-case estimators when censored data are present. In the presence of heavy censoring, phase-based costing is an attractive alternative approach. PMID:22719214
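    The gap between the naïve estimators and the reweighted ones can be illustrated with a toy version of the simple inverse-probability-weighted estimator: each observed death is up-weighted by the inverse of the Kaplan-Meier probability of remaining uncensored just before that death. All six patient records below are invented.

```python
# Toy inverse-probability-weighted (IPW) mean-cost estimator.
# Each record: (time in days, cumulative cost, observed_death); records
# with observed_death=False were censored at `time`. All values invented.
records = [
    (100, 5000, True), (300, 12000, True), (500, 20000, False),
    (700, 30000, True), (900, 45000, False), (1000, 50000, True),
]

def censor_survival(records, t):
    """Kaplan-Meier estimate of P(censoring time > t); the censoring
    events are the records with observed_death=False."""
    s = 1.0
    for time, _, death in sorted(records):
        if time > t:
            break
        if not death:
            at_risk = sum(1 for u, _, _ in records if u >= time)
            s *= 1 - 1 / at_risk
    return s

n = len(records)
deaths = [(t, c) for t, c, d in records if d]
naive = sum(c for _, c in deaths) / len(deaths)  # uncensored-case mean
# Weight each death by 1/P(still uncensored just before the death time).
ipw = sum(c / censor_survival(records, t - 1e-9) for t, c in deaths) / n
print(f"uncensored-case mean = {naive:.0f}, IPW mean = {ipw:.0f}")
```

    On this toy data the uncensored-case mean falls below the IPW mean, mirroring the underestimation the abstract reports for the full-sample estimator.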

  18. A posteriori error estimates in voice source recovery

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model that relates these quantities is used for the solution. A variational method of solving the inverse problem of voice source recovery is proposed for a new parametric class of sources: piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  19. Comparing adaptive procedures for estimating the psychometric function for an auditory gap detection task.

    PubMed

    Shen, Yi

    2013-05-01

    A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
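    A 2-down/1-up staircase, one common variant of the up-down procedure, can be sketched by simulating a listener whose responses follow a logistic psychometric function. The threshold, slope and all other parameters below are invented, not taken from the study:

```python
# Minimal 2-down/1-up staircase sketch (invented parameters). The rule
# converges toward the 70.7%-correct point of the psychometric function.
import math
import random

def p_correct(gap_ms, threshold=5.0, slope=1.5):
    """Logistic psychometric function of gap duration (assumed form)."""
    return 0.5 + 0.5 / (1 + math.exp(-slope * (gap_ms - threshold)))

random.seed(0)
level, step, correct_in_row = 12.0, 1.0, 0
reversals, direction = [], -1
while len(reversals) < 12:
    if random.random() < p_correct(level):
        correct_in_row += 1
        if correct_in_row == 2:          # two correct -> harder (shorter gap)
            correct_in_row = 0
            if direction == +1:
                reversals.append(level)  # top reversal
            direction, level = -1, max(level - step, 0.1)
    else:
        correct_in_row = 0               # one wrong -> easier (longer gap)
        if direction == -1:
            reversals.append(level)      # bottom reversal
        direction, level = +1, level + step

estimate = sum(reversals[4:]) / len(reversals[4:])  # mean of late reversals
print(f"estimated 70.7%-correct gap ~ {estimate:.1f} ms")
```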

  20. The Cost of Crime to Society: New Crime-Specific Estimates for Policy and Program Evaluation

    PubMed Central

    French, Michael T.; Fang, Hai

    2010-01-01

    Estimating the cost to society of individual crimes is essential to the economic evaluation of many social programs, such as substance abuse treatment and community policing. A review of the crime-costing literature reveals multiple sources, including published articles and government reports, which collectively represent the alternative approaches for estimating the economic losses associated with criminal activity. Many of these sources are based upon data that are more than ten years old, indicating a need for updated figures. This study presents a comprehensive methodology for calculating the cost to society of various criminal acts. Tangible and intangible losses are estimated using the most current data available. The selected approach, which incorporates both the cost-of-illness and the jury compensation methods, yields cost estimates for more than a dozen major crime categories, including several categories not found in previous studies. Updated crime cost estimates can help government agencies and other organizations execute more prudent policy evaluations, particularly benefit-cost analyses of substance abuse treatment or other interventions that reduce crime. PMID:20071107

  1. Regionalising MUSLE factors for application to a data-scarce catchment

    NASA Astrophysics Data System (ADS)

    Gwapedza, David; Slaughter, Andrew; Hughes, Denis; Mantel, Sukhmani

    2018-04-01

    The estimation of soil loss and sediment transport is important for effective management of catchments. A model for semi-arid catchments in southern Africa has been developed; however, simplification of the model parameters and further testing are required. Soil loss is calculated through the Modified Universal Soil Loss Equation (MUSLE). The aims of the current study were to: (1) regionalise the MUSLE erodibility factors; and (2) perform a sensitivity analysis and validate the soil loss outputs against independently-estimated measures. The regionalisation was developed using Geographic Information Systems (GIS) coverages. The model was applied to a high erosion semi-arid region in the Eastern Cape, South Africa. Sensitivity analysis indicated model outputs to be more sensitive to the vegetation cover factor. The simulated soil loss estimates of 40 t ha-1 yr-1 were within the range of estimates by previous studies. The outcome of the present research is a framework for parameter estimation for the MUSLE through regionalisation. This is part of the ongoing development of a model which can estimate soil loss and sediment delivery at broad spatial and temporal scales.
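    MUSLE's event-based form, Y = 11.8 (Q·qp)^0.56 · K · LS · C · P (Williams, 1975), can be evaluated directly once the factors are known. The factor values below are invented for illustration, not the study's regionalised values:

```python
# Back-of-envelope MUSLE evaluation (all factor values invented).
def musle(Q_m3, qp_m3s, K, LS, C, P):
    """Event sediment yield (t): Y = 11.8 * (Q * qp)**0.56 * K*LS*C*P,
    with Q the runoff volume (m^3) and qp the peak flow rate (m^3/s)."""
    return 11.8 * (Q_m3 * qp_m3s) ** 0.56 * K * LS * C * P

# Hypothetical storm on a degraded semi-arid hillslope.
y = musle(Q_m3=5000, qp_m3s=2.0, K=0.3, LS=1.2, C=0.25, P=1.0)
print(f"event sediment yield ~ {y:.0f} t")
```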

  2. The cost of crime to society: new crime-specific estimates for policy and program evaluation.

    PubMed

    McCollister, Kathryn E; French, Michael T; Fang, Hai

    2010-04-01

    Estimating the cost to society of individual crimes is essential to the economic evaluation of many social programs, such as substance abuse treatment and community policing. A review of the crime-costing literature reveals multiple sources, including published articles and government reports, which collectively represent the alternative approaches for estimating the economic losses associated with criminal activity. Many of these sources are based upon data that are more than 10 years old, indicating a need for updated figures. This study presents a comprehensive methodology for calculating the cost to society of various criminal acts. Tangible and intangible losses are estimated using the most current data available. The selected approach, which incorporates both the cost-of-illness and the jury compensation methods, yields cost estimates for more than a dozen major crime categories, including several categories not found in previous studies. Updated crime cost estimates can help government agencies and other organizations execute more prudent policy evaluations, particularly benefit-cost analyses of substance abuse treatment or other interventions that reduce crime. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  3. Computation of nonlinear least squares estimator and maximum likelihood using principles in matrix calculus

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi; Balasiddamuni, P.

    2017-11-01

    This paper uses matrix calculus techniques to obtain the Nonlinear Least Squares Estimator (NLSE), the Maximum Likelihood Estimator (MLE) and a linear pseudo model for a nonlinear regression model. David Pollard and Peter Radchenko [1] explained analytic techniques to compute the NLSE. However, the present research paper introduces an innovative method to compute the NLSE using principles in multivariate calculus. This study is concerned with new optimization techniques used to compute the MLE and NLSE. Anh [2] derived the NLSE and MLE of a heteroscedastic regression model. Lemcoff [3] discussed a procedure to obtain a linear pseudo model for a nonlinear regression model. In this research article a new technique is developed to obtain the linear pseudo model for a nonlinear regression model using multivariate calculus. The linear pseudo model of Edmond Malinvaud [4] is explained in a very different way in this paper. In 2006, David Pollard et al. used empirical process techniques to study the asymptotics of the least-squares estimator (LSE) for fitting a nonlinear regression function. Jae Myung [13] provided a conceptual introduction to maximum likelihood estimation in his work "Tutorial on maximum likelihood estimation".
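    A Gauss-Newton iteration is one standard way to compute an NLSE from the matrix-calculus normal equations (JᵀJ)d = Jᵀr, where J is the Jacobian of the model with respect to the parameters and r the residual vector. The exponential model and data below are invented for illustration, not taken from the paper:

```python
# Gauss-Newton sketch for a nonlinear least-squares estimator (NLSE)
# on an invented exponential model y = a * exp(b * x), noise-free data.
import math

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
a_true, b_true = 2.0, -1.0
ys = [a_true * math.exp(b_true * x) for x in xs]

def gauss_newton(a, b, iters=20):
    for _ in range(iters):
        # residuals r_i = y_i - a*exp(b*x_i); Jacobian wrt (a, b)
        r = [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
        J = [(math.exp(b * x), a * x * math.exp(b * x)) for x in xs]
        # normal equations (J^T J) d = J^T r, solved by 2x2 Cramer's rule
        g11 = sum(j1 * j1 for j1, _ in J)
        g12 = sum(j1 * j2 for j1, j2 in J)
        g22 = sum(j2 * j2 for _, j2 in J)
        h1 = sum(j1 * ri for (j1, _), ri in zip(J, r))
        h2 = sum(j2 * ri for (_, j2), ri in zip(J, r))
        det = g11 * g22 - g12 * g12
        a += (g22 * h1 - g12 * h2) / det
        b += (g11 * h2 - g12 * h1) / det
    return a, b

a_hat, b_hat = gauss_newton(1.0, -0.5)
print(f"a ~ {a_hat:.3f}, b ~ {b_hat:.3f}")
```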

  4. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2015-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
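    The exhaustive-search idea can be sketched for a tiny linear problem: score each candidate suite by the theoretical sum of squared estimation errors of a least-squares health-parameter estimator, trace((HᵀR⁻¹H)⁻¹), and keep the minimizer. The sensitivity matrix and noise variances below are invented, not engine data:

```python
# Toy sensor-suite selection by exhaustive search (invented numbers).
from itertools import combinations

# Row i: sensitivities of sensor i to two health parameters; sigma2[i]:
# that sensor's measurement-noise variance (diagonal R).
H = [(1.0, 0.2), (0.3, 1.0), (0.9, 0.8), (0.5, 0.4)]
sigma2 = [0.04, 0.09, 0.01, 0.25]

def score(suite):
    """Sum of squared estimation errors: trace of the inverse of the
    2x2 information matrix F = sum_i h_i h_i^T / sigma2_i."""
    f11 = f12 = f22 = 0.0
    for i in suite:
        a, b = H[i]
        f11 += a * a / sigma2[i]
        f12 += a * b / sigma2[i]
        f22 += b * b / sigma2[i]
    det = f11 * f22 - f12 * f12
    return (f11 + f22) / det   # trace(F^-1) for a 2x2 matrix

best = min(combinations(range(4), 2), key=score)
print("best 2-sensor suite:", best, "error score:", round(score(best), 4))
```

    Note how the low-noise sensor 2 still loses out: its row is nearly collinear with sensor 0's, so the pair with complementary sensitivity directions wins.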

  5. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2016-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.

  6. Estimates of ground-water discharge as determined from measurements of evapotranspiration, Ash Meadows area, Nye County, Nevada

    USGS Publications Warehouse

    Laczniak, R.J.; DeMeo, G.A.; Reiner, S.R.; Smith, J. LaRue; Nylund, W.E.

    1999-01-01

    Ash Meadows is one of the major discharge areas within the regional Death Valley ground-water flow system of southern Nevada and adjacent California. Ground water discharging at Ash Meadows is replenished from inflow derived from an extensive recharge area that includes the eastern part of the Nevada Test Site (NTS). Currently, contaminants introduced into the subsurface by past nuclear testing at NTS are the subject of study by the U.S. Department of Energy's Environmental Restoration Program. The transport of any contaminant in contact with ground water is controlled in part by the rate and direction of ground-water flow, which itself depends on the location and quantity of ground water discharging from the flow system. To best evaluate any potential risk associated with these test-generated contaminants, studies were undertaken to accurately quantify discharge from areas downgradient from the NTS. This report presents results of a study to refine the estimate of ground-water discharge at Ash Meadows. The study estimates ground-water discharge from the Ash Meadows area through a rigorous quantification of evapotranspiration (ET). To accomplish this objective, the study identifies areas of ongoing ground-water ET, delineates unique areas of ET defined on the basis of similarities in vegetation and soil-moisture conditions, and computes ET rates for each of the delineated areas. A classification technique using spectral-reflectance characteristics determined from satellite images recorded in 1992 identified seven unique units representing areas of ground-water ET. The total area classified encompasses about 10,350 acres dominated primarily by lush desert vegetation. Each unique area, referred to as an ET unit, generally consists of one or more assemblages of local phreatophytes. The ET units identified range from sparse grasslands to open water. 
Annual ET rates are computed by energy-budget methods from micrometeorological measurements made at 10 sites within six of the seven identified ET units. Micrometeorological data were collected for a minimum of 1 year at each site during 1994 through 1997. Evapotranspiration ranged from 0.6 foot per year in a sparse, dry saltgrass environment to 8.6 feet per year over open water. Ancillary data, including water levels, were collected during this same period to gain additional insight into the evapotranspiration process. Water levels measured in shallow wells showed annual declines of more than 10 feet and daily declines as high as 0.3 foot attributed to water losses associated with evapotranspiration. Mean annual ET from the Ash Meadows area is estimated at 21,000 acre-feet. An estimate of ground-water discharge, based on this ET estimate, is presented as a range to account for uncertainties in the contribution of local precipitation. The estimates given for mean annual ground-water discharge range from 18,000 to 21,000 acre-feet. The low estimate assumes a large contribution from local precipitation in computed ET rates, whereas the high estimate assumes no contribution from local precipitation. The range presented is only slightly higher than previous estimates of ground-water discharge from the Ash Meadows area based primarily on springflow measurements.
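The discharge bookkeeping described above amounts to subtracting an assumed local-precipitation contribution from total ET. A minimal sketch, using the abstract's 21,000 acre-feet ET figure; the 3,000 acre-feet precipitation contribution for the low case is a hypothetical input:

```python
# Ground-water discharge = total ET minus the part supplied by local precipitation.
# ET total is from the abstract; the precipitation contribution is hypothetical.
def groundwater_discharge(total_et_acre_ft, precip_contribution_acre_ft):
    return total_et_acre_ft - precip_contribution_acre_ft

total_et = 21_000  # mean annual ET, acre-feet (from the study)
low = groundwater_discharge(total_et, 3_000)   # large precipitation contribution (hypothetical)
high = groundwater_discharge(total_et, 0)      # no precipitation contribution
```

The 18,000-21,000 acre-feet range quoted in the report corresponds to varying this precipitation term between a large contribution and zero.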

  7. An interview technique for recording work postures in epidemiological studies. Music-Norrtälje Study Group.

    PubMed

    Wiktorin, C; Selin, K; Ekenvall, L; Alfredsson, L

    1996-02-01

The aim of the study was to present and evaluate a work-task-oriented interview technique focusing on the placement of the hands relative to the body and assessing per cent time spent in five standard work postures during a working day. The reproducibility of estimated time spent in each work posture was tested by the test-retest method in 32 subjects; 16 were interviewed by the same interviewer and 16 by a different interviewer at the retest. The validity of the estimated time spent in the five standard work postures was tested in relation to observations in 58 male blue-collar workers. The mean registration (assessment) time was 6 hours and 15 minutes. No evident differences in reproducibility depending on whether the same or a different interviewer conducted the test and retest could be observed. The linear relationship between times estimated by the interview and by observations was high for four of the work postures: 'sitting' (r = 0.86), 'standing with hands above shoulder level' (r = 0.87), 'between shoulder and knuckle level' (r = 0.75), and 'below knuckle level' (r = 0.93). When the work posture 'standing with hands between shoulder and knuckle level' was divided into 'hands fixed' (r = 0.62) and 'hands not fixed' (r = 0.50), the correlations were weak. Current musculoskeletal complaints did not influence the accuracy of the estimations. The present task-oriented interview technique may be the best available method to estimate these work postures in a way that requires few resources compared to observations and technical measurements.

  8. Hedonic approaches based on spatial econometrics and spatial statistics: application to evaluation of project benefits

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Morito; Seya, Hajime

    2009-12-01

This study discusses the theoretical foundation of the application of spatial hedonic approaches—the hedonic approach employing spatial econometrics and/or spatial statistics—to benefits evaluation. The study highlights the limitations of the spatial econometrics approach since it uses a spatial weight matrix that is not employed by the spatial statistics approach. Further, the study presents empirical analyses by applying the Spatial Autoregressive Error Model (SAEM), which is based on the spatial econometrics approach, and the Spatial Process Model (SPM), which is based on the spatial statistics approach. SPMs are estimated under both isotropy and anisotropy and applied to different mesh sizes. The empirical analysis reveals that the estimated benefits are quite different, especially between isotropic and anisotropic SPM and between isotropic SPM and SAEM; the estimated benefits are similar for SAEM and anisotropic SPM. The study demonstrates that the mesh size does not affect the estimated amount of benefits. Finally, the study provides a confidence interval for the estimated benefits and raises an issue with regard to benefit evaluation.

  9. Long Term Exposure to NO2 and Diabetes Incidence in the Black Women's Health Study

    PubMed Central

    Coogan, Patricia F.; White, Laura F.; Yu, Jeffrey; Burnett, Richard T.; Marshall, Julian D.; Seto, Edmund; Brook, Robert D.; Palmer, Julie R.; Rosenberg, Lynn; Jerrett, Michael

    2016-01-01

While laboratory studies show that air pollutants can potentiate insulin resistance, the epidemiologic evidence regarding the association of air pollution with diabetes incidence is conflicting. The purpose of the present study was to assess the association of traffic-related nitrogen dioxide (NO2) with the incidence of diabetes in a longitudinal cohort study of African American women. We used Cox proportional hazards models to calculate hazard ratios and 95% confidence intervals (CI) for diabetes associated with exposure to NO2 among 43,003 participants in the Black Women's Health Study (BWHS). Pollutant levels at participant residential locations were estimated with 1) a land use regression model for participants living in 56 metropolitan areas, and 2) a dispersion model for participants living in 27 of the cities. From 1995 to 2011, 4387 cases of diabetes occurred. The hazard ratios per interquartile range of NO2 (9.7 ppb), adjusted for age, metropolitan area, education, vigorous exercise, body mass index, smoking, and diet, were 0.96 (95% CI 0.88-1.06) using the land use regression model estimates and 0.94 (95% CI 0.80-1.10) using the dispersion model estimates. The present results do not support the hypothesis that exposure to NO2 contributes to diabetes incidence in African American women. PMID:27124624
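The "hazard ratio per interquartile range" convention used above rescales a Cox model's per-unit log-hazard coefficient by the IQR of the exposure. A minimal sketch, where the per-ppb coefficient `beta` is a hypothetical value chosen only to imply the reported HR of 0.96 per 9.7 ppb:

```python
import math

# Rescale a per-unit Cox coefficient to a hazard ratio per IQR of exposure.
def hr_per_iqr(beta_per_unit, iqr):
    return math.exp(beta_per_unit * iqr)

iqr_no2 = 9.7  # ppb, from the abstract
# Hypothetical per-ppb log-hazard coefficient consistent with HR = 0.96 per IQR
beta = math.log(0.96) / iqr_no2
hr = hr_per_iqr(beta, iqr_no2)
```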

  10. Effectiveness and cost-effectiveness of an awareness campaign for colorectal cancer: a mathematical modeling study.

    PubMed

    Whyte, Sophie; Harnan, Susan

    2014-06-01

A campaign to increase awareness of the signs and symptoms of colorectal cancer (CRC) and encourage self-presentation to a GP was piloted in two regions of England in 2011. Short-term data from the pilot evaluation on campaign cost and changes in GP attendances/referrals, CRC incidence, and CRC screening uptake were available. The objective was to estimate the effectiveness and cost-effectiveness of a CRC awareness campaign by using a mathematical model which extrapolates short-term outcomes to predict long-term impacts on cancer mortality, quality-adjusted life-years (QALYs), and costs. A mathematical model representing England (aged 30+) over a lifetime horizon was developed. Long-term changes to cancer incidence, cancer stage distribution, cancer mortality, and QALYs were estimated. Costs were estimated incorporating the costs of delivering the campaign, additional GP attendances, and changes in CRC treatment. Data from the pilot campaign suggested that the awareness campaign caused a 10% increase in presentation rates lasting 1 month. Based on this, the model predicted the campaign would cost £5.5 million, prevent 66 CRC deaths, and gain 404 QALYs. The incremental cost-effectiveness ratio compared to "no campaign" was £13,496 per QALY. Results were sensitive to the magnitude and duration of the increase in presentation rates and to disease stage. The effectiveness and cost-effectiveness of a cancer awareness campaign can be estimated based on short-term data. Such predictions will aid policy makers in prioritizing between cancer control strategies. Future cost-effectiveness studies would benefit from campaign evaluations reporting the following: data completeness, duration of impact, impact on emergency presentations, and comparison with non-intervention regions.
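The headline figure above is an incremental cost-effectiveness ratio: the difference in cost divided by the difference in QALYs against the "no campaign" comparator. A minimal sketch with hypothetical inputs (the published £13,496/QALY reflects net costs after treatment offsets, which are not reproduced here):

```python
# ICER = incremental cost / incremental QALYs versus the comparator.
def icer(delta_cost, delta_qalys):
    return delta_cost / delta_qalys

# Entirely hypothetical incremental values for illustration
example_icer = icer(5_000_000, 400)  # pounds per QALY gained
```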

  11. Robust estimation for partially linear models with large-dimensional covariates

    PubMed Central

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2014-01-01

We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of o(n), where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures. PMID:24955087

  12. Robust estimation for partially linear models with large-dimensional covariates.

    PubMed

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of [Formula: see text], where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.

  13. Quaternion normalization in spacecraft attitude determination

    NASA Technical Reports Server (NTRS)

    Deutschmann, J.; Markley, F. L.; Bar-Itzhack, Itzhack Y.

    1993-01-01

    Attitude determination of spacecraft usually utilizes vector measurements such as Sun, center of Earth, star, and magnetic field direction to update the quaternion which determines the spacecraft orientation with respect to some reference coordinates in the three dimensional space. These measurements are usually processed by an extended Kalman filter (EKF) which yields an estimate of the attitude quaternion. Two EKF versions for quaternion estimation were presented in the literature; namely, the multiplicative EKF (MEKF) and the additive EKF (AEKF). In the multiplicative EKF, it is assumed that the error between the correct quaternion and its a-priori estimate is, by itself, a quaternion that represents the rotation necessary to bring the attitude which corresponds to the a-priori estimate of the quaternion into coincidence with the correct attitude. The EKF basically estimates this quotient quaternion and then the updated quaternion estimate is obtained by the product of the a-priori quaternion estimate and the estimate of the difference quaternion. In the additive EKF, it is assumed that the error between the a-priori quaternion estimate and the correct one is an algebraic difference between two four-tuple elements and thus the EKF is set to estimate this difference. The updated quaternion is then computed by adding the estimate of the difference to the a-priori quaternion estimate. If the quaternion estimate converges to the correct quaternion, then, naturally, the quaternion estimate has unity norm. This fact was utilized in the past to obtain superior filter performance by applying normalization to the filter measurement update of the quaternion. It was observed for the AEKF that when the attitude changed very slowly between measurements, normalization merely resulted in a faster convergence; however, when the attitude changed considerably between measurements, without filter tuning or normalization, the quaternion estimate diverged. 
However, when the quaternion estimate was normalized, the estimate converged faster and to a lower error than with tuning only. In last year's symposium we presented three new AEKF normalization techniques and compared them with the brute-force method presented in the literature. The present paper addresses the issue of normalization of the MEKF and examines several MEKF normalization techniques.
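The brute-force normalization mentioned above simply rescales the estimated four-tuple to unit norm after the measurement update. A minimal sketch:

```python
import math

# Brute-force normalization of a quaternion estimate (4-tuple) to unit norm,
# as applied after the filter's measurement update.
def normalize(q):
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

q_hat = normalize((1.0, 2.0, 2.0, 4.0))  # norm of the input is 5
```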

  14. Land-use change and costs to rural households: a case study in groundwater nitrate contamination

    NASA Astrophysics Data System (ADS)

    Keeler, Bonnie L.; Polasky, Stephen

    2014-07-01

Loss of grassland from conversion to agriculture threatens water quality and other valuable ecosystem services. Here we estimate how land-use change affects the probability of groundwater contamination by nitrate in private drinking water wells. We find that conversion of grassland to agriculture from 2007 to 2012 in Southeastern Minnesota is expected to increase the future number of wells exceeding 10 ppm nitrate-nitrogen by 45% (from 888 to 1292 wells). We link outputs of the groundwater well contamination model to cost estimates for well remediation, well replacement, and avoidance behaviors to estimate the potential economic value lost due to nitrate contamination from observed land-use change. We estimate $0.7-12 million in costs (present values over a 20-year horizon) to address the increased risk of nitrate contamination of private wells. Our study demonstrates how biophysical models and economic valuation can be integrated to estimate the welfare consequences of land-use change.
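The "present values over a 20-year horizon" phrasing implies discounting a stream of future costs to today. A minimal sketch with hypothetical cost and discount-rate inputs (the study's actual cost model is not reproduced):

```python
# Present value of a constant annual cost discounted over a fixed horizon.
def present_value(annual_cost, rate, years):
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical inputs: $100/year for 20 years
pv_undiscounted = present_value(100.0, 0.0, 20)   # no discounting
pv_discounted = present_value(100.0, 0.03, 20)    # 3% discount rate
```

Discounting always shrinks the total relative to the undiscounted sum, which is why the horizon and rate choices matter for the reported range.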

  15. A Solution to Separation and Multicollinearity in Multiple Logistic Regression

    PubMed Central

    Shen, Jianzhao; Gao, Sujuan

    2010-01-01

In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27–38) proposed a penalized likelihood estimator for generalized linear models, and it was shown to reduce bias and the non-existence problems. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves the problem addressed by the other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth’s penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study. PMID:20376286

  16. A Solution to Separation and Multicollinearity in Multiple Logistic Regression.

    PubMed

    Shen, Jianzhao; Gao, Sujuan

    2008-10-01

In dementia screening tests, item selection for shortening an existing screening test can be achieved using multiple logistic regression. However, maximum likelihood estimates for such logistic regression models often experience serious bias or even non-existence because of separation and multicollinearity problems resulting from a large number of highly correlated items. Firth (1993, Biometrika, 80(1), 27-38) proposed a penalized likelihood estimator for generalized linear models, and it was shown to reduce bias and the non-existence problems. Ridge regression has been used in logistic regression to stabilize the estimates in cases of multicollinearity. However, neither approach solves the problem addressed by the other. In this paper, we propose a double penalized maximum likelihood estimator combining Firth's penalized likelihood equation with a ridge parameter. We present a simulation study evaluating the empirical performance of the double penalized likelihood estimator in small to moderate sample sizes. We demonstrate the proposed approach using current screening data from a community-based dementia study.
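The proposed estimator augments the logistic score with Firth's bias-reducing term and a ridge penalty. Below is a minimal univariate sketch (one slope, no intercept); it is not the authors' implementation, but it illustrates how the combined penalized Newton step stays finite on completely separated data, where plain maximum likelihood diverges:

```python
import math

# Univariate double penalized logistic regression: Firth's modified score
# plus a ridge term. The multivariate version replaces the scalars below
# with the Fisher information matrix and hat-matrix leverages.
def double_penalized_beta(x, y, ridge=0.1, iters=200):
    beta = 0.0
    for _ in range(iters):
        p = [1 / (1 + math.exp(-beta * xi)) for xi in x]
        w = [pi * (1 - pi) for pi in p]
        info = sum(wi * xi * xi for wi, xi in zip(w, x))  # Fisher information
        # leverages h_i = w_i x_i^2 / info supply Firth's bias-reducing term
        score = sum(
            xi * (yi - pi + (wi * xi * xi / info) * (0.5 - pi))
            for xi, yi, pi, wi in zip(x, y, p, w)
        ) - ridge * beta  # ridge penalty stabilizes separated/collinear fits
        beta += score / (info + ridge)  # penalized Newton step
    return beta

# Completely separated data: the plain ML estimate does not exist (diverges),
# but the double penalized estimate is finite.
x = [-2.0, -1.0, 1.0, 2.0]
y = [0, 0, 1, 1]
beta_hat = double_penalized_beta(x, y)
```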

  17. A Bayesian Approach to More Stable Estimates of Group-Level Effects in Contextual Studies.

    PubMed

    Zitzmann, Steffen; Lüdtke, Oliver; Robitzsch, Alexander

    2015-01-01

    Multilevel analyses are often used to estimate the effects of group-level constructs. However, when using aggregated individual data (e.g., student ratings) to assess a group-level construct (e.g., classroom climate), the observed group mean might not provide a reliable measure of the unobserved latent group mean. In the present article, we propose a Bayesian approach that can be used to estimate a multilevel latent covariate model, which corrects for the unreliable assessment of the latent group mean when estimating the group-level effect. A simulation study was conducted to evaluate the choice of different priors for the group-level variance of the predictor variable and to compare the Bayesian approach with the maximum likelihood approach implemented in the software Mplus. Results showed that, under problematic conditions (i.e., small number of groups, predictor variable with a small ICC), the Bayesian approach produced more accurate estimates of the group-level effect than the maximum likelihood approach did.

  18. 1977 Nationwide Personal Transportation Study : household vehicle utilization

    DOT National Transportation Integrated Search

    1981-04-01

    This report is part of a series that presents findings from the 1977 Nationwide Personal Transportation Study (NPTS). This report describes patterns of utilization of private vehicles (annual miles driven) in 1977. Utilization is keyed to estimates p...

  19. How much should I eat? Estimation of meal portions in anorexia nervosa.

    PubMed

    Milos, Gabriella; Kuenzli, Cornelia; Soelch, Chantal Martin; Schumacher, Sonja; Moergeli, Hanspeter; Mueller-Pfeiffer, Christoph

    2013-04-01

Pathological concern regarding one's weight and weight gain is a crucial feature of anorexia nervosa. Consequently, anorexia nervosa patients often claim that they are uncertain regarding the amount of food they should eat. The present study investigated whether individuals with anorexia nervosa show an altered estimation of meal portion sizes and whether this estimation is modulated by an intent-to-eat instruction (where patients are asked to imagine having to eat the presented meal), meal type, and meal portion size. Twenty-four women with anorexia nervosa and 27 healthy women estimated, using a visual analogue scale, the size of six different portions of three different meals, with and without intent-to-eat instructions. Subjects with anorexia nervosa estimated the size of small and medium meal portions (but not large meal servings) as being significantly larger than did healthy controls. The overestimation of small meal portions by anorexia nervosa subjects was significantly greater in the intent-to-eat condition than in the general condition. These findings suggest that disturbed perceptions associated with anorexia nervosa not only include interoceptive awareness (i.e., body weight and shape), but also extend to external disorder-related objects such as meal portion size. Specific therapeutic interventions, such as training regarding meal portion evaluation, could address these difficulties. Copyright © 2013. Published by Elsevier Ltd.

  20. Estimation of Filling and Afterload Conditions by Pump Intrinsic Parameters in a Pulsatile Total Artificial Heart.

    PubMed

    Cuenca-Navalon, Elena; Laumen, Marco; Finocchiaro, Thomas; Steinseifer, Ulrich

    2016-07-01

A physiological control algorithm is being developed to ensure an optimal physiological interaction between the ReinHeart total artificial heart (TAH) and the circulatory system. A key factor for that is the long-term, accurate determination of the hemodynamic state of the cardiovascular system. This study presents a method to determine estimation models for predicting hemodynamic parameters (pump chamber filling and afterload) for both the left and right cardiovascular circulations. The estimation models are based on linear regression models that correlate filling and afterload values with pump intrinsic parameters derived from measured values of motor current and piston position. Predictions for filling lie on average within 5% of actual values; predictions for systemic afterload (AoPmean, AoPsys) and mean pulmonary afterload (PAPmean) lie on average within 9% of actual values. Predictions for systolic pulmonary afterload (PAPsys) show an average deviation of 14%. The estimation models show satisfactory prediction and confidence intervals and are thus suitable for estimating hemodynamic parameters. This method and the derived estimation models are a valuable alternative to implanted sensors and an essential step in the development of a physiological control algorithm for a fully implantable TAH. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
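The estimation models described above are linear regressions from pump-intrinsic parameters to hemodynamic quantities. A minimal sketch with synthetic data; the motor-current feature and filling values below are hypothetical placeholders, not ReinHeart measurements:

```python
# Ordinary least-squares fit of a line y = slope * x + intercept.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

current_feature = [1.0, 2.0, 3.0, 4.0]  # hypothetical motor-current feature
filling = [52.0, 64.0, 76.0, 88.0]      # hypothetical chamber filling (%)
slope, intercept = fit_line(current_feature, filling)

def predict(x):
    return slope * x + intercept
```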

  1. Reinforcement learning state estimator.

    PubMed

    Morimoto, Jun; Doya, Kenji

    2007-03-01

In this study, we propose a novel use of reinforcement learning for estimating hidden variables and parameters of nonlinear dynamical systems. A critical issue in hidden-state estimation is that we cannot directly observe estimation errors. However, by defining errors of observable variables as a delayed penalty, we can apply a reinforcement learning framework to state estimation problems. Specifically, we derive a method to construct a nonlinear state estimator by finding an appropriate feedback input gain using the policy gradient method. We tested the proposed method on single-pendulum dynamics and showed that the joint angle variable could be successfully estimated by observing only the angular velocity, and vice versa. In addition, we show that we could acquire a state estimator for the pendulum swing-up task in which a swing-up controller is also acquired by reinforcement learning simultaneously. Furthermore, we demonstrate that it is possible to estimate the dynamics of the pendulum itself while the hidden variables are estimated in the pendulum swing-up task. Application of the proposed method to a two-linked biped model is also presented.
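The underlying estimator structure is a state observer that feeds back the error of the observable variable. In the paper the feedback gain is learned by policy gradient; the sketch below instead fixes the gain by hand (an assumption, not the authors' method) to illustrate reconstructing the pendulum angle from the angular velocity alone:

```python
import math

# Fixed-gain observer for a frictionless pendulum that measures only the
# angular velocity and reconstructs the (hidden) angle. The gain k is chosen
# by hand here; the paper obtains it via the policy gradient method.
def simulate(theta0, theta_hat0, k=2.0, dt=0.01, steps=2000, g_l=9.81):
    theta, omega = theta0, 0.0            # true state
    theta_hat, omega_hat = theta_hat0, 0.0  # estimated state
    for _ in range(steps):
        err = omega - omega_hat  # error of the observable variable
        # true pendulum (forward Euler)
        theta, omega = (theta + omega * dt,
                        omega - g_l * math.sin(theta) * dt)
        # observer: same model plus gain-weighted correction on omega
        theta_hat, omega_hat = (theta_hat + omega_hat * dt,
                                omega_hat - g_l * math.sin(theta_hat) * dt
                                + k * err * dt)
    return abs(theta - theta_hat)  # final angle-estimation error
```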

  2. Precipitation Data Merging over Mountainous Areas Using Satellite Estimates and Sparse Gauge Observations (PDMMA-USESGO) for Hydrological Modeling — A Case Study over the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Yang, Z.; Hsu, K. L.; Sorooshian, S.; Xu, X.

    2017-12-01

Precipitation in mountain regions generally occurs with high frequency and intensity, yet it is poorly captured by sparsely distributed rain gauges, posing a great challenge for water management. Satellite-based Precipitation Estimation (SPE) provides global high-resolution alternative data for hydro-climatic studies, but is subject to considerable biases. In this study, a model named PDMMA-USESGO for Precipitation Data Merging over Mountainous Areas Using Satellite Estimates and Sparse Gauge Observations is developed to support precipitation mapping and hydrological modeling in mountainous catchments. The PDMMA-USESGO framework includes two calculating steps—adjusting SPE biases and merging satellite-gauge estimates—using the quantile mapping approach, a two-dimensional Gaussian weighting scheme (considering the elevation effect), and an inverse root mean square error weighting method. The model is applied and evaluated over the Tibetan Plateau (TP) with the PERSIANN-CCS precipitation retrievals (daily, 0.04°×0.04°) and sparse observations from 89 gauges, for the 11-year period 2003-2013. To assess the effects of data merging on streamflow modeling, a hydrological evaluation is conducted over a watershed in southeast TP based on the Soil and Water Assessment Tool (SWAT). Evaluation results indicate the effectiveness of the model in generating high-resolution, high-accuracy precipitation estimates over mountainous terrain, with the merged estimates (Mer-SG) presenting consistently improved correlation coefficients, root mean square errors, and absolute mean biases relative to the original satellite estimates (Ori-CCS). The Mer-SG-forced streamflow simulations exhibit great improvements over those using Ori-CCS, with the coefficient of determination (R2) and Nash-Sutcliffe efficiency reaching 0.8 and 0.65, respectively.
The presented model and case study serve as valuable references for the hydro-climatic applications using remote sensing-gauge information in other mountain areas of the world.
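The bias-adjustment step named above, quantile mapping, replaces each satellite value with the gauge value at the same empirical quantile. A minimal sketch with toy samples (not PERSIANN-CCS or Tibetan Plateau gauge data):

```python
from bisect import bisect_left

# Empirical quantile mapping: find the quantile of `value` in the satellite
# climatology, then read the gauge quantile function at that probability.
def quantile_map(value, satellite_sample, gauge_sample):
    sat = sorted(satellite_sample)
    gau = sorted(gauge_sample)
    q = bisect_left(sat, value) / (len(sat) - 1)  # empirical CDF position
    idx = min(int(round(q * (len(gau) - 1))), len(gau) - 1)  # cap at the top quantile
    return gau[idx]

sat = [float(i) for i in range(10)]       # toy satellite sample
gau = [2.0 * i for i in range(10)]        # toy gauge sample (satellite underestimates by half)
corrected = quantile_map(5.0, sat, gau)
```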

  3. Energy recovery from organic fractions of municipal solid waste: A case study of Hyderabad city, Pakistan.

    PubMed

    Safar, Korai M; Bux, Mahar R; Aslam, Uqaili M; Ahmed, Memon S; Ahmed, Lashari I

    2016-04-01

Non-renewable energy sources have remained the choice of the world for centuries. Rapid growth in population and industrialisation has caused their shortage and environmental degradation from their use. Thus, at the present rate of consumption, they will not last very long. It is in this perspective that this study has been conducted. The estimation of energy in terms of biogas and heat from various organic fractions of municipal solid waste is presented and discussed. The results show that organic fractions of municipal solid waste possess methane potential in the range of 3%-22% and their heat capacity ranges from 3007 to 20,099 kJ kg(-1). Also, the theoretical biogas potential of different individual fruit and vegetable components and mixed food waste is analysed and estimated in the range of 608-1244 m(3) t(-1). Further, the share of bioenergy from municipal solid waste in the total primary energy supply in Pakistan has been estimated to be 1.82%. About 8.43% of the present energy demand of the country could be met from municipal solid waste. The study leads us to the conclusion that a reduction in the share of imported energy (i.e. 0.1% of total energy supply) and in the amount of energy from fossil fuels can be achieved by adopting a waste-to-energy system in the country. © The Author(s) 2016.

  4. Epidemiology of chronic obstructive pulmonary disease in the global HIV-infected population: a systematic review and meta-analysis protocol.

    PubMed

    Bigna, Jean Joel R; Kenne, Angeladine Malaha; Asangbeh, Serra Lem

    2017-03-29

Evidence suggests a relationship between human immunodeficiency virus (HIV) infection and chronic obstructive pulmonary disease (COPD). Although the high burden of both COPD and HIV disease is clearly demonstrated, to the best of our knowledge there is a lack of summary and meta-analysis data on the epidemiology of COPD in the global HIV-infected population to date. The present protocol for a systematic review and meta-analysis intends to summarize existing data on the prevalence, incidence, and risk factors of COPD in the global HIV-infected population. The review will include cohort, cross-sectional, and case-control studies conducted among HIV-infected people which report the prevalence, incidence, and factors associated with COPD, or enough data for their estimation. We will consider published and unpublished studies in the English and French languages, regardless of geographical location. Relevant records will be searched using PubMed/Medline and Scopus from inception to December 31st, 2016. Reference lists of eligible papers and relevant review articles will be screened. Two investigators will independently screen and select studies and extract data, with discrepancies resolved by consensus or arbitrated by a third investigator. Risk of bias and methodological quality of the included studies will be assessed using the Newcastle-Ottawa Scale. Funnel plots and Egger's test will be used to detect publication bias. The study-specific estimates will be pooled through a random-effects meta-analysis model to obtain an overall summary estimate. To minimize the effect of studies with extremely small or extremely large estimates on the overall estimate, the variance of the study-specific prevalence/incidence will be stabilized with the Freeman-Tukey single arc-sine transformation. Heterogeneity will be evaluated by the χ2 test on Cochran's Q statistic. Results will be presented by geographic region and by antiretroviral therapy status.
We plan to summarize data on factors associated with COPD in narrative format. This systematic review and meta-analysis will give an overview of the epidemiology of COPD in the global HIV population to inform policy-makers and to provide accurate data that can underpin effective interventions for optimizing its detection and management. PROSPERO CRD42016052639.
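The variance-stabilizing step named in the protocol transforms each study's prevalence before pooling. A minimal sketch of the widely used Freeman-Tukey double arcsine form for x events out of n participants (the protocol's exact variant is assumed to be of this family):

```python
import math

# Freeman-Tukey double arcsine transformation of a study prevalence x/n,
# used to stabilize the variance before random-effects pooling.
def freeman_tukey(x, n):
    return (math.asin(math.sqrt(x / (n + 1)))
            + math.asin(math.sqrt((x + 1) / (n + 1))))
```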

  5. Research protocol for an epidemiological study on estimating disease burden of pediatric HIV in Belgaum district, India.

    PubMed

    Sinha, Anju; Nath, Anita; Sethumadhavan, Rajeev; Isac, Shajy; Washington, Reynold

    2016-05-26

Pediatric HIV is poised to become a major public health problem in India with the rising trend of HIV infection in pregnant women (Department of AIDS Control, Ministry of Health and Family Welfare, http://www.naco.gov.in). There is a lack of information on the epidemiology of pediatric HIV infection in India, and existing surveillance systems tend to underestimate the pediatric burden. The overall aim of the present study is to estimate the disease burden of pediatric HIV among children in Belgaum district in the state of Karnataka in Southern India. An innovative multipronged epidemiological approach to comb the district is proposed. The primary objectives of the study would be attained under three strategies: a prospective cohort design for objective (i), to determine the incidence rate of HIV by early case detection in infants and toddlers (0-18 months) born to HIV-infected pregnant women; and cross-sectional designs for objectives (ii), to determine the prevalence of HIV infection in children (0-14 years) of HIV-infected parents, and (iii), to determine the prevalence of HIV in sick children (0-14 years) presenting with suspected signs and symptoms, using age-specific criteria for screening. The burden of pediatric HIV will be calculated as the product of the cases detected in each strategy multiplied by a net inflation factor for each strategy. Study participants: (i) HIV-infected pregnant women and their live-born children; (ii) any HIV-infected man or woman, aged 18-49 years, having a biological child aged 0-14 years; (iii) sick children aged 0-14 years presenting with suspected signs and symptoms and satisfying age-specific criteria for screening. Setting and conduct: Belgaum district, a Category 'A' district (with more than 1% antenatal prevalence over the last 3 years before the study). Age-appropriate testing is used to detect HIV infection.
There is a need to strengthen existing pediatric HIV estimation methods in India and other developing countries. We hope that the novel methodology emanating from this study will be applicable to estimating the burden of HIV in other settings and adaptable to estimating the burden of other infectious and chronic diseases. Findings from this study will give future direction to the national program for prevention and control of HIV in India and other developing countries.
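    The burden calculation described above (cases detected under each strategy scaled up by a strategy-specific net inflation factor, then summed) can be sketched as follows. The function name and all numbers are illustrative, not values from the study protocol.

```python
# Hypothetical sketch of the burden calculation described in the abstract:
# total burden = sum over strategies of (cases detected x net inflation factor).
# The three tuples stand for the cohort, family-based, and sick-child
# strategies; every figure is made up for illustration.

def pediatric_hiv_burden(strategy_results):
    """strategy_results: iterable of (cases_detected, net_inflation_factor)."""
    return sum(cases * factor for cases, factor in strategy_results)

example = [(40, 2.5), (120, 1.8), (75, 3.0)]
print(pediatric_hiv_burden(example))  # 40*2.5 + 120*1.8 + 75*3.0 = 541.0
```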

  6. Air Quality Modeling of Traffic-related Air Pollutants for the NEXUS Study

    EPA Science Inventory

    The paper presents the results of the model applications to estimate exposure metrics in support of an epidemiologic study in Detroit, Michigan. A major challenge in traffic-related air pollution exposure studies is the lack of information regarding pollutant exposure characteriz...

  7. Estimates of the volume of water in five coal aquifers, Northern Cheyenne Indian Reservation, southeastern Montana

    USGS Publications Warehouse

    Tuck, L.K.; Pearson, Daniel K.; Cannon, M.R.; Dutton, DeAnn M.

    2013-01-01

    The Tongue River Member of the Tertiary Fort Union Formation is the primary source of groundwater in the Northern Cheyenne Indian Reservation in southeastern Montana. Coal beds within this formation generally contain the most laterally extensive aquifers in much of the reservation. The U.S. Geological Survey, in cooperation with the Northern Cheyenne Tribe, conducted a study to estimate the volume of water in five coal aquifers. This report presents estimates of the volume of water in five coal aquifers in the eastern and southern parts of the Northern Cheyenne Indian Reservation: the Canyon, Wall, Pawnee, Knobloch, and Flowers-Goodale coal beds in the Tongue River Member of the Tertiary Fort Union Formation. Only conservative estimates of the volume of water in these coal aquifers are presented. The volume of water in the Canyon coal was estimated to range from about 10,400 acre-feet (75 percent saturated) to 3,450 acre-feet (25 percent saturated). The volume of water in the Wall coal was estimated to range from about 14,200 acre-feet (100 percent saturated) to 3,560 acre-feet (25 percent saturated). The volume of water in the Pawnee coal was estimated to range from about 9,440 acre-feet (100 percent saturated) to 2,360 acre-feet (25 percent saturated). The volume of water in the Knobloch coal was estimated to range from about 38,700 acre-feet (100 percent saturated) to 9,680 acre-feet (25 percent saturated). The volume of water in the Flowers-Goodale coal was estimated to be about 35,800 acre-feet (100 percent saturated). Sufficient data are needed to accurately characterize coal-bed horizontal and vertical variability, which is highly complex both locally and regionally. Where data points are widely spaced, the reliability of estimates of the volume of coal beds is decreased. Additionally, reliable estimates of the volume of water in coal aquifers depend heavily on data about water levels and data about coal-aquifer characteristics. 
Because the data needed to define the volume of water were sparse, only conservative estimates of the volume of water in the five coal aquifers are presented in this report. These estimates need to be used with caution and mindfulness of the uncertainty associated with them.
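    The report does not spell out its formula in this abstract, but a water-in-storage estimate of this kind is commonly the product of coal volume, saturated fraction, and a specific yield. The sketch below is an assumption for illustration only; the function name and every parameter value are hypothetical, not taken from the USGS report.

```python
# Hypothetical sketch: water stored in a coal aquifer approximated as
# (coal volume) x (saturated fraction) x (specific yield).
# All values are illustrative, not from the USGS report.

def water_in_storage_acre_ft(coal_volume_acre_ft, saturated_fraction, specific_yield):
    """Drainable water volume, in acre-feet, under the stated assumptions."""
    return coal_volume_acre_ft * saturated_fraction * specific_yield

# e.g., 1,000,000 acre-ft of coal, fully saturated, specific yield 0.01:
print(water_in_storage_acre_ft(1_000_000, 1.0, 0.01))  # 10000.0
```

Lowering the saturated fraction to 0.25 scales the estimate down proportionally, which mirrors the 100-percent versus 25-percent bounds quoted in the abstract.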

  8. The Ability of Atmospheric Data to Reduce Disagreements in Wetland Methane Flux Estimates over North America

    NASA Astrophysics Data System (ADS)

    Miller, S. M.; Andrews, A. E.; Benmergui, J. S.; Commane, R.; Dlugokencky, E. J.; Janssens-Maenhout, G.; Melton, J. R.; Michalak, A. M.; Sweeney, C.; Worthy, D. E. J.

    2015-12-01

    Existing estimates of methane fluxes from wetlands differ in both magnitude and distribution across North America. We discuss seven different bottom-up methane estimates in the context of atmospheric methane data collected across the US and Canada. In the first component of this study, we explore whether the observation network can even detect a methane pattern from wetlands. We find that the observation network can identify a methane pattern from Canadian wetlands but not reliably from US wetlands. Over Canada, the network can even identify spatial patterns at multi-province scales. Over the US, by contrast, anthropogenic emissions and modeling errors obscure atmospheric patterns from wetland fluxes. In the second component of the study, we then use these observations to reconcile disagreements in the magnitude, seasonal cycle, and spatial distribution of existing estimates. Most existing estimates predict fluxes that are too large with a seasonal cycle that is too narrow. A model known as LPJ-Bern has a spatial distribution most consistent with atmospheric observations. By contrast, a spatially-constant model outperforms the distribution of most existing flux estimates across Canada. The results presented here provide several pathways to reduce disagreements among existing wetland flux estimates across North America.

  9. Estimation of premorbid general fluid intelligence using traditional Chinese reading performance in Taiwanese samples.

    PubMed

    Chen, Ying-Jen; Ho, Meng-Yang; Chen, Kwan-Ju; Hsu, Chia-Fen; Ryu, Shan-Jin

    2009-08-01

    The aims of the present study were to (i) investigate whether traditional Chinese word reading ability can be used for estimating premorbid general intelligence; and (ii) provide multiple regression equations for estimating premorbid performance on Raven's Standard Progressive Matrices (RSPM), using age, years of education and Chinese Graded Word Reading Test (CGWRT) scores as predictor variables. Four hundred and twenty-six healthy volunteers (201 male, 225 female), aged 16-93 years (mean +/- SD, 41.92 +/- 18.19 years) undertook the tests individually under supervised conditions. Seventy percent of subjects were randomly allocated to the derivation group (n = 296), and the rest to the validation group (n = 130). RSPM score was positively correlated with CGWRT score and years of education. RSPM and CGWRT scores and years of education were also inversely correlated with age, but the declining trend for RSPM performance against age was steeper than that for CGWRT performance. Separate multiple regression equations were derived for estimating RSPM scores using different combinations of age, years of education, and CGWRT score for both groups. The multiple regression coefficient of each equation ranged from 0.71 to 0.80 with the standard error of estimate between 7 and 8 RSPM points. When fitting the data of one group to the equations derived from its counterpart group, the cross-validation multiple regression coefficients ranged from 0.71 to 0.79. There were no significant differences in the 'predicted-obtained' RSPM discrepancies between any equations. The regression equations derived in the present study may provide a basis for estimating premorbid RSPM performance.
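    Applying a prediction equation of the form derived in the study looks like the sketch below. The coefficients here are invented placeholders to show the shape of the model, not the published values.

```python
# Hypothetical sketch of a premorbid-RSPM prediction equation of the form
# RSPM = b0 + b1*age + b2*years_of_education + b3*CGWRT.
# The default coefficients are placeholders, NOT the published estimates.

def predict_rspm(age, years_edu, cgwrt, coef=(20.0, -0.15, 0.6, 0.5)):
    """Linear combination of the three predictors used in the study."""
    b0, b_age, b_edu, b_cgwrt = coef
    return b0 + b_age * age + b_edu * years_edu + b_cgwrt * cgwrt

print(round(predict_rspm(40, 12, 50), 1))  # 46.2
```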

  10. Age estimation standards for a Western Australian population using the coronal pulp cavity index.

    PubMed

    Karkhanis, Shalmira; Mack, Peter; Franklin, Daniel

    2013-09-10

    Age estimation is a vital aspect in creating a biological profile and aids investigators by narrowing down potentially matching identities from the available pool. In addition to routine casework, in the present global political scenario, age estimation in living individuals is required in cases of refugees, asylum seekers, human trafficking and to ascertain age of criminal responsibility. Thus, robust methods that are simple, non-invasive and ethically viable are required. The aim of the present study is, therefore, to test the reliability and applicability of the coronal pulp cavity index method, for the purpose of developing age estimation standards for an adult Western Australian population. A total of 450 orthopantomograms (220 females and 230 males) of Australian individuals were analyzed. Crown and coronal pulp chamber heights were measured in the mandibular left and right premolars, and the first and second molars. These measurements were then used to calculate the tooth coronal index. Data were analyzed using paired sample t-tests to assess bilateral asymmetry followed by simple linear and multiple regressions to develop age estimation models. The most accurate age estimation based on a simple linear regression model was with the mandibular right first molar (SEE ±8.271 years). Multiple regression models improved age prediction accuracy considerably and the most accurate model was with bilateral first and second molars (SEE ±6.692 years). This study represents the first investigation of this method in a Western Australian population and our results indicate that the method is suitable for forensic application. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
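    The tooth coronal index (TCI) used in the study is conventionally computed as coronal pulp chamber height divided by crown height, times 100, and age is then predicted by regressing on TCI. The sketch below uses that conventional formula; the regression coefficients are hypothetical placeholders, not the standards derived in the study.

```python
# TCI = coronal pulp chamber height * 100 / crown height (conventional form).
# The age-regression coefficients below are hypothetical placeholders.

def tooth_coronal_index(pulp_chamber_height_mm, crown_height_mm):
    """Tooth coronal index from two radiographic measurements."""
    return 100.0 * pulp_chamber_height_mm / crown_height_mm

def estimate_age_from_tci(tci, intercept=70.0, slope=-1.2):
    """Hypothetical linear model: TCI falls with age as secondary
    dentine deposition narrows the pulp chamber, so the slope is negative."""
    return intercept + slope * tci

tci = tooth_coronal_index(2.5, 7.0)
print(round(tci, 1))  # 35.7
print(round(estimate_age_from_tci(tci), 1))
```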

  11. Estimating the prevalence of infertility in Canada

    PubMed Central

    Bushnik, Tracey; Cook, Jocelynn L.; Yuzpe, A. Albert; Tough, Suzanne; Collins, John

    2012-01-01

    BACKGROUND Over the past 10 years, there has been a significant increase in the use of assisted reproductive technologies in Canada, however, little is known about the overall prevalence of infertility in the population. The purpose of the present study was to estimate the prevalence of current infertility in Canada according to three definitions of the risk of conception. METHODS Data from the infertility component of the 2009–2010 Canadian Community Health Survey were analyzed for married and common-law couples with a female partner aged 18–44. The three definitions of the risk of conception were derived sequentially starting with birth control use in the previous 12 months, adding reported sexual intercourse in the previous 12 months, then pregnancy intent. Prevalence and odds ratios of current infertility were estimated by selected characteristics. RESULTS Estimates of the prevalence of current infertility ranged from 11.5% (95% CI 10.2, 12.9) to 15.7% (95% CI 14.2, 17.4). Each estimate represented an increase in current infertility prevalence in Canada when compared with previous national estimates. Couples with lower parity (0 or 1 child) had significantly higher odds of experiencing current infertility when the female partner was aged 35–44 years versus 18–34 years. Lower odds of experiencing current infertility were observed for multiparous couples regardless of age group of the female partner, when compared with nulliparous couples. CONCLUSIONS The present study suggests that the prevalence of current infertility has increased since the last time it was measured in Canada, and is associated with the age of the female partner and parity. PMID:22258658
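    The prevalence estimates and 95% confidence intervals quoted above come from a complex weighted survey; as a simplified sketch only, a simple-random-sample Wald interval for a prevalence looks like this (the counts are illustrative, not the survey's):

```python
import math

# Sketch under a simple-random-sample assumption (the actual survey used
# complex weighting, which this deliberately ignores). Counts are illustrative.

def prevalence_with_ci(cases, n, z=1.96):
    """Point prevalence and Wald 95% CI, clipped to [0, 1]."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

p, lo, hi = prevalence_with_ci(230, 2000)
print(round(p, 3), round(lo, 3), round(hi, 3))
```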

  12. Estimating the seasonal carbon source-sink geography of a natural, steady-state terrestrial biosphere

    NASA Technical Reports Server (NTRS)

    Box, Elgene O.

    1988-01-01

    The estimation of the seasonal dynamics of biospheric-carbon sources and sinks to be used as an input to global atmospheric CO2 studies and models is discussed. An ecological biosphere model is given and the advantages of the model are examined. Monthly maps of estimated biospheric carbon source and sink regions and estimates of total carbon fluxes are presented for an equilibrium terrestrial biosphere. The results are compared with those from other models. It is suggested that, despite maximum variations of atmospheric CO2 in boreal latitudes, the enormous contributions of tropical wet-dry regions to global atmospheric CO2 seasonality cannot be ignored.

  13. An empirical approach for estimating stress-coupling lengths for marine-terminating glaciers

    USGS Publications Warehouse

    Enderlin, Ellyn; Hamilton, Gordon S.; O'Neel, Shad; Bartholomaus, Timothy C.; Morlighem, Mathieu; Holt, John W.

    2016-01-01

    Here we present a new empirical method to estimate the stress-coupling length (SCL) for marine-terminating glaciers using high-resolution observations. We use the empirically-determined periodicity in resistive stress oscillations as a proxy for the SCL. Application of our empirical method to two well-studied tidewater glaciers (Helheim Glacier, SE Greenland, and Columbia Glacier, Alaska, USA) demonstrates that SCL estimates obtained using this approach are consistent with theory (i.e., can be parameterized as a function of the ice thickness) and with prior, independent SCL estimates. In order to accurately resolve stress variations, we suggest that similar empirical stress-coupling parameterizations be employed in future analyses of glacier dynamics.

  14. Estimation of Supercapacitor Energy Storage Based on Fractional Differential Equations.

    PubMed

    Kopka, Ryszard

    2017-12-22

    In this paper, new results on using only voltage measurements on supercapacitor terminals for estimation of accumulated energy are presented. For this purpose, a study based on the application of fractional-order models of supercapacitor charging/discharging circuits is undertaken. Parameter estimates of the models are then used to assess the amount of energy accumulated in the supercapacitor. The obtained results are compared with energy determined experimentally by measuring voltage and current on the supercapacitor terminals. All the tests are repeated for various input signal shapes and parameters. The very high consistency between estimated and experimental results fully confirms the suitability of the proposed approach and thus the applicability of fractional calculus to modelling of supercapacitor energy storage.
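    The paper's estimates come from fractional-order model parameters, which are beyond a short sketch; as the classical baseline such estimates are compared against, the ideal (integer-order) capacitor energy from a terminal-voltage measurement is E = ½CV². The capacitance and voltage below are illustrative values, not from the paper.

```python
# Classical baseline only: ideal-capacitor stored energy E = 0.5 * C * V**2.
# This is NOT the paper's fractional-order estimator; values are illustrative.

def ideal_capacitor_energy_j(capacitance_f, voltage_v):
    """Stored energy in joules for an ideal capacitor."""
    return 0.5 * capacitance_f * voltage_v ** 2

# A 100 F supercapacitor charged to 2.7 V:
print(round(ideal_capacitor_energy_j(100.0, 2.7), 1))  # 364.5
```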

  15. Area estimation of crops by digital analysis of Landsat data

    NASA Technical Reports Server (NTRS)

    Bauer, M. E.; Hixson, M. M.; Davis, B. J.

    1978-01-01

    The study for which the results are presented had these objectives: (1) to use Landsat data and computer-implemented pattern recognition to classify the major crops from regions encompassing different climates, soils, and crops; (2) to estimate crop areas for counties and states by using crop identification data obtained from the Landsat identifications; and (3) to evaluate the accuracy, precision, and timeliness of crop area estimates obtained from Landsat data. The paper describes the method of developing the training statistics and evaluating the classification accuracy. Landsat MSS data were adequate to accurately identify wheat in Kansas; corn and soybean estimates for Indiana were less accurate. Systematic sampling of entire counties made possible by computer classification methods resulted in very precise area estimates at county, district, and state levels.

  16. A tool to estimate the Fermi Large Area Telescope background for short-duration observations

    DOE PAGES

    Vasileiou, Vlasios

    2013-07-25

    Here, the proper estimation of the background is a crucial component of data analyses in astrophysics, such as source detection, temporal studies, spectroscopy, and localization. For the case of the Large Area Telescope (LAT) on board the Fermi spacecraft, approaches to estimate the background for short (≲1000 s duration) observations fail if they ignore the strong dependence of the LAT background on the continuously changing observational conditions. We present a (to be) publicly available background-estimation tool created and used by the LAT Collaboration in several analyses of Gamma Ray Bursts. This tool can accurately estimate the expected LAT background for any observational conditions, including, for example, observations with rapid variations of the Fermi spacecraft's orientation occurring during automatic repointings.

  17. Shallow Remineralization in the Sargasso Sea Estimated from Seasonal Variations in Oxygen and Dissolved Inorganic Carbon

    NASA Technical Reports Server (NTRS)

    Ono, S.; Ennyu, A.; Najjar, R. G.; Bates, N.

    1998-01-01

    A diagnostic model of the mean annual cycles of dissolved inorganic carbon (DIC) and oxygen below the mixed layer at the Bermuda Atlantic Time-series Study (BATS) site is presented and used to estimate organic carbon remineralization in the seasonal thermocline. The model includes lateral and vertical advection as well as vertical diffusion. Very good agreement is found for the remineralization estimates based on oxygen and DIC. Net remineralization averaged from mid-spring to early fall is found to be a maximum between 120 and 140 m. Remineralization integrated between 100 m (the compensation depth) and 250 m during this period is estimated to be about 1 mol C/sq m. This flux is consistent with independent estimates of the loss of particulate and dissolved organic carbon.

  18. A critique of the exposure assessment in the epidemiologic study of benzene-exposed workers in China conducted by the Chinese Academy of Preventive Medicine and the US National Cancer Institute.

    PubMed

    Wong, O

    1999-12-01

    As reviewed in some detail in the present paper, workers employed in a wide variety of industries were included in the Chinese benzene study, and were exposed to not only benzene but also a wide range of other industrial chemicals. To attribute any or all health effects observed in the exposed cohort to benzene without examining other concomitant exposures is not appropriate. Although it was stated that one of the major objectives of the expanded study was to examine the effects of other risk factors, no such examination was made in any of the analyses in the expanded CAPM-NCI study. The CAPM-NCI study suffered from a number of limitations. One of the most serious limitations of the study involved the exposure estimates developed by the US NCI team. Comparing the assumptions used in the development of estimates and the exposure estimates themselves to actual data reported previously by the Chinese investigators revealed numerous inconsistencies and, in many cases, large discrepancies. It appeared that the exposure estimates were consistently lower than the actual exposure data. The so-called indirect validation conducted by the NCI team served no useful purpose, since by definition it could not validate the absolute values of the estimates. NCI was fully aware of some of the inadequacies of its exposure estimates. Although in a 1994 paper the NCI team recognized that little confidence could be attached to the estimates (e.g., only 2% of the estimates for the time interval 1949-1959 and only 6% of the estimates prior to 1975 were rated in the high confidence category), the inadequacy of the estimates was never mentioned or discussed in any subsequent analyses or in the latest report (Hayes et al., 1998). Instead, the exposure of the workers was hailed as "well characterized" (Hayes et al., 1998). In conclusion, both CAPM and NCI have made substantial efforts in studying the relationship between benzene exposure and various malignancies. 
Unfortunately, there were many inherent problems in the data as well as serious limitations in the exposure estimates. Because of these unresolved problems and limitations, many of the results in the CAPM-NCI study are unreliable. Therefore, the conclusions of the study, particularly those involving exposure estimates, are not justified.

  19. DOD/NASA system impact analysis (study 2.1). Volume 2: Study results

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Results of the tug turnaround cost study and the space transportation system (STS) abort modes and effects study are presented for the DOD/NASA system impact analysis. Cost estimates are given for tug turnaround; and vehicle description, abort assessment, and abort performance capability are given for the STS.

  20. Performance Study of Earth Networks Total Lightning Network using Rocket-Triggered Lightning Data in 2014

    NASA Astrophysics Data System (ADS)

    Heckman, S.

    2015-12-01

    Modern lightning locating systems (LLS) provide real-time monitoring and early warning of lightning activities. In addition, LLS provide valuable data for statistical analysis in lightning research. It is important to know the performance of such LLS. In the present study, the performance of the Earth Networks Total Lightning Network (ENTLN) is studied using rocket-triggered lightning data acquired at the International Center for Lightning Research and Testing (ICLRT), Camp Blanding, Florida. In the present study, 18 flashes triggered at ICLRT in 2014 were analyzed; they comprise 78 negative cloud-to-ground return strokes. The geometric mean, median, minimum, and maximum of the peak currents of the 78 return strokes are 13.4 kA, 13.6 kA, 3.7 kA, and 38.4 kA, respectively. The peak currents represent typical subsequent return strokes in natural cloud-to-ground lightning. Earth Networks has developed a new data processor to improve the performance of their network. In this study, results are presented for the ENTLN data using the old processor (originally reported in 2014) and the ENTLN data simulated using the new processor. The flash detection efficiency, stroke detection efficiency, percentage of misclassification, median location error, median peak current estimation error, and median absolute peak current estimation error for the originally reported data from the old processor are 100%, 94%, 49%, 271 m, 5%, and 13%, respectively, and those for the simulated data using the new processor are 100%, 99%, 9%, 280 m, 11%, and 15%, respectively. The use of the new processor resulted in higher stroke detection efficiency and a lower percentage of misclassification. It is worth noting that the slight differences in median location error, median peak current estimation error, and median absolute peak current estimation error for the two processors are due to the fact that the new processor detected more return strokes than the old processor.
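    The detection-efficiency metrics quoted above are simple ratios against the rocket-triggered ground-truth events; a sketch of how such metrics are computed (with illustrative counts, not the study's) is:

```python
# Illustrative sketch of LLS performance metrics against ground truth.
# Counts are made up; these are not the ENTLN algorithms themselves.

def detection_efficiency(detected, total):
    """Fraction of ground-truth events the network detected."""
    return detected / total

def misclassification_pct(misclassified, detected):
    """Percentage of detected events assigned the wrong type."""
    return 100.0 * misclassified / detected

# e.g., 77 of 78 ground-truth return strokes detected:
print(round(100 * detection_efficiency(77, 78), 1))  # 98.7
```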

  1. Reliability of Nationwide Prevalence Estimates of Dementia: A Critical Appraisal Based on Brazilian Surveys

    PubMed Central

    2015-01-01

    Background The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries’ populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia surveys, ascertained their risk of bias, and present the best estimate of occurrence of dementia in Brazil. Methods and Findings We carried out an electronic search of PubMed, Latin-American databases, and a Brazilian thesis database for surveys focusing on dementia prevalence in Brazil. The systematic review was registered at PROSPERO (CRD42014008815). Among the 35 studies found, 15 analyzed population-based random samples. However, most of them utilized inadequate criteria for diagnostics. Six studies without these limitations were further analyzed to assess the risk of selection, attrition, outcome and population bias as well as several statistical issues. All the studies presented moderate or high risk of bias in at least two domains due to the following features: high non-response, inaccurate cut-offs, and doubtful accuracy of the examiners. Two studies had limited external validity due to high rates of illiteracy or low income. The three studies with adequate generalizability and the lowest risk of bias presented a prevalence of dementia between 7.1% and 8.3% among subjects aged 65 years and older. However, after adjustment for accuracy of screening, the best available evidence points towards a figure between 15.2% and 16.3%. Conclusions The risk of bias may strongly limit the generalizability of dementia prevalence estimates in developing countries. Extrapolations that have already been made for Brazil and Latin America were based on a prevalence that should have been adjusted for screening accuracy or not used at all due to severe bias. Similar evaluations regarding other developing countries are needed in order to verify the scope of these limitations. PMID:26131563

  2. [Required manpower for health care and nursing services for the aged at home: public health nurses, visiting nurses, dental hygienists, dietitians, physical therapists, and occupational therapists].

    PubMed

    Ojima, T; Saito, E; Kanagawa, K; Sakata, K; Yanagawa, H

    1997-04-01

    The purpose of this study was to estimate the manpower required for health care and nursing services for the aged at home. For prefectural health care and welfare planning for the aged, data such as the proportion of the aged who need help, service demand, and required frequency of services were obtained. The means and the means +/- 2 standard deviations were calculated to obtain the estimation levels; the calculated figures are those which can be attained with some effort. The results are as follows (middle level estimation (low level estimation-high level estimation)): requirements per 10,000 population are 1.9 (0.61-5.7) public health nurses, 2.6 (0.63-14) visiting nurses, 0.20 (0.084-0.42) dental hygienists, 0.35 (0.17-0.66) dietitians, and 0.25 (0.014-1.27) physical and occupational therapists. For the national total, requirements are 23 (7.3-67) thousand public health nurses, 31 (7.5-160) thousand visiting nurses, 2.4 (1.0-5.0) thousand dental hygienists, 3.9 (2.0-7.8) thousand dietitians, and 3.0 (0.17-15) thousand physical and occupational therapists. By population size, for example, municipalities with 10-30 thousand people require 4.2 (1.7-11) public health nurses, 5.3 (1.3-27) visiting nurses, 0.4 (0.2-0.8) dental hygienists, 0.5 (0.3-0.9) dietitians, and 0.5 (0.0-2.5) physical and occupational therapists. Comparison of the present numbers with the estimated manpower needs shows that the present number of public health personnel is almost the same as the low level estimate, but the present numbers for the other categories are lower than the low level estimates. Considering other services such as maternal and child health, municipalities with more than 10 thousand population should employ full-time dietitians and dental hygienists. For policy making in a municipality, the policies of other municipalities should also be considered. 
Because they are based on means across municipalities, the results of this study should be applicable to other municipalities.
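    The low/middle/high estimation levels described above (the mean, and the mean minus or plus 2 standard deviations across municipalities) can be sketched as follows; the per-10,000-population figures fed in are illustrative, not the study's data.

```python
from statistics import mean, stdev

# Sketch of the study's low/middle/high levels: mean -/+ 2 sample SDs,
# floored at zero since negative staffing is meaningless.
# The input figures are illustrative, not from the survey.

def estimation_levels(per_capita_values):
    """Return (low, middle, high) staffing levels across municipalities."""
    m = mean(per_capita_values)
    s = stdev(per_capita_values)
    return max(0.0, m - 2 * s), m, m + 2 * s

# Illustrative per-10,000-population requirements from five municipalities:
low, mid, high = estimation_levels([1.2, 1.8, 2.1, 1.5, 2.9])
print(round(low, 2), round(mid, 2), round(high, 2))
```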

  3. Reducing random measurement error in assessing postural load on the back in epidemiologic surveys.

    PubMed

    Burdorf, A

    1995-02-01

    The goal of this study was to design strategies to assess postural load on the back in occupational epidemiology by taking into account the reliability of measurement methods and the variability of exposure among the workers under study. Intermethod reliability studies were evaluated to estimate the systematic bias (accuracy) and random measurement error (precision) of various methods to assess postural load on the back. Intramethod reliability studies were reviewed to estimate random variability of back load over time. Intermethod surveys have shown that questionnaires have a moderate reliability for gross activities such as sitting, whereas duration of trunk flexion and rotation should be assessed by observation methods or inclinometers. Intramethod surveys indicate that exposure variability can markedly affect the reliability of estimates of back load if the estimates are based upon a single measurement over a certain time period. Equations have been presented to evaluate various study designs according to the reliability of the measurement method, the optimum allocation of the number of repeated measurements per subject, and the number of subjects in the study. Prior to a large epidemiologic study, an exposure-oriented survey should be conducted to evaluate the performance of measurement instruments and to estimate sources of variability for back load. The strategy for assessing back load can be optimized by balancing the number of workers under study and the number of repeated measurements per worker.
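    The trade-off that the paper's equations formalize, the reliability of a worker's mean exposure as a function of between-worker variance, within-worker variance, and the number of repeated measurements, has the classic intraclass-correlation form. The sketch below is a generic version of that form, not the paper's exact notation.

```python
# Generic ICC-style reliability of a worker's mean of k repeated measurements:
# between-worker variance over total variance of the mean. Not the paper's
# exact equations; shown only to make the repeats-vs-reliability trade-off concrete.

def exposure_reliability(var_between, var_within, k_repeats):
    """Reliability rises as repeats shrink the within-worker error term."""
    return var_between / (var_between + var_within / k_repeats)

print(exposure_reliability(2.0, 2.0, 1))  # 0.5
print(exposure_reliability(2.0, 2.0, 4))  # 0.8
```

This is why the paper recommends a pilot, exposure-oriented survey: the variance components decide how to split a fixed budget between more workers and more repeats per worker.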

  4. Massive superclusters as a probe of the nature and amplitude of primordial density fluctuations

    NASA Technical Reports Server (NTRS)

    Kaiser, N.; Davis, M.

    1985-01-01

    It is pointed out that correlation studies of galaxy positions have been widely used in the search for information about the large-scale matter distribution. The study of rare condensations on large scales provides an approach to extend the existing knowledge of large-scale structure into the weakly clustered regime. Shane (1975) provides a description of several apparent massive condensations within the Shane-Wirtanen catalog, taking into account the Serpens-Virgo cloud and the Corona cloud. In the present study, a description is given of a model for estimating the frequency of condensations which evolve from initially Gaussian fluctuations. This model is applied to the Corona cloud to estimate its 'rareness' and thereby estimate the rms density contrast on this mass scale. An attempt is made to find a conflict between the density fluctuations derived from the Corona cloud and independent constraints. A comparison is conducted of the estimate and the density fluctuations predicted to arise in a universe dominated by cold dark matter.

  5. Real-Time Stability and Control Derivative Extraction From F-15 Flight Data

    NASA Technical Reports Server (NTRS)

    Smith, Mark S.; Moes, Timothy R.; Morelli, Eugene A.

    2003-01-01

    A real-time, frequency-domain, equation-error parameter identification (PID) technique was used to estimate stability and control derivatives from flight data. This technique is being studied to support adaptive control system concepts currently being developed by NASA (National Aeronautics and Space Administration), academia, and industry. This report describes the basic real-time algorithm used for this study and implementation issues for onboard usage as part of an indirect-adaptive control system. A confidence-measures system for automated evaluation of PID results is discussed. Results calculated using flight data from a modified F-15 aircraft are presented. Test maneuvers included pilot input doublets and automated inputs at several flight conditions. Estimated derivatives are compared to aerodynamic model predictions. Data indicate that the real-time PID used for this study performs well enough to be used for onboard parameter estimation. For suitable test inputs, the parameter estimates converged rapidly to sufficient levels of accuracy. The confidence measures devised were moderately successful.

  6. Estimation of parameters of constant elasticity of substitution production functional model

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi

    2017-11-01

    Nonlinear model building has become an increasingly important and powerful tool in mathematical economics. In recent years the popularity of applications of nonlinear models has risen dramatically. Several researchers in econometrics are very often interested in the inferential aspects of nonlinear regression models [6]. The present research study gives a distinct method of estimation for a more complicated and highly nonlinear model, viz. the Constant Elasticity of Substitution (CES) production functional model. Henningen et al. [5] proposed three solutions in 2012 to avoid serious problems when estimating CES functions: (i) removing discontinuities by using the limits of the CES function and its derivative; (ii) circumventing large rounding errors by local linear approximations; and (iii) handling ill-behaved objective functions by a multi-dimensional grid search. Joel Chongeh et al. [7] discussed the estimation of the impact of capital and labour inputs on the gross output of agri-food products using the constant elasticity of substitution production function in the Tanzanian context. Pol Antras [8] presented new estimates of the elasticity of substitution between capital and labour using data from the private sector of the U.S. economy for the period 1948-1998.
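    The CES production function at the center of this record has the standard two-input form Q = γ[δK^(-ρ) + (1-δ)L^(-ρ)]^(-1/ρ), with elasticity of substitution σ = 1/(1+ρ). A direct evaluation of that form (assuming constant returns to scale, i.e. no separate scale exponent) can be sketched as:

```python
# Standard two-input CES production function under constant returns to scale.
# Parameter values are illustrative; the record's estimation method is not shown.

def ces_output(K, L, gamma=1.0, delta=0.5, rho=1.0):
    """Q = gamma * (delta*K**-rho + (1-delta)*L**-rho) ** (-1/rho)."""
    return gamma * (delta * K ** (-rho) + (1 - delta) * L ** (-rho)) ** (-1.0 / rho)

def elasticity_of_substitution(rho):
    """sigma = 1 / (1 + rho); rho -> 0 recovers Cobb-Douglas (sigma = 1)."""
    return 1.0 / (1.0 + rho)

print(ces_output(2.0, 2.0))             # 2.0: with K = L, Q = K under CRS
print(elasticity_of_substitution(1.0))  # 0.5
```

Note the discontinuity at ρ = 0 that motivates solution (i) above: the formula divides by ρ, so Cobb-Douglas must be taken as a limit rather than evaluated directly.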

  7. Understanding global tropospheric ozone and its impacts on human health

    NASA Astrophysics Data System (ADS)

    West, J. J.

    2017-12-01

    Ozone is an important air pollutant for human health, one that has proven difficult to manage locally, nationally, and globally. Here I will present research on global ozone and its impacts on human health, highlighting several studies from my lab over the past decade. I will discuss the drivers of global tropospheric ozone, and the importance of the equatorward shift of emissions over recent decades. I will review estimates of the global burden of ozone on premature mortality, the contributions of different emission sectors to that burden, estimates of how the ozone health burden will change in the future under the Representative Concentration Pathway scenarios, and estimates of the contribution of projected climate change to ozone-related deaths. I will also discuss the importance of the intercontinental transport of ozone, and of methane as a driver of global ozone, from the human health perspective. I will present estimates of trends in the ozone mortality burden in the United States since 1990. Finally, I will discuss our project currently underway to estimate global ozone concentrations at the surface based on data gathered by the Tropospheric Ozone Assessment Report, combined statistically with atmospheric modeling results.

  8. Estimation of the longitudinal and lateral-directional aerodynamic parameters from flight data for the NASA F/A-18 HARV

    NASA Technical Reports Server (NTRS)

    Napolitano, Marcello R.

    1996-01-01

    This progress report presents the results of an investigation focused on parameter identification for the NASA F/A-18 HARV. This aircraft was used in the high alpha research program at the NASA Dryden Flight Research Center. In this study the longitudinal and lateral-directional stability derivatives are estimated from flight data using the Maximum Likelihood method coupled with a Newton-Raphson minimization technique. The objective is to estimate an aerodynamic model describing the aircraft dynamics over a range of angle of attack from 5 deg to 60 deg. The mathematical model is built using the traditional static and dynamic derivative buildup. Flight data used in this analysis were from a variety of maneuvers. The longitudinal maneuvers included large amplitude multiple doublets, optimal inputs, frequency sweeps, and pilot pitch stick inputs. The lateral-directional maneuvers consisted of large amplitude multiple doublets, optimal inputs and pilot stick and rudder inputs. The parameter estimation code pEst, developed at NASA Dryden, was used in this investigation. Results of the estimation process from alpha = 5 deg to alpha = 60 deg are presented and discussed.

  9. Estimation of Human Arm Joints Using Two Wireless Sensors in Robotic Rehabilitation Tasks.

    PubMed

    Bertomeu-Motos, Arturo; Lledó, Luis D; Díez, Jorge A; Catalan, Jose M; Ezquerro, Santiago; Badesa, Francisco J; Garcia-Aracil, Nicolas

    2015-12-04

    This paper presents a novel kinematic reconstruction of the human arm chain with five degrees of freedom and the estimation of the shoulder location during rehabilitation therapy assisted by end-effector robotic devices. The algorithm is based on the pseudoinverse of the Jacobian, using the acceleration of the upper arm, measured with an accelerometer, and the orientation of the shoulder, estimated with a magnetic angular rate and gravity (MARG) device. The results show high accuracy in the estimated arm joint angles and shoulder movement with respect to the real arm, measured through an optoelectronic system. Furthermore, the range of motion (ROM) of 50 healthy subjects is studied in two different trials, one trying to avoid shoulder movements and the second one forcing them. The shoulder movement in the second trial is also estimated accurately. Besides allowing the patient's posture to be corrected during the exercise, the presented algorithm could serve the therapist as an objective assessment tool. In conclusion, the joint estimates enable a better adjustment of the therapy to the needs of the patient, and consequently the arm motion improves faster.
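    The core numerical step — updating joint angles through the pseudoinverse of the Jacobian — can be sketched on a planar two-link arm, a simplified stand-in for the paper's five-degree-of-freedom chain. Link lengths and the target are invented:

```python
import numpy as np

def fk(q, l1=0.3, l2=0.25):
    """Forward kinematics of a planar two-link arm: joint angles -> hand position."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=0.3, l2=0.25):
    """Jacobian of the hand position with respect to the joint angles."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def solve_ik(target, q0, iters=100, step=0.5):
    """Iteratively correct the joint angles via the Jacobian pseudoinverse."""
    q = np.array(q0, float)
    for _ in range(iters):
        err = target - fk(q)
        q += step * np.linalg.pinv(jacobian(q)) @ err
    return q

q = solve_ik(np.array([0.35, 0.2]), [0.1, 0.5])
print(fk(q))  # converges to the target hand position
```

The paper's reconstruction applies the same pseudoinverse update to the full arm chain, with the measured hand pose and estimated shoulder orientation as inputs.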

  10. On the estimation of intracluster correlation for time-to-event outcomes in cluster randomized trials.

    PubMed

    Kalia, Sumeet; Klar, Neil; Donner, Allan

    2016-12-30

    Cluster randomized trials (CRTs) involve the random assignment of intact social units rather than independent subjects to intervention groups. Time-to-event outcomes often are endpoints in CRTs. Analyses of such data need to account for the correlation among cluster members. The intracluster correlation coefficient (ICC) is used to assess the similarity among binary and continuous outcomes that belong to the same cluster. However, estimating the ICC in CRTs with time-to-event outcomes is a challenge because of the presence of censored observations. The literature suggests that the ICC may be estimated using either censoring indicators or observed event times. A simulation study explores the effect of administrative censoring on estimating the ICC. Results show that ICC estimators derived from censoring indicators or observed event times are negatively biased. Analytic work further supports these results. Observed event times are preferred for estimating the ICC when administrative censoring is minimal. To our knowledge, the existing literature provides no practical guidance on the estimation of the ICC when a substantial amount of administrative censoring is present. The results from this study corroborate the need for further methodological research on estimating the ICC for correlated time-to-event outcomes. Copyright © 2016 John Wiley & Sons, Ltd.
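    For uncensored continuous outcomes, the classical one-way ANOVA estimator of the ICC that this literature builds on can be sketched as follows. The cluster count, cluster size, and true ICC are invented for the simulation:

```python
import numpy as np

def anova_icc(data):
    """One-way ANOVA estimator of the ICC for equal-sized clusters.

    data: array of shape (k clusters, m members per cluster).
    """
    k, m = data.shape
    grand = data.mean()
    cluster_means = data.mean(axis=1)
    msb = m * ((cluster_means - grand) ** 2).sum() / (k - 1)       # between clusters
    msw = ((data - cluster_means[:, None]) ** 2).sum() / (k * (m - 1))  # within
    return (msb - msw) / (msb + (m - 1) * msw)

# Random-effects simulation: y_ij = b_i + e_ij, var(b) = icc, var(e) = 1 - icc.
rng = np.random.default_rng(1)
k, m, icc = 200, 10, 0.2
b = rng.normal(0.0, np.sqrt(icc), (k, 1))
y = b + rng.normal(0.0, np.sqrt(1.0 - icc), (k, m))
print(round(anova_icc(y), 3))  # close to the true ICC of 0.2
```

The paper's point is that no equally simple, unbiased analogue exists once a substantial fraction of the event times is administratively censored.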

  11. Resource management and nonmarket valuation research

    USGS Publications Warehouse

    Douglas, A.J.; Taylor, J.G.

    1999-01-01

    Survey-based nonmarket valuation research is often regarded as economics research. However, resource economists need to be aware of, and acknowledge, the manifold information sources that they employ in order to enhance the policy credibility of their studies. Communication between resource economists and practitioners of allied disciplines, including chemistry, civil engineering, sociology, and anthropology, is often neglected. Recent resource allocation policy debates have given rise to an extensive discussion of methodological issues that narrow the scope of the subject. The present paper provides a format for the presentation of nonmarket valuation research results that emphasizes the manifold links between economics studies that employ different methodologies to estimate nonmarket resource values. A more robust emphasis on the interlocking features of the different approaches to estimating nonmarket benefits should foster appreciation of the transdisciplinary aspects of the subject.

  12. Parametric study of transport aircraft systems cost and weight

    NASA Technical Reports Server (NTRS)

    Beltramo, M. N.; Trapp, D. L.; Kimoto, B. W.; Marsh, D. P.

    1977-01-01

    The results of a NASA study to develop production cost estimating relationships (CERs) and weight estimating relationships (WERs) for commercial and military transport aircraft at the system level are presented. The systems considered correspond to the standard weight groups defined in Military Standard 1374 and are listed. These systems make up a complete aircraft exclusive of engines. The CER for each system (or CERs, in several cases) uses weight as the key parameter. Weights may be determined from detailed weight statements, if available, or by using the WERs developed, which are based on technical and performance characteristics generally available during preliminary design. The CERs provide a very useful tool for making preliminary estimates of the production cost of an aircraft; likewise, the WERs provide a very useful tool for making preliminary estimates of aircraft weight from conceptual design information.

  13. Impact and Estimation of Balance Coordinate System Rotations and Translations in Wind-Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Toro, Kenneth G.; Parker, Peter A.

    2017-01-01

    Discrepancies between the model and balance coordinate systems lead to biases in the aerodynamic measurements during wind-tunnel testing. The reference coordinate system, relative to the calibration coordinate system at which the forces and moments are resolved, is crucial to the overall accuracy of force measurements. This paper discusses sources of discrepancies and estimates of coordinate system rotation and translation due to machining and assembly differences. A methodology for numerically estimating the coordinate system biases is developed and discussed. Two case studies are presented using this methodology to estimate the model alignment. Examples span from angle measurement system shifts on the calibration system to discrepancies in actual wind-tunnel data. The results from these case studies will help aerodynamic researchers and force balance engineers to better understand and identify potential differences in calibration systems due to coordinate system rotation and translation.
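    One standard way to estimate a coordinate-system rotation from paired load vectors is an orthogonal Procrustes (Kabsch) fit. The sketch below is illustrative only and is not the paper's methodology; the 0.5-degree misalignment about the z-axis is invented:

```python
import numpy as np

def estimate_rotation(a, b):
    """Least-squares rotation R mapping row-vectors a -> b (Kabsch algorithm)."""
    H = a.T @ b                      # cross-covariance of the paired vectors
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T

# Synthetic calibration loads observed in a frame rotated 0.5 deg about z.
theta = np.deg2rad(0.5)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
rng = np.random.default_rng(7)
loads = rng.normal(size=(50, 3))
measured = loads @ Rz.T              # each row b_i = Rz @ a_i

R_est = estimate_rotation(loads, measured)
angle = np.degrees(np.arccos((np.trace(R_est) - 1.0) / 2.0))
print(round(angle, 2))  # recovers the 0.5 deg misalignment
```

A small residual translation can be estimated the same way by first subtracting the centroids of the two point sets.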

  14. Body mass and stature estimation based on the first metatarsal in humans.

    PubMed

    De Groote, Isabelle; Humphrey, Louise T

    2011-04-01

    Archaeological assemblages often lack the complete long bones needed to estimate stature and body mass. The most accurate estimates of body mass and stature are produced using femoral head diameter and femur length. Foot bones including the first metatarsal preserve relatively well in a range of archaeological contexts. In this article we present regression equations using the first metatarsal to estimate femoral head diameter, femoral length, and body mass in a diverse human sample. The skeletal sample comprised 87 individuals (Andamanese, Australasians, Africans, Native Americans, and British). Results show that all first metatarsal measurements correlate moderately to highly (r = 0.62-0.91) with femoral head diameter and length. The proximal articular dorsoplantar diameter is the best single measurement to predict both femoral dimensions. Percent standard errors of the estimate are below 5%. Equations using two metatarsal measurements show a small increase in accuracy. Direct estimations of body mass (calculated from measured femoral head diameter using previously published equations) have an error of just over 7%. No direct stature estimation equations were derived due to the varied linear body proportions represented in the sample. The equations were tested on a sample of 35 individuals from Christ Church Spitalfields. Percentage differences in estimated and measured femoral head diameter and length were less than 1%. This study demonstrates that it is feasible to use the first metatarsal in the estimation of body mass and stature. The equations presented here are particularly useful for assemblages where the long bones are either missing or fragmented, and enable estimation of these fundamental population parameters in poorly preserved assemblages. Copyright © 2011 Wiley-Liss, Inc.
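    The kind of regression reported here — predicting femoral head diameter from a metatarsal measurement, summarized by the percent standard error of the estimate (%SEE) — can be sketched on synthetic data. The coefficients and measurement values below are invented and are not the published equations:

```python
import numpy as np

def fit_ols(x, y):
    """Least-squares line y = a + b*x plus the percent standard error of estimate."""
    b, a = np.polyfit(x, y, 1)                      # slope, intercept
    resid = y - (a + b * x)
    see = np.sqrt(np.sum(resid**2) / (len(x) - 2))  # standard error of estimate
    return a, b, 100.0 * see / y.mean()             # %SEE relative to mean outcome

rng = np.random.default_rng(5)
metatarsal = rng.normal(14.0, 1.2, 87)     # hypothetical metatarsal diameters (mm)
femoral_head = 3.0 * metatarsal + rng.normal(0.0, 1.5, 87)  # synthetic relationship

a, b, pct_see = fit_ols(metatarsal, femoral_head)
print(round(b, 1), round(pct_see, 1))
```

Body mass would then be estimated by feeding the predicted femoral head diameter into previously published mass-estimation equations, as the abstract describes.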

  15. Regression estimators for generic health-related quality of life and quality-adjusted life years.

    PubMed

    Basu, Anirban; Manca, Andrea

    2012-01-01

    To develop regression models for outcomes with truncated supports, such as health-related quality of life (HRQoL) data, and to account for features typical of such data: a skewed distribution, spikes at 1 or 0, and heteroskedasticity. Regression estimators based on features of the Beta distribution. First, both a single-equation and a two-part model are presented, along with estimation algorithms based on maximum-likelihood, quasi-likelihood, and Bayesian Markov-chain Monte Carlo methods. A novel Bayesian quasi-likelihood estimator is proposed. Second, a simulation exercise is presented to assess the performance of the proposed estimators against ordinary least squares (OLS) regression for a variety of HRQoL distributions that are encountered in practice. Finally, the performance of the proposed estimators is assessed by using them to quantify the treatment effect on QALYs in the EVALUATE hysterectomy trial. Overall model fit is studied using several goodness-of-fit tests, such as Pearson's correlation test, link and reset tests, and a modified Hosmer-Lemeshow test. The simulation results indicate that the proposed methods are more robust in estimating covariate effects than OLS, especially when the effects are large or the HRQoL distribution has a large spike at 1. Quasi-likelihood techniques are more robust than maximum likelihood estimators. When applied to the EVALUATE trial, all but the maximum likelihood estimators produce unbiased estimates of the treatment effect. One- and two-part Beta regression models provide flexible approaches to regress outcomes with truncated supports, such as HRQoL, on covariates, after accounting for many idiosyncratic features of the outcome distribution. This work provides applied researchers with a practical set of tools to model outcomes in cost-effectiveness analysis.
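    A minimal maximum-likelihood Beta regression with a logit mean link — the single-equation building block this family of estimators rests on — might look like the sketch below. The mean-precision parameterization is standard, but the data and coefficients are simulated, not from the EVALUATE trial:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def betareg_nll(params, X, y):
    """Negative log-likelihood of a Beta regression with logit mean link.

    params = [coefficients..., log(phi)]; mean mu = expit(X @ coefs),
    precision phi, so the Beta shapes are a = mu*phi and b = (1-mu)*phi.
    """
    coefs, phi = params[:-1], np.exp(params[-1])
    mu = expit(X @ coefs)
    a, b = mu * phi, (1.0 - mu) * phi
    return -np.sum(gammaln(a + b) - gammaln(a) - gammaln(b)
                   + (a - 1.0) * np.log(y) + (b - 1.0) * np.log(1.0 - y))

rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, phi_true = np.array([0.5, -1.0]), 30.0   # invented values
mu = expit(X @ beta_true)
y = rng.beta(mu * phi_true, (1.0 - mu) * phi_true)
y = np.clip(y, 1e-12, 1.0 - 1e-12)  # guard against float rounding to 0 or 1

res = minimize(betareg_nll, x0=np.zeros(3), args=(X, y), method="BFGS")
print(res.x[:2])  # coefficient estimates on the logit scale
```

The two-part variant adds a separate model (for example logistic) for the probability mass at 1 and applies the Beta regression only to the interior observations.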

  16. NOAA Atlas 14: Updated Precipitation Frequency Estimates for the United States

    NASA Astrophysics Data System (ADS)

    Pavlovic, S.; Perica, S.; Martin, D.; Roy, I.; StLaurent, M.; Trypaluk, C.; Unruh, D.; Yekta, M.; Bonnin, G. M.

    2013-12-01

    NOAA Atlas 14 precipitation frequency estimates, developed by the National Weather Service's Hydrometeorological Design Studies Center, serve as the de facto standards for a wide variety of design and planning activities under federal, state, and local regulations. Precipitation frequency estimates are used in the design of drainage for highways, culverts, bridges, and parking lots, as well as in sizing sewer and stormwater infrastructure. Water resources engineers use them to estimate the amount of runoff, to estimate the volume of detention basins and size detention-basin outlet structures, and to estimate the volume of sediment or the amount of erosion. They are also used by floodplain managers to delineate floodplains and regulate development in floodplains, which is crucial for all communities in the National Flood Insurance Program. The Hydrometeorological Design Studies Center now serves more than 35,000 downloads per month from its Precipitation Frequency Data Server. Precipitation frequency estimates are often used in engineering design without any understanding of how these estimates were developed or of the uncertainties associated with them. This presentation will describe novel tools and techniques developed in recent years to determine precipitation frequency estimates in NOAA Atlas 14. Particular attention will be given to the regional frequency analysis approach based on L-moment statistics calculated from annual maximum series, to selected statistics obtained in determining and parameterizing the probability distribution functions, and to the potential implications for engineering design of recently published estimates.
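    The first two sample L-moments that drive this kind of regional frequency analysis can be computed from an annual maximum series via probability-weighted moments. The sketch below uses synthetic Gumbel-distributed maxima; the location and scale are invented:

```python
import numpy as np

def l_moments(x):
    """First two sample L-moments via unbiased probability-weighted moments."""
    x = np.sort(np.asarray(x, float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()                            # PWM beta_0
    b1 = np.sum((i - 1) / (n - 1) * x) / n   # PWM beta_1
    l1, l2 = b0, 2.0 * b1 - b0               # L-mean and L-scale
    return l1, l2, l2 / l1                   # plus the L-CV ratio

rng = np.random.default_rng(3)
annual_max = rng.gumbel(loc=50.0, scale=10.0, size=5000)  # synthetic maxima
l1, l2, lcv = l_moments(annual_max)
print(round(l1, 1), round(l2, 1))
```

In a regional analysis these ratios are pooled across stations to select and parameterize a common frequency distribution, which is the Atlas 14 approach described above.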

  17. NOAA Atlas 14: Updated Precipitation Frequency Estimates for the United States

    NASA Astrophysics Data System (ADS)

    Pavlovic, S.; Perica, S.; Martin, D.; Roy, I.; StLaurent, M.; Trypaluk, C.; Unruh, D.; Yekta, M.; Bonnin, G. M.

    2011-12-01

    NOAA Atlas 14 precipitation frequency estimates, developed by the National Weather Service's Hydrometeorological Design Studies Center, serve as the de facto standards for a wide variety of design and planning activities under federal, state, and local regulations. Precipitation frequency estimates are used in the design of drainage for highways, culverts, bridges, and parking lots, as well as in sizing sewer and stormwater infrastructure. Water resources engineers use them to estimate the amount of runoff, to estimate the volume of detention basins and size detention-basin outlet structures, and to estimate the volume of sediment or the amount of erosion. They are also used by floodplain managers to delineate floodplains and regulate development in floodplains, which is crucial for all communities in the National Flood Insurance Program. The Hydrometeorological Design Studies Center now serves more than 35,000 downloads per month from its Precipitation Frequency Data Server. Precipitation frequency estimates are often used in engineering design without any understanding of how these estimates were developed or of the uncertainties associated with them. This presentation will describe novel tools and techniques developed in recent years to determine precipitation frequency estimates in NOAA Atlas 14. Particular attention will be given to the regional frequency analysis approach based on L-moment statistics calculated from annual maximum series, to selected statistics obtained in determining and parameterizing the probability distribution functions, and to the potential implications for engineering design of recently published estimates.

  18. Parameter Estimation in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark; Colarco, Peter

    2004-01-01

    In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation, from image sequences, of the physical parameters that govern the underlying processes. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario, the technique is applied to modeled dust data: vertically integrated dust concentrations are used to derive wind information, and the results can be compared to the wind vector fields that served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
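    The structure-tensor idea — accumulating spatial-gradient outer products and solving for the motion that best satisfies the brightness-constancy constraint — can be sketched for a single global translation between two frames. This is a toy version on a synthetic blob; the study itself estimates spatially varying fields:

```python
import numpy as np

def estimate_motion(f0, f1):
    """Estimate one translation between two frames from the structure tensor.

    Solves the least-squares optical-flow system A v = b, where A accumulates
    spatial-gradient products and b accumulates gradient-times-temporal terms.
    """
    Ix = np.gradient(f0, axis=1)   # spatial derivative in x
    Iy = np.gradient(f0, axis=0)   # spatial derivative in y
    It = f1 - f0                   # temporal derivative
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)   # (vx, vy) in pixels per frame

# Synthetic frames: a smooth blob shifted by (1, 0) pixels between frames.
y, x = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)
v = estimate_motion(blob(32, 32), blob(33, 32))
print(v)  # close to (1, 0)
```

Replacing the translation model with an advection-diffusion-decay model turns the same least-squares machinery into an estimator for diffusion constants and decay rates, as the abstract describes.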

  19. Fundamental Properties of Co-moving Stars Observed by Gaia

    NASA Astrophysics Data System (ADS)

    Bochanski, John J.; Faherty, Jacqueline K.; Gagné, Jonathan; Nelson, Olivia; Coker, Kristina; Smithka, Iliya; Desir, Deion; Vasquez, Chelsea

    2018-04-01

    We have estimated fundamental parameters for a sample of co-moving stars observed by Gaia and identified by Oh et al. We matched the Gaia observations to the 2MASS and Wide-Field Infrared Survey Explorer catalogs and fit MIST isochrones to the data, deriving estimates of the mass, radius, [Fe/H], age, distance, and extinction to 9754 stars in the original sample of 10606 stars. We verify these estimates by comparing our new results to previous analyses of nearby stars, examining fiducial cluster properties, and estimating the power-law slope of the local present-day mass function. A comparison to previous studies suggests that our mass estimates are robust, while metallicity and age estimates are increasingly uncertain. We use our calculated masses to examine the properties of binaries in the sample and show that separation of the pairs dominates the observed binding energies and expected lifetimes.
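    Estimating the power-law slope of a present-day mass function is commonly done with the maximum-likelihood estimator for a Pareto-like tail. The sketch below recovers an invented Salpeter-like slope from simulated masses; it is illustrative, not the paper's pipeline:

```python
import numpy as np

def powerlaw_slope(m, m_min):
    """Maximum-likelihood slope alpha of dN/dm ~ m^-alpha for m >= m_min."""
    m = np.asarray(m, float)
    m = m[m >= m_min]
    return 1.0 + len(m) / np.sum(np.log(m / m_min))

rng = np.random.default_rng(4)
alpha_true, m_min = 2.35, 0.5          # Salpeter-like slope, cutoff in solar masses
u = rng.uniform(size=20000)
masses = m_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF sampling

alpha_hat = powerlaw_slope(masses, m_min)
print(round(alpha_hat, 2))  # close to 2.35
```

In practice the fitted masses carry individual uncertainties, so the slope is usually reported with an error estimated by bootstrap or from the Fisher information.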

  20. An exploration of emergency department presentations related to high heel footwear in Victoria, Australia, 2006-2010.

    PubMed

    Williams, Cylie M; Haines, Terry P

    2014-01-23

    Many women are warned against the dangers of wearing high heel footwear; however, there is limited empirical evidence demonstrating an association between wearing high heels and injury. Gait laboratory testing has found that a higher heel height places the foot in a position that increases the risk of ankle sprain. Women have also been surveyed about wearing high heels, and approximately half of those surveyed reported inconvenience and pain after wearing high heeled shoes. This study aims to explore emergency department presentations of injuries, and the estimated costs, directly attributed to wearing high heeled footwear within Victoria, Australia during 2006-2010. The Victorian Emergency Minimum Dataset (VEMD) was searched for all injuries attributed to wearing high heel footwear presenting to emergency departments in Victoria, Australia, between the years 2006-2010. The VEMD produced a report detailing sex, age at presentation, month of presentation, time of day of presentation, day of presentation, location where the injury occurred, and type of injury for each presentation. Monash Health in Victoria, Australia, provided emergency department estimates for injury types to calculate an estimated cost of an acute injury related to wearing high heel footwear. There were 240 injuries presenting to Victorian emergency departments directly attributed to wearing high heeled footwear. The majority of people injured were women (n = 236) and all were less than 55 years of age. More injuries presented on a Sunday (n = 83) and in the 8 am-12 pm time bracket (n = 64), and more injuries presented in the months of November, December and January (n = 80). The most commonly injured body part was the ankle (n = 123). The emergency department estimate of the cost of these injuries over this time-frame was almost $72,000 (mean of $316.72 per presentation).
People who wear high heel footwear on weekends appear to be at higher risk for injury that leads to emergency department presentation. However, there was not a large cost associated with emergency department presentations attributable to wearing high heel footwear over a 5 year period.
