Sample records for "provide precise estimates"

  1. Reliability and precision of pellet-group counts for estimating landscape-level deer density

    Treesearch

    David S. deCalesta

    2013-01-01

    This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...

  2. Using age on clothes size label to estimate weight in emergency paediatric patients.

    PubMed

    Elgie, Laura D; Williams, Andrew R

    2012-10-01

    To study formulae that estimate children's weight using their actual age. To determine whether using the age on their clothes size label in these formulae can estimate weight when their actual age is unknown. The actual age and the age on the clothes labels of 188 children were inserted into formulae that estimate children's weight. These estimates were compared with their actual weight. Bland-Altman plots were used to calculate the precision and accuracy of each of these estimates. In all formulae, using the age on the clothes size label provided a more precise estimate than the child's actual age. In emergencies where a child's age is unknown, use of the age on their clothes label in weight-estimating formulae yields acceptable weight estimates. Even in situations where a child's age is known, the age on their clothes label may provide a more accurate and precise weight estimate than the actual age.
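
    As a hedged illustration of the method described above, the sketch below assumes the APLS-style formula, weight (kg) = 2 x (age in years + 4), as the weight-estimating formula (the study evaluated several such formulae) and uses invented patient data. Bland-Altman bias is the mean difference between estimated and actual weight; the 95% limits of agreement are bias ± 1.96 SD.

      import numpy as np

      def apls_weight_kg(age_years):
          """APLS-style formula: weight (kg) ~ 2 * (age in years + 4)."""
          return 2.0 * (age_years + 4.0)

      def bland_altman(estimated, actual):
          """Bias and 95% limits of agreement for paired estimates."""
          diff = np.asarray(estimated, dtype=float) - np.asarray(actual, dtype=float)
          bias = diff.mean()
          half = 1.96 * diff.std(ddof=1)
          return bias, bias - half, bias + half

      # Hypothetical data: actual weights, actual ages, and clothes-label ages
      actual_weight = np.array([18.0, 22.5, 30.1, 14.2])
      actual_age = np.array([4, 6, 9, 2])
      label_age = np.array([5, 6, 8, 3])

      for name, ages in [("actual age", actual_age), ("label age", label_age)]:
          bias, lo, hi = bland_altman(apls_weight_kg(ages), actual_weight)
          print(f"{name}: bias={bias:+.1f} kg, 95% LoA [{lo:.1f}, {hi:.1f}]")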

  3. Probabilistic metrology or how some measurement outcomes render ultra-precise estimates

    NASA Astrophysics Data System (ADS)

    Calsamiglia, J.; Gendra, B.; Muñoz-Tapia, R.; Bagan, E.

    2016-10-01

    We show on theoretical grounds that, even in the presence of noise, probabilistic measurement strategies (which have a certain probability of failure or abstention) can provide, upon a heralded successful outcome, estimates with a precision that exceeds the deterministic bounds for the average precision. This establishes a new ultimate bound on the phase estimation precision of particular measurement outcomes (or sequences of outcomes). For probe systems subject to local dephasing, we quantify this precision limit as a function of the probability of failure that can be tolerated. Our results show that the possibility of abstaining can counteract the detrimental effects of noise.

  4. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point counts) is an increasingly popular method of estimating population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the roles of the number of sample units, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than those with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; too often, and with consequence, this consideration is an afterthought that occurs during the data analysis process.
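
    The sketch below illustrates this kind of simulation under assumptions of my own: organisms optionally clustered around ten parent points, a fixed detection probability, and a naive scaled-count estimator standing in for the paper's N-mixture models, so the numbers will not reproduce the published results.

      import numpy as np

      rng = np.random.default_rng(1)

      def simulate_counts(n_units, unit_area, total_area=100.0, n_organisms=500,
                          p_detect=0.5, clustered=True):
          """Drop organisms on a square landscape (optionally clustered around
          parent points), place square sample units at random, and return the
          detected count in each unit."""
          side = np.sqrt(total_area)
          if clustered:
              parents = rng.uniform(0, side, size=(10, 2))
              pts = (parents[rng.integers(0, 10, n_organisms)]
                     + rng.normal(0, side / 30, (n_organisms, 2))) % side
          else:
              pts = rng.uniform(0, side, size=(n_organisms, 2))
          u = np.sqrt(unit_area)
          origins = rng.uniform(0, side - u, size=(n_units, 2))
          in_unit = [np.sum(np.all((pts >= o) & (pts < o + u), axis=1))
                     for o in origins]
          return rng.binomial(np.array(in_unit), p_detect)

      def naive_estimate(counts, unit_area, total_area=100.0, p_detect=0.5):
          """Scale the mean detected count per unit up to the whole area."""
          return counts.mean() / unit_area / p_detect * total_area

      for n_units, unit_area in [(40, 0.5), (4, 5.0)]:  # equal total effort
          ests = [naive_estimate(simulate_counts(n_units, unit_area), unit_area)
                  for _ in range(500)]
          print(f"{n_units} units of area {unit_area}: "
                f"mean={np.mean(ests):.0f}, SD={np.std(ests):.0f}")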

  5. Evaluation of the procedure 1A component of the 1980 US/Canada wheat and barley exploratory experiment

    NASA Technical Reports Server (NTRS)

    Chapman, G. M. (Principal Investigator); Carnes, J. G.

    1981-01-01

    Several techniques which use clusters generated by a new clustering algorithm, CLASSY, are proposed as alternatives to random sampling to obtain greater precision in crop proportion estimation: (1) the Proportional Allocation/Relative Count Estimator (PA/RCE) uses proportional allocation of dots to clusters on the basis of cluster size and a relative-count cluster-level estimate; (2) the Proportional Allocation/Bayes Estimator (PA/BE) uses proportional allocation of dots to clusters and a Bayesian cluster-level estimate; and (3) the Bayes Sequential Allocation/Bayesian Estimator (BSA/BE) uses sequential allocation of dots to clusters and a Bayesian cluster-level estimate. Clustering is an effective method for making proportion estimates. It is estimated that, to obtain the same precision with random sampling as obtained by the proportional sampling of 50 dots with an unbiased estimator, samples of 85 or 166 would need to be taken if dot sets with AI labels (integrated procedure) or ground truth labels, respectively, were input. Dot reallocation provides dot sets that are unbiased. It is recommended that these proportion estimation techniques be maintained, particularly the PA/BE, because it provides the greatest precision.

  6. Application of square-root filtering for spacecraft attitude control

    NASA Technical Reports Server (NTRS)

    Sorensen, J. A.; Schmidt, S. F.; Goka, T.

    1978-01-01

    Suitable digital algorithms are developed and tested for providing on-board precision attitude estimation and pointing control for potential use in the Landsat-D spacecraft. These algorithms provide pointing accuracy of better than 0.01 deg. To obtain the necessary precision with efficient software, a six-state-variable square-root Kalman filter combines two star tracker measurements to update attitude estimates obtained from processing three gyro outputs. The validity of the estimation and control algorithms is established, and the sensitivity of their performance to various error sources and software parameters is investigated by detailed digital simulation. Spacecraft computer memory, cycle time, and accuracy requirements are estimated.
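
    As a sketch only, the predict/update cycle at the heart of such a filter looks like this for a one-axis toy model with invented noise values; the flight algorithm described above uses a six-state square-root formulation, which is numerically more robust than this conventional form.

      import numpy as np

      dt = 0.01
      F = np.array([[1.0, -dt],   # attitude angle propagated with gyro bias removed
                    [0.0, 1.0]])
      Q = np.diag([1e-8, 1e-10])  # process noise: gyro noise and bias drift (assumed)
      H = np.array([[1.0, 0.0]])  # star tracker observes the angle only
      R = np.array([[1e-6]])      # star-tracker measurement variance (assumed)

      x = np.zeros(2)             # state: [angle (rad), gyro bias (rad/s)]
      P = np.eye(2) * 1e-4

      def kalman_step(x, P, z):
          # predict
          x = F @ x
          P = F @ P @ F.T + Q
          # update with star-tracker measurement z
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(2) - K @ H) @ P
          return x, P

      x, P = kalman_step(x, P, np.array([0.001]))
      print(x, np.sqrt(np.diag(P)))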

  7. Demonstration of precise estimation of polar motion parameters with the global positioning system: Initial results

    NASA Technical Reports Server (NTRS)

    Lichten, S. M.

    1991-01-01

    Data from the Global Positioning System (GPS) were used to determine precise polar motion estimates. Conservatively calculated formal errors of the GPS least squares solution are approx. 10 cm. The GPS estimates agree with independently determined polar motion values from very long baseline interferometry (VLBI) at the 5 cm level. The data were obtained from a partial constellation of GPS satellites and from a sparse worldwide distribution of ground stations. The accuracy of the GPS estimates should continue to improve as more satellites and ground receivers become operational, and eventually a near real time GPS capability should be available. Because the GPS data are obtained and processed independently from the large radio antennas at the Deep Space Network (DSN), GPS estimation could provide very precise measurements of Earth orientation for calibration of deep space tracking data and could significantly relieve the ever growing burden on the DSN radio telescopes to provide Earth platform calibrations.

  8. Quantitative imaging biomarkers: Effect of sample size and bias on confidence interval coverage.

    PubMed

    Obuchowski, Nancy A; Bullen, Jennifer

    2017-01-01

    Introduction: Quantitative imaging biomarkers (QIBs) are being increasingly used in medical practice and clinical trials. An essential first step in the adoption of a quantitative imaging biomarker is the characterization of its technical performance, i.e., precision and bias, through one or more performance studies. Then, given the technical performance, a confidence interval for a new patient's true biomarker value can be constructed. Estimating bias and precision can be problematic because rarely are both estimated in the same study, precision studies are usually quite small, and bias cannot be measured when there is no reference standard. Methods: A Monte Carlo simulation study was conducted to assess factors affecting nominal coverage of confidence intervals for a new patient's quantitative imaging biomarker measurement and for change in the quantitative imaging biomarker over time. Factors considered include sample size for estimating bias and precision, effect of fixed and non-proportional bias, clustered data, and absence of a reference standard. Results: Technical performance studies of a quantitative imaging biomarker should include at least 35 test-retest subjects to estimate precision and 65 cases to estimate bias. Confidence intervals for a new patient's quantitative imaging biomarker measurement constructed under the no-bias assumption provide nominal coverage as long as the fixed bias is <12%. For confidence intervals of the true change over time, linearity must hold and the slope of the regression of the measurements vs. true values should be between 0.95 and 1.05. The regression slope can be assessed adequately as long as fixed multiples of the measurand can be generated. Even small non-proportional bias greatly reduces confidence interval coverage. Multiple lesions in the same subject can be treated as independent when estimating precision. Conclusion: Technical performance studies of quantitative imaging biomarkers require moderate sample sizes in order to provide robust estimates of bias and precision for constructing confidence intervals for new patients. Assumptions of linearity and non-proportional bias should be assessed thoroughly.
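
    A sketch of the resulting intervals, assuming no bias and a within-subject standard deviation (wSD) estimated from a test-retest precision study; for change over time, the variance of the difference of two independent measurements is 2 wSD².

      import math

      def qib_confidence_interval(y, wsd, z=1.96):
          """95% CI for a patient's true biomarker value from one measurement y,
          under the no-bias assumption, with test-retest within-subject SD wsd."""
          return y - z * wsd, y + z * wsd

      def change_confidence_interval(y1, y2, wsd, z=1.96):
          """95% CI for the true change between two visits."""
          half = z * math.sqrt(2.0) * wsd
          return (y2 - y1) - half, (y2 - y1) + half

      print(qib_confidence_interval(42.0, 3.0))         # hypothetical values
      print(change_confidence_interval(42.0, 37.0, 3.0))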

  9. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging

    NASA Astrophysics Data System (ADS)

    Nair, S. P.; Righetti, R.

    2015-05-01

    Recent elastography techniques focus on imaging properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain-versus-time relationships for tissues undergoing creep compression are non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method to provide non-linear LSE parameter estimate reliability, which we call Resimulation of Noise (RoN). RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
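
    A sketch of the RoN idea under assumed specifics (a mono-exponential curve stands in for the poroelastic strain model): fit once, estimate the noise level from the residuals, resimulate noisy copies of the fitted curve, refit each, and take the spread of the refitted parameter as the reliability measure.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(0)

      def strain_model(t, s0, tau):
          """Mono-exponential strain rise with time constant tau."""
          return s0 * (1.0 - np.exp(-t / tau))

      def ron_precision(t, y, n_resim=200):
          """Resimulation-of-Noise sketch for the time constant tau."""
          popt, _ = curve_fit(strain_model, t, y, p0=[1.0, 1.0])
          noise_sd = np.std(y - strain_model(t, *popt), ddof=2)
          taus = []
          for _ in range(n_resim):
              y_sim = strain_model(t, *popt) + rng.normal(0, noise_sd, t.size)
              p_sim, _ = curve_fit(strain_model, t, y_sim, p0=popt)
              taus.append(p_sim[1])
          return popt[1], np.std(taus, ddof=1)

      t = np.linspace(0.1, 10, 50)
      y = strain_model(t, 1.0, 2.5) + rng.normal(0, 0.02, t.size)
      tau_hat, tau_sd = ron_precision(t, y)
      print(f"tau = {tau_hat:.2f} +/- {tau_sd:.2f}")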

  10. Achieving metrological precision limits through postselection

    NASA Astrophysics Data System (ADS)

    Alves, G. Bié; Pimentel, A.; Hor-Meyll, M.; Walborn, S. P.; Davidovich, L.; de Matos Filho, R. L.

    2017-01-01

    Postselection strategies have been proposed with the aim of amplifying weak signals, which may help to overcome detection thresholds associated with technical noise in high-precision measurements. Here we use an optical setup to experimentally explore two different postselection protocols for the estimation of a small parameter: a weak-value amplification procedure and an alternative method that does not provide amplification but nonetheless is shown to be more robust for the sake of parameter estimation. Each technique leads approximately to the saturation of quantum limits for the estimation precision, expressed by the Cramér-Rao bound. For both situations, we show that parameter estimation is improved when the postselection statistics are considered together with the measurement device.

  11. Influence of sectioning location on age estimates from common carp dorsal spines

    USGS Publications Warehouse

    Watkins, Carson J.; Klein, Zachary B.; Terrazas, Marc M.; Quist, Michael C.

    2015-01-01

    Dorsal spines have been shown to provide precise age estimates for Common Carp (Cyprinus carpio) and are commonly used by management agencies to gain information on Common Carp populations. However, no previous studies have evaluated variation in the precision of age estimates obtained from different sectioning locations along Common Carp dorsal spines. We evaluated the precision, relative readability, and distribution of age estimates obtained from various sectioning locations along Common Carp dorsal spines. Dorsal spines from 192 Common Carp were sectioned at the base (section 1), immediately distal to the basal section (section 2), and at 25% (section 3), 50% (section 4), and 75% (section 5) of the total length of the dorsal spine. The exact agreement and within-1-year agreement among readers were highest and the coefficient of variation lowest for section 2. In general, age estimates derived from sections 2 and 3 had similar age distributions and displayed the highest concordance in age estimates with section 1. Our results indicate that sections taken at ≤ 25% of the total length of the dorsal spine can be easily interpreted and provide precise estimates of Common Carp age. The greater consistency in age estimates obtained from section 2 indicates that by using a standard sectioning location, fisheries scientists can expect age-based estimates of population metrics to be more comparable and thus more useful for understanding Common Carp population dynamics.
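
    The precision metrics used in this literature (percent exact agreement, within-1-year agreement, and the mean coefficient of variation across fish) can be computed as below; the two-reader ages are invented.

      import numpy as np

      def ageing_precision(ages):
          """Between-reader precision for an (n_fish, n_readers) age array."""
          a = np.asarray(ages, dtype=float)
          exact = 100 * np.mean(np.all(a == a[:, [0]], axis=1))
          within1 = 100 * np.mean(np.ptp(a, axis=1) <= 1)
          cv = 100 * np.mean(a.std(axis=1, ddof=1) / a.mean(axis=1))
          return exact, within1, cv

      ages = [[4, 4], [6, 7], [3, 3], [9, 9], [5, 6]]  # hypothetical readers 1 and 2
      e, w1, cv = ageing_precision(ages)
      print(f"exact={e:.0f}%, within-1={w1:.0f}%, CV={cv:.1f}%")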

  12. Leveraging prognostic baseline variables to gain precision in randomized trials

    PubMed Central

    Colantuoni, Elizabeth; Rosenblum, Michael

    2015-01-01

    We focus on estimating the average treatment effect in a randomized trial. If baseline variables are correlated with the outcome, then appropriately adjusting for these variables can improve precision. An example is the analysis of covariance (ANCOVA) estimator, which applies when the outcome is continuous, the quantity of interest is the difference in mean outcomes comparing treatment versus control, and a linear model with only main effects is used. ANCOVA is guaranteed to be at least as precise as the standard unadjusted estimator, asymptotically, under no parametric model assumptions and also is locally semiparametric efficient. Recently, several estimators have been developed that extend these desirable properties to more general settings that allow any real-valued outcome (e.g., binary or count), contrasts other than the difference in mean outcomes (such as the relative risk), and estimators based on a large class of generalized linear models (including logistic regression). To the best of our knowledge, we give the first simulation study in the context of randomized trials that compares these estimators. Furthermore, our simulations are not based on parametric models; instead, our simulations are based on resampling data from completed randomized trials in stroke and HIV in order to assess estimator performance in realistic scenarios. We provide practical guidance on when these estimators are likely to provide substantial precision gains and describe a quick assessment method that allows clinical investigators to determine whether these estimators could be useful in their specific trial contexts. PMID:25872751
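
    A minimal simulation of the ANCOVA precision gain described above, with an invented treatment effect and baseline-outcome correlation rather than the resampled trial data used in the paper: the spread of the adjusted estimator across simulated trials is visibly smaller.

      import numpy as np

      rng = np.random.default_rng(2)

      def one_trial(n=200, rho=0.6, effect=0.5):
          """Randomized trial with a baseline variable correlated with outcome."""
          baseline = rng.normal(size=n)
          treat = rng.integers(0, 2, size=n)
          noise = rng.normal(size=n) * np.sqrt(1 - rho**2)
          outcome = effect * treat + rho * baseline + noise
          return baseline, treat, outcome

      unadj, ancova = [], []
      for _ in range(2000):
          x, a, y = one_trial()
          unadj.append(y[a == 1].mean() - y[a == 0].mean())
          X = np.column_stack([np.ones_like(y), a, x])  # main-effects linear model
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          ancova.append(beta[1])                        # treatment coefficient

      print("unadjusted SD:", round(np.std(unadj, ddof=1), 4))
      print("ANCOVA SD:", round(np.std(ancova, ddof=1), 4))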

  13. [Estimation of desert vegetation coverage based on multi-source remote sensing data].

    PubMed

    Wan, Hong-Mei; Li, Xia; Dong, Dao-Rui

    2012-12-01

    Taking the lower reaches of the Tarim River in Xinjiang of Northwest China as the study area, and based on ground investigation and multi-source remote sensing data of different resolutions, estimation models for desert vegetation coverage were built, and the precisions of the different estimation methods and models were compared. The results showed that with increasing spatial resolution of the remote sensing data, the precision of the estimation models increased. The estimation precision of the models based on high, middle-high, and middle-low resolution remote sensing data was 89.5%, 87.0%, and 84.56%, respectively, and the precisions of the remote sensing models were higher than that of the vegetation index method. This study revealed the change patterns of the estimation precision of desert vegetation coverage based on different spatial resolution remote sensing data, and realized the quantitative conversion of the parameters and scales among the high, middle, and low spatial resolution remote sensing data of desert vegetation coverage, which would provide direct evidence for establishing and implementing a comprehensive remote sensing monitoring scheme for ecological restoration in the study area.

  14. Quantum metrology and estimation of Unruh effect

    PubMed Central

    Wang, Jieci; Tian, Zehua; Jing, Jiliang; Fan, Heng

    2014-01-01

    We study quantum metrology for a pair of entangled Unruh-DeWitt detectors when one of them is accelerated and coupled to a massless scalar field. Compared with previous schemes, our model requires only local interaction and avoids the use of cavities in the probe state preparation process. We show that the probe state preparation and the interaction between the accelerated detector and the external field have significant effects on the value of the quantum Fisher information and, correspondingly, set different ultimate limits on the precision of estimating the Unruh effect. We find that the estimation precision can be improved by a larger effective coupling strength and a longer interaction time. Alternatively, the energy gap of the detector has a range that can provide better precision. Thus we may adjust those parameters and attain a higher precision in the estimation. We also find that an extremely high acceleration is not required in the quantum metrology process. PMID:25424772

  15. Skeletal Correlates for Body Mass Estimation in Modern and Fossil Flying Birds

    PubMed Central

    Field, Daniel J.; Lynner, Colton; Brown, Christian; Darroch, Simon A. F.

    2013-01-01

    Scaling relationships between skeletal dimensions and body mass in extant birds are often used to estimate body mass in fossil crown-group birds, as well as in stem-group avialans. However, useful statistical measurements for constraining the precision and accuracy of fossil mass estimates are rarely provided, which prevents the quantification of robust upper- and lower-bound body mass estimates for fossils. Here, we generate thirteen body mass correlations and associated measures of statistical robustness using a sample of 863 extant flying birds. By providing robust body mass regressions with upper- and lower-bound prediction intervals for individual skeletal elements, we address the longstanding problem of body mass estimation for highly fragmentary fossil birds. We demonstrate that the most precise proxy for estimating body mass in the overall dataset, measured both as the coefficient of determination of ordinary least squares regression and as percent prediction error, is the maximum diameter of the coracoid's humeral articulation facet (the glenoid). We further demonstrate that this result is consistent among the majority of investigated avian orders (10 out of 18). As a result, we suggest that, in the majority of cases, this proxy may provide the most accurate estimates of body mass for volant fossil birds. Additionally, by presenting statistical measurements of body mass prediction error for thirteen different body mass regressions, this study provides a much-needed quantitative framework for the accurate estimation of body mass and associated ecological correlates in fossil birds. The application of these regressions will enhance the precision and robustness of many mass-based inferences in future paleornithological studies. PMID:24312392
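
    A sketch of the regression framework: ordinary least squares on log10-transformed data, reporting the coefficient of determination and percent prediction error. The glenoid diameters and masses below are fabricated for illustration, not the paper's data.

      import numpy as np

      def mass_regression(skeletal_dim, body_mass):
          """OLS on log10 data with R^2 and percent prediction error (PPE)."""
          x, y = np.log10(skeletal_dim), np.log10(body_mass)
          slope, intercept = np.polyfit(x, y, 1)
          y_hat = slope * x + intercept
          r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
          pred_mass = 10 ** y_hat
          ppe = 100 * np.mean(np.abs(body_mass - pred_mass) / pred_mass)
          return slope, intercept, r2, ppe

      glenoid_mm = np.array([4.1, 5.0, 6.3, 8.0, 11.2])    # hypothetical
      mass_g = np.array([25.0, 48.0, 90.0, 210.0, 600.0])  # hypothetical
      slope, intercept, r2, ppe = mass_regression(glenoid_mm, mass_g)
      print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r2:.3f}, PPE={ppe:.1f}%")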

  16. LAI-2000 Accuracy, Precision, and Application to Visual Estimation of Leaf Area Index of Loblolly Pine

    Treesearch

    Jason A. Gatch; Timothy B. Harrington; James P. Castleberry

    2002-01-01

    Leaf area index (LAI) is an important parameter of forest stand productivity that has been used to diagnose stand vigor and potential fertilizer response of southern pines. The LAI-2000 was tested for its ability to provide accurate and precise estimates of LAI of loblolly pine (Pinus taeda L.). To test instrument accuracy, regression was used to...

  17. The ACS statistical analyzer

    DOT National Transportation Integrated Search

    2010-03-01

    This document provides guidance for using the ACS Statistical Analyzer. It is an Excel-based template for users of estimates from the American Community Survey (ACS) to assess the precision of individual estimates and to compare pairs of estimates fo...

  18. Age estimation of burbot using pectoral fin rays, branchiostegal rays, and otoliths

    USGS Publications Warehouse

    Klein, Zachary B.; Terrazas, Marc M.; Quist, Michael C.

    2014-01-01

    Throughout much of its native distribution, burbot (Lota lota) is a species of conservation concern. Understanding dynamic rate functions is critical for the effective management of sensitive burbot populations, which necessitates accurate and precise age estimates, ideally obtained from a non-lethal structure. In an effort to identify a non-lethal ageing structure, we compared the precision of age estimates obtained from otoliths, pectoral fin rays, dorsal fin rays, and branchiostegal rays from 208 burbot collected from the Green River drainage, Wyoming. Additionally, we compared the accuracy of age estimates from pectoral fin rays, dorsal fin rays, and branchiostegal rays to those of otoliths. Dorsal fin rays were immediately deemed a poor ageing structure and removed from further analysis. Age-bias plots indicated that age estimates obtained from branchiostegal rays and pectoral fin rays were appreciably different from those obtained from otoliths. Exact agreement between readers and reader confidence were highest for otoliths and lowest for branchiostegal rays. Our results indicate that otoliths provide the most precise age estimates for burbot.

  19. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and a secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary with variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF, and sheet metal. The results show that in most situations the best performance (accuracy and precision) is obtained with the probing method; only at the lowest angles of incidence did the ellipse or lead-in method perform better. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value of the precision by means of a confidence interval of the specific measurement. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
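
    For orientation, the ellipse method mentioned above rests on a simple geometric relation: for an approximately elliptical bullet defect, the angle of incidence (measured from the surface) is about arcsin(minor axis / major axis). A minimal sketch with made-up defect dimensions:

      import math

      def ellipse_method_angle(width_mm, length_mm):
          """Angle of incidence from the axes of an elliptical bullet defect."""
          return math.degrees(math.asin(width_mm / length_mm))

      print(f"{ellipse_method_angle(9.0, 18.0):.1f} deg")  # -> 30.0 deg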

  20. Precision estimate for Odin-OSIRIS limb scatter retrievals

    NASA Astrophysics Data System (ADS)

    Bourassa, A. E.; McLinden, C. A.; Bathgate, A. F.; Elash, B. J.; Degenstein, D. A.

    2012-02-01

    The limb scatter measurements made by the Optical Spectrograph and Infrared Imaging System (OSIRIS) instrument on the Odin spacecraft are used to routinely produce vertically resolved trace gas and aerosol extinction profiles. Version 5 of the ozone and stratospheric aerosol extinction retrievals, which is available for download, is produced using a multiplicative algebraic reconstruction technique (MART). The MART inversion is a type of relaxation method, and as such the covariance of the retrieved state must be estimated numerically, which, if done directly, is a computationally heavy task. Here we provide a methodology for deriving a numerical estimate of the covariance matrix of the retrieved state under the MART inversion that is efficient enough to perform for each OSIRIS measurement. The resulting precision is compared with the variability in a large set of pairs of OSIRIS measurements that are close in time and space in the tropical stratosphere, where natural atmospheric variability is weak. These results are found to be highly consistent and thus provide confidence in the numerical estimate of the precision of the retrieved profiles.
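
    A generic MART iteration (not the OSIRIS retrieval code) on a toy triangular "limb-scan" geometry, where each row of A integrates the profile above one tangent height; the state is corrected multiplicatively until A @ x matches the measurements.

      import numpy as np

      def mart(A, y, n_iter=50, lam=1.0):
          """Multiplicative algebraic reconstruction technique sketch.
          Requires nonnegative A and positive y; x stays positive throughout."""
          x = np.ones(A.shape[1])
          for _ in range(n_iter):
              for i in range(A.shape[0]):
                  ratio = y[i] / (A[i] @ x)
                  x *= ratio ** (lam * A[i] / A[i].max())
          return x

      A = np.triu(np.ones((4, 4)))        # toy limb-viewing geometry
      x_true = np.array([1.0, 2.0, 3.0, 4.0])
      y = A @ x_true
      print(mart(A, y))                   # converges toward x_true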

  1. Comparison of sampling designs for estimating deforestation from landsat TM and MODIS imagery: a case study in Mato Grosso, Brazil.

    PubMed

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block.

  2. Comparison of Sampling Designs for Estimating Deforestation from Landsat TM and MODIS Imagery: A Case Study in Mato Grosso, Brazil

    PubMed Central

    Zhu, Shanyou; Zhang, Hailong; Liu, Ronggao; Cao, Yun; Zhang, Guixin

    2014-01-01

    Sampling designs are commonly used to estimate deforestation over large areas, but comparisons between different sampling strategies are required. Using PRODES deforestation data as a reference, deforestation in the state of Mato Grosso in Brazil from 2005 to 2006 is evaluated using Landsat imagery and a nearly synchronous MODIS dataset. The MODIS-derived deforestation is used to assist in sampling and extrapolation. Three sampling designs are compared according to the estimated deforestation of the entire study area based on simple extrapolation and linear regression models. The results show that stratified sampling for strata construction and sample allocation using the MODIS-derived deforestation hotspots provided more precise estimations than simple random and systematic sampling. Moreover, the relationship between the MODIS-derived and TM-derived deforestation provides a precise estimate of the total deforestation area as well as the distribution of deforestation in each block. PMID:25258742

  3. Effect of survey design and catch rate estimation on total catch estimates in Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2012-01-01

    Roving–roving and roving–access creel surveys are the primary techniques used to obtain information on harvest of Chinook salmon Oncorhynchus tshawytscha in Idaho sport fisheries. Once interviews are conducted using roving–roving or roving–access survey designs, mean catch rate can be estimated with the ratio-of-means (ROM) estimator, the mean-of-ratios (MOR) estimator, or the MOR estimator with exclusion of short-duration (≤0.5 h) trips. Our objective was to examine the relative bias and precision of total catch estimates obtained from use of the two survey designs and three catch rate estimators for Idaho Chinook salmon fisheries. Information on angling populations was obtained by direct visual observation of portions of Chinook salmon fisheries in three Idaho river systems over an 18-d period. Based on data from the angling populations, Monte Carlo simulations were performed to evaluate the properties of the catch rate estimators and survey designs. Among the three estimators, the ROM estimator provided the most accurate and precise estimates of mean catch rate and total catch for both roving–roving and roving–access surveys. On average, the root mean square error of simulated total catch estimates was 1.42 times greater and relative bias was 160.13 times greater for roving–roving surveys than for roving–access surveys. Length-of-stay bias and nonstationary catch rates in roving–roving surveys both appeared to affect catch rate and total catch estimates. Our results suggest that use of the ROM estimator in combination with an estimate of angler effort provided the least biased and most precise estimates of total catch for both survey designs. However, roving–access surveys were more accurate than roving–roving surveys for Chinook salmon fisheries in Idaho.
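
    The two catch-rate estimators compared above are easy to compute from interview data; the trips and the effort total below are invented. Total catch is the chosen catch rate times an independent estimate of total angler effort.

      import numpy as np

      catch = np.array([0, 2, 1, 5, 0, 3])          # fish per interviewed trip
      hours = np.array([0.5, 4.0, 2.0, 6.0, 1.0, 3.5])

      rom = catch.sum() / hours.sum()               # ratio of means
      mor = np.mean(catch / hours)                  # mean of ratios
      keep = hours > 0.5                            # exclude short (<=0.5 h) trips
      mor_long = np.mean(catch[keep] / hours[keep])

      total_effort = 1200.0                         # angler-hours (assumed)
      print(f"ROM total catch: {rom * total_effort:.0f}")
      print(f"MOR total catch: {mor * total_effort:.0f}")
      print(f"MOR (> 0.5 h) total catch: {mor_long * total_effort:.0f}")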

  4. Spatial distribution, sampling precision and survey design optimisation with non-normal variables: The case of anchovy (Engraulis encrasicolus) recruitment in Spanish Mediterranean waters

    NASA Astrophysics Data System (ADS)

    Tugores, M. Pilar; Iglesias, Magdalena; Oñate, Dolores; Miquel, Joan

    2016-02-01

    In the Mediterranean Sea, the European anchovy (Engraulis encrasicolus) plays a key role in ecological and economic terms. Ensuring stock sustainability requires the provision of crucial information, such as the species' spatial distribution or unbiased abundance and precision estimates, so that management strategies can be defined (e.g., fishing quotas, temporal closure areas, or marine protected areas (MPAs)). Furthermore, the estimation of the precision of global abundance at different sampling intensities can be used for survey design optimisation. Geostatistics provides a priori unbiased estimates of the spatial structure, global abundance, and precision for autocorrelated data. However, its application to non-Gaussian data introduces difficulties in the analysis, in conjunction with low robustness or unbiasedness. The present study applied intrinsic geostatistics in two dimensions in order to (i) analyse the spatial distribution of anchovy in Spanish Western Mediterranean waters during the species' recruitment season, (ii) produce distribution maps, (iii) estimate global abundance and its precision, (iv) analyse the effect of changing the sampling intensity on the precision of global abundance estimates and, (v) evaluate the effects of several methodological options on the robustness of all the analysed parameters. The results suggested that while the spatial structure was usually non-robust to the tested methodological options when working with the original dataset, it became more robust for the transformed datasets (especially for the log-backtransformed dataset). The global abundance was always highly robust, and the global precision was highly or moderately robust to most of the methodological options, except for data transformation.
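
    The first step in such a geostatistical analysis is the empirical semivariogram, gamma(h) = half the mean squared difference between values separated by lag h. A sketch on simulated, log-transformed skewed data; no spatial structure is built in here, so the variogram should come out roughly flat.

      import numpy as np

      def empirical_variogram(coords, values, bins):
          """Empirical semivariogram over distance bins."""
          coords, values = np.asarray(coords), np.asarray(values)
          i, j = np.triu_indices(len(values), k=1)
          d = np.linalg.norm(coords[i] - coords[j], axis=1)
          sq = 0.5 * (values[i] - values[j]) ** 2
          idx = np.digitize(d, bins)
          return np.array([sq[idx == b].mean() if np.any(idx == b) else np.nan
                           for b in range(1, len(bins))])

      rng = np.random.default_rng(3)
      pts = rng.uniform(0, 100, size=(200, 2))   # sample positions
      z = rng.lognormal(sigma=1.0, size=200)     # skewed, anchovy-like data
      print(empirical_variogram(pts, np.log(z), bins=np.arange(0, 60, 10)))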

  5. Regression discontinuity was a valid design for dichotomous outcomes in three randomized trials.

    PubMed

    van Leeuwen, Nikki; Lingsma, Hester F; Mooijaart, Simon P; Nieboer, Daan; Trompet, Stella; Steyerberg, Ewout W

    2018-06-01

    Regression discontinuity (RD) is a quasi-experimental design that may provide valid estimates of treatment effects in the case of continuous outcomes. We aimed to evaluate validity and precision of the RD design for dichotomous outcomes. We performed validation studies in three large randomized controlled trials (RCTs) (Corticosteroid Randomization After Significant Head injury [CRASH], the Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries [GUSTO], and the PROspective Study of Pravastatin in elderly individuals at risk of vascular disease [PROSPER]). To mimic the RD design, we selected patients above and below a cutoff (e.g., age 75 years) randomized to treatment and control, respectively. Adjusted logistic regression models using restricted cubic splines (RCS) and polynomials and local logistic regression models estimated the odds ratio (OR) for treatment, with 95% confidence intervals (CIs) to indicate precision. In CRASH, treatment increased mortality with OR 1.22 [95% CI 1.06-1.40] in the RCT. The RD estimates were 1.42 (0.94-2.16) and 1.13 (0.90-1.40) with RCS adjustment and local regression, respectively. In GUSTO, treatment reduced mortality (OR 0.83 [0.72-0.95]), with more extreme estimates in the RD analysis (OR 0.57 [0.35; 0.92] and 0.67 [0.51; 0.86]). In PROSPER, similar RCT and RD estimates were found, again with less precision in RD designs. We conclude that the RD design provides similar but substantially less precise treatment effect estimates compared with an RCT, with local regression being the preferred method of analysis. Copyright © 2018 Elsevier Inc. All rights reserved.
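
    A sketch of the mimicked-RD analysis with local logistic regression, using invented numbers (cutoff at age 75, true odds ratio 0.7) and statsmodels for the fit; the wider confidence interval relative to the full RCT is the paper's main point.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(7)

      n = 5000
      age = rng.uniform(60, 90, n)
      treat = (age >= 75).astype(float)        # treatment assigned above the cutoff
      true_or = 0.7
      logit = -1.0 + 0.03 * (age - 75) + np.log(true_or) * treat
      y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

      # local logistic regression: bandwidth around the cutoff,
      # linear adjustment for the running variable (age)
      bandwidth = 5.0
      m = np.abs(age - 75) <= bandwidth
      X = sm.add_constant(np.column_stack([treat[m], age[m] - 75]))
      fit = sm.Logit(y[m], X).fit(disp=0)
      print("estimated OR:", np.exp(fit.params[1]))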

  6. Fidelity of the ensemble code for visual motion in primate retina.

    PubMed

    Frechette, E S; Sher, A; Grivich, M I; Petrusca, D; Litke, A M; Chichilnisky, E J

    2005-07-01

    Sensory experience typically depends on the ensemble activity of hundreds or thousands of neurons, but little is known about how populations of neurons faithfully encode behaviorally important sensory information. We examined how precisely speed of movement is encoded in the population activity of magnocellular-projecting parasol retinal ganglion cells (RGCs) in macaque monkey retina. Multi-electrode recordings were used to measure the activity of approximately 100 parasol RGCs simultaneously in isolated retinas stimulated with moving bars. To examine how faithfully the retina signals motion, stimulus speed was estimated directly from recorded RGC responses using an optimized algorithm that resembles models of motion sensing in the brain. RGC population activity encoded speed with a precision of approximately 1%. The elementary motion signal was conveyed in approximately 10 ms, comparable to the interspike interval. Temporal structure in spike trains provided more precise speed estimates than time-varying firing rates. Correlated activity between RGCs had little effect on speed estimates. The spatial dispersion of RGC receptive fields along the axis of motion influenced speed estimates more strongly than along the orthogonal direction, as predicted by a simple model based on RGC response time variability and optimal pooling. ON and OFF cells encoded speed with similar and statistically independent variability. Simulation of downstream speed estimation using populations of speed-tuned units showed that peak (winner-take-all) readout provided more precise speed estimates than centroid (vector-average) readout. These findings reveal how faithfully the retinal population code conveys information about stimulus speed and the consequences for motion sensing in the brain.

  7. Using simulation to improve wildlife surveys: Wintering mallards in Mississippi, USA

    USGS Publications Warehouse

    Pearse, A.T.; Reinecke, K.J.; Dinsmore, S.J.; Kaminski, R.M.

    2009-01-01

    Wildlife conservation plans generally require reliable data about population abundance and density. Aerial surveys often can provide these data; however, associated costs necessitate designing and conducting surveys efficiently. We developed methods to simulate population distributions of mallards (Anas platyrhynchos) wintering in western Mississippi, USA, by combining bird observations from three previous strip-transect surveys and habitat data from three sets of satellite images representing conditions when surveys were conducted. For each simulated population distribution, we compared 12 primary survey designs and two secondary design options by using coefficients of variation (CV) of population indices as the primary criterion for assessing survey performance. In all, 3 of the 12 primary designs provided the best precision (CV ≤ 11.7%) and performed equally well (difference ≤ 0.6%). Features of the designs that provided the largest gains in precision were optimal allocation of sample effort among strata and configuring the study area into five rather than four strata, to more precisely estimate mallard indices in areas of consistently high density. Of the two secondary design options, we found including a second observer to double the size of strip transects increased precision or decreased costs, whereas ratio estimation using auxiliary habitat data from satellite images did not increase precision appreciably. We recommend future surveys of mallard populations in our study area use the strata we developed, optimally allocate samples among strata, employ PPS or EPS sampling, and include two observers when qualified staff are available. More generally, the methods we developed to simulate population distributions from prior survey data provide a cost-effective method to assess performance of alternative wildlife surveys critical to informing management decisions, and could be extended to account for effects of detectability on estimates of true abundance. © 2009 CSIRO.
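
    The "optimal allocation of sample effort among strata" mentioned above is typically Neyman allocation, in which effort is proportional to stratum size times stratum standard deviation; the stratum sizes and density SDs below are invented.

      import numpy as np

      def neyman_allocation(N_h, S_h, n_total):
          """Neyman allocation of n_total samples among strata."""
          w = np.asarray(N_h, dtype=float) * np.asarray(S_h, dtype=float)
          return np.round(n_total * w / w.sum()).astype(int)

      N_h = [120, 80, 60, 40, 20]          # candidate transects per stratum
      S_h = [5.0, 12.0, 30.0, 45.0, 80.0]  # mallard-density SD per stratum
      print(neyman_allocation(N_h, S_h, n_total=50))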

  8. Optimal estimation of entanglement in optical qubit systems

    NASA Astrophysics Data System (ADS)

    Brida, Giorgio; Degiovanni, Ivo P.; Florio, Angela; Genovese, Marco; Giorda, Paolo; Meda, Alice; Paris, Matteo G. A.; Shurupov, Alexander P.

    2011-05-01

    We address the experimental determination of entanglement for systems made of a pair of polarization qubits. We exploit quantum estimation theory to derive optimal estimators, which are then implemented to achieve ultimate bound to precision. In particular, we present a set of experiments aimed at measuring the amount of entanglement for states belonging to different families of pure and mixed two-qubit two-photon states. Our scheme is based on visibility measurements of quantum correlations and achieves the ultimate precision allowed by quantum mechanics in the limit of Poissonian distribution of coincidence counts. Although optimal estimation of entanglement does not require the full tomography of the states we have also performed state reconstruction using two different sets of tomographic projectors and explicitly shown that they provide a less precise determination of entanglement. The use of optimal estimators also allows us to compare and statistically assess the different noise models used to describe decoherence effects occurring in the generation of entanglement.

  9. SALT - a better way of estimating suspended sediment

    Treesearch

    R. B. Thomas

    1984-01-01

    Hardware and software supporting a sediment sampling procedure, Sampling At List Time (SALT), have been perfected. SALT provides estimates of sediment discharge having improved accuracy and estimable precision. Although the greatest benefit of SALT may accrue to those attempting to monitor "flashy" small streams, its superior statistical...

  10. Precision Orbit Derived Atmospheric Density: Development and Performance

    NASA Astrophysics Data System (ADS)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

    Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer-derived densities and against ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE-derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-00 model densities when comparing cross correlation and RMS with accelerometer-derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities, in particular catalog maintenance. Generally, density is the largest error source in satellite drag calculations, and current empirical density models such as Jacchia 71 and NRLMSISE-00 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models, and accelerometer-derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy, and accelerometer-derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available from Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE-derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer-derived density estimates. However, major variations in density are observed in the POE-derived densities. These POE-derived densities, in combination with other data sources, can be assimilated into physics-based general circulation models of the thermosphere and ionosphere, with the possibility of providing improved density forecasts for satellite drag analysis. POE-derived density estimates were initially developed using CHAMP and GRACE data so that comparisons could be made with accelerometer-derived density estimates. This paper presents the results of the most extensive calibration of POE-derived densities against accelerometer-derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer-derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE-derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expanding the POE-derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.

  11. Cluster mass estimators from CMB temperature and polarization lensing

    NASA Astrophysics Data System (ADS)

    Hu, Wayne; DeDeo, Simon; Vale, Chris

    2007-12-01

    Upcoming Sunyaev-Zel'dovich surveys are expected to return ~10^4 intermediate-mass clusters at high redshift. Their average masses must be known to the same accuracy as desired for the dark energy properties. Internal to the surveys, the cosmic microwave background (CMB) potentially provides a source for lensing mass measurements whose distance is precisely known and behind all clusters. We develop statistical mass estimators from six quadratic combinations of CMB temperature and polarization fields that can simultaneously recover large-scale structure and cluster mass profiles. The performance of these estimators on idealized Navarro-Frenk-White (NFW) clusters suggests that surveys with a ~1' beam and 10 μK-arcmin noise in uncontaminated temperature maps can make a ~10σ detection, or equivalently a ~10% mass measurement for each set of 10^3 clusters. With internal or external acoustic-scale E-polarization measurements, the ET cross-correlation estimator can provide a stringent test for contaminants on a first detection at ~1/3 the significance. For surveys that reach below 3 μK-arcmin, the EB cross-correlation estimator should provide the most precise measurements and potentially the strongest control over contaminants.

  12. Control-enhanced multiparameter quantum estimation

    NASA Astrophysics Data System (ADS)

    Liu, Jing; Yuan, Haidong

    2017-10-01

    Most studies in multiparameter estimation assume the dynamics is fixed and focus on identifying the optimal probe state and the optimal measurements. In practice, however, controls are usually available to alter the dynamics, which provides another degree of freedom. In this paper we employ optimal control methods, particularly gradient ascent pulse engineering (GRAPE), to design controls that improve the precision limit in multiparameter estimation. We show that the controlled schemes not only provide a higher precision limit but are also more stable against inaccuracy in the time point at which the measurements are performed. This time stability benefits practical metrology, where it is hard to perform the measurement at a precisely specified time because of the response time of the measurement apparatus.

  13. Optimal feedback scheme and universal time scaling for Hamiltonian parameter estimation.

    PubMed

    Yuan, Haidong; Fung, Chi-Hang Fred

    2015-09-11

    Time is a valuable resource and it is expected that a longer time period should lead to better precision in Hamiltonian parameter estimation. However, recent studies in quantum metrology have shown that in certain cases more time may even lead to worse estimations, which puts this intuition into question. In this Letter we show that by including feedback controls this intuition can be restored. By deriving asymptotically optimal feedback controls we quantify the maximal improvement feedback controls can provide in Hamiltonian parameter estimation and show a universal time scaling for the precision limit under the optimal feedback scheme. Our study reveals an intriguing connection between noncommutativity in the dynamics and the gain of feedback controls in Hamiltonian parameter estimation.

  14. PRECISION OF ATMOSPHERIC DRY DEPOSITION DATA FROM THE CLEAN AIR STATUS AND TRENDS NETWORK (CASTNET)

    EPA Science Inventory

    A collocated, dry deposition sampling program was begun in January 1987 by the US Environmental Protection Agency to provide ongoing estimates of the overall precision of dry deposition and supporting data entering the Clean Air Status and Trends Network (CASTNet) archives Duplic...

  15. Estimation of the interior parameters from Mars nutations and from Doppler measurements

    NASA Astrophysics Data System (ADS)

    Yseboodt, M.; Rivoldini, A.; Le Maistre, S.; Dehant, V. M. A.

    2017-12-01

    The presence of a liquid core inside Mars changes the nutations: the nutation amplitudes can be resonantly amplified because of a free mode, called the free core nutation (FCN). We quantify how the internal structure, in particular the size of the core, affects the nutation amplifications and the Doppler observable between a Martian lander and the Earth. Present-day core size estimates suggest that the effect is largest for the prograde semi-annual and retrograde ter-annual nutations. We solve the inverse problem assuming a given precision on the nutation amplifications provided by an extensive set of geodesy measurements, and we estimate the resulting precision on the core properties. Such measurements will be available in the near future thanks to the geodesy experiments RISE (InSight mission) and LaRa (ExoMars mission). We find that the precision on the core properties depends strongly on the proximity of the FCN period to the ter-annual forcing (-229 days) and on the assumed a priori precision on the nutations.

  16. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  17. Parameter estimation using weighted total least squares in the two-compartment exchange model.

    PubMed

    Garpebring, Anders; Löfstedt, Tommy

    2018-01-01

    The linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias through correlated noise in the system matrix of the model. The purpose of this work is to present a new estimator for the linearized two-compartment exchange model that takes this noise into account. To account for the noise in the system matrix, we developed an estimator based on the weighted total least squares (WTLS) method. Using simulations, the proposed WTLS estimator was compared, in terms of accuracy and precision, to an LLS estimator and a nonlinear least squares (NLLS) estimator. The WTLS method improved the accuracy compared to the LLS method to levels comparable to the NLLS method. This improvement was at the expense of increased computational time; however, the WTLS was still faster than the NLLS method. At high signal-to-noise ratio all methods provided similar precisions while inconclusive results were observed at low signal-to-noise ratio. The proposed method provides improvements in accuracy compared to the LLS method, however, at an increased computational cost. Magn Reson Med 79:561-567, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
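
    For orientation, ordinary (unweighted) total least squares can be written in a few lines via the SVD of the augmented matrix [A | y]; the WTLS estimator studied above additionally weights by the noise covariance, which this sketch does not attempt.

      import numpy as np

      def tls(A, y):
          """Total least squares: right singular vector of [A | y] with the
          smallest singular value, scaled so its last entry is -1."""
          n = A.shape[1]
          _, _, Vt = np.linalg.svd(np.column_stack([A, y]))
          v = Vt[-1]
          return -v[:n] / v[n]

      rng = np.random.default_rng(4)
      x_true = np.array([1.5, -0.7])
      A = rng.normal(size=(100, 2))
      A_noisy = A + rng.normal(scale=0.05, size=A.shape)  # noisy system matrix
      y_noisy = A @ x_true + rng.normal(scale=0.05, size=100)
      print("LLS:", np.linalg.lstsq(A_noisy, y_noisy, rcond=None)[0])
      print("TLS:", tls(A_noisy, y_noisy))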

  18. Estimation of the uncertainty of analyte concentration from the measurement uncertainty.

    PubMed

    Brown, Simon; Cooke, Delwyn G; Blackwell, Leonard F

    2015-09-01

    Ligand-binding assays, such as immunoassays, are usually analysed using standard curves based on the four-parameter and five-parameter logistic models. An estimate of the uncertainty of an analyte concentration obtained from such curves is needed for confidence intervals or precision profiles. Using a numerical simulation approach, it is shown that the uncertainty of the analyte concentration estimate becomes significant at the extremes of the concentration range and that this is affected significantly by the steepness of the standard curve. We also provide expressions for the coefficient of variation of the analyte concentration estimate from which confidence intervals and the precision profile can be obtained. Using three examples, we show that the expressions perform well.
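
    A sketch of the four-parameter logistic curve, its inverse, and a delta-method propagation of response uncertainty into a concentration CV; the curve parameters and response SD are invented. The CV grows where the curve flattens, matching the behaviour described above.

      import numpy as np

      def fpl(x, a, b, c, d):
          """Four-parameter logistic standard curve."""
          return d + (a - d) / (1.0 + (x / c) ** b)

      def fpl_inverse(y, a, b, c, d):
          """Analyte concentration from a measured response."""
          return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

      def concentration_cv(y, sd_y, params, eps=1e-6):
          """Delta method: CV of x from the local slope of the inverse curve."""
          x = fpl_inverse(y, *params)
          dxdy = (fpl_inverse(y + eps, *params)
                  - fpl_inverse(y - eps, *params)) / (2 * eps)
          return 100.0 * abs(dxdy) * sd_y / x

      params = (0.1, 1.2, 50.0, 2.0)   # a, b, c, d (assumed)
      for y in (0.3, 1.0, 1.8):
          print(y, f"CV = {concentration_cv(y, 0.02, params):.1f}%")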

  19. Radio Science from an Optical Communications Signal

    NASA Technical Reports Server (NTRS)

    Moision, Bruce; Asmar, Sami; Oudrhiri, Kamal

    2013-01-01

    NASA is currently developing the capability to deploy deep-space optical communications links. This creates the opportunity to utilize the optical link to obtain range, Doppler, and signal-intensity estimates. These may, in turn, be used to complement or extend the capabilities of current radio science. In this paper we illustrate the achievable precision in estimating the range, Doppler, and received signal intensity of a non-coherent optical link (the current state of the art for a deep-space link). We provide a joint estimation algorithm with performance close to the bound. We draw comparisons to estimates based on a coherent radio-frequency signal, illustrating that large gains in either precision or observation time are possible with an optical link.

  20. Accuracy and precision of stream reach water surface slopes estimated in the field and from maps

    USGS Publications Warehouse

    Isaak, D.J.; Hubert, W.A.; Krueger, K.L.

    1999-01-01

    The accuracy and precision of five tools used to measure stream water surface slope (WSS) were evaluated. Water surface slopes estimated in the field with a clinometer or from topographic maps used in conjunction with a map wheel or geographic information system (GIS) were significantly higher than WSS estimated in the field with a surveying level (biases of 34, 41, and 53%, respectively). Accuracy of WSS estimates obtained with an Abney level did not differ from surveying level estimates, but conclusions regarding the accuracy of Abney levels and clinometers were weakened by intratool variability. The surveying level estimated WSS most precisely (coefficient of variation [CV] = 0.26%), followed by the GIS (CV = 1.87%), map wheel (CV = 6.18%), Abney level (CV = 13.68%), and clinometer (CV = 21.57%). Estimates of WSS measured in the field with an Abney level and estimated for the same reaches with a GIS used in conjunction with 1:24,000-scale topographic maps were significantly correlated (r = 0.86), but there was a tendency for the GIS to overestimate WSS. Detailed accounts of the methods used to measure WSS and recommendations regarding the measurement of WSS are provided.

  1. Monitoring gray wolf populations using multiple survey methods

    USGS Publications Warehouse

    Ausband, David E.; Rich, Lindsey N.; Glenn, Elizabeth M.; Mitchell, Michael S.; Zager, Pete; Miller, David A.W.; Waits, Lisette P.; Ackerman, Bruce B.; Mack, Curt M.

    2013-01-01

    The behavioral patterns and large territories of large carnivores make them challenging to monitor. Occupancy modeling provides a framework for monitoring population dynamics and distribution of territorial carnivores. We combined data from hunter surveys, howling and sign surveys conducted at predicted wolf rendezvous sites, and locations of radiocollared wolves to model occupancy and estimate the number of gray wolf (Canis lupus) packs and individuals in Idaho during 2009 and 2010. We explicitly accounted for potential misidentification of occupied cells (i.e., false positives) using an extension of the multi-state occupancy framework. We found agreement between model predictions and distribution and estimates of number of wolf packs and individual wolves reported by Idaho Department of Fish and Game and Nez Perce Tribe from intensive radiotelemetry-based monitoring. Estimates of individual wolves from occupancy models that excluded data from radiocollared wolves were within an average of 12.0% (SD = 6.0) of existing statewide minimum counts. Models using only hunter survey data generally estimated the lowest abundance, whereas models using all data generally provided the highest estimates of abundance, although only marginally higher. Precision across approaches ranged from 14% to 28% of mean estimates and models that used all data streams generally provided the most precise estimates. We demonstrated that an occupancy model based on different survey methods can yield estimates of the number and distribution of wolf packs and individual wolf abundance with reasonable measures of precision. Assumptions of the approach including that average territory size is known, average pack size is known, and territories do not overlap, must be evaluated periodically using independent field data to ensure occupancy estimates remain reliable. Use of multiple survey methods helps to ensure that occupancy estimates are robust to weaknesses or changes in any 1 survey method. Occupancy modeling may be useful for standardizing estimates across large landscapes, even if survey methods differ across regions, allowing for inferences about broad-scale population dynamics of wolves.
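
    For intuition, the basic single-season occupancy likelihood (without the false-positive, multi-state extension used in the paper) can be fitted by maximum likelihood in a few lines; the occupancy and detection probabilities are invented.

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_lik(params, det_hist):
          """psi = occupancy probability, p = per-survey detection probability."""
          psi, p = 1 / (1 + np.exp(-params))        # logit -> probability
          k = det_hist.shape[1]
          d = det_hist.sum(axis=1)
          lik = np.where(d > 0,
                         psi * p**d * (1 - p)**(k - d),  # detected at least once
                         psi * (1 - p)**k + (1 - psi))   # never detected
          return -np.log(lik).sum()

      rng = np.random.default_rng(6)
      z = rng.random(200) < 0.6                     # true occupancy of 200 cells
      hist = (rng.random((200, 4)) < 0.3 * z[:, None]).astype(int)  # 4 surveys
      res = minimize(neg_log_lik, x0=[0.0, 0.0], args=(hist,))
      print(1 / (1 + np.exp(-res.x)))               # psi_hat, p_hat ~ [0.6, 0.3]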

  2. Estimation of chaotic coupled map lattices using symbolic vector dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya

    2010-01-01

    In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original method based on symbolic vector dynamics was proposed for initial condition estimation in an additive white Gaussian noise environment. The precision of this estimation method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors with the backward vector and the values estimated by using different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. We therefore provide novel analytical techniques for understanding turbulence in coupled map lattices.

  3. Informed spectral analysis: audio signal parameter estimation using side information

    NASA Astrophysics Data System (ADS)

    Fourer, Dominique; Marchand, Sylvain

    2013-12-01

    Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach, which consists of directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by estimation from the signal, may require a larger bitrate, and entails a loss of compatibility with existing file formats. The purpose of this article is to propose a compromise approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and is used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling, a well-known model with practical applications and for which theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.

  4. Population Estimates for Chum Salmon Spawning in the Mainstem Columbia River, 2002 Technical Report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rawding, Dan; Hillson, Todd D.

    2003-11-15

    Accurate and precise population estimates of chum salmon (Oncorhynchus keta) spawning in the mainstem Columbia River are needed to provide a basis for informed water allocation decisions, to determine the status of chum salmon listed under the Endangered Species Act, and to evaluate the contribution of the Duncan Creek re-introduction program to mainstem spawners. Currently, mark-recapture experiments using the Jolly-Seber model provide the only framework for this type of estimation. In 2002, a study was initiated to estimate mainstem Columbia River chum salmon populations using seining data collected while capturing broodstock as part of the Duncan Creek re-introduction. The five assumptions of the Jolly-Seber model were examined using hypothesis testing within a statistical framework, including goodness of fit tests and secondary experiments. We used POPAN 6, an integrated computer system for the analysis of capture-recapture data, to obtain maximum likelihood estimates of standard model parameters, derived estimates, and their precision. A more parsimonious final model was selected using Akaike Information Criteria. Final chum salmon escapement estimates and (standard error) from seining data for the Ives Island, Multnomah, and I-205 sites are 3,179 (150), 1,269 (216), and 3,468 (180), respectively. The Ives Island estimate is likely lower than the total escapement because only the largest two of four spawning sites were sampled. The accuracy and precision of these estimates would improve if seining were conducted twice per week instead of weekly, and by incorporating carcass recoveries into the analysis. Population estimates derived from seining mark-recapture data were compared to those obtained using the current mainstem Columbia River salmon escapement methodologies. The Jolly-Seber population estimate from carcass tagging in the Ives Island area was 4,232 adults with a standard error of 79. This population estimate appears reasonable and precise, but batch marks and the lack of secondary studies made it difficult to test the Jolly-Seber assumptions necessary for unbiased estimates. We recommend that individual tags be applied to carcasses to provide a statistical basis for goodness of fit tests and ultimately model selection. Secondary or double marks should be applied to assess tag loss, and male and female chum salmon carcasses should be enumerated separately. Carcass tagging population estimates at the two other sites were biased low due to limited sampling. The Area-Under-the-Curve escapement estimates at all three sites were 36% to 76% of Jolly-Seber estimates. Area-Under-the-Curve estimates are likely biased low because previous assumptions that observer efficiency is 100% and residence time is 10 days proved incorrect. If managers continue to rely on Area-Under-the-Curve to estimate mainstem Columbia River spawners, a methodology is provided to develop annual estimates of observer efficiency and residence time, and to incorporate uncertainty into the Area-Under-the-Curve escapement estimate.
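
    The full POPAN parameterization of the Jolly-Seber model is beyond a short sketch, but the mark-recapture logic underlying it can be illustrated with the simpler two-sample Lincoln-Petersen estimator in Chapman's bias-corrected form. This is a minimal illustration under hypothetical counts, not the model fitted in the study:

    ```python
    # Chapman's bias-corrected Lincoln-Petersen estimator: a two-sample
    # simplification of the mark-recapture logic behind Jolly-Seber models.
    # All counts below are hypothetical.

    def chapman_estimate(n1, n2, m2):
        """n1: fish marked in sample 1; n2: fish caught in sample 2;
        m2: marked fish recaptured in sample 2."""
        n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
        # Large-sample variance (Seber 1982) and standard error.
        var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)
               / ((m2 + 1) ** 2 * (m2 + 2)))
        return n_hat, var ** 0.5

    n_hat, se = chapman_estimate(n1=450, n2=500, m2=60)
    print(f"abundance ~ {n_hat:.0f} (SE {se:.0f})")
    ```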

  5. Localised estimates and spatial mapping of poverty incidence in the state of Bihar in India—An application of small area estimation techniques

    PubMed Central

    Aditya, Kaustav; Sud, U. C.

    2018-01-01

    Poverty affects many people, but the ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage, small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic levels, such as the national and state level. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances, estimates are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of the small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at the district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011–12 of NSSO and the Population Census 2011. The results show that the district-level estimates generated by the SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable. PMID:29879202

  6. Localised estimates and spatial mapping of poverty incidence in the state of Bihar in India-An application of small area estimation techniques.

    PubMed

    Chandra, Hukum; Aditya, Kaustav; Sud, U C

    2018-01-01

    Poverty affects many people, but the ramifications and impacts affect all aspects of society. Information about the incidence of poverty is therefore an important parameter of the population for policy analysis and decision making. In order to provide specific, targeted solutions when addressing poverty disadvantage, small area statistics are needed. Surveys are typically designed and planned to produce reliable estimates of population characteristics of interest mainly at higher geographic levels, such as the national and state level. Sample sizes are usually not large enough to provide reliable estimates for disaggregated analysis. In many instances, estimates are required for areas of the population for which the survey providing the data was unplanned. Then, for areas with small sample sizes, direct survey estimation of population characteristics based only on the data available from the particular area tends to be unreliable. This paper describes an application of the small area estimation (SAE) approach to improve the precision of estimates of poverty incidence at the district level in the State of Bihar in India by linking data from the Household Consumer Expenditure Survey 2011-12 of NSSO and the Population Census 2011. The results show that the district-level estimates generated by the SAE method are more precise and representative. In contrast, the direct survey estimates based on survey data alone are less stable.
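
    The area-level shrinkage idea underlying most SAE applications of this kind can be sketched compactly with a Fay-Herriot-style model: each district's model estimate is a precision-weighted blend of its direct survey estimate and a regression prediction from census covariates. The sketch below uses synthetic direct estimates, sampling variances, and a single covariate, and a simple method-of-moments estimate of the model variance in place of the ML/REML fits used in practice:

    ```python
    import numpy as np

    # Synthetic district-level inputs: direct survey estimates y_i with known
    # sampling variances psi_i, and one census covariate x_i per district.
    rng = np.random.default_rng(1)
    m = 12
    x = rng.uniform(0.2, 0.8, m)                  # census covariate
    psi = rng.uniform(0.002, 0.01, m)             # sampling variances
    y = 0.1 + 0.5 * x + rng.normal(0, 0.05, m) + rng.normal(0, np.sqrt(psi))

    X = np.column_stack([np.ones(m), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # regression coefficients
    resid = y - X @ beta
    # Crude method-of-moments estimate of the model variance sigma_v^2.
    sigma_v2 = max(0.0, (resid @ resid - psi.sum()) / (m - X.shape[1]))

    gamma = sigma_v2 / (sigma_v2 + psi)           # shrinkage weights
    eblup = gamma * y + (1 - gamma) * (X @ beta)  # Fay-Herriot-style EBLUP
    for i in range(3):
        print(f"district {i}: direct={y[i]:.3f}  model={eblup[i]:.3f}  "
              f"gamma={gamma[i]:.2f}")
    ```

    Districts with large sampling variance (small gamma) are pulled toward the regression prediction, which is why the model-based estimates are more stable than the direct ones.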

  7. Reference interval estimation: Methodological comparison using extensive simulations and empirical data.

    PubMed

    Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S

    2017-12-01

    Our objective was to statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e., parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed the best for most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision.
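
    The parametric/non-parametric contrast is easy to make concrete. Below is a minimal sketch with simulated skewed data and the usual central 95% interval; a real validation protocol adds outlier handling, partitioning, and the robust method:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    values = rng.lognormal(mean=1.5, sigma=0.3, size=240)  # skewed analyte data

    # Parametric: mean +/- 1.96 SD, appropriate only if the data are ~Gaussian.
    m, s = values.mean(), values.std(ddof=1)
    lo_p, hi_p = m - 1.96 * s, m + 1.96 * s

    # Parametric after a log transform, back-transformed to the original scale.
    lm, ls = np.log(values).mean(), np.log(values).std(ddof=1)
    lo_t, hi_t = np.exp(lm - 1.96 * ls), np.exp(lm + 1.96 * ls)

    # Non-parametric: empirical 2.5th and 97.5th percentiles.
    lo_n, hi_n = np.percentile(values, [2.5, 97.5])

    print(f"parametric (raw scale): {lo_p:.2f} to {hi_p:.2f}")
    print(f"parametric (log scale): {lo_t:.2f} to {hi_t:.2f}")
    print(f"non-parametric:         {lo_n:.2f} to {hi_n:.2f}")
    ```

    On skewed data the raw-scale parametric limits are visibly off, while the transformed-parametric and non-parametric limits agree, which mirrors the paper's recommendation to transform toward Gaussian where possible.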

  8. Comparison study on disturbance estimation techniques in precise slow motion control

    NASA Astrophysics Data System (ADS)

    Fan, S.; Nagamune, R.; Altintas, Y.; Fan, D.; Zhang, Z.

    2010-08-01

    Precise low-speed motion control is important for the industrial applications of both micro-milling machine tool feed drives and electro-optical tracking servo systems. It calls for precise measurement of position and instantaneous velocity, and for estimation of disturbances, which include direct-drive motor force ripple, guideway friction, and cutting forces. This paper presents a comparison study of the dynamic response and noise rejection performance of three existing disturbance estimation techniques: time-delayed estimators, state-augmented Kalman filters, and conventional disturbance observers. The design essentials of these three disturbance estimators are introduced. For the design of time-delayed estimators, it is proposed to substitute a Kalman filter for the Luenberger state observer to improve noise suppression performance. The results show that the noise rejection performance of the state-augmented Kalman filters and the time-delayed estimators is much better than that of the conventional disturbance observers. These two estimators provide not only an estimate of the disturbance but also low-noise estimates of position and instantaneous velocity. The bandwidth of the state-augmented Kalman filters is wider than that of the time-delayed estimators. In addition, the state-augmented Kalman filters give unbiased estimates of slowly varying disturbances and of the instantaneous velocity, while the time-delayed estimators do not. Simulation and experimental results obtained on the X axis of a 2.5-axis prototype micro-milling machine are provided.
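
    As a much-simplified sketch of the state-augmented idea, the filter below appends a slowly varying disturbance force to a position-velocity state, so the Kalman gain estimates it alongside the motion states. The plant model, noise levels, and sampling time are invented for illustration and are not the paper's experimental setup:

    ```python
    import numpy as np

    dt, mass = 0.001, 1.0              # hypothetical sampling time and mass
    # State: [position, velocity, disturbance force]; the disturbance is
    # modeled as a random walk so the filter can track slow variation.
    F = np.array([[1.0, dt, 0.0],
                  [0.0, 1.0, -dt / mass],   # disturbance opposes the drive
                  [0.0, 0.0, 1.0]])
    B = np.array([[0.0], [dt / mass], [0.0]])
    H = np.array([[1.0, 0.0, 0.0]])         # only position is measured
    Q = np.diag([0.0, 1e-8, 1e-6])          # process noise (disturbance walks)
    R = np.array([[1e-8]])                  # position noise variance (1e-4 SD)

    x = np.zeros((3, 1))
    P = np.diag([1e-6, 1e-6, 1.0])          # disturbance initially uncertain
    rng = np.random.default_rng(0)
    true_dist = 0.5                         # constant disturbance to recover

    pos = vel = 0.0
    for k in range(2000):
        u = 1.0                             # constant drive force
        # Simulate the true plant.
        pos += vel * dt
        vel += (u - true_dist) / mass * dt
        z = pos + rng.normal(0, 1e-4)
        # Kalman predict/update with the augmented state.
        x = F @ x + B * u
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)
        P = (np.eye(3) - K @ H) @ P

    print(f"estimated disturbance ~ {x[2, 0]:.3f} (true {true_dist})")
    ```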

  9. Estimating ages of Utah chubs by use of pectoral fin rays, otoliths, and scales

    USGS Publications Warehouse

    Griffin, Kayla M; Beard, Zachary S.; Flinders, John M.; Quist, Michael C.

    2017-01-01

    Utah chub Gila atraria is native to the Upper Snake River system in Wyoming and Idaho and to the Lake Bonneville Basin in Utah and southeastern Idaho. However, the Utah chub has been introduced into many other waterbodies in the western United States, where it competes with ecologically and economically important species. The objectives of this study were to evaluate between-reader precision and reader confidence in age estimates obtained from pectoral fin rays, lapilli (otoliths), asterisci (otoliths), and scales for Utah chubs collected from Henrys Lake, Idaho. Lapilli have been previously shown to provide accurate age estimates for Utah chubs; therefore, we sought to compare age estimates from fin rays, asterisci, and scales to those from lapilli. The between-reader coefficient of variation (CV) in age estimates was lowest and the percent of exact reader agreement (PA-0) was highest for pectoral fin rays (CV = 4.7, PA-0 = 74%), followed by scales (CV = 10.3, PA-0 = 52.3%), lapilli (CV = 11.6, PA-0 = 48.2%), and asterisci (CV = 13.0, PA-0 = 41.7%). Consensus age estimates from pectoral fin rays showed high concordance with consensus age estimates from lapilli. Our results indicate that pectoral fin rays provide the most precise age estimates for Utah chub. Pectoral fin rays are easily collected and processed and also provide age estimates without requiring fish sacrifice.
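
    Both precision metrics used in studies like this one are simple to compute. A sketch with invented paired age reads for eight fish:

    ```python
    import numpy as np

    # Hypothetical ages assigned to the same 8 fish by two readers.
    reader1 = np.array([3, 5, 4, 7, 2, 6, 5, 4])
    reader2 = np.array([3, 5, 5, 7, 2, 5, 5, 4])
    ages = np.vstack([reader1, reader2])

    # Between-reader CV: SD across readers / mean, per fish, then averaged (%).
    cv_per_fish = ages.std(axis=0, ddof=1) / ages.mean(axis=0) * 100
    print(f"mean between-reader CV = {cv_per_fish.mean():.1f}%")

    # PA-0: percent of fish on which the readers agree exactly.
    pa0 = (reader1 == reader2).mean() * 100
    print(f"PA-0 = {pa0:.1f}%")
    ```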

  10. Statistical inference for the within-device precision of quantitative measurements in assay validation.

    PubMed

    Liu, Jen-Pei; Lu, Li-Tien; Liao, C T

    2009-09-01

    Intermediate precision is one of the most important characteristics for the evaluation of precision in assay validation. The current methods for evaluation of within-device precision recommended by the Clinical and Laboratory Standards Institute (CLSI) guideline EP5-A2 are based on the point estimator. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite's approximation and the modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. The data from the approved CLSI guideline EP5-A2 illustrate the applications of the confidence interval approach and the comparison of results between the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed method of GPQ-based confidence intervals is also extended to consider the between-laboratories variation for precision assessment.

  11. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    PubMed

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package, fastclime, for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1-Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as a stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.

  12. Plot intensity and cycle-length effects on growth and removals estimates from forest inventories

    Treesearch

    Paul C. Van Deusen; Francis A. Roesch

    2015-01-01

    Continuous forest inventory planners can allocate the budget to more plots per acre or a shorter remeasurement cycle. A higher plot intensity benefits small area estimation and allows for more precision in current status estimates. Shorter cycles may provide better estimates of growth, removals and mortality. On a fixed budget, the planner can't have both greater...

  13. On improving the speed and reliability of T2-Relaxation-Under-Spin-Tagging (TRUST) MRI

    PubMed Central

    Xu, Feng; Uh, Jinsoo; Liu, Peiying; Lu, Hanzhang

    2011-01-01

    A T2-Relaxation-Under-Spin-Tagging (TRUST) technique was recently developed to estimate cerebral blood oxygenation, providing potential for non-invasive assessment of the brain's oxygen consumption. A limitation of the current sequence is the need for a long TR, as a shorter TR causes an overestimation in blood R2. The present study proposes a post-saturation TRUST sequence that places a non-selective 90° pulse after the signal acquisition to reset magnetization in the whole brain. This scheme was found to eliminate the estimation bias at a slight cost of precision. To improve the precision, the TE of the sequence was optimized, and it was found that a modest TE shortening of 3.4 ms can reduce the estimation error by 49%. We recommend the use of the post-saturation TRUST sequence with a TR of 3000 ms and a TE of 3.6 ms, which allows the determination of global venous oxygenation with a scan duration of 1 minute 12 seconds and an estimation precision of ±1% (in units of oxygen saturation percentage). PMID:22127845

  14. Precise Ionosphere Monitoring via a DSFH Satellite TT&C Link

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Li, Guangxia; Li, Zhiqiang; Yue, Chao

    2014-11-01

    A phase-coherent and frequency-hopped PN ranging system was developed, originally for the purpose of anti-jamming TT&C (tracking, telemetry and telecommand) of military satellites of China, including the Beidou-2 navigation satellites. The key innovation in the synchronization of this system is the unambiguous phase recovery of the direct sequence and frequency hopping (DSFH) spread spectrum signal and the correction of the frequency-dependent phase rotation caused by the ionosphere. With synchronization achieved, a TEC monitoring algorithm based on the maximum likelihood (ML) principle is proposed and its measuring precision is analyzed through ground simulation; onboard confirmation tests will be performed when transionosphere DSFH links are established in 2014. The measuring precision of TEC exceeds that obtained from GPS receiver data because the measurement is derived from unambiguous carrier phase estimates, not pseudorange estimates. The observation results from TT&C stations can provide real-time regional ionosphere TEC estimation.

  15. On the precision of automated activation time estimation

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.

    1988-01-01

    We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included maximum dV/dt and a matched filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed for an additional significant improvement. With few restrictions, combining these two techniques may allow for the use of digitization rates below the Nyquist rate without significant loss of precision.
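
    The two pieces combined in this study, matched-filter detection of a fiducial point and sin(x)/x refinement below the sampling interval, can be sketched together. The waveform, noise, and sampling rate below are synthetic stand-ins, not the study's electrograms:

    ```python
    import numpy as np
    from scipy.signal import resample

    fs = 500.0                               # Hz, deliberately modest rate
    t = np.arange(0, 1, 1 / fs)
    true_t0 = 0.4031                         # s, off the sampling grid
    template = lambda tt: np.exp(-0.5 * (tt / 0.004) ** 2) * np.sin(2 * np.pi * 60 * tt)

    rng = np.random.default_rng(3)
    signal = template(t - true_t0) + rng.normal(0, 0.05, t.size)
    kernel = template(np.arange(-10, 11) / fs)   # odd-length, centered template

    # Matched filter: cross-correlate and take the lag of the maximum.
    corr = np.correlate(signal, kernel, mode="same")
    coarse = np.argmax(corr) / fs

    # sin(x)/x interpolation: Fourier-resample the correlation 16x, re-find peak.
    up = 16
    corr_up = resample(corr, corr.size * up)
    fine = np.argmax(corr_up) / (fs * up)

    print(f"coarse {coarse*1e3:.1f} ms, refined {fine*1e3:.2f} ms, "
          f"true {true_t0*1e3:.2f} ms")
    ```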

  16. Walleye age estimation using otoliths and dorsal spines: Preparation techniques and sampling guidelines based on sex and total length

    USGS Publications Warehouse

    Dembkowski, Daniel J.; Isermann, Daniel A.; Koenigs, Ryan P.

    2017-01-01

    We used dorsal spines and otoliths from 735 Walleye Sander vitreus collected from 35 Wisconsin water bodies to evaluate whether 1) otolith and dorsal spine cross sections provided age estimates similar to simpler methods of preparation (e.g., whole otoliths and dorsal spines, cracked otoliths); and 2) between-reader precision and differences between spine and otolith ages varied in relation to total length (TL), sex, and growth rate. Ages estimated from structures prepared using simpler techniques were generally similar to ages estimated using thin sections of dorsal spines and otoliths, suggesting that, in some instances, much of the additional processing time and specialized equipment associated with thin sectioning could be avoided. Overall, between-reader precision was higher for sectioned otoliths (mean coefficient of variation [CV] = 3.28%; standard error [SE] = 0.33%) than for sectioned dorsal spines (mean CV = 9.20%; SE = 0.56%). When using sectioned otoliths for age assignment, between-reader precision did not vary between sexes or growth categories (i.e., fast, moderate, slow), but between-reader precision was higher for females than males when using sectioned dorsal spines. Dorsal spines were generally effective at replicating otolith ages for male Walleye <450 mm TL and female Walleye <600 mm TL, and can therefore be used to estimate ages for fish below these sex-specific lengths. If sex is unknown, we suggest dorsal spines be used to estimate ages for Walleye <450 mm TL and otoliths for fish >450 mm TL. Our results provide useful guidance on structure and preparation technique selection for Walleye age estimation, thereby allowing biologists to develop sampling guidelines that could be implemented using information that is always (TL) or often (sex) available at the time of fish collection.

  17. More noise does not mean more precision: A review of Aldenberg and Rorije (2013).

    PubMed

    Fox, David R

    2015-09-01

    This paper provides a critical review of recently published work that suggests that the precision of hazardous concentration estimates from Species Sensitivity Distributions (SSDs) is improved when the uncertainty in the input data is taken into account. Our review confirms that this counter-intuitive result is indeed incorrect.

  18. Direct single-cell biomass estimates for marine bacteria via Archimedes' principle

    PubMed Central

    Cermak, Nathan; Becker, Jamie W; Knudsen, Scott M; Chisholm, Sallie W; Manalis, Scott R; Polz, Martin F

    2017-01-01

    Microbes are an essential component of marine food webs and biogeochemical cycles, and therefore precise estimates of their biomass are of significant value. Here, we measured single-cell biomass distributions of isolates from several numerically abundant marine bacterial groups, including Pelagibacter (SAR11), Prochlorococcus and Vibrio using a microfluidic mass sensor known as a suspended microchannel resonator (SMR). We show that the SMR can provide biomass (dry mass) measurements for cells spanning more than two orders of magnitude and that these estimates are consistent with other independent measures. We find that Pelagibacterales strain HTCC1062 has a median biomass of 11.9±0.7 fg per cell, which is five- to twelve-fold smaller than the median Prochlorococcus cell's biomass (depending upon strain) and nearly 100-fold lower than that of rapidly growing V. splendidus strain 13B01. Knowing the biomass contributions from various taxonomic groups will provide more precise estimates of total marine biomass, aiding models of nutrient flux in the ocean. PMID:27922599

  19. Direct single-cell biomass estimates for marine bacteria via Archimedes' principle.

    PubMed

    Cermak, Nathan; Becker, Jamie W; Knudsen, Scott M; Chisholm, Sallie W; Manalis, Scott R; Polz, Martin F

    2017-03-01

    Microbes are an essential component of marine food webs and biogeochemical cycles, and therefore precise estimates of their biomass are of significant value. Here, we measured single-cell biomass distributions of isolates from several numerically abundant marine bacterial groups, including Pelagibacter (SAR11), Prochlorococcus and Vibrio using a microfluidic mass sensor known as a suspended microchannel resonator (SMR). We show that the SMR can provide biomass (dry mass) measurements for cells spanning more than two orders of magnitude and that these estimates are consistent with other independent measures. We find that Pelagibacterales strain HTCC1062 has a median biomass of 11.9±0.7 fg per cell, which is five- to twelve-fold smaller than the median Prochlorococcus cell's biomass (depending upon strain) and nearly 100-fold lower than that of rapidly growing V. splendidus strain 13B01. Knowing the biomass contributions from various taxonomic groups will provide more precise estimates of total marine biomass, aiding models of nutrient flux in the ocean.

  20. Estimating migratory game-bird productivity by integrating age ratio and banding data

    USGS Publications Warehouse

    Zimmerman, G.S.; Link, W.A.; Conroy, M.J.; Sauer, J.R.; Richkus, K.D.; Boomer, G. Scott

    2010-01-01

    Implications: Several national and international management strategies for migratory game birds in North America rely on measures of productivity from harvest survey parts collections, without a justification of the estimator or providing estimates of precision. We derive an estimator of productivity with realistic measures of uncertainty that can be directly incorporated into management plans or ecological studies across large spatial scales.

  1. Integrating chronological uncertainties for annually laminated lake sediments using layer counting, independent chronologies and Bayesian age modelling (Lake Ohau, South Island, New Zealand)

    NASA Astrophysics Data System (ADS)

    Vandergoes, Marcus J.; Howarth, Jamie D.; Dunbar, Gavin B.; Turnbull, Jocelyn C.; Roop, Heidi A.; Levy, Richard H.; Li, Xun; Prior, Christine; Norris, Margaret; Keller, Liz D.; Baisden, W. Troy; Ditchburn, Robert; Fitzsimons, Sean J.; Bronk Ramsey, Christopher

    2018-05-01

    Annually resolved (varved) lake sequences are important palaeoenvironmental archives as they offer a direct incremental dating technique for high-frequency reconstruction of environmental and climate change. Despite the importance of these records, establishing a robust chronology and quantifying its precision and accuracy (estimations of error) remains an essential but challenging component of their development. We outline an approach for building reliable independent chronologies, testing the accuracy of layer counts and integrating all chronological uncertainties to provide quantitative age and error estimates for varved lake sequences. The approach incorporates (1) layer counts and estimates of counting precision; (2) radiometric and biostratigraphic dating techniques to derive independent chronology; and (3) the application of Bayesian age modelling to produce an integrated age model. This approach is applied to a case study of an annually resolved sediment record from Lake Ohau, New Zealand. The most robust age model provides an average error of 72 years across the whole depth range. This represents a fractional uncertainty of ∼5%, higher than the <3% quoted for most published varve records. However, the age model and reported uncertainty represent the best fit between layer counts and independent chronology and the uncertainties account for both layer counting precision and the chronological accuracy of the layer counts. This integrated approach provides a more representative estimate of age uncertainty and therefore represents a statistically more robust chronology.

  2. Comparison of the precision of age estimates generated from fin rays, scales, and otoliths of Blue Sucker

    USGS Publications Warehouse

    Acre, Matthew R.; Alejandrez, Celeste; East, Jessica; Massure, Wade A.; Miyazono, S.; Pease, Jessica E.; Roesler, Elizabeth L.; Williams, H.M.; Grabowski, Timothy B.

    2017-01-01

    Evaluating the precision of age estimates generated by different readers and different calcified structures is an important part of generating reliable estimations of growth, recruitment, and mortality for fish populations. Understanding the potential loss of precision associated with using structures harvested without sacrificing individuals, such as scales or fin rays, is particularly important when working with imperiled species, such as Cycleptus elongatus (Blue Sucker). We collected otoliths (lapilli), scales, and the first fin rays of the dorsal, anal, pelvic, and pectoral fins of 9 Blue Suckers. We generated age estimates from each structure by both experienced (n = 5) and novice (n = 4) readers. We found that, independent of the structure used to generate the age estimates, the mean coefficient of variation (CV) of experienced readers was approximately 29% lower than that of novice readers. Further, the mean CVs of age estimates generated from pectoral-fin rays, pelvic-fin rays, and scales were statistically indistinguishable and less than those of dorsal-fin rays, anal-fin rays, and otoliths. Anal-, dorsal-, and pelvic-fin rays and scales underestimated age compared to otoliths, but age estimates from pectoral-fin rays were comparable to those from otoliths. Skill level, structure, and fish total length influenced reader precision between subsequent reads of the same aging structure from a particular fish. Using structures that can be harvested non-lethally to estimate the age of Blue Sucker can provide reliable and reproducible results, similar to those that would be expected from using otoliths. Therefore, we recommend the use of pectoral-fin rays as a non-lethal method to obtain age estimates for Blue Suckers.

  3. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    USGS Publications Warehouse

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
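
    The design comparison lends itself to a small Monte Carlo, sketched below under invented assumptions: a fishing day is split into equal count periods, "true" angler effort per period is simulated with a within-day trend, and simple random versus systematic sampling of periods are compared by the mean square error of the expanded daily-effort estimate:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    periods, n_sample, sims = 16, 4, 5000   # hypothetical survey structure

    def daily_effort():
        # Angler counts with a within-day trend plus noise (invented pattern).
        trend = 20 + 15 * np.sin(np.linspace(0, np.pi, periods))
        return np.maximum(0, trend + rng.normal(0, 5, periods))

    mse = {"SRS": 0.0, "SYS": 0.0}
    for _ in range(sims):
        effort = daily_effort()
        total = effort.sum()
        # Simple random sample of periods, expanded to the day.
        srs = effort[rng.choice(periods, n_sample, replace=False)].mean() * periods
        # Systematic sample: random start, then every (periods/n_sample)th period.
        step = periods // n_sample
        start = rng.integers(step)
        sys_est = effort[start::step].mean() * periods
        mse["SRS"] += (srs - total) ** 2 / sims
        mse["SYS"] += (sys_est - total) ** 2 / sims

    print({k: round(v, 1) for k, v in mse.items()})
    ```

    Under a within-day trend, the systematic design spreads the sample across the day and typically yields the lower MSE, consistent with the study's finding that systematic random sampling was most precise.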

  4. Coherence in quantum estimation

    NASA Astrophysics Data System (ADS)

    Giorda, Paolo; Allegra, Michele

    2018-01-01

    The geometry of quantum states provides a unifying framework for estimation processes based on quantum probes, and it establishes the ultimate bounds of the achievable precision. We show a relation between the statistical distance between infinitesimally close quantum states and the second-order variation of the coherence of the optimal measurement basis with respect to the state of the probe. In quantum phase estimation protocols, this leads us to propose coherence as the relevant resource that one has to engineer and control to optimize the estimation precision. Furthermore, the main object of the theory, i.e., the symmetric logarithmic derivative, in many cases allows one to identify a proper factorization of the whole Hilbert space into two subsystems. The factorization allows one to discuss the role of coherence versus correlations in estimation protocols; to show how certain estimation processes can be completely or effectively described within a single-qubit subsystem; and to derive lower bounds for the scaling of the estimation precision with the number of probes used. We illustrate how the framework works for both noiseless and noisy estimation procedures, in particular those based on multi-qubit GHZ states. Finally, we succinctly analyze estimation protocols based on zero-temperature critical behavior. We identify the coherence that is at the heart of their efficiency, and we show how it exhibits the non-analyticities and scaling behavior proper of a large class of quantum phase transitions.

  5. A Simple Sampling Method for Estimating the Accuracy of Large Scale Record Linkage Projects.

    PubMed

    Boyd, James H; Guiver, Tenniel; Randall, Sean M; Ferrante, Anna M; Semmens, James B; Anderson, Phil; Dickinson, Teresa

    2016-05-17

    Record linkage techniques allow different data collections to be brought together to provide a wider picture of the health status of individuals. Ensuring high linkage quality is important to guarantee the quality and integrity of research. Current methods for measuring linkage quality typically focus on precision (the proportion of accepted links that are correct), given the difficulty of measuring the proportion of false negatives. The aim of this work is to introduce and evaluate a sampling-based method to estimate both precision and recall following record linkage. In the sampling-based method, record pairs from each threshold (including those below the identified cut-off for acceptance) are sampled and clerically reviewed. These results are then applied to the entire set of record pairs, providing estimates of false positives and false negatives. This method was evaluated on a synthetically generated dataset, where the true match status (which records belonged to the same person) was known. The sampled estimates of linkage quality were relatively close to the actual linkage quality metrics calculated for the whole synthetic dataset. The precision and recall measures for seven reviewers were very consistent, with little variation in the clerical assessment results (overall agreement using Fleiss' kappa statistic was 0.601). This method presents a possible means of accurately estimating matching quality and refining linkages in population-level linkage studies. The sampling approach is especially important for large linkage projects where the number of record pairs produced may be very large, often running into millions.
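
    A schematic version of the sampling idea, with made-up numbers: record pairs are binned by linkage score, a fixed-size sample from each bin is clerically reviewed, and the per-bin match rates are scaled back up to estimate true positives above the cut-off and missed matches below it:

    ```python
    # Hypothetical score bins: (bin_size, sampled, sampled_true_matches).
    # Bins above the cut-off are accepted links; those below are rejected.
    above_cutoff = [(50000, 200, 198), (12000, 200, 170)]
    below_cutoff = [(8000, 200, 40), (100000, 200, 2)]

    def scale(bins):
        """Estimate total true matches in bins from the reviewed samples."""
        return sum(size * true / sampled for size, sampled, true in bins)

    tp = scale(above_cutoff)                       # estimated true positives
    fp = sum(s for s, _, _ in above_cutoff) - tp   # estimated false positives
    fn = scale(below_cutoff)                       # estimated missed matches

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    print(f"precision ~ {precision:.3f}, recall ~ {recall:.3f}")
    ```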

  6. A Fixed-Precision Sequential Sampling Plan for the Potato Tuberworm Moth, Phthorimaea operculella Zeller (Lepidoptera: Gelechiidae), on Potato Cultivars.

    PubMed

    Shahbi, M; Rajabpour, A

    2017-08-01

    Phthorimaea operculella Zeller is an important pest of potato in Iran. Spatial distribution and fixed-precision sequential sampling for population estimation of the pest on two potato cultivars, Arinda® and Sante®, were studied in two separate potato fields during two growing seasons (2013-2014 and 2014-2015). Spatial distribution was investigated by Taylor's power law and Iwao's patchiness. Results showed that the spatial distribution of eggs and larvae was random. In contrast to Iwao's patchiness, Taylor's power law provided a highly significant relationship between variance and mean density. Therefore, a fixed-precision sequential sampling plan was developed with Green's model at two precision levels, 0.25 and 0.1. The optimum sample size on the Arinda® and Sante® cultivars at the 0.25 precision level ranged from 151 to 813 and from 149 to 802 leaves, respectively. At the 0.1 precision level, the sample sizes varied from 1054 to 5083 and from 1050 to 5100 leaves for the Arinda® and Sante® cultivars, respectively. Therefore, the optimum sample sizes for the cultivars, which differ in resistance level, were not significantly different. According to the calculated stop lines, sampling must continue until the cumulative number of eggs + larvae reaches 15-16 individuals at the 0.25 precision level or 96-101 individuals at the 0.1 level. The performance of the sampling plan was validated by resampling analysis using resampling-for-validation-of-sampling-plans software. The sampling plan provided in this study can be used to obtain a rapid estimate of pest density with minimal effort.
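
    Green's fixed-precision stop line follows directly from Taylor's power law (variance = a * mean^b): sampling stops once the cumulative count T_n after n sample units satisfies T_n >= (D^2/a)^(1/(b-2)) * n^((b-1)/(b-2)), where D is the target relative precision. The sketch below uses invented Taylor coefficients, not the ones fitted in this study; note that for a near-random distribution (a ~ 1, b ~ 1) and D = 0.25 the stop line sits near 16 individuals, consistent with the 15-16 reported above:

    ```python
    import numpy as np

    def green_stop_line(n, a, b, D):
        """Cumulative-count stop line for Green's fixed-precision plan."""
        return (D**2 / a) ** (1 / (b - 2)) * n ** ((b - 1) / (b - 2))

    a, b = 1.3, 1.1         # hypothetical Taylor's power law coefficients
    for D in (0.25, 0.10):  # the two precision levels used in the study
        n = np.arange(50, 1001, 50)
        stop = green_stop_line(n, a, b, D)
        print(f"D={D}: stop line runs {stop[0]:.0f} to {stop[-1]:.0f} "
              f"cumulative individuals over {n[0]}-{n[-1]} leaves")
    ```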

  7. StatSTEM: An efficient approach for accurate and precise model-based quantification of atomic resolution electron microscopy images.

    PubMed

    De Backer, A; van den Bos, K H W; Van den Broek, W; Sijbers, J; Van Aert, S

    2016-12-01

    An efficient model-based estimation algorithm is introduced to quantify the atomic column positions and intensities from atomic resolution (scanning) transmission electron microscopy ((S)TEM) images. This algorithm uses the least squares estimator on image segments containing individual columns, fully accounting for overlap between neighbouring columns and enabling the analysis of a large field of view. For this algorithm, the accuracy and precision with which the atomic column positions and scattering cross-sections can be estimated from annular dark field (ADF) STEM images have been investigated. The highest attainable precision is reached even for low-dose images. Furthermore, the advantages of the model-based approach, which takes into account overlap between neighbouring columns, are highlighted. This is done for the estimation of the distance between two neighbouring columns as a function of their separation, and for the estimation of the scattering cross-section, which is compared to the integrated intensity from a Voronoi cell. To provide end-users with this quantification method, a user-friendly program, StatSTEM, has been developed and is freely available under a GNU public license.
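
    The core estimation step, least-squares fitting of a peaked model to an image segment around one column, can be sketched briefly. StatSTEM itself handles overlap between neighbouring columns and much more; the Gaussian model, image size, and noise below are invented for illustration:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gauss2d(coords, x0, y0, amp, width, offset):
        x, y = coords
        return offset + amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * width**2))

    # Synthetic ADF-STEM-like segment: one atomic column plus Poisson noise.
    yy, xx = np.mgrid[0:21, 0:21].astype(float)
    rng = np.random.default_rng(5)
    true = gauss2d((xx, yy), 10.3, 9.7, 120.0, 2.0, 10.0)
    img = rng.poisson(true).astype(float)

    p0 = (10, 10, img.max() - img.min(), 2.5, img.min())
    popt, pcov = curve_fit(gauss2d, (xx.ravel(), yy.ravel()), img.ravel(), p0=p0)
    x0, y0 = popt[:2]
    se = np.sqrt(np.diag(pcov))[:2]
    print(f"column position: ({x0:.2f}, {y0:.2f}) +/- ({se[0]:.2f}, {se[1]:.2f}) px")
    ```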

  8. Estimating parameters of hidden Markov models based on marked individuals: use of robust design data

    USGS Publications Warehouse

    Kendall, William L.; White, Gary C.; Hines, James E.; Langtimm, Catherine A.; Yoshizaki, Jun

    2012-01-01

    Development and use of multistate mark-recapture models, which provide estimates of parameters of Markov processes in the face of imperfect detection, have become common over the last twenty years. Recently, estimating parameters of hidden Markov models, where the state of an individual can be uncertain even when it is detected, has received attention. Previous work has shown that ignoring state uncertainty biases estimates of survival and state transition probabilities, thereby reducing the power to detect effects. Efforts to adjust for state uncertainty have included special cases and a general framework for a single sample per period of interest. We provide a flexible framework for adjusting for state uncertainty in multistate models, while utilizing multiple sampling occasions per period of interest to increase precision and remove parameter redundancy. These models also produce direct estimates of state structure for each primary period, even for the case where there is just one sampling occasion. We apply our model to expected value data, and to data from a study of Florida manatees, to provide examples of the improvement in precision due to secondary capture occasions. We also provide user-friendly software to implement these models. This general framework could also be used by practitioners to consider constrained models of particular interest, or model the relationship between within-primary period parameters (e.g., state structure) and between-primary period parameters (e.g., state transition probabilities).

  9. Double the dates and go for Bayes - Impacts of model choice, dating density and quality on chronologies

    NASA Astrophysics Data System (ADS)

    Blaauw, Maarten; Christen, J. Andrés; Bennett, K. D.; Reimer, Paula J.

    2018-05-01

    Reliable chronologies are essential for most Quaternary studies, but little is known about how age-depth model choice, as well as dating density and quality, affect the precision and accuracy of chronologies. A meta-analysis suggests that most existing late-Quaternary studies contain fewer than one date per millennium, and provide millennial-scale precision at best. We use existing and simulated sediment cores to estimate what dating density and quality are required to obtain accurate chronologies at a desired precision. For many sites, a doubling in dating density would significantly improve chronologies and thus their value for reconstructing and interpreting past environmental changes. Commonly used classical age-depth models stop becoming more precise after a minimum dating density is reached, but the precision of Bayesian age-depth models which take advantage of chronological ordering continues to improve with more dates. Our simulations show that classical age-depth models severely underestimate uncertainty and are inaccurate at low dating densities, and also perform poorly at high dating densities. On the other hand, Bayesian age-depth models provide more realistic precision estimates, including at low to average dating densities, and are much more robust against dating scatter and outliers. Indeed, Bayesian age-depth models outperform classical ones at all tested dating densities, qualities and time-scales. We recommend that chronologies should be produced using Bayesian age-depth models taking into account chronological ordering and based on a minimum of 2 dates per millennium.

  10. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography

    PubMed Central

    Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.

    2017-01-01

    Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883

  11. An Optimal Parameterization Framework for Infrasonic Tomography of the Stratospheric Winds Using Non-Local Sources

    DOE PAGES

    Blom, Philip Stephen; Marcillo, Omar Eduardo

    2016-12-05

    A method is developed to apply acoustic tomography methods to a localized network of infrasound arrays with the intention of monitoring the atmospheric state in the region around the network using non-local sources, without requiring knowledge of the precise source location or the non-local atmospheric state. Closely spaced arrays provide a means to estimate phase velocities of signals that can provide limiting bounds on certain characteristics of the atmosphere. Larger spacing between such clusters provides a means to estimate celerity from propagation times along multiple unique stratospherically or thermospherically ducted propagation paths and to compute more precise estimates of the atmospheric state. In order to avoid the commonly encountered complex, multimodal distributions for parametric atmosphere descriptions and to maximize the computational efficiency of the method, an optimal parametrization framework is constructed. This framework identifies the ideal combination of parameters for tomography studies in specific regions of the atmosphere, and statistical model selection analysis shows that high quality corrections to the middle atmosphere winds can be obtained using as few as three parameters. Lastly, comparison of the resulting estimates for synthetic data sets shows qualitative agreement between the middle atmosphere winds and those estimated from infrasonic traveltime observations.

  12. Comparison of Vehicle-Broadcasted Fuel Consumption Rates against Precise Fuel Measurements for Medium- and Heavy-Duty Vehicles and Engines

    DOE PAGES

    Pink, Alex; Ragatz, Adam; Wang, Lijuan; ...

    2017-03-28

    Vehicles continuously report real-time fuel consumption estimates over their data bus, known as the controller area network (CAN). However, the accuracy of these fueling estimates is uncertain to researchers who collect these data from any given vehicle. To assess the accuracy of these estimates, CAN-reported fuel consumption data were compared against fuel measurements from precise instrumentation. The data analyzed came from eight medium/heavy-duty vehicles and two medium-duty engines. Varying discrepancies between CAN fueling rates and the more accurate measurements emerged, but without a vehicular trend: for some vehicles the CAN under-reported fuel consumption and for others the CAN over-reported fuel consumption. Furthermore, a qualitative real-time analysis revealed that the operating conditions under which these fueling discrepancies arose varied among vehicles. A drive cycle analysis revealed that, while CAN fueling estimate accuracy differs for individual vehicles, CAN estimates capture the relative fuel consumption differences between drive cycles within 4% for all vehicles, and even more accurately for some vehicles. In situations where only CAN-reported data are available, CAN fueling estimates can therefore provide relative fuel consumption trends but not accurate or precise fuel consumption rates.

  13. Comparison of Vehicle-Broadcasted Fuel Consumption Rates against Precise Fuel Measurements for Medium- and Heavy-Duty Vehicles and Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pink, Alex; Ragatz, Adam; Wang, Lijuan

    Vehicles continuously report real-time fuel consumption estimates over their data bus, known as the controller area network (CAN). However, the accuracy of these fueling estimates is uncertain to researchers who collect these data from any given vehicle. To assess the accuracy of these estimates, CAN-reported fuel consumption data were compared against fuel measurements from precise instrumentation. The data analyzed came from eight medium/heavy-duty vehicles and two medium-duty engines. Varying discrepancies between CAN fueling rates and the more accurate measurements emerged, but without a vehicular trend: for some vehicles the CAN under-reported fuel consumption and for others the CAN over-reported fuel consumption. Furthermore, a qualitative real-time analysis revealed that the operating conditions under which these fueling discrepancies arose varied among vehicles. A drive cycle analysis revealed that, while CAN fueling estimate accuracy differs for individual vehicles, CAN estimates capture the relative fuel consumption differences between drive cycles within 4% for all vehicles, and even more accurately for some vehicles. In situations where only CAN-reported data are available, CAN fueling estimates can therefore provide relative fuel consumption trends but not accurate or precise fuel consumption rates.

  14. Bayesian approach to estimate AUC, partition coefficient and drug targeting index for studies with serial sacrifice design.

    PubMed

    Wang, Tianli; Baron, Kyle; Zhong, Wei; Brundage, Richard; Elmquist, William

    2014-03-01

    The current study presents a Bayesian approach to non-compartmental analysis (NCA), which provides accurate and precise estimates of AUC(0-∞) and of any AUC(0-∞)-based NCA parameter or derivation. In order to assess the performance of the proposed method, 1,000 simulated datasets were generated in different scenarios. A Bayesian method was used to estimate the tissue and plasma AUC(0-∞) values and the tissue-to-plasma AUC(0-∞) ratio. The posterior medians and the coverage of 95% credible intervals for the true parameter values were examined. The method was applied to laboratory data from a mouse brain distribution study with a serial sacrifice design for illustration. The Bayesian NCA approach is accurate and precise in point estimation of the AUC(0-∞) and the partition coefficient under a serial sacrifice design. It also provides a consistently good variance estimate, even considering the variability of the data and the physiological structure of the pharmacokinetic model. The application in the case study obtained a physiologically reasonable posterior distribution of AUC, with a posterior median close to the value estimated by classic Bailer-type methods. This Bayesian NCA approach for sparse data analysis provides statistical inference on the variability of AUC(0-∞)-based parameters such as the partition coefficient and the drug targeting index, so that the comparison of these parameters following destructive sampling becomes statistically feasible.
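
    For contrast with the Bayesian machinery, the classical NCA computation being replaced is compact: a trapezoidal AUC to the last time point plus a log-linear tail extrapolation. The concentrations below are invented, and a real serial-sacrifice analysis (e.g., Bailer's method) would also propagate the between-animal variance at each time point:

    ```python
    import numpy as np

    t = np.array([0.25, 0.5, 1, 2, 4, 8, 24])             # h (hypothetical)
    c = np.array([12.0, 18.0, 15.0, 9.5, 4.8, 1.6, 0.1])  # mean plasma conc.

    auc_last = np.trapz(c, t)                 # linear trapezoidal rule

    # Terminal slope lambda_z from a log-linear fit to the last 3 points.
    lam = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
    auc_inf = auc_last + c[-1] / lam          # extrapolate to infinity

    print(f"AUC(0-tlast) = {auc_last:.1f}, lambda_z = {lam:.3f}/h, "
          f"AUC(0-inf) = {auc_inf:.1f}")
    ```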

  15. Motion compensation using origin ensembles in awake small animal positron emission tomography

    NASA Astrophysics Data System (ADS)

    Gillam, John E.; Angelis, Georgios I.; Kyme, Andre Z.; Meikle, Steven R.

    2017-02-01

    In emission tomographic imaging, the stochastic origin ensembles algorithm provides unique information regarding the detected counts given the measured data. Precision in both voxel-wise and region-wise parameters may be determined for a single data set based on the posterior distribution of the count density, allowing uncertainty estimates to be allocated to quantitative measures. Uncertainty estimates are of particular importance in awake animal neurological and behavioral studies, for which head motion, unique to each acquired data set, perturbs the measured data. Motion compensation can be conducted when rigid head pose is measured during the scan. However, errors in the pose measurements used for compensation can degrade the data and hence quantitative outcomes. In this investigation, motion compensation and detector resolution models were incorporated into the basic origin ensembles algorithm and an efficient approach to computation was developed. The approach was validated against maximum-likelihood expectation maximisation and tested using simulated data. The resultant algorithm was then used to analyse quantitative uncertainty in regional activity estimates arising from changes in pose measurement precision. Finally, the posterior covariance acquired from a single data set was used to describe correlations between regions of interest, providing information about pose measurement precision that may be useful in system analysis and design. The investigation demonstrates the use of origin ensembles as a powerful framework for evaluating the statistical uncertainty of voxel and regional estimates. While rigid motion was considered here in the context of awake animal PET, the extension to arbitrary motion may provide clinical utility where respiratory or cardiac motion perturbs the measured data.

  16. IEEE 802.15.4 ZigBee-Based Time-of-Arrival Estimation for Wireless Sensor Networks.

    PubMed

    Cheon, Jeonghyeon; Hwang, Hyunsu; Kim, Dongsun; Jung, Yunho

    2016-02-05

    Precise time-of-arrival (TOA) estimation is one of the most important techniques in RF-based positioning systems that use wireless sensor networks (WSNs). Because the accuracy of TOA estimation is proportional to the RF signal bandwidth, using broad bandwidth is the most fundamental approach for achieving higher accuracy. Hence, ultra-wide-band (UWB) systems with a bandwidth of 500 MHz are commonly used. However, wireless systems with broad bandwidth suffer from the disadvantages of high complexity and high power consumption. Therefore, it is difficult to employ such systems in various WSN applications. In this paper, we present a precise time-of-arrival (TOA) estimation algorithm using an IEEE 802.15.4 ZigBee system with a narrow bandwidth of 2 MHz. In order to overcome the lack of bandwidth, the proposed algorithm estimates the fractional TOA within the sampling interval. Simulation results show that the proposed TOA estimation algorithm provides an accuracy of 0.5 m at a signal-to-noise ratio (SNR) of 8 dB and achieves an SNR gain of 5 dB as compared with the existing algorithm. In addition, experimental results indicate that the proposed algorithm provides accurate TOA estimation in a real indoor environment.
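
    The key trick, estimating a TOA finer than the 2 MHz sampling grid, is commonly done by interpolating around the correlation maximum. The sketch below uses three-point parabolic interpolation on a synthetic correlation peak; the paper's actual estimator is specific to the ZigBee preamble and is not reproduced here:

    ```python
    import numpy as np

    fs = 2e6                                  # samples/s, ZigBee-like rate
    true_delay = 23.37 / fs                   # true TOA, off the sample grid

    # Synthetic correlation magnitude: a band-limited peak around true_delay.
    n = np.arange(64)
    corr = np.sinc((n - true_delay * fs) / 1.5) ** 2

    k = int(np.argmax(corr))                  # integer-sample (coarse) TOA
    ym, y0, yp = corr[k - 1], corr[k], corr[k + 1]
    # Parabolic interpolation of the peak for the fractional-sample offset.
    frac = 0.5 * (ym - yp) / (ym - 2 * y0 + yp)

    toa = (k + frac) / fs
    print(f"coarse {k/fs*1e6:.3f} us, refined {toa*1e6:.4f} us, "
          f"true {true_delay*1e6:.4f} us")
    ```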

  17. Precision of channel catfish catch estimates using hoop nets in larger Oklahoma reservoirs

    USGS Publications Warehouse

    Stewart, David R.; Long, James M.

    2012-01-01

    Hoop nets are rapidly becoming the preferred gear type used to sample channel catfish Ictalurus punctatus, and many managers have reported that hoop nets effectively sample channel catfish in small impoundments (<200 ha). However, the utility and precision of this approach in larger impoundments have not been tested. We sought to determine how the number of tandem hoop net series affected the catch of channel catfish and the time involved in using 16 tandem hoop net series in larger impoundments (>200 ha). Hoop net series were fished once, set for 3 d; we then used Monte Carlo bootstrapping techniques to estimate the number of net series required to achieve two levels of precision (relative standard errors [RSEs] of 15 and 25) at two levels of confidence (80% and 95%). Sixteen hoop net series were effective at obtaining an RSE of 25 with 80% and 95% confidence in all but one reservoir. Achieving an RSE of 15 was often less feasible and required 18-96 hoop net series, depending on the desired level of confidence. We estimated that an hour was needed, on average, to deploy and retrieve three hoop net series, which meant that all 16 hoop net series in a reservoir could be set within one day and retrieved within another. The estimated number of net series needed to achieve an RSE of 25 or 15 was positively associated with the coefficient of variation (CV) of the sample but not with reservoir surface area or relative abundance. Our results suggest that hoop nets are capable of providing reasonably precise estimates of channel catfish relative abundance and that the relationship with the CV of the sample reported herein can be used to determine the sampling effort for a desired level of precision.
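
    The bootstrap logic is straightforward to sketch: resample the observed per-series catches with replacement, compute the relative standard error of the mean, and count how often a given number of series meets the target. The catch vector here is invented, not the study's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    # Hypothetical channel catfish catch per hoop-net series in one reservoir.
    catches = np.array([12, 3, 0, 25, 7, 9, 1, 18, 4, 6, 30, 2, 8, 5, 11, 14])

    def pct_meeting_target(n_series, rse_target, reps=5000):
        hits = 0
        for _ in range(reps):
            sample = rng.choice(catches, n_series, replace=True)
            se = sample.std(ddof=1) / np.sqrt(n_series)
            rse = 100 * se / sample.mean() if sample.mean() > 0 else np.inf
            hits += rse <= rse_target
        return 100 * hits / reps

    for n in (8, 16, 32):
        print(f"{n} series: {pct_meeting_target(n, 25):.0f}% of resamples "
              f"reach RSE <= 25")
    ```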

  18. COBALT: A GN&C Payload for Testing ALHAT Capabilities in Closed-Loop Terrestrial Rocket Flights

    NASA Technical Reports Server (NTRS)

    Carson, John M., III; Amzajerdian, Farzin; Hines, Glenn D.; O'Neal, Travis V.; Robertson, Edward A.; Seubert, Carl; Trawny, Nikolas

    2016-01-01

    The COBALT (CoOperative Blending of Autonomous Landing Technology) payload is being developed within NASA as a risk reduction activity to mature, integrate and test ALHAT (Autonomous precision Landing and Hazard Avoidance Technology) systems targeted for infusion into near-term robotic and future human space flight missions. The initial COBALT payload instantiation is integrating the third-generation ALHAT Navigation Doppler Lidar (NDL) sensor, for ultra-high-precision velocity and range measurements, with the passive-optical Lander Vision System (LVS) that provides Terrain Relative Navigation (TRN) global-position estimates. The COBALT payload will be integrated onboard a rocket-propulsive terrestrial testbed and will provide precise navigation estimates and guidance planning during two flight test campaigns in 2017 (one open-loop and one closed-loop). The NDL is targeting performance capabilities desired for future Mars and Moon Entry, Descent and Landing (EDL). The LVS is already baselined for TRN on the Mars 2020 robotic lander mission. The COBALT platform will provide NASA with a new risk-reduction capability to test integrated EDL Guidance, Navigation and Control (GN&C) components in closed-loop flight demonstrations prior to the actual mission EDL.

  19. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    ERIC Educational Resources Information Center

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis statistic testing. Although confidence intervals have been recommended by scholars for many years,…

  20. Methods for the accurate estimation of confidence intervals on protein folding ϕ-values

    PubMed Central

    Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.

    2006-01-01

    ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
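    The covariance correction the authors advocate follows from standard first-order (delta-method) error propagation. Treating ϕ as the ratio of the two free-energy changes, a sketch of the variance that keeps the covariance term looks like the following; this is the generic delta-method form, not necessarily the paper's exact expression.

```python
def phi_variance(ddG_ts, ddG_eq, var_ts, var_eq, cov_ts_eq):
    """Delta-method variance of phi = ddG_ts / ddG_eq.

    Keeps the covariance term that is dropped when the transition-state
    and equilibrium free-energy changes are wrongly assumed independent;
    dropping it typically overstates the error bars on phi (i.e., it
    underestimates the precision of the phi estimate).
    """
    phi = ddG_ts / ddG_eq
    rel_var = (var_ts / ddG_ts**2
               + var_eq / ddG_eq**2
               - 2.0 * cov_ts_eq / (ddG_ts * ddG_eq))
    return phi**2 * rel_var
```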

  1. Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness

    PubMed Central

    2015-01-01

    Background Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation ride on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations where the strains are highly genetically related. The lack of knowledge on the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. By using this approach we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how one other existing popular QSR method named ShoRAH can be improved using this new approach. Results On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5% respectively. Conclusions The proposed probabilistic estimation method can be used to estimate the richness of viral populations with a quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability http://sourceforge.net/projects/viquas/ PMID:26678073

  2. Mapped Landmark Algorithm for Precision Landing

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew; Ansar, Adnan; Matthies, Larry

    2007-01-01

    A report discusses a computer vision algorithm for position estimation to enable precision landing during planetary descent. The Descent Image Motion Estimation System for the Mars Exploration Rovers has been used as a starting point for creating code for precision, terrain-relative navigation during planetary landing. The algorithm is designed to be general because it handles images taken at different scales and resolutions relative to the map, and can produce mapped landmark matches for any planetary terrain of sufficient texture. These matches provide a measurement of horizontal position relative to a known landing site specified on the surface map. Multiple mapped landmarks generated per image allow for automatic detection and elimination of bad matches. Attitude and position can be generated from each image; this image-based attitude measurement can be used by the onboard navigation filter to improve the attitude estimate, which will improve the position estimates. The algorithm uses normalized correlation of grayscale images, producing precise, sub-pixel matches. The algorithm has been broken into two sub-algorithms: (1) FFT Map Matching, which matches a single large template by correlation in the frequency domain, and (2) Mapped Landmark Refinement, which matches many small templates by correlation in the spatial domain. Each relies on feature selection, the homography transform, and 3D image correlation. The algorithm is implemented in C++ and is rated at Technology Readiness Level (TRL) 4.
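    The frequency-domain half of the scheme can be illustrated compactly. The sketch below correlates a demeaned template against a demeaned image via the FFT; a full normalized correlation would additionally divide by local standard deviations (e.g., computed with integral images), so this is a simplified stand-in, not the flight code.

```python
import numpy as np

def fft_match(template, image):
    """Locate `template` in `image` by correlation in the frequency domain.

    Both arguments are 2-D grayscale arrays; the returned indices are
    the top-left offset of the best match in `image`.
    """
    t = template - template.mean()
    i = image - image.mean()
    # Correlation theorem: IFFT(FFT(image) * conj(FFT(template))).
    F = np.fft.rfft2(i)
    T = np.fft.rfft2(t, s=i.shape)   # zero-pad template to image size
    corr = np.fft.irfft2(F * np.conj(T), s=i.shape)
    row, col = np.unravel_index(int(np.argmax(corr)), corr.shape)
    return row, col
```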

  3. Developing methods for assessing abundance and distribution of European oysters (Ostrea edulis) using towed video.

    PubMed

    Thorngren, Linnea; Dunér Holthuis, Thomas; Lindegarth, Susanne; Lindegarth, Mats

    2017-01-01

    Due to large-scale habitat losses and increasing pressures, benthic habitats in general, and perhaps oyster beds in particular, are commonly in decline and severely threatened on regional and global scales. Appropriate and cost-efficient methods for mapping and monitoring of the distribution, abundance and quality of remaining oyster populations are fundamental for sustainable management and conservation of these habitats and their associated values. Towed video has emerged as a promising method for surveying benthic communities in a way that is both non-destructive and cost-efficient. Here we examine its use as a tool for quantification and monitoring of oyster populations by (i) analysing how well abundances can be estimated and how living Ostrea edulis individuals can be distinguished from dead ones, (ii) estimating the variability within and among observers as well as the spatial variability at a number of scales, and finally (iii) evaluating the precision of estimated abundances under different scenarios for monitoring. Overall, the results show that the method can be used to quantify abundance and occurrence of Ostrea edulis in heterogeneous environments. There was a strong correlation between abundances determined in the field and abundances estimated by video-analyses (r² = 0.93), even though video analyses underestimated the total abundance of living oysters by 20%. Additionally, the method was largely repeatable within and among observers and revealed no evident bias in identification of living and dead oysters. We also concluded that the spatial variability was an order of magnitude larger than that due to observer errors. Subsequent modelling of precision showed that the total area sampled was the main determinant of precision and provided a general method for determining precision. This study provides a thorough validation of the application of towed video to quantitative estimation of live oysters. The results suggest that the method can indeed be very useful for this purpose and we therefore recommend it for future monitoring of oysters and other threatened habitats and species.

  4. Developing methods for assessing abundance and distribution of European oysters (Ostrea edulis) using towed video

    PubMed Central

    Dunér Holthuis, Thomas; Lindegarth, Susanne; Lindegarth, Mats

    2017-01-01

    Due to large-scale habitat losses and increasing pressures, benthic habitats in general, and perhaps oyster beds in particular, are commonly in decline and severely threatened on regional and global scales. Appropriate and cost-efficient methods for mapping and monitoring of the distribution, abundance and quality of remaining oyster populations are fundamental for sustainable management and conservation of these habitats and their associated values. Towed video has emerged as a promising method for surveying benthic communities in a way that is both non-destructive and cost-efficient. Here we examine its use as a tool for quantification and monitoring of oyster populations by (i) analysing how well abundances can be estimated and how living Ostrea edulis individuals can be distinguished from dead ones, (ii) estimating the variability within and among observers as well as the spatial variability at a number of scales, and finally (iii) evaluating the precision of estimated abundances under different scenarios for monitoring. Overall, the results show that the method can be used to quantify abundance and occurrence of Ostrea edulis in heterogeneous environments. There was a strong correlation between abundances determined in the field and abundances estimated by video-analyses (r² = 0.93), even though video analyses underestimated the total abundance of living oysters by 20%. Additionally, the method was largely repeatable within and among observers and revealed no evident bias in identification of living and dead oysters. We also concluded that the spatial variability was an order of magnitude larger than that due to observer errors. Subsequent modelling of precision showed that the total area sampled was the main determinant of precision and provided a general method for determining precision. This study provides a thorough validation of the application of towed video to quantitative estimation of live oysters. The results suggest that the method can indeed be very useful for this purpose and we therefore recommend it for future monitoring of oysters and other threatened habitats and species. PMID:29141028

  5. Evaluating sampling designs by computer simulation: A case study with the Missouri bladderpod

    USGS Publications Warehouse

    Morrison, L.W.; Smith, D.R.; Young, C.; Nichols, D.W.

    2008-01-01

    To effectively manage rare populations, accurate monitoring data are critical. Yet many monitoring programs are initiated without careful consideration of whether chosen sampling designs will provide accurate estimates of population parameters. Obtaining accurate estimates is especially difficult when natural variability is high, or limited budgets determine that only a small fraction of the population can be sampled. The Missouri bladderpod, Lesquerella filiformis Rollins, is a federally threatened winter annual that has an aggregated distribution pattern and exhibits dramatic interannual population fluctuations. Using the simulation program SAMPLE, we evaluated five candidate sampling designs appropriate for rare populations, based on 4 years of field data: (1) simple random sampling, (2) adaptive simple random sampling, (3) grid-based systematic sampling, (4) adaptive grid-based systematic sampling, and (5) GIS-based adaptive sampling. We compared the designs based on the precision of density estimates for fixed sample size, cost, and distance traveled. Sampling fraction and cost were the most important factors determining precision of density estimates, and relative design performance changed across the range of sampling fractions. Adaptive designs did not provide uniformly more precise estimates than conventional designs, in part because the spatial distribution of L. filiformis was relatively widespread within the study site. Adaptive designs tended to perform better as sampling fraction increased and when sampling costs, particularly distance traveled, were taken into account. The rate at which units occupied by L. filiformis were encountered was higher for adaptive than for conventional designs. Overall, grid-based systematic designs were more efficient and more practical to implement than the others. © 2008 The Society of Population Ecology and Springer.

  6. Comparison and assessment of aerial and ground estimates of waterbird colonies

    USGS Publications Warehouse

    Green, M.C.; Luent, M.C.; Michot, T.C.; Jeske, C.W.; Leberg, P.L.

    2008-01-01

    Aerial surveys are often used to quantify sizes of waterbird colonies; however, these surveys would benefit from a better understanding of associated biases. We compared estimates of breeding pairs of waterbirds, in colonies across southern Louisiana, USA, made from the ground, fixed-wing aircraft, and a helicopter. We used a marked-subsample method for ground-counting colonies to obtain estimates of error and visibility bias. We made comparisons over 2 sampling periods: 1) surveys conducted on the same colonies using all 3 methods during 3-11 May 2005 and 2) an expanded fixed-wing and ground-survey comparison conducted over 4 periods (May and Jun, 2004-2005). Estimates from fixed-wing aircraft were approximately 65% higher than those from ground counts for overall estimated number of breeding pairs and for both dark and white-plumaged species. The coefficient of determination between estimates based on ground and fixed-wing aircraft was ≤0.40 for most species, and based on the assumption that estimates from the ground were closer to the true count, fixed-wing aerial surveys appeared to overestimate numbers of nesting birds of some species; this bias often increased with the size of the colony. Unlike estimates from fixed-wing aircraft, numbers of nesting pairs made from ground and helicopter surveys were very similar for all species we observed. Ground counts by a single observer underestimated the number of breeding pairs by 20% on average. The marked-subsample method provided an estimate of the number of missed nests as well as an estimate of precision. These estimates represent a major advantage of marked-subsample ground counts over aerial methods; however, ground counts are difficult in large or remote colonies. Helicopter surveys and ground counts provide less biased, more precise estimates of breeding pairs than do surveys made from fixed-wing aircraft. We recommend managers employ ground counts using double observers for surveying waterbird colonies when feasible. Fixed-wing aerial surveys may be suitable to determine colony activity and composition of common waterbird species. The most appropriate combination of survey approaches will be based on the need for precise and unbiased estimates, balanced with financial and logistical constraints.

  7. Comparative precision of age estimates from two southern reservoir populations of paddlefish [Polyodon spathula (Walbaum, 1792)]

    USGS Publications Warehouse

    Long, James M.; Nealis, Ashley

    2017-01-01

    The aim of the study was to determine whether location and sex affected the precision of age estimates between two southern reservoir populations of paddlefish [Polyodon spathula (Walbaum, 1792)]. From 589 paddlefish collected in Grand Lake and Keystone Lake, Oklahoma in 2011, ages were estimated from dentaries by three independent readers, and precision was compared using the coefficient of variation between locations and sexes. Ages were more precisely estimated from Grand Lake and from females.

  8. Connections between survey calibration estimators and semiparametric models for incomplete data

    PubMed Central

    Lumley, Thomas; Shaw, Pamela A.; Dai, James Y.

    2012-01-01

    Survey calibration (or generalized raking) estimators are a standard approach to the use of auxiliary information in survey sampling, improving on the simple Horvitz–Thompson estimator. In this paper we relate the survey calibration estimators to the semiparametric incomplete-data estimators of Robins and coworkers, and to adjustment for baseline variables in a randomized trial. The development based on calibration estimators explains the ‘estimated weights’ paradox and provides useful heuristics for constructing practical estimators. We present some examples of using calibration to gain precision without making additional modelling assumptions in a variety of regression models. PMID:23833390

  9. Accommodating Chromosome Inversions in Linkage Analysis

    PubMed Central

    Chen, Gary K.; Slaten, Erin; Ophoff, Roel A.; Lange, Kenneth

    2006-01-01

    This work develops a population-genetics model for polymorphic chromosome inversions. The model precisely describes how an inversion changes the nature of and approach to linkage equilibrium. The work also describes algorithms and software for allele-frequency estimation and linkage analysis in the presence of an inversion. The linkage algorithms implemented in the software package Mendel estimate recombination parameters and calculate the posterior probability that each pedigree member carries the inversion. Application of Mendel to eight Centre d'Étude du Polymorphisme Humain pedigrees in a region containing a common inversion on 8p23 illustrates its potential for providing more-precise estimates of the location of an unmapped marker or trait gene. Our expanded cytogenetic analysis of these families further identifies inversion carriers and increases the evidence of linkage. PMID:16826515

  10. Computerized Adaptive Testing: Overview and Introduction.

    ERIC Educational Resources Information Center

    Meijer, Rob R.; Nering, Michael L.

    1999-01-01

    Provides an overview of computerized adaptive testing (CAT) and introduces contributions to this special issue. CAT elements discussed include item selection, estimation of the latent trait, item exposure, measurement precision, and item-bank development. (SLD)

  11. The white dwarf mass-radius relation with Gaia, Hubble and FUSE

    NASA Astrophysics Data System (ADS)

    Joyce, Simon R. G.; Barstow, Martin A.; Casewell, Sarah L.; Holberg, Jay B.; Bond, Howard E.

    2018-04-01

    White dwarfs are becoming useful tools for many areas of astronomy. They can be used as accurate chronometers over Gyr timescales. They are also clues to the history of star formation in our galaxy. Many of these studies require accurate estimates of the mass of the white dwarf. The theoretical mass-radius relation is often invoked to provide these mass estimates. While the theoretical mass-radius relation is well developed, observational tests of this relation show a much larger scatter in the results than expected. High-precision observational tests to confirm this relation are required. Gaia is providing distance measurements which will remove one of the main sources of uncertainty affecting most previous observations. We combine Gaia distances with spectra from the Hubble and FUSE satellites to make precise tests of the white dwarf mass-radius relation.

  12. An Inertial Dual-State State Estimator for Precision Planetary Landing with Hazard Detection and Avoidance

    NASA Technical Reports Server (NTRS)

    Bishop, Robert H.; DeMars, Kyle; Trawny, Nikolas; Crain, Tim; Hanak, Chad; Carson, John M.; Christian, John

    2016-01-01

    The navigation filter architecture successfully deployed on the Morpheus flight vehicle is presented. The filter was developed as a key element of the NASA Autonomous Landing and Hazard Avoidance Technology (ALHAT) project and over the course of 15 free flights was integrated into the Morpheus vehicle, operations, and flight control loop. Flight testing completed by demonstrating autonomous hazard detection and avoidance, integration of an altimeter, surface relative velocity (velocimeter) and hazard relative navigation (HRN) measurements into the onboard dual-state inertial estimator Kalman filter software, and landing within 2 meters of the vertical testbed GPS-based navigation solution at the safe landing site target. Morpheus followed a trajectory that included an ascent phase followed by a partial descent-to-landing, although the proposed filter architecture is applicable to more general planetary precision entry, descent, and landings. The main new contribution is the incorporation of a sophisticated hazard relative navigation sensor, originally intended to locate safe landing sites, into the navigation system, where it is employed as a navigation sensor. The formulation of a dual-state inertial extended Kalman filter was designed to address the precision planetary landing problem when viewed as a rendezvous problem with an intended landing site. For the required precision navigation system that is capable of navigating along a descent-to-landing trajectory to a precise landing, the impact of attitude errors on the translational state estimation is included in a fully integrated navigation structure in which translation state estimation is combined with attitude state estimation. The map tie errors are estimated as part of the process, thereby creating a dual-state filter implementation. Also, the filter is implemented using inertial states rather than states relative to the target. External measurements include altimeter, velocimeter, star camera, terrain relative navigation sensor, and a hazard relative navigation sensor providing information regarding hazards on a map generated on-the-fly.

  13. A Statistical Guide to the Design of Deep Mutational Scanning Experiments

    PubMed Central

    Matuszewski, Sebastian; Hildebrandt, Marcel E.; Ghenu, Ana-Hermina; Jensen, Jeffrey D.; Bank, Claudia

    2016-01-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates. PMID:27412710
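    For orientation, the fitness effect in such bulk competitions is commonly summarized as the slope of the log ratio of mutant to reference counts over time. A minimal sketch of that summary statistic follows, assuming nonzero counts and a wild-type reference; the paper's analytical treatment of sampling noise is considerably more detailed.

```python
import numpy as np

def selection_coefficient(mut_counts, wt_counts, times):
    """Least-squares slope of log(mutant/wild-type) against time.

    A standard point estimate of the selection coefficient from
    time-sampled deep-sequencing counts (assumes all counts > 0).
    """
    log_ratio = np.log(np.asarray(mut_counts, float)
                       / np.asarray(wt_counts, float))
    slope, _intercept = np.polyfit(times, log_ratio, 1)
    return slope
```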

  14. Practical Bias Correction in Aerial Surveys of Large Mammals: Validation of Hybrid Double-Observer with Sightability Method against Known Abundance of Feral Horse (Equus caballus) Populations

    PubMed Central

    2016-01-01

    Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and <0.7 standard errors. Precision varied widely, from 6.1% CV to 25.0% CV. In contrast, the pair of surveys conducted around a known management removal produced an estimated change in population between the surveys that was significantly larger than the known reduction. Although the deviation between the two estimates was only 9.1%, the precision estimate (CV = 1.6%) may have been artificially low. It was apparent that use of a helicopter in those surveys perturbed the horses, introducing detection error and heterogeneity in a manner that could not be corrected by our statistical models. Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs. PMID:27139732

  15. Practical Bias Correction in Aerial Surveys of Large Mammals: Validation of Hybrid Double-Observer with Sightability Method against Known Abundance of Feral Horse (Equus caballus) Populations.

    PubMed

    Lubow, Bruce C; Ransom, Jason I

    2016-01-01

    Reliably estimating wildlife abundance is fundamental to effective management. Aerial surveys are one of the only spatially robust tools for estimating large mammal populations, but statistical sampling methods are required to address detection biases that affect accuracy and precision of the estimates. Although various methods for correcting aerial survey bias are employed on large mammal species around the world, these have rarely been rigorously validated. Several populations of feral horses (Equus caballus) in the western United States have been intensively studied, resulting in identification of all unique individuals. This provided a rare opportunity to test aerial survey bias correction on populations of known abundance. We hypothesized that a hybrid method combining simultaneous double-observer and sightability bias correction techniques would accurately estimate abundance. We validated this integrated technique on populations of known size and also on a pair of surveys before and after a known number was removed. Our analysis identified several covariates across the surveys that explained and corrected biases in the estimates. All six tests on known populations produced estimates with deviations from the known value ranging from -8.5% to +13.7% and <0.7 standard errors. Precision varied widely, from 6.1% CV to 25.0% CV. In contrast, the pair of surveys conducted around a known management removal produced an estimated change in population between the surveys that was significantly larger than the known reduction. Although the deviation between the two estimates was only 9.1%, the precision estimate (CV = 1.6%) may have been artificially low. It was apparent that use of a helicopter in those surveys perturbed the horses, introducing detection error and heterogeneity in a manner that could not be corrected by our statistical models. Our results validate the hybrid method, highlight its potentially broad applicability, identify some limitations, and provide insight and guidance for improving survey designs.

  16. [Age and time estimation during different types of activity].

    PubMed

    Gareev, E M; Osipova, L G

    1980-01-01

    The study was concerned with the age characteristics of verbal and operative estimation of time intervals filled with different types of mental and physical activity, as well as intervals free of it. The experiment was conducted on 85 subjects, 7-24 years of age. In all age groups and in both forms of time estimation (except verbal estimation in 10-12-year-old children) there was a significant connection between interval estimation and the type of activity. In adults and in 7-8-year-old children, the connection was significantly tighter for operative estimations than for verbal ones. Unlike senior school children and adults, 7-12-year-old children showed sharp differences in precision between operative and verbal estimations and a discordance in their changes under the influence of activity. Precision and variability were rather similar in all age groups. It is suggested that the obtained data show heterochrony and a different rate of development of the higher nervous activity mechanisms providing for the reflection of time in the form of verbal and voluntary motor reactions to the given interval.

  17. Estimating maneuvers for precise relative orbit determination using GPS

    NASA Astrophysics Data System (ADS)

    Allende-Alba, Gerardo; Montenbruck, Oliver; Ardaens, Jean-Sébastien; Wermuth, Martin; Hugentobler, Urs

    2017-01-01

    Precise relative orbit determination is an essential element for the generation of science products from distributed instrumentation of formation flying satellites in low Earth orbit. According to the mission profile, the required formation is typically maintained and/or controlled by executing maneuvers. In order to generate consistent and precise orbit products, a strategy for maneuver handling is mandatory in order to avoid discontinuities or precision degradation before, after and during maneuver execution. Precise orbit determination offers the possibility of maneuver estimation in an adjustment of single-satellite trajectories using GPS measurements. However, a consistent formulation of a precise relative orbit determination scheme requires the implementation of a maneuver estimation strategy which can be used, in addition, to improve the precision of maneuver estimates by drawing upon the use of differential GPS measurements. The present study introduces a method for precise relative orbit determination based on a reduced-dynamic batch processing of differential GPS pseudorange and carrier phase measurements, which includes maneuver estimation as part of the relative orbit adjustment. The proposed method has been validated using flight data from space missions with different rates of maneuvering activity, including the GRACE, TanDEM-X and PRISMA missions. The results show the feasibility of obtaining precise relative orbits without degradation in the vicinity of maneuvers as well as improved maneuver estimates that can be used for better maneuver planning in flight dynamics operations.

  18. Optimal structure of metaplasticity for adaptive learning

    PubMed Central

    2017-01-01

    Learning from reward feedback in a changing environment requires a high degree of adaptability, yet the precise estimation of reward information demands slow updates. In the framework of estimating reward probability, here we investigated how this tradeoff between adaptability and precision can be mitigated via metaplasticity, i.e. synaptic changes that do not always alter synaptic efficacy. Using the mean-field and Monte Carlo simulations we identified ‘superior’ metaplastic models that can substantially overcome the adaptability-precision tradeoff. These models can achieve both adaptability and precision by forming two separate sets of meta-states: reservoirs and buffers. Synapses in reservoir meta-states do not change their efficacy upon reward feedback, whereas those in buffer meta-states can change their efficacy. Rapid changes in efficacy are limited to synapses occupying buffers, creating a bottleneck that reduces noise without significantly decreasing adaptability. In contrast, more-populated reservoirs can generate a strong signal without manifesting any observable plasticity. By comparing the behavior of our model and a few competing models during a dynamic probability estimation task, we found that superior metaplastic models perform close to optimally for a wider range of model parameters. Finally, we found that metaplastic models are robust to changes in model parameters and that metaplastic transitions are crucial for adaptive learning since replacing them with graded plastic transitions (transitions that change synaptic efficacy) reduces the ability to overcome the adaptability-precision tradeoff. Overall, our results suggest that ubiquitous unreliability of synaptic changes evinces metaplasticity that can provide a robust mechanism for mitigating the tradeoff between adaptability and precision and thus adaptive learning. PMID:28658247

  19. A unified framework for constructing, tuning and assessing photometric redshift density estimates in a selection bias setting

    NASA Astrophysics Data System (ADS)

    Freeman, P. E.; Izbicki, R.; Lee, A. B.

    2017-07-01

    Photometric redshift estimation is an indispensable tool of precision cosmology. One problem that plagues the use of this tool in the era of large-scale sky surveys is that the bright galaxies that are selected for spectroscopic observation do not have properties that match those of (far more numerous) dimmer galaxies; thus, ill-designed empirical methods that produce accurate and precise redshift estimates for the former generally will not produce good estimates for the latter. In this paper, we provide a principled framework for generating conditional density estimates (i.e. photometric redshift PDFs) that takes into account selection bias and the covariate shift that this bias induces. We base our approach on the assumption that the probability that astronomers label a galaxy (i.e. determine its spectroscopic redshift) depends only on its measured (photometric and perhaps other) properties x and not on its true redshift. With this assumption, we can explicitly write down risk functions that allow us to both tune and compare methods for estimating importance weights (i.e. the ratio of densities of unlabelled and labelled galaxies for different values of x) and conditional densities. We also provide a method for combining multiple conditional density estimates for the same galaxy into a single estimate with better properties. We apply our risk functions to an analysis of ≈10⁶ galaxies, mostly observed by the Sloan Digital Sky Survey, and demonstrate through multiple diagnostic tests that our method achieves good conditional density estimates for the unlabelled galaxies.
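    The importance weights at the heart of this framework (the density ratio of unlabelled to labelled covariates) can be estimated with any probabilistic classifier. A minimal sketch using logistic regression, one generic estimator among those such frameworks tune and compare; the rescaling convention and names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(x_labeled, x_unlabeled):
    """Estimate w(x) = f_unlabeled(x) / f_labeled(x) at the labeled points.

    Trains a classifier to separate the two samples; by Bayes' rule the
    odds ratio, rescaled by the sample sizes, estimates the density ratio.
    """
    X = np.vstack([x_labeled, x_unlabeled])
    y = np.concatenate([np.zeros(len(x_labeled)),
                        np.ones(len(x_unlabeled))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = clf.predict_proba(x_labeled)[:, 1]
    prior = len(x_labeled) / len(x_unlabeled)
    return prior * p / (1.0 - p)
```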

  20. Estimating annual suspended-sediment loads in the northern and central Appalachian Coal region

    USGS Publications Warehouse

    Koltun, G.F.

    1985-01-01

    Multiple-regression equations were developed for estimating the annual suspended-sediment load, for a given year, from small to medium-sized basins in the northern and central parts of the Appalachian coal region. The regression analysis was performed with data for land use, basin characteristics, streamflow, rainfall, and suspended-sediment load for 15 sites in the region. Two variables, the maximum mean-daily discharge occurring within the year and the annual peak discharge, explained much of the variation in the annual suspended-sediment load. Separate equations were developed employing each of these discharge variables. Standard errors for both equations are relatively large, which suggests that future predictions will probably have a low level of precision. This level of precision, however, may be acceptable for certain purposes. It is therefore left to the user to assess whether the level of precision provided by these equations is acceptable for the intended application.

  1. Noisy metrology: a saturable lower bound on quantum Fisher information

    NASA Astrophysics Data System (ADS)

    Yousefjani, R.; Salimi, S.; Khorashad, A. S.

    2017-06-01

    In order to provide a guaranteed precision and a more accurate judgement about the true value of the Cramér-Rao bound and its scaling behavior, an upper bound on the precision of estimation (equivalently, a lower bound on the quantum Fisher information) is introduced. Unlike the bounds previously introduced in the literature, this upper bound is saturable and yields a practical instruction for estimating the parameter by preparing the optimal initial state and performing the optimal measurement. The bound is based on the underlying dynamics, and its calculation is straightforward, requiring only the matrix representation of the quantum maps responsible for encoding the parameter. This allows us to apply the bound to open quantum systems whose dynamics are described by either semigroup or non-semigroup maps. The reliability and efficiency of the method in predicting the ultimate precision limit are demonstrated by three main examples.

  2. Gene expression during blow fly development: improving the precision of age estimates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2011-01-01

    Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies. © 2010 American Academy of Forensic Sciences.

  3. Commentary on Holmes et al. (2007): resolving the debate on when extinction risk is predictable.

    PubMed

    Ellner, Stephen P; Holmes, Elizabeth E

    2008-08-01

    We reconcile the findings of Holmes et al. (Ecology Letters, 10, 2007, 1182) that 95% confidence intervals for quasi-extinction risk were narrow for many vertebrates of conservation concern, with previous theory predicting wide confidence intervals. We extend previous theory, concerning the precision of quasi-extinction estimates as a function of population dynamic parameters, prediction intervals and quasi-extinction thresholds, and provide an approximation that specifies the prediction interval and threshold combinations where quasi-extinction estimates are precise (vs. imprecise). This allows PVA practitioners to define the prediction interval and threshold regions of safety (low risk with high confidence), danger (high risk with high confidence), and uncertainty.

  4. Analysis of spatial correlation in predictive models of forest variables that use LiDAR auxiliary information

    Treesearch

    F. Mauro; Vicente J. Monleon; H. Temesgen; L.A. Ruiz

    2017-01-01

    Accounting for spatial correlation of LiDAR model errors can improve the precision of model-based estimators. To estimate spatial correlation, sample designs that provide close observations are needed, but their implementation might be prohibitively expensive. To quantify the gains obtained by accounting for the spatial correlation of model errors, we examined (

  5. The Estimation of Precisions in the Planning of Uas Photogrammetric Surveys

    NASA Astrophysics Data System (ADS)

    Passoni, D.; Federici, B.; Ferrando, I.; Gagliolo, S.; Sguerso, D.

    2018-05-01

    The Unmanned Aerial System (UAS) is widely used in photogrammetric surveys of both structures and small areas. Geomatics focuses attention on the metric quality of the final products of the survey, creating several 3D modelling applications from UAS images. As is widely known, the quality of the results derives from the quality of the image acquisition phase, which needs an a priori estimation of the expected precisions. The planning phase is typically managed using dedicated tools, adapted from the traditional aerial-photogrammetric flight plan. But a UAS flight has features completely different from a traditional one. Hence, the use of UAS for photogrammetric applications today requires deeper knowledge of planning. The basic idea of this research is to provide a drone photogrammetric flight planning tool that considers the required metric precisions, given a priori the classical parameters of photogrammetric planning: flight altitude, overlaps and the geometric parameters of the camera. The created "office suite" allows realistic planning of a photogrammetric survey, starting from an approximate knowledge of the Digital Surface Model (DSM) and the effective attitude parameters, which change along the route. The planning products are the image overlaps, the Ground Sample Distance (GSD) and the precision of each pixel, taking into account the real geometry. The different tested procedures, the obtained results and the solution proposed for the a priori estimation of precisions in the particular case of UAS surveys are reported here.
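    The nominal planning quantities such a tool starts from follow directly from pinhole-camera geometry. A minimal sketch, assuming a nadir-looking camera over flat terrain; these are exactly the idealizations whose violation (attitude changes, real DSM) the paper sets out to model. All names are illustrative.

```python
def flight_plan_metrics(height_m, focal_mm, pixel_um,
                        img_w_px, img_h_px, base_m, line_spacing_m):
    """Nominal GSD, footprint and overlaps for a nadir-looking camera.

    base_m: along-track distance between exposures; line_spacing_m:
    distance between adjacent flight lines. Assumes the image height
    is the along-track dimension.
    """
    gsd = pixel_um * 1e-6 * height_m / (focal_mm * 1e-3)  # m/pixel
    foot_w = gsd * img_w_px                               # across track, m
    foot_h = gsd * img_h_px                               # along track, m
    forward_overlap = 1.0 - base_m / foot_h
    side_overlap = 1.0 - line_spacing_m / foot_w
    return gsd, (foot_w, foot_h), forward_overlap, side_overlap
```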

  6. Precision of proportion estimation with binary compressed Raman spectrum.

    PubMed

    Réfrégier, Philippe; Scotté, Camille; de Aguiar, Hilton B; Rigneault, Hervé; Galland, Frédéric

    2018-01-01

    The precision of proportion estimation with binary filtering of a Raman spectrum mixture is analyzed when the number of binary filters is equal to the number of species present and when the measurements are corrupted by Poisson photon noise. It is shown that the Cramér-Rao bound provides a useful methodology for analyzing the performance of such an approach, in particular when the binary filters are orthogonal. It is demonstrated that a simple linear mean square error estimation method is efficient (i.e., has a variance equal to the Cramér-Rao bound). The evolution of the Cramér-Rao bound is analyzed when the measuring times are optimized or when the proportion assumed for binary filter synthesis is not optimized. Two strategies for the appropriate choice of this assumed proportion are also analyzed for the binary filter synthesis.
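    Under Poisson noise the Fisher information, and hence the Cramér-Rao bound, has a simple closed form: each filter measurement contributes (dλ/dp)²/λ. The sketch below computes the bound for a two-species mixture seen through binary filters; the measurement model is illustrative, not the paper's exact filter-synthesis procedure.

```python
import numpy as np

def crb_proportion(p, s1, s2, filters, times):
    """Cramér-Rao lower bound on the variance of a proportion estimate.

    p: true proportion of species 1; s1, s2: spectra (counts/s per
    channel); filters: iterable of 0/1 masks; times: integration time
    per filter. Assumes every expected count is positive.
    """
    fisher = 0.0
    for f, t in zip(filters, times):
        lam = t * np.dot(f, p * s1 + (1.0 - p) * s2)  # expected counts
        dlam = t * np.dot(f, s1 - s2)                 # d(lambda)/dp
        fisher += dlam**2 / lam
    return 1.0 / fisher
```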

  7. Mapping snow depth within a tundra ecosystem using multiscale observations and Bayesian methods

    DOE PAGES

    Wainwright, Haruko M.; Liljedahl, Anna K.; Dafflon, Baptiste; ...

    2017-04-03

    This paper compares and integrates different strategies to characterize the variability of end-of-winter snow depth and its relationship to topography in ice-wedge polygon tundra of Arctic Alaska. Snow depth was measured using in situ snow depth probes and estimated using ground-penetrating radar (GPR) surveys and the photogrammetric detection and ranging (phodar) technique with an unmanned aerial system (UAS). We found that GPR data provided high-precision estimates of snow depth (RMSE=2.9cm), with a spatial sampling of 10cm along transects. Phodar-based approaches provided snow depth estimates in a less laborious manner compared to GPR and probing, while yielding a high precision (RMSE=6.0cm) and a fine spatial sampling (4cm×4cm). We then investigated the spatial variability of snow depth and its correlation to micro- and macrotopography using the snow-free lidar digital elevation map (DEM) and the wavelet approach. We found that the end-of-winter snow depth was highly variable over short (several meter) distances, and the variability was correlated with microtopography. Microtopographic lows (i.e., troughs and centers of low-centered polygons) were filled in with snow, which resulted in a smooth and even snow surface following macrotopography. We developed and implemented a Bayesian approach to integrate the snow-free lidar DEM and multiscale measurements (probe and GPR) as well as the topographic correlation for estimating snow depth over the landscape. Our approach led to high-precision estimates of snow depth (RMSE=6.0cm), at 0.5m resolution and over the lidar domain (750m×700m).

  8. Mapping snow depth within a tundra ecosystem using multiscale observations and Bayesian methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wainwright, Haruko M.; Liljedahl, Anna K.; Dafflon, Baptiste

    This paper compares and integrates different strategies to characterize the variability of end-of-winter snow depth and its relationship to topography in ice-wedge polygon tundra of Arctic Alaska. Snow depth was measured using in situ snow depth probes and estimated using ground-penetrating radar (GPR) surveys and the photogrammetric detection and ranging (phodar) technique with an unmanned aerial system (UAS). We found that GPR data provided high-precision estimates of snow depth (RMSE=2.9cm), with a spatial sampling of 10cm along transects. Phodar-based approaches provided snow depth estimates in a less laborious manner compared to GPR and probing, while yielding a high precision (RMSE=6.0cm) and a fine spatial sampling (4cm×4cm). We then investigated the spatial variability of snow depth and its correlation to micro- and macrotopography using the snow-free lidar digital elevation map (DEM) and the wavelet approach. We found that the end-of-winter snow depth was highly variable over short (several meter) distances, and the variability was correlated with microtopography. Microtopographic lows (i.e., troughs and centers of low-centered polygons) were filled in with snow, which resulted in a smooth and even snow surface following macrotopography. We developed and implemented a Bayesian approach to integrate the snow-free lidar DEM and multiscale measurements (probe and GPR) as well as the topographic correlation for estimating snow depth over the landscape. Our approach led to high-precision estimates of snow depth (RMSE=6.0cm), at 0.5m resolution and over the lidar domain (750m×700m).

  9. Estimation of the Parameters in a Two-State System Coupled to a Squeezed Bath

    NASA Astrophysics Data System (ADS)

    Hu, Yao-Hua; Yang, Hai-Feng; Tan, Yong-Gang; Tao, Ya-Ping

    2018-04-01

    Estimation of the phase and weight parameters of a two-state system in a squeezed bath by calculating the quantum Fisher information is investigated. The results show that, both for phase estimation and for weight estimation, the quantum Fisher information always decays with time and changes periodically with the phases. The estimation precision can be enhanced by choosing proper values of the phases and the squeezing parameter. These results can serve as a reference for practical applications of parameter estimation in a squeezed bath.

  10. Time Delay Embedding Increases Estimation Precision of Models of Intraindividual Variability

    ERIC Educational Resources Information Center

    von Oertzen, Timo; Boker, Steven M.

    2010-01-01

    This paper investigates the precision of parameters estimated from local samples of time dependent functions. We find that "time delay embedding," i.e., structuring data prior to analysis by constructing a data matrix of overlapping samples, increases the precision of parameter estimates and in turn statistical power compared to standard…

  11. Evaluating single-pass catch as a tool for identifying spatial pattern in fish distribution

    USGS Publications Warehouse

    Bateman, Douglas S.; Gresswell, Robert E.; Torgersen, Christian E.

    2005-01-01

    We evaluate the efficacy of single-pass electrofishing without blocknets as a tool for collecting spatially continuous fish distribution data in headwater streams. We compare spatial patterns in abundance, sampling effort, and length-frequency distributions from single-pass sampling of coastal cutthroat trout (Oncorhynchus clarki clarki) to data obtained from a more precise multiple-pass removal electrofishing method in two mid-sized (500–1000 ha) forested watersheds in western Oregon. Abundance estimates from single- and multiple-pass removal electrofishing were positively correlated in both watersheds, r = 0.99 and 0.86. There were no significant trends in capture probabilities at the watershed scale (P > 0.05). Moreover, among-sample variation in fish abundance was higher than within-sample error in both streams indicating that increased precision of unit-scale abundance estimates would provide less information on patterns of abundance than increasing the fraction of habitat units sampled. In the two watersheds, respectively, single-pass electrofishing captured 78 and 74% of the estimated population of cutthroat trout with 7 and 10% of the effort. At the scale of intermediate-sized watersheds, single-pass electrofishing exhibited a sufficient level of precision to be effective in detecting spatial patterns of cutthroat trout abundance and may be a useful tool for providing the context for investigating fish-habitat relationships at multiple scales.
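    The multiple-pass benchmark in such comparisons is the removal estimator. For orientation, its simplest two-pass form (often attributed to Zippin or Seber) is sketched below; the study itself used a multiple-pass version, so this is a simplified stand-in rather than the authors' exact model.

```python
def two_pass_removal(c1, c2):
    """Two-pass removal estimates of abundance and capture probability.

    c1, c2: catches on the first and second passes, with no replacement
    between passes. Requires declining catches (c1 > c2).
    """
    if c1 <= c2:
        raise ValueError("removal model requires c1 > c2")
    n_hat = c1 * c1 / (c1 - c2)   # estimated abundance
    p_hat = 1.0 - c2 / c1         # estimated per-pass capture probability
    return n_hat, p_hat
```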

  12. Polynomial Fitting of DT-MRI Fiber Tracts Allows Accurate Estimation of Muscle Architectural Parameters

    PubMed Central

    Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua

    2012-01-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m⁻¹), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm³ voxel volume with isotropic resolution; 13.5 mm³ volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ = 15.3 m⁻¹), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094
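    The fitting idea is straightforward to reproduce. A minimal sketch, assuming each coordinate of a tract is fitted as a 2nd-order polynomial in a normalized parameter, with curvature taken from the standard parameterization-invariant formula κ = |r′ × r″| / |r′|³; this is a generic implementation of the approach, not the authors' code.

```python
import numpy as np

def tract_curvature(points):
    """Curvature along a 2nd-order polynomial fit of a fiber tract.

    points: (n, 3) array of tract coordinates. Returns curvature at
    each of the n parameter values, in 1/(units of the input).
    """
    pts = np.asarray(points, dtype=float)
    t = np.linspace(0.0, 1.0, len(pts))
    coeffs = [np.polyfit(t, pts[:, d], 2) for d in range(3)]
    d1 = np.array([np.polyval(np.polyder(c, 1), t) for c in coeffs]).T
    d2 = np.array([np.polyval(np.polyder(c, 2), t) for c in coeffs]).T
    cross = np.cross(d1, d2)
    return np.linalg.norm(cross, axis=1) / np.linalg.norm(d1, axis=1) ** 3
```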

  13. Real-Time PPP Based on the Coupling Estimation of Clock Bias and Orbit Error with Broadcast Ephemeris.

    PubMed

    Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan

    2015-07-22

    Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed to apply the coupling estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effects on positioning accuracy with clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias and the effects of residual orbit error on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the traditional PPP mode because of its advantages of independence, high positioning precision, and real-time performance. It could be an alternative solution for regional positioning service before global PPP service comes into operation.

  14. Real-Time PPP Based on the Coupling Estimation of Clock Bias and Orbit Error with Broadcast Ephemeris

    PubMed Central

    Pan, Shuguo; Chen, Weirong; Jin, Xiaodong; Shi, Xiaofei; He, Fan

    2015-01-01

    Satellite orbit error and clock bias are the keys to precise point positioning (PPP). The traditional PPP algorithm requires precise satellite products based on worldwide permanent reference stations. Such an algorithm requires considerable work and hardly achieves real-time performance. However, real-time positioning service will be the dominant mode in the future. IGS is providing such an operational service (RTS) and there are also commercial systems like Trimble RTX in operation. On the basis of the regional Continuous Operational Reference System (CORS), a real-time PPP algorithm is proposed to apply the coupling estimation of clock bias and orbit error. The projection of orbit error onto the satellite-receiver range has the same effects on positioning accuracy with clock bias. Therefore, in satellite clock estimation, part of the orbit error can be absorbed by the clock bias and the effects of residual orbit error on positioning accuracy can be weakened by the evenly distributed satellite geometry. In consideration of the simple structure of pseudorange equations and the high precision of carrier-phase equations, the clock bias estimation method coupled with orbit error is also improved. Rovers obtain PPP results by receiving broadcast ephemeris and real-time satellite clock bias coupled with orbit error. By applying the proposed algorithm, the precise orbit products provided by GNSS analysis centers are rendered no longer necessary. On the basis of previous theoretical analysis, a real-time PPP system was developed. Some experiments were then designed to verify this algorithm. Experimental results show that the newly proposed approach performs better than the traditional PPP based on International GNSS Service (IGS) real-time products. The positioning accuracies of the rovers inside and outside the network are improved by 38.8% and 36.1%, respectively. The PPP convergence speeds are improved by up to 61.4% and 65.9%. The new approach can change the traditional PPP mode because of its advantages of independence, high positioning precision, and real-time performance. It could be an alternative solution for regional positioning service before global PPP service comes into operation. PMID:26205276

  15. Quantitative Determination of Isotope Ratios from Experimental Isotopic Distributions

    PubMed Central

    Kaur, Parminder; O’Connor, Peter B.

    2008-01-01

    Isotope variability due to natural processes provides important information for studying a variety of complex natural phenomena, from the origins of a particular sample to the traces of biochemical reaction mechanisms. These measurements require high-precision determination of the isotope ratios of a particular element involved. Isotope Ratio Mass Spectrometers (IRMS) are widely employed for such high-precision analysis but have some limitations. This work aims at overcoming the limitations inherent to IRMS by estimating the elemental isotopic abundance from the experimental isotopic distribution. In particular, a computational method has been derived which allows the calculation of 13C/12C ratios from whole isotopic distributions, given certain caveats, and these calculations are applied to several cases to demonstrate their utility. The limitations of the method in terms of the required number of ions and S/N ratio are discussed. For high-precision estimates of the isotope ratios, this method requires very precise measurement of the experimental isotopic distribution abundances, free from any artifacts introduced by noise, sample heterogeneity, or other experimental sources. PMID:17263354
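
    For intuition, here is a minimal sketch of the core relationship under a strong simplifying assumption (a carbon-only isotopic distribution; the function and values are illustrative, not the paper's full method): the M+1/M peak ratio of an analyte with n_C carbons is approximately n_C times the 13C/12C ratio.

    ```python
    # Binomial model for carbon isotopologues with 13C fraction p:
    # P(M+1)/P(M) = n_C * p/(1-p) = n_C * r, where r = 13C/12C.

    def ratio_13C_12C(I_M0, I_M1, n_C):
        """Estimate r = 13C/12C from the monoisotopic (M) and M+1 peak
        intensities, assuming carbon is the only polyisotopic element."""
        return (I_M1 / I_M0) / n_C

    # Example: a peptide with 50 carbons and synthetic peak intensities
    print(ratio_13C_12C(I_M0=1.0, I_M1=0.556, n_C=50))  # ~0.0111 (≈1.11%)
    ```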

  16. Spatial Patterns and Sequential Sampling Plans for Predators of Aphis glycines (Hemiptera: Aphididae) in Minnesota Soybean.

    PubMed

    Tran, Anh K; Koch, Robert L

    2017-06-01

    The soybean aphid, Aphis glycines Matsumura, is an economically important soybean pest. Many studies have demonstrated that predatory insects are important in suppressing A. glycines population growth. However, to improve the utilization of predators in A. glycines management, sampling plans need to be developed and validated for predators. Aphid predators were sampled in soybean fields near Rosemount, Minnesota, from 2006-2007 and 2013-2015 with sample sizes of 20-80 plants. Sampling plans were developed for Orius insidiosus (Say), Harmonia axyridis (Pallas), and all aphidophagous Coccinellidae species combined. Taylor's power law parameters from the regression of log variance versus log mean suggested aggregated spatial patterns for immature and adult stages combined for O. insidiosus, H. axyridis, and Coccinellidae in soybean fields. Using the parameters from Taylor's power law and Green's method, sequential fixed-precision sampling plans were developed to estimate the density for each predator taxon at desired precision levels of 0.10 and 0.25. To achieve a desired precision of 0.10 and 0.25, the average sample number (ASN) ranged from 398-713 and 64-108 soybean plants, respectively, for all species. Resulting ASNs were relatively large and assumed impractical for most purposes; therefore, the desired precision levels were adjusted to determine the level of precision associated with a more practical ASN. Final analysis indicated an ASN of 38 soybean plants provided precision of 0.32-0.40 for the predators. Development of sampling plans should provide guidance for improved estimation of predator densities for A. glycines pest management programs and for research purposes.
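
    The fixed-precision machinery referenced here is compact enough to sketch (hypothetical Taylor's power law parameters, not the fitted values from this study):

    ```python
    # Taylor's power law: variance = a * mean**b; precision D = SE/mean.

    def required_n(mean_density, a, b, D):
        """Sample size needed to estimate a given mean density at precision D."""
        return a * mean_density**(b - 2.0) / D**2

    def green_stop_line(n, a, b, D):
        """Green's (1970) stop line: sampling stops once the cumulative
        count T_n meets or exceeds this boundary (1 < b < 2 assumed)."""
        return (D**2 * n**(b - 1.0) / a) ** (1.0 / (b - 2.0))

    a, b = 2.5, 1.4  # hypothetical TPL parameters
    for D in (0.10, 0.25):
        print(D, round(required_n(mean_density=0.5, a=a, b=b, D=D)))
    print(round(green_stop_line(n=10, a=a, b=b, D=0.25), 1))
    ```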

  17. Benchmark dose and the three Rs. Part I. Getting more information from the same number of animals.

    PubMed

    Slob, Wout

    2014-08-01

    Evaluating dose-response data using the benchmark dose (BMD) approach rather than the no observed adverse effect level (NOAEL) approach implies a considerable step forward from the perspective of the Reduction, Replacement, and Refinement, the three Rs, in particular the R of reduction: more information is obtained from the same number of animals, or, vice versa, similar information may be obtained from fewer animals. The first part of this twin paper focuses on the former, the second on the latter aspect. Regarding the former, the BMD approach provides more information from any given dose-response dataset in various ways. First, the BMDL (= BMD lower confidence bound) provides more information by its more explicit definition. Further, as compared to the NOAEL approach, the BMD approach results in more statistical precision in the value of the point of departure (PoD) for deriving exposure limits. While some of the animals in the study do not directly contribute to the numerical value of a NOAEL, all animals are effectively used and do contribute to a BMDL. In addition, the BMD approach allows for combining similar datasets for the same chemical (e.g., both sexes) in a single analysis, which further increases precision. By combining a dose-response dataset with similar historical data for other chemicals, the precision can be increased even more substantially. Further, the BMD approach results in more precise estimates for relative potency factors (RPFs, or TEFs). And finally, the BMD approach is not only more precise, it also allows for quantification of the precision in the BMD estimate, which is not possible in the NOAEL approach.
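
    A toy sketch of the benchmark-dose idea for a continuous endpoint (synthetic data; not Slob's full framework, and omitting the confidence-bound step that yields the BMDL):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Fit y = a * exp(b * dose) and define the BMD as the dose giving a
    # 5% change in the mean response (BMR = 0.05). A BMDL would further
    # require a lower confidence bound (e.g. profile likelihood or
    # bootstrap), which is omitted here for brevity.

    dose = np.repeat([0, 1, 3, 10, 30], 5).astype(float)
    rng = np.random.default_rng(4)
    y = 10.0 * np.exp(-0.02 * dose) * rng.lognormal(0, 0.05, dose.size)

    (a, b), _ = curve_fit(lambda d, a, b: a * np.exp(b * d),
                          dose, y, p0=(10, -0.01))
    bmd = np.log(1 - 0.05) / b  # dose at which the mean drops by 5%
    print(f"BMD(5%) ≈ {bmd:.2f}")
    ```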

  18. Population density estimated from locations of individuals on a passive detector array

    USGS Publications Warehouse

    Efford, Murray G.; Dawson, Deanna K.; Borchers, David L.

    2009-01-01

    The density of a closed population of animals occupying stable home ranges may be estimated from detections of individuals on an array of detectors, using newly developed methods for spatially explicit capture–recapture. Likelihood-based methods provide estimates for data from multi-catch traps or from devices that record presence without restricting animal movement ("proximity" detectors such as camera traps and hair snags). As originally proposed, these methods require multiple sampling intervals. We show that equally precise and unbiased estimates may be obtained from a single sampling interval, using only the spatial pattern of detections. This considerably extends the range of possible applications, and we illustrate the potential by estimating density from simulated detections of bird vocalizations on a microphone array. Acoustic detection can be defined as occurring when received signal strength exceeds a threshold. We suggest detection models for binary acoustic data, and for continuous data comprising measurements of all signals above the threshold. While binary data are often sufficient for density estimation, modeling signal strength improves precision when the microphone array is small.
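
    A sketch of a signal-strength detection model of the kind suggested (illustrative parameters, not fitted values): the expected received level decays with distance, and a detection occurs when the level exceeds a threshold c.

    ```python
    from math import erf, sqrt

    def p_detect(d, beta0=90.0, beta1=-0.3, sigma=5.0, c=55.0):
        """P(received level > c) at distance d metres, assuming the level
        is Gaussian with mean beta0 + beta1*d (dB) and SD sigma."""
        mu = beta0 + beta1 * d
        z = (c - mu) / sigma
        return 0.5 * (1.0 - erf(z / sqrt(2.0)))  # 1 - Phi(z)

    for d in (0, 50, 100, 150):
        print(d, round(p_detect(d), 3))
    ```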

  19. A weighted least squares estimation of the polynomial regression model on paddy production in the area of Kedah and Perlis

    NASA Astrophysics Data System (ADS)

    Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd

    2017-08-01

    The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation. In other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its minimum-variance property in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and would give highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
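
    A minimal weighted-least-squares sketch (synthetic data, not the paddy dataset) showing the inverse-variance weighting described above:

    ```python
    import numpy as np

    # Quadratic fit where the error variance grows with x, so each point
    # is weighted by the inverse of its (assumed known) variance.
    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 60)
    sigma = 0.2 + 0.15 * x                      # non-constant error SD
    y = 1.0 + 0.8 * x - 0.05 * x**2 + rng.normal(0, sigma)

    X = np.column_stack([np.ones_like(x), x, x**2])  # polynomial design
    w = 1.0 / sigma**2                               # WLS weights

    # Solve (X'WX) beta = X'Wy via an equivalent weighted system:
    Xw = X * np.sqrt(w)[:, None]
    yw = y * np.sqrt(w)
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    print(beta)  # ~ [1.0, 0.8, -0.05]
    ```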

  20. Cobalt: Development and Maturation of GN&C Technologies for Precision Landing

    NASA Technical Reports Server (NTRS)

    Carson, John M.; Restrepo, Carolina; Seubert, Carl; Amzajerdian, Farzin

    2016-01-01

    The CoOperative Blending of Autonomous Landing Technologies (COBALT) instrument is a terrestrial test platform for development and maturation of guidance, navigation and control (GN&C) technologies for precision landing. The project is developing a third-generation Langley Research Center (LaRC) navigation Doppler lidar (NDL) for ultra-precise velocity and range measurements, which will be integrated and tested with the Jet Propulsion Laboratory (JPL) lander vision system (LVS) for terrain relative navigation (TRN) position estimates. These technologies together provide precise navigation knowledge that is critical for a controlled and precise touchdown. The COBALT hardware will be integrated in 2017 into the GN&C subsystem of the Xodiac rocket-propulsive vertical test bed (VTB) developed by Masten Space Systems, and two terrestrial flight campaigns will be conducted: one open-loop (i.e., passive) and one closed-loop (i.e., active).

  1. Optimal Design for the Precise Estimation of an Interaction Threshold: The Impact of Exposure to a Mixture of 18 Polyhalogenated Aromatic Hydrocarbons

    PubMed Central

    Yeatts, Sharon D.; Gennings, Chris; Crofton, Kevin M.

    2014-01-01

    Traditional additivity models provide little flexibility in modeling the dose–response relationships of the single agents in a mixture. While the flexible single chemical required (FSCR) methods allow greater flexibility, their implicit nature is an obstacle in the formation of the parameter covariance matrix, which forms the basis for many statistical optimality design criteria. The goal of this effort is to develop a method for constructing the parameter covariance matrix for the FSCR models, so that (local) alphabetic optimality criteria can be applied. Data from Crofton et al. are provided as motivation; in an experiment designed to determine the effect of 18 polyhalogenated aromatic hydrocarbons on serum total thyroxine (T4), the interaction among the chemicals was statistically significant. Gennings et al. fit the FSCR interaction threshold model to the data. The resulting estimate of the interaction threshold was positive and within the observed dose region, providing evidence of a dose-dependent interaction. However, the corresponding likelihood-ratio-based confidence interval was wide and included zero. In order to more precisely estimate the location of the interaction threshold, supplemental data are required. Using the available data as the first stage, the Ds-optimal second-stage design criterion was applied to minimize the variance of the hypothesized interaction threshold. Practical concerns associated with the resulting design are discussed and addressed using the penalized optimality criterion. Results demonstrate that the penalized Ds-optimal second-stage design can be used to more precisely define the interaction threshold while maintaining the characteristics deemed important in practice. PMID:22640366

  2. A Statistical Guide to the Design of Deep Mutational Scanning Experiments.

    PubMed

    Matuszewski, Sebastian; Hildebrandt, Marcel E; Ghenu, Ana-Hermina; Jensen, Jeffrey D; Bank, Claudia

    2016-09-01

    The characterization of the distribution of mutational effects is a key goal in evolutionary biology. Recently developed deep-sequencing approaches allow for accurate and simultaneous estimation of the fitness effects of hundreds of engineered mutations by monitoring their relative abundance across time points in a single bulk competition. Naturally, the achievable resolution of the estimated fitness effects depends on the specific experimental setup, the organism and type of mutations studied, and the sequencing technology utilized, among other factors. By means of analytical approximations and simulations, we provide guidelines for optimizing time-sampled deep-sequencing bulk competition experiments, focusing on the number of mutants, the sequencing depth, and the number of sampled time points. Our analytical results show that sampling more time points together with extending the duration of the experiment improves the achievable precision disproportionately compared with increasing the sequencing depth or reducing the number of competing mutants. Even if the duration of the experiment is fixed, sampling more time points and clustering these at the beginning and the end of the experiment increase experimental power and allow for efficient and precise assessment of the entire range of selection coefficients. Finally, we provide a formula for calculating the 95%-confidence interval for the measurement error estimate, which we implement as an interactive web tool. This allows for quantification of the maximum expected a priori precision of the experimental setup, as well as for a statistical threshold for determining deviations from neutrality for specific selection coefficient estimates.
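
    The core estimator in such bulk competitions can be sketched simply (synthetic counts; the authors' pipeline adds error modeling on top): each mutant's selection coefficient is the slope of the log ratio of mutant to wild-type reads against time.

    ```python
    import numpy as np

    t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])            # sampled time points
    reads_wt = np.array([5000, 5200, 5100, 4900, 5050])  # wild-type reads
    reads_mut = np.array([500, 460, 430, 390, 360])      # one mutant's reads

    # Regress log(mutant/wild-type) on time; the slope is s.
    y = np.log(reads_mut / reads_wt)
    A = np.column_stack([np.ones_like(t), t])
    (intercept, s), *_ = np.linalg.lstsq(A, y, rcond=None)
    print(f"selection coefficient s ≈ {s:.3f}")  # negative => deleterious
    ```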

  3. Validation of UARS Microwave Limb Sounder 183 GHz H2O Measurements

    NASA Technical Reports Server (NTRS)

    Lahoz, W. A.; Suttie, M. R.; Froidevaux, L.; Harwood, R. S.; Lau, C. L.; Lungu, T. A.; Peckham, G. E.; Pumphrey, H. C.; Read, W. G.; Shippony, Z.

    1996-01-01

    The Upper Atmosphere Research Satellite (UARS) microwave limb sounder (MLS) makes measurements of thermal emission at 183.3 GHz which are used to infer the concentration of water vapor over a pressure range of 46-0.2hPa (approximately 20-60 km). We provide a validation of MLS H2O by analyzing the integrity of the measurements, by providing an error characterization, and by comparison with data from other instruments. It is estimated that version 3 MLS H2O retrievals are accurate to within 20-25% in the lower stratosphere and to within 8-13% in the upper stratosphere and lower mesosphere. The precision of a single profile is estimated to be approximately 0.15 parts per million by volume (ppmv) in the midstratosphere and 0.2 ppmv in the lower and upper stratosphere. In the lower mesosphere the estimate of a single profile precision is 0.25-0.45 ppmv. During polar winter conditions, H2O retrievals at 46 hPa can have a substantial contribution from climatology. The vertical resolution of MLS H2O retrievals is approximately 5 km.

  4. Demonstration of line transect methodologies to estimate urban gray squirrel density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hein, E.W.

    1997-11-01

    Because studies estimating density of gray squirrels (Sciurus carolinensis) have been labor intensive and costly, I demonstrate the use of line transect surveys to estimate gray squirrel density and determine the costs of conducting surveys to achieve precise estimates. Density estimates are based on four transects that were surveyed five times from 30 June to 9 July 1994. Using the program DISTANCE, I estimated there were 4.7 (95% CI = 1.86-11.92) gray squirrels/ha on the Clemson University campus. Eleven additional surveys would have decreased the percent coefficient of variation from 30% to 20% and would have cost approximately $114. Estimating urban gray squirrel density using line transect surveys is cost effective and can provide unbiased estimates of density, provided that none of the assumptions of distance sampling theory are violated.

  5. Precision of four otolith techniques for estimating age of white perch from a thermally altered reservoir

    USGS Publications Warehouse

    Snow, Richard A.; Porta, Michael J.; Long, James M.

    2018-01-01

    The White Perch Morone americana is an invasive species in many Midwestern states and is widely distributed in reservoir systems, yet little is known about the species' age structure and population dynamics. White Perch were first observed in Sooner Reservoir, a thermally altered cooling reservoir in Oklahoma, by the Oklahoma Department of Wildlife Conservation in 2006. It is unknown how thermally altered systems like Sooner Reservoir may affect the precision of White Perch age estimates. Previous studies have found that age structures from Largemouth Bass Micropterus salmoides and Bluegills Lepomis macrochirus from thermally altered reservoirs had false annuli, which increased error when estimating ages. Our objective was to quantify the precision of White Perch age estimates using four sagittal otolith preparation techniques (whole, broken, browned, and stained). Because Sooner Reservoir is thermally altered, we also wanted to identify the best month to collect a White Perch age sample based on aging precision. Ages of 569 White Perch (20–308 mm TL) were estimated using the four techniques. Age estimates from broken, stained, and browned otoliths ranged from 0 to 8 years; whole‐view otolith age estimates ranged from 0 to 7 years. The lowest mean coefficient of variation (CV) was obtained using broken otoliths, whereas the highest CV was observed using browned otoliths. July was the most precise month (lowest mean CV) for estimating age of White Perch, whereas April was the least precise month (highest mean CV). These results underscore the importance of knowing the best method to prepare otoliths for achieving the most precise age estimates and the best time of year to obtain those samples, as these factors may affect other estimates of population dynamics.

  6. Using simulation to evaluate wildlife survey designs: polar bears and seals in the Chukchi Sea.

    PubMed

    Conn, Paul B; Moreland, Erin E; Regehr, Eric V; Richmond, Erin L; Cameron, Michael F; Boveng, Peter L

    2016-01-01

    Logistically demanding and expensive wildlife surveys should ideally yield defensible estimates. Here, we show how simulation can be used to evaluate alternative survey designs for estimating wildlife abundance. Specifically, we evaluate the potential of instrument-based aerial surveys (combining infrared imagery with high-resolution digital photography to detect and identify species) for estimating abundance of polar bears and seals in the Chukchi Sea. We investigate the consequences of different levels of survey effort, flight track allocation and model configuration on bias and precision of abundance estimators. For bearded seals (0.07 animals km⁻²) and ringed seals (1.29 animals km⁻²), we find that eight flights traversing ≈7840 km are sufficient to achieve target precision levels (coefficient of variation (CV) < 20%) for a 2.94×10⁵ km² study area. For polar bears (provisionally, 0.003 animals km⁻²), 12 flights traversing ≈11 760 km resulted in CVs ranging from 28 to 35%. Estimators were relatively unbiased with similar precision over different flight track allocation strategies and estimation models, although some combinations had superior performance. These findings suggest that instrument-based aerial surveys may provide a viable means for monitoring seal and polar bear populations on the surface of the sea ice over large Arctic regions. More broadly, our simulation-based approach to evaluating survey designs can serve as a template for biologists designing their own surveys.

  7. Stroke Onset Time Determination Using MRI Relaxation Times without Non-Ischaemic Reference in A Rat Stroke Model

    PubMed Central

    Knight, Michael J.; McGarry, Bryony M.; Jokivarsi, Kimmo T.; Gröhn, Olli H.J.; Kauppinen, Risto A.

    2017-01-01

    Background Objective timing of stroke in emergency departments is expected to improve patient stratification. Magnetic resonance imaging (MRI) relaxations times, T2 and T1ρ, in abnormal diffusion delineated ischaemic tissue were used as proxies of stroke time in a rat model. Methods Both ‘non-ischaemic reference’-dependent and -independent estimators were generated. Apparent diffusion coefficient (ADC), T2 and T1ρ, were sequentially quantified for up to 6 hours of stroke in rats (n = 8) at 4.7T. The ischaemic lesion was identified as a contiguous collection of voxels with low ADC. T2 and T1ρ in the ischaemic lesion and in the contralateral non-ischaemic brain tissue were determined. Differences in mean MRI relaxation times between ischaemic and non-ischaemic volumes were used to create reference-dependent estimator. For the reference-independent procedure, only the parameters associated with log-logistic fits to the T2 and T1ρ distributions within the ADC-delineated lesions were used for the onset time estimation. Result The reference-independent estimators from T2 and T1ρ data provided stroke onset time with precisions of ±32 and ±27 minutes, respectively. The reference-dependent estimators yielded respective precisions of ±47 and ±54 minutes. Conclusions A ‘non-ischaemic anatomical reference’-independent estimator for stroke onset time from relaxometric MRI data is shown to yield greater timing precision than previously obtained through reference-dependent procedures. PMID:28685128

  8. Using simulation to evaluate wildlife survey designs: polar bears and seals in the Chukchi Sea

    PubMed Central

    Conn, Paul B.; Moreland, Erin E.; Regehr, Eric V.; Richmond, Erin L.; Cameron, Michael F.; Boveng, Peter L.

    2016-01-01

    Logistically demanding and expensive wildlife surveys should ideally yield defensible estimates. Here, we show how simulation can be used to evaluate alternative survey designs for estimating wildlife abundance. Specifically, we evaluate the potential of instrument-based aerial surveys (combining infrared imagery with high-resolution digital photography to detect and identify species) for estimating abundance of polar bears and seals in the Chukchi Sea. We investigate the consequences of different levels of survey effort, flight track allocation and model configuration on bias and precision of abundance estimators. For bearded seals (0.07 animals km⁻²) and ringed seals (1.29 animals km⁻²), we find that eight flights traversing ≈7840 km are sufficient to achieve target precision levels (coefficient of variation (CV) < 20%) for a 2.94×10⁵ km² study area. For polar bears (provisionally, 0.003 animals km⁻²), 12 flights traversing ≈11 760 km resulted in CVs ranging from 28 to 35%. Estimators were relatively unbiased with similar precision over different flight track allocation strategies and estimation models, although some combinations had superior performance. These findings suggest that instrument-based aerial surveys may provide a viable means for monitoring seal and polar bear populations on the surface of the sea ice over large Arctic regions. More broadly, our simulation-based approach to evaluating survey designs can serve as a template for biologists designing their own surveys. PMID:26909183

  9. Item Response Theory Modeling of the Philadelphia Naming Test.

    PubMed

    Fergadiotis, Gerasimos; Kellough, Stacey; Hula, William D

    2015-06-01

    In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating explanatory variables to item difficulty. This article describes the statistical model underlying the computer adaptive PNT presented in a companion article (Hula, Kellough, & Fergadiotis, 2015). Using archival data, we evaluated the fit of the PNT to 1- and 2-parameter logistic models and examined the precision of the resulting parameter estimates. We regressed the item difficulty estimates on three predictor variables: word length, age of acquisition, and contextual diversity. The 2-parameter logistic model demonstrated marginally better fit, but the fit of the 1-parameter logistic model was adequate. Precision was excellent for both person ability and item difficulty estimates. Word length, age of acquisition, and contextual diversity all independently contributed to variance in item difficulty. Item-response-theory methods can be productively used to analyze and quantify anomia severity in aphasia. Regression of item difficulty on lexical variables supported the validity of the PNT and interpretation of anomia severity scores in the context of current word-finding models.
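
    For reference, the 1-parameter logistic model found adequate here has a one-line form (illustrative values):

    ```python
    import numpy as np

    # Rasch (1PL) model: the probability of a correct naming response
    # depends on person ability theta minus item difficulty b.
    def p_correct(theta, b):
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    print(p_correct(theta=0.5, b=-1.0))  # easy item, ~0.82
    print(p_correct(theta=0.5, b=2.0))   # hard item, ~0.18
    ```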

  10. Effect of follow-up period on minimal-significant dose in the atomic-bomb survivor studies.

    PubMed

    Cologne, John; Preston, Dale L; Grant, Eric J; Cullings, Harry M; Ozasa, Kotaro

    2018-03-01

    It was recently suggested that earlier reports on solid-cancer mortality and incidence in the Life Span Study of atomic-bomb survivors contain still-useful information about low-dose risk that should not be ignored, because longer follow-up may lead to attenuated estimates of low-dose risk due to longer time since exposure. Here it is demonstrated, through the use of all follow-up data and risk models stratified on period of follow-up (as opposed to sub-setting the data by follow-up period), that the appearance of risk attenuation over time may be the result of less-precise risk estimation (in particular, imprecise estimation of effect-modification parameters) in the earlier periods. Longer follow-up, in addition to allowing more-precise estimation of risk due to larger numbers of radiation-related cases, provides more-precise adjustment for background mortality or incidence and more-accurate assessment of risk modification by age at exposure and attained age. It is concluded that the latest follow-up data are most appropriate for inferring low-dose risk. Furthermore, if researchers are interested in effects of time since exposure, the most-recent follow-up data should be considered rather than the results of earlier reports.

  11. Validating precision estimates in horizontal wind measurements from a Doppler lidar

    DOE PAGES

    Newsom, Rob K.; Brewer, W. Alan; Wilczak, James M.; ...

    2017-03-30

    Results from a recent field campaign are used to assess the accuracy of wind speed and direction precision estimates produced by a Doppler lidar wind retrieval algorithm. The algorithm, which is based on the traditional velocity-azimuth-display (VAD) technique, estimates the wind speed and direction measurement precision using standard error propagation techniques, assuming the input data (i.e., radial velocities) to be contaminated by random, zero-mean errors. For this study, the lidar was configured to execute an 8-beam plan-position-indicator (PPI) scan once every 12 min during the 6-week deployment period. Several wind retrieval trials were conducted using different schemes for estimating the precision in the radial velocity measurements. Here, the resulting wind speed and direction precision estimates were compared to differences in wind speed and direction between the VAD algorithm and sonic anemometer measurements taken on a nearby 300 m tower.

  12. An evaluation of rapid methods for monitoring vegetation characteristics of wetland bird habitat

    USGS Publications Warehouse

    Tavernia, Brian G.; Lyons, James E.; Loges, Brian W.; Wilson, Andrew; Collazo, Jaime A.; Runge, Michael C.

    2016-01-01

    Wetland managers benefit from monitoring data of sufficient precision and accuracy to assess wildlife habitat conditions and to evaluate and learn from past management decisions. For large-scale monitoring programs focused on waterbirds (waterfowl, wading birds, secretive marsh birds, and shorebirds), precision and accuracy of habitat measurements must be balanced with fiscal and logistic constraints. We evaluated a set of protocols for rapid, visual estimates of key waterbird habitat characteristics made from the wetland perimeter against estimates from (1) plots sampled within wetlands, and (2) cover maps made from aerial photographs. Estimated percent cover of annuals and perennials using a perimeter-based protocol fell within 10% of plot-based estimates, and percent cover estimates for seven vegetation height classes were within 20% of plot-based estimates. Perimeter-based estimates of total emergent vegetation cover did not differ significantly from cover map estimates. Post-hoc analyses revealed evidence for observer effects in estimates of annual and perennial covers and vegetation height. Median time required to complete perimeter-based methods was less than 7% of the time needed for intensive plot-based methods. Our results show that rapid, perimeter-based assessments, which increase sample size and efficiency, provide vegetation estimates comparable to more intensive methods.

  13. Evaluation of three paediatric weight estimation methods in Singapore.

    PubMed

    Loo, Pei Ying; Chong, Shu-Ling; Lek, Ngee; Bautista, Dianne; Ng, Kee Chong

    2013-04-01

    Rapid paediatric weight estimation methods in the emergency setting have not been evaluated for South East Asian children. This study aims to assess the accuracy and precision of three such methods in Singapore children: Broselow-Luten (BL) tape, Advanced Paediatric Life Support (APLS) (estimated weight (kg) = 2 (age + 4)) and Luscombe (estimated weight (kg) = 3 (age) + 7) formulae. We recruited 875 patients aged 1-10 years in a Paediatric Emergency Department in Singapore over a 2-month period. For each patient, true weight and height were determined. True height was cross-referenced to the BL tape markings and used to derive estimated weight (virtual BL tape method), while patient's round-down age (in years) was used to derive estimated weights using APLS and Luscombe formulae, respectively. The percentage difference between the true and estimated weights was calculated. For each method, the bias and extent of agreement were quantified using Bland-Altman method (mean percentage difference (MPD) and 95% limits of agreement (LOA)). The proportion of weight estimates within 10% of true weight (p₁₀) was determined. The BL tape method marginally underestimated weights (MPD +0.6%; 95% LOA -26.8% to +28.1%; p₁₀ 58.9%). The APLS formula underestimated weights (MPD +7.6%; 95% LOA -26.5% to +41.7%; p₁₀ 45.7%). The Luscombe formula overestimated weights (MPD -7.4%; 95% LOA -51.0% to +36.2%; p₁₀ 37.7%). Of the three methods we evaluated, the BL tape method provided the most accurate and precise weight estimation for Singapore children. The APLS and Luscombe formulae underestimated and overestimated the children's weights, respectively, and were considerably less precise.
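
    The two age-based formulae quoted above, together with the Bland-Altman mean percentage difference used to compare them, can be sketched directly (synthetic data; the MPD sign convention is an assumption):

    ```python
    import numpy as np

    def apls_weight(age_years):      return 2.0 * (age_years + 4.0)
    def luscombe_weight(age_years):  return 3.0 * age_years + 7.0

    def mpd(true_kg, est_kg):
        """Mean percentage difference, (true - estimate)/true * 100,
        averaged over patients, plus its SD for limits of agreement."""
        d = 100.0 * (true_kg - est_kg) / true_kg
        return d.mean(), d.std(ddof=1)

    age = np.array([2, 4, 6, 8, 10], dtype=float)
    true = np.array([13.0, 17.5, 22.0, 27.0, 34.0])  # synthetic weights
    for name, f in [("APLS", apls_weight), ("Luscombe", luscombe_weight)]:
        m, sd = mpd(true, f(age))
        print(f"{name}: MPD {m:+.1f}%, 95% LOA ±{1.96*sd:.1f}%")
    ```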

  14. Evaluating abundance and trends in a Hawaiian avian community using state-space analysis

    USGS Publications Warehouse

    Camp, Richard J.; Brinck, Kevin W.; Gorresen, P.M.; Paxton, Eben H.

    2016-01-01

    Estimating population abundances and patterns of change over time are important in both ecology and conservation. Trend assessment typically entails fitting a regression to a time series of abundances to estimate population trajectory. However, changes in abundance estimates from year-to-year across time are due to both true variation in population size (process variation) and variation due to imperfect sampling and model fit. State-space models are a relatively new method that can be used to partition the error components and quantify trends based only on process variation. We compare a state-space modelling approach with a more traditional linear regression approach to assess trends in uncorrected raw counts and detection-corrected abundance estimates of forest birds at Hakalau Forest National Wildlife Refuge, Hawai‘i. Most species demonstrated similar trends using either method. In general, evidence for trends using state-space models was less strong than for linear regression, as measured by estimates of precision. However, while the state-space models may sacrifice precision, the expectation is that these estimates provide a better representation of the real world biological processes of interest because they are partitioning process variation (environmental and demographic variation) and observation variation (sampling and model variation). The state-space approach also provides annual estimates of abundance which can be used by managers to set conservation strategies, and can be linked to factors that vary by year, such as climate, to better understand processes that drive population trends.
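
    A minimal sketch of the state-space idea (synthetic counts; not the authors' exact model), using an off-the-shelf unobserved-components model to partition process and observation variance:

    ```python
    import numpy as np
    import statsmodels.api as sm

    # A local-linear-trend model splits the variance of a count index
    # into process variation (true year-to-year change) and observation
    # error; the trend is judged on the smoothed state.
    rng = np.random.default_rng(7)
    years = 20
    state = np.cumsum(rng.normal(0.2, 0.5, years)) + 50  # true index
    obs = state + rng.normal(0, 2.0, years)              # noisy counts

    model = sm.tsa.UnobservedComponents(obs, level="local linear trend")
    res = model.fit(disp=False)
    print(res.params)                 # process vs observation variances
    smoothed = res.smoothed_state[0]  # estimated true level each year
    ```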

  15. Parallel computers - Estimate errors caused by imprecise data

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne

    1991-01-01

    A new approach to the problem of estimating errors caused by imprecise data is proposed in the context of software engineering. An ideal solution to the problem is described in terms of a software device, assuming a computer capable of computing the errors of arbitrary programs. The software engineering aspect of the problem is to describe such a device for computing error estimates in software terms and then to provide the user with precise numbers accompanied by error estimates. The feasibility of a program capable of computing both some quantity and its error estimate over the range of possible measurement errors is demonstrated.

  16. A phase match based frequency estimation method for sinusoidal signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao

    2015-04-01

    Accurate frequency estimation affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars significantly. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross correlation of sinusoidal signals are utilized. The analysis of computational complexity shows that the computational load of the proposed method is smaller than that of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that it performs better in terms of frequency estimation precision than the Pisarenko harmonic decomposition, modified covariance, and TSA methods, which contributes to improving the precision of LFMCW radars effectively.

  17. A computer program for sample size computations for banding studies

    USGS Publications Warehouse

    Wilson, K.R.; Nichols, J.D.; Hines, J.E.

    1989-01-01

    Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.

  18. Efficient estimation of Pareto model: Some modified percentile estimators.

    PubMed

    Bhatti, Sajjad Haider; Hussain, Shahzad; Ahmad, Tanvir; Aslam, Muhammad; Aftab, Muhammad; Raza, Muhammad Ali

    2018-01-01

    The article proposes three modified percentile estimators for parameter estimation of the Pareto distribution. These modifications are based on the median, the geometric mean, and the expectation of the empirical cumulative distribution function of the first-order statistic. The proposed modified estimators are compared with traditional percentile estimators through a Monte Carlo simulation for different parameter combinations with varying sample sizes. Performance of the different estimators is assessed in terms of total mean square error and total relative deviation. It is determined that the modified percentile estimator based on the expectation of the empirical cumulative distribution function of the first-order statistic provides efficient and precise parameter estimates compared to the other estimators considered. The simulation results were further confirmed using two real-life examples where maximum likelihood and moment estimators were also considered.
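
    The classical two-point percentile estimator that serves as the baseline can be sketched as follows (illustrative quantile choices, not the paper's modifications):

    ```python
    import numpy as np

    # For F(x) = 1 - (xm/x)**alpha, matching two sample quantiles
    # q1 < q2 at probabilities p1 < p2 gives closed-form estimates.
    def pareto_percentile_fit(x, p1=0.25, p2=0.75):
        q1, q2 = np.quantile(x, [p1, p2])
        alpha = np.log((1 - p1) / (1 - p2)) / np.log(q2 / q1)
        xm = q1 * (1 - p1) ** (1 / alpha)
        return alpha, xm

    rng = np.random.default_rng(3)
    alpha_true, xm_true = 2.5, 1.0
    # Inverse-CDF sampling from the Pareto survival function:
    sample = xm_true * (1 - rng.uniform(size=2000)) ** (-1 / alpha_true)
    print(pareto_percentile_fit(sample))  # ~ (2.5, 1.0)
    ```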

  19. Precise attitude rate estimation using star images obtained by mission telescope for satellite missions

    NASA Astrophysics Data System (ADS)

    Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi

    2015-02-01

    Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to a higher accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites should estimate their attitude rate under the strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate angular rate from blurred star images obtained by a mission telescope, in order to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing the quality of a star image, based on how blurred it appears to be. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes it easier to achieve precise stabilization given the strict constraints on small satellites. The research studied the relationship between estimation accuracy and the parameters used to achieve an attitude rate estimation with a precision greater than 1 × 10⁻⁶ rad/s. The method can be applied to all attitude sensors that use optics systems, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems that are expected to arise with real small satellites by performing numerical simulations.

  20. Inclusion of Exercise Intensities Above the Lactate Threshold in VO2/Running Speed Regression Does not Improve the Precision of Accumulated Oxygen Deficit Estimation in Endurance-Trained Runners

    PubMed Central

    Reis, Victor M.; Silva, António J.; Ascensão, António; Duarte, José A.

    2005-01-01

    The present study intended to verify whether the inclusion of intensities above the lactate threshold (LT) in the VO2/running speed regression (RSR) affects the estimation error of accumulated oxygen deficit (AOD) during treadmill running performed by endurance-trained subjects. Fourteen male endurance-trained runners performed a submaximal treadmill running test followed by an exhaustive supramaximal test 48 h later. The total energy demand (TED) and the AOD during the supramaximal test were calculated from the RSR established on first testing. For those purposes two regressions were used: a complete regression (CR) including all available submaximal VO2 measurements and a subthreshold regression (STR) including solely the VO2 values measured during exercise intensities below the LT. TED mean values obtained with CR and STR were not significantly different under the two conditions of analysis (177.71 ± 5.99 and 174.03 ± 6.53 ml·kg-1, respectively). Also the mean values of AOD obtained with CR and STR did not differ under the two conditions (49.75 ± 8.38 and 45.89 ± 9.79 ml·kg-1, respectively). Moreover, the precision of those estimations was also similar under the two procedures. The mean error for TED estimation was 3.27 ± 1.58 and 3.41 ± 1.85 ml·kg-1 (for CR and STR, respectively) and the mean error for AOD estimation was 5.03 ± 0.32 and 5.14 ± 0.35 ml·kg-1 (for CR and STR, respectively). The results indicated that the inclusion of exercise intensities above the LT in the RSR does not improve the precision of the AOD estimation in endurance-trained runners. However, the use of STR may induce an underestimation of AOD comparatively to the use of CR. Key Points: It has been suggested that the inclusion of exercise intensities above the lactate threshold in the VO2/power regression can significantly affect the estimation of the energy cost and, thus, the estimation of the AOD. However, data on the precision of those AOD measurements is rarely provided. We have evaluated the effects of the inclusion of those exercise intensities on the AOD precision. The results have indicated that the inclusion of exercise intensities above the lactate threshold in the VO2/running speed regression does not improve the precision of AOD estimation in endurance-trained runners. However, the use of subthreshold regressions may induce an underestimation of AOD comparatively to the use of complete regressions. PMID:24501560

  1. Stochastic precision analysis of 2D cardiac strain estimation in vivo

    NASA Astrophysics Data System (ADS)

    Bunting, E. A.; Provost, J.; Konofagou, E. E.

    2014-11-01

    Ultrasonic strain imaging has been applied to echocardiography and carries great potential to be used as a tool in the clinical setting. Two-dimensional (2D) strain estimation may be useful when studying the heart due to the complex, 3D deformation of the cardiac tissue. Increasing the framerate used for motion estimation, i.e. motion estimation rate (MER), has been shown to improve the precision of the strain estimation, although maintaining the spatial resolution necessary to view the entire heart structure in a single heartbeat remains challenging at high MERs. Two previously developed methods, the temporally unequispaced acquisition sequence (TUAS) and the diverging beam sequence (DBS), have been used in the past to successfully estimate in vivo axial strain at high MERs without compromising spatial resolution. In this study, a stochastic assessment of 2D strain estimation precision is performed in vivo for both sequences at varying MERs (65, 272, 544, 815 Hz for TUAS; 250, 500, 1000, 2000 Hz for DBS). 2D incremental strains were estimated during left ventricular contraction in five healthy volunteers using a normalized cross-correlation function and a least-squares strain estimator. Both sequences were shown capable of estimating 2D incremental strains in vivo. The conditional expected value of the elastographic signal-to-noise ratio (E(SNRe|ɛ)) was used to compare strain estimation precision of both sequences at multiple MERs over a wide range of clinical strain values. The results here indicate that axial strain estimation precision is much more dependent on MER than lateral strain estimation, while lateral estimation is more affected by strain magnitude. MER should be increased at least above 544 Hz to avoid suboptimal axial strain estimation. Radial and circumferential strain estimations were influenced by the axial and lateral strain in different ways. Furthermore, the TUAS and DBS were found to be of comparable precision at similar MERs.
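
    The least-squares strain estimator mentioned above reduces to fitting local slopes of displacement versus depth; a minimal sketch with synthetic data (the kernel size and noise level are arbitrary):

    ```python
    import numpy as np

    depth = np.linspace(0, 30e-3, 301)  # m
    disp = (0.02 * depth
            + 1e-7 * np.random.default_rng(2).normal(size=301))  # 2% strain

    def ls_strain(depth, disp, k=21):
        """Fit a line to displacement vs depth in a k-sample window
        centred on each point; the slope is the incremental strain."""
        half = k // 2
        out = np.full(depth.size, np.nan)
        for i in range(half, depth.size - half):
            d = depth[i - half:i + half + 1]
            u = disp[i - half:i + half + 1]
            out[i] = np.polyfit(d, u, 1)[0]
        return out

    strain = ls_strain(depth, disp)
    print(np.nanmean(strain))  # ~0.02
    ```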

  2. The Quality of Reporting of Measures of Precision in Animal Experiments in Implant Dentistry: A Methodological Study.

    PubMed

    Faggion, Clovis Mariano; Aranda, Luisiana; Diaz, Karla Tatiana; Shih, Ming-Chieh; Tu, Yu-Kang; Alarcón, Marco Antonio

    2016-01-01

    Information on precision of treatment-effect estimates is pivotal for understanding research findings. In animal experiments, which provide important information for supporting clinical trials in implant dentistry, inaccurate information may lead to biased clinical trials. The aim of this methodological study was to determine whether sample size calculation, standard errors, and confidence intervals for treatment-effect estimates are reported accurately in publications describing animal experiments in implant dentistry. MEDLINE (via PubMed), Scopus, and SciELO databases were searched to identify reports involving animal experiments with dental implants published from September 2010 to March 2015. Data from publications were extracted into a standardized form with nine items related to precision of treatment estimates and experiment characteristics. Data selection and extraction were performed independently and in duplicate, with disagreements resolved by discussion-based consensus. The chi-square and Fisher exact tests were used to assess differences in reporting according to study sponsorship type and impact factor of the journal of publication. The sample comprised reports of 161 animal experiments. Sample size calculation was reported in five (2%) publications. P values and confidence intervals were reported in 152 (94%) and 13 (8%) of these publications, respectively. Standard errors were reported in 19 (12%) publications. Confidence intervals were better reported in publications describing industry-supported animal experiments (P = .03) and with a higher impact factor (P = .02). Information on precision of estimates is rarely reported in publications describing animal experiments in implant dentistry. This lack of information makes it difficult to evaluate whether the translation of animal research findings to clinical trials is adequate.

  3. Quantifying condition-dependent intracellular protein levels enables high-precision fitness estimates.

    PubMed

    Geiler-Samerotte, Kerry A; Hashimoto, Tatsunori; Dion, Michael F; Budnik, Bogdan A; Airoldi, Edoardo M; Drummond, D Allan

    2013-01-01

    Countless studies monitor the growth rate of microbial populations as a measure of fitness. However, an enormous gap separates growth-rate differences measurable in the laboratory from those that natural selection can distinguish efficiently. Taking advantage of the recent discovery that transcript and protein levels in budding yeast closely track growth rate, we explore the possibility that growth rate can be more sensitively inferred by monitoring the proteomic response to growth, rather than growth itself. We find a set of proteins whose levels, in aggregate, enable prediction of growth rate to a higher precision than direct measurements. However, we find little overlap between these proteins and those that closely track growth rate in other studies. These results suggest that, in yeast, the pathways that set the pace of cell division can differ depending on the growth-altering stimulus. Still, with proper validation, protein measurements can provide high-precision growth estimates that allow extension of phenotypic growth-based assays closer to the limits of evolutionary selection.

  4. Measuring the bias, precision, accuracy, and validity of self-reported height and weight in assessing overweight and obesity status among adolescents using a surveillance system.

    PubMed

    Pérez, Adriana; Gabriel, Kelley; Nehme, Eileen K; Mandell, Dorothy J; Hoelscher, Deanna M

    2015-07-27

    Evidence regarding bias, precision, and accuracy in adolescent self-reported height and weight across demographic subpopulations is lacking. The bias, precision, and accuracy of adolescent self-reported height and weight across subpopulations were examined using a large, diverse and representative sample of adolescents. A second objective was to develop correction equations for self-reported height and weight to provide more accurate estimates of body mass index (BMI) and weight status. A total of 24,221 students from 8th and 11th grade in Texas participated in the School Physical Activity and Nutrition (SPAN) surveillance system in years 2000-2002 and 2004-2005. To assess bias, the differences between the self-reported and objective measures for height and weight were estimated. To assess precision and accuracy, Lin's concordance correlation coefficient was used. BMI was estimated for self-reported and objective measures. The prevalence of students' weight status was estimated using self-reported and objective measures; absolute (bias) and relative error (relative bias) were assessed subsequently. Correction equations for sex and race/ethnicity subpopulations were developed to estimate objective measures of height, weight and BMI from self-reported measures using weighted linear regression. Sensitivity, specificity and positive predictive values of weight status classification using self-reported measures and correction equations are assessed by sex and grade. Students in 8th and 11th grade overestimated their height from 0.68 cm (White girls) to 2.02 cm (African-American boys), and underestimated their weight from 0.4 kg (Hispanic girls) to 0.98 kg (African-American girls). The differences in self-reported versus objectively-measured height and weight resulted in underestimation of BMI ranging from -0.23 kg/m2 (White boys) to -0.7 kg/m2 (African-American girls). The sensitivity of self-reported measures to classify weight status as obese was 70.8% and 81.9% for 8th- and 11th-graders, respectively. These estimates increased when using the correction equations to 77.4% and 84.4% for 8th- and 11th-graders, respectively. When direct measurement is not practical, self-reported measurements provide a reliable proxy measure across grade, sex and race/ethnicity subpopulations of adolescents. Correction equations increase the sensitivity of self-report measures to identify prevalence of overall overweight/obesity status.
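
    A correction equation of the kind described can be sketched as a simple regression of measured on self-reported values (synthetic coefficients, not the SPAN equations):

    ```python
    import numpy as np

    # Within a subgroup, regress objectively measured weight on
    # self-reported weight, then use the fitted line to correct new
    # self-reports before computing BMI.
    rng = np.random.default_rng(0)
    self_rep = rng.normal(55, 12, 500)                        # kg
    measured = 1.5 + 0.98 * self_rep + rng.normal(0, 2, 500)  # synthetic

    A = np.column_stack([np.ones_like(self_rep), self_rep])
    (b0, b1), *_ = np.linalg.lstsq(A, measured, rcond=None)

    def corrected_weight(w_self):
        return b0 + b1 * w_self

    print(b0, b1, corrected_weight(60.0))
    ```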

  5. Estimating demographic parameters using a combination of known-fate and open N-mixture models

    USGS Publications Warehouse

    Schmidt, Joshua H.; Johnson, Devin S.; Lindberg, Mark S.; Adams, Layne G.

    2015-01-01

    Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark–resight data sets. We provide implementations in both the BUGS language and an R package.

  6. Estimating demographic parameters using a combination of known-fate and open N-mixture models.

    PubMed

    Schmidt, Joshua H; Johnson, Devin S; Lindberg, Mark S; Adams, Layne G

    2015-10-01

    Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark-resight data sets. We provide implementations in both the BUGS language and an R package.
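
    A drastically simplified sketch of the joint-likelihood idea behind these two records (synthetic data; the real model also estimates detection and recruitment): one survival parameter enters both a known-fate binomial likelihood and a count-decline likelihood, so both data types inform the estimate.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    marked_alive, marked_total = 18, 24    # known-fate data
    counts_t0 = np.array([12, 9, 15])      # pack counts, year t
    counts_t1 = np.array([10, 7, 11])      # same packs, year t+1

    def negloglik(phi):
        # Known-fate binomial term:
        ll_kf = (marked_alive * np.log(phi)
                 + (marked_total - marked_alive) * np.log(1 - phi))
        # Poisson approximation for counts: E[count_t1] = phi * count_t0
        # (recruitment ignored in this toy version):
        lam = phi * counts_t0
        ll_cnt = np.sum(counts_t1 * np.log(lam) - lam)
        return -(ll_kf + ll_cnt)

    res = minimize_scalar(negloglik, bounds=(1e-6, 1 - 1e-6),
                          method="bounded")
    print(f"joint survival estimate ≈ {res.x:.3f}")
    ```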

  7. Measuring atmospheric density using GPS-LEO tracking data

    NASA Astrophysics Data System (ADS)

    Kuang, D.; Desai, S.; Sibthorpe, A.; Pi, X.

    2014-01-01

    We present a method to estimate the total neutral atmospheric density from precise orbit determination of Low Earth Orbit (LEO) satellites. We derive the total atmospheric density by determining the drag force acting on the LEOs through centimeter-level reduced-dynamic precise orbit determination (POD) using onboard Global Positioning System (GPS) tracking data. The precision of the estimated drag accelerations is assessed using various metrics, including differences between estimated along-track accelerations from consecutive 30-h POD solutions which overlap by 6 h, comparison of the resulting accelerations with accelerometer measurements, and comparison against an existing atmospheric density model, DTM-2000. We apply the method to GPS tracking data from CHAMP, GRACE, SAC-C, Jason-2, TerraSAR-X and COSMIC satellites, spanning 12 years (2001-2012) and covering orbital heights from 400 km to 1300 km. Errors in the estimates, including those introduced by deficiencies in other modeled forces (such as solar radiation pressure and Earth radiation pressure), are evaluated and the signal and noise levels for each satellite are analyzed. The estimated density data from CHAMP, GRACE, SAC-C and TerraSAR-X are identified as having high signal and low noise levels. These data all have high correlations with a nominal atmospheric density model and show common features in relative residuals with respect to the nominal model in the related parameter space. On the contrary, the estimated density data from COSMIC and Jason-2 show errors larger than the actual signal at corresponding altitudes, thus having little practical value for this study. The results demonstrate that this method is applicable to data from a variety of missions and can provide useful total neutral density measurements for atmospheric study up to altitudes as high as 715 km, with precision and resolution between those derived from traditional special orbital perturbation analysis and those obtained from onboard accelerometers.
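
    At its core the method inverts the drag equation; a one-step sketch with illustrative spacecraft values (the drag coefficient, area, and recovered acceleration below are assumptions):

    ```python
    # Invert rho = 2*m*|a_drag| / (Cd * A * v_rel**2) for one epoch.
    m = 500.0        # spacecraft mass, kg
    Cd = 2.2         # drag coefficient (assumed)
    A = 1.0          # cross-sectional area, m^2
    v_rel = 7600.0   # velocity relative to the atmosphere, m/s
    a_drag = 2.0e-7  # drag acceleration magnitude from POD, m/s^2

    rho = 2.0 * m * a_drag / (Cd * A * v_rel**2)
    print(f"total neutral density ~ {rho:.2e} kg/m^3")
    ```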

  8. Flight evaluation of differential GPS aided inertial navigation systems

    NASA Technical Reports Server (NTRS)

    Mcnally, B. David; Paielli, Russell A.; Bach, Ralph E., Jr.; Warner, David N., Jr.

    1992-01-01

    Algorithms are described for integration of Differential Global Positioning System (DGPS) data with Inertial Navigation System (INS) data to provide an integrated DGPS/INS navigation system. The objective is to establish the benefits that can be achieved through various levels of integration of DGPS with INS for precision navigation. An eight-state Kalman filter integration was implemented in real-time on a twin turbo-prop transport aircraft to evaluate system performance during terminal approach and landing operations. A fully integrated DGPS/INS system is also presented which models accelerometer and rate-gyro measurement errors plus position, velocity, and attitude errors. The fully integrated system was implemented off-line using range-domain (seventeen-state) and position-domain (fifteen-state) Kalman filters. Both filter integration approaches were evaluated using data collected during the flight test. Flight-test data consisted of measurements from a 5-channel Precision Code GPS receiver, a strap-down Inertial Navigation Unit (INU), and GPS satellite differential range corrections from a ground reference station. The aircraft was laser tracked to determine its true position. Results indicate that there is no significant improvement in positioning accuracy with the higher levels of DGPS/INS integration. All three systems provided high-frequency (e.g., 20 Hz) estimates of position and velocity. The fully integrated system provided estimates of inertial sensor errors which may be used to improve INS navigation accuracy should GPS become unavailable, and improved estimates of acceleration, attitude, and body rates which can be used for guidance and control. Precision Code DGPS/INS positioning accuracy (root-mean-square) was 1.0 m cross-track and 3.0 m vertical. (This AGARDograph was sponsored by the Guidance and Control Panel.)
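
    The integration concept can be illustrated with a deliberately tiny 1D filter (far simpler than the 8-, 15-, and 17-state filters evaluated; all noise values are made up): INS accelerations drive the prediction, DGPS position fixes drive the update.

    ```python
    import numpy as np

    dt, q, r = 0.05, 0.1, 1.0           # step (s), process/meas. noise
    F = np.array([[1, dt], [0, 1]])     # state transition (pos, vel)
    B = np.array([0.5 * dt**2, dt])     # control input: INS acceleration
    H = np.array([[1.0, 0.0]])          # DGPS measures position only
    Q = q * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])
    R = np.array([[r]])

    x = np.zeros(2)                     # state estimate
    P = np.eye(2)                       # covariance

    def step(x, P, accel, z=None):
        x = F @ x + B * accel           # predict with INS measurement
        P = F @ P @ F.T + Q
        if z is not None:               # DGPS update (at a lower rate)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(2) - K @ H) @ P
        return x, P

    x, P = step(x, P, accel=0.2)                       # INS-only step
    x, P = step(x, P, accel=0.2, z=np.array([0.01]))   # step with DGPS fix
    print(x)
    ```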

  9. Use of inequality constrained least squares estimation in small area estimation

    NASA Astrophysics Data System (ADS)

    Abeygunawardana, R. A. B.; Wickremasinghe, W. N.

    2017-05-01

    Traditional surveys provide estimates that are based only on the sample observations collected for the population characteristic of interest. However, these estimates may have unacceptably large variance for certain domains. Small Area Estimation (SAE) deals with determining precise and accurate estimates for population characteristics of interest for such domains. SAE usually uses least squares or maximum likelihood procedures, incorporating prior information and current survey data. Many available methods in SAE use constraints in equality form. However, there are practical situations where certain inequality restrictions on model parameters are more realistic. When the estimation method is least squares, such restrictions lead to Inequality Constrained Least Squares (ICLS) estimates. In this study, the ICLS estimation procedure is applied to several proposed small area estimates.
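
    As a rough illustration of the general idea (not the authors' models), the simplest inequality restrictions are box constraints on coefficients, which scipy can handle directly; general linear inequality restrictions require a quadratic-programming solver. The design matrix and data below are synthetic:

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    # Toy design matrix and response for a small-area model (synthetic data).
    rng = np.random.default_rng(0)
    A = rng.normal(size=(30, 3))
    beta_true = np.array([0.5, 1.0, 2.0])
    b = A @ beta_true + rng.normal(scale=0.3, size=30)

    # OLS ignores the restriction; ICLS enforces beta_j >= 0 (box constraints
    # are the simplest inequality restrictions; general ones need a QP solver).
    ols = np.linalg.lstsq(A, b, rcond=None)[0]
    icls = lsq_linear(A, b, bounds=(0.0, np.inf)).x
    print("OLS: ", ols)
    print("ICLS:", icls)
    ```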

  10. A hybrid double-observer sightability model for aerial surveys

    USGS Publications Warehouse

    Griffin, Paul C.; Lubow, Bruce C.; Jenkins, Kurt J.; Vales, David J.; Moeller, Barbara J.; Reid, Mason; Happe, Patricia J.; Mccorquodale, Scott M.; Tirhi, Michelle J.; Schaberi, Jim P.; Beirne, Katherine

    2013-01-01

    Raw counts from aerial surveys make no correction for undetected animals and provide no estimate of precision with which to judge the utility of the counts. Sightability modeling and double-observer (DO) modeling are 2 commonly used approaches to account for detection bias and to estimate precision in aerial surveys. We developed a hybrid DO sightability model (model MH) that uses the strength of each approach to overcome the weakness in the other, for aerial surveys of elk (Cervus elaphus). The hybrid approach uses detection patterns of 2 independent observer pairs in a helicopter and telemetry-based detections of collared elk groups. Candidate MH models reflected hypotheses about effects of recorded covariates and unmodeled heterogeneity on the separate front-seat observer pair and back-seat observer pair detection probabilities. Group size and concealing vegetation cover strongly influenced detection probabilities. The pilot's previous experience participating in aerial surveys influenced detection by the front pair of observers if the elk group was on the pilot's side of the helicopter flight path. In 9 surveys in Mount Rainier National Park, the raw number of elk counted was approximately 80–93% of the abundance estimated by model MH. Uncorrected ratios of bulls per 100 cows generally were low compared to estimates adjusted for detection bias, but ratios of calves per 100 cows were comparable whether based on raw survey counts or adjusted estimates. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to DO modeling.
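
    The detection-correction logic that the component approaches share can be sketched in a few lines: with two independent observer pairs, a group is missed only if both pairs miss it. The probabilities and count below are hypothetical, and this omits the covariate and heterogeneity modeling that model MH adds:

    ```python
    # Independent double-observer logic: a group is missed only if both
    # observer pairs miss it. All values here are hypothetical.
    p_front, p_back = 0.7, 0.6                 # per-pair detection probabilities
    p_seen = 1 - (1 - p_front) * (1 - p_back)  # combined detection = 0.88
    raw_count = 176                            # groups actually counted
    n_hat = raw_count / p_seen                 # detection-corrected abundance
    print(p_seen, round(n_hat))                # 0.88, 200
    ```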

  11. A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study.

    PubMed

    Kaplan, David; Chen, Jianshen

    2012-07-01

    A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for three methods of implementation: propensity score stratification, weighting, and optimal full matching. Three simulation studies and one case study are presented to illustrate the proposed two-step Bayesian propensity score approach. Results of the simulation studies reveal that greater precision in the propensity score equation yields better recovery of the frequentist-based treatment effect. A slight advantage is shown for the Bayesian approach in small samples. Results also reveal that greater precision around the wrong treatment effect can lead to seriously distorted results. However, greater precision around the correct treatment effect parameter yields quite good results, with slight improvement seen with greater precision in the propensity score equation. A comparison of coverage rates for the conventional frequentist approach and proposed Bayesian approach is also provided. The case study reveals that credible intervals are wider than frequentist confidence intervals when priors are non-informative.
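
    For orientation, the weighting variant rests on the same two-step logic as its frequentist counterpart: first estimate propensity scores, then weight outcomes by inverse propensities. A minimal frequentist sketch on synthetic data follows; it stands in for, and deliberately omits, the Bayesian prior machinery the paper develops:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic observational data: one confounder x, treatment t, outcome y.
    rng = np.random.default_rng(1)
    n = 2000
    x = rng.normal(size=n)
    t = rng.binomial(1, 1 / (1 + np.exp(-x)))        # treatment depends on x
    y = 2.0 * t + x + rng.normal(size=n)             # true effect = 2.0

    # Step 1: estimate propensity scores e(x) = P(t=1 | x).
    X = x.reshape(-1, 1)
    e = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

    # Step 2: inverse-probability-weighted estimate of the treatment effect.
    ate = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
    print(f"IPW ATE estimate: {ate:.2f}  (true 2.0)")
    ```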

  12. Geostatistics, remote sensing and precision farming.

    PubMed

    Mulla, D J

    1997-01-01

    Precision farming is possible today because of advances in farming technology, procedures for mapping and interpolating spatial patterns, and geographic information systems for overlaying and interpreting several soil, landscape and crop attributes. The key component of precision farming is the map showing spatial patterns in field characteristics. Obtaining information for this map is often achieved by soil sampling. This approach, however, can be cost-prohibitive for grain crops. Soil sampling strategies can be simplified by use of auxiliary data provided by satellite or aerial photo imagery. This paper describes geostatistical methods for estimating spatial patterns in soil organic matter, soil test phosphorus and wheat grain yield from a combination of Thematic Mapper imaging and soil sampling.
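
    The geostatistical workhorse behind such maps is kriging: interpolation weights are obtained by solving a linear system built from a fitted variogram. A compact ordinary-kriging sketch follows; the exponential variogram parameters and the soil samples are invented for illustration, and the paper's combination with imagery (cokriging with auxiliary data) is not shown:

    ```python
    import numpy as np

    def exp_variogram(h, nugget=0.0, sill=1.0, rng_=50.0):
        """Exponential variogram model (parameters are illustrative)."""
        return nugget + (sill - nugget) * (1.0 - np.exp(-h / rng_))

    def ordinary_krige(xy_obs, z_obs, xy_new):
        """Ordinary kriging of one point from observed soil samples."""
        n = len(z_obs)
        d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
        # Kriging system: [Gamma 1; 1' 0] [w; mu] = [gamma0; 1]
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = exp_variogram(d)
        A[n, n] = 0.0
        rhs = np.ones(n + 1)
        rhs[:n] = exp_variogram(np.linalg.norm(xy_obs - xy_new, axis=1))
        w = np.linalg.solve(A, rhs)[:n]
        return w @ z_obs

    # Five hypothetical soil-organic-matter samples (x, y in metres; z in %).
    xy = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 20]], float)
    z = np.array([3.1, 2.8, 3.6, 2.9, 3.0])
    print(ordinary_krige(xy, z, np.array([50.0, 50.0])))
    ```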

  13. Estimating DXA total body fat percentage by lipometer subcutaneous adipose tissue thicknesses.

    PubMed

    Tafeit, Erwin; Greilberger, Joachim; Cvirn, Gerhard; Lipp, Rainer Walther; Schnedl, Wolfgang Johann; Jürimäe, Toivo; Jürimäe, Jaak; Wallner-Liebmann, Sandra Johanna

    2009-06-01

    DXA is an accepted reference method to estimate body composition. However, several practical difficulties limit its applicability: the equipment is rather expensive, not portable, impractical for measuring large study populations, and it exposes subjects to a small amount of ionizing radiation. The optical device Lipometer (EU Pat.No. 0516251) provides non-invasive, quick, precise and safe measurements of subcutaneous adipose tissue (SAT) layer thicknesses at any site of the human body. Compared to DXA, the Lipometer approach has some advantages, because the device is portable, quick, inexpensive, and involves no radiation. To exploit these advantages in the field of total body fat% (TBF%) assessment, an acceptable estimation of DXA TBF% by Lipometer SAT thicknesses is necessary, which was the aim of this study. Height, weight, waist and hip circumferences, DXA TBF% and Lipometer SAT thicknesses at fifteen defined body sites were measured in 28 healthy men (age: 33.9 +/- 16.6 years) and 52 healthy women (age: 40.1 +/- 10.7 years). To estimate Lipometer TBF%, stepwise multiple regression analysis was applied, using DXA TBF% as the dependent variable. Using the fifteen Lipometer SAT thicknesses together with age, height, weight and BMI as independent variables provided the best estimations of Lipometer TBF% for both genders, with strong correlations to DXA TBF% (R = 0.985 for males and R = 0.953 for females). The limits of agreement were -2.48% to +2.48% for males and -4.28% to +4.28% for females. For both genders the bias was 0.00%. The results of this paper extend the abilities of the Lipometer by a precise estimation of TBF% using DXA as the gold standard.

  14. Assessment of Various Remote Sensing Technologies in Biomass and Nitrogen Content Estimation Using AN Agricultural Test Field

    NASA Astrophysics Data System (ADS)

    Näsi, R.; Viljanen, N.; Kaivosoja, J.; Hakala, T.; Pandžić, M.; Markelin, L.; Honkavaara, E.

    2017-10-01

    Multispectral and hyperspectral imagery is usually acquired from satellite and aircraft platforms. Recently, miniaturized hyperspectral 2D frame cameras have shown great potential for precision agriculture and can be combined with lightweight platforms such as drones. Drones are flexible tools for environmental and agricultural remote sensing. Assessing and comparing different platforms (satellite, aircraft, drone) carrying different sensors (hyperspectral and RGB cameras) is an important task for understanding the potential of the data they provide and for selecting the most appropriate combination for a given application and its requirements. In this context, open and permanent test fields are valuable experimental environments, since they provide comparative data for different platforms, sensors and users, and allow multi-temporal analyses. The objective of this work was to investigate the feasibility of an open permanent test field in the context of precision agriculture. Satellite (Sentinel-2), aircraft and drone platforms with hyperspectral and RGB cameras were assessed in this study to estimate biomass, using linear regression models and in-situ samples. Spectral data and 3D information were used and compared in different combinations to investigate the quality of the models. The biomass estimation accuracies using linear regression models were better than 90 % for the drone-based datasets. The results showed that using spectral and 3D features together improved the estimation model. However, estimation of nitrogen content was less accurate with the evaluated remote sensing sensors. The open and permanent test field proved suitable for providing accurate and reliable reference data for commercial users and farmers.

  15. Reliability of Pressure Ulcer Rates: How Precisely Can We Differentiate Among Hospital Units, and Does the Standard Signal‐Noise Reliability Measure Reflect This Precision?

    PubMed Central

    Cramer, Emily

    2016-01-01

    Abstract Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital‐acquired pressure ulcer rates and evaluate a standard signal‐noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step‐down, medical, surgical, and medical‐surgical nursing units from 1,299 US hospitals were analyzed. Using beta‐binomial models, we estimated between‐unit variability (signal) and within‐unit variability (noise) in annual unit pressure ulcer rates. Signal‐noise reliability was computed as the ratio of between‐unit variability to the total of between‐ and within‐unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal‐noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal‐noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc. PMID:27223598
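
    The signal-noise measure examined here is simply the ratio of between-unit variance to total variance. A toy computation makes the definition concrete; the variance components below are purely illustrative, whereas in the study they come from beta-binomial model fits:

    ```python
    # Signal-noise reliability as described: between-unit variance (signal)
    # over total variance (signal + noise). Values are illustrative only.
    var_between = 0.0004    # true variation in unit ulcer rates
    var_within = 0.0012     # sampling noise in an annual unit rate
    reliability = var_between / (var_between + var_within)
    print(f"signal-noise reliability: {reliability:.2f}")   # 0.25
    ```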

  16. The Gaia-ESO Survey: Empirical determination of the precision of stellar radial velocities and projected rotation velocities

    NASA Astrophysics Data System (ADS)

    Jackson, R. J.; Jeffries, R. D.; Lewis, J.; Koposov, S. E.; Sacco, G. G.; Randich, S.; Gilmore, G.; Asplund, M.; Binney, J.; Bonifacio, P.; Drew, J. E.; Feltzing, S.; Ferguson, A. M. N.; Micela, G.; Neguerela, I.; Prusti, T.; Rix, H.-W.; Vallenari, A.; Alfaro, E. J.; Allende Prieto, C.; Babusiaux, C.; Bensby, T.; Blomme, R.; Bragaglia, A.; Flaccomio, E.; Francois, P.; Hambly, N.; Irwin, M.; Korn, A. J.; Lanzafame, A. C.; Pancino, E.; Recio-Blanco, A.; Smiljanic, R.; Van Eck, S.; Walton, N.; Bayo, A.; Bergemann, M.; Carraro, G.; Costado, M. T.; Damiani, F.; Edvardsson, B.; Franciosini, E.; Frasca, A.; Heiter, U.; Hill, V.; Hourihane, A.; Jofré, P.; Lardo, C.; de Laverny, P.; Lind, K.; Magrini, L.; Marconi, G.; Martayan, C.; Masseron, T.; Monaco, L.; Morbidelli, L.; Prisinzano, L.; Sbordone, L.; Sousa, S. G.; Worley, C. C.; Zaggia, S.

    2015-08-01

    Context. The Gaia-ESO Survey (GES) is a large public spectroscopic survey at the European Southern Observatory Very Large Telescope. Aims: A key aim is to provide precise radial velocities (RVs) and projected equatorial velocities (vsini) for representative samples of Galactic stars, which will complement information obtained by the Gaia astrometry satellite. Methods: We present an analysis to empirically quantify the size and distribution of uncertainties in RV and vsini using spectra from repeated exposures of the same stars. Results: We show that the uncertainties vary as simple scaling functions of signal-to-noise ratio (S/N) and vsini, that the uncertainties become larger with increasing photospheric temperature, but that the dependence on stellar gravity, metallicity and age is weak. The underlying uncertainty distributions have extended tails that are better represented by Student's t-distributions than by normal distributions. Conclusions: Parametrised results are provided, which enable estimates of the RV precision for almost all GES measurements, and estimates of the vsini precision for stars in young clusters, as a function of S/N, vsini and stellar temperature. The precision of individual high S/N GES RV measurements is 0.22-0.26 km s-1, dependent on instrumental configuration. Based on observations collected with the FLAMES spectrograph at the VLT/UT2 telescope (Paranal Observatory, ESO, Chile), for the Gaia-ESO Large Public Survey (188.B-3002). Full Table 2 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/580/A75

  17. Improved accuracy and precision of tracer kinetic parameters by joint fitting to variable flip angle and dynamic contrast enhanced MRI data.

    PubMed

    Dickie, Ben R; Banerji, Anita; Kershaw, Lucy E; McPartlin, Andrew; Choudhury, Ananya; West, Catharine M; Rose, Chris J

    2016-10-01

    To improve the accuracy and precision of tracer kinetic model parameter estimates for use in dynamic contrast enhanced (DCE) MRI studies of solid tumors. Quantitative DCE-MRI requires an estimate of precontrast T1, which is obtained prior to fitting a tracer kinetic model. As T1 mapping and tracer kinetic signal models are both a function of precontrast T1, it was hypothesized that its joint estimation would improve the accuracy and precision of both precontrast T1 and tracer kinetic model parameters. Accuracy and/or precision of two-compartment exchange model (2CXM) parameters were evaluated for standard and joint fitting methods in well-controlled synthetic data and for 36 bladder cancer patients. Methods were compared under a number of experimental conditions. In synthetic data, joint estimation led to statistically significant improvements in the accuracy of estimated parameters in 30 of 42 conditions (improvements between 1.8% and 49%). Reduced accuracy was observed in 7 of the remaining 12 conditions. Significant improvements in precision were observed in 35 of 42 conditions (between 4.7% and 50%). In clinical data, significant improvements in precision were observed in 18 of 21 conditions (between 4.6% and 38%). Accuracy and precision of DCE-MRI parameter estimates are improved when signal models are fit jointly rather than sequentially. Magn Reson Med 76:1270-1281, 2016. © 2015 Wiley Periodicals, Inc.

  18. Optimal reference polarization states for the calibration of general Stokes polarimeters in the presence of noise

    NASA Astrophysics Data System (ADS)

    Mu, Tingkui; Bao, Donghao; Zhang, Chunmin; Chen, Zeyu; Song, Jionghui

    2018-07-01

    During calibration of the system matrix of a Stokes polarimeter using reference polarization states (RPSs) and pseudo-inverse estimation, the measured intensities are typically corrupted by signal-independent additive Gaussian noise or signal-dependent Poisson shot noise, which degrades the precision of the estimated system matrix. In this paper, we present a paradigm for selecting RPSs that improves the precision of the estimated system matrix in the presence of both types of noise. An analytical expression for the precision of the system matrix estimated with the RPSs is derived. Experimental measurements from a general Stokes polarimeter show that an accurate system matrix is estimated with the optimal RPSs, which are generated using two rotating quarter-wave plates. The advantage of using optimal RPSs is a reduction in measurement time with high calibration precision.
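
    The pseudo-inverse estimation step referred to above can be sketched compactly: stacking the RPS Stokes vectors as columns of S and the corresponding measured intensities as columns of I, the system matrix estimate is A ≈ I S⁺. Everything below (matrix sizes, noise level, random RPSs rather than the paper's optimal ones) is an assumption for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # True 4x4 system matrix of a hypothetical full Stokes polarimeter.
    A_true = rng.uniform(-1, 1, size=(4, 4))

    # Columns of S are the Stokes vectors of the reference polarization states.
    # Using more (well-spread) RPSs than unknowns averages down the noise.
    S = rng.uniform(-1, 1, size=(4, 12))
    S[0] = 1.0                                    # unit-intensity states

    # Measured intensities with additive Gaussian noise.
    I = A_true @ S + rng.normal(scale=0.01, size=(4, 12))

    # Pseudo-inverse estimate of the system matrix from the calibration data.
    A_hat = I @ np.linalg.pinv(S)
    print("max entry error:", np.abs(A_hat - A_true).max())
    ```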

  19. Using confidence intervals to evaluate the focus alignment of spectrograph detector arrays.

    PubMed

    Sawyer, Travis W; Hawkins, Kyle S; Damento, Michael

    2017-06-20

    High-resolution spectrographs extract detailed spectral information of a sample and are frequently used in astronomy, laser-induced breakdown spectroscopy, and Raman spectroscopy. These instruments employ dispersive elements such as prisms and diffraction gratings to spatially separate different wavelengths of light, which are then detected by a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) detector array. Precise alignment along the optical axis (focus position) of the detector array is critical to maximize the instrumental resolution; however, traditional approaches of scanning the detector through focus lack a quantitative measure of precision, limiting the repeatability and relying on one's experience. Here we propose a method to evaluate the focus alignment of spectrograph detector arrays by establishing confidence intervals to measure the alignment precision. We show that propagation of uncertainty can be used to estimate the variance in an alignment, thus providing a quantitative and repeatable means to evaluate the precision and confidence of an alignment. We test the approach by aligning the detector array of a prototype miniature echelle spectrograph. The results indicate that the procedure effectively quantifies alignment precision, enabling one to objectively determine when an alignment has reached an acceptable level. This quantitative approach also provides a foundation for further optimization, including automated alignment. Furthermore, the procedure introduced here can be extended to other alignment techniques that rely on numerically fitting data to a model, providing a general framework for evaluating the precision of alignment methods.

  20. Amount and distribution of isozyme variation in various conifer species

    Treesearch

    M. Thompson Conkle

    1980-01-01

    Estimation of the relative amount and the geographic distribution of genetically controlled variation is a central topic of tree resource management. Biochemical data from the analysis of forest tree enzyme variants provides a direct and precise measure of allele frequencies of tree genes.

  1. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
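
    The core of the method can be sketched for the traditional search-curve sampling: each parameter is driven by its own frequency, and a parameter's partial variance is read off the output power spectrum at that frequency and its harmonics. The frequency set, toy additive model, and four-harmonic cutoff below are illustrative choices, not the paper's designs:

    ```python
    import numpy as np

    # Minimal FAST sketch with traditional search-curve sampling.
    # Frequencies must be chosen to limit interference; this small set for
    # 3 parameters is illustrative.
    omega = np.array([11, 35, 61])
    N = 2 * 4 * omega.max() + 1                  # enough samples for 4 harmonics
    s = np.pi * (2 * np.arange(N) - N + 1) / N   # s in (-pi, pi)

    # Search curve: x_i in (0,1) driven by its own frequency omega_i.
    X = 0.5 + np.arcsin(np.sin(np.outer(omega, s))) / np.pi

    # Toy model: additive effects of different strength, no interactions.
    y = 1.0 * X[0] + 2.0 * X[1] + 0.5 * X[2]

    coeff = np.fft.rfft(y) / N             # Fourier coefficients of the output
    power = np.abs(coeff[1:]) ** 2         # power spectrum, excluding the mean
    V_total = power.sum()

    for i, w in enumerate(omega):
        harmonics = [p * w for p in (1, 2, 3, 4) if p * w <= len(power)]
        V_i = power[[h - 1 for h in harmonics]].sum()
        print(f"S_{i + 1} ~ {V_i / V_total:.2f}")   # roughly 1 : 4 : 0.25
    ```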

  2. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST).

    PubMed

    Xu, Chonggang; Gertner, George

    2011-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we theoretically show that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors in different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.

  3. Estimation of precipitable water vapour using kinematic GNSS precise point positioning over an altitude range of 1 km

    NASA Astrophysics Data System (ADS)

    Webb, S. R.; Penna, N. T.; Clarke, P. J.; Webster, S.; Martin, I.

    2013-12-01

    The estimation of total precipitable water vapour (PWV) using kinematic GNSS has been investigated since around 2001, aiming to extend the use of static ground-based GNSS, from which PWV estimates are now operationally assimilated into numerical weather prediction models. To date, kinematic GNSS PWV studies suggest a PWV measurement agreement with radiosondes of 2-3 mm, almost commensurate with static GNSS measurement accuracy, but only shipborne experiments have so far been carried out. As a first step towards extending such sea-level studies to platforms that operate at a range of altitudes, such as airplanes or land-based vehicles, the kinematic GNSS estimation of PWV over an exactly repeated trajectory is considered. A data set was collected from a GNSS receiver and antenna mounted on a carriage of the Snowdon Mountain Railway, UK, which continually ascends and descends through 950 m of vertical relief. Static GNSS reference receivers were installed at the top and bottom of the altitude profile, and derived zenith wet delay (ZWD) was interpolated to the altitude of the train to provide reference values, together with profile estimates from the 100 m resolution runs of the Met Office's Unified Model. We demonstrate similar GNSS accuracies as obtained from previous shipborne studies, namely a double-difference relative kinematic GNSS ZWD accuracy within 14 mm, and a kinematic GNSS precise point positioning ZWD accuracy within 15 mm. The latter is a more typical airborne PWV estimation scenario, i.e., without reliance on ground-based GNSS reference stations. We show that the kinematic GPS-only precise point positioning ZWD estimation is enhanced by also incorporating GLONASS observations.
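
    The final step from an estimated zenith wet delay to PWV is a standard conversion through a dimensionless factor that depends on the weighted mean temperature of the atmosphere. A sketch with commonly used refractivity constants follows; the mean-temperature value is an assumption, not one from the study:

    ```python
    # Convert GNSS zenith wet delay (ZWD) to precipitable water vapour (PWV)
    # via the standard dimensionless factor Pi(Tm). Constants follow common
    # literature values; Tm (weighted mean temperature) here is assumed.
    RHO_W = 1000.0      # density of liquid water, kg/m^3
    R_V = 461.5         # specific gas constant of water vapour, J/(kg K)
    K2P = 0.221         # k2' = 22.1 K/hPa, expressed in K/Pa
    K3 = 3739.0         # k3 = 3.739e5 K^2/hPa, expressed in K^2/Pa

    def zwd_to_pwv(zwd_m, tm_k=270.0):
        pi = 1.0e6 / (RHO_W * R_V * (K2P + K3 / tm_k))
        return pi * zwd_m

    print(zwd_to_pwv(0.10))   # 0.10 m ZWD -> ~0.015 m (~15 mm) of PWV
    ```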

  4. Pesticide Environmental Accounting: a method for assessing the external costs of individual pesticide applications.

    PubMed

    Leach, A W; Mumford, J D

    2008-01-01

    The Pesticide Environmental Accounting (PEA) tool provides a monetary estimate of environmental and health impacts per hectare-application for any pesticide. The model combines the Environmental Impact Quotient method and a methodology for absolute estimates of external pesticide costs in the UK, USA and Germany. For many countries, resources are not available for intensive assessments of external pesticide costs. The model therefore transfers external pesticide cost estimates from the UK, USA and Germany to Mediterranean countries. Economic and policy applications include estimating the impacts of pesticide reduction policies or the benefits of technologies replacing pesticides, such as the sterile insect technique. The system integrates disparate data and approaches into a single logical method. The assumptions in the system provide transparency and consistency, but at the cost of some specificity and precision, a reasonable trade-off for a method that provides both comparative estimates of pesticide impacts and area-based assessments of absolute impacts.

  5. Multi-objective optimization in quantum parameter estimation

    NASA Astrophysics Data System (ADS)

    Gong, BeiLi; Cui, Wei

    2018-04-01

    We investigate quantum parameter estimation based on linear and Kerr-type nonlinear controls in an open quantum system, and consider the dissipation rate as an unknown parameter. We show that while the control improves the precision of parameter estimation, it usually introduces a significant deformation to the system state. Moreover, we propose a multi-objective model to optimize the two conflicting objectives: (1) maximizing the Fisher information, which improves the parameter estimation precision, and (2) minimizing the deformation of the system state, which maintains its fidelity. Finally, simulations of a simplified ɛ-constrained model demonstrate the feasibility of Hamiltonian control in improving the precision of quantum parameter estimation.

  6. Estimating avian population size using Bowden's estimator

    USGS Publications Warehouse

    Diefenbach, D.R.

    2009-01-01

    Avian researchers often uniquely mark birds, and multiple estimators could be used to estimate population size using individually identified birds. However, most estimators of population size require that all sightings of marked birds be uniquely identified, and many assume homogeneous detection probabilities. Bowden's estimator can incorporate sightings of marked birds that are not uniquely identified and relax assumptions required of other estimators. I used computer simulation to evaluate the performance of Bowden's estimator for situations likely to be encountered in bird studies. When the assumptions of the estimator were met, abundance and variance estimates and confidence-interval coverage were accurate. However, precision was poor for small population sizes (N < 50) unless a large percentage of the population was marked (>75%) and multiple (≥8) sighting surveys were conducted. If additional birds are marked after sighting surveys begin, it is important to initially mark a large proportion of the population (pm ≥ 0.5 if N ≤ 100 or pm > 0.1 if N ≥ 250) and minimize sightings in which birds are not uniquely identified; otherwise, most population estimates will be overestimated by >10%. Bowden's estimator can be useful for avian studies because birds can be resighted multiple times during a single survey, not all sightings of marked birds have to uniquely identify individuals, detection probabilities among birds can vary, and the complete study area does not have to be surveyed. I provide computer code for use with pilot data to design mark-resight surveys to meet desired precision for abundance estimates.

  7. Wideband Global SATCOM (WGS)

    DTIC Science & Technology

    2015-12-01

    system-level testing. ... The WGS-6 financial data is not reported in this SAR because funding is provided by Australia in exchange for access to a... Confidence Level of cost estimate for current APB: 50%. The ICE to support the WGS Milestone C decision... to calculate mathematically the precise confidence levels associated with life-cycle cost estimates prepared for MDAPs. Based on the rigor in...

  8. The effects of SENSE on PROPELLER imaging.

    PubMed

    Chang, Yuchou; Pipe, James G; Karis, John P; Gibbs, Wende N; Zwart, Nicholas R; Schär, Michael

    2015-12-01

    To study how sensitivity encoding (SENSE) impacts periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) imaging, including signal-to-noise ratio (SNR), robustness to motion, precision of motion estimation, and overall image quality. Five volunteers were imaged with three sets of scans. A rapid method for generating the g-factor map was proposed and validated via Monte Carlo simulations. Sensitivity maps were extrapolated to increase the area over which SENSE can be performed and thereby enhance robustness to head motion. The precision of motion estimation for PROPELLER blades unfolded with these sensitivity maps was investigated. An interleaved R-factor PROPELLER sequence was used to acquire data with similar amounts of motion with and without SENSE acceleration. Two neuroradiologists independently and blindly compared 214 image pairs. The proposed method of g-factor calculation produced results similar to those of the Monte Carlo method. Extrapolation and rotation of the sensitivity maps allowed for continued robustness of SENSE unfolding in the presence of motion. SENSE-widened blades improved the precision of rotation and translation estimation. PROPELLER images with a SENSE factor of 3 outperformed traditional PROPELLER images when reconstructing the same number of blades. SENSE not only accelerates PROPELLER but can also improve the robustness and precision of head motion correction, which improves overall image quality even when SNR is lost due to acceleration. The reduction of SNR, as a penalty of acceleration, is characterized by the proposed g-factor method. © 2014 Wiley Periodicals, Inc.

  9. Clinical Strategies for Sampling Word Recognition Performance.

    PubMed

    Schlauch, Robert S; Carney, Edward

    2018-04-17

    Computer simulation was used to estimate the statistical properties of searches for maximum word recognition ability (PB max). These involve presenting multiple lists and discarding all scores but that of the one list that produced the highest score. The simulations, which model limitations inherent in the precision of word recognition scores, were done to inform clinical protocols. A secondary consideration was a derivation of 95% confidence intervals for significant changes in score from phonemic scoring of a 50-word list. The PB max simulations were conducted on a "client" with flat performance-intensity functions. The client's performance was assumed to be 60% initially and 40% for a second assessment. Thousands of estimates were obtained to examine the precision of (a) single lists and (b) multiple lists using a PB max procedure. This method permitted summarizing the precision for assessing a 20% drop in performance. A single 25-word list could identify only 58.4% of the cases in which performance fell from 60% to 40%. A single 125-word list identified 99.8% of the declines correctly. Presenting 3 or 5 lists to find PB max produced an undesirable finding: an increase in the word recognition score. A 25-word list produces unacceptably low precision for making clinical decisions. This finding holds in both single and multiple 25-word lists, as in a search for PB max. A table is provided, giving estimates of 95% critical ranges for successive presentations of a 50-word list analyzed by the number of phonemes correctly identified.
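
    The flavour of such a simulation is easy to reproduce: word-list scores behave like binomial draws, so one can ask how often a true drop from 60% to 40% is detectable for different list lengths. The one-sided criterion below is a simplifying assumption, not the paper's exact procedure:

    ```python
    import numpy as np
    from scipy.stats import binom

    # How often does a true drop from 60% to 40% produce a score below the
    # stable client's lower 95% bound? Criterion chosen for illustration.
    rng = np.random.default_rng(0)
    n_sim = 100_000
    for n_words in (25, 50, 125):
        crit = binom.ppf(0.025, n_words, 0.60)             # lower bound at 60%
        hits = rng.binomial(n_words, 0.40, n_sim) < crit   # drop detected
        print(f"{n_words:3d}-word list: drop detected in {hits.mean():.1%} of runs")
    ```

    Longer lists sharply increase detectability, mirroring the reported contrast between 25-word and 125-word lists.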

  10. A predictive model of avian natal dispersal distance provides prior information for investigating response to landscape change.

    PubMed

    Garrard, Georgia E; McCarthy, Michael A; Vesk, Peter A; Radford, James Q; Bennett, Andrew F

    2012-01-01

    1. Informative Bayesian priors can improve the precision of estimates in ecological studies or estimate parameters for which little or no information is available. While Bayesian analyses are becoming more popular in ecology, the use of strongly informative priors remains rare, perhaps because examples of informative priors are not readily available in the published literature. 2. Dispersal distance is an important ecological parameter, but is difficult to measure and estimates are scarce. General models that provide informative prior estimates of dispersal distances will therefore be valuable. 3. Using a world-wide data set on birds, we develop a predictive model of median natal dispersal distance that includes body mass, wingspan, sex and feeding guild. This model predicts median dispersal distance well when using the fitted data and an independent test data set, explaining up to 53% of the variation. 4. Using this model, we predict a priori estimates of median dispersal distance for 57 woodland-dependent bird species in northern Victoria, Australia. These estimates are then used to investigate the relationship between dispersal ability and vulnerability to landscape-scale changes in habitat cover and fragmentation. 5. We find evidence that woodland bird species with poor predicted dispersal ability are more vulnerable to habitat fragmentation than those species with longer predicted dispersal distances, thus improving the understanding of this important phenomenon. 6. The value of constructing informative priors from existing information is also demonstrated. When used as informative priors for four example species, predicted dispersal distances reduced the 95% credible intervals of posterior estimates of dispersal distance by 8-19%. Further, should we have wished to collect information on avian dispersal distances and relate it to species' responses to habitat loss and fragmentation, data from 221 individuals across 57 species would have been required to obtain estimates with the same precision as those provided by the general model. © 2011 The Authors. Journal of Animal Ecology © 2011 British Ecological Society.

  11. Designing an effective mark-recapture study of Antarctic blue whales.

    PubMed

    Peel, David; Bravington, Mark; Kelly, Natalie; Double, Michael C

    2015-06-01

    To properly conserve and manage wild populations, it is important to have information on abundance and population dynamics. In the case of rare and cryptic species, especially in remote locations, surveys can be difficult and expensive, and run the risk of not producing sample sizes large enough to yield precise estimates. Therefore, it is crucial to conduct preliminary analysis to determine if the study will produce useable estimates. The focus of this paper is a proposed mark-recapture study of Antarctic blue whales (Balaenoptera musculus intermedia). Antarctic blue whales were hunted to near extinction up until the mid-1960s, when commercial exploitation of this species ended. Current abundance estimates are a decade old. Furthermore, at present, there are no formal circumpolar-level cetacean surveys operating in Antarctic waters and, specifically, there is no strategy to monitor the potential recovery of Antarctic blue whales. Hence the work in this paper was motivated by the need to inform decisions on strategies for future monitoring of the Antarctic blue whale population. The paper describes a model to predict the precision and bias of estimates from a proposed survey program. The analysis showed that mark-recapture is indeed a suitable method to provide a circumpolar abundance estimate of Antarctic blue whales, with the precision of the abundance estimate at the midpoint of the program predicted to be between 0.2 and 0.3. However, this was only if passive acoustic tracking was utilized to increase the encounter rate. The analysis also provided guidance on the general design for an Antarctic blue whale program, showing that it requires a 12-year duration, although surveys do not necessarily need to be run every year if multiple vessels are available to clump effort. Mark-recapture is based on a number of assumptions; it was evident from the analysis that ongoing analysis and monitoring of the data would be required to check that such assumptions hold (e.g., test for heterogeneity), with the modeling adjusted as needed.
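
    The kind of design calculation described can be sketched with a two-sample simulation using the bias-corrected Chapman estimator; the population size and sample sizes below are hypothetical and deliberately chosen so the resulting coefficient of variation lands in the 0.2-0.3 range discussed, not taken from the paper:

    ```python
    import numpy as np

    # Two-sample mark-recapture precision via simulation (Chapman estimator).
    rng = np.random.default_rng(0)
    N, n1, n2, n_sim = 2000, 150, 150, 20_000

    marked = np.zeros(N, bool)
    est = np.empty(n_sim)
    for k in range(n_sim):
        marked[:] = False
        marked[rng.choice(N, n1, replace=False)] = True   # first capture
        resight = rng.choice(N, n2, replace=False)        # second survey
        m = marked[resight].sum()                         # recaptures
        est[k] = (n1 + 1) * (n2 + 1) / (m + 1) - 1        # Chapman estimator

    cv = est.std() / est.mean()
    print(f"mean estimate {est.mean():.0f}, CV {cv:.2f}")
    ```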

  12. An Inertial Sensor-Based Method for Estimating the Athlete's Relative Joint Center Positions and Center of Mass Kinematics in Alpine Ski Racing

    PubMed Central

    Fasel, Benedikt; Spörri, Jörg; Schütz, Pascal; Lorenzetti, Silvio; Aminian, Kamiar

    2017-01-01

    For the purpose of gaining a deeper understanding of the relationship between external training load and health in competitive alpine skiing, an accurate and precise estimation of the athlete's kinematics is an essential methodological prerequisite. This study proposes an inertial sensor-based method to estimate the athlete's relative joint center positions and center of mass (CoM) kinematics in alpine skiing. Eleven inertial sensors were fixed to the lower and upper limbs, trunk, and head. The relative positions of the ankle, knee, hip, shoulder, elbow, and wrist joint centers, as well as the athlete's CoM kinematics, were validated against a marker-based optoelectronic motion capture system during indoor carpet skiing. For all joint centers analyzed, position accuracy (mean error) was below 110 mm and precision (error standard deviation) was below 30 mm. CoM position accuracy and precision were 25.7 and 6.7 mm, respectively. Both the accuracy and precision of the system in estimating the distance between the ankle of the outside leg and the CoM (a measure quantifying the skier's overall vertical motion) were found to be below 11 mm. Somewhat poorer accuracy and precision values (below 77 mm) were observed for the athlete's fore-aft position (i.e., the projection of the outer ankle-CoM vector onto the line corresponding to the projection of the ski's longitudinal axis on the snow surface). In addition, the system was found to be sensitive enough to distinguish between different types of turns (wide/narrow). Thus, the method proposed in this paper may also provide a useful, pervasive way to monitor and control adverse external loading patterns that occur during regular on-snow training. Moreover, as demonstrated earlier, such an approach might have a certain potential to quantify competition time, movement repetitions and/or the accelerations acting on the different segments of the human body. However, before becoming feasible for application in daily training, future studies should primarily focus on a simplification of the sensor setup, as well as a fusion with global navigation satellite systems (i.e., the estimation of the absolute joint and CoM positions). PMID:29163196

  13. An Inertial Sensor-Based Method for Estimating the Athlete's Relative Joint Center Positions and Center of Mass Kinematics in Alpine Ski Racing.

    PubMed

    Fasel, Benedikt; Spörri, Jörg; Schütz, Pascal; Lorenzetti, Silvio; Aminian, Kamiar

    2017-01-01

    For the purpose of gaining a deeper understanding of the relationship between external training load and health in competitive alpine skiing, an accurate and precise estimation of the athlete's kinematics is an essential methodological prerequisite. This study proposes an inertial sensor-based method to estimate the athlete's relative joint center positions and center of mass (CoM) kinematics in alpine skiing. Eleven inertial sensors were fixed to the lower and upper limbs, trunk, and head. The relative positions of the ankle, knee, hip, shoulder, elbow, and wrist joint centers, as well as the athlete's CoM kinematics, were validated against a marker-based optoelectronic motion capture system during indoor carpet skiing. For all joint centers analyzed, position accuracy (mean error) was below 110 mm and precision (error standard deviation) was below 30 mm. CoM position accuracy and precision were 25.7 and 6.7 mm, respectively. Both the accuracy and precision of the system in estimating the distance between the ankle of the outside leg and the CoM (a measure quantifying the skier's overall vertical motion) were found to be below 11 mm. Somewhat poorer accuracy and precision values (below 77 mm) were observed for the athlete's fore-aft position (i.e., the projection of the outer ankle-CoM vector onto the line corresponding to the projection of the ski's longitudinal axis on the snow surface). In addition, the system was found to be sensitive enough to distinguish between different types of turns (wide/narrow). Thus, the method proposed in this paper may also provide a useful, pervasive way to monitor and control adverse external loading patterns that occur during regular on-snow training. Moreover, as demonstrated earlier, such an approach might have a certain potential to quantify competition time, movement repetitions and/or the accelerations acting on the different segments of the human body. However, before becoming feasible for application in daily training, future studies should primarily focus on a simplification of the sensor setup, as well as a fusion with global navigation satellite systems (i.e., the estimation of the absolute joint and CoM positions).

  14. A new linear least squares method for T1 estimation from SPGR signals with multiple TRs

    NASA Astrophysics Data System (ADS)

    Chang, Lin-Ching; Koay, Cheng Guan; Basser, Peter J.; Pierpaoli, Carlo

    2009-02-01

    The longitudinal relaxation time, T1, can be estimated from two or more spoiled gradient recalled echo (SPGR) images with two or more flip angles and one or more repetition times (TRs). The function relating signal intensity to the parameters is nonlinear; T1 maps can therefore be computed from SPGR signals using nonlinear least squares regression. A widely used linear method transforms the nonlinear model by assuming a fixed TR in SPGR images. This constraint is not desirable, since multiple TRs are a clinically practical way to reduce the total acquisition time, to satisfy the required resolution, and/or to combine SPGR data acquired at different times. A new linear least squares method is proposed using a first-order Taylor expansion. Monte Carlo simulations of SPGR experiments are used to evaluate the accuracy and precision of the T1 estimates from the proposed linear and the nonlinear methods. We show that the new linear least squares method provides T1 estimates comparable in both precision and accuracy to those from the nonlinear method, while allowing multiple TRs and reducing computation time significantly.
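
    For context, the classic fixed-TR linearisation that the paper improves upon can be shown in a few lines: rewriting the SPGR signal equation as S/sin(α) = E·S/tan(α) + M0(1−E), with E = exp(−TR/T1), turns T1 mapping into a straight-line fit. This sketch shows only that fixed-TR baseline (with noiseless synthetic signals), not the paper's multi-TR Taylor-expansion method:

    ```python
    import numpy as np

    TR = 0.005                       # fixed repetition time, s (classic case)
    T1_true, M0 = 1.0, 1000.0
    alphas = np.deg2rad([2, 5, 10, 15, 20])

    # SPGR signal model: S = M0 sin(a) (1 - E) / (1 - E cos(a)), E = exp(-TR/T1)
    E = np.exp(-TR / T1_true)
    S = M0 * np.sin(alphas) * (1 - E) / (1 - E * np.cos(alphas))

    # Linearisation: S/sin(a) = E * S/tan(a) + M0*(1 - E); slope gives T1.
    y = S / np.sin(alphas)
    x = S / np.tan(alphas)
    slope, intercept = np.polyfit(x, y, 1)
    print(f"T1 estimate: {-TR / np.log(slope):.3f} s")   # ~1.000 s
    ```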

  15. Quantum-classical boundary for precision optical phase estimation

    NASA Astrophysics Data System (ADS)

    Birchall, Patrick M.; O'Brien, Jeremy L.; Matthews, Jonathan C. F.; Cable, Hugo

    2017-12-01

    Understanding the fundamental limits on the precision to which an optical phase can be estimated is of key interest for many investigative techniques utilized across science and technology. We study the estimation of a fixed optical phase shift due to a sample which has an associated optical loss, and compare phase estimation strategies using classical and nonclassical probe states. These comparisons are based on the attainable (quantum) Fisher information calculated per number of photons absorbed or scattered by the sample throughout the sensing process. We find that for a given number of incident photons upon the unknown phase, nonclassical techniques in principle provide less than a 20 % reduction in root-mean-square error (RMSE) in comparison with ideal classical techniques in multipass optical setups. Using classical techniques in a different optical setup that we analyze, which incorporates additional stages of interference during the sensing process, the achievable reduction in RMSE afforded by nonclassical techniques falls to only ≃4 % . We explain how these conclusions change when nonclassical techniques are compared to classical probe states in nonideal multipass optical setups, with additional photon losses due to the measurement apparatus.

  16. A generalised random encounter model for estimating animal density with remote sensor data.

    PubMed

    Lucas, Tim C D; Moorcroft, Elizabeth A; Freeman, Robin; Rowcliffe, J Marcus; Jones, Kate E

    2015-05-01

    Wildlife monitoring technology is advancing rapidly and the use of remote sensors such as camera traps and acoustic detectors is becoming common in both the terrestrial and marine environments. Current methods to estimate abundance or density require individual recognition of animals or knowing the distance of the animal from the sensor, which is often difficult. A method without these requirements, the random encounter model (REM), has been successfully applied to estimate animal densities from count data generated from camera traps. However, count data from acoustic detectors do not fit the assumptions of the REM due to the directionality of animal signals. We developed a generalised REM (gREM), to estimate absolute animal density from count data from both camera traps and acoustic detectors. We derived the gREM for different combinations of sensor detection widths and animal signal widths (a measure of directionality). We tested the accuracy and precision of this model using simulations of different combinations of sensor detection widths and animal signal widths, number of captures and models of animal movement. We find that the gREM produces accurate estimates of absolute animal density for all combinations of sensor detection widths and animal signal widths. However, larger sensor detection and animal signal widths were found to be more precise. While the model is accurate for all capture efforts tested, the precision of the estimate increases with the number of captures. We found no effect of different animal movement models on the accuracy and precision of the gREM. We conclude that the gREM provides an effective method to estimate absolute animal densities from remote sensor count data over a range of sensor and animal signal widths. The gREM is applicable for count data obtained in both marine and terrestrial environments, visually or acoustically (e.g. big cats, sharks, birds, echolocating bats and cetaceans). As sensors such as camera traps and acoustic detectors become more ubiquitous, the gREM will be increasingly useful for monitoring unmarked animal populations across broad spatial, temporal and taxonomic scales.
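
    The core REM calculation that the gREM generalises (Rowcliffe et al. 2008) relates trap rate to density via animal speed and the sensor's detection zone: trap rate = D·v·r·(2+θ)/π. A minimal sketch follows; all input values are hypothetical, and the gREM's additional animal-signal-width term is not shown:

    ```python
    import math

    # Core REM density calculation for an undirected animal signal.
    y, t = 120.0, 1000.0        # detections, sensor-days of effort
    v = 2.0                     # day range: distance moved per day (km/day)
    r = 0.01                    # sensor detection radius (km)
    theta = math.radians(40)    # sensor detection arc (radians)

    density = (y / t) * math.pi / (v * r * (2 + theta))
    print(f"density ~ {density:.1f} animals per km^2")
    ```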

  17. Improving Weather Forecasts Through Reduced Precision Data Assimilation

    NASA Astrophysics Data System (ADS)

    Hatfield, Samuel; Düben, Peter; Palmer, Tim

    2017-04-01

    We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double precision assimilation with a smaller ensemble, for no extra computational cost. This results in around half a day extra of skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. Additionally, we investigate the sensitivity of these results to observational error and assimilation window length. Half precision hardware will become available very shortly, with the introduction of Nvidia's Pascal GPU architecture and the Intel Knights Mill coprocessor. We hope that the results presented here will encourage the uptake of this hardware. References [1] Peter D. Düben and T. N. Palmer, 2014: Benchmark Tests for Numerical Weather Forecasts on Inexact Hardware, Mon. Weather Rev., 142, 3809-3829 [2] Peter D. Düben, Hugh McNamara and T. N. Palmer, 2014: The use of imprecise processing to improve accuracy in weather & climate prediction, J. Comput. Phys., 271, 2-18
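
    The experimental setup can be imitated in a few lines: run the Lorenz '96 model while carrying the state at reduced precision and compare against a double-precision reference. The forward-Euler integrator and step count below are simplifications chosen to keep the sketch short (RK4 is the usual choice), and float16 emulates half-precision hardware:

    ```python
    import numpy as np

    def l96_step(x, F=8.0, dt=0.005):
        # One forward-Euler step of the Lorenz '96 model:
        # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
        dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
        return x + dt * dxdt

    # 40-variable standard setup, perturbed from the x_i = F equilibrium.
    x64 = np.full(40, 8.0)
    x64[0] += 0.01
    x16 = x64.astype(np.float16)        # emulate half-precision arithmetic
    for _ in range(2000):               # integrate to t = 10
        x64 = l96_step(x64)
        x16 = l96_step(x16).astype(np.float16)

    rms = np.sqrt(np.mean((x64 - x16.astype(np.float64)) ** 2))
    print(f"RMS divergence between double- and half-precision runs: {rms:.3f}")
    ```

    Because the model is chaotic, rounding differences grow with time; the question studied is whether that growth stays within the noise the assimilation system already tolerates.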

  18. Real-time, autonomous precise satellite orbit determination using the global positioning system

    NASA Astrophysics Data System (ADS)

    Goldstein, David Ben

    2000-10-01

    The desire for autonomously generated, rapidly available, and highly accurate satellite ephemeris is growing with the proliferation of constellations of satellites and the cost and overhead of ground tracking resources. Autonomous Orbit Determination (OD) may be done on the ground in a post-processing mode or in real-time on board a satellite and may be accomplished days, hours or immediately after observations are processed. The Global Positioning System (GPS) is now widely used as an alternative to ground tracking resources to supply observation data for satellite positioning and navigation. GPS is accurate, inexpensive, provides continuous coverage, and is an excellent choice for autonomous systems. In an effort to estimate precise satellite ephemeris in real-time on board a satellite, the Goddard Space Flight Center (GSFC) created the GPS Enhanced OD Experiment (GEODE) flight navigation software. This dissertation offers alternative methods and improvements to GEODE to increase on board autonomy and real-time total position accuracy and precision without increasing computational burden. First, GEODE is modified to include a Gravity Acceleration Approximation Function (GAAF) to replace the traditional spherical harmonic representation of the gravity field. Next, an ionospheric correction method called Differenced Range Versus Integrated Doppler (DRVID) is applied to correct for ionospheric errors in the GPS measurements used in GEODE. Then, Dynamic Model Compensation (DMC) is added to estimate unmodeled and/or mismodeled forces in the dynamic model and to provide an alternative process noise variance-covariance formulation. Finally, a Genetic Algorithm (GA) is implemented in the form of Genetic Model Compensation (GMC) to optimize DMC forcing noise parameters. Application of GAAF, DRVID and DMC improved GEODE's position estimates by 28.3% when applied to GPS/MET data collected in the presence of Selective Availability (SA), 17.5% when SA is removed from the GPS/MET data and 10.8% on SA-free TOPEX data. Position estimates with RSS errors below 1 meter are now achieved using SA-free TOPEX data. DRVID causes an increase in computational burden while GAAF and DMC reduce computational burden. The net effect of applying GAAF, DRVID and DMC is an improvement in GEODE's accuracy/precision without an increase in computational burden.

  19. Mourning dove population trend estimates from Call-Count and North American Breeding Bird Surveys

    USGS Publications Warehouse

    Sauer, J.R.; Dolton, D.D.; Droege, S.

    1994-01-01

    The mourning dove (Zenaida macroura) Call-Count Survey and the North American Breeding Bird Survey provide information on population trends of mourning doves throughout the continental United States. Because surveys are an integral part of the development of hunting regulations, a need exists to determine which survey provides the more precise information. We estimated population trends from 1966 to 1988 by state and dove management unit, and assessed the relative efficiency of each survey. Estimates of population trend differ (P < 0.05) between surveys in 11 of 48 states; 9 of the 11 states with divergent results occur in the Eastern Management Unit. Differences were probably a consequence of smaller sample sizes in the Call-Count Survey. The Breeding Bird Survey generally provided trend estimates with smaller variances than did the Call-Count Survey. Although the Call-Count Survey probably provides more within-route accuracy because of survey methods and timing, the Breeding Bird Survey has a larger sample size of survey routes and greater consistency of coverage in the Eastern Unit.

  20. Simple and accurate methods for quantifying deformation, disruption, and development in biological tissues

    PubMed Central

    Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros

    2014-01-01

    When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601

  1. Accuracy and reliability of multi-GNSS real-time precise positioning: GPS, GLONASS, BeiDou, and Galileo

    NASA Astrophysics Data System (ADS)

    Li, Xingxing; Ge, Maorong; Dai, Xiaolei; Ren, Xiaodong; Fritsche, Mathias; Wickert, Jens; Schuh, Harald

    2015-06-01

    In this contribution, we present a GPS+GLONASS+BeiDou+Galileo four-system model to fully exploit the observations of all these four navigation satellite systems for real-time precise orbit determination, clock estimation and positioning. A rigorous multi-GNSS analysis is performed to achieve the best possible consistency by processing the observations from different GNSS together in one common parameter estimation procedure. Meanwhile, an efficient multi-GNSS real-time precise positioning service system is designed and demonstrated by using the multi-GNSS Experiment, BeiDou Experimental Tracking Network, and International GNSS Service networks including stations all over the world. The statistical analysis of the 6-h predicted orbits shows that the radial and cross root mean square (RMS) values are smaller than 10 cm for BeiDou and Galileo, and smaller than 5 cm for both GLONASS and GPS satellites. The RMS values of the clock differences between real-time and batch-processed solutions for GPS satellites are about 0.10 ns, while the RMS values for BeiDou, Galileo and GLONASS are 0.13, 0.13 and 0.14 ns, respectively. The addition of the BeiDou, Galileo and GLONASS systems to the standard GPS-only processing reduces the convergence time by almost 70 %, while the positioning accuracy is improved by about 25 %. Some outliers in the GPS-only solutions vanish when multi-GNSS observations are processed simultaneously. The availability and reliability of GPS precise positioning decrease dramatically as the elevation cutoff increases. However, the accuracy of multi-GNSS precise point positioning (PPP) is hardly decreased, and a few centimeters are still achievable in the horizontal components even with a 40° elevation cutoff. At 30° and 40° elevation cutoffs, the availability rates of the GPS-only solution drop significantly to only around 70 and 40 %, respectively. However, multi-GNSS PPP can provide precise position estimates continuously (availability rate is more than 99.5 %) even up to a 40° elevation cutoff (e.g., in urban canyons).

  2. Image subsampling and point scoring approaches for large-scale marine benthic monitoring programs

    NASA Astrophysics Data System (ADS)

    Perkins, Nicholas R.; Foster, Scott D.; Hill, Nicole A.; Barrett, Neville S.

    2016-07-01

    Benthic imagery is an effective tool for quantitative description of ecologically and economically important benthic habitats and biota. The recent development of autonomous underwater vehicles (AUVs) allows surveying of spatial scales that were previously unfeasible. However, an AUV collects a large number of images, the scoring of which is time and labour intensive. There is a need to optimise the way that subsamples of imagery are chosen and scored to gain meaningful inferences for ecological monitoring studies. We examine the trade-off between the number of images selected within transects and the number of random points scored within images on the percent cover of target biota, the typical output of such monitoring programs. We also investigate the efficacy of various image selection approaches, such as systematic or random, on the bias and precision of cover estimates. We use simulated biotas that have varying size, abundance and distributional patterns. We find that a relatively small sampling effort is required to minimise bias. An increased precision for groups that are likely to be the focus of monitoring programs is best gained through increasing the number of images sampled rather than the number of points scored within images. For rare species, sampling using point count approaches is unlikely to provide sufficient precision, and alternative sampling approaches may need to be employed. The approach by which images are selected (simple random sampling, regularly spaced etc.) had no discernible effect on mean and variance estimates, regardless of the distributional pattern of biota. Field validation of our findings is provided through Monte Carlo resampling analysis of a previously scored benthic survey from temperate waters. We show that point count sampling approaches are capable of providing relatively precise cover estimates for candidate groups that are not overly rare. The amount of sampling required, in terms of both the number of images and number of points, varies with the abundance, size and distributional pattern of target biota. Therefore, we advocate either the incorporation of prior knowledge or the use of baseline surveys to establish key properties of intended target biota in the initial stages of monitoring programs.
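
    The images-versus-points trade-off can be checked with a quick Monte Carlo sketch. This is not the paper's simulation (which used explicit spatial patterns of biota): here, between-image patchiness is modelled with a beta distribution whose invented 'clumping' knob controls evenness, and within-image point scoring is binomial:

      import numpy as np

      rng = np.random.default_rng(1)

      def survey(n_images, n_points, mean_cover=0.2, clumping=5.0, n_reps=2000):
          """Estimate percent cover from point-scored images (illustrative model)."""
          a = mean_cover * clumping
          b = (1.0 - mean_cover) * clumping
          covers = rng.beta(a, b, size=(n_reps, n_images))      # per-image true cover
          hits = rng.binomial(n_points, covers)                 # points landing on biota
          estimates = hits.sum(axis=1) / (n_images * n_points)  # survey-level cover
          return estimates.mean(), estimates.std()

      # Same total effort (2,000 points), two allocations:
      print(survey(n_images=20,  n_points=100))   # few images, many points
      print(survey(n_images=100, n_points=20))    # many images, few points

    With the total effort fixed, spreading points across more images gives the smaller standard deviation, matching the finding above that between-image variation dominates.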

  3. Application of a hybrid model to reduce bias and improve precision in population estimates for elk (Cervus elaphus) inhabiting a cold desert ecosystem

    USGS Publications Warehouse

    Schoenecker, Kathryn A.; Lubow, Bruce C.

    2016-01-01

    Accurately estimating the size of wildlife populations is critical to wildlife management and conservation of species. Raw counts or “minimum counts” are still used as a basis for wildlife management decisions. Uncorrected raw counts are not only negatively biased due to failure to account for undetected animals, but also provide no estimate of precision on which to judge the utility of counts. We applied a hybrid population estimation technique that combined sightability modeling, radio collar-based mark-resight, and simultaneous double count (double-observer) modeling to estimate the population size of elk in a high elevation desert ecosystem. Combining several models maximizes the strengths of each individual model while minimizing their singular weaknesses. We collected data with aerial helicopter surveys of the elk population in the San Luis Valley and adjacent mountains in Colorado, USA in 2005 and 2007. We present estimates from 7 alternative analyses: 3 based on different methods for obtaining a raw count and 4 based on different statistical models to correct for sighting probability bias. The most reliable of these approaches is a hybrid double-observer sightability model (model MH), which uses detection patterns of 2 independent observers in a helicopter plus telemetry-based detections of radio-collared elk groups. Data were fit to customized mark-resight models with individual sighting covariates. Error estimates were obtained by a bootstrapping procedure. The hybrid method was an improvement over commonly used alternatives, with improved precision compared to sightability modeling and reduced bias compared to double-observer modeling. The resulting population estimate corrected for multiple sources of undercount bias that, if left uncorrected, would have underestimated the true population size by as much as 22.9%. Our comparison of these alternative methods demonstrates how various components of our method contribute to improving the final estimate and demonstrates why each is necessary.
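
    At the heart of the hybrid estimator is a double-observer correction. As a toy illustration only (model MH additionally folds in sightability covariates, telemetry-based detections, and bootstrap error estimation), a minimal sketch of the core correction with invented counts:

      def double_observer_estimate(x1, x2, x12):
          """Lincoln-Petersen-style correction for two independent aerial observers.

          x1, x2 : groups detected by observer 1 only / observer 2 only
          x12    : groups detected by both
          """
          p1 = x12 / (x2 + x12)          # observer 1's detection probability
          p2 = x12 / (x1 + x12)          # observer 2's detection probability
          seen = x1 + x2 + x12
          p_any = 1.0 - (1.0 - p1) * (1.0 - p2)
          return seen / p_any            # count corrected for groups missed by both

      print(double_observer_estimate(x1=14, x2=9, x12=52))   # ~77 groups from 75 seen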

  4. Determination of Eros Physical Parameters for Near Earth Asteroid Rendezvous Orbit Phase Navigation

    NASA Technical Reports Server (NTRS)

    Miller, J. K.; Antreasian, P. J.; Georgini, J.; Owen, W. M.; Williams, B. G.; Yeomans, D. K.

    1995-01-01

    Navigation of the orbit phase of the Near Earth Asteroid Rendezvous (NEAR) mission will require determination of certain physical parameters describing the size, shape, gravity field, attitude and inertial properties of Eros. Prior to launch, little was known about Eros except for its orbit, which could be determined with high precision from ground-based telescope observations. Radar bounce and light curve data provided a rough estimate of Eros' shape and a fairly good estimate of the pole, prime meridian and spin rate. However, the determination of the NEAR spacecraft orbit requires a high precision model of Eros's physical parameters, and the ground-based data provide only marginal a priori information. Eros is the principal source of perturbations of the spacecraft's trajectory and the principal source of data for determining the orbit. The initial orbit determination strategy is therefore concerned with developing a precise model of Eros. The original plan for Eros orbital operations was to execute a series of rendezvous burns beginning on December 20, 1998 and insert into a close Eros orbit in January 1999. As a result of an unplanned termination of the rendezvous burn on December 20, 1998, the NEAR spacecraft continued on its high velocity approach trajectory and passed within 3900 km of Eros on December 23, 1998. The planned rendezvous burn was delayed until January 3, 1999, which resulted in the spacecraft being placed on a trajectory that slowly returns to Eros, with a subsequent delay of close Eros orbital operations until February 2001. The flyby of Eros provided a brief glimpse and allowed for a crude estimate of the pole, prime meridian and mass of Eros. More importantly for navigation, orbit determination software was executed in the landmark tracking mode to determine the spacecraft orbit, and a preliminary shape and landmark database has been obtained. The flyby also provided an opportunity to test orbit determination operational procedures that will be used in February of 2001. The initial attitude and spin rate of Eros, as well as estimates of reference landmark locations, are obtained from images of the asteroid. These initial estimates are used as a priori values for a more precise refinement of these parameters by the orbit determination software, which combines optical measurements with Doppler tracking data to obtain solutions for the required parameters. As the spacecraft is maneuvered closer to the asteroid, estimates of spacecraft state, asteroid attitude, solar pressure, landmark locations and Eros physical parameters including mass, moments of inertia and gravity harmonics are determined with increasing precision. The determination of the elements of the inertia tensor of the asteroid is critical to spacecraft orbit determination and prediction of the asteroid attitude. The moments of inertia about the principal axes are also of scientific interest since they provide some insight into the internal mass distribution. Determination of the principal axes moments of inertia will depend on observing free precession in the asteroid's attitude dynamics. Gravity harmonics are in themselves of interest to science. When compared with the asteroid shape, some insight may be obtained into Eros' internal structure. The location of the center of mass derived from the first degree harmonic coefficients gives a direct indication of overall mass distribution. The second degree harmonic coefficients relate to the radial distribution of mass. Higher degree harmonics may be compared with surface features to gain additional insight into mass distribution. In this paper, estimates of Eros physical parameters obtained from the December 23, 1998 flyby will be presented. This new knowledge will be applied to simplification of Eros orbital operations in February of 2001. The resulting revision to the orbit determination strategy will also be discussed.

  5. Precise monitoring of global temperature trends from satellites

    NASA Technical Reports Server (NTRS)

    Spencer, Roy W.; Christy, John R.

    1990-01-01

    Passive microwave radiometry from satellites provides more precise atmospheric temperature information than that obtained from the relatively sparse distribution of thermometers over the earth's surface. Accurate global atmospheric temperature estimates are needed for detection of possible greenhouse warming, evaluation of computer models of climate change, and for understanding important factors in the climate system. Analysis of the first 10 years (1979 to 1988) of satellite measurements of lower atmospheric temperature changes reveals a monthly precision of 0.01 °C, large temperature variability on time scales from weeks to several years, but no obvious trend for the 10-year period. The warmest years, in descending order, were 1987, 1988, 1983, and 1980. The years 1984, 1985, and 1986 were the coolest.

  6. COBALT CoOperative Blending of Autonomous Landing Technology

    NASA Technical Reports Server (NTRS)

    Carson, John M. III; Restrepo, Carolina I.; Robertson, Edward A.; Seubert, Carl R.; Amzajerdian, Farzin

    2016-01-01

    COBALT is a terrestrial test platform for development and maturation of GN&C (Guidance, Navigation and Control) technologies for PL&HA (Precision Landing and Hazard Avoidance). The project is developing a third generation, Langley Navigation Doppler Lidar (NDL) for ultra-precise velocity and range measurements, which will be integrated and tested with the JPL Lander Vision System (LVS) for Terrain Relative Navigation (TRN) position estimates. These technologies together provide navigation that enables controlled precision landing. The COBALT hardware will be integrated in 2017 into the GN&C subsystem of the Xodiac rocket-propulsive Vertical Test Bed (VTB) developed by Masten Space Systems (MSS), and two terrestrial flight campaigns will be conducted: one open-loop (i.e., passive) and one closed-loop (i.e., active).

  7. Group Matching: Is This a Research Technique to Be Avoided?

    ERIC Educational Resources Information Center

    Ross, Donald C.; Klein, Donald F.

    1988-01-01

    The variance of the sample difference and the power of the "F" test for mean differences were studied under group matching on covariates and also under random assignment. Results shed light on systematic assignment procedures advocated to provide more precise estimates of treatment effects than simple random assignment. (TJH)

  8. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities

    PubMed Central

    Helb, Danica A.; Tetteh, Kevin K. A.; Felgner, Philip L.; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R.; Beeson, James G.; Tappero, Jordan; Smith, David L.; Crompton, Peter D.; Rosenthal, Philip J.; Dorsey, Grant; Drakeley, Christopher J.; Greenhouse, Bryan

    2015-01-01

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual’s recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86–0.93), whereas responses to six antigens accurately estimated an individual’s malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs. PMID:26216993

  9. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    PubMed

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.

  10. GLONASS orbit/clock combination in VNIIFTRI

    NASA Astrophysics Data System (ADS)

    Bezmenov, I.; Pasynok, S.

    2015-08-01

    An algorithm and a program for GLONASS satellite orbit/clock combination, based on daily precise orbits submitted by several Analysis Centers, were developed. Some theoretical estimates for the RMS of the combined orbit positions were derived. It was shown that, under the condition that the RMS of the satellite orbits provided by the Analysis Centers over a long time interval are commensurable, the RMS of the combined orbit positions is no greater than the RMS of the satellite positions estimated by any of the Analysis Centers.
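
    The record does not spell out the combination algorithm; a minimal sketch of one standard choice consistent with the stated RMS property is an inverse-variance weighted mean across centers (the function and all inputs below are hypothetical):

      import numpy as np

      def combine_orbits(positions, rms):
          """Inverse-variance weighted combination of one satellite's position
          as submitted by several analysis centers.

          positions : (n_centers, 3) positions (m)
          rms       : (n_centers,) long-term orbit RMS of each center (m)
          """
          var = np.asarray(rms, dtype=float) ** 2
          w = (1.0 / var) / (1.0 / var).sum()               # normalized weights
          combined = w @ np.asarray(positions)
          combined_rms = np.sqrt(1.0 / (1.0 / var).sum())   # never exceeds min(rms)
          return combined, combined_rms

      pos = [[25500e3, 0.0, 1.0], [25500e3, 0.3, 0.8], [25500e3, -0.2, 1.2]]
      print(combine_orbits(pos, rms=[0.05, 0.04, 0.06]))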

  11. Fundamental limits to frequency estimation: a comprehensive microscopic perspective

    NASA Astrophysics Data System (ADS)

    Haase, J. F.; Smirne, A.; Kołodyński, J.; Demkowicz-Dobrzański, R.; Huelga, S. F.

    2018-05-01

    We consider a metrology scenario in which qubit-like probes are used to sense an external field that affects their energy splitting in a linear fashion. Following the frequency estimation approach in which one optimizes the state and sensing time of the probes to maximize the sensitivity, we provide a systematic study of the attainable precision under the impact of noise originating from independent bosonic baths. Specifically, we invoke an explicit microscopic derivation of the probe dynamics using the spin-boson model with weak coupling of arbitrary geometry. We clarify how the secular approximation leads to a phase-covariant (PC) dynamics, where the noise terms commute with the field Hamiltonian, while the inclusion of non-secular contributions breaks the PC. Moreover, unless one restricts to a particular (i.e., Ohmic) spectral density of the bath modes, the noise terms may contain relevant information about the frequency to be estimated. Thus, by considering general evolutions of a single probe, we study regimes in which these two effects have a non-negligible impact on the achievable precision. We then consider baths of Ohmic spectral density yet fully accounting for the lack of PC, in order to characterize the ultimate attainable scaling of precision when N probes are used in parallel. Crucially, we show that beyond the semigroup (Lindbladian) regime the Zeno limit imposing the 1/N^(3/2) scaling of the mean squared error, recently derived assuming PC, generalises to any dynamics of the probes, unless the latter are coupled to the baths in the direction perfectly transversal to the frequency encoding—when a novel scaling of 1/N^(7/4) arises. As our microscopic approach covers all classes of dissipative dynamics, from semigroup to non-Markovian ones (each of them potentially non-phase-covariant), it provides an exhaustive picture, in which all the different asymptotic scalings of precision naturally emerge.
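
    In symbols, the two asymptotic regimes described above read (a LaTeX sketch; the constants c and c' are schematic placeholders):

      \[
        \overline{\Delta^2\tilde{\omega}} \;\sim\; \frac{c}{N^{3/2}}
        \quad \text{(Zeno limit, generic probe-bath coupling)},
        \qquad
        \overline{\Delta^2\tilde{\omega}} \;\sim\; \frac{c'}{N^{7/4}}
        \quad \text{(coupling perfectly transversal to the encoding)}.
      \]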

  12. An Efficient Design Strategy for Logistic Regression Using Outcome- and Covariate-Dependent Pooling of Biospecimens Prior to Assay

    PubMed Central

    Lyles, Robert H.; Mitchell, Emily M.; Weinberg, Clarice R.; Umbach, David M.; Schisterman, Enrique F.

    2016-01-01

    Potential reductions in laboratory assay costs afforded by pooling equal aliquots of biospecimens have long been recognized in disease surveillance and epidemiological research and, more recently, have motivated design and analytic developments in regression settings. For example, Weinberg and Umbach (1999, Biometrics 55, 718–726) provided methods for fitting set-based logistic regression models to case-control data when a continuous exposure variable (e.g., a biomarker) is assayed on pooled specimens. We focus on improving estimation efficiency by utilizing available subject-specific information at the pool allocation stage. We find that a strategy that we call “(y,c)-pooling,” which forms pooling sets of individuals within strata defined jointly by the outcome and other covariates, provides more precise estimation of the risk parameters associated with those covariates than does pooling within strata defined only by the outcome. We review the approach to set-based analysis through offsets developed by Weinberg and Umbach in a recent correction to their original paper. We propose a method for variance estimation under this design and use simulations and a real-data example to illustrate the precision benefits of (y,c)-pooling relative to y-pooling. We also note and illustrate that set-based models permit estimation of covariate interactions with exposure. PMID:26964741

  13. A Model for the Estimation of Hepatic Insulin Extraction After a Meal.

    PubMed

    Piccinini, Francesca; Dalla Man, Chiara; Vella, Adrian; Cobelli, Claudio

    2016-09-01

    Quantitative assessment of hepatic insulin extraction (HE) after an oral glucose challenge, e.g., a meal, is important to understand the regulation of carbohydrate metabolism. The aim of the current study is to develop a model of the system for estimating HE. Nine different models, of increasing complexity, were tested on data from 204 normal subjects, who underwent a mixed meal tolerance test, with frequent measurement of plasma glucose, insulin, and C-peptide concentrations. All these models included a two-compartment model of C-peptide kinetics, an insulin secretion model, a compartmental model of insulin kinetics (with the number of compartments ranging from one to three), and different HE descriptions, depending on plasma glucose and insulin. Model performances were compared on the basis of data fit, precision of parameter estimates, and parsimony criteria. The three-compartment model of insulin kinetics, coupled with HE depending on glucose concentration, showed the best fit and a good ability to precisely estimate the parameters. In addition, the model calculates basal and total indices of HE (HE_b and HE_tot, respectively), and provides an index of HE sensitivity to glucose (S_G^HE). A new physiologically based HE model has been developed, which allows an improved quantitative description of glucose regulation. The use of the new model provides an in-depth description of insulin kinetics, thus enabling a better understanding of a given subject's metabolic state.

  14. The impact of composite AUC estimates on the prediction of systemic exposure in toxicology experiments.

    PubMed

    Sahota, Tarjinder; Danhof, Meindert; Della Pasqua, Oscar

    2015-06-01

    Current toxicity protocols relate measures of systemic exposure (i.e. AUC, Cmax) as obtained by non-compartmental analysis to observed toxicity. A complicating factor in this practice is the potential bias in the estimates defining safe drug exposure. Moreover, it prevents the assessment of variability. The objective of the current investigation was therefore (a) to demonstrate the feasibility of applying nonlinear mixed effects modelling for the evaluation of toxicokinetics and (b) to assess the bias and accuracy in summary measures of systemic exposure for each method. Here, simulation scenarios were evaluated, which mimic toxicology protocols in rodents. To ensure differences in pharmacokinetic properties are accounted for, hypothetical drugs with varying disposition properties were considered. Data analysis was performed using non-compartmental methods and nonlinear mixed effects modelling. Exposure levels were expressed as area under the concentration versus time curve (AUC), peak concentrations (Cmax) and time above a predefined threshold (TAT). Results were then compared with the reference values to assess the bias and precision of parameter estimates. Higher accuracy and precision were observed for model-based estimates (i.e. AUC, Cmax and TAT), irrespective of group or treatment duration, as compared with non-compartmental analysis. Despite the focus of guidelines on establishing safety thresholds for the evaluation of new molecules in humans, current methods neglect uncertainty, lack of precision and bias in parameter estimates. The use of nonlinear mixed effects modelling for the analysis of toxicokinetics provides insight into variability and should be considered for predicting safe exposure in humans.

  15. Improving inferences from fisheries capture-recapture studies through remote detection of PIT tags

    USGS Publications Warehouse

    Hewitt, David A.; Janney, Eric C.; Hayes, Brian S.; Shively, Rip S.

    2010-01-01

    Models for capture-recapture data are commonly used in analyses of the dynamics of fish and wildlife populations, especially for estimating vital parameters such as survival. Capture-recapture methods provide more reliable inferences than other methods commonly used in fisheries studies. However, for rare or elusive fish species, parameter estimation is often hampered by small probabilities of re-encountering tagged fish when encounters are obtained through traditional sampling methods. We present a case study that demonstrates how remote antennas for passive integrated transponder (PIT) tags can increase encounter probabilities and the precision of survival estimates from capture-recapture models. Between 1999 and 2007, trammel nets were used to capture and tag over 8,400 endangered adult Lost River suckers (Deltistes luxatus) during the spawning season in Upper Klamath Lake, Oregon. Despite intensive sampling at relatively discrete spawning areas, encounter probabilities from Cormack-Jolly-Seber models were consistently low (< 0.2) and the precision of apparent annual survival estimates was poor. Beginning in 2005, remote PIT tag antennas were deployed at known spawning locations to increase the probability of re-encountering tagged fish. We compare results based only on physical recaptures with results based on both physical recaptures and remote detections to demonstrate the substantial improvement in estimates of encounter probabilities (approaching 100%) and apparent annual survival provided by the remote detections. The richer encounter histories provided robust inferences about the dynamics of annual survival and have made it possible to explore more realistic models and hypotheses about factors affecting the conservation and recovery of this endangered species. Recent advances in technology related to PIT tags have paved the way for creative implementation of large-scale tagging studies in systems where they were previously considered impracticable.

  16. The Plus or Minus Game--Teaching Estimation, Precision, and Accuracy

    ERIC Educational Resources Information Center

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-01-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in "TPT" (Larry Weinstein's "Fermi…

  17. Examining Temporal Sample Scale and Model Choice with Spatial Capture-Recapture Models in the Common Leopard Panthera pardus.

    PubMed

    Goldberg, Joshua F; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L Scott; Wangchuk, Tshewang R; Lukacs, Paul

    2015-01-01

    Many large carnivores occupy a wide geographic distribution, and face threats from habitat loss and fragmentation, poaching, prey depletion, and human-wildlife conflicts. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means for handling imperfect detectability, while linking population estimates to individual movement patterns to provide more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010-2011 in Royal Manas National Park, Bhutan. We show that sample interval length from daily, weekly, monthly or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the "true" explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space-use influence density. This model estimated a density of 10.0 leopards/100 km2 (95% credibility interval: 6.25-15.93), comparable to contemporary estimates in Asia. These SCR methods provide a guide to monitor and observe the effect of management interventions on leopards and other species of conservation interest.

  18. Examining Temporal Sample Scale and Model Choice with Spatial Capture-Recapture Models in the Common Leopard Panthera pardus

    PubMed Central

    Goldberg, Joshua F.; Tempa, Tshering; Norbu, Nawang; Hebblewhite, Mark; Mills, L. Scott; Wangchuk, Tshewang R.; Lukacs, Paul

    2015-01-01

    Many large carnivores occupy a wide geographic distribution, and face threats from habitat loss and fragmentation, poaching, prey depletion, and human-wildlife conflicts. Conservation requires robust techniques for estimating population densities and trends, but the elusive nature and low densities of many large carnivores make them difficult to detect. Spatial capture-recapture (SCR) models provide a means for handling imperfect detectability, while linking population estimates to individual movement patterns to provide more accurate estimates than standard approaches. Within this framework, we investigate the effect of different sample interval lengths on density estimates, using simulations and a common leopard (Panthera pardus) model system. We apply Bayesian SCR methods to 89 simulated datasets and camera-trapping data from 22 leopards captured 82 times during winter 2010–2011 in Royal Manas National Park, Bhutan. We show that sample interval length from daily, weekly, monthly or quarterly periods did not appreciably affect median abundance or density, but did influence precision. We observed the largest gains in precision when moving from quarterly to shorter intervals. We therefore recommend daily sampling intervals for monitoring rare or elusive species where practicable, but note that monthly or quarterly sample periods can have similar informative value. We further develop a novel application of Bayes factors to select models where multiple ecological factors are integrated into density estimation. Our simulations demonstrate that these methods can help identify the “true” explanatory mechanisms underlying the data. Using this method, we found strong evidence for sex-specific movement distributions in leopards, suggesting that sexual patterns of space-use influence density. This model estimated a density of 10.0 leopards/100 km2 (95% credibility interval: 6.25–15.93), comparable to contemporary estimates in Asia. These SCR methods provide a guide to monitor and observe the effect of management interventions on leopards and other species of conservation interest. PMID:26536231

  19. The reliability, precision and clinically meaningful change of walking assessments in multiple sclerosis.

    PubMed

    Learmonth, Yvonne C; Dlugonski, Deirdre D; Pilutti, Lara A; Sandroff, Brian M; Motl, Robert W

    2013-11-01

    Assessing walking impairment in those with multiple sclerosis (MS) is common; however, little is known about the reliability, precision and clinically important change of walking outcomes. The purpose of this study was to determine the reliability, precision and clinically important change of the Timed 25-Foot Walk (T25FW), Six-Minute Walk (6MW), Multiple Sclerosis Walking Scale-12 (MSWS-12) and accelerometry. Data were collected from 82 persons with MS at two time points, six months apart. Analyses were undertaken for the whole sample and stratified based on disability level and usage of walking aids. Intraclass correlation coefficient (ICC) analyses established reliability; standard error of measurement (SEM) and coefficient of variation (CV) determined precision; and minimal detectable change (MDC) defined clinically important change. All outcome measures were reliable, with precision and MDC varying between measures in the whole sample: T25FW: ICC=0.991; SEM=1 s; CV=6.2%; MDC=2.7 s (36%), 6MW: ICC=0.959; SEM=32 m; CV=6.2%; MDC=88 m (20%), MSWS-12: ICC=0.927; SEM=8; CV=27%; MDC=22 (53%), accelerometry counts/day: ICC=0.883; SEM=28450; CV=17%; MDC=78860 (52%), accelerometry steps/day: ICC=0.907; SEM=726; CV=16%; MDC=2011 (45%). Variation in these estimates was seen based on disability level and walking aid. The reliability of these outcomes is good and falls within acceptable ranges. Precision and clinically important change estimates provide guidelines for interpreting these outcomes in clinical and research settings.
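
    The SEM and MDC figures above follow the usual reliability formulas, SEM = SD·sqrt(1 - ICC) and MDC = 1.96·sqrt(2)·SEM. A quick check, assuming these standard definitions (with which the reported numbers are consistent):

      import math

      def mdc_from_sem(sem, z=1.96):
          """Minimal detectable change at 95% confidence: MDC = z * sqrt(2) * SEM."""
          return z * math.sqrt(2.0) * sem

      print(mdc_from_sem(1.0))    # T25FW: ~2.8 s  (reported 2.7 s)
      print(mdc_from_sem(32.0))   # 6MW:   ~88.7 m (reported 88 m)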

  20. Skylab water balance error analysis

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  1. Spherical Harmonics Functions Modelling of Meteorological Parameters in PWV Estimation

    NASA Astrophysics Data System (ADS)

    Deniz, Ilke; Mekik, Cetin; Gurbuz, Gokhan

    2016-08-01

    The aim of this study is to derive temperature, pressure and humidity observations using spherical harmonics modelling and to interpolate them for the derivation of precipitable water vapor (PWV) at TUSAGA-Active stations in the test area encompassing 38.0°-42.0° northern latitudes and 28.0°-34.0° eastern longitudes of Turkey. In conclusion, the meteorological parameters computed using GNSS observations for the study area have been modelled with a precision of ±1.74 K in temperature, ±0.95 hPa in pressure and ±14.88 % in humidity. Considering studies on the interpolation of meteorological parameters, the precision of the temperature and pressure models provides adequate solutions. This study was funded by the Scientific and Technological Research Council of Turkey (TUBITAK) (The Estimation of Atmospheric Water Vapour with GPS Project, Project No: 112Y350).
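
    As an illustration of the modelling step, a minimal sketch of a least-squares fit of real spherical harmonics to scattered station values. The station coordinates and temperatures below are invented, and the study's actual degree, weighting, and data are not given in the record (note that scipy.special.sph_harm is renamed sph_harm_y in newer SciPy):

      import numpy as np
      from scipy.special import sph_harm

      def design_matrix(lat_deg, lon_deg, lmax):
          """Real spherical-harmonic basis at geographic points
          (theta = longitude as azimuth, phi = colatitude)."""
          theta = np.radians(lon_deg)
          phi = np.radians(90.0 - lat_deg)
          cols = []
          for l in range(lmax + 1):
              for m in range(-l, l + 1):
                  y = sph_harm(abs(m), l, theta, phi)
                  if m > 0:
                      cols.append(np.sqrt(2) * y.real)
                  elif m < 0:
                      cols.append(np.sqrt(2) * y.imag)
                  else:
                      cols.append(y.real)
          return np.column_stack(cols)

      # Fit station temperatures (K), then predict at a new point:
      lat = np.array([38.5, 39.0, 40.2, 41.5, 38.9, 41.0])
      lon = np.array([28.3, 33.1, 30.0, 29.5, 31.8, 33.6])
      temp = np.array([291.2, 288.4, 287.9, 285.1, 289.6, 284.8])
      A = design_matrix(lat, lon, lmax=1)
      coef, *_ = np.linalg.lstsq(A, temp, rcond=None)
      print(design_matrix(np.array([40.0]), np.array([31.0]), 1) @ coef)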

  2. Verification of the exponential model of body temperature decrease after death in pigs.

    PubMed

    Kaliszan, Michal; Hauser, Roman; Kaliszan, Roman; Wiczling, Paweł; Buczyński, Janusz; Penkowski, Michal

    2005-09-01

    The authors have conducted a systematic study in pigs to verify the models of post-mortem body temperature decrease currently employed in forensic medicine. Twenty-four hour automatic temperature recordings were performed in four body sites starting 1.25 h after pig killing in an industrial slaughterhouse under typical environmental conditions (19.5-22.5 degrees C). The animals had been randomly selected under a regular manufacturing process. The temperature decrease time plots drawn starting 75 min after death for the eyeball, the orbit soft tissues, the rectum and muscle tissue were found to fit the single-exponential thermodynamic model originally proposed by H. Rainy in 1868. In view of the actual intersubject variability, the addition of a second exponential term to the model was demonstrated to be statistically insignificant. Therefore, the two-exponential model for death time estimation frequently recommended in the forensic medicine literature, even if theoretically substantiated for individual test cases, provides no advantage as regards the reliability of estimation in an actual case. The improvement of the precision of time of death estimation by the reconstruction of an individual curve on the basis of two dead body temperature measurements taken 1 h apart or taken continuously for a longer time (about 4 h), has also been proved incorrect. It was demonstrated that the reported increase of precision of time of death estimation due to use of a multiexponential model, with individual exponential terms to account for the cooling rate of the specific body sites separately, is artifactual. The results of this study support the use of the eyeball and/or the orbit soft tissues as temperature measuring sites at times shortly after death. A single-exponential model applied to the eyeball cooling has been shown to provide a very precise estimation of the time of death up to approximately 13 h after death. For the period thereafter, a better estimation of the time of death is obtained from temperature data collected from the muscles or the rectum.
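
    The single-exponential model the study validates yields a closed-form time-of-death estimate from one temperature reading. A minimal sketch; T0 and the site-specific cooling constant k below are illustrative placeholders, not values fitted in the study:

      import math

      def time_since_death(T_measured, T_ambient, T0=37.0, k=0.07):
          """Single-exponential (Newtonian) cooling,
          T(t) = Ta + (T0 - Ta) * exp(-k * t), solved for t.
          k (1/h) is site-specific and must come from calibration data
          such as the recordings described above."""
          return math.log((T0 - T_ambient) / (T_measured - T_ambient)) / k

      # e.g., a reading of 25 °C in a 20 °C room (illustrative k):
      print(time_since_death(25.0, 20.0))   # ~17.5 h post-mortem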

  3. An evaluation of portion size estimation aids: precision, ease of use and likelihood of future use.

    PubMed

    Faulkner, Gemma P; Livingstone, M Barbara E; Pourshahidi, L Kirsty; Spence, Michelle; Dean, Moira; O'Brien, Sinead; Gibney, Eileen R; Wallace, Julie Mw; McCaffrey, Tracy A; Kerr, Maeve A

    2016-09-01

    The present study aimed to evaluate the precision, ease of use and likelihood of future use of portion size estimation aids (PSEA). A range of PSEA were used to estimate the serving sizes of a range of commonly eaten foods and rated for ease of use and likelihood of future usage. For each food, participants selected their preferred PSEA from a range of options including: quantities and measures; reference objects; measuring; and indicators on food packets. These PSEA were used to serve out various foods (e.g. liquid, amorphous, and composite dishes). Ease of use and likelihood of future use were noted. The foods were weighed to determine the precision of each PSEA. Males and females aged 18-64 years (n = 120) participated. The quantities and measures were the most precise PSEA (lowest range of weights for estimated portion sizes). However, participants preferred household measures (e.g. 200 ml disposable cup) - deemed easy to use (median rating of 5), likely to use again in future (all scored either 4 or 5 on a scale from 1='not very likely' to 5='very likely to use again') and precise (narrow range of weights for estimated portion sizes). The majority indicated they would most likely use the PSEA preparing a meal (94 %), particularly dinner (86 %) in the home (89 %; all P<0·001) for amorphous grain foods. Household measures may be precise, easy to use and acceptable aids for estimating the appropriate portion size of amorphous grain foods.

  4. NaCl nucleation from brine in seeded simulations: Sources of uncertainty in rate estimates.

    PubMed

    Zimmermann, Nils E R; Vorselaars, Bart; Espinosa, Jorge R; Quigley, David; Smith, William R; Sanz, Eduardo; Vega, Carlos; Peters, Baron

    2018-06-14

    This work reexamines seeded simulation results for NaCl nucleation from a supersaturated aqueous solution at 298.15 K and 1 bar pressure. We present a linear regression approach for analyzing seeded simulation data that provides both nucleation rates and uncertainty estimates. Our results show that rates obtained from seeded simulations rely critically on a precise driving force for the model system. The driving force vs. solute concentration curve need not exactly reproduce that of the real system, but it should accurately describe the thermodynamic properties of the model system. We also show that rate estimates depend strongly on the nucleus size metric. We show that the rate estimates systematically increase as more stringent local order parameters are used to count members of a cluster and provide tentative suggestions for appropriate clustering criteria.

  5. Parallel algorithm for solving Kepler’s equation on Graphics Processing Units: Application to analysis of Doppler exoplanet searches

    NASA Astrophysics Data System (ADS)

    Ford, Eric B.

    2009-05-01

    We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ2) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
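
    The core of such a solver is Newton iteration on Kepler's equation, M = E - e·sin E. A plain CPU sketch (the paper's CUDA mixed-precision implementation is not reproduced here); note the abstract's caveat that computing M from the observation time and period is precisely where double precision matters:

      import numpy as np

      def kepler_E(M, e, tol=1e-12, max_iter=50):
          """Solve Kepler's equation M = E - e*sin(E) for the eccentric
          anomaly E by Newton iteration, vectorized over mean anomalies M."""
          M = np.asarray(M, dtype=np.float64)
          E = M + e * np.sin(M)                 # common starting guess
          for _ in range(max_iter):
              dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
              E -= dE
              if np.max(np.abs(dE)) < tol:      # converged everywhere
                  break
          return E

      print(kepler_E(np.linspace(0.0, 2.0 * np.pi, 5), e=0.3))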

  6. Precise Image-Based Motion Estimation for Autonomous Small Body Exploration

    NASA Technical Reports Server (NTRS)

    Johnson, Andrew E.; Matthies, Larry H.

    1998-01-01

    Space science and solar system exploration are driving NASA to develop an array of small body missions ranging in scope from near body flybys to complete sample return. This paper presents an algorithm for onboard motion estimation that will enable the precision guidance necessary for autonomous small body landing. Our techniques are based on automatic feature tracking between a pair of descent camera images followed by two frame motion estimation and scale recovery using laser altimetry data. The output of our algorithm is an estimate of rigid motion (attitude and position) and motion covariance between frames. This motion estimate can be passed directly to the spacecraft guidance and control system to enable rapid execution of safe and precise trajectories.

  7. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    PubMed

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. First, we compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with a second differently generated set of spatial point populations, ν₈ and ν(W) again being the best performers in the longer-range autocorrelated populations. However, no systematic variance estimators tested were free from bias. On balance, systematic designs bring more narrow confidence intervals in clustered populations, while random designs permit unbiased estimates of (often wider) confidence intervals. The search continues for better estimators of sampling variance for the systematic survey mean.
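
    The headline precision gap is easy to reproduce in miniature. A sketch under simplified assumptions (Gaussian habitat patches on a 100 × 100 grid, with grid cells standing in for transects; the paper's Matérn machinery and variance estimators are not reproduced):

      import numpy as np

      rng = np.random.default_rng(7)

      # Clustered population: a few dense habitat patches on a 100x100 grid.
      field = np.zeros((100, 100))
      xx, yy = np.meshgrid(np.arange(100), np.arange(100), indexing="ij")
      for _ in range(12):
          cx, cy = rng.integers(0, 100, size=2)
          field += 30 * np.exp(-((xx - cx)**2 + (yy - cy)**2) / (2 * 6.0**2))
      counts = rng.poisson(field)                        # organism counts per cell

      def random_mean():
          idx = rng.integers(0, 100, size=(100, 2))      # 100 random cells
          return counts[idx[:, 0], idx[:, 1]].mean()

      def systematic_mean():
          off = rng.integers(0, 10, size=2)              # one random start
          return counts[off[0]::10, off[1]::10].mean()   # aligned 10x10 grid

      reps = 5000
      print(np.var([random_mean() for _ in range(reps)]),
            np.var([systematic_mean() for _ in range(reps)]))

    The systematic grid spreads samples evenly across the patches, so its mean has the visibly smaller variance, in line with the result above.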

  8. Aliasing, Ambiguities, and Interpolation in Wideband Direction-of-Arrival Estimation Using Antenna Arrays

    ERIC Educational Resources Information Center

    Ho, Chung-Cheng

    2016-01-01

    For decades, direction finding has been an important research topic in many applications such as radar, location services, and medical diagnosis for treatment. For those kinds of applications, the precision of location estimation plays an important role, so a higher-precision location estimation method is always desirable. Although…

  9. Unscented predictive variable structure filter for satellite attitude estimation with model errors when using low precision sensors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Li, Hengnian

    2016-10-01

    For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is proposed based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noises; therefore, it has the advantage of dealing with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low precision sensors compared with the traditional unscented Kalman filter (UKF).

  10. Comparing Methods of Assessing Dog Rabies Vaccination Coverage in Rural and Urban Communities in Tanzania

    PubMed Central

    Sambo, Maganga; Johnson, Paul C. D.; Hotopp, Karen; Changalucha, Joel; Cleaveland, Sarah; Kazwala, Rudovick; Lembo, Tiziana; Lugelo, Ahmed; Lushasi, Kennedy; Maziku, Mathew; Mbunda, Eberhard; Mtema, Zacharia; Sikana, Lwitiko; Townsend, Sunny E.; Hampson, Katie

    2017-01-01

    Rabies can be eliminated by achieving comprehensive coverage of 70% of domestic dogs during annual mass vaccination campaigns. Estimates of vaccination coverage are, therefore, required to evaluate and manage mass dog vaccination programs; however, there is no specific guidance for the most accurate and efficient methods for estimating coverage in different settings. Here, we compare post-vaccination transects, school-based surveys, and household surveys across 28 districts in southeast Tanzania and Pemba island covering rural, urban, coastal and inland settings, and a range of different livelihoods and religious backgrounds. These approaches were explored in detail in a single district in northwest Tanzania (Serengeti), where their performance was compared with a complete dog population census that also recorded dog vaccination status. Post-vaccination transects involved counting marked (vaccinated) and unmarked (unvaccinated) dogs immediately after campaigns in 2,155 villages (24,721 dogs counted). School-based surveys were administered to 8,587 primary school pupils each representing a unique household, in 119 randomly selected schools approximately 2 months after campaigns. Household surveys were conducted in 160 randomly selected villages (4,488 households) in July/August 2011. Costs to implement these coverage assessments were $12.01, $66.12, and $155.70 per village for post-vaccination transects, school-based, and household surveys, respectively. Simulations were performed to assess the effect of sampling on the precision of coverage estimation. The sampling effort required to obtain reasonably precise estimates of coverage from household surveys is generally very high and probably prohibitively expensive for routine monitoring across large areas, particularly in communities with high human to dog ratios. School-based surveys partially overcame sampling constraints, however, were also costly to obtain reasonably precise estimates of coverage. Post-vaccination transects provided precise and timely estimates of community-level coverage that could be used to troubleshoot the performance of campaigns across large areas. However, transects typically overestimated coverage by around 10%, which therefore needs consideration when evaluating the impacts of campaigns. We discuss the advantages and disadvantages of these different methods and make recommendations for how vaccination campaigns can be better monitored and managed at different stages of rabies control and elimination programs. PMID:28352630
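
    Transect-based coverage estimation reduces to a marked/unmarked proportion. A minimal sketch with invented counts (real campaigns would additionally contend with the roughly 10% overestimation noted above):

      import math

      def coverage_from_transect(marked, unmarked, z=1.96):
          """Vaccination coverage from a post-vaccination transect:
          proportion of marked (vaccinated) dogs, with a normal-approximation CI."""
          n = marked + unmarked
          p = marked / n
          half = z * math.sqrt(p * (1.0 - p) / n)
          return p, (max(0.0, p - half), min(1.0, p + half))

      print(coverage_from_transect(marked=180, unmarked=70))   # ~0.72 (0.66, 0.78)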

  11. Crowdsourcing as an Analytical Method: Metrology of Smartphone Measurements in Heritage Science.

    PubMed

    Brigham, Rosie; Grau-Bové, Josep; Rudnicka, Anna; Cassar, May; Strlic, Matija

    2018-06-18

    This research assesses the precision, repeatability, and accuracy of crowdsourced scientific measurements, and whether their quality is sufficient to provide usable results. Measurements of colour and area were chosen because of the possibility of producing them with smartphone cameras. The quality of the measurements was estimated experimentally by comparing data contributed by anonymous participants in heritage sites with reference measurements of known accuracy and precision. Participants performed the measurements by taking photographs with their smartphones, from which colour and dimensional data could be extracted. The results indicate that smartphone measurements provided by citizen scientists can be used to measure changes in colour, but that the performance is strongly dependent on the measured colour coordinate. The same method can be used to measure areas when the difference in colour with the neighbouring areas is large enough. These results render the method useful in some heritage science contexts, but higher precision would be desirable.
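
    A common way to score such colour readings against a reference (assumed here; the record does not name the metric used) is the CIE76 colour difference, the Euclidean distance in CIELAB space:

      import numpy as np

      def delta_e_cie76(lab_ref, lab_obs):
          """CIE76 colour difference: Euclidean distance in CIELAB space."""
          return float(np.linalg.norm(np.asarray(lab_ref) - np.asarray(lab_obs)))

      # Reference target vs. a crowdsourced smartphone estimate (invented values):
      print(delta_e_cie76([52.0, 42.5, 18.0], [53.1, 40.9, 19.2]))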

  12. A Long Distance Laser Altimeter for Terrain Relative Navigation and Spacecraft Landing

    NASA Technical Reports Server (NTRS)

    Pierrottet, Diego F.; Amzajerdian, Farzin; Barnes, Bruce W.

    2014-01-01

    A high precision laser altimeter was developed under the Autonomous Landing and Hazard Avoidance (ALHAT) project at NASA Langley Research Center. The laser altimeter provides slant-path range measurements from operational ranges exceeding 30 km that will be used to support surface-relative state estimation and navigation during planetary descent and precision landing. The altimeter uses an advanced time-of-arrival receiver, which produces multiple signal-return range measurements from tens of kilometers with 5 cm precision. The transmitter is eye-safe, simplifying operations and testing on earth. The prototype is fully autonomous, and able to withstand the thermal and mechanical stresses experienced during test flights conducted aboard helicopters, fixed-wing aircraft, and Morpheus, a terrestrial rocket-powered vehicle developed by NASA Johnson Space Center. This paper provides an overview of the sensor and presents results obtained during recent field experiments including a helicopter flight test conducted in December 2012 and Morpheus flight tests conducted during March of 2014.

  13. Accounting for Limited Detection Efficiency and Localization Precision in Cluster Analysis in Single Molecule Localization Microscopy

    PubMed Central

    Shivanandan, Arun; Unnikrishnan, Jayakrishnan; Radenovic, Aleksandra

    2015-01-01

    Single Molecule Localization Microscopy techniques like PhotoActivated Localization Microscopy, with their sub-diffraction limit spatial resolution, have been popularly used to characterize the spatial organization of membrane proteins, by means of quantitative cluster analysis. However, such quantitative studies remain challenged by the techniques' inherent sources of error, such as a limited detection efficiency of less than 60%, due to incomplete photo-conversion, and a limited localization precision in the range of 10–30 nm, varying across the detected molecules, mainly depending on the number of photons collected from each molecule. We provide analytical methods to estimate the effect of these errors in cluster analysis and to correct for them. These methods, based on the Ripley's L(r) − r or Pair Correlation Function popularly used by the community, can facilitate potentially breakthrough results in quantitative biology by providing a more accurate and precise quantification of protein spatial organization. PMID:25794150
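
    For orientation, the L(r) − r statistic named above can be computed naively in a few lines. The sketch below assumes a square field of view, ignores edge correction, and omits the paper's corrections for detection efficiency and localization precision; the simulated cluster layout is purely illustrative.

```python
import numpy as np
from scipy.spatial.distance import pdist

def ripley_l_minus_r(points, radii, area):
    """Naive Ripley's L(r) - r for 2D points (no edge correction).

    Under complete spatial randomness L(r) - r is near 0; clustering
    pushes it above 0. Corrections for detection efficiency and
    per-molecule localization precision are deliberately omitted."""
    n = len(points)
    d = pdist(points)                         # condensed pairwise distances
    out = []
    for r in radii:
        pairs = 2 * np.count_nonzero(d <= r)  # ordered pairs within r
        k = area * pairs / (n * (n - 1))      # Ripley's K estimate
        out.append(np.sqrt(k / np.pi) - r)    # L(r) - r
    return np.array(out)

# Illustrative clustered localizations in a 1000 x 1000 nm region
rng = np.random.default_rng(0)
centers = rng.uniform(0, 1000, size=(20, 2))
pts = np.vstack([c + rng.normal(0, 15, size=(10, 2)) for c in centers])
print(ripley_l_minus_r(pts, radii=[20, 50, 100], area=1000.0 ** 2))
```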

  14. NASA's global differential GPS system and the TDRSS augmentation service for satellites

    NASA Technical Reports Server (NTRS)

    Bar-Sever, Yoaz; Young, Larry; Stocklin, Frank; Rush, John

    2004-01-01

    NASA is planning to launch a new service for Earth satellites providing them with precise GPS differential corrections and other ancillary information enabling decimeter level orbit determination accuracy, and nanosecond time-transfer accuracy, onboard, in real-time. The TDRSS Augmentation Service for Satellites (TASS) will broadcast its message on the S-band multiple access channel of NASA's Tracking and Data Relay Satellite System (TDRSS). The satellite's phased-array antenna has been configured to provide a wide beam, extending coverage up to 1000 km altitude over the poles. Global coverage will be ensured with broadcast from three or more TDRSS satellites. The GPS differential corrections are provided by the NASA Global Differential GPS (GDGPS) System, developed and operated by NASA's Jet Propulsion Laboratory. The GDGPS System employs a global ground network of more than 70 GPS receivers to monitor the GPS constellation in real time. The system provides real-time estimates of the GPS satellite states, as well as many other real-time products such as differential corrections, global ionospheric maps, and integrity monitoring. The unique multiply redundant architecture of the GDGPS System ensures very high reliability, with 99.999% demonstrated since the inception of the system in early 2000. The estimated real-time GPS orbit and clock states provided by the GDGPS System are accurate to better than 20 cm 3D RMS, and have been demonstrated to support sub-decimeter real-time positioning and orbit determination for a variety of terrestrial, airborne, and spaceborne applications. In addition to the GPS differential corrections, TASS will provide real-time Earth orientation and solar flux information that enables precise onboard knowledge of the Earth-fixed position of the spacecraft, and precise orbit prediction and planning capabilities. TASS will also provide 5-second alarms for GPS integrity failures based on the unique GPS integrity monitoring service of the GDGPS System.

  15. Item Response Theory Modeling of the Philadelphia Naming Test

    ERIC Educational Resources Information Center

    Fergadiotis, Gerasimos; Kellough, Stacey; Hula, William D.

    2015-01-01

    Purpose: In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating…

  16. Applying Bootstrap Resampling to Compute Confidence Intervals for Various Statistics with R

    ERIC Educational Resources Information Center

    Dogan, C. Deha

    2017-01-01

    Background: Most of the studies in academic journals use p values to represent statistical significance. However, this is not a good indicator of practical significance. Although confidence intervals provide information about the precision of point estimation, they are, unfortunately, rarely used. The infrequent use of confidence intervals might…

  17. Reporting Confidence Intervals and Effect Sizes: Collecting the Evidence

    ERIC Educational Resources Information Center

    Zientek, Linda Reichwein; Ozel, Z. Ebrar Yetkiner; Ozel, Serkan; Allen, Jeff

    2012-01-01

    Confidence intervals (CIs) and effect sizes are essential to encourage meta-analytic thinking and to accumulate research findings. CIs provide a range of plausible values for population parameters with a degree of confidence that the parameter is in that particular interval. CIs also give information about how precise the estimates are. Comparison…

  18. Estimating Standardized Linear Contrasts of Means with Desired Precision

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…

  19. Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model

    ERIC Educational Resources Information Center

    Kim, Kyung Yong; Lee, Won-Chan

    2018-01-01

    Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…

  20. Examining Exposure Assessment in Shift Work Research: A Study on Depression Among Nurses.

    PubMed

    Hall, Amy L; Franche, Renée-Louise; Koehoorn, Mieke

    2018-02-13

    Coarse exposure assessment and assignment is a common issue facing epidemiological studies of shift work. Such measures ignore a number of exposure characteristics that may affect health, increasing the likelihood of biased effect estimates and masked exposure-response relationships. To demonstrate the impacts of exposure assessment precision in shift work research, this study investigated relationships between work schedule and depression in a large survey of Canadian nurses. The Canadian 2005 National Survey of the Work and Health of Nurses provided the analytic sample (n = 11,450). Relationships between work schedule and depression were assessed using logistic regression models with high-, moderate-, and low-precision exposure groupings. The high-precision grouping described shift timing and rotation frequency, the moderate-precision grouping described shift timing, and the low-precision grouping described the presence/absence of shift work. Final model estimates were adjusted for the potential confounding effects of demographic and work variables, and bootstrap weights were used to generate sampling variances that accounted for the survey sample design. The high-precision exposure grouping model showed the strongest relationships between work schedule and depression, with increased odds ratios [ORs] for rapidly rotating (OR = 1.51, 95% confidence interval [CI] = 0.91-2.51) and undefined rotating (OR = 1.67, 95% CI = 0.92-3.02) shift workers, and a decreased OR for depression in slow rotating (OR = 0.79, 95% CI = 0.57-1.08) shift workers. For the low- and moderate-precision exposure grouping models, weak relationships were observed for all work schedule categories (OR range 0.95 to 0.99). Findings from this study support the need to consider and collect the data required for precise and conceptually driven exposure assessment and assignment in future studies of shift work and health. Further research into the effects of shift rotation frequency on depression is also recommended. © The Author(s) 2018. Published by Oxford University Press on behalf of the British Occupational Hygiene Society.

  1. Reliability of Pressure Ulcer Rates: How Precisely Can We Differentiate Among Hospital Units, and Does the Standard Signal-Noise Reliability Measure Reflect This Precision?

    PubMed

    Staggs, Vincent S; Cramer, Emily

    2016-08-01

    Hospital performance reports often include rankings of unit pressure ulcer rates. Differentiating among units on the basis of quality requires reliable measurement. Our objectives were to describe and apply methods for assessing reliability of hospital-acquired pressure ulcer rates and evaluate a standard signal-noise reliability measure as an indicator of precision of differentiation among units. Quarterly pressure ulcer data from 8,199 critical care, step-down, medical, surgical, and medical-surgical nursing units from 1,299 US hospitals were analyzed. Using beta-binomial models, we estimated between-unit variability (signal) and within-unit variability (noise) in annual unit pressure ulcer rates. Signal-noise reliability was computed as the ratio of between-unit variability to the total of between- and within-unit variability. To assess precision of differentiation among units based on ranked pressure ulcer rates, we simulated data to estimate the probabilities of a unit's observed pressure ulcer rate rank in a given sample falling within five and ten percentiles of its true rank, and the probabilities of units with ulcer rates in the highest quartile and highest decile being identified as such. We assessed the signal-noise measure as an indicator of differentiation precision by computing its correlations with these probabilities. Pressure ulcer rates based on a single year of quarterly or weekly prevalence surveys were too susceptible to noise to allow for precise differentiation among units, and signal-noise reliability was a poor indicator of precision of differentiation. To ensure precise differentiation on the basis of true differences, alternative methods of assessing reliability should be applied to measures purported to differentiate among providers or units based on quality. © 2016 The Authors. Research in Nursing & Health published by Wiley Periodicals, Inc.
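
    The signal-noise measure described above is the ratio of between-unit variability to total (between- plus within-unit) variability. The sketch below is a method-of-moments stand-in for the paper's beta-binomial model, with invented unit rates and denominators.

```python
import numpy as np

def signal_noise_reliability(rates, denominators):
    """Signal-noise reliability of unit-level event rates:
    reliability = between-unit variance / (between + within variance).

    Within-unit (sampling) variance for a binomial rate p with n patients
    is p(1-p)/n; between-unit variance is estimated by subtracting the
    mean within-unit variance from the total observed variance. This is
    a method-of-moments sketch, not the paper's beta-binomial model."""
    rates = np.asarray(rates, float)
    n = np.asarray(denominators, float)
    within = np.mean(rates * (1 - rates) / n)
    between = max(np.var(rates, ddof=1) - within, 0.0)
    return between / (between + within)

# Illustrative annual pressure ulcer rates from 6 hypothetical units
rates = [0.02, 0.05, 0.03, 0.08, 0.04, 0.06]
patients = [300, 250, 400, 150, 350, 200]
print(f"signal-noise reliability = "
      f"{signal_noise_reliability(rates, patients):.2f}")
```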

  2. The effectiveness of visitation proxy variables in improving recreation use estimates for the USDA Forest Service

    Treesearch

    Donald B.K. English; Susan M. Kocis; J. Ross Arnold; Stanley J. Zarnoch; Larry Warren

    2003-01-01

    To estimate recreation visitation at the National Forest level in the US, annual counts of several types of visitation proxy measures were used. The intent was to improve the overall precision of the visitation estimate by employing the proxy counts. The precision of visitation estimates at sites that had proxy information versus those that did not is examined....

  3. Using satellite imagery as ancillary data for increasing the precision of estimates for the Forest Inventory and Analysis program of the USDA Forest Service

    Treesearch

    Ronald E. McRoberts; Geoffrey R. Holden; Mark D. Nelson; Greg C. Liknes; Dale D. Gormanson

    2006-01-01

    Forest inventory programs report estimates of forest variables for areas of interest ranging in size from municipalities, to counties, to states or provinces. Because of numerous factors, sample sizes are often insufficient to estimate attributes as precisely as is desired, unless the estimation process is enhanced using ancillary data. Classified satellite imagery has...

  4. A method for estimating radioactive cesium concentrations in cattle blood using urine samples.

    PubMed

    Sato, Itaru; Yamagishi, Ryoma; Sasaki, Jun; Satoh, Hiroshi; Miura, Kiyoshi; Kikuchi, Kaoru; Otani, Kumiko; Okada, Keiji

    2017-12-01

    In the region contaminated by the Fukushima nuclear accident, radioactive contamination of live cattle should be checked before slaughter. In this study, we establish a precise method for estimating radioactive cesium concentrations in cattle blood using urine samples. Blood and urine samples were collected from a total of 71 cattle on two farms in the 'difficult-to-return zone'. Urine 137Cs, specific gravity, electrical conductivity, pH, sodium, potassium, calcium, and creatinine were measured and various estimation methods for blood 137Cs were tested. The average error rate of the estimation was 54.2% without correction. Correcting for urine creatinine, specific gravity, electrical conductivity, or potassium improved the precision of the estimation. Correcting for specific gravity using the following formula gave the most precise estimate (average error rate = 16.9%): [blood 137Cs] = [urinary 137Cs]/([specific gravity] - 1)/329. Urine samples are faster to measure than blood samples because urine can be obtained in larger quantities and has a higher 137Cs concentration than blood. These advantages of urine, and the estimation precision demonstrated in our study, indicate that estimation of blood 137Cs using urine samples is a practical means of monitoring radioactive contamination in live cattle. © 2017 Japanese Society of Animal Science.
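
    The best-performing correction is simple enough to encode directly. The sketch below implements the published specific-gravity formula; the input values are illustrative, not measurements from the study.

```python
def blood_cs137_from_urine(urine_cs137_bq_per_kg, specific_gravity):
    """Estimate blood 137Cs from urine 137Cs using the paper's formula:

        blood 137Cs = urine 137Cs / (specific gravity - 1) / 329

    The specific-gravity correction compensates for urine dilution;
    the constant 329 is the empirical scaling reported by the authors."""
    return urine_cs137_bq_per_kg / (specific_gravity - 1.0) / 329.0

# Example with illustrative (not measured) values
print(blood_cs137_from_urine(urine_cs137_bq_per_kg=150.0,
                             specific_gravity=1.030))
# -> roughly 15.2 Bq/kg
```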

  5. Inter-examination Precision of Magnitude-based Magnetic Resonance Imaging for Estimation of Segmental Hepatic Proton Density Fat Fraction (PDFF) in Obese Subjects

    PubMed Central

    Negrete, Lindsey M.; Middleton, Michael S.; Clark, Lisa; Wolfson, Tanya; Gamst, Anthony C.; Lam, Jessica; Changchien, Chris; Deyoung-Dominguez, Ivan M.; Hamilton, Gavin; Loomba, Rohit; Schwimmer, Jeffrey; Sirlin, Claude B.

    2013-01-01

    Purpose To prospectively describe magnitude-based multi-echo gradient-echo hepatic proton density fat fraction (PDFF) inter-examination precision at 3T. Materials and Methods In this prospective, IRB approved, HIPAA compliant study, written informed consent was obtained from 29 subjects (body mass index > 30 kg/m²). Three 3T magnetic resonance imaging (MRI) examinations were obtained over 75–90 minutes. Segmental, lobar, and whole liver PDFF were estimated (using three, four, five, or six echoes) by magnitude-based multi-echo MRI in co-localized regions of interest (ROIs). For each estimate (using three, four, five, or six echoes), at each anatomic level (segmental, lobar, whole liver), three inter-examination precision metrics were computed: intra-class correlation coefficient (ICC), standard deviation (SD), and range. Results Magnitude-based PDFF estimates using each reconstruction method showed excellent inter-examination precision for each segment (ICC ≥ 0.992; SD ≤ 0.66%; range ≤ 1.24%), lobe (ICC ≥ 0.998; SD ≤ 0.34%; range ≤ 0.64%), and the whole liver (ICC = 0.999; SD ≤ 0.24%; range ≤ 0.45%). Inter-examination precision was unaffected by whether PDFF was estimated using three, four, five, or six echoes. Conclusion Magnitude-based PDFF estimation shows high inter-examination precision at segmental, lobar, and whole liver anatomic levels, supporting its use in clinical care or clinical trials. The results of this study suggest that a longitudinal hepatic PDFF change greater than 1.6% is likely to represent signal rather than noise. PMID:24136736

  6. Peak Measurement for Vancomycin AUC Estimation in Obese Adults Improves Precision and Lowers Bias.

    PubMed

    Pai, Manjunath P; Hong, Joseph; Krop, Lynne

    2017-04-01

    Vancomycin area under the curve (AUC) estimates may be skewed in obese adults due to weight-dependent pharmacokinetic parameters. We demonstrate that peak and trough measurements reduce bias and improve the precision of vancomycin AUC estimates in obese adults (n = 75) and validate this in an independent cohort (n = 31). The precision and mean percent bias of Bayesian vancomycin AUC estimates are comparable between covariate-dependent (R² = 0.774, 3.55%) and covariate-independent (R² = 0.804, 3.28%) models when peaks and troughs are measured but not when measurements are restricted to troughs only (R² = 0.557, 15.5%). Copyright © 2017 American Society for Microbiology.

  7. Sample Size and Item Parameter Estimation Precision When Utilizing the One-Parameter "Rasch" Model

    ERIC Educational Resources Information Center

    Custer, Michael

    2015-01-01

    This study examines the relationship between sample size and item parameter estimation precision when utilizing the one-parameter model. Item parameter estimates are examined relative to "true" values by evaluating the decline in root mean squared deviation (RMSD) and the number of outliers as sample size increases. This occurs across…

  8. Integrating acoustic telemetry into mark-recapture models to improve the precision of apparent survival and abundance estimates.

    PubMed

    Dudgeon, Christine L; Pollock, Kenneth H; Braccini, J Matias; Semmens, Jayson M; Barnett, Adam

    2015-07-01

    Capture-mark-recapture models are useful tools for estimating demographic parameters but often result in low precision when recapture rates are low. Low recapture rates are typical in many study systems including fishing-based studies. Incorporating auxiliary data into the models can improve precision and in some cases enable parameter estimation. Here, we present a novel application of acoustic telemetry for the estimation of apparent survival and abundance within capture-mark-recapture analysis using open population models. Our case study is based on simultaneously collecting longline fishing and acoustic telemetry data for a large mobile apex predator, the broadnose sevengill shark (Notorhynchus cepedianus), at a coastal site in Tasmania, Australia. Cormack-Jolly-Seber models showed that longline data alone had very low recapture rates while acoustic telemetry data for the same time period resulted in at least tenfold higher recapture rates. The apparent survival estimates were similar for the two datasets but the acoustic telemetry data showed much greater precision and enabled apparent survival parameter estimation for one dataset, which was inestimable using fishing data alone. Combined acoustic telemetry and longline data were incorporated into Jolly-Seber models using a Monte Carlo simulation approach. Abundance estimates were comparable to those with longline data only; however, the inclusion of acoustic telemetry data increased precision in the estimates. We conclude that acoustic telemetry is a useful tool for incorporating in capture-mark-recapture studies in the marine environment. Future studies should consider the application of acoustic telemetry within this framework when setting up the study design and sampling program.

  9. Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel.

    PubMed

    Dansirikul, Chantaratsamon; Choi, Malcolm; Duffull, Stephen B

    2005-06-01

    This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment-model-dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel spreadsheet was implemented with the use of Solver and Visual Basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained to a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of NONMEM estimation.
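
    The spreadsheet's exact Solver equations are not given in the abstract. As a hedged stand-in, the sketch below applies the standard one-compartment (IV bolus) relations that connect non-compartmental variables (AUC, terminal half-life) to compartmental parameters; the two-compartment case needs numerical fitting.

```python
import math

def one_compartment_from_nca(dose, auc, t_half):
    """Convert non-compartmental variables to one-compartment
    (IV bolus) parameters using the standard relations:

        CL = dose / AUC
        k  = ln(2) / t_half
        V  = CL / k

    A minimal stand-in for the spreadsheet 'back analysis'; the
    two-compartment conversion requires Solver-style numerical fitting."""
    cl = dose / auc
    k = math.log(2) / t_half
    return {"CL": cl, "k": k, "V": cl / k}

print(one_compartment_from_nca(dose=500.0, auc=50.0, t_half=6.0))
# CL = 10.0 (volume/time), k ~ 0.116 1/h, V ~ 86.6 (volume)
```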

  10. Channel Estimation and Pilot Design for Massive MIMO Systems with Block-Structured Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Lv, ZhuoKai; Yang, Tiejun; Zhu, Chunhua

    2018-03-01

    By exploiting compressive sensing (CS), channel estimation methods can reduce the pilot overhead and improve spectral efficiency. This correspondence explores channel estimation and pilot design with the help of block-structured CS in massive MIMO systems. A pilot design scheme based on stochastic search is proposed to minimize the block coherence of the aggregate system matrix. Moreover, a block sparsity adaptive matching pursuit (BSAMP) algorithm under the common sparsity model is proposed so that the channel can be estimated precisely. Simulation results show that the proposed pilot design and the BSAMP algorithm provide better channel estimation than existing methods.
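
    The BSAMP algorithm itself is not specified in the abstract. The sketch below implements plain block orthogonal matching pursuit, a simpler relative that illustrates the block-sparse recovery idea; BSAMP additionally adapts the sparsity level, which is omitted here, and all dimensions are illustrative.

```python
import numpy as np

def block_omp(A, y, block_size, n_blocks):
    """Greedy block-sparse recovery (block orthogonal matching pursuit):
    repeatedly pick the block of columns most correlated with the
    residual, then re-fit all picked blocks by least squares."""
    m, n = A.shape
    starts = list(range(0, n, block_size))
    chosen, r, x = [], y.copy(), np.zeros(n)
    for _ in range(n_blocks):
        scores = [-np.inf if s in chosen
                  else np.linalg.norm(A[:, s:s + block_size].T @ r)
                  for s in starts]
        chosen.append(starts[int(np.argmax(scores))])
        cols = np.concatenate([np.arange(s, s + block_size) for s in chosen])
        coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        x = np.zeros(n)
        x[cols] = coef
        r = y - A @ x                     # update the residual
    return x

# Illustrative: recover a channel with 2 active blocks of length 4
rng = np.random.default_rng(5)
A = rng.normal(size=(40, 64)) / np.sqrt(40)
h = np.zeros(64)
h[8:12] = rng.normal(size=4)
h[40:44] = rng.normal(size=4)
y = A @ h + rng.normal(0, 0.01, size=40)
h_hat = block_omp(A, y, block_size=4, n_blocks=2)
print(f"recovery error = {np.linalg.norm(h_hat - h):.3f}")
```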

  11. Glacier and Ice Shelves Studies Using Satellite SAR Interferometry

    NASA Technical Reports Server (NTRS)

    Rignot, Eric

    1999-01-01

    Satellite radar interferometry is a powerful technique to measure the surface velocity and topography of glacier ice. On ice shelves, a quadruple difference technique separates tidal motion from the steady creep flow deformation of ice. The results provide a wealth of information about glacier grounding lines, mass fluxes, stability, elastic properties of ice, and tidal regime. The grounding line, which is where the glacier detaches from its bed and becomes afloat, is detected with a precision of a few tens of meters. Combining this information with satellite radar altimetry makes it possible to measure glacier discharge into the ocean and state of mass balance with greater precision than ever before, and in turn provide a significant revision of past estimates of mass balance of the Greenland and Antarctic Ice Sheets. Analysis of creep rates on floating ice permits an estimation of basal melting at the ice shelf underside. The results reveal that the action of ocean water in sub-ice-shelf cavities has been largely underestimated by oceanographic models and is the dominant mode of mass release to the ocean from an ice shelf. Precise mapping of grounding line positions also permits the detection of grounding line migration, which is a fine indicator of glacier change, independent of our knowledge of snow accumulation and ice melting. This technique has been successfully used to detect the rapid retreat of Pine Island Glacier, the largest ice stream in West Antarctica. Finally, tidal motion of ice shelves measured interferometrically provides a modern, synoptic view of the physical processes which govern the formation of tabular icebergs in the Antarctic.

  12. Artificial Intelligence Procedures for Tree Taper Estimation within a Complex Vegetation Mosaic in Brazil

    PubMed Central

    Nunes, Matheus Henrique

    2016-01-01

    Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and subsequently merchantable volume. Artificial intelligence approaches can be useful techniques in minimizing estimation errors within complex variations of vegetation. We evaluated the performance of Random Forest® regression tree and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and accurate estimations for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects. PMID:27187074

  13. Artificial Intelligence Procedures for Tree Taper Estimation within a Complex Vegetation Mosaic in Brazil.

    PubMed

    Nunes, Matheus Henrique; Görgens, Eric Bastos

    2016-01-01

    Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and subsequently merchantable volume. Artificial intelligence approaches can be useful techniques in minimizing estimation errors within complex variations of vegetation. We evaluated the performance of Random Forest® regression tree and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and accurate estimations for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects.

  14. Bottom stress measurements on the inner shelf

    USGS Publications Warehouse

    Sherwood, Christopher R.; Scully, Malcolm; Trowbridge, John

    2015-01-01

    Bottom stress shapes the mean circulation patterns, controls sediment transport, and influences benthic habitat in the coastal ocean. Accurate and precise measurements of bottom stress have proved elusive, in part because of the difficulty in separating the turbulent eddies that transport momentum from inviscid wave-induced motions. Direct covariance measurements from a pair of acoustic Doppler velocimeters have proved capable of providing robust estimates, so we designed a mobile platform, dubbed the NIMBBLE, for these measurements, and deployed two of them and two more conventional quadpods at seven sites on the inner shelf over a period of seven months. The resulting covariance estimates of stress and bottom roughness were lower than log-fit estimates, especially during calmer periods. Analyses of these data suggest the NIMBBLEs may provide an accurate and practical method for measuring bottom stress.

  15. Horizon Based Orientation Estimation for Planetary Surface Navigation

    NASA Technical Reports Server (NTRS)

    Bouyssounouse, X.; Nefian, A. V.; Deans, M.; Thomas, A.; Edwards, L.; Fong, T.

    2016-01-01

    Planetary rovers navigate in extreme environments for which a Global Positioning System (GPS) is unavailable, maps are restricted to the relatively low resolution provided by orbital imagery, and compass information is often lacking due to weak or nonexistent magnetic fields. However, accurate rover localization is particularly important for mission success: reaching science targets, avoiding negative obstacles visible only in orbital maps, and maintaining good communication connections with the ground. This paper describes a horizon-based solution for precise rover orientation estimation. The horizon detected in imagery provided by the onboard navigation cameras is matched with the horizon rendered over the existing terrain model. The set of rotation parameters (roll, pitch, yaw) that minimizes the cost function between the two horizon curves corresponds to the rover's estimated pose.

  16. Correlation Techniques as Applied to Pose Estimation in Space Station Docking

    NASA Technical Reports Server (NTRS)

    Rollins, J. Michael; Juday, Richard D.; Monroe, Stanley E., Jr.

    2002-01-01

    The telerobotic assembly of space-station components has become the method of choice for the International Space Station (ISS) because it offers a safe alternative to the more hazardous option of space walks. The disadvantage of telerobotic assembly is that it does not provide for direct arbitrary views of mating interfaces for the teleoperator. Unless cameras are present very close to the interface positions, such views must be generated graphically, based on calculated pose relationships derived from images. To assist in this photogrammetric pose estimation, circular targets, or spots, of high contrast have been affixed on each connecting module at carefully surveyed positions. The appearance of a subset of spots essentially must form a constellation of specific relative positions in the incoming digital image stream in order for the docking to proceed. Spot positions are expressed in terms of their apparent centroids in an image. The precision of centroid estimation is required to be as fine as 1/20th pixel, in some cases. This paper presents an approach to spot centroid estimation using cross correlation between spot images and synthetic spot models of precise centration. Techniques for obtaining sub-pixel accuracy and for shadow, obscuration and lighting irregularity compensation are discussed.
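
    The cross-correlation approach with sub-pixel refinement can be sketched generically. The code below correlates an image with a synthetic spot model and refines the correlation peak with a parabolic fit; it is an illustration of the technique, not the flight software, and the Gaussian spot model and sizes are our assumptions.

```python
import numpy as np
from scipy.signal import correlate2d

def subpixel_centroid(image, template):
    """Cross-correlate with a synthetic spot model, then refine the
    integer correlation peak with a 1D parabolic fit in each axis."""
    c = correlate2d(image, template, mode="same")
    iy, ix = np.unravel_index(np.argmax(c), c.shape)

    def offset(v_minus, v_0, v_plus):
        # Vertex of the parabola through three neighboring samples
        denom = v_minus - 2 * v_0 + v_plus
        return 0.0 if denom == 0 else 0.5 * (v_minus - v_plus) / denom

    dy = offset(c[iy - 1, ix], c[iy, ix], c[iy + 1, ix])
    dx = offset(c[iy, ix - 1], c[iy, ix], c[iy, ix + 1])
    return iy + dy, ix + dx

# Illustrative Gaussian spot centered off-grid at (20.3, 14.7)
yy, xx = np.mgrid[0:40, 0:30]
img = np.exp(-((yy - 20.3) ** 2 + (xx - 14.7) ** 2) / (2 * 2.0 ** 2))
ty, tx = np.mgrid[0:9, 0:9]
tmpl = np.exp(-((ty - 4) ** 2 + (tx - 4) ** 2) / (2 * 2.0 ** 2))
print(subpixel_centroid(img, tmpl))   # close to (20.3, 14.7)
```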

  17. A Timing Estimation Method Based-on Skewness Analysis in Vehicular Wireless Networks.

    PubMed

    Cui, Xuerong; Li, Juan; Wu, Chunlei; Liu, Jian-Hang

    2015-11-13

    Vehicle positioning technology has drawn more and more attention in vehicular wireless networks as a way to reduce transportation time and traffic accidents. Nowadays, global navigation satellite systems (GNSS) are widely used in land vehicle positioning, but most of them lack precision and reliability in situations where their signals are blocked. Positioning systems based on short-range wireless communication are another effective way to determine vehicle positions or ranges. IEEE 802.11p is a new real-time short-range wireless communication standard for vehicles, so a new method is proposed to estimate the time delay or range between vehicles based on the IEEE 802.11p standard. It includes three main steps: cross-correlation between the received signal and the short preamble, summing up the correlated results in groups, and finding the maximum peak using a dynamic threshold based on skewness analysis. With the range to each neighboring vehicle or road-side infrastructure, the positions of neighboring vehicles can be estimated correctly. Simulation results in the International Telecommunications Union (ITU) vehicular multipath channel show that the proposed method provides better precision than some well-known timing estimation techniques, especially in low signal-to-noise ratio (SNR) environments.
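
    The three steps above map directly onto code. The sketch below follows them with a simple threshold rule (mean plus a multiple of the standard deviation, with the multiple adjusted by the skewness of the grouped statistic); the paper's exact dynamic-threshold rule is not specified in the abstract, so ours is an illustrative stand-in, as are the signal parameters.

```python
import numpy as np
from scipy.stats import skew

def estimate_arrival(rx, preamble, group=16, k_sigma=3.0):
    """Timing estimation in three steps: (1) cross-correlate the received
    samples with the known short preamble, (2) sum correlation magnitudes
    in groups, (3) take the first group exceeding a skewness-informed
    dynamic threshold. The threshold rule is an illustrative assumption."""
    c = np.abs(np.correlate(rx, preamble, mode="valid"))   # step 1
    n = len(c) // group
    g = c[: n * group].reshape(n, group).sum(axis=1)       # step 2
    s = skew(g)
    thr = g.mean() + (k_sigma if s > 1 else 2 * k_sigma) * g.std()
    hits = np.nonzero(g > thr)[0]                          # step 3
    return None if len(hits) == 0 else hits[0] * group

# Illustrative: a 64-sample preamble buried in noise at a 500-sample delay
rng = np.random.default_rng(1)
pre = rng.choice([-1.0, 1.0], size=64)
rx = rng.normal(0, 1.0, 4000)
rx[500:564] += 3 * pre
print(estimate_arrival(rx, pre))   # near 500 (group-quantized)
```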

  18. Application of a temperature-dependent fluorescent dye (Rhodamine B) to the measurement of radiofrequency radiation-induced temperature changes in biological samples.

    PubMed

    Chen, Yuen Y; Wood, Andrew W

    2009-10-01

    We have applied a non-contact method for studying the temperature changes produced by radiofrequency (RF) radiation specifically to small biological samples. A temperature-dependent fluorescent dye, Rhodamine B, as imaged by laser scanning confocal microscopy (LSCM), was used to do this. The results were calibrated against real-time temperature measurements from fiber optic probes, with a calibration factor of 3.4% intensity change per °C and a reproducibility of ±6%. This non-contact method provided two-dimensional and three-dimensional images of temperature change and distributions in biological samples, at a spatial resolution of a few micrometers, with an estimated absolute precision of around 1.5 °C and a differential precision of 0.4 °C. Temperature rise within tissue was found to be non-uniform. Estimates of specific absorption rate (SAR) from absorbed power measurements were greater than those estimated from the rate of temperature rise, measured at 1 min intervals, probably because this interval is too long to permit accurate estimation of the initial temperature rise following the start of RF exposure. Future experiments will aim to explore this.

  19. New body fat prediction equations for severely obese patients.

    PubMed

    Horie, Lilian Mika; Barbosa-Silva, Maria Cristina Gonzalez; Torrinhas, Raquel Susana; de Mello, Marco Túlio; Cecconello, Ivan; Waitzberg, Dan Linetzky

    2008-06-01

    Severe obesity imposes physical limitations on body composition assessment. Our aim was to compare body fat (BF) estimations of severely obese patients obtained by bioelectrical impedance (BIA) and air displacement plethysmography (ADP) for development of new equations for BF prediction. Severely obese subjects (83 female/36 male, mean age = 41.6 ± 11.6 years) had BF estimated by BIA and ADP. The agreement of the data was evaluated using a Bland-Altman plot and the concordance correlation coefficient (CCC). A multivariate regression analysis was performed to develop and validate new predictive equations. BF estimations from BIA (64.8 ± 15 kg) and ADP (65.6 ± 16.4 kg) did not differ (p > 0.05, with good accuracy, precision, and CCC), but the Bland-Altman plot showed wide limits of agreement (-10.4; 8.8). The standard BIA equation overestimated BF in women (-1.3 kg) and underestimated BF in men (5.6 kg; p < 0.05). Two new BF predictive equations were generated after BIA measurement, which predicted BF with higher accuracy, precision, CCC, and limits of agreement than the standard BIA equation. Standard BIA equations were inadequate for estimating BF in severely obese patients. Equations developed especially for this population provide more accurate BF assessment.
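
    For readers unfamiliar with the agreement analysis used above, the sketch below computes the Bland-Altman bias and 95% limits of agreement for two paired estimates; the body-fat values are invented for illustration and are not the study's data.

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics for paired measurements.

    Returns the mean difference (bias) and the 95% limits of agreement
    (bias +/- 1.96 SD of the differences). Wide limits -- as between BIA
    and ADP above -- mean the methods cannot be used interchangeably
    even when their means do not differ."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative (not the study's) paired body-fat estimates in kg
bia = [58.1, 72.4, 66.0, 49.5, 80.2, 63.7]
adp = [60.0, 70.1, 68.3, 48.0, 84.5, 62.2]
bias, loa = bland_altman(bia, adp)
print(f"bias = {bias:.2f} kg, 95% limits of agreement = "
      f"({loa[0]:.2f}, {loa[1]:.2f}) kg")
```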

  20. Quantum Discord Determines the Interferometric Power of Quantum States

    NASA Astrophysics Data System (ADS)

    Girolami, Davide; Souza, Alexandre M.; Giovannetti, Vittorio; Tufarelli, Tommaso; Filgueiras, Jefferson G.; Sarthour, Roberto S.; Soares-Pinto, Diogo O.; Oliveira, Ivan S.; Adesso, Gerardo

    2014-05-01

    Quantum metrology exploits quantum mechanical laws to improve the precision in estimating technologically relevant parameters such as phase, frequency, or magnetic fields. Probe states are usually tailored to the particular dynamics whose parameters are being estimated. Here we consider a novel framework where quantum estimation is performed in an interferometric configuration, using bipartite probe states prepared when only the spectrum of the generating Hamiltonian is known. We introduce a figure of merit for the scheme, given by the worst-case precision over all suitable Hamiltonians, and prove that it amounts exactly to a computable measure of discord-type quantum correlations for the input probe. We complement our theoretical results with a metrology experiment, realized in a highly controllable room-temperature nuclear magnetic resonance setup, which provides a proof-of-concept demonstration for the usefulness of discord in sensing applications. Discordant probes are shown to guarantee a nonzero phase sensitivity for all the chosen generating Hamiltonians, while classically correlated probes are unable to accomplish the estimation in a worst-case setting. This work establishes a rigorous and direct operational interpretation for general quantum correlations, shedding light on their potential for quantum technology.

  1. Application of cognitive diagnosis models to competency-based situational judgment tests.

    PubMed

    García, Pablo Eduardo; Olea, Julio; De la Torre, Jimmy

    2014-01-01

    Profiling of jobs in terms of competency requirements has increasingly been applied in many organizational settings. Testing these competencies through situational judgment tests (SJTs) leads to validity problems because it is not usually clear which constructs SJTs measure. The primary purpose of this paper is to evaluate whether the application of cognitive diagnosis models (CDM) to competency-based SJTs can ascertain the underlying competencies measured by the items, and whether these competencies can be estimated precisely. The generalized deterministic inputs, noisy "and" gate (G-DINA) model was applied to 26 situational judgment items measuring professional competencies based on the great eight model. These items were applied to 485 employees of a Spanish financial company. The fit of the model to the data and the convergent validity between the estimated competencies and personality dimensions were examined. The G-DINA showed a good fit to the data and the estimated competency factors, adapting and coping and interacting and presenting were positively related to emotional stability and extraversion, respectively. This work indicates that CDM can be a useful tool when measuring professional competencies through SJTs. CDM can clarify the competencies being measured and provide precise estimates of these competencies.

  2. Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation.

    PubMed

    Yuan, Haidong

    2016-10-14

    Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and to design schemes that attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation tasks. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimations on two-dimensional systems, and an O(d+1) improvement for Hamiltonian parameter estimation on d-dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.

  3. Cumulative Retrospective Exposure Assessment (REA) as a predictor of amphibole asbestos lung burden: validation procedures and results for industrial hygiene and pathology estimates.

    PubMed

    Rasmuson, James O; Roggli, Victor L; Boelter, Fred W; Rasmuson, Eric J; Redinger, Charles F

    2014-01-01

    A detailed evaluation of the correlation and linearity of industrial hygiene retrospective exposure assessment (REA) for cumulative asbestos exposure with asbestos lung burden analysis (LBA) has not been previously performed, but both methods are utilized for case-control and cohort studies and other applications such as setting occupational exposure limits. (a) To correlate REA with asbestos LBA for a large number of cases from varied industries and exposure scenarios; (b) to evaluate the linearity, precision, and applicability of both industrial hygiene exposure reconstruction and LBA; and (c) to demonstrate validation methods for REA. A panel of four experienced industrial hygiene raters independently estimated the cumulative asbestos exposure for 363 cases with limited exposure details in which asbestos LBA had been independently determined. LBA for asbestos bodies was performed by a pathologist by both light microscopy and scanning electron microscopy (SEM) and free asbestos fibers by SEM. Precision, reliability, correlation and linearity were evaluated via intraclass correlation, regression analysis and analysis of covariance. Plaintiff's answers to interrogatories, work history sheets, work summaries or plaintiff's discovery depositions that were obtained in court cases involving asbestos were utilized by the pathologist to provide a summarized brief asbestos exposure and work history for each of the 363 cases. Linear relationships between REA and LBA were found when adjustment was made for asbestos fiber-type exposure differences. Significant correlation between REA and LBA was found with amphibole asbestos lung burden and mixed fiber-types, but not with chrysotile. The intraclass correlation coefficients (ICC) for the precision of the industrial hygiene rater cumulative asbestos exposure estimates and the precision of repeated laboratory analysis were found to be in the excellent range. The ICC estimates were performed independent of specific asbestos fiber-type. Both REA and pathology assessment are reliable and complementary predictive methods to characterize asbestos exposures. Correlation analysis between the two methods effectively validates both REA methodology and LBA procedures within the determined precision, particularly for cumulative amphibole asbestos exposures since chrysotile fibers, for the most part, are not retained in the lung for an extended period of time.

  4. Mitogenome Phylogenetics: The Impact of Using Single Regions and Partitioning Schemes on Topology, Substitution Rate and Divergence Time Estimation

    PubMed Central

    Duchêne, Sebastián; Archer, Frederick I.; Vilstrup, Julia; Caballero, Susana; Morin, Phillip A.

    2011-01-01

    The availability of mitochondrial genome sequences is growing as a result of recent technological advances in molecular biology. In phylogenetic analyses, the complete mitogenome is increasingly becoming the marker of choice, usually providing better phylogenetic resolution and precision relative to traditional markers such as cytochrome b (CYTB) and the control region (CR). In some cases, the differences in phylogenetic estimates between mitogenomic and single-gene markers have yielded incongruent conclusions. By comparing phylogenetic estimates made from different genes, we identified the most informative mitochondrial regions and evaluated the minimum amount of data necessary to reproduce the same results as the mitogenome. We compared results among individual genes and the mitogenome for recently published complete mitogenome datasets of selected delphinids (Delphinidae) and killer whales (genus Orcinus). Using Bayesian phylogenetic methods, we investigated differences in estimation of topologies, divergence dates, and clock-like behavior among genes for both datasets. Although the most informative regions were not the same for each taxonomic group (COX1, CYTB, ND3 and ATP6 for Orcinus, and ND1, COX1 and ND4 for Delphinidae), in both cases they were equivalent to less than a quarter of the complete mitogenome. This suggests that gene information content can vary among groups, but can be adequately represented by a portion of the complete sequence. Although our results indicate that complete mitogenomes provide the highest phylogenetic resolution and most precise date estimates, a minimum amount of data can be selected using our approach when the complete sequence is unavailable. Studies based on single genes can benefit from the addition of a few more mitochondrial markers, producing topologies and date estimates similar to those obtained using the entire mitogenome. PMID:22073275

  5. Using frequency analysis to improve the precision of human body posture algorithms based on Kalman filters.

    PubMed

    Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G

    2016-05-01

    With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally include fixed parameters, such as the process and observation noise variances, whose values have a large influence on the overall performance. It has been demonstrated that the optimal value of these parameters differs considerably for different motion intensities. Therefore, in this work, we show that, by applying frequency analysis to determine motion intensity, and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, therefore providing physicians with reliable objective data they can use in their daily practice. Copyright © 2015 Elsevier Ltd. All rights reserved.
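
    The adaptation idea can be sketched with a scalar filter: estimate motion intensity from the frequency content of a recent window, then switch the process-noise variance accordingly. Everything below (the band limits, the 0.5 intensity threshold, the two-level Q schedule) is an illustrative assumption, not the authors' parameterization.

```python
import numpy as np

def motion_intensity(window, fs=100.0, band=(0.5, 15.0)):
    """Fraction of signal power inside an assumed 'movement' band,
    used as a crude proxy for motion intensity."""
    spec = np.abs(np.fft.rfft(window - window.mean())) ** 2
    freqs = np.fft.rfftfreq(len(window), 1.0 / fs)
    in_band = spec[(freqs >= band[0]) & (freqs <= band[1])].sum()
    return in_band / max(spec.sum(), 1e-12)

def adaptive_kalman(z, q_low=1e-4, q_high=1e-2, r=0.05, win=64):
    """Scalar Kalman filter whose process-noise variance Q switches
    with the estimated motion intensity of the recent window -- a
    minimal sketch of the adaptation idea, not the paper's filters."""
    x, p, out = z[0], 1.0, []
    for i, zi in enumerate(z):
        w = z[max(0, i - win):i + 1]
        q = q_high if len(w) > 8 and motion_intensity(w) > 0.5 else q_low
        p += q                    # predict
        k = p / (p + r)           # Kalman gain
        x += k * (zi - x)         # update
        p *= (1 - k)
        out.append(x)
    return np.array(out)

# Illustrative: still, then vigorous movement, then still again
rng = np.random.default_rng(2)
t = np.arange(600) / 100.0
truth = np.where((t > 2) & (t < 4), np.sin(2 * np.pi * 3 * t), 0.0)
meas = truth + rng.normal(0, 0.2, t.size)
est = adaptive_kalman(meas)
print(f"RMSE = {np.sqrt(np.mean((est - truth) ** 2)):.3f}")
```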

  6. Reporting the accuracy of biochemical measurements for epidemiologic and nutrition studies.

    PubMed

    McShane, L M; Clark, L C; Combs, G F; Turnbull, B W

    1991-06-01

    Procedures for reporting and monitoring the accuracy of biochemical measurements are presented. They are proposed as standard reporting procedures for laboratory assays for epidemiologic and clinical-nutrition studies. The recommended procedures require identification and estimation of all major sources of variability and explanations of the laboratory quality control procedures employed. Variance-components techniques are used to model the total variability and calculate a maximum percent error that provides an easily understandable measure of laboratory precision accounting for all sources of variability. This avoids ambiguities encountered when reporting an SD that may take into account only a few of the potential sources of variability. Other proposed uses of the total-variability model include estimating the precision of laboratory methods for various replication schemes and developing effective quality control-checking schemes. These procedures are demonstrated with an example of the analysis of alpha-tocopherol in human plasma by using high-performance liquid chromatography.

  7. Evaluating cost-efficiency and accuracy of hunter harvest survey designs

    USGS Publications Warehouse

    Lukacs, P.M.; Gude, J.A.; Russell, R.E.; Ackerman, B.B.

    2011-01-01

    Effective management of harvested wildlife often requires accurate estimates of the number of animals harvested annually by hunters. A variety of techniques exist to obtain harvest data, such as hunter surveys, check stations, mandatory reporting requirements, and voluntary reporting of harvest. Agencies responsible for managing harvested wildlife such as deer (Odocoileus spp.), elk (Cervus elaphus), and pronghorn (Antilocapra americana) are challenged with balancing the cost of data collection versus the value of the information obtained. We compared precision, bias, and relative cost of several common strategies, including hunter self-reporting and random sampling, for estimating hunter harvest using a realistic set of simulations. Self-reporting with a follow-up survey of hunters who did not report produces the best estimate of harvest in terms of precision and bias, but it is also, by far, the most expensive technique. Self-reporting with no follow-up survey risks very large bias in harvest estimates, and the cost increases with increased response rate. Probability-based sampling provides a substantial cost savings, though accuracy can be affected by nonresponse bias. We recommend stratified random sampling with a calibration estimator used to reweight the sample based on the proportions of hunters responding in each covariate category as the best option for balancing cost and accuracy. © 2011 The Wildlife Society.

  8. Fast estimation of space-robots inertia parameters: A modular mathematical formulation

    NASA Astrophysics Data System (ADS)

    Nabavi Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher

    2016-10-01

    This work aims to propose a new technique that considerably helps enhance the time and precision needed to identify the "Inertia Parameters" (IPs) of a typical Autonomous Space-Robot (ASR). Operations might include capturing an unknown Target Space-Object (TSO), "active space-debris removal", or "automated in-orbit assemblies". In these operations, generating precise successive commands is essential to the success of the mission. We show how a generalized, repeatable estimation process could play an effective role in managing the operation. With the help of the well-known force-based approach, a new "modular formulation" has been developed to simultaneously identify the IPs of an ASR while it captures a TSO. The idea is to reorganize the equations and associated IPs into a "modular set" of matrices instead of a single matrix representing the overall system dynamics. The devised modular matrix set then facilitates the estimation process. It provides a conjugate linear model in the mass and inertia terms. The new formulation is, therefore, well suited for "simultaneous estimation processes" using recursive algorithms like RLS. Further enhancements would be needed for cases where the effect of the center-of-mass location becomes important. Extensive case studies reveal that the estimation time is drastically reduced, which in turn paves the way to acquiring better results.
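
    The recursive estimator that the formulation is designed to feed can be illustrated generically. The sketch below is a textbook recursive least squares (RLS) update for any model linear in its parameters; the modular regressor construction from measured forces and accelerations is the paper's contribution and is not reproduced, so the two-parameter example is purely illustrative.

```python
import numpy as np

class RecursiveLeastSquares:
    """Generic RLS estimator for a model y = phi . theta, the kind of
    recursive algorithm the modular formulation above is meant to feed."""
    def __init__(self, n_params, forgetting=1.0):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * 1e6   # large initial uncertainty
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain
        self.theta += k * (y - phi @ self.theta)             # correct
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Illustrative 2-parameter problem: f = m * a + b from noisy forces
rng = np.random.default_rng(3)
rls = RecursiveLeastSquares(2)
m_true, b_true = 250.0, 5.0
for _ in range(200):
    a = rng.uniform(-1, 1)
    f = m_true * a + b_true + rng.normal(0, 2.0)
    est = rls.update([a, 1.0], f)
print(est)   # -> approximately [250, 5]
```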

  9. The 2006 William Feinberg lecture: shifting the paradigm from stroke to global vascular risk estimation.

    PubMed

    Sacco, Ralph L

    2007-06-01

    By the year 2010, it is estimated that 18.1 million people worldwide will die annually because of cardiovascular diseases and stroke. "Global vascular risk" more broadly includes the multiple overlapping disease silos of stroke, myocardial infarction, peripheral arterial disease, and vascular death. Estimation of global vascular risk requires consideration of a variety of variables including demographics, environmental behaviors, and risk factors. Data from multiple studies suggest continuous linear relationships between the physiological vascular risk modulators of blood pressure, lipids, and blood glucose rather than treating these conditions as categorical risk factors. Constellations of risk factors may be more relevant than individual categorical components. Exciting work with novel risk factors may also have predictive value in estimates of global vascular risk. Advances in imaging have led to the measurement of subclinical conditions such as carotid intima-media thickness and subclinical brain conditions such as white matter hyperintensities and silent infarcts. These subclinical measurements may be intermediate stages in the transition from asymptomatic to symptomatic vascular events, appear to be associated with the fundamental vascular risk factors, and represent opportunities to more precisely quantitate disease progression. The expansion of studies in molecular epidemiology and detection of genetic markers underlying vascular risks also promises to extend our precision of global vascular risk estimation. Global vascular risk estimation will require quantitative methods that bundle these multi-dimensional data into more precise estimates of future risk. The power of genetic information coupled with data on demographics, risk-inducing behaviors, vascular risk modulators, biomarkers, and measures of subclinical conditions should provide the most realistic approximation of an individual's future global vascular risk. The ultimate public health benefit, however, will depend on not only identification of global vascular risk but also the realization that we can modify this risk and prove the prediction models wrong.

  10. What’s Driving Uncertainty? The Model or the Model Parameters (What’s Driving Uncertainty? The influences of model and model parameters in data analysis)

    DOE PAGES

    Anderson-Cook, Christine Michaela

    2017-03-01

    Here, one of the substantial improvements to the practice of data analysis in recent decades is the change from reporting just a point estimate for a parameter or characteristic to now including a summary of uncertainty for that estimate. Understanding the precision of the estimate for the quantity of interest provides better understanding of what to expect and how well we are able to predict future behavior from the process. For example, when we report a sample average as an estimate of the population mean, it is good practice to also provide a confidence interval (or credible interval, if you are doing a Bayesian analysis) to accompany that summary. This helps to calibrate what ranges of values are reasonable given the variability observed in the sample and the amount of data that were included in producing the summary.
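
    The practice recommended above takes only a few lines. The sketch below reports a sample mean together with a t-based confidence interval; the sample values are illustrative.

```python
import numpy as np
from scipy import stats

def mean_with_ci(sample, confidence=0.95):
    """Point estimate plus uncertainty: the sample mean and a t-based
    confidence interval, as the passage recommends."""
    x = np.asarray(sample, float)
    m = x.mean()
    half = stats.t.ppf(0.5 + confidence / 2, len(x) - 1) * stats.sem(x)
    return m, (m - half, m + half)

print(mean_with_ci([9.8, 10.4, 10.1, 9.6, 10.3, 10.0]))
```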

  11. Combining multistate capture-recapture data with tag recoveries to estimate demographic parameters

    USGS Publications Warehouse

    Kendall, W.L.; Conn, P.B.; Hines, J.E.

    2006-01-01

    Matrix population models that allow an animal to occupy more than one state over time are important tools for population and evolutionary ecologists. Definition of state can vary, including location for metapopulation models and breeding state for life history models. For populations whose members can be marked and subsequently re-encountered, multistate mark-recapture models are available to estimate the survival and transition probabilities needed to construct population models. Multistate models have proved extremely useful in this context, but they often require a substantial amount of data and restrict estimation of transition probabilities to those areas or states subjected to formal sampling effort. At the same time, for many species, there are considerable tag recovery data provided by the public that could be modeled in order to increase precision and to extend inference to a greater number of areas or states. Here we present a statistical model for combining multistate capture-recapture data (e.g., from a breeding ground study) with multistate tag recovery data (e.g., from wintering grounds). We use this method to analyze data from a study of Canada Geese (Branta canadensis) in the Atlantic Flyway of North America. Our analysis produced marginal improvement in precision, due to relatively few recoveries, but we demonstrate how precision could be further improved with increases in the probability that a retrieved tag is reported.

  12. Use of geostatistics in planning optimum drilling program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghose S.

    1989-08-01

    Application of geostatistics in the natural resources industry is well established. In a typical process of estimation, the statistically dependent geological data are used to predict the characteristics of a deposit. The estimator used is the best linear unbiased estimator (or BLUE), and a numerical factor of confidence is also provided. The natural inhomogeneity and anisotropy of a deposit are also quantified with preciseness. Drilling is the most reliable way of obtaining data for mining and related industries. However, it is often difficult to decide what is the optimum number of drill holes necessary for evaluation. In this paper, sequential measures of percent variation at the 95% confidence level of a geological variable have been used to decipher an economically optimum drilling density. A coal reserve model has been used to illustrate the method and findings. Fictitious drilling data were added (within the domain of population characteristics) in stages, to obtain a point of stability, beyond which the gain was insignificant (diminishing marginal benefit). The final relations are established by graphically projecting and comparing two variables - cost and precision. By mapping the percent variation at each stage, the localized areas of discrepancies can be identified. These are the locations where additional drilling is needed. The system can be controlled if performed at progressive stages and the preciseness toward stability is monitored.

  13. Multilevel modeling of single-case data: A comparison of maximum likelihood and Bayesian estimation.

    PubMed

    Moeyaert, Mariola; Rindskopf, David; Onghena, Patrick; Van den Noortgate, Wim

    2017-12-01

    The focus of this article is to describe Bayesian estimation, including construction of prior distributions, and to compare parameter recovery under the Bayesian framework (using weakly informative priors) and the maximum likelihood (ML) framework in the context of multilevel modeling of single-case experimental data. Bayesian estimation results were found similar to ML estimation results in terms of the treatment effect estimates, regardless of the functional form and degree of information included in the prior specification in the Bayesian framework. In terms of the variance component estimates, both the ML and Bayesian estimation procedures result in biased and less precise variance estimates when the number of participants is small (i.e., 3). By increasing the number of participants to 5 or 7, the relative bias is close to 5% and more precise estimates are obtained for all approaches, except for the inverse-Wishart prior using the identity matrix. When a more informative prior was added, more precise estimates for the fixed effects and random effects were obtained, even when only 3 participants were included. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating the precision intervals, including confidence and prediction intervals. The power of the method has been demonstrated by applying it to a set of wind tunnel test data in constructing a response surface and estimating its precision intervals.
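
    For the regression half of such a hybrid, the confidence and prediction intervals take the familiar t-based form; the sketch below computes both for an ordinary linear fit on toy data (not the wind tunnel set), which is the baseline the neural networks extend to nonlinear surfaces.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.8 * x + rng.normal(0, 0.5, x.size)

X = np.column_stack([np.ones_like(x), x])          # design matrix
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = x.size - X.shape[1]
s2 = res[0] / dof                                  # residual variance
XtX_inv = np.linalg.inv(X.T @ X)

x0 = np.array([1.0, 5.0])                          # prediction point x = 5
y0 = x0 @ beta
t = stats.t.ppf(0.975, dof)
leverage = x0 @ XtX_inv @ x0
conf = t * np.sqrt(s2 * leverage)                  # confidence interval (mean response)
pred = t * np.sqrt(s2 * (1.0 + leverage))          # prediction interval (new observation)
print(f"y(5) = {y0:.2f}, 95% CI +/- {conf:.2f}, 95% PI +/- {pred:.2f}")
```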

  15. A Contamination-Free Ultrahigh Precision Formation Flying Method for Micro-, Nano-, and Pico-Satellites with Nanometer Accuracy

    NASA Astrophysics Data System (ADS)

    Bae, Young K.

    2006-01-01

    Formation flying of clusters of micro-, nano- and pico-satellites has been recognized to be more affordable, robust and versatile than building a large monolithic satellite for implementing next-generation space missions that require large apertures or large sample collection areas and sophisticated earth imaging/monitoring. We propose a propellant-free, thus contamination-free, method that enables ultrahigh precision satellite formation flying with intersatellite distance accuracy at the nm (10^-9 m) level at estimated maximum distances on the order of tens of km. The method is based on ultrahigh precision CW intracavity photon thrusters and tethers. The pushing-out force of the intracavity photon thruster and the pulling-in force of the tether tension between satellites form the basic force structure needed to stabilize crystalline-like structures of satellites and/or spacecraft with a relative distance accuracy better than a nm. The thrust of the photons can be amplified by up to tens of thousands of times by bouncing them between two mirrors located separately on pairing satellites. For example, a 10 W photon thruster, suitable for micro-satellite applications, is theoretically capable of providing thrusts up to the mN level, and its weight and power consumption are estimated to be several kg and tens of W, respectively. The dual use of the photon thruster as a precision laser source for the interferometric ranging system further simplifies the system architecture and minimizes the weight and power consumption. The present method does not require propellant, thus providing significant propulsion system mass savings, and is free from propellant exhaust contamination, making it ideal for missions that require large apertures composed of highly sensitive sensors. The system can be readily scaled down for nano- and pico-satellite applications.
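
    The quoted mN-scale thrust can be sanity-checked with the standard photon-pressure relation F ≈ 2PN/c for a beam of power P reflected N times between the paired mirrors; the bounce count below is an assumed figure consistent with the "tens of thousands" amplification mentioned above.

```python
c = 2.998e8          # speed of light, m/s
P = 10.0             # intracavity beam power, W
N_bounce = 10_000    # assumed amplification from bouncing between paired mirrors

force = 2 * P * N_bounce / c                # photon thrust, N
print(f"thrust ~= {force * 1e3:.2f} mN")    # ~0.67 mN, consistent with the quoted mN scale
```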

  16. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.

    2014-04-15

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.
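
    The paper's full scheme solves a constrained minimization, but the qualitative behavior (sample size growing as the precision or confidence requirement tightens, or as the anticipated ED falls) already follows from the usual large-sample bound; a simplified sketch, with hypothetical per-measurement CVs rather than the paper's fitted values:

```python
import math
from scipy import stats

def n_required(cv, precision, confidence=0.95):
    """Large-sample approximation: smallest n with z * CV / sqrt(n) <= precision.

    cv        -- relative standard deviation of a single ED measurement
    precision -- tolerated relative half-width of the CI (e.g., 0.05 for 5%)
    """
    z = stats.norm.ppf(0.5 + confidence / 2)
    return math.ceil((z * cv / precision) ** 2)

# Hypothetical CVs: relative variation shrinks as the anticipated ED grows.
for ed, cv in [(4.0, 0.14), (10.0, 0.06)]:
    print(f"ED ~ {ed} mSv: n >= {n_required(cv, 0.05)}")
```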

  17. Sample size requirements for estimating effective dose from computed tomography using solid-state metal-oxide-semiconductor field-effect transistor dosimetry

    PubMed Central

    Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.

    2014-01-01

    Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence. PMID:24694150

  18. Econometric models for predicting confusion crop ratios

    NASA Technical Reports Server (NTRS)

    Umberger, D. E.; Proctor, M. H.; Clark, J. E.; Eisgruber, L. M.; Braschler, C. B. (Principal Investigator)

    1979-01-01

    Results for both the United States and Canada show that econometric models can provide estimates of confusion crop ratios that are more accurate than historical ratios. Whether these models can support the LACIE 90/90 accuracy criterion is uncertain. In the United States, experimenting with additional model formulations could provide improved models in some CRDs, particularly in winter wheat. Improved models may also be possible for the Canadian CDs. The more aggregated province/state models outperformed individual CD/CRD models. This result was expected, partly because acreage statistics are based on sampling procedures, and sampling precision declines from the province/state to the CD/CRD level. Declining sampling precision and the need to substitute province/state data for CD/CRD data introduced measurement error into the CD/CRD models.

  19. Increasing precision of turbidity-based suspended sediment concentration and load estimates.

    PubMed

    Jastram, John D; Zipper, Carl E; Zelazny, Lucian W; Hyer, Kenneth E

    2010-01-01

    Turbidity is an effective tool for estimating and monitoring suspended sediments in aquatic systems. Turbidity can be measured in situ remotely and at fine temporal scales as a surrogate for suspended sediment concentration (SSC), providing the opportunity for a more complete record of SSC than is possible with physical sampling approaches. However, there is variability in turbidity-based SSC estimates and in sediment loadings calculated from those estimates. This study investigated the potential to improve turbidity-based SSC estimates, and by extension the resulting sediment loading estimates, by incorporating hydrologic variables that can be monitored remotely and continuously (typically at 15-min intervals) into the SSC estimation procedure. On the Roanoke River in southwestern Virginia, hydrologic stage, turbidity, and other water-quality parameters were monitored with in situ instrumentation; suspended sediments were sampled manually during elevated turbidity events; samples were analyzed for SSC and physical properties including particle-size distribution and organic C content; and rainfall was quantified by geologic source area. The study identified physical properties of the suspended-sediment samples that contribute to SSC estimation variance and hydrologic variables that explained the variability of those physical properties. Results indicated that the inclusion of any of the measured physical properties in turbidity-based SSC estimation models reduces unexplained variance. Further, the use of hydrologic variables to represent these physical properties, along with turbidity, resulted in a model, relying solely on data collected remotely and continuously, that estimated SSC with less variance than a conventional turbidity-based univariate model, allowing a more precise estimate of sediment loading. Modeling results are consistent with known mechanisms governing sediment transport in hydrologic systems.
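
    A minimal sketch of the multivariate idea, with synthetic data and hypothetical column names (the actual model terms are specific to the Roanoke River study): augment a log-log turbidity rating with a continuously monitored hydrologic covariate and compare residual variance against the univariate model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "turb":  rng.lognormal(3.0, 0.8, n),       # turbidity (FNU), synthetic
    "stage": rng.lognormal(0.5, 0.3, n),       # hydrologic stage (m), synthetic
})
# Synthetic SSC that depends on both turbidity and stage.
df["ssc"] = np.exp(0.5 + 0.9 * np.log(df.turb) + 0.6 * np.log(df.stage)
                   + rng.normal(0, 0.15, n))

uni   = smf.ols("np.log(ssc) ~ np.log(turb)", df).fit()
multi = smf.ols("np.log(ssc) ~ np.log(turb) + np.log(stage)", df).fit()
print(f"residual SD, turbidity only: {np.sqrt(uni.scale):.3f}")
print(f"residual SD, + stage:        {np.sqrt(multi.scale):.3f}")  # smaller variance
```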

  20. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    USGS Publications Warehouse

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state-space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors, using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and the observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
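
    For orientation, the SISR core is compact: propagate particles through the state model, weight them by the observation likelihood, and resample. The stripped-down sketch below filters a scalar log-abundance state from simulated counts; the paper's models add stage structure, parameter particles, and the kernel smoothing mentioned above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulate a population trajectory and noisy counts.
T, r_true, sd_proc = 25, 0.05, 0.1
log_n = np.cumsum(np.r_[np.log(100), rng.normal(r_true, sd_proc, T - 1)])
counts = rng.poisson(np.exp(log_n))

# SISR with 5000 particles for the latent log-abundance.
P = 5000
particles = rng.normal(np.log(100), 0.5, P)              # prior on initial state
estimates = []
for y in counts:
    w = stats.poisson.pmf(y, np.exp(particles))          # observation weights
    w /= w.sum()
    estimates.append(np.sum(w * particles))              # filtered mean (log scale)
    particles = rng.choice(particles, size=P, p=w)       # resample
    particles = particles + rng.normal(r_true, sd_proc, P)  # propagate to next step
print(np.exp(estimates[-1]), np.exp(log_n[-1]))          # filtered vs. true final N
```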

  1. Analysing the Zenith Tropospheric Delay Estimates in On-line Precise Point Positioning (PPP) Services and PPP Software Packages.

    PubMed

    Mendez Astudillo, Jorge; Lau, Lawrence; Tang, Yu-Ting; Moore, Terry

    2018-02-14

    As Global Navigation Satellite System (GNSS) signals travel through the troposphere, a tropospheric delay occurs due to a change in the refractive index of the medium. The Precise Point Positioning (PPP) technique can achieve centimeter/millimeter positioning accuracy with only one GNSS receiver. The Zenith Tropospheric Delay (ZTD) is estimated alongside the position unknowns in PPP. The estimated ZTD can be very useful for meteorological applications; an example is the estimation of water vapor content in the atmosphere from the estimated ZTD. PPP is implemented with different algorithms and models in online services and software packages. In this study, a performance assessment with analysis of ZTD estimates from three PPP online services and three software packages is presented. The main contribution of this paper is to show the accuracy of ZTD estimation achievable with PPP. The analysis also gives GNSS users and researchers insight into how the processing algorithms affect PPP ZTD estimation. Observation data from eight whole days at nine International GNSS Service (IGS) tracking stations spread across the northern hemisphere, the equatorial region and the southern hemisphere are used in this analysis. The PPP ZTD estimates are compared with the ZTD obtained from the IGS tropospheric product for the same days. The estimates of two of the three online PPP services show good agreement (<1 cm) with the IGS ZTD values at the northern and southern hemisphere stations. The results also show that the online PPP services perform better than the selected PPP software packages at all stations.

  2. Genotyping-by-sequencing for estimating relatedness in nonmodel organisms: Avoiding the trap of precise bias.

    PubMed

    Attard, Catherine R M; Beheregaray, Luciano B; Möller, Luciana M

    2018-05-01

    There has been remarkably little attention to using the high resolution provided by genotyping-by-sequencing (i.e., RADseq and similar methods) for assessing relatedness in wildlife populations. A major hurdle is the genotyping error, especially allelic dropout, often found in this type of data that could lead to downward-biased, yet precise, estimates of relatedness. Here, we assess the applicability of genotyping-by-sequencing for relatedness inferences given its relatively high genotyping error rate. Individuals of known relatedness were simulated under genotyping error, allelic dropout and missing data scenarios based on an empirical ddRAD data set, and their true relatedness was compared to that estimated by seven relatedness estimators. We found that an estimator chosen through such analyses can circumvent the influence of genotyping error, with the estimator of Ritland (Genetics Research, 67, 175) shown to be unaffected by allelic dropout and to be the most accurate when there is genotyping error. We also found that the choice of estimator should not rely solely on the strength of correlation between estimated and true relatedness as a strong correlation does not necessarily mean estimates are close to true relatedness. We also demonstrated how even a large SNP data set with genotyping error (allelic dropout or otherwise) or missing data still performs better than a perfectly genotyped microsatellite data set of tens of markers. The simulation-based approach used here can be easily implemented by others on their own genotyping-by-sequencing data sets to confirm the most appropriate and powerful estimator for their data. © 2017 John Wiley & Sons Ltd.

  3. Effects of tag loss on direct estimates of population growth rate

    USGS Publications Warehouse

    Rotella, J.J.; Hines, J.E.

    2005-01-01

    The temporal symmetry approach of R. Pradel can be used with capture-recapture data to produce retrospective estimates of a population's growth rate, lambda(i), and the relative contributions to lambda(i) from different components of the population. Direct estimation of lambda(i) provides an alternative to using population projection matrices to estimate asymptotic lambda and is seeing increased use. However, the robustness of direct estimates of lambda(i) to violations of several key assumptions has not yet been investigated. Here, we consider tag loss as a possible source of bias for scenarios in which the rate of tag loss is (1) the same for all marked animals in the population and (2) a function of tag age. We computed analytic approximations of the expected values for each of the parameter estimators involved in direct estimation and used those values to calculate the bias and precision of each parameter estimator. Estimates of lambda(i) were robust to homogeneous rates of tag loss. When tag loss rates varied by tag age, bias occurred for some of the sampling situations evaluated, especially those with low capture probability, a high rate of tag loss, or both. For situations with low rates of tag loss and high capture probability, bias was low and often negligible. Estimates of the contributions of demographic components to lambda(i) were not robust to tag loss. Tag loss reduced the precision of all estimates because tag loss results in fewer marked animals remaining available for estimation. Clearly tag loss should be prevented if possible, and should be considered in analyses of lambda(i), but tag loss does not necessarily preclude unbiased estimation of lambda(i).

  4. Capacity and precision in an animal model of visual short-term memory.

    PubMed

    Lara, Antonio H; Wallis, Jonathan D

    2012-03-14

    Temporary storage of information in visual short-term memory (VSTM) is a key component of many complex cognitive abilities. However, it is highly limited in capacity. Understanding the neurophysiological nature of this capacity limit will require a valid animal model of VSTM. We used a multiple-item color change detection task to measure macaque monkeys' VSTM capacity. Subjects' performance deteriorated and reaction times increased as a function of the number of items in memory. Additionally, we measured the precision of the memory representations by varying the distance between sample and test colors. In trials with similar sample and test colors, subjects made more errors compared to trials with highly discriminable colors. We modeled the error distribution as a Gaussian function and used this to estimate the precision of VSTM representations. We found that as the number of items in memory increases, the precision of the representations decreases dramatically. Additionally, we found that focusing attention on one of the objects increases the precision with which that object is stored and degrades the precision of the remaining objects. These results are in line with recent findings in human psychophysics and provide a solid foundation for understanding the neurophysiological nature of the capacity limit of VSTM.
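
    The precision-estimation step can be illustrated directly: model the distribution of report errors as a Gaussian and take the fitted standard deviation (or its reciprocal) as the precision measure. The error distributions below are synthetic, with widths chosen only to mimic the qualitative set-size effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Synthetic report errors (distance between sample and reported color, in
# color-wheel degrees); precision degrades as memory load grows.
errors = {1: rng.normal(0, 12, 500),   # set size 1: narrow error distribution
          4: rng.normal(0, 30, 500)}   # set size 4: broad error distribution

for set_size, err in errors.items():
    mu, sd = stats.norm.fit(err)       # Gaussian fit to the error distribution
    print(f"set size {set_size}: SD = {sd:5.1f} deg, precision = {1/sd:.3f}")
```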

  5. Analysis of the Precision of Variable Flip Angle T1 Mapping with Emphasis on the Noise Propagated from RF Transmit Field Maps.

    PubMed

    Lee, Yoojin; Callaghan, Martina F; Nagy, Zoltan

    2017-01-01

    In magnetic resonance imaging, precise measurement of the longitudinal relaxation time (T1) is crucial for acquiring information that is applicable to numerous clinical and neuroscience applications. In this work, we investigated the precision of the T1 relaxation time as measured using the variable flip angle method, with emphasis on the noise propagated from radiofrequency transmit field (B1+) measurements. The analytical solution for T1 precision was derived by standard error propagation methods, incorporating the noise from the three input sources: two spoiled gradient echo (SPGR) images and a B1+ map. Repeated in vivo experiments were performed to estimate the total variance in T1 maps, and we compared these experimentally obtained values with the theoretical predictions to validate the established theoretical framework. Both the analytical and experimental results showed that variance in the B1+ map propagated noise levels into the T1 maps comparable to those from either of the two SPGR images. Improving the precision of the B1+ measurements significantly reduced the variance in the estimated T1 map. The variance estimated from the repeatedly measured in vivo T1 maps agreed well with the theoretically calculated variance in T1 estimates, thus validating the analytical framework for realistic in vivo experiments. We conclude that for T1 mapping experiments, the error propagated from the B1+ map must be considered. Optimizing the SPGR signals while neglecting to improve the precision of the B1+ map may result in grossly overestimating the precision of the estimated T1 values.
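
    The propagation step can be reproduced numerically without the closed-form algebra: estimate T1 from two SPGR signals and a B1 scale factor, take finite-difference partials with respect to each input, and sum the squared contributions. The flip angles, TR and noise levels below are hypothetical stand-ins, not the study's protocol.

```python
import numpy as np

TR, T1_true, M0 = 25.0, 1200.0, 1.0        # ms, ms, a.u. (hypothetical)
alphas = np.deg2rad([6.0, 21.0])           # nominal flip angles

def spgr(t1, alpha, b1=1.0):
    e1 = np.exp(-TR / t1)
    a = b1 * alpha
    return M0 * np.sin(a) * (1 - e1) / (1 - e1 * np.cos(a))

def t1_estimate(s1, s2, b1):
    # Linearized VFA: S/sin(a) = E1 * S/tan(a) + M0(1 - E1); the slope gives E1.
    a1, a2 = b1 * alphas
    y1, y2 = s1 / np.sin(a1), s2 / np.sin(a2)
    x1, x2 = s1 / np.tan(a1), s2 / np.tan(a2)
    e1 = (y2 - y1) / (x2 - x1)
    return -TR / np.log(e1)

inputs = np.array([spgr(T1_true, alphas[0]), spgr(T1_true, alphas[1]), 1.0])
sigmas = np.array([0.002, 0.002, 0.01])    # noise SDs of S1, S2, and the B1 map

# First-order error propagation with finite-difference partial derivatives.
var_t1 = 0.0
for i in range(3):
    h = 1e-6 * inputs[i]
    p = inputs.copy()
    p[i] += h
    dT1 = (t1_estimate(*p) - t1_estimate(*inputs)) / h
    var_t1 += (dT1 * sigmas[i]) ** 2
    print(f"input {i}: variance contribution {(dT1 * sigmas[i])**2:.1f} ms^2")
print(f"SD(T1) ~= {np.sqrt(var_t1):.1f} ms")
```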

  6. A comparison of technical replicate (cuts) effect on lamb Warner-Bratzler shear force measurement precision.

    PubMed

    Holman, B W B; Alvarenga, T I R C; van de Ven, R J; Hopkins, D L

    2015-07-01

    The Warner-Bratzler shear force (WBSF) of 335 lamb m. longissimus lumborum (LL) caudal and cranial ends was measured to examine and simulate the effect of replicate number (r: 1-8) on the precision of mean WBSF estimates and to compare LL caudal and cranial end WBSF means. All LL were sourced from two experimental flocks as part of the Information Nucleus slaughter programme (CRC for Sheep Industry Innovation) and analysed using a Lloyd Texture analyser with a Warner-Bratzler blade attachment. WBSF data were natural logarithm (ln) transformed before statistical analysis. Mean ln(WBSF) precision improved as r increased; however, the practical implications support an r equal to 6, as precision improves only marginally with additional replicates. Increasing LL sample replication results in better ln(WBSF) precision compared with increasing r, provided that sample replicates are taken from the same LL end. Cranial end mean WBSF was 11.2 ± 1.3% higher than the caudal end. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  7. Does choice of estimators influence conclusions from true metabolizable energy feeding trials?

    USGS Publications Warehouse

    Sherfy, M.H.; Kirkpatrick, R.L.; Webb, K.E.

    2005-01-01

    True metabolizable energy (TME) is a measure of avian dietary quality that accounts for metabolic fecal and endogenous urinary energy losses (EL) of non-dietary origin. The TME is calculated using a bird fed the test diet and an estimate of EL derived from another bird (Paired Bird Correction), the same bird (Self Correction), or several other birds (Group Mean Correction). We evaluated precision of these estimators by using each to calculate TME of three seed diets in blue-winged teal (Anas discors). The TME varied by <2% among estimators for all three diets, and Self Correction produced the least variable TMEs for each. The TME did not differ between estimators in nine paired comparisons within diets, but variation between estimators within individual birds was sufficient to be of practical consequence. Although differences in precision among methods were slight, Self Correction required the lowest sample size to achieve a given precision. Feeding trial methods that minimize variation among individuals have several desirable properties, including higher precision of TME estimates and more rigorous experimental control. Consequently, we believe that Self Correction is most likely to accurately represent nutritional value of food items and should be considered the standard method for TME feeding trials. © Dt. Ornithologen-Gesellschaft e.V. 2005.

  8. Using known populations of pronghorn to evaluate sampling plans and estimators

    USGS Publications Warehouse

    Kraft, K.M.; Johnson, D.H.; Samuelson, J.M.; Allen, S.H.

    1995-01-01

    Although sampling plans and estimators of abundance have good theoretical properties, their performance in real situations is rarely assessed because true population sizes are unknown. We evaluated widely used sampling plans and estimators of population size on 3 known clustered distributions of pronghorn (Antilocapra americana). Our criteria were accuracy of the estimate, coverage of 95% confidence intervals, and cost. Sampling plans were combinations of sampling intensities (16, 33, and 50%), sample selection (simple random sampling without replacement, systematic sampling, and probability proportional to size sampling with replacement), and stratification. We paired sampling plans with suitable estimators (simple, ratio, and probability proportional to size). We used area of the sampling unit as the auxiliary variable for the ratio and probability proportional to size estimators. All estimators were nearly unbiased, but precision was generally low (overall mean coefficient of variation [CV] = 29). Coverage of 95% confidence intervals was only 89% because of the highly skewed distribution of the pronghorn counts and small sample sizes, especially with stratification. Stratification combined with accurate estimates of optimal stratum sample sizes increased precision, reducing the mean CV from 33 without stratification to 25 with stratification; costs increased 23%. Precise results (mean CV = 13) but poor confidence interval coverage (83%) were obtained with simple and ratio estimators when the allocation scheme included all sampling units in the stratum containing most pronghorn. Although areas of the sampling units varied, ratio estimators and probability proportional to size sampling did not increase precision, possibly because of the clumped distribution of pronghorn. Managers should be cautious in using sampling plans and estimators to estimate abundance of aggregated populations.
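
    As a point of reference, the ratio estimator used in the comparison expands the sample count-to-area ratio by the total area of the region; a compact sketch on synthetic clustered counts (all figures hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic frame: 100 sampling units with varying areas and clumped counts.
areas = rng.uniform(2.0, 10.0, 100)                     # km^2 per unit
counts = rng.negative_binomial(n=1, p=0.15, size=100)   # highly skewed, like pronghorn

sample = rng.choice(100, size=33, replace=False)        # ~33% SRS without replacement
y, x = counts[sample], areas[sample]

ratio = y.sum() / x.sum()                # animals per km^2 in the sample
total_hat = ratio * areas.sum()          # ratio estimate of population size
print(f"estimate = {total_hat:.0f}, truth = {counts.sum()}")
```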

  9. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    USGS Publications Warehouse

    Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.

    2012-01-01

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed against those risks. The analysis framework presented here will be useful for other species exhibiting heterogeneity by detection method.

  10. Probabilistic seismic loss estimation via endurance time method

    NASA Astrophysics Data System (ADS)

    Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.

    2017-01-01

    Probabilistic seismic loss estimation is a methodology that expresses the performance of buildings quantitatively and explicitly, in terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires using Incremental Dynamic Analysis (IDA), which needs hundreds of time-consuming analyses, which in turn hinders its wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to achieve this purpose and their appropriateness was evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA-driven response predictions of 34 code-conforming benchmark structures and was shown to be sufficiently precise while offering a great deal of efficiency. The loss values were estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 methodology, and it was found that these values suffer from varying inaccuracies, which were attributed to the discretized nature of the damage and loss prediction functions provided by ATC 58.

  11. A new global anthropogenic heat estimation based on high-resolution nighttime light data

    PubMed Central

    Yang, Wangming; Luan, Yibo; Liu, Xiaolei; Yu, Xiaoyong; Miao, Lijuan; Cui, Xuefeng

    2017-01-01

    Consumption of fossil fuel resources leads to global warming and climate change. Apart from the negative impact of greenhouse gases on the climate, the increasing emission of anthropogenic heat from energy consumption also has significant impacts on urban ecosystems and the surface energy balance. The objective of this work is to develop a new method of estimating the global anthropogenic heat budget and validate it on the global scale with a high-precision, high-resolution dataset. A statistical algorithm was applied to estimate the annual mean anthropogenic heat (AH-DMSP) from 1992 to 2010 at 1 × 1 km² spatial resolution for the entire planet. AH-DMSP was validated at both provincial and city scales, and results indicate that our dataset performs well at both scales. Compared with other global anthropogenic heat datasets, the AH-DMSP has higher precision and a finer spatial distribution. Although there are some limitations, the AH-DMSP could provide reliable, multi-scale anthropogenic heat information, which could be used for further research on regional or global climate change and urban ecosystems. PMID:28829436

  12. Use of Longitudinal Data in Genetic Studies in the Genome-wide Association Studies Era: Summary of Group 14

    PubMed Central

    Kerner, Berit; North, Kari E; Fallin, M Daniele

    2010-01-01

    Participants analyzed actual and simulated longitudinal data from the Framingham Heart Study for various metabolic and cardiovascular traits. The genetic information incorporated into these investigations ranged from selected single-nucleotide polymorphisms to genome-wide association arrays. Genotypes were incorporated using a broad range of methodological approaches including conditional logistic regression, linear mixed models, generalized estimating equations, linear growth curve estimation, growth modeling, growth mixture modeling, population attributable risk fraction based on survival functions under the proportional hazards models, and multivariate adaptive splines for the analysis of longitudinal data. The specific scientific questions addressed by these different approaches also varied, ranging from a more precise definition of the phenotype, bias reduction in control selection, estimation of effect sizes and genotype associated risk, to direct incorporation of genetic data into longitudinal modeling approaches and the exploration of population heterogeneity with regard to longitudinal trajectories. The group reached several overall conclusions: 1) The additional information provided by longitudinal data may be useful in genetic analyses. 2) The precision of the phenotype definition as well as control selection in nested designs may be improved, especially if traits demonstrate a trend over time or have strong age-of-onset effects. 3) Analyzing genetic data stratified for high-risk subgroups defined by a unique development over time could be useful for the detection of rare mutations in common multi-factorial diseases. 4) Estimation of the population impact of genomic risk variants could be more precise. The challenges and computational complexity demanded by genome-wide single-nucleotide polymorphism data were also discussed. PMID:19924713

  13. Precision and relative effectiveness of a purse seine for sampling age-0 river herring in lakes

    USGS Publications Warehouse

    Devine, Matthew T.; Roy, Allison; Whiteley, Andrew R.; Gahagan, Benjamin I.; Armstrong, Michael P.; Jordaan, Adrian

    2018-01-01

    Stock assessments for anadromous river herring, collectively Alewife Alosa pseudoharengus and Blueback Herring A. aestivalis, lack adequate demographic information, particularly with respect to early life stages. Although sampling adult river herring is increasingly common throughout their range, currently no standardized, field‐based, analytical methods exist for estimating juvenile abundance in freshwater lakes. The objective of this research was to evaluate the relative effectiveness and sampling precision of a purse seine for estimating densities of age‐0 river herring in freshwater lakes. We used a purse seine to sample age‐0 river herring in June–September 2015 and June–July 2016 in 16 coastal freshwater lakes in the northeastern USA. Sampling effort varied from two seine hauls to more than 50 seine hauls per lake. Catch rates were highest in June and July, and sampling precision was maximized in July. Sampling at night (versus day) in open water (versus littoral areas) was most effective for capturing newly hatched larvae and juveniles up to ca. 100 mm TL. Bootstrap simulation results indicated that sampling precision of CPUE estimates increased with sampling effort, and there was a clear threshold beyond which increased effort resulted in negligible increases in precision. The effort required to produce precise CPUE estimates, as determined by the CV, was dependent on lake size; river herring densities could be estimated with up to 10 purse‐seine hauls (one‐two nights) in a small lake (<50 ha) and 15–20 hauls (two‐three nights) in a large lake (>50 ha). Fish collection techniques using a purse seine as described in this paper are likely to be effective for estimating recruit abundance of river herring in freshwater lakes across their range.
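
    The effort-versus-precision threshold can be explored with the same bootstrap logic the authors describe: resample hauls with replacement at each candidate effort level and track the CV of mean CPUE. The catch distribution below is synthetic, chosen only to be skewed like real haul data.

```python
import numpy as np

rng = np.random.default_rng(8)
hauls = rng.negative_binomial(n=2, p=0.05, size=50)   # synthetic catch per haul

for effort in (5, 10, 15, 20, 30):
    boot_means = [rng.choice(hauls, size=effort, replace=True).mean()
                  for _ in range(2000)]
    cv = 100 * np.std(boot_means) / np.mean(boot_means)
    print(f"{effort:2d} hauls: CV of mean CPUE = {cv:4.1f}%")
# The CV drops steeply at first, then flattens -- the diminishing-returns threshold.
```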

  14. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction

    PubMed Central

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-01-01

    This paper describes a disturbance acceleration adaptive estimation and correction approach for an attitude reference system (ARS) that improves attitude estimation precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, in which the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and the attitude estimation precision is thereby improved. The filter was tested by digital simulation in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration). Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration in each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results explicitly demonstrate that the proposed approach further improves attitude estimation precision under vehicle movement conditions. PMID:27754469
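
    The mode-switching idea reduces to changing the decay coefficient, and hence how quickly the filter forgets the estimated disturbance, of a first-order Gauss-Markov acceleration state. The scalar sketch below illustrates only that switching logic, with an assumed detection rule and a fixed gain standing in for the full Kalman update; it is not the paper's ARS filter.

```python
import numpy as np

dt = 0.01
# First-order Gauss-Markov model: a_{k+1} = exp(-dt/tau) * a_k + noise.
# A short tau (non-acceleration mode) pulls the estimate to zero quickly;
# a long tau tracks a sustained disturbance acceleration.
TAU = {"non-acceleration": 0.1, "sustained-acceleration": 10.0}

def detect_mode(accel_meas, threshold=0.5):
    """Crude mode detection from measured specific force (hypothetical rule)."""
    return "sustained-acceleration" if abs(accel_meas) > threshold else "non-acceleration"

a_est, rng = 0.0, np.random.default_rng(9)
for k in range(1000):
    a_true = 2.0 if 300 <= k < 700 else 0.0         # sustained maneuver in the middle
    meas = a_true + rng.normal(0, 0.2)
    phi = np.exp(-dt / TAU[detect_mode(meas)])      # switched decay coefficient
    # One-dimensional predict/correct with a fixed gain (stand-in for the Kalman step).
    a_est = phi * a_est + 0.05 * (meas - phi * a_est)
print(f"final estimate {a_est:.2f} (true 0.0 after the maneuver ends)")
```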

  15. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction.

    PubMed

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-10-16

    This paper describes a disturbance acceleration adaptive estimation and correction approach for an attitude reference system (ARS) that improves attitude estimation precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, in which the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and the attitude estimation precision is thereby improved. The filter was tested by digital simulation in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration). Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration in each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results explicitly demonstrate that the proposed approach further improves attitude estimation precision under vehicle movement conditions.

  16. An efficient deterministic-probabilistic approach to modeling regional groundwater flow: 1. Theory

    USGS Publications Warehouse

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-01-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which requires only prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.
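
    Rosenblueth's two-point estimate method, on which the model is built, evaluates the deterministic model at mean ± one standard deviation of each uncertain input and weights the 2^m results equally. A sketch comparing it against Monte Carlo on a toy, mildly nonlinear head function (all numbers illustrative):

```python
import numpy as np
from itertools import product

def head(K, S):
    """Toy groundwater response to hydraulic conductivity K and storage coefficient S."""
    return 100.0 / np.sqrt(K) + 5.0 * np.log(1.0 / S)

means, sds = np.array([10.0, 0.01]), np.array([1.0, 0.001])   # small CVs

# Two-point estimate: 2^m model runs at mu +/- sigma, equally weighted.
pts = [head(*(means + np.array(s) * sds)) for s in product((-1, 1), repeat=2)]
tpe_mean, tpe_sd = np.mean(pts), np.std(pts)

# Monte Carlo reference.
rng = np.random.default_rng(10)
mc = head(rng.normal(means[0], sds[0], 100_000), rng.normal(means[1], sds[1], 100_000))
print(f"two-point: mean {tpe_mean:.2f}, sd {tpe_sd:.2f}  ({len(pts)} runs)")
print(f"MC:        mean {mc.mean():.2f}, sd {mc.std():.2f}  (100000 runs)")
```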

  17. An Efficient Deterministic-Probabilistic Approach to Modeling Regional Groundwater Flow: 1. Theory

    NASA Astrophysics Data System (ADS)

    Yen, Chung-Cheng; Guymon, Gary L.

    1990-07-01

    An efficient probabilistic model is developed and cascaded with a deterministic model for predicting water table elevations in regional aquifers. The objective is to quantify model uncertainty where precise estimates of water table elevations may be required. The probabilistic model is based on the two-point probability method, which requires only prior knowledge of the uncertain variables' means and coefficients of variation. The two-point estimate method is theoretically developed and compared with the Monte Carlo simulation method. The results of comparisons using hypothetical deterministic problems indicate that the two-point estimate method is generally valid only for linear problems where the coefficients of variation of uncertain parameters (for example, storage coefficient and hydraulic conductivity) are small. The two-point estimate method may be applied to slightly nonlinear problems with good results, provided coefficients of variation are small. In such cases, the two-point estimate method is much more efficient than the Monte Carlo method, provided the number of uncertain variables is less than eight.

  18. Aerial Surveys Give New Estimates for Orangutans in Sabah, Malaysia

    PubMed Central

    Gimenez, Olivier; Ambu, Laurentius; Ancrenaz, Karine; Andau, Patrick; Goossens, Benoît; Payne, John; Sawang, Azri; Tuuga, Augustine; Lackman-Ancrenaz, Isabelle

    2005-01-01

    Great apes are threatened with extinction, but precise information about the distribution and size of most populations is currently lacking. We conducted orangutan nest counts in the Malaysian state of Sabah (North Borneo), using a combination of ground and helicopter surveys, and provided a way to estimate the current distribution and size of the populations living throughout the entire state. We show that the number of nests detected during aerial surveys is directly related to the estimated true animal density and that a helicopter is an efficient tool to provide robust estimates of orangutan numbers. Our results reveal that with a total estimated population size of about 11,000 individuals, Sabah is one of the main strongholds for orangutans in North Borneo. More than 60% of orangutans living in the state occur outside protected areas, in production forests that have been through several rounds of logging extraction and are still exploited for timber. The role of exploited forests clearly merits further investigation for orangutan conservation in Sabah. PMID:15630475

  19. Development of Precise Lunar Orbit Propagator and Lunar Polar Orbiter's Lifetime Analysis

    NASA Astrophysics Data System (ADS)

    Song, Young-Joo; Park, Sang-Young; Kim, Hae-Dong; Sim, Eun-Sup

    2010-06-01

    To prepare for a Korean lunar orbiter mission, a precise lunar orbit propagator, the Yonsei precise lunar orbit propagator (YSPLOP), is developed. In the propagator, accelerations due to the Moon's non-spherical gravity, the point masses of the Earth, Moon, Sun, Mars and Jupiter, and also solar radiation pressure can be included. The developed propagator's performance is validated, and propagation errors between YSPLOP and STK/Astrogator are found to reach a maximum of about 4 m in the along-track direction during 30 days (Earth time) of propagation. Also, it is found that the lifetime of a lunar polar orbiter is strongly affected by the different degrees and orders of the lunar gravity model, by a third body's gravitational attraction (especially the Earth's), and by the different orbital inclinations. The reliable lifetime of a circular lunar polar orbiter at about 100 km altitude is estimated to be about 160 days (Earth time). However, to estimate a reasonable lifetime for a circular lunar polar orbiter at about 100 km altitude, it is strongly recommended to consider at least 50 × 50 degrees and orders of the lunar gravity field. The results provided in this paper are expected to support further progress in the design of Korea's lunar orbiter missions.

  20. A Comparison of Energy Expenditure Estimation of Several Physical Activity Monitors

    PubMed Central

    Dannecker, Kathryn L.; Sazonova, Nadezhda A.; Melanson, Edward L.; Sazonov, Edward S.; Browning, Raymond C.

    2013-01-01

    Accurately and precisely estimating free-living energy expenditure (EE) is important for monitoring energy balance and quantifying physical activity. Recently, single- and multi-sensor devices have been developed that can classify physical activities, potentially resulting in improved estimates of EE. PURPOSE: To determine the validity of EE estimation of a footwear-based physical activity monitor and to compare this validity against a variety of research and consumer physical activity monitors. METHODS: Nineteen healthy young adults (10 male, 9 female) completed a four-hour stay in a room calorimeter. Participants wore a footwear-based physical activity monitor, as well as Actical, Actigraph, IDEEA, DirectLife and Fitbit devices. Each individual performed a series of postures/activities. We developed models to estimate EE from the footwear-based device, and we used the manufacturer's software to estimate EE for all other devices. RESULTS: Estimated EE using the shoe-based device was not significantly different from measured EE (476 (20) vs. 478 (18) kcal (mean (SE)), respectively) and had a root mean square error (RMSE) of 29.6 kcal (6.2%). The IDEEA and DirectLife estimates of EE were not significantly different from the measured EE, but the Actigraph and Fitbit devices significantly underestimated EE. Root mean square errors were 93.5 kcal (19%), 62.1 kcal (14%), 88.2 kcal (18%), 136.6 kcal (27%), 130.1 kcal (26%), and 143.2 kcal (28%) for Actical, DirectLife, IDEEA, Actigraph and Fitbit, respectively. CONCLUSIONS: The shoe-based physical activity monitor provides a valid estimate of EE, while the other physical activity monitors tested have a wide range of validity when estimating EE. Our results also demonstrate that estimating EE based on classification of physical activities can be more accurate and precise than estimating EE based on total physical activity. PMID:23669877

  1. Sonographic estimation of fetal weight: comparison of bias, precision and consistency using 12 different formulae.

    PubMed

    Anderson, N G; Jolley, I J; Wells, J E

    2007-08-01

    To determine the major sources of error in ultrasonographic assessment of fetal weight and whether they have changed over the last decade. We performed a prospective observational study in 1991 and again in 2000 of a mixed-risk pregnancy population, estimating fetal weight within 7 days of delivery. In 1991, the Rose and McCallum formula was used for 72 deliveries. Inter- and intraobserver agreement was assessed within this group. Bland-Altman measures of agreement from log data were calculated as ratios. We repeated the study in 2000 in 208 consecutive deliveries, comparing predicted and actual weights for 12 published equations using Bland-Altman and percentage error methods. We compared bias (mean percentage error), precision (SD percentage error), and their consistency across the weight ranges. The 95% limits of agreement ranged from -4.4% to +3.3% for inter- and intraobserver estimates, but were -18.0% to +24.0% for estimated versus actual birth weight. There was no improvement in accuracy between 1991 and 2000. In 2000, only six of the 12 published formulae had overall bias within 7% and precision within 15%. There was greater bias and poorer precision in nearly all equations if the birth weight was <1,000 g. Observer error is a relatively minor component of the error in estimating fetal weight; error due to the equation is a larger source of error. Improvements in ultrasound technology have not improved the accuracy of estimating fetal weight. Comparison of methods of estimating fetal weight requires statistical methods that can separate out bias, precision and consistency. Estimating fetal weight in the very low birth weight infant is subject to much greater error than it is in larger babies. Copyright (c) 2007 ISUOG. Published by John Wiley & Sons, Ltd.
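
    The "ratios from log data" device works as follows: compute Bland-Altman limits on the log-transformed weights, then exponentiate so the limits read as multiplicative agreement; subtracting 1 gives the percentage form quoted above. A sketch on synthetic estimated/actual weight pairs:

```python
import numpy as np

rng = np.random.default_rng(11)
actual = rng.normal(3400, 500, 200)                    # birth weights, g (synthetic)
estimated = actual * rng.lognormal(0.0, 0.10, 200)     # ~10% multiplicative error

d = np.log(estimated) - np.log(actual)                 # differences on the log scale
mean_d, sd_d = d.mean(), d.std(ddof=1)
lo, hi = np.exp(mean_d - 1.96 * sd_d), np.exp(mean_d + 1.96 * sd_d)
print(f"bias ratio {np.exp(mean_d):.3f}; 95% limits of agreement {lo:.2f} to {hi:.2f}")
# As percentages: (ratio - 1) * 100, matching the -18% to +24% style above.
```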

  2. Applying a particle filtering technique for canola crop growth stage estimation in Canada

    NASA Astrophysics Data System (ADS)

    Sinha, Abhijit; Tan, Weikai; Li, Yifeng; McNairn, Heather; Jiao, Xianfeng; Hosseini, Mehdi

    2017-10-01

    Accurate crop growth stage estimation is important in precision agriculture as it facilitates improved crop management, pest and disease mitigation and resource planning. Earth observation imagery, specifically Synthetic Aperture Radar (SAR) data, can provide field level growth estimates while covering regional scales. In this paper, RADARSAT-2 quad polarization and TerraSAR-X dual polarization SAR data and ground truth growth stage data are used to model the influence of canola growth stages on SAR imagery extracted parameters. The details of the growth stage modeling work are provided, including a) the development of a new crop growth stage indicator that is continuous and suitable as the state variable in the dynamic estimation procedure; b) a selection procedure for SAR polarimetric parameters that is sensitive to both linear and nonlinear dependency between variables; and c) procedures for compensation of SAR polarimetric parameters for different beam modes. The data was collected over three crop growth seasons in Manitoba, Canada, and the growth model provides the foundation of a novel dynamic filtering framework for real-time estimation of canola growth stages using the multi-sensor and multi-mode SAR data. A description of the dynamic filtering framework that uses particle filter as the estimator is also provided in this paper.

  3. On the convergence of ionospheric constrained precise point positioning (IC-PPP) based on undifferential uncombined raw GNSS observations.

    PubMed

    Zhang, Hongping; Gao, Zhouzheng; Ge, Maorong; Niu, Xiaoji; Huang, Ling; Tu, Rui; Li, Xingxing

    2013-11-18

    Precise Point Positioning (PPP) has become a very hot topic in GNSS research and applications. However, it usually takes several tens of minutes to obtain positions with better than 10 cm accuracy, which prevents PPP from being widely used in real-time kinematic positioning services; therefore, a large effort has been made to tackle the convergence problem. One of the recent approaches is ionospheric delay constrained precise point positioning (IC-PPP), which uses the spatial and temporal characteristics of ionospheric delays and also delays from an a priori model. In this paper, the impact of the quality of ionospheric models on the convergence of IC-PPP is evaluated using the IGS global ionospheric map (GIM), updated every two hours, and a regional satellite-specific correction model. Furthermore, the effect of the receiver differential code bias (DCB) is investigated by comparing the convergence time for IC-PPP with and without estimation of the DCB parameter. From the results of processing a large amount of data, on the one hand, the quality of the a priori ionospheric delays plays a very important role in IC-PPP convergence. Generally, regional dense GNSS networks can provide more precise ionospheric delays than GIM and can consequently reduce the convergence time. On the other hand, ignoring the receiver DCB may considerably extend the convergence, and the larger the DCB, the longer the convergence time. Estimating the receiver DCB in IC-PPP is a proper way to overcome this problem. Therefore, current IC-PPP should be enhanced by estimating the receiver DCB and employing regional satellite-specific ionospheric correction models in order to speed up its convergence for more practical applications.

  4. On the Convergence of Ionospheric Constrained Precise Point Positioning (IC-PPP) Based on Undifferential Uncombined Raw GNSS Observations

    PubMed Central

    Zhang, Hongping; Gao, Zhouzheng; Ge, Maorong; Niu, Xiaoji; Huang, Ling; Tu, Rui; Li, Xingxing

    2013-01-01

    Precise Point Positioning (PPP) has become a very hot topic in GNSS research and applications. However, it usually takes several tens of minutes to obtain positions with better than 10 cm accuracy, which prevents PPP from being widely used in real-time kinematic positioning services; therefore, a large effort has been made to tackle the convergence problem. One of the recent approaches is ionospheric delay constrained precise point positioning (IC-PPP), which uses the spatial and temporal characteristics of ionospheric delays and also delays from an a priori model. In this paper, the impact of the quality of ionospheric models on the convergence of IC-PPP is evaluated using the IGS global ionospheric map (GIM), updated every two hours, and a regional satellite-specific correction model. Furthermore, the effect of the receiver differential code bias (DCB) is investigated by comparing the convergence time for IC-PPP with and without estimation of the DCB parameter. From the results of processing a large amount of data, on the one hand, the quality of the a priori ionospheric delays plays a very important role in IC-PPP convergence. Generally, regional dense GNSS networks can provide more precise ionospheric delays than GIM and can consequently reduce the convergence time. On the other hand, ignoring the receiver DCB may considerably extend the convergence, and the larger the DCB, the longer the convergence time. Estimating the receiver DCB in IC-PPP is a proper way to overcome this problem. Therefore, current IC-PPP should be enhanced by estimating the receiver DCB and employing regional satellite-specific ionospheric correction models in order to speed up its convergence for more practical applications. PMID:24253190

  5. ESTIMATING SAMPLE REQUIREMENTS FOR FIELD EVALUATIONS OF PESTICIDE LEACHING

    EPA Science Inventory

    A method is presented for estimating the number of samples needed to evaluate pesticide leaching threats to ground water at a desired level of precision. Sample size projections are based on desired precision (exhibited as relative tolerable error), level of confidence (90 or 95%...

  6. ANALYTICAL METHOD COMPARISONS BY ESTIMATES OF PRECISION AND LOWER DETECTION LIMIT

    EPA Science Inventory

    The paper describes the use of principal component analysis to estimate the operating precision of several different analytical instruments or methods simultaneously measuring a common sample of a material whose actual value is unknown. This approach is advantageous when none of ...

  7. Seismic displacements monitoring for 2015 Mw 7.8 Nepal earthquake with GNSS data

    NASA Astrophysics Data System (ADS)

    Geng, T.; Su, X.; Xie, X.

    2017-12-01

    The high-rate Global Navigation Satellite System (GNSS) has been recognized as one of the powerful tools for monitoring ground motions generated by seismic events. The high-rate GPS and BDS data collected during the 2015 Mw 7.8 Nepal earthquake have been analyzed using two methods: the variometric approach and precise point positioning (PPP). The variometric approach is based on a time-differencing technique that uses only GNSS broadcast products to estimate velocity time series from tracking observations in real time, followed by an integration procedure on the velocities to derive the displacements induced by the seismic event. PPP is a positioning method that calculates precise positions at centimeter- or even millimeter-level accuracy with a single GNSS receiver using precise satellite orbit and clock products. Displacements with an accuracy of 2 cm at far-field stations and 5 cm at near-field stations, where ground motions are large and static offsets reach 1-2 m, could be achieved. Multi-GNSS (GPS + BDS) processing provides higher-accuracy displacements owing to the increased number of satellites and the improved Position Dilution of Precision (PDOP) values. Considering the time consumed by clock estimation and the precision of the PPP solutions, a 5 s GNSS satellite clock interval is suggested. In addition, the GNSS-derived displacements are in good agreement with those from strong-motion data. These studies demonstrate the feasibility of capturing seismic waves in real time with multi-GNSS observations, which is of great promise for earthquake early warning and rapid hazard assessment.

  8. Estimating fluvial wood discharge from timelapse photography with varying sampling intervals

    NASA Astrophysics Data System (ADS)

    Anderson, N. K.

    2013-12-01

    There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been employed monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse-interval sampling and examines differences in estimation between sampling at 1-, 5-, 10- and 15-minute intervals. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1-minute dataset, precision decreased by 23%, 46% and 60% for the 5-, 10- and 15-minute datasets, respectively. Five- and 10-minute sampling intervals provided unbiased, equal-variance estimates of 1-minute sampling, whereas 15-minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m³ for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. [Figure caption: comparison of proportions and variance across sample intervals, using bootstrap sampling to achieve equal n (each trial resampled at n = 100, 10,000 times, and averaged); dashed lines represent values from the 1-minute dataset.]
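
    A minimal sketch of the bootstrap comparison described in the caption above, assuming each sampling interval is represented by a vector of per-observation wood counts; the resampling sizes follow the caption, while the data themselves are simulated placeholders rather than Slave River observations.

      import numpy as np

      rng = np.random.default_rng(42)

      def bootstrap_mean_sd(sample, n=100, trials=10_000):
          """Bootstrap the mean at fixed n so intervals are compared at equal n."""
          means = [rng.choice(sample, size=n, replace=True).mean()
                   for _ in range(trials)]
          return np.mean(means), np.std(means)

      # placeholder per-interval observation vectors (logs per observation)
      data = {1: rng.poisson(4.0, 5000), 5: rng.poisson(4.0, 1000),
              15: rng.poisson(3.8, 330)}
      for interval, obs in data.items():
          m, sd = bootstrap_mean_sd(obs)
          print(f"{interval:>2} min: mean={m:.2f}, bootstrap SD={sd:.3f}")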

  9. Weighted linear least squares estimation of diffusion MRI parameters: strengths, limitations, and pitfalls.

    PubMed

    Veraart, Jelle; Sijbers, Jan; Sunaert, Stefan; Leemans, Alexander; Jeurissen, Ben

    2013-11-01

    Linear least squares estimators are widely used in diffusion MRI for the estimation of diffusion parameters. Although adding proper weights is necessary to increase the precision of these linear estimators, there is no consensus on how to practically define them. In this study, the impact of the commonly used weighting strategies on the accuracy and precision of linear diffusion parameter estimators is evaluated and compared with the nonlinear least squares estimation approach. Simulation and real data experiments were done to study the performance of the weighted linear least squares estimators with weights defined by (a) the squares of the respective noisy diffusion-weighted signals; and (b) the squares of the predicted signals, which are reconstructed from a previous estimate of the diffusion model parameters. The negative effect of weighting strategy (a) on the accuracy of the estimator was surprisingly high. Multi-step weighting strategies yield better performance and, in some cases, even outperformed the nonlinear least squares estimator. If proper weighting strategies are applied, the weighted linear least squares approach shows high performance characteristics in terms of accuracy/precision and may even be preferred over nonlinear estimation methods. Copyright © 2013 Elsevier Inc. All rights reserved.
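
    A minimal sketch of weighting strategy (b) for a mono-exponential diffusion model ln S = ln S0 - b·ADC, assuming the multi-step scheme in which weights are rebuilt from the predicted signals of a previous fit; variable names and b-values are illustrative, not the study's protocol.

      import numpy as np

      def wlls_adc(bvals, signals, n_iter=2):
          """Weighted linear LS fit of ln(S) = ln(S0) - b*ADC."""
          X = np.column_stack([np.ones_like(bvals), -bvals])
          y = np.log(signals)
          beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS starting fit
          for _ in range(n_iter):
              w = np.exp(X @ beta) ** 2                    # squared predicted signals
              W = np.diag(w)
              beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
          return np.exp(beta[0]), beta[1]                  # S0, ADC

      bvals = np.array([0.0, 500.0, 1000.0, 1500.0])       # s/mm^2
      s0, adc = wlls_adc(bvals, 100.0 * np.exp(-bvals * 0.8e-3))
      print(s0, adc)                                       # -> ~100, ~0.0008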

  10. The research of radar target tracking observed information linear filter method

    NASA Astrophysics Data System (ADS)

    Chen, Zheng; Zhao, Xuanzhi; Zhang, Wen

    2018-05-01

    To address the low precision, and even filter divergence, caused by the nonlinear observation equation in radar target tracking, a new filtering algorithm is proposed in this paper. The algorithm locally linearizes the observed range and angle data separately, and a Kalman filter is then applied to the linearized data. A mapping operation on the filtered data provides the a posteriori estimate of the target state. Extensive simulation results show that this algorithm solves the above problems effectively and outperforms traditional filtering algorithms for nonlinear dynamic systems.
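
    A minimal sketch in the spirit of the approach above, assuming a constant-velocity model: range and bearing are each smoothed by their own linear Kalman filter, and the filtered pair is then mapped to a Cartesian state. The matrices, noise levels, and simulated track are illustrative, not the paper's.

      import numpy as np

      def kf_1d(z, dt, q=1e-4, r=1.0):
          """Constant-velocity Kalman filter on a scalar measurement series."""
          F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
          H = np.array([[1.0, 0.0]])               # we observe position only
          Q = q * np.eye(2)
          R = np.array([[r]])
          x = np.array([z[0], 0.0])
          P = np.eye(2)
          filtered = []
          for zk in z:
              x = F @ x                             # predict
              P = F @ P @ F.T + Q
              S = H @ P @ H.T + R                   # innovation covariance
              K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
              x = x + K @ (np.array([zk]) - H @ x)  # update
              P = (np.eye(2) - K @ H) @ P
              filtered.append(x[0])
          return np.array(filtered)

      # simulated noisy polar observations of a moving target
      t = np.arange(0.0, 30.0, 0.1)
      rng = np.random.default_rng(1)
      z_range = 1000.0 + 20.0 * t + rng.normal(0, 5.0, t.size)   # m
      z_angle = 0.5 + 0.01 * t + rng.normal(0, 0.002, t.size)    # rad

      r_f = kf_1d(z_range, dt=0.1, r=25.0)     # filter each channel separately
      a_f = kf_1d(z_angle, dt=0.1, r=4e-6)
      x, y = r_f * np.cos(a_f), r_f * np.sin(a_f)   # map to Cartesian position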

  11. Using Satellite Galaxies to Weigh the Milky Way

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2018-06-01

    Weighing galaxies is a tricky business, especially when that galaxy is our own! In a recent study, scientists have tackled this problem by harnessing incredibly precise measurements of the motions of Milky-Way satellites. [Image caption: locations of some of the 50 satellite galaxies known around the Milky Way. AndrewRT] Our spot in the middle of our galaxy's disk makes it difficult for us to assess the total mass of gas, dust, stars, and dark matter surrounding us; estimates for the Milky Way's mass span from 700 billion to 2 trillion solar masses! Pinning down this number is critical for better understanding the structure and dynamics of our local universe. So what's the key to precisely weighing the Milky Way? A new study led by Ekta Patel (University of Arizona), presented at the American Astronomical Society meeting two weeks ago, suggests it may be the barely perceptible motions of the small satellite galaxies that orbit around the Milky Way. Around 50 Milky-Way satellites are currently known, and simulations suggest that there may be up to 100-200 in total. By watching the motions of these satellites, we can trace the potential of their host, the Milky Way, and estimate its mass. [Image caption: the Illustris-Dark simulation evolves our universe to the present day, providing a view of how dark matter organizes itself into galaxy halos over time. Illustris Collaboration] In this era of precision astronomy, remarkable measurements are becoming possible. In their study, Patel and collaborators use years of proper-motion observations from the Hubble Space Telescope for nine satellite galaxies of the Milky Way. The precision needed for measurements like these is insane: watching these satellites move is roughly like watching a human hair grow at the distance of the Moon. Rather than using the instantaneous position and velocity measured for a satellite, which change over time during the satellite's orbit, Patel and collaborators demonstrate that the satellite's specific angular momentum is a more useful parameter when attempting to estimate its host galaxy's mass. For each of the nine individual satellite galaxies, the authors compare its measured momentum to that of 90,000 simulated satellite galaxies from the Illustris-Dark cosmological simulation. This matching is used to build a probability distribution for the mass of the host galaxy most likely to be orbited by such a satellite. The probability distributions for the nine satellite galaxies are then combined to find the best overall estimate for the Milky Way's mass. [Figure caption: top, the most likely Milky-Way mass estimated from each of the nine satellite galaxies, using the instantaneous positions and velocities (left) and the angular momenta (right) of the satellites; the momentum method shows less scatter in the host masses. Bottom, probability distributions for the most likely Milky-Way mass for each satellite (colored curves) and combined (grey curve). Patel et al. 2018] Using this technique, Patel and collaborators find a mass of 0.96 trillion solar masses for the Milky Way. The error bars for their measurement are around 30%, and while this is more confined than the broad range of past estimates, it's not yet extremely precise. The beauty of Patel and collaborators' method, however, is that it is both extendable and generalizable. The authors only had access to precise proper motions for nine satellite galaxies when they conducted their study, but since then the Gaia mission has provided measurements for 30 satellites, with more expected in the future. Including these additional satellites and using improved, higher-resolution cosmological simulations for comparison will continue to increase the precision of Patel and collaborators' estimate in the future. In addition, this approach can also be used to weigh our neighboring Andromeda galaxy, or any other galaxy for which we're able to get precise proper-motion measurements for its satellites. Keep an eye out in the future, as techniques like this continue to reveal more properties of our local universe. Citation: Ekta Patel et al 2018 ApJ 857 78. doi:10.3847/1538-4357/aab78f

  12. Effects of lidar pulse density and sample size on a model-assisted approach to estimate forest inventory variables

    Treesearch

    Jacob Strunk; Hailemariam Temesgen; Hans-Erik Andersen; James P. Flewelling; Lisa Madsen

    2012-01-01

    Using lidar in an area-based model-assisted approach to forest inventory has the potential to increase estimation precision for some forest inventory variables. This study documents the bias and precision of a model-assisted (regression estimation) approach to forest inventory with lidar-derived auxiliary variables relative to lidar pulse density and the number of...

  13. Error estimation in multitemporal InSAR deformation time series, with application to Lanzarote, Canary Islands

    NASA Astrophysics Data System (ADS)

    GonzáLez, Pablo J.; FernáNdez, José

    2011-10-01

    Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application to geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still atmospheric propagation errors, which is why multitemporal interferometric techniques, built on series of interferograms, have been successfully developed. However, none of the standard multitemporal interferometric techniques, namely PS and SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of its precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). The method uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in the last decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight, i.e., subsidence), where the last eruption occurred in 1730-1736. Deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
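
    A minimal sketch of the error-propagation idea above, assuming a small-baseline network in which interferogram phases are linear in the incremental displacements (design matrix G), with a Monte Carlo loop that resamples interferogram noise from its covariance to yield empirical errors on the time-series parameters. The network geometry and covariance are toy placeholders, not the Lanzarote dataset.

      import numpy as np

      rng = np.random.default_rng(6)
      # toy SB network: 4 acquisitions, 5 interferograms, 3 incremental steps
      G = np.array([[1, 0, 0],        # ifg 1-2
                    [0, 1, 0],        # ifg 2-3
                    [0, 0, 1],        # ifg 3-4
                    [1, 1, 0],        # ifg 1-3
                    [0, 1, 1]])       # ifg 2-4
      truth = np.array([2.0, -1.0, 0.5])                  # mm of incremental motion
      Cov = 0.25 * (np.eye(5) + 0.3 * np.ones((5, 5)))    # correlated ifg errors

      L = np.linalg.cholesky(Cov)
      W = np.linalg.inv(Cov)
      A = np.linalg.inv(G.T @ W @ G) @ G.T @ W            # weighted LS operator

      obs0 = G @ truth
      sims = []
      for _ in range(5000):                               # Monte Carlo resampling
          obs = obs0 + L @ rng.normal(size=5)
          sims.append(A @ obs)
      sims = np.array(sims)
      print("estimated displacement errors (mm):", sims.std(axis=0))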

  14. Orbiting passive microwave sensor simulation applied to soil moisture estimation

    NASA Technical Reports Server (NTRS)

    Newton, R. W. (Principal Investigator); Clark, B. V.; Pitchford, W. M.; Paris, J. F.

    1979-01-01

    A sensor/scene simulation program was developed and used to determine the effects of scene heterogeneity, resolution, frequency, look angle, and surface and temperature relations on the performance of a spaceborne passive microwave system designed to estimate soil water information. The ground scene is based on classified LANDSAT images, which provide realistic ground classes as well as geometries. It was determined that the average sensitivity of antenna temperature to soil moisture improves as the antenna footprint size increases. Also, the precision (or variability) of the sensitivity changes as a function of resolution.

  15. Evaluation of a segment-based LANDSAT full-frame approach to crop area estimation

    NASA Technical Reports Server (NTRS)

    Bauer, M. E. (Principal Investigator); Hixson, M. M.; Davis, S. M.

    1981-01-01

    As the registration of LANDSAT full frames enters the realm of current technology, sampling methods should be examined that utilize data other than the segment data used for LACIE. The effect of separating the functions of sampling for training and sampling for area estimation was examined. The frame selected for analysis was acquired over north central Iowa on August 9, 1978. A stratification of the full frame was defined. Training data came from segments within the frame. Two classification and estimation procedures were compared: statistics developed on one segment were used to classify that segment, and pooled statistics from the segments were used to classify a systematic sample of pixels. Comparisons to USDA/ESCS estimates illustrate that the full-frame sampling approach can provide accurate and precise area estimates.

  16. Field design factors affecting the precision of ryegrass forage yield estimation

    USDA-ARS?s Scientific Manuscript database

    Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision and accuracy of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to ...

  17. Hypothesis testing for band size detection of high-dimensional banded precision matrices.

    PubMed

    An, Baiguo; Guo, Jianhua; Liu, Yufeng

    2014-06-01

    Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, cross-validation is commonly used; however, we show that cross-validation not only is computationally intensive but can be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.
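
    A minimal sketch of the Cholesky-based banded precision estimator the abstract refers to, assuming the modified-Cholesky parameterization in which each variable is regressed on its k preceding variables; the band size k is the quantity the paper's test is designed to select. Data and names are illustrative.

      import numpy as np

      def banded_precision(X, k):
          """Modified-Cholesky estimate of a precision matrix with band size k."""
          n, p = X.shape
          Xc = X - X.mean(axis=0)
          A = np.zeros((p, p))                  # lower-triangular coefficients
          d = np.empty(p)
          d[0] = Xc[:, 0].var()
          for j in range(1, p):
              lo = max(0, j - k)
              Z = Xc[:, lo:j]                   # k (or fewer) predecessors
              coef = np.linalg.lstsq(Z, Xc[:, j], rcond=None)[0]
              A[j, lo:j] = coef
              d[j] = (Xc[:, j] - Z @ coef).var()
          T = np.eye(p) - A
          return T.T @ np.diag(1.0 / d) @ T     # Omega = T' D^{-1} T

      # AR(1)-like data has a banded precision matrix with k = 1
      rng = np.random.default_rng(0)
      x = np.empty((500, 10)); x[:, 0] = rng.normal(size=500)
      for j in range(1, 10):
          x[:, j] = 0.6 * x[:, j - 1] + rng.normal(size=500)
      print(np.round(banded_precision(x, k=1), 2))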

  18. Estimation of lamb carcass composition from measurements of the speed of ultrasound in the soft tissues of live animals and carcasses.

    PubMed

    Miles, C A; Fursey, G A; Fisher, A V; Page, S J

    1991-01-01

    The application of the velocity of sound (VOS) technique to lamb carcasses in a previous study (Fisher & Page, 1986) measured composition at a hind limb and neck site but was not as precise as fat scores in predicting lean proportion. This study examines VOS measurements made at sites in the hind limbs and along the vertebral column in live sheep and carcasses. A group (A) comprising five breeds of males and females (n = 61) and a sub-group (B) of Scottish Blackface castrated males (n = 34) were studied, and the reciprocal velocity of ultrasound (RV) was measured on the live sheep immediately behind the shoulder and over the last rib using a fixed-distance transducer assembly operating at 5 MHz, and in the hind limbs at 2.25 MHz using the apparatus described by Miles et al. (1984). Corresponding measurements were made on the carcasses, which were classified by a Meat and Livestock Commission fatstock officer and then dissected. Standard deviations of lean proportion were 4.48% (A) and 3.39% (B). The residual standard deviations (rsds) for groups A and B respectively were obtained using the following predictors: live mass (LM) 4.27% and 2.96%; LM + breed (B) 3.08% (group A only); LM + mean RV 2.36% and 2.07%; LM + B + mean RV 1.99% (group A only). Using carcass measurements: carcass mass (CM) 4.25% and 2.90%; CM + MLC fat score 2.99% and 2.12%; CM + RV last rib 2.89% and 2.02%. These data show that VOS measurements through dorsal sites of live sheep provide encouragingly precise estimates of carcass lean proportion. For carcasses, dorsal sites provide a less precise estimate of carcass lean, but when carcass mass is included in multiple regression the precision is comparable with that of multiple regression of fat score with carcass mass. Copyright © 1991. Published by Elsevier Ltd.

  19. Age, temperature and pressure of metamorphism in the Tasriwine Ophiolite Complex, Sirwa, Morocco

    NASA Astrophysics Data System (ADS)

    Samson, S. D.; Inglis, J.; Hefferan, K. P.; Admou, H.; Saquaque, A.

    2013-12-01

    Sm-Nd garnet-whole rock geochronology and phase equilibria modeling have been used to determine the age and conditions of regional metamorphism within the Tasriwine ophiolite complex, Sirwa, Morocco. Pressure and temperature estimates obtained using a NaCaKFMASHT phase diagram (pseudosection) and garnet core and rim compositions predict that garnet growth began at ~0.72 GPa and ~615°C and ended at ~0.8 GPa and ~640°C. A bulk garnet Sm-Nd age of 645.6 ± 1.6 Ma, calculated from a four-point isochron that combines whole rock, garnet full dissolution and two successively more aggressive partial dissolutions, provides a precise date for garnet formation and regional metamorphism. The age is nearly 20 million years younger than a previous age estimate of regional metamorphism of 663 ± 14 Ma based upon a SHRIMP U-Pb date from rims on zircon from the Irri migmatite. The new data provide further constraints on the age and nature of regional metamorphism in the Anti-Atlas mountains and emphasize that garnet growth during regional metamorphism may not necessarily coincide with the magmatism/anatexis that dominates the signature recorded by previous U-Pb studies. The ability to couple P-T estimates for garnet formation with high-precision Sm-Nd geochronology highlights the utility of garnet studies for uncovering the detailed metamorphic history of the Anti-Atlas mountain belt.

  20. Beyond total treatment effects in randomised controlled trials: Baseline measurement of intermediate outcomes needed to reduce confounding in mediation investigations.

    PubMed

    Landau, Sabine; Emsley, Richard; Dunn, Graham

    2018-06-01

    Random allocation avoids confounding bias when estimating the average treatment effect. For continuous outcomes measured at post-treatment as well as prior to randomisation (baseline), analyses based on (A) post-treatment outcome alone, (B) change scores over the treatment phase or (C) conditioning on baseline values (analysis of covariance) provide unbiased estimators of the average treatment effect. The decision to include baseline values of the clinical outcome in the analysis is based on precision arguments, with analysis of covariance known to be most precise. Investigators increasingly carry out explanatory analyses to decompose total treatment effects into components that are mediated by an intermediate continuous outcome and a non-mediated part. Traditional mediation analysis might be performed based on (A) post-treatment values of the intermediate and clinical outcomes alone, (B) respective change scores or (C) conditioning on baseline measures of both intermediate and clinical outcomes. Using causal diagrams and Monte Carlo simulation, we investigated the performance of the three competing mediation approaches. We considered a data generating model that included three possible confounding processes involving baseline variables: The first two processes modelled baseline measures of the clinical variable or the intermediate variable as common causes of post-treatment measures of these two variables. The third process allowed the two baseline variables themselves to be correlated due to past common causes. We compared the analysis models implied by the competing mediation approaches with this data generating model to hypothesise likely biases in estimators, and tested these in a simulation study. We applied the methods to a randomised trial of pragmatic rehabilitation in patients with chronic fatigue syndrome, which examined the role of limiting activities as a mediator. Estimates of causal mediation effects derived by approach (A) will be biased if one of the three processes involving baseline measures of intermediate or clinical outcomes is operating. Necessary assumptions for the change score approach (B) to provide unbiased estimates under either process include the independence of baseline measures and change scores of the intermediate variable. Finally, estimates provided by the analysis of covariance approach (C) were found to be unbiased under all the three processes considered here. When applied to the example, there was evidence of mediation under all methods but the estimate of the indirect effect depended on the approach used with the proportion mediated varying from 57% to 86%. Trialists planning mediation analyses should measure baseline values of putative mediators as well as of continuous clinical outcomes. An analysis of covariance approach is recommended to avoid potential biases due to confounding processes involving baseline measures of intermediate or clinical outcomes, and not simply for increased precision.
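
    A minimal simulation sketch of the precision argument above, assuming a simple data-generating model in which baseline and post-treatment outcomes are correlated; it compares the empirical standard errors of the three estimators (post-only, change score, ANCOVA). The effect size and correlation are illustrative, not the trial's values.

      import numpy as np

      rng = np.random.default_rng(0)
      n, reps, effect, rho = 200, 2000, 0.5, 0.6
      est = {"post": [], "change": [], "ancova": []}
      for _ in range(reps):
          t = rng.integers(0, 2, n)                       # randomised arm
          base = rng.normal(0, 1, n)
          post = rho * base + effect * t + rng.normal(0, np.sqrt(1 - rho**2), n)
          est["post"].append(post[t == 1].mean() - post[t == 0].mean())
          ch = post - base
          est["change"].append(ch[t == 1].mean() - ch[t == 0].mean())
          # ANCOVA: regress post-treatment outcome on arm and baseline
          X = np.column_stack([np.ones(n), t, base])
          beta = np.linalg.lstsq(X, post, rcond=None)[0]
          est["ancova"].append(beta[1])
      for k, v in est.items():                            # ANCOVA has smallest SD
          print(f"{k:>7}: mean={np.mean(v):.3f}, SD={np.std(v):.3f}")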

  1. Spatial variability effects on precision and power of forage yield estimation

    USDA-ARS?s Scientific Manuscript database

    Spatial analyses of yield trials are important, as they adjust cultivar means for spatial variation and improve the statistical precision of yield estimation. While the relative efficiency of spatial analysis has been frequently reported in several yield trials, its application on long-term forage y...

  2. Constraining the atmosphere of GJ 1214b using an optimal estimation technique

    NASA Astrophysics Data System (ADS)

    Barstow, J. K.; Aigrain, S.; Irwin, P. G. J.; Fletcher, L. N.; Lee, J.-M.

    2013-09-01

    We explore cloudy, extended H2-He atmosphere scenarios for the warm super-Earth GJ 1214b using an optimal estimation retrieval technique. This planet, orbiting an M4.5 star only 13 pc from the Earth, is of particular interest because it lies between the Earth and Neptune in size and may be a member of a new class of planet that is neither terrestrial nor gas giant. Its relatively flat transmission spectrum has so far made atmospheric characterization difficult. The Non-linear optimal Estimator for MultivariatE spectral analySIS (NEMESIS) algorithm is used to explore the degenerate model parameter space for a cloudy, H2-He-dominated atmosphere scenario. Optimal estimation is a data-led approach that allows solutions beyond the range permitted by ab initio equilibrium model atmosphere calculations, and as such prevents restriction from prior expectations. We show that optimal estimation retrieval is a powerful tool for this kind of study, and present an exploration of the degenerate atmospheric scenarios for GJ 1214b. Whilst we find a family of solutions that provide a very good fit to the data, the quality and coverage of these data are insufficient for us to more precisely determine the abundances of cloud and trace gases given an H2-He atmosphere, and we also cannot rule out the possibility of a high molecular weight atmosphere. Future ground- and space-based observations will provide the opportunity to confirm or rule out an extended H2-He atmosphere, but more precise constraints will be limited by intrinsic degeneracies in the retrieval problem, such as variations in cloud top pressure and temperature.

  3. Improving Aquatic Warbler Population Assessments by Accounting for Imperfect Detection

    PubMed Central

    Oppel, Steffen; Marczakiewicz, Piotr; Lachmann, Lars; Grzywaczewski, Grzegorz

    2014-01-01

    Monitoring programs designed to assess changes in population size over time need to account for imperfect detection and provide estimates of precision around annual abundance estimates. Especially for species dependent on conservation management, robust monitoring is essential to evaluate the effectiveness of management. Many bird species of temperate grasslands depend on specific conservation management to maintain suitable breeding habitat. One such species is the Aquatic Warbler (Acrocephalus paludicola), which breeds in open fen mires in Central Europe. Aquatic Warbler populations have so far been assessed using a complete survey that aims to enumerate all singing males over a large area. Because this approach provides no estimate of precision and does not account for observation error, detecting moderate population changes is challenging. From 2011 to 2013 we trialled a new line transect sampling monitoring design in the Biebrza valley, Poland, to estimate abundance of singing male Aquatic Warblers. We surveyed Aquatic Warblers repeatedly along 50 randomly placed 1-km transects, and used binomial mixture models to estimate abundances per transect. The repeated line transect sampling required 150 observer days, and thus less effort than the traditional ‘full count’ approach (175 observer days). Aquatic Warbler abundance was highest at intermediate water levels, and detection probability varied between years and was influenced by vegetation height. A power analysis indicated that our line transect sampling design had a power of 68% to detect a 20% population change over 10 years, whereas raw count data had a 9% power to detect the same trend. Thus, by accounting for imperfect detection we increased the power to detect population changes. We recommend adopting the repeated line transect sampling approach for monitoring Aquatic Warblers in Poland and in other important breeding areas to monitor changes in population size and the effects of habitat management. PMID:24713994
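
    A minimal sketch of the binomial mixture (N-mixture) likelihood used above, assuming constant abundance λ and detection probability p across transects and a truncated Poisson sum over possible true abundances; both parameters are fit to repeated counts by maximum likelihood. All data here are simulated placeholders, not the Biebrza counts.

      import numpy as np
      from scipy.stats import poisson, binom
      from scipy.optimize import minimize

      def nmix_nll(params, counts, K=100):
          """Negative log-likelihood; counts is sites x repeat visits."""
          lam = np.exp(params[0])
          p = 1.0 / (1.0 + np.exp(-params[1]))
          N = np.arange(K + 1)                    # candidate true abundances
          prior = poisson.pmf(N, lam)
          ll = 0.0
          for y in counts:                        # marginalize N per site
              like_N = prior * np.prod(binom.pmf(y[:, None], N, p), axis=0)
              ll += np.log(like_N.sum())
          return -ll

      rng = np.random.default_rng(1)
      N_true = rng.poisson(5.0, 50)                          # 50 transects
      counts = rng.binomial(N_true[:, None], 0.4, (50, 3))   # 3 visits each
      fit = minimize(nmix_nll, x0=[np.log(4.0), 0.0], args=(counts,))
      lam_hat = np.exp(fit.x[0])
      p_hat = 1.0 / (1.0 + np.exp(-fit.x[1]))
      print(lam_hat, p_hat)                                  # -> ~5 and ~0.4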

  4. Precision cosmology from future lensed gravitational wave and electromagnetic signals.

    PubMed

    Liao, Kai; Fan, Xi-Long; Ding, Xuheng; Biesiada, Marek; Zhu, Zong-Hong

    2017-10-27

    The standard siren approach of gravitational wave cosmology appeals to the direct luminosity distance estimation through the waveform signals from inspiralling double compact binaries, especially those with electromagnetic counterparts providing redshifts. It is limited by the calibration uncertainties in strain amplitude and relies on the fine details of the waveform. The Einstein Telescope is expected to produce 10^4-10^5 gravitational wave detections per year, 50-100 of which will be lensed. Here, we report a waveform-independent strategy to achieve precise cosmography by combining the accurately measured time delays from strongly lensed gravitational wave signals with the images and redshifts observed in the electromagnetic domain. We demonstrate that just 10 such systems can provide a Hubble constant uncertainty of 0.68% for a flat lambda cold dark matter universe in the era of third-generation ground-based detectors.

  5. Accuracy and precision of estimating age of gray wolves by tooth wear

    USGS Publications Warehouse

    Gipson, P.S.; Ballard, W.B.; Nowak, R.M.; Mech, L.D.

    2000-01-01

    We evaluated the accuracy and precision of tooth wear for aging gray wolves (Canis lupus) from Alaska, Minnesota, and Ontario based on 47 known-age or known-minimum-age skulls. Estimates of age using tooth wear and a commercial cementum annuli-aging service were useful for wolves up to 14 years old. The precision of estimates from cementum annuli was greater than that of estimates from tooth wear, but tooth wear estimates are more applicable in the field. We tended to overestimate age by 1-2 years and occasionally by 3 or 4 years. The commercial service aged young wolves with cementum annuli to within ±1 year of actual age, but underestimated ages of wolves ≥9 years old by 1-3 years. No differences were detected in tooth wear patterns for wild wolves from Alaska, Minnesota, and Ontario, nor between captive and wild wolves. Tooth wear was not appropriate for aging wolves with an underbite that prevented normal wear or with severely broken and missing teeth.

  6. Estimation of suspended-sediment rating curves and mean suspended-sediment loads

    USGS Publications Warehouse

    Crawford, Charles G.

    1991-01-01

    A simulation study was done to evaluate: (1) the accuracy and precision of parameter estimates for the bias-corrected, transformed-linear and non-linear models obtained by the method of least squares; (2) the accuracy of mean suspended-sediment loads calculated by the flow-duration, rating-curve method using model parameters obtained by the alternative methods. Parameter estimates obtained by least squares for the bias-corrected, transformed-linear model were considerably more precise than those obtained for the non-linear or weighted non-linear model. The accuracy of parameter estimates obtained for the bias-corrected, transformed-linear and weighted non-linear model was similar and was much greater than the accuracy obtained by non-linear least squares. The improved parameter estimates obtained by the bias-corrected, transformed-linear or weighted non-linear model yield estimates of mean suspended-sediment load calculated by the flow-duration, rating-curve method that are more accurate and precise than those obtained for the non-linear model.
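
    A minimal sketch of a bias-corrected, transformed-linear rating curve of the kind evaluated above, assuming the classic log-log regression with Duan's smearing estimator for the retransformation bias; the simulated discharges, concentrations, and coefficients are illustrative.

      import numpy as np

      rng = np.random.default_rng(3)
      Q = rng.lognormal(2.0, 0.8, 300)                             # discharge
      C = np.exp(0.5 + 1.2 * np.log(Q) + rng.normal(0, 0.4, 300))  # sediment conc.

      X = np.column_stack([np.ones_like(Q), np.log(Q)])
      beta, *_ = np.linalg.lstsq(X, np.log(C), rcond=None)
      resid = np.log(C) - X @ beta
      smear = np.mean(np.exp(resid))        # Duan's bias-correction factor

      def predict_conc(q):
          """Bias-corrected concentration estimate from the rating curve."""
          return smear * np.exp(beta[0] + beta[1] * np.log(q))

      print(predict_conc(50.0))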

  7. Evaluating the accuracy of sampling to estimate central line-days: simplification of the National Healthcare Safety Network surveillance methods.

    PubMed

    Thompson, Nicola D; Edwards, Jonathan R; Bamberg, Wendy; Beldavs, Zintars G; Dumyati, Ghinwa; Godine, Deborah; Maloney, Meghan; Kainer, Marion; Ray, Susan; Thompson, Deborah; Wilson, Lucy; Magill, Shelley S

    2013-03-01

    To evaluate the accuracy of weekly sampling of central line-associated bloodstream infection (CLABSI) denominator data to estimate central line-days (CLDs). Obtained CLABSI denominator logs showing daily counts of patient-days and CLD for 6-12 consecutive months from participants and CLABSI numerators and facility and location characteristics from the National Healthcare Safety Network (NHSN). Convenience sample of 119 inpatient locations in 63 acute care facilities within 9 states participating in the Emerging Infections Program. Actual CLD and estimated CLD obtained from sampling denominator data on all single-day and 2-day (day-pair) samples were compared by assessing the distributions of the CLD percentage error. Facility and location characteristics associated with increased precision of estimated CLD were assessed. The impact of using estimated CLD to calculate CLABSI rates was evaluated by measuring the change in CLABSI decile ranking. The distribution of CLD percentage error varied by the day and number of days sampled. On average, day-pair samples provided more accurate estimates than did single-day samples. For several day-pair samples, approximately 90% of locations had CLD percentage error of less than or equal to ±5%. A lower number of CLD per month was most significantly associated with poor precision in estimated CLD. Most locations experienced no change in CLABSI decile ranking, and no location's CLABSI ranking changed by more than 2 deciles. Sampling to obtain estimated CLD is a valid alternative to daily data collection for a large proportion of locations. Development of a sampling guideline for NHSN users is underway.

  8. Novel Equations for Estimating Lean Body Mass in Patients With Chronic Kidney Disease.

    PubMed

    Tian, Xue; Chen, Yuan; Yang, Zhi-Kai; Qu, Zhen; Dong, Jie

    2018-05-01

    Simplified methods to estimate lean body mass (LBM), an important nutritional measure representing muscle mass and somatic protein, are lacking in nondialyzed patients with chronic kidney disease (CKD). We developed and tested 2 equations for estimation of LBM in daily clinical practice. The development and validation groups both included 150 nondialyzed patients with CKD Stages 3 to 5. Two equations for estimating LBM, based on mid-arm muscle circumference (MAMC) or handgrip strength (HGS) and also incorporating sex, height, and weight, were developed and validated in CKD patients with dual-energy x-ray absorptiometry as the reference gold-standard method. The new equations were found to exhibit only small biases when compared with dual-energy x-ray absorptiometry, with median differences of 0.94 and 0.46 kg observed for the HGS and MAMC equations, respectively. Good precision and accuracy were achieved for both equations, as reflected by small interquartile ranges in the differences and by the percentages of estimates falling within 20% of measured LBM. The bias, precision, and accuracy of each equation were found to be similar when applied to groups of patients divided by the median measured LBM, the median ratio of extracellular to total body water, and the stages of CKD. LBM estimated from MAMC or HGS was found to provide accurate estimates of LBM in nondialyzed patients with CKD. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  9. Estimating time of HIV-1 infection from next-generation sequence diversity

    PubMed Central

    2017-01-01

    Estimating the time since infection (TI) in newly diagnosed HIV-1 patients is challenging, but important to understand the epidemiology of the infection. Here we explore the utility of virus diversity estimated by next-generation sequencing (NGS) as a novel biomarker by using a recent genome-wide longitudinal dataset obtained from 11 untreated HIV-1-infected patients with known dates of infection. The results were validated on a second dataset from 31 patients. Virus diversity increased linearly with time, particularly at 3rd codon positions, with little inter-patient variation. The precision of the TI estimate improved with increasing sequencing depth, showing that diversity in NGS data yields superior estimates to the number of ambiguous sites in Sanger sequences, which is one of the alternative biomarkers. The full advantage of deep NGS was realized with continuous diversity measures such as average pairwise distance or site entropy, rather than the fraction of polymorphic sites. The precision depended on the genomic region and codon position and was highest when 3rd codon positions in the entire pol gene were used. For these data, TI estimates had a mean absolute error of around 1 year. The error increased only slightly, from around 0.6 years at a TI of 6 months to around 1.1 years at 6 years. Our results show that virus diversity determined by NGS can be used to estimate time since HIV-1 infection many years after the infection, in contrast to most alternative biomarkers. We provide the regression coefficients as well as a web tool for TI estimation. PMID:28968389
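
    A minimal sketch of the diversity-based dating idea above, assuming per-site alternative-allele frequencies from NGS pileups: average pairwise distance is computed per sample, a linear diversity-versus-time model is fit on training patients, and its inversion dates a new sample. The coefficients and data are simulated placeholders, not the published regression.

      import numpy as np

      def avg_pairwise_distance(freqs):
          """Mean pairwise diversity from per-site alt-allele frequencies."""
          # probability that two random reads differ at a site: 2f(1-f)
          return np.mean(2 * freqs * (1 - freqs))

      rng = np.random.default_rng(7)
      years = rng.uniform(0.5, 8.0, 40)            # known times since infection
      diversity = 0.001 + 0.0009 * years + rng.normal(0, 2e-4, 40)

      coef = np.polyfit(years, diversity, 1)       # [slope, intercept]
      slope, intercept = coef[0], coef[1]

      def estimate_TI(freqs):
          """Invert the diversity-vs-time regression for a new sample."""
          return (avg_pairwise_distance(freqs) - intercept) / slope

      new_sample = rng.uniform(0.0, 0.005, 1000)   # placeholder frequencies
      print(f"estimated TI: {estimate_TI(new_sample):.1f} years")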

  10. Experimental Estimation of Entanglement at the Quantum Limit

    NASA Astrophysics Data System (ADS)

    Brida, Giorgio; Degiovanni, Ivo Pietro; Florio, Angela; Genovese, Marco; Giorda, Paolo; Meda, Alice; Paris, Matteo G. A.; Shurupov, Alexander

    2010-03-01

    Entanglement is the central resource of quantum information processing, and the precise characterization of entangled states is a crucial issue for the development of quantum technologies. This leads to the necessity of a precise, experimentally feasible measure of entanglement. Nevertheless, such measurements are limited both by experimental uncertainties and by intrinsic quantum bounds. Here we present an experiment where the amount of entanglement of a family of two-qubit mixed photon states is estimated with the ultimate precision allowed by quantum mechanics.

  11. Analysis of HY2A precise orbit determination using DORIS

    NASA Astrophysics Data System (ADS)

    Gao, Fan; Peng, Bibo; Zhang, Yu; Evariste, Ngatchou Heutchi; Liu, Jihua; Wang, Xiaohui; Zhong, Min; Lin, Mingsen; Wang, Nazi; Chen, Runjing; Xu, Houze

    2015-03-01

    HY2A is the first Chinese marine dynamic environment satellite. Its payloads include a radar altimeter to measure the sea surface height, in combination with a high-precision orbit to be determined from tracking data. Onboard satellite tracking includes GPS, SLR, and the DORIS DGXX receiver, which delivers phase and pseudo-range measurements. CNES releases raw phase and pseudo-range measurements in RINEX DORIS 3.0 format and pre-processed Doppler range-rate data in DORIS 2.2 format. However, the VMSI software package developed by Van Martin Systems, Inc., which is used to estimate HY2A DORIS orbits, can only process Doppler range-rate data, not the DORIS phase data, which are available with much shorter latency. We propose a method of constructing phase increment data, which are similar to range-rate data, from RINEX DORIS 3.0 phase data. We compute the HY2A orbits from June to August 2013 using the POD strategy described in this paper, based on DORIS 2.2 range-rate data and our reconstructed phase increment data. The estimated orbits are evaluated by comparison with the CNES precise orbits and by SLR residuals. Our DORIS-only orbits agree with the precise GPS + SLR + DORIS CNES orbits at the 1-cm level radially and at about 3 cm in the other two directions. An SLR test with a 50° cutoff elevation shows that the CNES orbit achieves about 1.1-cm accuracy in the radial direction, and our DORIS-only POD solutions are slightly worse. In addition, other HY2A DORIS POD concerns are discussed in this paper. Firstly, we discuss the frequency offset values provided with the RINEX data and find that orbit accuracy is worse when the frequency offset is applied than when it is not. Secondly, HY2A DORIS antenna z-offsets are estimated using the two kinds of measurements from June to August 2013. The results show that the measurement errors contribute a total of about 2 cm of difference in the estimated z-offset. Finally, we estimate HY2A orbits for 3 selected days with severe geomagnetic storm activity, and SLR residuals suggest that estimating a drag coefficient every 6 h without any constraint is sufficient for maintaining orbit accuracy.

  12. Improving regression-model-based streamwater constituent load estimates derived from serially correlated data

    USGS Publications Warehouse

    Aulenbach, Brent T.

    2013-01-01

    A regression-model-based approach is a commonly used, efficient method for estimating streamwater constituent load when there is a relationship between streamwater constituent concentration and continuous variables such as streamwater discharge, season, and time. A subsetting experiment using a 30-year dataset of daily suspended sediment observations from the Mississippi River at Thebes, Illinois, was performed to determine optimal sampling frequency, model calibration period length, and regression model methodology, as well as to determine the effect of serial correlation of model residuals on load estimate precision. Two regression-based methods were used to estimate streamwater loads: the Adjusted Maximum Likelihood Estimator (AMLE) and the composite method, a hybrid load estimation approach. While both methods accurately and precisely estimated loads at the model's calibration period time scale, precision was progressively worse at shorter reporting periods, from annual to monthly. Serial correlation in model residuals caused observed AMLE precision to be significantly worse than the model-calculated standard errors of prediction. The composite method effectively improved upon AMLE loads for shorter reporting periods, but required a sampling interval of 15 days or shorter when the serial correlations in the observed load residuals were greater than 0.15. AMLE precision was better at shorter sampling intervals and when using the shortest model calibration periods, such that the regression models better fit the temporal changes in the concentration-discharge relationship. The models with the largest errors typically had poor high-flow sampling coverage, resulting in unrepresentative models. Increasing sampling frequency and/or targeted high-flow sampling are more efficient approaches to ensure sufficient sampling and to avoid poorly performing models than increasing calibration period length.

  13. Bayesian analysis of U.S. hurricane climate

    USGS Publications Warehouse

    Elsner, James B.; Bossak, Brian H.

    2001-01-01

    Predictive climate distributions of U.S. landfalling hurricanes are estimated from observational records over the period 1851–2000. The approach is Bayesian, combining the reliable records of hurricane activity during the twentieth century with the less precise accounts of activity during the nineteenth century to produce a best estimate of the posterior distribution on the annual rates. The methodology provides a predictive distribution of future activity that serves as a climatological benchmark. Results are presented for the entire coast as well as for the Gulf Coast, Florida, and the East Coast. Statistics on the observed annual counts of U.S. hurricanes, both for the entire coast and by region, are similar within each of the three consecutive 50-yr periods beginning in 1851. However, evidence indicates that the records during the nineteenth century are less precise. Bayesian theory provides a rational approach for defining hurricane climate that uses all available information and that makes no assumption about whether the 150-yr record of hurricanes has been adequately or uniformly monitored. The analysis shows that the number of major hurricanes expected to reach the U.S. coast over the next 30 yr is 18 and the number of hurricanes expected to hit Florida is 20.
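
    A minimal sketch of the Bayesian updating idea above, assuming a conjugate Gamma prior on the annual hurricane rate: the less precise 19th-century counts form the prior, the reliable 20th-century counts update it, and the posterior predictive (a negative binomial) describes future activity. The counts are placeholders, not the paper's data.

      import numpy as np
      from scipy.stats import gamma, nbinom

      # Gamma prior on the annual rate from the imprecise 19th-century record
      counts_1800s, years_1800s = 160, 50          # placeholder totals
      a0, b0 = counts_1800s, years_1800s           # rate ~ Gamma(a0, rate=b0)

      # conjugate update with the reliable 20th-century counts
      counts_1900s, years_1900s = 170, 100
      a1, b1 = a0 + counts_1900s, b0 + years_1900s

      post_mean = a1 / b1
      lo, hi = gamma.ppf([0.05, 0.95], a1, scale=1.0 / b1)
      print(f"annual rate: {post_mean:.2f} (90% CI {lo:.2f}-{hi:.2f})")

      # posterior predictive for next year's count is negative binomial
      p = b1 / (b1 + 1.0)
      print("P(>=3 hurricanes next year):", 1 - nbinom.cdf(2, a1, p))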

  14. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    PubMed

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
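
    A minimal sketch of fitting a cumulative-Gaussian psychometric function by maximum likelihood, the kind of fit the study above analyzes; binary responses and stimulus levels are simulated placeholders, and the bias-reduction step itself is not reproduced, only the basic fit it modifies.

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize

      def nll(params, x, resp):
          """Negative log-likelihood of a cumulative-Gaussian psychometric fn."""
          mu, log_sigma = params
          p = norm.cdf((x - mu) / np.exp(log_sigma))
          p = np.clip(p, 1e-9, 1 - 1e-9)           # guard the log
          return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

      rng = np.random.default_rng(5)
      x = rng.uniform(-4, 4, 300)                  # stimulus levels (a.u.)
      resp = (rng.random(300) < norm.cdf((x - 0.5) / 1.2)).astype(float)
      fit = minimize(nll, x0=[0.0, 0.0], args=(x, resp))
      mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
      print(mu_hat, sigma_hat)                     # -> ~0.5 and ~1.2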

  15. Comparison of rainbow smelt age estimates from fin rays and otoliths

    USGS Publications Warehouse

    Walsh, M.G.; Maloy, A.P.; O'Brien, T. P.

    2008-01-01

    Rainbow smelt Osmerus mordax, although nonnative, are an important component of the offshore food web in the Laurentian Great Lakes. In Lake Ontario, we estimate ages of rainbow smelt annually to study population dynamics such as year-class strength and age-specific growth and mortality. Since the early 1980s, we have used pectoral fin rays to estimate rainbow smelt ages, but the sectioning and mounting of fin rays are time- and labor-intensive. Our objective was to assess the feasibility of using otoliths rather than fin rays to estimate rainbow smelt ages. Three readers interpreted the ages of 172 rainbow smelt (60-198 mm total length) based on thin sections of pectoral fin rays, whole otoliths with no preparation, and whole otoliths that had been cleared for 1 month in a 70:30 ethanol:glycerin solution. Bias was lower and precision was greater for fin rays than for otoliths; these results were consistent for comparisons within readers (first and second readings by one individual; three readers were used) and between readers (one reading for each reader within a pair). Both otolith methods appeared to misclassify age-1 rainbow smelt. Fin ray ages had the highest precision and provided the best approximation of age estimates inferred from the Lake Ontario population's length frequency distribution and from our understanding of this population. © American Fisheries Society 2008.

  16. Improved protocol and data analysis for accelerated shelf-life estimation of solid dosage forms.

    PubMed

    Waterman, Kenneth C; Carella, Anthony J; Gumkowski, Michael J; Lukulay, Patrick; MacDonald, Bruce C; Roy, Michael C; Shamblin, Sheri L

    2007-04-01

    To propose and test a new accelerated aging protocol for solid-state, small molecule pharmaceuticals which provides faster predictions for drug substance and drug product shelf-life. The concept of an isoconversion paradigm, where times in different temperature and humidity-controlled stability chambers are set to provide a critical degradant level, is introduced for solid-state pharmaceuticals. Reliable estimates for temperature and relative humidity effects are handled using a humidity-corrected Arrhenius equation, where temperature and relative humidity are assumed to be orthogonal. Imprecision is incorporated into a Monte-Carlo simulation to propagate the variations inherent in the experiment. In early development phases, greater imprecision in predictions is tolerated to allow faster screening with reduced sampling. Early development data are then used to design appropriate test conditions for more reliable later stability estimations. Examples are reported showing that predicted shelf-life values for lower temperatures and different relative humidities are consistent with the measured shelf-life values at those conditions. The new protocols and analyses provide accurate and precise shelf-life estimations in a reduced time from current state of the art.
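
    A minimal sketch of the humidity-corrected Arrhenius model referenced above, ln k = ln A - Ea/(RT) + B·RH, used to extrapolate an isoconversion time from accelerated conditions to ambient storage; the parameter values and conditions are illustrative, not fitted to real stability data.

      import numpy as np

      R = 8.314  # J/(mol*K)

      def rate_constant(T_kelvin, rh, lnA, Ea, B):
          """Humidity-corrected Arrhenius: ln k = ln A - Ea/(R T) + B*RH."""
          return np.exp(lnA - Ea / (R * T_kelvin) + B * rh)

      # illustrative parameters (lnA dimensionless, Ea in J/mol, B per %RH)
      lnA, Ea, B = 29.0, 100e3, 0.04

      # isoconversion: time to reach the critical degradant level scales as 1/k
      t_accel_days = 14.0                              # observed at 50 C / 75 %RH
      k_accel = rate_constant(323.15, 75, lnA, Ea, B)
      k_ambient = rate_constant(298.15, 60, lnA, Ea, B)
      shelf_life_days = t_accel_days * k_accel / k_ambient
      print(f"predicted ambient shelf life: {shelf_life_days / 365:.1f} years")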

  17. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates

    PubMed Central

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a ‘control’ and a ‘treatment’ survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28-9.28 leopards/100 km²) were considerably higher than estimates from spatially explicit methods (3.40-3.65 leopards/100 km²). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  18. Precision matrix expansion - efficient use of numerical simulations in estimating errors on cosmological parameters

    NASA Astrophysics Data System (ADS)

    Friedrich, Oliver; Eifler, Tim

    2018-01-01

    Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading-order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A + B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case, finding a similar performance.
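
    A minimal sketch of the expansion idea above for C = A + B with A known analytically: writing C⁻¹ = A⁻¹ - A⁻¹BA⁻¹ + A⁻¹BA⁻¹BA⁻¹ - ..., only the (noisy, simulation-estimated) B enters the correction terms. The matrices here are small random placeholders rather than lensing covariances.

      import numpy as np

      def precision_expansion(A, B_hat, order=2):
          """Approximate (A + B)^{-1} by a Neumann-type series around A^{-1}."""
          Ainv = np.linalg.inv(A)
          term = Ainv
          total = Ainv.copy()
          for _ in range(order):
              term = -term @ B_hat @ Ainv       # next term of the expansion
              total += term
          return total

      rng = np.random.default_rng(2)
      n = 20
      A = np.diag(rng.uniform(1.0, 2.0, n))     # analytic part (e.g. shape noise)
      M = 0.05 * rng.normal(size=(n, n))
      B = M @ M.T                               # small correlated contribution
      approx = precision_expansion(A, B, order=2)
      exact = np.linalg.inv(A + B)
      print(np.max(np.abs(approx - exact)))     # small when ||A^{-1}B|| << 1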

  19. Effect of Correlated Precision Errors on Uncertainty of a Subsonic Venturi Calibration

    NASA Technical Reports Server (NTRS)

    Hudson, S. T.; Bordelon, W. J., Jr.; Coleman, H. W.

    1996-01-01

    An uncertainty analysis performed in conjunction with the calibration of a subsonic venturi for use in a turbine test facility produced some unanticipated results that may have a significant impact in a variety of test situations. Precision uncertainty estimates using the preferred propagation techniques in the applicable American National Standards Institute/American Society of Mechanical Engineers standards were an order of magnitude larger than precision uncertainty estimates calculated directly from a sample of results (discharge coefficient) obtained at the same experimental set point. The differences were attributable to the effect of correlated precision errors, which previously have been considered negligible. An analysis explaining this phenomenon is presented. The article is not meant to document the venturi calibration, but rather to give a real example of results where correlated precision terms are important. The significance of the correlated precision terms could apply to many test situations.

  20. On the estimation of brain signal entropy from sparse neuroimaging data

    PubMed Central

    Grandy, Thomas H.; Garrett, Douglas D.; Schmiedek, Florian; Werkle-Bergner, Markus

    2016-01-01

    Multi-scale entropy (MSE) has recently been established as a promising tool for the analysis of the moment-to-moment variability of neural signals. Appealingly, MSE provides a measure of the predictability of neural operations across the multiple time scales on which the brain operates. An important limitation in the application of MSE to some classes of neural signals is MSE's apparent reliance on long time series. However, this sparse-data limitation in MSE computation could potentially be overcome via MSE estimation across shorter time series that are not necessarily acquired continuously (e.g., in fMRI block-designs). In the present study, using simulated, EEG, and fMRI data, we examined the dependence of the accuracy and precision of MSE estimates on the number of data points per segment and the total number of data segments. As hypothesized, MSE estimation across discontinuous segments was comparably accurate and precise, regardless of segment length. A key advance of our approach is that it allows the calculation of MSE scales not previously accessible from the native segment lengths. Consequently, our results may permit a far broader range of applications of MSE when gauging moment-to-moment dynamics in sparse and/or discontinuous neurophysiological data typical of many modern cognitive neuroscience study designs. PMID:27020961

  1. Use of generalized linear models and digital data in a forest inventory of Northern Utah

    USGS Publications Warehouse

    Moisen, Gretchen G.; Edwards, Thomas C.

    1999-01-01

    Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.

  2. Evaluation of Wavelet Denoising Methods for Small-Scale Joint Roughness Estimation Using Terrestrial Laser Scanning

    NASA Astrophysics Data System (ADS)

    Bitenc, M.; Kieffer, D. S.; Khoshelham, K.

    2015-08-01

    The precision of Terrestrial Laser Scanning (TLS) data depends mainly on the inherent random range error, which hinders extraction of small details from TLS measurements. New post-processing algorithms have been developed that reduce or eliminate the noise and therefore enable modelling of details at a smaller scale than one would traditionally expect. The aim of this research is to find the optimum denoising method such that the corrected TLS data provide a reliable estimation of small-scale rock joint roughness. Two wavelet-based denoising methods are considered, namely Discrete Wavelet Transform (DWT) and Stationary Wavelet Transform (SWT), in combination with different thresholding procedures. The question is which technique provides more accurate roughness estimates, considering (i) the wavelet transform (SWT or DWT), (ii) the thresholding method (fixed-form or penalised low) and (iii) the thresholding mode (soft or hard). The performance of the denoising methods is tested by two analyses, namely method noise and method sensitivity to noise. The reference data are precise Advanced TOpometric Sensor (ATOS) measurements obtained on a 20 × 30 cm rock joint sample, which, for the second analysis, are corrupted by different levels of noise. With such controlled noise-level experiments it is possible to evaluate the methods' performance for different amounts of noise, which might be present in TLS data. Qualitative visual checks of denoised surfaces and quantitative parameters such as grid height and roughness are considered in a comparative analysis of the denoising methods. Results indicate that the preferred method for realistic roughness estimation is DWT with penalised-low hard thresholding.
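
    A minimal sketch of DWT denoising with thresholding in the spirit of the comparison above, assuming the PyWavelets package; it uses the universal (fixed-form) threshold sqrt(2 ln N)·σ with σ estimated from the finest-scale coefficients, and soft thresholding rather than the paper's preferred penalised-low hard variant. The profile is synthetic, not TLS data.

      import numpy as np
      import pywt

      rng = np.random.default_rng(4)
      clean = np.sin(np.linspace(0, 8 * np.pi, 1024)) * np.linspace(0, 2, 1024)
      noisy = clean + rng.normal(0, 0.3, clean.size)     # range-noise stand-in

      coeffs = pywt.wavedec(noisy, "db4", level=5)
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise from finest scale
      thr = sigma * np.sqrt(2 * np.log(noisy.size))      # fixed-form threshold
      denoised_coeffs = [coeffs[0]] + [
          pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]
      ]
      denoised = pywt.waverec(denoised_coeffs, "db4")[: noisy.size]

      print("RMS error noisy:   ", np.sqrt(np.mean((noisy - clean) ** 2)))
      print("RMS error denoised:", np.sqrt(np.mean((denoised - clean) ** 2)))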

  3. Multiparameter Estimation in Networked Quantum Sensors

    NASA Astrophysics Data System (ADS)

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    2018-02-01

    We introduce a general model for a network of quantum sensors, and we use this model to consider the following question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or nonlinear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  4. Quantitative CT: technique dependence of volume estimation on pulmonary nodules

    NASA Astrophysics Data System (ADS)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Colsher, James; Amurao, Maxwell; Samei, Ehsan

    2012-03-01

    Current estimation of lung nodule size typically relies on uni- or bi-dimensional techniques. While new three-dimensional volume estimation techniques using MDCT have improved size estimation of nodules with irregular shapes, the effect of acquisition and reconstruction parameters on accuracy (bias) and precision (variance) of the new techniques has not been fully investigated. To characterize the volume estimation performance dependence on these parameters, an anthropomorphic chest phantom containing synthetic nodules was scanned and reconstructed with protocols across various acquisition and reconstruction parameters. Nodule volumes were estimated by a clinical lung analysis software package, LungVCAR. Precision and accuracy of the volume assessment were calculated across the nodules and compared between protocols via a generalized estimating equation analysis. Results showed that the precision and accuracy of nodule volume quantifications were dependent on slice thickness, with different dependences for different nodule characteristics. Other parameters including kVp, pitch, and reconstruction kernel had lower impact. Determining these technique dependences enables better volume quantification via protocol optimization and highlights the importance of consistent imaging parameters in sequential examinations.
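
    The bias/variance bookkeeping behind such a comparison can be made concrete with a short sketch; the repeated volume measurements below are invented, and the study's generalized estimating equation analysis is not reproduced.

```python
import numpy as np

def bias_and_precision(estimates, true_volume):
    """Percent bias (accuracy) and percent SD (precision) of repeated
    volume estimates of one nodule under one protocol."""
    err = 100.0 * (np.asarray(estimates, float) - true_volume) / true_volume
    return err.mean(), err.std(ddof=1)

# Hypothetical repeated measurements (mm^3) of a 500 mm^3 nodule
# under two slice thicknesses.
thin = [508, 496, 503, 511, 499]
thick = [540, 470, 525, 455, 515]
for name, vals in [("thin slices", thin), ("thick slices", thick)]:
    b, p = bias_and_precision(vals, 500.0)
    print(f"{name}: bias = {b:+.1f}%, precision (SD) = {p:.1f}%")
```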

  5. TerraSAR-X precise orbit determination with real-time GPS ephemerides

    NASA Astrophysics Data System (ADS)

    Wermuth, Martin; Hauschild, Andre; Montenbruck, Oliver; Kahle, Ralph

    TerraSAR-X is a German Synthetic Aperture Radar (SAR) satellite, which was launched in June 2007 from Baikonur. Its task is to acquire radar images of the Earth's surface. In order to locate the radar data takes precisely, the satellite is equipped with a high-quality dual-frequency GPS receiver, the Integrated Geodetic and Occultation Receiver (IGOR), provided by the GeoForschungsZentrum Potsdam (GFZ). Using GPS observations from the IGOR instrument in a reduced dynamic precise orbit determination (POD), the German Space Operations Center (DLR/GSOC) is computing rapid and science orbit products on a routine basis. The rapid orbit products arrive with a latency of about one hour after data reception with an accuracy of 10-20 cm. Science orbit products are computed with a latency of five days, achieving an accuracy of about 5 cm (3D-RMS). For active and future Earth observation missions, the availability of near real-time precise orbit information is becoming more and more important. Other applications of near real-time orbit products include the processing of GNSS radio occultation measurements for atmospheric sounding as well as altimeter measurements of ocean surface heights, which are nowadays employed in global weather and ocean circulation models with short latencies. For example, after natural disasters it is necessary to evaluate the damage by satellite images as soon as possible. The latency and quality of POD results is mainly driven by the availability of precise GPS ephemerides. In order to have high-quality GPS ephemerides available in real time, GSOC has developed the real-time clock estimation system RETICLE. The system receives NTRIP data streams with GNSS observations from the global tracking network of the IGS in real time. Using the known station positions, RETICLE estimates precise GPS satellite clock offsets and drifts based on the most recent available IGU predicted orbits. The clock offset estimates have an accuracy of better than 0.3 ns and are globally valid. The latency of the estimated clocks is approximately 7 seconds. Another limiting factor is the frequency of satellite downlinks and the latency of the data transfer from the ground station to the computation center. Therefore a near real-time scenario is examined in which the satellite has approximately one ground-station contact per orbit, i.e., one contact every 90 minutes. The results of the near real-time POD are evaluated in an internal consistency check and compared against the science orbit solution and laser ranging observations.
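
    RETICLE itself is a filtering system running over a global tracking network, but the two states it maintains per satellite, clock offset and drift, can be illustrated with a toy least-squares fit to hypothetical epoch-wise clock values.

```python
import numpy as np

# Hypothetical epoch-wise clock estimates (seconds) for one GPS satellite.
rng = np.random.default_rng(0)
t = np.arange(0.0, 300.0, 30.0)                      # epochs (s)
clk = 4.2e-5 + 2.0e-10 * t + rng.normal(0, 1e-10, t.size)

# Least-squares fit of a clock offset and a linear drift (no filtering).
A = np.column_stack([np.ones_like(t), t])
(offset, drift), *_ = np.linalg.lstsq(A, clk, rcond=None)
print(f"offset = {offset:.3e} s, drift = {drift:.3e} s/s")
```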

  6. Detecting declines in the abundance of a bull trout (Salvelinus confluentus) population: Understanding the accuracy, precision, and costs of our efforts

    USGS Publications Warehouse

    Al-Chokhachy, R.; Budy, P.; Conner, M.

    2009-01-01

    Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort-cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance versus indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years was not possible, regardless of technique (power = 0.80), without high sampling effort (48% of study site). Detecting a 25% decline was possible after 15 years, but still required high sampling efforts. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments and highlight the difficulties of using abundance measures for monitoring bull trout populations.
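
    A simplified version of such a Monte Carlo power analysis, assuming lognormal sampling error around a geometric decline and a one-sided log-linear trend test; the study's mark-resight machinery is not reproduced.

```python
import numpy as np
from scipy import stats

def decline_power(n_years, total_decline=0.25, sampling_cv=0.20,
                  n_sims=2000, alpha=0.05, seed=1):
    """Power to detect a log-linear decline from annual abundance
    estimates with lognormal sampling error."""
    rng = np.random.default_rng(seed)
    years = np.arange(n_years)
    trend = (1.0 - total_decline) ** (years / (n_years - 1))  # true trajectory
    sigma = np.sqrt(np.log(1.0 + sampling_cv**2))
    hits = 0
    for _ in range(n_sims):
        est = trend * rng.lognormal(-0.5 * sigma**2, sigma, n_years)
        res = stats.linregress(years, np.log(est))
        if res.slope < 0 and res.pvalue / 2 < alpha:  # one-sided test
            hits += 1
    return hits / n_sims

print(decline_power(5), decline_power(15))  # power grows with the time series
```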

  7. Lake Erie Yellow perch age estimation based on three structures: Precision, processing times, and management implications

    USGS Publications Warehouse

    Vandergoot, C.S.; Bur, M.T.; Powell, K.A.

    2008-01-01

    Yellow perch Perca flavescens support economically important recreational and commercial fisheries in Lake Erie and are intensively managed. Age estimation represents an integral component in the management of Lake Erie yellow perch stocks, as age-structured population models are used to set safe harvest levels on an annual basis. We compared the precision associated with yellow perch (N = 251) age estimates from scales, sagittal otoliths, and anal spine sections and evaluated the time required to process and estimate age from each structure. Three readers of varying experience estimated ages. The precision (mean coefficient of variation) of estimates among readers was 1% for sagittal otoliths, 5-6% for anal spines, and 11-13% for scales. Agreement rates among readers were 94-95% for otoliths, 71-76% for anal spines, and 45-50% for scales. Systematic age estimation differences were evident among scale and anal spine readers; less-experienced readers tended to underestimate ages of yellow perch older than age 4 relative to estimates made by an experienced reader. Mean scale age tended to underestimate ages of age-6 and older fish relative to otolith ages estimated by an experienced reader. Total annual mortality estimates based on scale ages were 20% higher than those based on otolith ages; mortality estimates based on anal spine ages were 4% higher than those based on otolith ages. Otoliths required more removal and preparation time than scales and anal spines, but age estimation time was substantially lower for otoliths than for the other two structures. We suggest the use of otoliths or anal spines for age estimation in yellow perch (regardless of length) from Lake Erie and other systems where precise age estimates are necessary, because age estimation errors resulting from the use of scales could generate incorrect management decisions. © Copyright by the American Fisheries Society 2008.
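
    The two precision metrics reported here, mean coefficient of variation and exact agreement among readers, are straightforward to compute; in the sketch below the three-reader age matrix is invented and the functions are generic, not the authors' code.

```python
import numpy as np

def mean_cv(ages):
    """Mean coefficient of variation (%) across fish, where `ages` is an
    (n_fish, n_readers) array of age estimates."""
    ages = np.asarray(ages, dtype=float)
    cv = 100.0 * ages.std(axis=1, ddof=1) / ages.mean(axis=1)
    return cv.mean()

def percent_agreement(ages):
    """Percent of fish for which all readers agree exactly."""
    ages = np.asarray(ages)
    return 100.0 * np.mean(np.all(ages == ages[:, [0]], axis=1))

# Hypothetical ages from three readers for five fish.
ages = [[4, 4, 4], [6, 6, 5], [3, 3, 3], [7, 8, 7], [2, 2, 2]]
print(mean_cv(ages), percent_agreement(ages))
```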

  8. Dynamically heterogenous partitions and phylogenetic inference: an evaluation of analytical strategies with cytochrome b and ND6 gene sequences in cranes.

    PubMed

    Krajewski, C; Fain, M G; Buckley, L; King, D G

    1999-11-01

    Debates over whether molecular sequence data should be partitioned for phylogenetic analysis often confound two types of heterogeneity among partitions. We distinguish historical heterogeneity (i.e., different partitions have different evolutionary relationships) from dynamic heterogeneity (i.e., different partitions show different patterns of sequence evolution) and explore the impact of the latter on phylogenetic accuracy and precision with a two-gene, mitochondrial data set for cranes. The well-established phylogeny of cranes allows us to contrast tree-based estimates of relevant parameter values with estimates based on pairwise comparisons and to ascertain the effects of incorporating different amounts of process information into phylogenetic estimates. We show that codon positions in the cytochrome b and NADH dehydrogenase subunit 6 genes are dynamically heterogenous under both Poisson and invariable-sites + gamma-rates versions of the F84 model and that heterogeneity includes variation in base composition and transition bias as well as substitution rate. Estimates of transition-bias and relative-rate parameters from pairwise sequence comparisons were comparable to those obtained as tree-based maximum likelihood estimates. Neither rate-category nor mixed-model partitioning strategies resulted in a loss of phylogenetic precision relative to unpartitioned analyses. We suggest that weighted-average distances provide a computationally feasible alternative to direct maximum likelihood estimates of phylogeny for mixed-model analyses of large, dynamically heterogenous data sets. Copyright 1999 Academic Press.

  9. Enhanced GPS-based GRACE baseline determination by using a new strategy for ambiguity resolution and relative phase center variation corrections

    NASA Astrophysics Data System (ADS)

    Gu, Defeng; Ju, Bing; Liu, Junhong; Tu, Jia

    2017-09-01

    Precise relative position determination is a prerequisite for radar interferometry by formation-flying satellites. It has been shown that this can be achieved by high-quality, dual-frequency GPS receivers that provide precise carrier-phase observations. The precise baseline determination between satellites flying in formation can significantly improve the accuracy of interferometric products, and has become a research interest. The key technologies of baseline determination using spaceborne dual-frequency GPS for the Gravity Recovery and Climate Experiment (GRACE) formation are presented, including zero-difference (ZD) reduced dynamic orbit determination, double-difference (DD) reduced dynamic relative orbit determination, integer ambiguity resolution and relative receiver antenna phase center variation (PCV) estimation. We propose an independent baseline determination method based on a new strategy of integer ambiguity resolution and correction of relative receiver antenna PCVs, and implement the method in the NUDTTK software package. The algorithms have been tested using flight data over a period of 120 days from GRACE. With the original strategy of integer ambiguity resolution based on Melbourne-Wübbena (M-W) combinations, the average success rate is 85.6%, and the baseline precision is 1.13 mm. With the new strategy of integer ambiguity resolution based on an a priori relative orbit, the average success rate and baseline precision are improved by 5.8% and 0.11 mm, respectively. A relative ionosphere-free phase pattern estimation result is given in this study, and with correction of relative receiver antenna PCVs, the baseline precision is further improved significantly, by 0.34 mm. For ZD reduced dynamic orbit determination, the orbit precision for each GRACE satellite A or B in three dimensions (3D) is about 2.5 cm compared to Jet Propulsion Laboratory (JPL) post science orbits. For DD reduced dynamic relative orbit determination, the final baseline precision for the two-satellite GRACE formation is 0.68 mm, as validated by K-Band Ranging (KBR) observations, and an average ambiguity success rate of about 91.4% is achieved.
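
    The Melbourne-Wübbena combination referenced above is standard; a short sketch (GPS L1/L2 frequencies, all inputs in meters) showing how the float wide-lane ambiguity is formed before rounding to an integer candidate.

```python
import numpy as np

C = 299_792_458.0                  # speed of light (m/s)
F1, F2 = 1_575.42e6, 1_227.60e6    # GPS L1/L2 frequencies (Hz)

def melbourne_wubbena(L1, L2, P1, P2):
    """Melbourne-Wübbena combination (meters) from carrier phases L1, L2
    and pseudoranges P1, P2, all expressed in meters."""
    return ((F1 * L1 - F2 * L2) / (F1 - F2)
            - (F1 * P1 + F2 * P2) / (F1 + F2))

def widelane_ambiguity(L1, L2, P1, P2):
    """Float wide-lane ambiguity in cycles; averaging over epochs and
    rounding gives the integer candidate."""
    lam_wl = C / (F1 - F2)         # wide-lane wavelength, ~86 cm
    return melbourne_wubbena(L1, L2, P1, P2) / lam_wl
```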

  10. DoD Met Most Requirements of the Improper Payments Elimination and Recovery Act in FY 2014, but Improper Payment Estimates Were Unreliable

    DTIC Science & Technology

    2015-05-12

    Deficiencies That Affect the Reliability of Estimates: Statistical Precision Could Be Improved... statistical precision of improper payments estimates in seven of the DoD payment programs through the use of stratified sample designs. DoD improper... payments not subject to sampling, which made the results statistically invalid. We made a recommendation to correct this problem in a previous report.

  11. Sampling system for wheat (Triticum aestivum L) area estimation using digital LANDSAT MSS data and aerial photographs. [Brazil

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Moreira, M. A.; Chen, S. C.; Batista, G. T.

    1984-01-01

    A procedure to estimate wheat (Triticum aestivum L) area using a sampling technique based on aerial photographs and digital LANDSAT MSS data is developed. Aerial photographs covering 720 square km are visually analyzed. To estimate wheat area, a regression approach is applied using different sample sizes and various sampling units. As the size of the sampling unit decreased, the percentage of sampled area required to obtain similar estimation performance also decreased. The lowest percentage of area sampled that still yields wheat estimates of relatively high precision and accuracy through regression estimation is 13.90%, using 10 square km as the sampling unit. Wheat area estimates based only on aerial photographs are less precise and accurate than those obtained by regression estimation.
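
    The regression approach applied here is the classical survey-sampling regression estimator, with the satellite classification as the auxiliary variable; the sketch below uses invented sampling-unit data, not the study's.

```python
import numpy as np

def regression_estimator(y, x, x_mean_pop, n_pop):
    """Survey regression estimator of a population total.
    y: ground-measured wheat area in the sampled units,
    x: auxiliary (e.g., LANDSAT-classified) wheat area in the same units,
    x_mean_pop: mean auxiliary value over all n_pop population units."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    y_reg = y.mean() + b * (x_mean_pop - x.mean())   # regression-adjusted mean
    return n_pop * y_reg                             # expand to the total

# Hypothetical 10 km^2 sampling units (wheat km^2 per unit).
y = [2.1, 3.4, 1.8, 2.9, 2.5]       # photo-interpreted ground truth
x = [2.4, 3.1, 2.0, 3.2, 2.7]       # MSS classification
print(regression_estimator(y, x, x_mean_pop=2.6, n_pop=72))
```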

  12. Capacity and precision in an animal model of visual short-term memory

    PubMed Central

    Lara, Antonio H.; Wallis, Jonathan D.

    2013-01-01

    Temporary storage of information in visual short-term memory (VSTM) is a key component of many complex cognitive abilities. However, it is highly limited in capacity. Understanding the neurophysiological nature of this capacity limit will require a valid animal model of VSTM. We used a multiple-item color change detection task to measure macaque monkeys' VSTM capacity. Subjects' performance deteriorated and reaction times increased as a function of the number of items in memory. Additionally, we measured the precision of the memory representations by varying the distance between sample and test colors. In trials with similar sample and test colors, subjects made more errors compared to trials with highly discriminable colors. We modeled the error distribution as a Gaussian function and used this to estimate the precision of VSTM representations. We found that as the number of items in memory increases, the precision of the representations decreases dramatically. Additionally, we found that focusing attention on one of the objects increases the precision with which that object is stored and degrades the precision of the remaining objects. These results are in line with recent findings in human psychophysics and provide a solid foundation for understanding the neurophysiological nature of the capacity limit of VSTM. PMID:22419756
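
    Estimating precision from an error distribution in the way described can be sketched as follows; the error samples are simulated, and a zero-mean Gaussian model is assumed as in the abstract.

```python
import numpy as np
from scipy import stats

def memory_precision(errors_deg):
    """Fit a zero-mean Gaussian to response errors (sample color minus
    reported color, in degrees of color space); precision = 1/SD."""
    sigma = np.sqrt(np.mean(np.asarray(errors_deg, float) ** 2))
    return 1.0 / sigma

# Simulated errors at set sizes 1 and 4: precision should drop
# as more items are held in memory.
err_1 = stats.norm.rvs(0, 12, size=200, random_state=0)
err_4 = stats.norm.rvs(0, 30, size=200, random_state=0)
print(memory_precision(err_1), memory_precision(err_4))
```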

  13. Absolute marine gravimetry with matter-wave interferometry.

    PubMed

    Bidel, Y; Zahzam, N; Blanchard, C; Bonnin, A; Cadoret, M; Bresson, A; Rouxel, D; Lequentrec-Lalancette, M F

    2018-02-12

    Measuring gravity from an aircraft or a ship is essential in geodesy, geophysics, mineral and hydrocarbon exploration, and navigation. Today, only relative sensors are available for onboard gravimetry. This is a major drawback because of the calibration and drift estimation procedures, which lead to important operational constraints. Atom interferometry is a promising technology for obtaining an onboard absolute gravimeter. But, despite the high performance obtained in static conditions, no precise measurements had been reported in dynamic conditions. Here, we present absolute gravity measurements from a ship with a sensor based on atom interferometry. Despite rough sea conditions, we obtained precision below 10⁻⁵ m s⁻². The atom gravimeter was also compared with a commercial spring gravimeter and showed better performance. This demonstration opens the way to the next generation of inertial sensors (accelerometer, gyroscope) based on atom interferometry, which should provide high-precision absolute measurements from a moving platform.

  14. Pushing the limits of Monte Carlo simulations for the three-dimensional Ising model

    NASA Astrophysics Data System (ADS)

    Ferrenberg, Alan M.; Xu, Jiahao; Landau, David P.

    2018-04-01

    While the three-dimensional Ising model has defied analytic solution, various numerical methods like Monte Carlo, Monte Carlo renormalization group, and series expansion have provided precise information about the phase transition. Using Monte Carlo simulation that employs the Wolff cluster flipping algorithm with both 32-bit and 53-bit random number generators and data analysis with histogram reweighting and quadruple precision arithmetic, we have investigated the critical behavior of the simple cubic Ising model, with lattice sizes ranging from 16³ to 1024³. By analyzing data with cross correlations between various thermodynamic quantities obtained from the same data pool, e.g., logarithmic derivatives of magnetization and derivatives of magnetization cumulants, we have obtained the critical inverse temperature Kc = 0.221654626(5) and the critical exponent of the correlation length ν = 0.629912(86) with precision that exceeds all previous Monte Carlo estimates.
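
    A minimal Wolff cluster update for the simple cubic Ising model (periodic boundaries, J = 1), as a didactic sketch of the algorithm named above, not the production code behind these estimates.

```python
import numpy as np

def wolff_update(spins, beta, rng):
    """One Wolff cluster flip on a 3D periodic Ising lattice."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)          # bond activation probability
    seed = tuple(rng.integers(0, L, size=3))
    s0 = spins[seed]
    spins[seed] = -s0                          # flip sites as they join
    stack = [seed]
    while stack:
        x, y, z = stack.pop()
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            nb = ((x + dx) % L, (y + dy) % L, (z + dz) % L)
            if spins[nb] == s0 and rng.random() < p_add:
                spins[nb] = -s0
                stack.append(nb)

# A few cluster flips near the critical coupling on a small lattice.
rng = np.random.default_rng(0)
L, beta = 16, 0.2216546
spins = rng.choice([-1, 1], size=(L, L, L)).astype(np.int8)
for _ in range(1000):
    wolff_update(spins, beta, rng)
print("magnetization per site:", spins.mean())
```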

  15. Location precision analysis of stereo thermal anti-sniper detection system

    NASA Astrophysics Data System (ADS)

    He, Yuqing; Lu, Ya; Zhang, Xiaoyan; Jin, Weiqi

    2012-06-01

    Anti-sniper detection devices are an urgent requirement in modern warfare, and the precision of the detection system is especially important. This paper analyzes the location precision of an anti-sniper detection system based on dual thermal imaging. It mainly discusses two error sources: the digital quantization effects of the cameras, and the error in estimating the bullet-trajectory coordinates from the infrared images during image matching. The error-analysis formula is derived from the stereovision model and the cameras' quantization effects. From this, we obtain the relationship between detection accuracy and the system's parameters. The analysis in this paper provides the theoretical basis for error-compensation algorithms that improve the accuracy of 3D reconstruction of the bullet trajectory in anti-sniper detection devices.
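
    The dominant quantization term can be sketched from the standard stereo geometry Z = fB/d, whose first-order range error is dZ ≈ Z²/(fB)·dd; the camera parameters below are hypothetical.

```python
def depth_quantization_error(f_px, baseline_m, depth_m, disp_err_px=0.5):
    """First-order range error of a stereo pair from disparity
    quantization: Z = f*B/d, so dZ ~ Z^2/(f*B) * dd."""
    return depth_m**2 / (f_px * baseline_m) * disp_err_px

# Hypothetical dual-thermal rig: 1000 px focal length, 0.5 m baseline.
for z in (100.0, 300.0, 600.0):
    print(f"{z:.0f} m -> {depth_quantization_error(1000.0, 0.5, z):.2f} m")
```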

  16. Processes and Procedures for Estimating Score Reliability and Precision

    ERIC Educational Resources Information Center

    Bardhoshi, Gerta; Erford, Bradley T.

    2017-01-01

    Precision is a key facet of test development, with score reliability determined primarily according to the types of error one wants to approximate and demonstrate. This article identifies and discusses several primary forms of reliability estimation: internal consistency (i.e., split-half, KR-20, α), test-retest, alternate forms, interscorer, and…
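
    Of the internal-consistency estimates listed, coefficient α (Cronbach's alpha) is the most common; a self-contained sketch follows, with an invented score matrix.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha from an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

scores = [[3, 4, 3, 5], [2, 2, 3, 2], [4, 5, 4, 4], [1, 2, 1, 2], [5, 4, 5, 5]]
print(round(cronbach_alpha(scores), 3))
```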

  17. Application of a whole-body pharmacokinetic model for targeted radionuclide therapy to NM404 and FLT

    NASA Astrophysics Data System (ADS)

    Grudzinski, Joseph J.; Floberg, John M.; Mudd, Sarah R.; Jeffery, Justin J.; Peterson, Eric T.; Nomura, Alice; Burnette, Ronald R.; Tomé, Wolfgang A.; Weichert, Jamey P.; Jeraj, Robert

    2012-03-01

    We have previously developed a model that provides relative dosimetry estimates for targeted radionuclide therapy (TRT) agents. The whole-body and tumor pharmacokinetic (PK) parameters of this model can be noninvasively measured with molecular imaging, providing a means of comparing potential TRT agents. Parameter sensitivities and noise will affect the accuracy and precision of the estimated PK values and hence dosimetry estimates. The aim of this work is to apply a PK model for TRT to two agents with different magnitudes of clearance rates, NM404 and FLT, explore parameter sensitivity with respect to time and investigate the effect of noise on parameter precision and accuracy. Twenty-three tumor-bearing mice were injected with a 'slow-clearing' agent, 124I-NM404 (n = 10), or a 'fast-clearing' agent, 18F-FLT (3′-deoxy-3′-fluorothymidine) (n = 13), and imaged via micro-PET/CT pseudo-dynamically or dynamically, respectively. Regions of interest were drawn within the heart and tumor to create time-concentration curves for blood pool and tumor. PK analysis was performed to estimate the mean and standard error of the central compartment efflux-to-influx ratio (k12/k21), central elimination rate constant (kel), and tumor influx-to-efflux ratio (k34/k43), as well as the mean and standard deviation of the dosimetry estimates. NM404 and FLT parameter estimation results were used to analyze model accuracy and parameter sensitivity. The accuracy of the experimental sampling schedule was compared to that of an optimal sampling schedule found using Cramer-Rao lower bound theory. Accuracy was assessed using the correlation coefficient, bias and standard error of the estimate normalized to the mean (SEE/mean). The PK parameter estimation of NM404 yielded a central elimination rate constant, kel (0.009 ± 0.003 h⁻¹), normal body retention, k12/k21 (0.69 ± 0.16), tumor retention, k34/k43 (1.44 ± 0.46), and predicted dosimetry, Dtumor (3.47 ± 1.24 Gy). The PK parameter estimation of FLT yielded a central elimination rate constant, kel (0.050 ± 0.025 min⁻¹), normal body retention, k12/k21 (2.21 ± 0.62), tumor retention, k34/k43 (0.65 ± 0.17), and predicted dosimetry, Dtumor (0.61 ± 0.20 Gy). Compared to experimental sampling, optimal sampling decreases the dosimetry bias and SEE/mean for NM404; however, it increases bias and decreases SEE/mean for FLT. For both NM404 and FLT, the central compartment efflux rate constant, k12, and central compartment influx rate constant, k21, possess mirroring sensitivities at relatively early time points. The instantaneous concentration in the blood, C0, was most sensitive at early time points; central elimination, kel, and tumor efflux, k43, are most sensitive at later time points. A PK model for TRT was applied to a slow-clearing agent, NM404, and a fast-clearing agent, FLT, in a xenograft murine model. NM404 possesses more favorable PK values according to the PK TRT model. The precise and accurate measurement of k12, k21, kel, k34 and k43 will translate into improved and precise dosimetry estimations. This work will guide the future use of this PK model for assessing the relative effectiveness of potential TRT agents.
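
    The compartmental structure implied by kel, k12/k21 and k34/k43 can be simulated directly. In the sketch below, the ODE layout and the individual rate constants are assumptions made for illustration; only kel and the two ratios come from the abstract.

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def pk_rhs(t, c, kel, k12, k21, k34, k43):
    """Assumed 3-compartment structure: central (c1) exchanging with
    normal body (c2, rates k12/k21) and tumor (c3, rates k34/k43),
    with elimination kel from the central compartment."""
    c1, c2, c3 = c
    return [-(kel + k12 + k34) * c1 + k21 * c2 + k43 * c3,
            k12 * c1 - k21 * c2,
            k34 * c1 - k43 * c3]

# NM404-like values: kel = 0.009 1/h, k12/k21 = 0.69, k34/k43 = 1.44;
# the individual rates (k21 = 0.1, k43 = 0.05) are invented.
pars = dict(kel=0.009, k12=0.069, k21=0.1, k34=0.072, k43=0.05)
sol = solve_ivp(pk_rhs, (0.0, 500.0), [1.0, 0.0, 0.0],
                args=tuple(pars.values()), dense_output=True, max_step=1.0)
t = np.linspace(0.0, 500.0, 501)
tumor_auc = trapezoid(sol.sol(t)[2], t)   # residence integral driving tumor dose
print(f"tumor AUC ~ {tumor_auc:.1f} (arbitrary units x h)")
```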

  18. Precision of hard structures used to estimate age of mountain Whitefish (Prosopium williamsoni)

    USGS Publications Warehouse

    Watkins, Carson J.; Ross, Tyler J.; Hardy, Ryan S.; Quist, Michael C.

    2015-01-01

    The mountain whitefish (Prosopium williamsoni) is a widely distributed salmonid in western North America that has decreased in abundance over portions of its distribution due to anthropogenic disturbances. In this investigation, we examined precision of age estimates derived from scales, pectoral fin rays, and sagittal otoliths from 167 mountain whitefish. Otoliths and pectoral fin rays were mounted in epoxy and cross-sectioned before examination. Scales were pressed onto acetate slides and resulting impressions were examined. Between-reader precision (i.e., between 2 readers), between-reader variability, and reader confidence ratings were compared among hard structures. Coefficient of variation (CV) in age estimates was lowest and percentage of exact agreement (PA-0) was highest for scales (CV = 5.9; PA-0 = 70%) compared to pectoral fin rays (CV = 11.0; PA-0 = 58%) and otoliths (CV = 12.3; PA-0 = 55%). Median confidence ratings were significantly different (P ≤ 0.05) among all structures, with scales having the highest median confidence. Reader confidence decreased with fish age for scales and pectoral fin rays, but reader confidence increased with fish age for otoliths. In general, age estimates were more precise and reader confidence was higher for scales compared to pectoral fin rays and otoliths. This research will help fisheries biologists in selecting the most appropriate hard structure to use for future age and growth studies on mountain whitefish. In turn, selection of the most precise hard structure will lead to better estimates of dynamic rate functions.

  19. A Comparison of Turbidity-Based and Streamflow-Based Estimates of Suspended-Sediment Concentrations in Three Chesapeake Bay Tributaries

    USGS Publications Warehouse

    Jastram, John D.; Moyer, Douglas; Hyer, Kenneth

    2009-01-01

    Fluvial transport of sediment into the Chesapeake Bay estuary is a persistent water-quality issue with major implications for the overall health of the bay ecosystem. Accurately and precisely estimating the suspended-sediment concentrations (SSC) and loads that are delivered to the bay, however, remains challenging. Although manual sampling of SSC produces an accurate series of point-in-time measurements, robust extrapolation to unmeasured periods (especially high-flow periods) has proven to be difficult. Sediment concentrations typically have been estimated using regression relations between individual SSC values and associated streamflow values; however, suspended-sediment transport during storm events is extremely variable, and it is often difficult to relate a unique SSC to a given streamflow. With this limitation for estimating SSC, innovative approaches for generating detailed records of suspended-sediment transport are needed. One effective method for improved suspended-sediment determination involves the continuous monitoring of turbidity as a surrogate for SSC. Turbidity measurements are theoretically well correlated to SSC because turbidity represents a measure of water clarity that is directly influenced by suspended sediments; thus, turbidity-based estimation models typically are effective tools for generating SSC data. The U.S. Geological Survey, in cooperation with the U.S. Environmental Protection Agency Chesapeake Bay Program and Virginia Department of Environmental Quality, initiated continuous turbidity monitoring on three major tributaries of the bay - the James, Rappahannock, and North Fork Shenandoah Rivers - to evaluate the use of turbidity as a sediment surrogate in rivers that deliver sediment to the bay. Results of this surrogate approach were compared to the traditionally applied streamflow-based approach for estimating SSC. Additionally, evaluation and comparison of these two approaches were conducted for nutrient estimations. Results demonstrate that the application of turbidity-based estimation models provides an improved method for generating a continuous record of SSC, relative to the classical approach that uses streamflow as a surrogate for SSC. Turbidity-based estimates of SSC were found to be more accurate and precise than SSC estimates from streamflow-based approaches. The turbidity-based SSC estimation models explained 92 to 98 percent of the variability in SSC, while streamflow-based models explained 74 to 88 percent of the variability in SSC. Furthermore, the mean absolute error of turbidity-based SSC estimates was 50 to 87 percent less than the corresponding values from the streamflow-based models. Statistically significant differences were detected between the distributions of residual errors and estimates from the two approaches, indicating that the turbidity-based approach yields estimates of SSC with greater precision than the streamflow-based approach. Similar improvements were identified for turbidity-based estimates of total phosphorus, which is strongly related to turbidity because total phosphorus occurs predominantly in particulate form. Total nitrogen estimation models based on turbidity and streamflow generated estimates of similar quality, with the turbidity-based models providing slight improvements in the quality of estimations. This result is attributed to the understanding that nitrogen transport is dominated by dissolved forms that relate less directly to streamflow and turbidity.
Improvements in concentration estimation resulted in improved estimates of load. Turbidity-based suspended-sediment loads estimated for the James River at Cartersville, VA, monitoring station exhibited tighter confidence interval bounds and a coefficient of variation of 12 percent, compared with a coefficient of variation of 38 percent for the streamflow-based load.
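
    A common form of such a turbidity-based estimation model is a log-log regression with a retransformation (smearing) correction; the sketch below uses invented paired samples and is not one of the models fitted in this study.

```python
import numpy as np

def fit_turbidity_model(turbidity, ssc):
    """Log-log regression of SSC on turbidity with Duan's smearing
    estimator to correct retransformation bias."""
    lx, ly = np.log(turbidity), np.log(ssc)
    b, a = np.polyfit(lx, ly, 1)                 # slope, intercept
    resid = ly - (a + b * lx)
    smear = np.mean(np.exp(resid))               # bias-correction factor
    return lambda turb: smear * np.exp(a + b * np.log(turb))

# Hypothetical paired samples: turbidity (FNU) and SSC (mg/L).
turb = np.array([5, 12, 30, 80, 200, 450])
ssc = np.array([8, 20, 55, 160, 420, 900])
model = fit_turbidity_model(turb, ssc)
print(model(np.array([10.0, 100.0])))
```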

  20. Assessment of a high-SNR chemical-shift-encoded MRI with complex reconstruction for proton density fat fraction (PDFF) estimation overall and in the low-fat range.

    PubMed

    Park, Charlie C; Hooker, Catherine; Hooker, Jonathan C; Bass, Emily; Haufe, William; Schlein, Alexandra; Covarrubias, Yesenia; Heba, Elhamy; Bydder, Mark; Wolfson, Tanya; Gamst, Anthony; Loomba, Rohit; Schwimmer, Jeffrey; Hernando, Diego; Reeder, Scott B; Middleton, Michael; Sirlin, Claude B; Hamilton, Gavin

    2018-04-29

    Improving the signal-to-noise ratio (SNR) of chemical-shift-encoded MRI acquisition with complex reconstruction (MRI-C) may improve the accuracy and precision of noninvasive proton density fat fraction (PDFF) quantification in patients with hepatic steatosis. To assess the accuracy of high-SNR (Hi-SNR) MRI-C versus standard MRI-C acquisition to estimate hepatic PDFF in adult and pediatric nonalcoholic fatty liver disease (NAFLD) using an MR spectroscopy (MRS) sequence as the reference standard. Prospective. In all, 231 adult and pediatric patients with known or suspected NAFLD. PDFF estimated at 3T by three MR techniques: standard MRI-C; a Hi-SNR MRI-C variant with increased slice thickness, decreased matrix size, and no parallel imaging; and MRS (reference standard). MRI-PDFF was measured by image analysts using a region of interest coregistered with the MRS-PDFF voxel. Linear regression analyses were used to assess accuracy and precision of MRI-estimated PDFF for MRS-PDFF as a function of MRI-PDFF using the standard and Hi-SNR MRI-C for all patients and for patients with MRS-PDFF <10%. In all, 271 exams from 231 patients were included (mean MRS-PDFF: 12.6% [SD: 10.4]; range: 0.9-41.9). High agreement between MRI-PDFF and MRS-PDFF was demonstrated across the overall range of PDFF, with a regression slope of 1.035 for the standard MRI-C and 1.008 for Hi-SNR MRI-C. Hi-SNR MRI-C, compared to standard MRI-C, provided small but statistically significant improvements in the slope (respectively, 1.008 vs. 1.035, P = 0.004) and mean bias (0.412 vs. 0.673, P < 0.0001) overall. In the low-fat patients only, Hi-SNR MRI-C provided improvements in the slope (1.058 vs. 1.190, P = 0.002), mean bias (0.168 vs. 0.368, P = 0.007), intercept (-0.153 vs. -0.796, P < 0.0001), and borderline improvement in the R² (0.888 vs. 0.813, P = 0.01). Compared to standard MRI-C, Hi-SNR MRI-C provides slightly higher MRI-PDFF estimation accuracy across the overall range of PDFF and improves both accuracy and precision in the low PDFF range. Level of Evidence: 1. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018. © 2018 International Society for Magnetic Resonance in Medicine.

  1. Implementation and testing of the gridded Vienna Mapping Function 1 (VMF1)

    NASA Astrophysics Data System (ADS)

    Kouba, J.

    2008-04-01

    The new gridded Vienna Mapping Function (VMF1) was implemented and compared to the well-established site-dependent VMF1, directly and by using precise point positioning (PPP) with International GNSS Service (IGS) Final orbits/clocks for a 1.5-year GPS data set of 11 globally distributed IGS stations. The gridded VMF1 data can be interpolated for any location and for any time after 1994, whereas the site-dependent VMF1 data are only available at selected IGS stations and only after 2004. Both gridded and site-dependent VMF1 PPP solutions agree within 1 and 2 mm for the horizontal and vertical position components, respectively, provided that respective VMF1 hydrostatic zenith path delays (ZPD) are used for hydrostatic ZPD mapping to slant delays. The total ZPD of the gridded and site-dependent VMF1 data agree with PPP ZPD solutions with RMS of 1.5 and 1.8 cm, respectively. Such precise total ZPDs could provide useful initial a priori ZPD estimates for kinematic PPP and regional static GPS solutions. The hydrostatic ZPDs of the gridded VMF1 compare with the site-dependent VMF1 ZPDs with RMS of 0.3 cm, subject to some biases and discontinuities of up to 4 cm, which are likely due to different strategies used in the generation of the site-dependent VMF1 data. The precision of gridded hydrostatic ZPD should be sufficient for accurate a priori hydrostatic ZPD mapping in all precise GPS and very long baseline interferometry (VLBI) solutions. Conversely, precise and globally distributed geodetic solutions of total ZPDs, which need to be linked to VLBI to control biases and stability, should also provide a consistent and stable reference frame for long-term and state-of-the-art numerical weather modeling.
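
    Interpolating the gridded product to a site reduces to bilinear interpolation on a regular latitude-longitude grid (the gridded VMF1 is distributed on a 2.0° × 2.5° grid); the grid values below are random placeholders rather than real VMF1 coefficients.

```python
import numpy as np

def bilinear(grid, lats, lons, lat, lon):
    """Bilinearly interpolate one gridded coefficient (e.g., a VMF1 'a'
    coefficient) to a site; `grid` is (n_lat, n_lon) on regular ascending axes."""
    i = np.searchsorted(lats, lat) - 1
    j = np.searchsorted(lons, lon) - 1
    t = (lat - lats[i]) / (lats[i + 1] - lats[i])
    u = (lon - lons[j]) / (lons[j + 1] - lons[j])
    return ((1 - t) * (1 - u) * grid[i, j] + t * (1 - u) * grid[i + 1, j]
            + (1 - t) * u * grid[i, j + 1] + t * u * grid[i + 1, j + 1])

lats = np.arange(-90.0, 90.1, 2.0)
lons = np.arange(0.0, 360.0, 2.5)
grid = np.random.default_rng(0).normal(0.00127, 1e-5, (lats.size, lons.size))
print(bilinear(grid, lats, lons, 45.4, 284.4))
```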

  2. Precision of dual-energy X-ray absorptiometry of the knee and heel: methodology and implications for research to reduce bone mineral loss after spinal cord injury.

    PubMed

    Peppler, W T; Kim, W J; Ethans, K; Cowley, K C

    2017-05-01

    Methodological validation of dual-energy X-ray absorptiometry (DXA)-based measures of leg bone mineral density (BMD) following the guidelines of the International Society for Clinical Densitometry. The primary objective of this study was to determine the precision of BMD estimates at the knee and heel using the manufacturer-provided DXA acquisition algorithm. The secondary objective was to determine the smallest change in DXA-based measurement of BMD that should be surpassed (least significant change (LSC)) before suggesting that a biological change has occurred in the distal femur, proximal tibia and calcaneus. Academic Research Centre, Canada. Ten people with motor-complete SCI of at least 2 years' duration and 10 people from the general population volunteered to have four DXA-based measurements taken of their femur, tibia and calcaneus. BMDs for seven regions of interest (RIs) were calculated, as were short-term precision (root-mean-square (RMS) standard deviation (g cm⁻²), RMS coefficient of variation (RMS-CV, %)) and LSC. Overall, RMS-CV values were similar between SCI (3.63-10.20%, mean = 5.3%) and able-bodied (1.85-5.73%, mean = 4%) cohorts, despite lower absolute BMD values at each RI in those with SCI (35% at the heel to 54% at the knee; P<0.0001). Precision was highest at the calcaneus and lowest at the femur. Except at the femur, RMS-CV values were under 6%. For DXA-based estimates of BMD at the distal femur, proximal tibia and calcaneus, these precision values suggest that LSC values >10% are needed to detect differences between treated and untreated groups in studies aimed at reducing bone mineral loss after SCI.
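
    The ISCD-style precision quantities used here follow directly from the repeated scans: the RMS standard deviation, the RMS-CV, and LSC = 2.77 × precision (2.77 = 1.96·√2 for 95% confidence). A sketch with invented BMD values follows.

```python
import numpy as np

def rms_sd_and_lsc(repeats):
    """Short-term precision and least significant change (ISCD style).
    `repeats` is an (n_subjects, n_scans) array of BMD values (g/cm^2)."""
    repeats = np.asarray(repeats, dtype=float)
    within_var = repeats.var(axis=1, ddof=1)        # per-subject variance
    rms_sd = np.sqrt(within_var.mean())             # RMS standard deviation
    rms_cv = 100.0 * np.sqrt(np.mean(within_var / repeats.mean(axis=1) ** 2))
    lsc95 = 2.77 * rms_sd                           # 95% least significant change
    return rms_sd, rms_cv, lsc95

# Hypothetical four repeated heel scans for three subjects.
bmd = [[0.512, 0.498, 0.505, 0.520],
       [0.430, 0.441, 0.425, 0.436],
       [0.610, 0.602, 0.615, 0.598]]
print(rms_sd_and_lsc(bmd))
```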

  3. Weak Value Amplification is Suboptimal for Estimation and Detection

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-01-01

    We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.

  4. Is digital photography an accurate and precise method for measuring range of motion of the shoulder and elbow?

    PubMed

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2018-03-01

    Accurate measurements of shoulder and elbow motion are required for the management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, shoulder flexion/abduction/internal rotation/external rotation and elbow flexion/extension were measured using visual estimation, goniometry, and digital photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard (motion capture analysis), while precision was defined by the proportion of measurements within the authors' definition of clinical significance (10° for all motions except for elbow extension where 5° was used). Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although statistically significant differences were found in measurement accuracy between the three techniques, none of these differences met the authors' definition of clinical significance. Precision of the measurements was significantly higher for both digital photography (shoulder abduction [93% vs. 74%, p < 0.001], shoulder internal rotation [97% vs. 83%, p = 0.001], and elbow flexion [93% vs. 65%, p < 0.001]) and goniometry (shoulder abduction [92% vs. 74%, p < 0.001] and shoulder internal rotation [94% vs. 83%, p = 0.008]) than visual estimation. Digital photography was more precise than goniometry for measurements of elbow flexion only [93% vs. 76%, p < 0.001]. There was no clinically significant difference in measurement accuracy between the three techniques for shoulder and elbow motion. Digital photography showed higher measurement precision compared to visual estimation for shoulder abduction, shoulder internal rotation, and elbow flexion. However, digital photography was only more precise than goniometry for measurements of elbow flexion. Overall digital photography shows equivalent accuracy to visual estimation and goniometry, but with higher precision than visual estimation. Copyright © 2017. Published by Elsevier B.V.

  5. Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks

    NASA Astrophysics Data System (ADS)

    Karpov, Kirill; Fedotova, Irina; Siemens, Eduard

    2017-07-01

    In this paper we present a measurement study characterizing the impact of hardware virtualization on basic software timing and on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O- and network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM were chosen as commonly used examples of hypervisor-based and host-based models. Based on statistical parameters of the retrieved distributions, our results provide a good characterization of timing behavior, which is essential for real-time and performance-critical applications such as image processing or real-time control.
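
    The basic measurement underlying such a study, timing how much an OS sleep call overshoots its request, can be sketched in a few lines; run it on the host and in a guest to compare distributions.

```python
import time
import statistics

def oversleep_us(request_s=0.001, n=1000):
    """Distribution of sleep overshoot: how much longer than requested
    time.sleep() actually blocks, in microseconds."""
    deltas = []
    for _ in range(n):
        t0 = time.perf_counter()
        time.sleep(request_s)
        deltas.append((time.perf_counter() - t0 - request_s) * 1e6)
    return deltas

d = oversleep_us()
print(f"median overshoot {statistics.median(d):.1f} us, "
      f"p99 {sorted(d)[int(0.99 * len(d))]:.1f} us")
```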

  6. On the impact of GNSS ambiguity resolution: geometry, ionosphere, time and biases

    NASA Astrophysics Data System (ADS)

    Khodabandeh, A.; Teunissen, P. J. G.

    2018-06-01

    Integer ambiguity resolution (IAR) is the key to fast and precise GNSS positioning and navigation. Next to the positioning parameters, however, there are several other types of GNSS parameters that are of importance for a range of different applications like atmospheric sounding, instrumental calibrations or time transfer. As some of these parameters may still require pseudo-range data for their estimation, their response to IAR may differ significantly. To infer the impact of ambiguity resolution on the parameters, we show how the ambiguity-resolved double-differenced phase data propagate into the GNSS parameter solutions. For that purpose, we introduce a canonical decomposition of the GNSS network model that, through its decoupled and decorrelated nature, provides direct insight into which parameters, or functions thereof, gain from IAR and which do not. Next to this qualitative analysis, we present for the GNSS estimable parameters of geometry, ionosphere, timing and instrumental biases closed-form expressions of their IAR precision gains together with supporting numerical examples.

  7. Theory and Design Tools For Studies of Reactions to Abrupt Changes in Noise Exposure

    NASA Technical Reports Server (NTRS)

    Fields, James M.; Ehrlich, Gary E.; Zador, Paul; Shepherd, Kevin P. (Technical Monitor)

    2000-01-01

    Study plans, a pre-tested questionnaire, a sample design evaluation tool, a community publicity monitoring plan, and a theoretical framework have been developed to support combined social/acoustical surveys of residents' reactions to an abrupt change in environmental noise. Secondary analyses of more than 20 previous surveys provide estimates of three parameters of a study simulation model: within-individual variability, between-study-wave variability, and between-neighborhood variability in response to community noise. The simulation model predicts the precision of the results from social surveys of reactions to noise, including changes in noise. When the study simulation model analyzed the population distribution, noise exposure environments and feasible noise measurement program at a proposed noise change survey site, it was concluded that the site could not yield sufficiently precise estimates of the human reaction model to justify conducting a survey. Additional secondary analyses determined that noise reactions are affected by the season of the social survey.

  9. Effects of RF profile on precision of quantitative T2 mapping using dual-echo steady-state acquisition.

    PubMed

    Wu, Pei-Hsin; Cheng, Cheng-Chieh; Wu, Ming-Long; Chao, Tzu-Cheng; Chung, Hsiao-Wen; Huang, Teng-Yi

    2014-01-01

    The dual-echo steady-state (DESS) sequence has been shown to achieve fast T2 mapping with good precision. Underestimation of T2, however, becomes increasingly prominent as the flip angle decreases. In 3D DESS imaging, therefore, the derived T2 values would become a function of the slice location in the presence of a non-ideal slice profile of the excitation RF pulse. Furthermore, the pattern of slice-dependent variation in T2 estimates is dependent on the RF pulse waveform. Multi-slice 2D DESS imaging provides better inter-slice consistency, but the signal intensity is subject to integrated effects of the within-slice distribution of the actual flip angle. Consequently, T2 measured using 2D DESS is prone to inaccuracy even at the designated flip angle of 90°. In this study, both phantom and human experiments demonstrate the above phenomena in good agreement with model prediction. © 2013.

  10. Improving size estimates of open animal populations by incorporating information on age

    USGS Publications Warehouse

    Manly, Bryan F.J.; McDonald, Trent L.; Amstrup, Steven C.; Regehr, Eric V.

    2003-01-01

    Around the world, a great deal of effort is expended each year to estimate the sizes of wild animal populations. Unfortunately, population size has proven to be one of the most intractable parameters to estimate. The capture-recapture estimation models most commonly used (of the Jolly-Seber type) are complicated and require numerous, sometimes questionable, assumptions. The derived estimates usually have large variances and lack consistency over time. In capture–recapture studies of long-lived animals, the ages of captured animals can often be determined with great accuracy and relative ease. We show how to incorporate age information into size estimates for open populations, where the size changes through births, deaths, immigration, and emigration. The proposed method allows more precise estimates of population size than the usual models, and it can provide these estimates from two sample occasions rather than the three usually required. Moreover, this method does not require specialized programs for capture-recapture data; researchers can derive their estimates using the logistic regression module in any standard statistical package.

  11. Estimation of construction and demolition waste volume generation in new residential buildings in Spain.

    PubMed

    Villoria Sáez, Paola; del Río Merino, Mercedes; Porras-Amores, César

    2012-02-01

    The management planning of construction and demolition (C&D) waste uses a single indicator which does not provide enough detailed information. Therefore, other innovative and precise indicators should be determined and implemented. The aim of this research work is to improve existing C&D waste quantification tools in the construction of new residential buildings in Spain. For this purpose, several housing projects were studied to determine an estimation of C&D waste generated during their construction process. This paper determines the values of three indicators to estimate the generation of C&D waste in new residential buildings in Spain, itemizing types of waste and construction stages. The inclusion of two more accurate indicators, in addition to the global one commonly in use, provides a significant improvement in C&D waste quantification tools and management planning.

  12. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.

    2017-01-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
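
    For contrast with the ML formalism, the conventional MRF reconstruction step it generalizes is simple template matching against a precomputed dictionary; the sketch below uses a toy exponential-decay dictionary rather than Bloch-simulated fingerprints.

```python
import numpy as np

def mrf_dictionary_match(signals, dictionary, params):
    """Conventional MRF step: match each voxel's measured signal evolution
    to the dictionary atom with maximal normalized inner product, and
    return that atom's parameter value(s).
    signals: (n_voxels, n_timepoints), dictionary: (n_atoms, n_timepoints)."""
    d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s_norm = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    corr = np.abs(s_norm @ d_norm.conj().T)      # (n_voxels, n_atoms)
    return params[np.argmax(corr, axis=1)]

# Toy dictionary of T2-decay fingerprints over 100 time points.
t = np.arange(100.0)
t2_grid = np.linspace(20.0, 200.0, 50)
dictionary = np.exp(-t[None, :] / t2_grid[:, None])
signals = np.exp(-t[None, :] / np.array([[55.0], [130.0]])) + \
          np.random.default_rng(0).normal(0, 0.02, (2, 100))
print(mrf_dictionary_match(signals, dictionary, t2_grid))
```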

  14. GNSS global real-time augmentation positioning: Real-time precise satellite clock estimation, prototype system construction and performance analysis

    NASA Astrophysics Data System (ADS)

    Chen, Liang; Zhao, Qile; Hu, Zhigang; Jiang, Xinyuan; Geng, Changjiang; Ge, Maorong; Shi, Chuang

    2018-01-01

    The large number of ambiguities in the un-differenced (UD) model lowers computational efficiency, making it unsuitable for high-frequency (e.g., 1 Hz) real-time GNSS clock estimation. A mixed differenced model fusing UD pseudo-range and epoch-differenced (ED) phase observations has been introduced into real-time clock estimation. In this contribution, we extend the mixed differenced model to realize high-frequency multi-GNSS real-time clock updating, and a rigorous comparison and analysis under the same conditions is performed to achieve the best real-time clock estimation performance, taking efficiency, accuracy, consistency and reliability into consideration. Based on the multi-GNSS real-time data streams provided by the multi-GNSS Experiment (MGEX) and Wuhan University, a GPS + BeiDou + Galileo global real-time augmentation positioning prototype system is designed and constructed, including real-time precise orbit determination, real-time precise clock estimation, real-time Precise Point Positioning (RT-PPP) and real-time Standard Point Positioning (RT-SPP). The statistical analysis of the 6 h-predicted real-time orbits shows that the root mean square (RMS) in the radial direction is about 1-5 cm for GPS, BeiDou MEO and Galileo satellites and about 10 cm for BeiDou GEO and IGSO satellites. Using the mixed differenced estimation model, the prototype system can realize highly efficient real-time satellite absolute clock estimation with no constant clock bias and can be used for high-frequency augmentation message updating (such as 1 Hz). The real-time augmentation message signal-in-space ranging error (SISRE), a comprehensive measure of orbit and clock accuracy that affects users' actual positioning performance, is introduced to evaluate and analyze the performance of the GPS + BeiDou + Galileo global real-time augmentation positioning system. The statistical analysis shows that the real-time augmentation message SISRE is about 4-7 cm for GPS, about 10 cm for BeiDou IGSO/MEO and Galileo, and about 30 cm for BeiDou GEO satellites. The real-time positioning results prove that GPS + BeiDou + Galileo RT-PPP, compared to GPS-only, can effectively accelerate convergence by about 60%, improve the positioning accuracy by about 30% and obtain an averaged RMS of 4 cm in horizontal and 6 cm in vertical; additionally, RT-SPP in the prototype system can realize positioning accuracy of about 1 m averaged RMS in horizontal and 1.5-2 m in vertical, which are improvements of 60% and 70%, respectively, over SPP based on the broadcast ephemeris.

  15. Cross-Sectional HIV Incidence Surveillance: A Benchmarking of Approaches for Estimating the 'Mean Duration of Recent Infection'.

    PubMed

    Kassanjee, Reshma; De Angelis, Daniela; Farah, Marian; Hanson, Debra; Labuschagne, Jan Phillipus Lourens; Laeyendecker, Oliver; Le Vu, Stéphane; Tom, Brian; Wang, Rui; Welte, Alex

    2017-03-01

    The application of biomarkers for 'recent' infection in cross-sectional HIV incidence surveillance requires the estimation of critical biomarker characteristics. Various approaches have been employed for using longitudinal data to estimate the Mean Duration of Recent Infection (MDRI) - the average time in the 'recent' state. In this systematic benchmarking of MDRI estimation approaches, a simulation platform was used to measure accuracy and precision of over twenty approaches, in thirty scenarios capturing various study designs, subject behaviors and test dynamics that may be encountered in practice. Results highlight that assuming a single continuous sojourn in the 'recent' state can produce substantial bias. Simple interpolation provides useful MDRI estimates provided subjects are tested at regular intervals. Regression performs the best - while 'random effects' describe the subject-clustering in the data, regression models without random effects proved easy to implement, stable, and of similar accuracy in scenarios considered; robustness to parametric assumptions was improved by regressing 'recent'/'non-recent' classifications rather than continuous biomarker readings. All approaches were vulnerable to incorrect assumptions about subjects' (unobserved) infection times. Results provided show the relationships between MDRI estimation performance and the number of subjects, inter-visit intervals, missed visits, loss to follow-up, and aspects of biomarker signal and noise.
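
    One of the regression approaches benchmarked, regressing binary 'recent'/'non-recent' classifications on time since infection and integrating the fitted probability over [0, T], can be sketched as follows; the covariate form, T = 730 days, and the simulated data are all assumptions for illustration.

```python
import numpy as np
from scipy.integrate import trapezoid
from sklearn.linear_model import LogisticRegression

def mdri_from_visits(t_since_infection, recent, big_t=730.0):
    """MDRI (days) by regressing binary 'recent' classifications on time
    since infection and integrating P(recent | t) over [0, T]."""
    X = np.column_stack([t_since_infection, np.sqrt(t_since_infection)])
    fit = LogisticRegression(C=1e6).fit(X, recent)   # near-unpenalized fit
    grid = np.linspace(0.0, big_t, 1001)
    p = fit.predict_proba(np.column_stack([grid, np.sqrt(grid)]))[:, 1]
    return trapezoid(p, grid)

# Simulated longitudinal data: recency assay results by days post infection.
rng = np.random.default_rng(0)
t = rng.uniform(0, 730, 600)
recent = (rng.random(600) < 1.0 / (1.0 + np.exp((t - 180.0) / 40.0))).astype(int)
print(f"MDRI ~ {mdri_from_visits(t, recent):.0f} days")
```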

  16. Incorporating harvest rates into the sex-age-kill model for white-tailed deer

    USGS Publications Warehouse

    Norton, Andrew S.; Diefenbach, Duane R.; Rosenberry, Christopher S.; Wallingford, Bret D.

    2013-01-01

    Although monitoring population trends is an essential component of game species management, wildlife managers rarely have complete counts of abundance. Often, they rely on population models to monitor population trends. As imperfect representations of real-world populations, models must be rigorously evaluated to be applied appropriately. Previous research has evaluated population models for white-tailed deer (Odocoileus virginianus); however, the precision and reliability of these models have largely been untested against empirical measures of variability and bias. We statistically evaluated the Pennsylvania sex-age-kill (PASAK) population model using realistic error measured with data from 1,131 radiocollared white-tailed deer in Pennsylvania from 2002 to 2008. We used these data and harvest data (number killed, age-sex structure, etc.) to estimate the precision of abundance estimates, identify the most efficient harvest data collection with respect to precision of parameter estimates, and evaluate PASAK model robustness to violation of assumptions. Median coefficient of variation (CV) estimates by Wildlife Management Unit, 13.2% in the most recent year, were slightly above benchmarks recommended for managing game species populations. Doubling reporting rates by hunters, or doubling the number of deer checked by personnel in the field, reduced median CVs to recommended levels. The PASAK model was robust to errors in estimates of adult male harvest rates but was sensitive to errors in subadult male harvest rates, especially in populations with lower harvest rates. In particular, an error in subadult (1.5-yr-old) male harvest rates resulted in the opposite error in subadult male, adult female, and juvenile population estimates. Also, evidence of a greater harvest probability for subadult female deer when compared with adult (≥2.5-yr-old) female deer resulted in a 9.5% underestimate of the population using the PASAK model. Because obtaining appropriate sample sizes, by management unit, to estimate harvest rate parameters each year may be too expensive, assumptions of constant annual harvest rates may be necessary. However, if changes in harvest regulations or hunter behavior influence subadult male harvest rates, the PASAK model could provide an unreliable index to population changes.
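    For readers unfamiliar with sex-age-kill back-calculation, the sketch below shows the core arithmetic in a heavily simplified, hypothetical form: divide the male harvest by the male harvest rate to recover the number of males alive, then expand by sex and recruitment ratios. All numbers and parameter names are invented; the PASAK model itself is considerably more detailed.

    ```python
    # Simplified sex-age-kill (SAK) back-calculation sketch; not the PASAK
    # model itself. All inputs are hypothetical illustration values.
    def sak_estimate(male_harvest, male_harvest_rate,
                     females_per_male, fawns_per_adult):
        males = male_harvest / male_harvest_rate      # males alive pre-harvest
        females = males * females_per_male            # expand by sex ratio
        fawns = (males + females) * fawns_per_adult   # expand by recruitment
        return males + females + fawns

    print(sak_estimate(10000, 0.4, 2.0, 0.6))  # -> 120000.0
    ```

    The sensitivity result above follows directly from this structure: an error in the subadult male harvest rate propagates multiplicatively through every subsequent expansion step.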

  17. Horvitz-Thompson survey sample methods for estimating large-scale animal abundance

    USGS Publications Warehouse

    Samuel, M.D.; Garton, E.O.

    1994-01-01

    Large-scale surveys to estimate animal abundance can be useful for monitoring population status and trends, for measuring responses to management or environmental alterations, and for testing ecological hypotheses about abundance. However, large-scale surveys may be expensive and logistically complex. To ensure resources are not wasted on unattainable targets, the goals and uses of each survey should be specified carefully and alternative methods for addressing these objectives always should be considered. During survey design, the importance of each survey error component (spatial design, proportion of detected animals, precision in detection) should be considered carefully to produce a complete statistically based survey. Failure to address these three survey components may produce population estimates that are inaccurate (biased low), have unrealistic precision (too precise) and do not satisfactorily meet the survey objectives. Optimum survey design requires trade-offs in these sources of error relative to the costs of sampling plots and detecting animals on plots, considerations that are specific to the spatial logistics and survey methods. The Horvitz-Thompson estimators provide a comprehensive framework for considering all three survey components during the design and analysis of large-scale wildlife surveys. Problems of spatial and temporal (especially survey to survey) heterogeneity in detection probabilities have received little consideration, but failure to account for heterogeneity produces biased population estimates. The goal of producing unbiased population estimates is in conflict with the increased variation from heterogeneous detection in the population estimate. One solution to this conflict is to use an MSE-based approach to achieve a balance between bias reduction and increased variation. Further research is needed to develop methods that address spatial heterogeneity in detection, evaluate the effects of temporal heterogeneity on survey objectives and optimize decisions related to survey bias and variance. Finally, managers and researchers involved in the survey design process must realize that obtaining the best survey results requires an interactive and recursive process of survey design, execution, analysis and redesign. Survey refinements will be possible as further knowledge is gained on the actual abundance and distribution of the population and on the most efficient techniques for detecting animals.
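    The Horvitz-Thompson estimator itself is compact: each observed count is inflated by the probability that its plot was included in the sample and by the probability that animals on it were detected. A minimal sketch, with invented numbers and a single shared detection probability:

    ```python
    import numpy as np

    # Horvitz-Thompson abundance sketch. counts, pi, and p_det are invented;
    # real surveys would estimate p_det and allow it to vary by plot.
    counts = np.array([4, 0, 7, 2, 5])        # animals counted on sampled plots
    pi = np.array([0.1, 0.1, 0.2, 0.2, 0.1])  # plot inclusion probabilities
    p_det = 0.8                               # detection probability on a plot

    N_hat = np.sum(counts / (pi * p_det))     # inflate each count by both probs
    print(f"Horvitz-Thompson abundance estimate: {N_hat:.0f}")
    ```

    Heterogeneity in detection, as discussed above, enters exactly here: if p_det actually varies across plots or surveys but is treated as constant, the inflation factors are wrong and the estimate is biased.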

  18. Categorical Working Memory Representations are used in Delayed Estimation of Continuous Colors

    PubMed Central

    Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J

    2016-01-01

    In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In two experiments we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. PMID:27797548
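    For context, the "standard model" referred to above is typically a two-component mixture: a von Mises distribution centred on the studied color (memory) plus a circular uniform (guessing). The sketch below fits that baseline to simulated response errors; the authors' contribution adds a third, categorical component, which is omitted here for brevity.

    ```python
    import numpy as np
    from scipy.stats import vonmises
    from scipy.optimize import minimize

    # Simulated response errors (radians): 300 "remembered" responses around
    # the target plus 100 uniform guesses. Parameters are invented.
    rng = np.random.default_rng(0)
    errors = np.concatenate([vonmises.rvs(8.0, size=300, random_state=rng),
                             rng.uniform(-np.pi, np.pi, size=100)])

    def neg_log_lik(params):
        g, kappa = params  # guess rate, memory precision
        like = (1 - g) * vonmises.pdf(errors, kappa) + g / (2 * np.pi)
        return -np.sum(np.log(like))

    res = minimize(neg_log_lik, x0=[0.3, 4.0],
                   bounds=[(1e-3, 0.999), (0.1, 100.0)])
    print("guess rate, precision (kappa):", res.x)
    ```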

  19. A framework for improving the cost-effectiveness of DSM program evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonnenblick, R.; Eto, J.

    The prudence of utility demand-side management (DSM) investments hinges on their performance, yet evaluating performance is complicated because the energy saved by DSM programs can never be observed directly but only inferred. This study frames and begins to answer the following questions: (1) how well do current evaluation methods perform in improving confidence in the measurement of energy savings produced by DSM programs; (2) in view of this performance, how can limited evaluation resources be best allocated to maximize the value of the information they provide? The authors review three major classes of methods for estimating annual energy savings: tracking database (sometimes called engineering estimates), end-use metering, and billing analysis, and examine them in light of the uncertainties in current estimates of DSM program measure lifetimes. The authors assess the accuracy and precision of each method and construct trade-off curves to examine the costs of increases in accuracy or precision. Several approaches for improving evaluations for the purpose of assessing program cost effectiveness are demonstrated. The methods can be easily generalized to other evaluation objectives, such as shared savings incentive payments.

  20. On the Simulation of Sea States with High Significant Wave Height for the Validation of Parameter Retrieval Algorithms for Future Altimetry Missions

    NASA Astrophysics Data System (ADS)

    Kuschenerus, Mieke; Cullen, Robert

    2016-08-01

    To ensure reliability and precision of wave height estimates for future satellite altimetry missions such as Sentinel 6, reliable parameter retrieval algorithms that can extract significant wave heights up to 20 m have to be established. The parameter retrieval methods need to be validated extensively on a wide range of possible significant wave heights. Although current missions require wave height retrievals up to 20 m, there is little evidence of systematic validation of parameter retrieval methods for sea states with wave heights above 10 m. This paper provides a definition of a set of simulated sea states with significant wave heights up to 20 m that allow simulation of radar altimeter response echoes for extreme sea states in SAR and low resolution mode. The simulated radar responses are used to derive significant wave height estimates, which can be compared with the initial models, allowing the precision of the applied parameter retrieval methods to be estimated. We thus establish a validation method for significant wave height retrieval in sea states with high significant wave heights, allowing improved understanding and planning of future satellite altimetry mission validation.

  1. Categorical working memory representations are used in delayed estimation of continuous colors.

    PubMed

    Hardman, Kyle O; Vergauwe, Evie; Ricker, Timothy J

    2017-01-01

    In the last decade, major strides have been made in understanding visual working memory through mathematical modeling of color production responses. In the delayed color estimation task (Wilken & Ma, 2004), participants are given a set of colored squares to remember, and a few seconds later asked to reproduce those colors by clicking on a color wheel. The degree of error in these responses is characterized with mathematical models that estimate working memory precision and the proportion of items remembered by participants. A standard mathematical model of color memory assumes that items maintained in memory are remembered through memory for precise details about the particular studied shade of color. We contend that this model is incomplete in its present form because no mechanism is provided for remembering the coarse category of a studied color. In the present work, we remedy this omission and present a model of visual working memory that includes both continuous and categorical memory representations. In 2 experiments, we show that our new model outperforms this standard modeling approach, which demonstrates that categorical representations should be accounted for by mathematical models of visual working memory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. The Precision of Mapping Between Number Words and the Approximate Number System Predicts Children’s Formal Math Abilities

    PubMed Central

    Libertus, Melissa E.; Odic, Darko; Feigenson, Lisa; Halberda, Justin

    2016-01-01

    Children can represent number in at least two ways: by using their non-verbal, intuitive Approximate Number System (ANS), and by using words and symbols to count and represent numbers exactly. Further, by the time they are five years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children’s math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation – mapping accuracy and variability – might each relate to math performance. Here, we address these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. PMID:27348475

  3. The precision of mapping between number words and the approximate number system predicts children's formal math abilities.

    PubMed

    Libertus, Melissa E; Odic, Darko; Feigenson, Lisa; Halberda, Justin

    2016-10-01

    Children can represent number in at least two ways: by using their non-verbal, intuitive approximate number system (ANS) and by using words and symbols to count and represent numbers exactly. Furthermore, by the time they are 5years old, children can map between the ANS and number words, as evidenced by their ability to verbally estimate numbers of items without counting. How does the quality of the mapping between approximate and exact numbers relate to children's math abilities? The role of the ANS-number word mapping in math competence remains controversial for at least two reasons. First, previous work has not examined the relation between verbal estimation and distinct subtypes of math abilities. Second, previous work has not addressed how distinct components of verbal estimation-mapping accuracy and variability-might each relate to math performance. Here, we addressed these gaps by measuring individual differences in ANS precision, verbal number estimation, and formal and informal math abilities in 5- to 7-year-old children. We found that verbal estimation variability, but not estimation accuracy, predicted formal math abilities, even when controlling for age, expressive vocabulary, and ANS precision, and that it mediated the link between ANS precision and overall math ability. These findings suggest that variability in the ANS-number word mapping may be especially important for formal math abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm2), distribution of benthic items, and item density affected the bias and precision of estimates of density, the detection probability of items, and the time-costs. When items were distributed randomly versus clumped, bias decreased and precision increased with increasing sample size, and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m2). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
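    A rough re-creation of the simulation logic, with invented parameter values, might look like the following: scatter items in a unit cell, take n circular cores, convert mean counts to a density estimate, and repeat to expose the bias/precision trade-offs discussed above.

    ```python
    import numpy as np

    # Hypothetical core-sampling simulation; densities, core sizes, and
    # replicate counts are invented illustration values.
    rng = np.random.default_rng(1)

    def simulate(density=800, n_cores=20, core_area_cm2=50, reps=1000):
        r = np.sqrt(core_area_cm2 * 1e-4 / np.pi)   # core radius in metres
        estimates = []
        for _ in range(reps):
            # random (non-clumped) items in a 1 m x 1 m cell
            items = rng.uniform(0, 1, size=(rng.poisson(density), 2))
            centers = rng.uniform(r, 1 - r, size=(n_cores, 2))
            counts = [np.sum(np.linalg.norm(items - c, axis=1) < r)
                      for c in centers]
            estimates.append(np.mean(counts) / (np.pi * r ** 2))
        est = np.array(estimates)
        return est.mean(), est.std()   # accuracy vs precision of the estimator

    print(simulate())  # (mean density estimate per m^2, SD across replicates)
    ```

    Varying n_cores against core_area_cm2 at fixed total effort reproduces the qualitative finding that many small cores beat few large ones on time-efficiency.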

  5. ExpertEyes: open-source, high-definition eyetracking.

    PubMed

    Parada, Francisco J; Wyatte, Dean; Yu, Chen; Akavipat, Ruj; Emerick, Brandi; Busey, Thomas

    2015-03-01

    ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts.

  6. Effective theory for the nonrigid rotor in an electromagnetic field: Toward accurate and precise calculations of E2 transitions in deformed nuclei

    DOE PAGES

    Coello Pérez, Eduardo A.; Papenbrock, Thomas F.

    2015-07-27

    In this paper, we present a model-independent approach to electric quadrupole transitions of deformed nuclei. Based on an effective theory for axially symmetric systems, the leading interactions with electromagnetic fields enter as minimal couplings to gauge potentials, while subleading corrections employ gauge-invariant nonminimal couplings. This approach yields transition operators that are consistent with the Hamiltonian, and the power counting of the effective theory provides us with theoretical uncertainty estimates. We successfully test the effective theory in homonuclear molecules that exhibit a large separation of scales. For ground-state band transitions of rotational nuclei, the effective theory describes data well within theoretical uncertainties at leading order. To probe the theory at subleading order, data with higher precision would be valuable. For transitional nuclei, next-to-leading-order calculations and the high-precision data are consistent within the theoretical uncertainty estimates. In addition, we study the faint interband transitions within the effective theory and focus on the E2 transitions from the 0_2^+ band (the “β band”) to the ground-state band. Here the predictions from the effective theory are consistent with data for several nuclei, thereby proposing a solution to a long-standing challenge.

  7. Optimal firing rate estimation

    NASA Technical Reports Server (NTRS)

    Paulin, M. G.; Hoffman, L. F.

    2001-01-01

    We define a measure for evaluating the quality of a predictive model of the behavior of a spiking neuron. This measure, information gain per spike (Is), indicates how much more information is provided by the model than if the prediction were made by specifying the neuron's average firing rate over the same time period. We apply a maximum Is criterion to optimize the performance of Gaussian smoothing filters for estimating neural firing rates. With data from bullfrog vestibular semicircular canal neurons and data from simulated integrate-and-fire neurons, the optimal bandwidth for firing rate estimation is typically similar to the average firing rate. Precise-timing and average-rate models are limiting cases that perform poorly. We estimate that bullfrog semicircular canal sensory neurons transmit on the order of 1 bit of stimulus-related information per spike.
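    As an illustration of the kernel-smoothing step (not the information-gain criterion itself), the sketch below bins a simulated spike train and smooths it with a Gaussian kernel whose bandwidth is tied to the mean firing rate, in the spirit of the paper's finding; the exact bandwidth convention used here is an assumption.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    # Simulated spike train: ~20 spikes/s over 10 s (invented data)
    rng = np.random.default_rng(2)
    spikes = np.sort(rng.uniform(0, 10, size=200))

    dt = 0.001                                         # 1 ms bins
    edges = np.arange(0, 10 + dt, dt)
    binned = np.histogram(spikes, bins=edges)[0] / dt  # instantaneous rate (Hz)

    mean_rate = len(spikes) / 10.0                     # 20 Hz
    sigma_s = 1.0 / mean_rate                          # assume bandwidth ~ 1/rate
    rate = gaussian_filter1d(binned.astype(float), sigma=sigma_s / dt)
    print(rate[:5])                                    # smoothed rate estimate
    ```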

  8. Precision phase estimation based on weak-value amplification

    NASA Astrophysics Data System (ADS)

    Qiu, Xiaodong; Xie, Linguo; Liu, Xiong; Luo, Lan; Li, Zhaoxue; Zhang, Zhiyou; Du, Jinglei

    2017-02-01

    In this letter, we propose a precision method for phase estimation based on the weak-value amplification (WVA) technique using a monochromatic light source. The anomalous WVA significantly suppresses the technical noise with respect to the intensity difference signal induced by the phase delay when the post-selection procedure comes into play. The phase measurement precision of this method is proportional to the weak value of a polarization operator in the experimental range. Our results compete well with wide-spectrum light phase weak measurements and outperform the standard homodyne phase detection technique.

  9. A Comparison of Item Selection Procedures Using Different Ability Estimation Methods in Computerized Adaptive Testing Based on the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Ho, Tsung-Han

    2010-01-01

    Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…

  10. Forward hadron calorimeter at MPD/NICA

    NASA Astrophysics Data System (ADS)

    Golubeva, M.; Guber, F.; Ivashkin, A.; Izvestnyy, A.; Kurepin, A.; Morozov, S.; Parfenov, P.; Petukhov, O.; Taranenko, A.; Selyuzhenkov, I.; Svintsov, I.

    2017-01-01

    Forward hadron calorimeter (FHCAL) at MPD/NICA experimental setup is described. The main purpose of the FHCAL is to provide an experimental measurement of a heavy-ion collision centrality (impact parameter) and orientation of its reaction plane. Precise event-by-event estimate of these basic observables is crucial for many physics phenomena studies to be performed by the MPD experiment. The simulation results of FHCAL performance are presented.

  11. Predicting defoliation by the gypsy moth using egg mass counts and a helper variable

    Treesearch

    Michael E. Montgomery

    1991-01-01

    Traditionally, counts of egg masses have been used to predict defoliation by the gypsy moth. Regardless of the method and precision used to obtain the counts, estimates of egg mass density alone often do not provide satisfactory predictions of defoliation. Although defoliation levels greater than 50% are seldom observed if egg mass densities are less than 600 per...

  12. Developing Confidence Limits For Reliability Of Software

    NASA Technical Reports Server (NTRS)

    Hayhurst, Kelly J.

    1991-01-01

    Technique developed for estimating reliability of software by use of Moranda geometric de-eutrophication model. Pivotal method enables straightforward construction of exact bounds with associated degree of statistical confidence about reliability of software. Confidence limits thus derived provide precise means of assessing quality of software. Limits take into account number of bugs found while testing and effects of sampling variation associated with random order of discovering bugs.

  13. The NASA Meter Class Autonomous Telescope: Ascension Island

    DTIC Science & Technology

    2013-09-01

    ...understand the debris environment by providing high fidelity data in a timely manner to protect satellites and spacecraft in orbit around the Earth... gigabytes of image data nightly. With fainter detection limits, precision detection, acquisition and tracking of targets, multi-color photometry ...

  14. Quantum Hamiltonian identification from measurement time traces.

    PubMed

    Zhang, Jun; Sarovar, Mohan

    2014-08-22

    Precise identification of parameters governing quantum processes is a critical task for quantum information and communication technologies. In this Letter, we consider a setting where system evolution is determined by a parametrized Hamiltonian, and the task is to estimate these parameters from temporal records of a restricted set of system observables (time traces). Based on the notion of system realization from linear systems theory, we develop a constructive algorithm that provides estimates of the unknown parameters directly from these time traces. We illustrate the algorithm and its robustness to measurement noise by applying it to a one-dimensional spin chain model with variable couplings.

  15. High accuracy LADAR scene projector calibration sensor development

    NASA Astrophysics Data System (ADS)

    Kim, Hajin J.; Cornell, Michael C.; Naumann, Charles B.; Bowden, Mark H.

    2008-04-01

    A sensor system for the characterization of infrared laser radar scene projectors has been developed. Available sensor systems do not provide sufficient range resolution to evaluate the high precision LADAR projector systems developed by the U.S. Army Research, Development and Engineering Command (RDECOM) Aviation and Missile Research, Development and Engineering Center (AMRDEC). With timing precision capability to a fraction of a nanosecond, it can confirm the accuracy of simulated return pulses from a nominal range of up to 6.5 km to a resolution of 4 cm. Increased range can be achieved through firmware reconfiguration. Two independent amplitude triggers measure both rise and fall time, providing a judgment of pulse shape and allowing estimation of the contained energy. Each return channel can measure up to 32 returns per trigger, characterizing each return pulse independently. Current efforts include extending the capability to 8 channels. This paper outlines the development, testing, capabilities, and limitations of this new sensor system.

  16. Automated semantic indexing of figure captions to improve radiology image retrieval.

    PubMed

    Kahn, Charles E; Rubin, Daniel L

    2009-01-01

    We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. The authors measured the impact of concept-based retrieval to improve upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Estimated precision was 0.897 (95% confidence interval, 0.857-0.937). Estimated recall was 0.930 (95% confidence interval, 0.838-1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval.
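    The sampling-based precision estimate and its confidence interval can be reproduced with a one-line normal approximation; the counts below are invented and chosen only to land near the reported point estimate.

    ```python
    import math

    # Normal-approximation CI for a sampled proportion (precision or recall).
    # The counts are hypothetical, not the study's actual tallies.
    def proportion_ci(successes, n, z=1.96):
        p = successes / n
        half = z * math.sqrt(p * (1 - p) / n)
        return p, (p - half, p + half)

    p, ci = proportion_ci(224, 250)  # e.g. 224 of 250 sampled concepts correct
    print(f"precision = {p:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
    ```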

  17. Bayesian WLS/GLS regression for regional skewness analysis for regions with large crest stage gage networks

    USGS Publications Warehouse

    Veilleux, Andrea G.; Stedinger, Jery R.; Eash, David A.

    2012-01-01

    This paper summarizes methodological advances in regional log-space skewness analyses that support flood-frequency analysis with the log Pearson Type III (LP3) distribution. A Bayesian Weighted Least Squares/Generalized Least Squares (B-WLS/B-GLS) methodology that relates observed skewness coefficient estimators to basin characteristics in conjunction with diagnostic statistics represents an extension of the previously developed B-GLS methodology. B-WLS/B-GLS has been shown to be effective in two California studies. B-WLS/B-GLS uses B-WLS to generate stable estimators of model parameters and B-GLS to estimate the precision of those B-WLS regression parameters, as well as the precision of the model. The study described here employs this methodology to develop a regional skewness model for the State of Iowa. To provide cost-effective peak-flow data for smaller drainage basins in Iowa, the U.S. Geological Survey operates a large network of crest stage gages (CSGs) that only record flow values above an identified recording threshold (thus producing a censored data record). CSGs are different from continuous-record gages, which record almost all flow values and have been used in previous B-GLS and B-WLS/B-GLS regional skewness studies. The complexity of analyzing a large CSG network is addressed by using the B-WLS/B-GLS framework along with the Expected Moments Algorithm (EMA). Because EMA allows for the censoring of low outliers, as well as the use of estimated interval discharges for missing, censored, and historic data, it complicates the calculations of effective record length (and effective concurrent record length) used to describe the precision of sample estimators because the peak discharges are no longer solely represented by single values. Thus, new record length calculations were developed. The regional skewness analysis for the State of Iowa illustrates the value of the new B-WLS/B-GLS methodology with these new extensions.

  18. Using spatiotemporal statistical models to estimate animal abundance and infer ecological dynamics from survey counts

    USGS Publications Warehouse

    Conn, Paul B.; Johnson, Devin S.; Ver Hoef, Jay M.; Hooten, Mevin B.; London, Joshua M.; Boveng, Peter L.

    2015-01-01

    Ecologists often fit models to survey data to estimate and explain variation in animal abundance. Such models typically require that animal density remains constant across the landscape where sampling is being conducted, a potentially problematic assumption for animals inhabiting dynamic landscapes or otherwise exhibiting considerable spatiotemporal variation in density. We review several concepts from the burgeoning literature on spatiotemporal statistical models, including the nature of the temporal structure (i.e., descriptive or dynamical) and strategies for dimension reduction to promote computational tractability. We also review several features as they specifically relate to abundance estimation, including boundary conditions, population closure, choice of link function, and extrapolation of predicted relationships to unsampled areas. We then compare a suite of novel and existing spatiotemporal hierarchical models for animal count data that permit animal density to vary over space and time, including formulations motivated by resource selection and allowing for closed populations. We gauge the relative performance (bias, precision, computational demands) of alternative spatiotemporal models when confronted with simulated and real data sets from dynamic animal populations. For the latter, we analyze spotted seal (Phoca largha) counts from an aerial survey of the Bering Sea where the quantity and quality of suitable habitat (sea ice) changed dramatically while surveys were being conducted. Simulation analyses suggested that multiple types of spatiotemporal models provide reasonable inference (low positive bias, high precision) about animal abundance, but have potential for overestimating precision. Analysis of spotted seal data indicated that several model formulations, including those based on a log-Gaussian Cox process, had a tendency to overestimate abundance. By contrast, a model that included a population closure assumption and a scale prior on total abundance produced estimates that largely conformed to our a priori expectation. Although care must be taken to tailor models to match the study population and survey data available, we argue that hierarchical spatiotemporal statistical models represent a powerful way forward for estimating abundance and explaining variation in the distribution of dynamical populations.

  19. Comparison of low cost measurement techniques for long-term monitoring of atmospheric ammonia.

    PubMed

    Sutton, M A; Miners, B; Tang, Y S; Milford, C; Wyers, G P; Duyzer, J H; Fowler, D

    2001-10-01

    An inter-comparison of techniques for long-term sampling of atmospheric ammonia (NH3) was conducted with a view to establishing a national network with > 50 sites. Key requirements were for: a low cost system, simplicity and durability to enable a postal exchange with local site operators, a precision of < +/- 20% for monthly sampling at expected NH3 concentrations of 1-2 micrograms m-3, a detection limit sufficient to resolve the small NH3 concentrations (< 0.2 microgram m-3) expected in remote parts of the UK, and a quantitative means to establish quality control. Five sampling methods were compared: A, a commercially available membrane diffusion tube (exposed in triplicate), with membranes removed immediately after sampling; B, the above method, with the membranes left in place until analysis; C, open-ended diffusion tubes (exposed with 4 replicates); D, a new active sampling diffusion denuder system; and E, an active sampling bubbler system. Method D consisted of two 0.1 m acid coated glass denuders in series with sampling at approximately 0.3 l min-1. These methods were deployed at 6 locations in the UK and the Netherlands and compared against reference estimates. Method D was the most precise and sensitive of the techniques compared, with a detection limit of < 0.1 microgram m-3. The bubbler provided a less precise estimate of NH3 concentration, and also suffered several practical drawbacks. The diffusion tubes were found to correlate with the reference at high concentrations (> 3 micrograms m-3), but were less precise and overestimated NH3 at smaller concentrations. Of the passive methods, A was the most precise and C the least precise. On the basis of the results, method D has been implemented in the national network, together with application of method A to explore spatial variability in regions with expected high NH3 concentrations.

  20. Sub-sampling genetic data to estimate black bear population size: A case study

    USGS Publications Warehouse

    Tredick, C.A.; Vaughan, M.R.; Stauffer, D.F.; Simek, S.L.; Eason, T.

    2007-01-01

    Costs for genetic analysis of hair samples collected for individual identification of bears average approximately US$50 [2004] per sample. This can easily exceed budgetary allowances for large-scale studies or studies of high-density bear populations. We used 2 genetic datasets from 2 areas in the southeastern United States to explore how reducing costs of analysis by sub-sampling affected precision and accuracy of resulting population estimates. We used several sub-sampling scenarios to create subsets of the full datasets and compared summary statistics, population estimates, and precision of estimates generated from these subsets to estimates generated from the complete datasets. Our results suggested that bias and precision of estimates improved as the proportion of total samples used increased, and heterogeneity models (e.g., Mh[CHAO]) were more robust to reduced sample sizes than other models (e.g., behavior models). We recommend that only high-quality samples (>5 hair follicles) be used when budgets are constrained, and efforts should be made to maximize capture and recapture rates in the field.

  1. Nonlinear Quantum Metrology of Many-Body Open Systems

    NASA Astrophysics Data System (ADS)

    Beau, M.; del Campo, A.

    2017-07-01

    We introduce general bounds for the parameter estimation error in nonlinear quantum metrology of many-body open systems in the Markovian limit. Given a k-body Hamiltonian and p-body Lindblad operators, the estimation error of a Hamiltonian parameter using a Greenberger-Horne-Zeilinger state as a probe is shown to scale as N^{-[k-(p/2)]}, surpassing the shot-noise limit for 2k > p + 1. Metrology equivalence between initial product states and maximally entangled states is established for p ≥ 1. We further show that one can estimate the system-environment coupling parameter with precision N^{-(p/2)}, while many-body decoherence enhances the precision to N^{-k} in the noise-amplitude estimation of a fluctuating k-body Hamiltonian. For the long-range Ising model, we show that the precision of this parameter beats the shot-noise limit when the range of interactions is below a threshold value.

  2. Frozen lattice and absorptive model for high angle annular dark field scanning transmission electron microscopy: A comparison study in terms of integrated intensity and atomic column position measurement.

    PubMed

    Alania, M; Lobato, I; Van Aert, S

    2018-01-01

    In this paper, both the frozen lattice (FL) and the absorptive potential (AP) approximation models are compared in terms of the integrated intensity and the precision with which atomic columns can be located from an image acquired using high angle annular dark field (HAADF) scanning transmission electron microscopy (STEM). The comparison is made for atoms of Cu, Ag, and Au. The integrated intensity is computed for both an isolated atomic column and an atomic column inside an FCC structure. The precision has been computed using the so-called Cramér-Rao Lower Bound (CRLB), which provides a theoretical lower bound on the variance with which parameters can be estimated. It is shown that the AP model yields accurate measurements of the integrated intensity only for small detector ranges at relatively low angles and for small thicknesses. In terms of the attainable precision, both methods show similar results, indicating picometer-range precision under realistic experimental conditions. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Use of instantaneous streamflow measurements to improve regression estimates of index flow for the summer month of lowest streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, David J.

    2011-01-01

    In Michigan, index flow Q50 is a streamflow characteristic defined as the minimum of median flows for July, August, and September. The state of Michigan uses index flow estimates to help regulate large (greater than 100,000 gallons per day) water withdrawals to prevent adverse effects on characteristic fish populations. At sites where long-term streamgages are located, index flows are computed directly from continuous streamflow records as GageQ50. In an earlier study, a multiple-regression equation was developed to estimate index flows IndxQ50 at ungaged sites. The index equation explains about 94 percent of the variability of index flows at 147 (index) streamgages by use of six explanatory variables describing soil type, aquifer transmissivity, land cover, and precipitation characteristics. This report extends the results of the previous study, by use of Monte Carlo simulations, to evaluate alternative flow estimators, DiscQ50, IntgQ50, SiteQ50, and AugmQ50. The Monte Carlo simulations treated each of the available index streamgages, in turn, as a miscellaneous site where streamflow conditions are described by one or more instantaneous measurements of flow. In the simulations, instantaneous flows were approximated by daily mean flows at the corresponding site. All estimators use information that can be obtained from instantaneous flow measurements and contemporaneous daily mean flow data from nearby long-term streamgages. The efficacy of these estimators was evaluated over a set of measurement intensities in which the number of simulated instantaneous flow measurements ranged from 1 to 100 at a site. The discrete measurement estimator DiscQ50 is based on a simple linear regression developed between information on daily mean flows at five or more streamgages near the miscellaneous site and their corresponding GageQ50 index flows. The regression relation then was used to compute a DiscQ50 estimate at the miscellaneous site by use of the simulated instantaneous flow measurement. This process was repeated to develop a set of DiscQ50 estimates for all simulated instantaneous measurements; a weighted DiscQ50 estimate was then formed from this set. Results indicated that the expected value of this weighted estimate was more precise than the IndxQ50 estimate for all measurement intensities evaluated. The integrated index-flow estimator, IntgQ50, was formed by computing a weighted average of the index estimate IndxQ50 and the DiscQ50 estimate. Results indicated that the IntgQ50 estimator was more precise than the DiscQ50 estimator at low measurement intensities of one to two measurements. At greater measurement intensities, the precision of the IntgQ50 estimator converges to the DiscQ50 estimator. Neither the DiscQ50 nor the IntgQ50 estimators provided site-specific estimates. In particular, although expected values of DiscQ50 and IntgQ50 estimates converge with increasing measurement intensity, they do not necessarily converge to the site-specific value of Q50. The site estimator of flow, SiteQ50, was developed to facilitate this convergence at higher measurement intensities. This is accomplished by use of the median of simulated instantaneous flow values for each measurement intensity level. A weighted estimate of the median and information associated with the IntgQ50 estimate was used to form the SiteQ50 estimate.
Initial simulations indicate that the SiteQ50 estimator generally has greater precision than the IntgQ50 estimator at measurement intensities greater than 3; however, additional analysis is needed to identify streamflow conditions under which instantaneous measurements will produce estimates that generally converge to the index flows. A preliminary augmented index regression equation was developed, which contains the index regression estimate and two additional variables associated with base-flow recession characteristics. When these recession variables were estimated as the medians of recession parameters compute

  4. Black hole spectroscopy: Systematic errors and ringdown energy estimates

    NASA Astrophysics Data System (ADS)

    Baibhav, Vishal; Berti, Emanuele; Cardoso, Vitor; Khanna, Gaurav

    2018-02-01

    The relaxation of a distorted black hole to its final state provides important tests of general relativity within the reach of current and upcoming gravitational wave facilities. In black hole perturbation theory, this phase consists of a simple linear superposition of exponentially damped sinusoids (the quasinormal modes) and of a power-law tail. How many quasinormal modes are necessary to describe waveforms with a prescribed precision? What error do we incur by only including quasinormal modes, and not tails? What other systematic effects are present in current state-of-the-art numerical waveforms? These issues, which are basic to testing fundamental physics with distorted black holes, have hardly been addressed in the literature. We use numerical relativity waveforms and accurate evolutions within black hole perturbation theory to provide some answers. We show that (i) a determination of the fundamental l = m = 2 quasinormal frequencies and damping times to within 1% or better requires the inclusion of at least the first overtone, and preferably of the first two or three overtones; (ii) a determination of the black hole mass and spin with precision better than 1% requires the inclusion of at least two quasinormal modes for any given angular harmonic mode (ℓ, m). We also improve on previous estimates and fits for the ringdown energy radiated in the various multipoles. These results are important to quantify theoretical (as opposed to instrumental) limits in parameter estimation accuracy and tests of general relativity allowed by ringdown measurements with high signal-to-noise ratio gravitational wave detectors.

  5. Fusing Continuous-Valued Medical Labels Using a Bayesian Model.

    PubMed

    Zhu, Tingting; Dunkley, Nic; Behar, Joachim; Clifton, David A; Clifford, Gari D

    2015-12-01

    With the rapid increase in volume of time series medical data available through wearable devices, there is a need to employ automated algorithms to label data. Examples of labels include interventions, changes in activity (e.g. sleep) and changes in physiology (e.g. arrhythmias). However, automated algorithms tend to be unreliable, resulting in lower quality care. Expert annotations are scarce, expensive, and prone to significant inter- and intra-observer variance. To address these problems, a Bayesian Continuous-valued Label Aggregator (BCLA) is proposed to provide a reliable estimation of label aggregation while accurately inferring the precision and bias of each algorithm. The BCLA was applied to QT interval (pro-arrhythmic indicator) estimation from the electrocardiogram using labels from the 2006 PhysioNet/Computing in Cardiology Challenge database. It was compared to the mean, median, and a previously proposed Expectation Maximization (EM) label aggregation approach. While accurately predicting each labelling algorithm's bias and precision, the root-mean-square error of the BCLA was 11.78 ± 0.63 ms, significantly outperforming the best Challenge entry (15.37 ± 2.13 ms) as well as the EM, mean, and median voting strategies (14.76 ± 0.52, 17.61 ± 0.55, and 14.43 ± 0.57 ms respectively, with p < 0.0001). The BCLA could therefore provide accurate estimation for medical continuous-valued label tasks in an unsupervised manner even when the ground truth is not available.
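    The BCLA itself is not specified in this record, but the underlying intuition (weight each labelling algorithm by its inferred precision and subtract its inferred bias) can be sketched with a simple EM-flavoured fixed point on simulated QT labels. Everything below is an illustrative stand-in, not the authors' algorithm.

    ```python
    import numpy as np

    # Simulated ground truth and three biased, noisy labelling algorithms.
    # All biases and noise levels are invented.
    rng = np.random.default_rng(3)
    truth = rng.normal(400, 20, size=500)                 # latent QT (ms)
    labels = np.stack([truth + rng.normal(b, s, 500)
                       for b, s in [(5, 8), (-10, 15), (0, 25)]])

    # Fixed-point iteration: re-estimate each annotator's bias and variance
    # against the current fused estimate, then fuse with precision weights.
    estimate = labels.mean(axis=0)
    for _ in range(20):
        bias = (labels - estimate).mean(axis=1, keepdims=True)
        var = (labels - estimate - bias).var(axis=1, keepdims=True)
        w = 1.0 / var
        estimate = (w * (labels - bias)).sum(axis=0) / w.sum(axis=0)

    rmse = np.sqrt(np.mean((estimate - truth) ** 2))
    print(f"RMSE of fused labels: {rmse:.2f} ms")   # beats the plain mean
    ```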

  6. Estimating the number of recreational anglers for a given waterbody

    USGS Publications Warehouse

    Pope, Kevin L.; Powell, Larkin A.; Harmon, Brian S.; Pegg, Mark A.; Chizinski, Christopher J.

    2017-01-01

    Knowing how many anglers use a given body of water is paramount for understanding components of a fishery related to angling pressure and harvest, yet no study has attempted to provide an estimate of the population size of anglers for a given waterbody. Here, we use information from creel surveys in a removal-sampling framework to estimate total numbers of anglers using six reservoirs in Nebraska, USA, and we examine the influence of the duration of sampling period on those estimates. Population estimates (N ± SE) of unique anglers were 2050 ± 45 for Branched Oak Lake, 1992 ± 29 for Calamus Reservoir, 929 ± 10 for Harlan County Reservoir, 985 ± 24 for Lake McConaughy, 1277 ± 24 for Merritt Reservoir, and 916 ± 18 for Pawnee Lake during April–October 2015. Shortening the sampling period by one or more months generally resulted in a greater effect on estimates of precision than on estimates of overall abundance. No relationship existed between abundances of unique anglers and angling pressures across reservoirs and sampling duration, indicative of a decoupling of angler abundance and angling pressure. The approach outlined herein has potential to provide defendable answers to “how many are there?”, questions we ask when subjects cannot be marked, which should provide new insights about angler populations and subpopulations.
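    The removal-sampling logic behind these estimates can be illustrated with the classic two-pass removal estimator; the month-by-month creel analysis in the paper is richer, and the counts below are invented.

    ```python
    # Two-pass removal sketch: c1 unique anglers recorded in the first period,
    # c2 previously unseen anglers in the second. Counts are hypothetical.
    def two_pass_removal(c1, c2):
        if c1 <= c2:
            raise ValueError("removal estimator requires c1 > c2")
        return c1 ** 2 / (c1 - c2)   # N = c1^2 / (c1 - c2)

    print(two_pass_removal(620, 410))  # ~1830 anglers
    ```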

  7. Comparing the cohort design and the nested case–control design in the presence of both time-invariant and time-dependent treatment and competing risks: bias and precision

    PubMed Central

    Austin, Peter C; Anderson, Geoffrey M; Cigsar, Candemir; Gruneir, Andrea

    2012-01-01

    Purpose: Observational studies using electronic administrative healthcare databases are often used to estimate the effects of treatments and exposures. Traditionally, a cohort design has been used to estimate these effects, but increasingly, studies are using a nested case–control (NCC) design. The relative statistical efficiency of these two designs has not been examined in detail. Methods: We used Monte Carlo simulations to compare these two designs in terms of the bias and precision of effect estimates. We examined three different settings: (A) treatment occurred at baseline, and there was a single outcome of interest; (B) treatment was time varying, and there was a single outcome; and (C) treatment occurred at baseline, and there was a secondary event that competed with the primary event of interest. Comparisons were made of percentage bias, length of 95% confidence interval, and mean squared error (MSE) as a combined measure of bias and precision. Results: In Setting A, bias was similar between designs, but the cohort design was more precise and had a lower MSE in all scenarios. In Settings B and C, the cohort design was more precise and had a lower MSE in all scenarios. In both Settings B and C, the NCC design tended to result in estimates with greater bias compared with the cohort design. Conclusions: We conclude that in a range of settings and scenarios, the cohort design is superior in terms of precision and MSE. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22653805

  8. Multiparameter Estimation in Networked Quantum Sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  9. Multiparameter Estimation in Networked Quantum Sensors

    DOE PAGES

    Proctor, Timothy J.; Knott, Paul A.; Dunningham, Jacob A.

    2018-02-21

    We introduce a general model for a network of quantum sensors, and we use this model to consider the question: When can entanglement between the sensors, and/or global measurements, enhance the precision with which the network can measure a set of unknown parameters? We rigorously answer this question by presenting precise theorems proving that for a broad class of problems there is, at most, a very limited intrinsic advantage to using entangled states or global measurements. Moreover, for many estimation problems separable states and local measurements are optimal, and can achieve the ultimate quantum limit on the estimation uncertainty. This immediately implies that there are broad conditions under which simultaneous estimation of multiple parameters cannot outperform individual, independent estimations. Our results apply to any situation in which spatially localized sensors are unitarily encoded with independent parameters, such as when estimating multiple linear or non-linear optical phase shifts in quantum imaging, or when mapping out the spatial profile of an unknown magnetic field. We conclude by showing that entangling the sensors can enhance the estimation precision when the parameters of interest are global properties of the entire network.

  10. Where in the world are my field plots? Using GPS effectively in environmental field studies

    USGS Publications Warehouse

    Johnson, Chris E.; Barton, Christopher C.

    2004-01-01

    Global positioning system (GPS) technology is rapidly replacing tape, compass, and traditional surveying instruments as the preferred tool for estimating the positions of environmental research sites. One important problem, however, is that it can be difficult to estimate the uncertainty of GPS-derived positions. Sources of error include various satellite- and site-related factors, such as forest canopy and topographic obstructions. In a case study from the Hubbard Brook Experimental Forest in New Hampshire, hand-held, mapping-grade GPS receivers generally estimated positions with 1–5 m precision in open, unobstructed settings, and 20–30 m precision under forest canopy. Surveying-grade receivers achieved precisions of 10 cm or less, even in challenging terrain. Users can maximize the quality of their GPS measurements by “mission planning” to take advantage of high-quality satellite conditions. Repeated measurements and simultaneous data collection at multiple points can be used to assess accuracy and precision.
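    A common way to quantify the precisions quoted above is the RMS scatter of repeated fixes about their mean position; a minimal sketch with simulated coordinates:

    ```python
    import numpy as np

    # Simulated repeated GPS fixes at one marker (local grid, metres);
    # the scatter level is an invented stand-in for under-canopy conditions.
    rng = np.random.default_rng(4)
    fixes = rng.normal([500.0, 700.0], [2.5, 2.5], size=(50, 2))

    mean_pos = fixes.mean(axis=0)
    rms = np.sqrt(np.mean(np.sum((fixes - mean_pos) ** 2, axis=1)))
    print(f"horizontal RMS precision: {rms:.2f} m")
    ```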

  11. Precision Timing of PSR J0437-4715: An Accurate Pulsar Distance, a High Pulsar Mass, and a Limit on the Variation of Newton's Gravitational Constant

    NASA Astrophysics Data System (ADS)

    Verbiest, J. P. W.; Bailes, M.; van Straten, W.; Hobbs, G. B.; Edwards, R. T.; Manchester, R. N.; Bhat, N. D. R.; Sarkissian, J. M.; Jacoby, B. A.; Kulkarni, S. R.

    2008-05-01

    Analysis of 10 years of high-precision timing data on the millisecond pulsar PSR J0437-4715 has resulted in a model-independent kinematic distance based on an apparent orbital period derivative, Ṗb, determined at the 1.5% level of precision (Dk = 157.0 ± 2.4 pc), making it one of the most accurate stellar distance estimates published to date. The discrepancy between this measurement and a previously published parallax distance estimate is attributed to errors in the DE200 solar system ephemerides. The precise measurement of Ṗb allows a limit on the variation of Newton's gravitational constant, |Ġ/G| ≤ 23 × 10^-12 yr^-1. We also constrain any anomalous acceleration along the line of sight to the pulsar to |a⊙/c| ≤ 1.5 × 10^-18 s^-1 at 95% confidence, and derive a pulsar mass, m_psr = 1.76 ± 0.20 M⊙, one of the highest estimates so far obtained.
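    The kinematic distance rests on the standard Shklovskii relation, reconstructed below under the assumption that the measured excess orbital period derivative is dominated by the pulsar's transverse motion (proper motion μ, distance D):

    ```latex
    % Standard Shklovskii-effect relation assumed here; the paper's full
    % analysis includes additional Galactic acceleration terms.
    \[
      \left(\frac{\dot{P}_b}{P_b}\right)_{\mathrm{kin}} = \frac{\mu^2 D}{c}
      \quad\Longrightarrow\quad
      D = \frac{c\,\dot{P}_b}{P_b\,\mu^2}
    \]
    ```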

  12. A comparison of energy expenditure estimation of several physical activity monitors.

    PubMed

    Dannecker, Kathryn L; Sazonova, Nadezhda A; Melanson, Edward L; Sazonov, Edward S; Browning, Raymond C

    2013-11-01

    Accurately and precisely estimating free-living energy expenditure (EE) is important for monitoring energy balance and quantifying physical activity. Recently, single and multisensor devices have been developed that can classify physical activities, potentially resulting in improved estimates of EE. This study aimed to determine the validity of EE estimation of a footwear-based physical activity monitor and to compare this validity against a variety of research and consumer physical activity monitors. Nineteen healthy young adults (10 men, 9 women) completed a 4-h stay in a room calorimeter. Participants wore a footwear-based physical activity monitor as well as Actical, ActiGraph, IDEEA, DirectLife, and Fitbit devices. Each individual performed a series of postures/activities. We developed models to estimate EE from the footwear-based device, and we used the manufacturer's software to estimate EE for all other devices. Estimated EE using the shoe-based device was not significantly different than measured EE (mean ± SE; 476 ± 20 vs 478 ± 18 kcal, respectively) and had a root-mean-square error of 29.6 kcal (6.2%). The IDEEA and the DirectLife estimates of EE were not significantly different than the measured EE, but the ActiGraph and the Fitbit devices significantly underestimated EE. Root-mean-square errors were 93.5 kcal (19%), 62.1 kcal (14%), 88.2 kcal (18%), 136.6 kcal (27%), 130.1 kcal (26%), and 143.2 kcal (28%) for Actical, DirectLife, IDEEA, ActiGraph, and Fitbit, respectively. The shoe-based physical activity monitor provides a valid estimate of EE, whereas the other physical activity monitors tested have a wide range of validity when estimating EE. Our results also demonstrate that estimating EE based on classification of physical activities can be more accurate and precise than estimating EE based on total physical activity.

  13. The GLAS Algorithm Theoretical Basis Document for Precision Orbit Determination (POD)

    NASA Technical Reports Server (NTRS)

    Rim, Hyung Jin; Yoon, S. P.; Schultz, Bob E.

    2013-01-01

    The Geoscience Laser Altimeter System (GLAS) was the sole instrument for NASA's Ice, Cloud and land Elevation Satellite (ICESat) laser altimetry mission. The primary purpose of the ICESat mission was to make ice sheet elevation measurements of the polar regions. Additional goals were to measure the global distribution of clouds and aerosols and to map sea ice, land topography and vegetation. ICESat was the benchmark Earth Observing System (EOS) mission to be used to determine the mass balance of the ice sheets, as well as for providing cloud property information, especially for stratospheric clouds common over polar areas. The GLAS instrument operated from 2003 to 2009 and provided multi-year elevation data needed to determine changes in sea ice freeboard, land topography and vegetation around the globe, in addition to elevation changes of the Greenland and Antarctic ice sheets. This document describes the Precision Orbit Determination (POD) algorithm for the ICESat mission. The problem of determining an accurate ephemeris for an orbiting satellite involves estimating the position and velocity of the satellite from a sequence of observations. The ICESat/GLAS elevation measurements must be very accurately geolocated, combining precise orbit information with precision pointing information. The ICESat mission POD requirement states that the position of the instrument should be determined with an accuracy of 5 and 20 cm (1-sigma) in the radial and horizontal components, respectively, to meet the science requirements for determining elevation change.

  14. Accuracy and precision of two indirect methods for estimating canopy fuels

    Treesearch

    Abran Steele-Feldman; Elizabeth Reinhardt; Russell A. Parsons

    2006-01-01

    We compared the accuracy and precision of digital hemispherical photography and the LI-COR LAI-2000 plant canopy analyzer as predictors of canopy fuels. We collected data on 12 plots in western Montana under a variety of lighting and sky conditions, and used a variety of processing methods to compute estimates. Repeated measurements from each method displayed...

  15. Improved shear wave group velocity estimation method based on spatiotemporal peak and thresholding motion search

    PubMed Central

    Amador, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F.; Urban, Matthew W.

    2017-01-01

    Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index (BMI), ultrasound scanners, scanning protocols, ultrasound image quality, etc. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this study, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time (spatiotemporal peak, STP); the second method applies an amplitude filter (spatiotemporal thresholding, STTH) to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared to TTP in phantom. Moreover, in a cohort of 14 healthy subjects STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared to conventional TTP. PMID:28092532

  16. Improved Shear Wave Group Velocity Estimation Method Based on Spatiotemporal Peak and Thresholding Motion Search.

    PubMed

    Amador Carrascal, Carolina; Chen, Shigao; Manduca, Armando; Greenleaf, James F; Urban, Matthew W

    2017-04-01

    Quantitative ultrasound elastography is increasingly being used in the assessment of chronic liver disease. Many studies have reported ranges of liver shear wave velocity values for healthy individuals and patients with different stages of liver fibrosis. Nonetheless, ongoing efforts exist to stabilize quantitative ultrasound elastography measurements by assessing factors that influence tissue shear wave velocity values, such as food intake, body mass index, ultrasound scanners, scanning protocols, and ultrasound image quality. Time-to-peak (TTP) methods have been routinely used to measure the shear wave velocity. However, there is still a need for methods that can provide robust shear wave velocity estimation in the presence of noisy motion data. The conventional TTP algorithm is limited to searching for the maximum motion in time profiles at different spatial locations. In this paper, two modified shear wave speed estimation algorithms are proposed. The first method searches for the maximum motion in both space and time [spatiotemporal peak (STP)]; the second method applies an amplitude filter [spatiotemporal thresholding (STTH)] to select points with motion amplitude higher than a threshold for shear wave group velocity estimation. The two proposed methods (STP and STTH) showed higher precision in shear wave velocity estimates compared with TTP in phantom. Moreover, in a cohort of 14 healthy subjects, STP and STTH methods improved both the shear wave velocity measurement precision and the success rate of the measurement compared with conventional TTP.
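
    As an editorial illustration of the contrast between these estimators, the sketch below implements a conventional TTP estimate alongside a simplified spatiotemporal peak search with an optional amplitude threshold. The exact peak-search and regression rules of the published STP/STTH algorithms may differ; this only conveys the general idea, in NumPy.

```python
# Hedged sketch of TTP vs. spatiotemporal peak/threshold wave-speed estimates.
# `motion` is a (n_positions, n_times) array of particle displacement;
# `x` (m) and `t` (s) are the spatial and temporal sample axes.
import numpy as np

def ttp_speed(motion, x, t):
    """Conventional TTP: time of max motion at each location, slope of x vs t."""
    tpk = t[np.argmax(motion, axis=1)]
    return np.polyfit(tpk, x, 1)[0]           # m/s

def stp_speed(motion, x, t, threshold=None):
    """Spatiotemporal peak search; optional amplitude threshold (STTH-like)."""
    ti = np.argmax(motion, axis=1)            # per-location peak times
    xi = np.argmax(motion, axis=0)            # per-time peak locations
    pts = [(t[j], x[i], motion[i, j]) for i, j in enumerate(ti)]
    pts += [(t[j], x[i], motion[i, j]) for j, i in enumerate(xi)]
    tt, xx, aa = map(np.array, zip(*pts))
    if threshold is not None:                 # keep strong-motion points only
        keep = aa >= threshold * aa.max()
        tt, xx = tt[keep], xx[keep]
    return np.polyfit(tt, xx, 1)[0]

# Demo on synthetic plane-wave motion travelling at 2 m/s (illustration only).
x = np.linspace(0, 0.02, 40)                  # m
t = np.linspace(0, 0.02, 400)                 # s
motion = np.exp(-((x[:, None] - 2.0 * t[None, :]) / 2e-3) ** 2)
print(ttp_speed(motion, x, t), stp_speed(motion, x, t, threshold=0.8))
```

    With clean data the estimates coincide; the amplitude threshold mainly helps when low-amplitude samples are noise-dominated, which is the regime the paper targets.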

  17. Second Iteration of Photogrammetric Pipeline to Enhance the Accuracy of Image Pose Estimation

    NASA Astrophysics Data System (ADS)

    Nguyen, T. G.; Pierrot-Deseilligny, M.; Muller, J.-M.; Thom, C.

    2017-05-01

    In the classical photogrammetric processing pipeline, automatic tie point extraction plays a key role in the quality of the achieved results. The image tie points are crucial to pose estimation and have a significant influence on the precision of the calculated orientation parameters; therefore, both the relative and absolute orientations of the 3D model can be affected. By improving the precision of image tie point measurement, one can enhance the quality of image orientation. The quality of image tie points depends on several factors, such as their multiplicity, measurement precision, and distribution in the 2D images as well as in the 3D scene. In complex acquisition scenarios such as indoor applications and oblique aerial images, tie point extraction is limited when only image information can be exploited. Hence, we propose a method which improves the precision of pose estimation in complex scenarios by adding a second iteration to the classical processing pipeline: the result of the first iteration is used as a priori information to guide the extraction of new tie points with better quality. Evaluated on multiple case studies, the proposed method demonstrates its validity and high potential for precision improvement.

  18. Can real time location system technology (RTLS) provide useful estimates of time use by nursing personnel?

    PubMed

    Jones, Terry L; Schlegel, Cara

    2014-02-01

    Accurate, precise, unbiased, reliable, and cost-effective estimates of nursing time use are needed to ensure safe staffing levels. Direct observation of nurses is costly, and conventional surrogate measures have limitations. To test the potential of electronic capture of time and motion through real-time location systems (RTLS), a pilot study was conducted to assess the efficacy (method agreement) of RTLS time use, the inter-rater reliability of RTLS time-use estimates, and the associated costs. Method agreement was high (mean absolute difference = 28 seconds); inter-rater reliability was high (ICC = 0.81-0.95; mean absolute difference = 2 seconds); and costs for obtaining RTLS time-use estimates on a single nursing unit exceeded $25,000. Continued experimentation with RTLS to obtain time-use estimates for nursing staff is warranted.

  19. Estimating malaria transmission from humans to mosquitoes in a noisy landscape

    PubMed Central

    Reiner, Robert C.; Guerra, Carlos; Donnelly, Martin J.; Bousema, Teun; Drakeley, Chris; Smith, David L.

    2015-01-01

    A basic quantitative understanding of malaria transmission requires measuring the probability a mosquito becomes infected after feeding on a human. Parasite prevalence in mosquitoes is highly age-dependent, and the unknown age-structure of fluctuating mosquito populations impedes estimation. Here, we simulate mosquito infection dynamics, where mosquito recruitment is modelled seasonally with fractional Brownian noise, and we develop methods for estimating mosquito infection rates. We find that noise introduces bias, but the magnitude of the bias depends on the 'colour' of the noise. Some of these problems can be overcome by increasing the sampling frequency, but estimates of transmission rates (and estimated reductions in transmission) are most accurate and precise if they combine parity, oocyst rates and sporozoite rates. These studies provide a basis for evaluating the adequacy of various entomological sampling procedures for measuring malaria parasite transmission from humans to mosquitoes and for evaluating the direct transmission-blocking effects of a vaccine. PMID:26400195

  20. [Estimation with the capture-recapture method of the number of economic immigrants in Mallorca].

    PubMed

    Ramos Monserrat, M; March Cerdá, J C

    2002-05-15

    To estimate the number of irregular economic immigrants in Mallorca, we used the capture-recapture method, an indirect method based on contrasting data from two or more sources. Data were obtained from the Delegación de Gobierno (police and immigration authority), Comisiones Obreras (labor union), and institutions that provide health-related services to immigrants. Individuals were identified by birth date and country of origin. The total number of economic immigrants estimated with this method was 39,392. According to the Delegación de Gobierno data, the number of regular immigrants on the date of our inquiry was 9,000; with the capture-recapture method, the number of irregular immigrants in Mallorca was therefore estimated at about 30,000. The capture-recapture method can be useful for estimating the population of irregular immigrants in a given area at a given time, if sufficiently precise information on the identity of each individual can be obtained.
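
    For the simplest two-source case, the contrast of sources reduces to the classic Lincoln-Petersen estimator (shown here with the Chapman correction). The counts below are hypothetical and chosen only to show the arithmetic; the study matched individuals across sources by birth date and country of origin.

```python
# Two-source capture-recapture (Chapman) estimate: the simplest instance of
# the method described above. All counts here are invented for illustration.
def chapman_estimate(n1, n2, m):
    """n1, n2: individuals listed in each source; m: individuals in both."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical example: 4000 in source A, 2500 in source B, 250 matched.
print(round(chapman_estimate(4000, 2500, 250)))  # ~39,866 total population
```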

  1. Monitoring forest areas from continental to territorial levels using a sample of medium spatial resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Eva, Hugh; Carboni, Silvia; Achard, Frédéric; Stach, Nicolas; Durieux, Laurent; Faure, Jean-François; Mollicone, Danilo

    A global systematic sampling scheme has been developed by the UN FAO and the EC TREES project to estimate rates of deforestation at global or continental levels at intervals of 5 to 10 years. This global scheme can be intensified to produce results at the national level. In this paper, using surrogate observations, we compare the deforestation estimates derived from these two levels of sampling intensity (one global, for the Brazilian Amazon; the other national, for French Guiana) with estimates derived from the official inventories. We also report the precisions that are achieved given sampling error and, in the case of French Guiana, compare this precision with that of the official inventory. We extracted nine sample data sets from the official wall-to-wall deforestation map derived from satellite interpretations produced for the Brazilian Amazon for 2002 to 2003. The global sampling scheme gives an estimate of 2.81 million ha of deforestation (mean of the nine simulated replicates) with a standard error of 0.10 million ha. This compares with the full-population estimate from the wall-to-wall interpretations of 2.73 million ha deforested, which is within one standard error of our sampling test estimate. The relative difference between the mean estimate from the sampling approach and the full-population estimate is 3.1%, and the standard error represents 4.0% of the full-population estimate. The global sampling scheme is then intensified to a territorial level in a case study over French Guiana to estimate deforestation between 1990 and 2006. For the historical reference year, 1990, Landsat-5 Thematic Mapper data were used; a coverage of SPOT-HRV imagery at 20 m × 20 m resolution acquired at the Cayenne receiving station in French Guiana was used for 2006. Our estimates from the intensified global sampling scheme over French Guiana are compared with those produced by the national authority to report on deforestation rates under the Kyoto Protocol rules for its overseas department. The latter estimates come from a sample of nearly 17,000 plots analyzed from the same satellite imagery acquired between 1990 and 2006, following the traditional forest inventory methods of the IFN (Inventaire Forestier National). Our intensified global sampling scheme leads to an estimate of 96,650 ha deforested between 1990 and 2006, which is within the 95% confidence interval of the IFN sampling scheme; the latter gives an estimate of 91,722 ha, a relative difference from the IFN of 5.4%. These results demonstrate that the intensification of the global sampling scheme can provide forest area change estimates close to those achieved by official forest inventories (<6%), with precisions of between 4% and 7%, although we only estimate errors from sampling, not from the use of surrogate data. Such methods could be used by developing countries to demonstrate that they are fulfilling requirements for reducing emissions from deforestation in the framework of the REDD (Reducing Emissions from Deforestation in Developing Countries) mechanism under discussion within the United Nations Framework Convention on Climate Change (UNFCCC). Monitoring systems at national levels in tropical countries can also benefit from pan-tropical and regional observations, which help ensure consistency between different national monitoring systems.
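
    The replicate arithmetic behind the reported precision is simple. In the sketch below the nine values are invented stand-ins at the paper's reported scale; note that whether "standard error" denotes the replicate SD (the sampling error of a single application of the scheme) or SD/sqrt(9) is a convention choice.

```python
# Minimal arithmetic of a replicate-based precision check: the spread of the
# simulated sample-based estimates gauges the sampling error of one
# application of the scheme. The nine values are invented (million ha).
import numpy as np

reps = np.array([2.71, 2.95, 2.78, 2.69, 2.88, 2.80, 2.74, 2.92, 2.82])
print(f"mean = {reps.mean():.2f} Mha, replicate SD = {reps.std(ddof=1):.2f} Mha")
```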

  2. Viral Load Criteria and Threshold Optimization to Improve HIV Incidence Assay Characteristics - A CEPHIA Analysis

    PubMed Central

    Kassanjee, Reshma; Pilcher, Christopher D; Busch, Michael P; Murphy, Gary; Facente, Shelley N; Keating, Sheila M; Mckinney, Elaine; Marson, Kara; Price, Matthew A; Martin, Jeffrey N; Little, Susan J; Hecht, Frederick M; Kallas, Esper G; Welte, Alex

    2016-01-01

    Objective: Assays for classifying HIV infections as 'recent' or 'non-recent' for incidence surveillance fail to simultaneously achieve large mean durations of 'recent' infection (MDRIs) and low 'false-recent' rates (FRRs), particularly in virally suppressed persons. The potential for optimizing recent infection testing algorithms (RITAs), by introducing viral load criteria and tuning the thresholds used to dichotomize quantitative measures, is explored. Design: The Consortium for the Evaluation and Performance of HIV Incidence Assays characterized over 2000 possible RITAs constructed from seven assays (LAg, BED, Less-sensitive Vitros, Vitros Avidity, BioRad Avidity, Architect Avidity and Geenius) applied to 2500 diverse specimens. Methods: MDRIs were estimated using regression, and FRRs as observed 'recent' proportions, in various specimen sets. Context-specific FRRs were estimated for hypothetical scenarios. FRRs were made directly comparable by constructing RITAs with the same MDRI through the tuning of thresholds. RITA utility was summarized by the precision of incidence estimation. Results: All assays produce high FRRs amongst treated subjects and elite controllers (10%-80%). Viral load testing reduces FRRs, but diminishes MDRIs. Context-specific FRRs vary substantially by scenario – BioRad Avidity and LAg provided the lowest FRRs and highest incidence precision in the scenarios considered. Conclusions: The introduction of a low viral load threshold provides crucial improvements in RITAs. However, it does not eliminate non-zero FRRs, and MDRIs must be consistently estimated. The tuning of thresholds is essential for comparing and optimizing the use of assays. The translation of directly measured FRRs into context-specific FRRs critically affects their magnitudes and our understanding of the utility of assays. PMID:27454561

  3. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis" where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation and interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
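
    A minimal sketch of the flipping trick that lets right-censored Kaplan-Meier machinery handle left-censored (below-detection-limit) data is given below. The product-limit code is deliberately bare and the concentrations invented; real analyses should use vetted software such as the S-language routines the paper describes.

```python
# Flip left-censored data about a constant larger than the maximum, apply a
# right-censored Kaplan-Meier product-limit estimator, then flip back.
import numpy as np

def km_survival(times, observed):
    """Product-limit S(t) at each event time, for right-censored data."""
    order = np.argsort(times)
    times, observed = np.asarray(times)[order], np.asarray(observed)[order]
    n_at_risk, surv, out = len(times), 1.0, {}
    for t, d in zip(times, observed):
        if d:                                  # an event, not a censoring
            surv *= 1.0 - 1.0 / n_at_risk
            out[t] = surv
        n_at_risk -= 1
    return out

# Left-censored concentrations: reported value and whether it was detected
# (False means "below the detection limit", i.e. left-censored at that DL).
conc     = np.array([0.5, 1.2, 2.0, 3.1, 0.8, 4.4])
detected = np.array([False, True, True, True, False, True])

flip = conc.max() + 1.0                        # any constant > max(conc)
S = km_survival(flip - conc, detected)         # now a right-censored problem
ecdf = {flip - t: s for t, s in S.items()}     # P(X < c) at detected values c
print(ecdf)
```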

  4. Covariate Imbalance and Precision in Measuring Treatment Effects

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven

    2011-01-01

    Covariate adjustment can increase the precision of estimates by removing unexplained variance from the error in randomized experiments, although chance covariate imbalance tends to counteract the improvement in precision. The author develops an easy measure to examine chance covariate imbalance in randomization by standardizing the average…

  5. Decorrelation of the true and estimated classifier errors in high-dimensional settings.

    PubMed

    Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R

    2007-01-01

    The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity, which refers to the precision of error estimation, is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three commonly used error estimators (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) a known feature set, and (3) all features. Only the first is of practical interest; the other two are needed for comparison purposes. We observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.

  6. Non-convex Statistical Optimization for Sparse Tensor Graphical Model

    PubMed Central

    Sun, Wei; Wang, Zhaoran; Liu, Han; Cheng, Guang

    2016-01-01

    We consider the estimation of sparse graphical models that characterize the dependency structure of high-dimensional tensor-valued data. To facilitate the estimation of the precision matrix corresponding to each way of the tensor, we assume the data follow a tensor normal distribution whose covariance has a Kronecker product structure. The penalized maximum likelihood estimation of this model involves minimizing a non-convex objective function. In spite of the non-convexity of this estimation problem, we prove that an alternating minimization algorithm, which iteratively estimates each sparse precision matrix while fixing the others, attains an estimator with the optimal statistical rate of convergence as well as consistent graph recovery. Notably, such an estimator achieves estimation consistency with only one tensor sample, which had not been achieved in previous work. Our theoretical results are backed by thorough numerical studies. PMID:28316459

  7. Proficiency testing as a basis for estimating uncertainty of measurement: application to forensic alcohol and toxicology quantitations.

    PubMed

    Wallace, Jack

    2010-05-01

    While forensic laboratories will soon be required to estimate uncertainties of measurement for those quantitations reported to the end users of the information, the procedures for estimating this have been little discussed in the forensic literature. This article illustrates how proficiency test results provide the basis for estimating uncertainties in three instances: (i) For breath alcohol analyzers the interlaboratory precision is taken as a direct measure of uncertainty. This approach applies when the number of proficiency tests is small. (ii) For blood alcohol, the uncertainty is calculated from the differences between the laboratory's proficiency testing results and the mean quantitations determined by the participants; this approach applies when the laboratory has participated in a large number of tests. (iii) For toxicology, either of these approaches is useful for estimating comparability between laboratories, but not for estimating absolute accuracy. It is seen that data from proficiency tests enable estimates of uncertainty that are empirical, simple, thorough, and applicable to a wide range of concentrations.
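
    Approach (ii) can be illustrated in a few lines. The proficiency-test results below are invented, and expressing the uncertainty as an RMS of relative differences (capturing both bias and scatter) is one common convention, not necessarily the article's exact formula.

```python
# Sketch of approach (ii): uncertainty for blood alcohol from a laboratory's
# history of proficiency-test results vs. participant consensus means.
# All values are hypothetical.
import numpy as np

lab       = np.array([0.081, 0.102, 0.149, 0.078, 0.121])  # lab results (g/dL)
consensus = np.array([0.080, 0.100, 0.152, 0.080, 0.119])  # participant means

rel_diff = (lab - consensus) / consensus        # relative differences
u = np.sqrt(np.mean(rel_diff**2))               # relative standard uncertainty
print(f"relative u = {100*u:.1f}%, expanded (k=2) = {200*u:.1f}%")
```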

  8. Palila abundance estimates and trend

    USGS Publications Warehouse

    Camp, Richard; Banko, Paul C.

    2012-01-01

    The Palila (Loxioides bailleui) is an endangered, seed-eating, finch-billed honeycreeper found only on Hawai`i Island. Once occurring on the islands of Kaua`i and O`ahu and Mauna Loa and Hualālai volcanoes of Hawai`i, Palila are now found only in subalpine, dry-forest habitats on Mauna Kea (Banko et al. 2002). Previous analyses showed that Palila numbers fluctuated throughout the 1980s and 1990s but declined rapidly and steadily since 2003 (Jacobi et al. 1996, Leonard et al. 2008, Banko et al. 2009, Gorresen et al. 2009, Banko et al. in press). The aim of this report is to update abundance estimates for the Palila based on the 2012 surveys. We assess Palila trends over two periods: 1) the long-term trend during 1998–2012 and 2) the short-term trajectory between 2003 and 2012. The first period evaluates the population trend for the entire time series since additional transects were established (Johnson et al. 2006). These additional transects were established to produce a more precise population estimate and provide more complete coverage of the Palila range. The initial year for short-term trajectory was chosen subjectively to coincide with the recent decline in the Palila population. Additionally, stations in the core Palila habitat were surveyed on two occasions in 2012, thus allowing us to address the question of how repeat samples improve estimate precision.

  9. Satellite laser ranging to low Earth orbiters: orbit and network validation

    NASA Astrophysics Data System (ADS)

    Arnold, Daniel; Montenbruck, Oliver; Hackel, Stefan; Sośnica, Krzysztof

    2018-04-01

    Satellite laser ranging (SLR) to low Earth orbiters (LEOs) provides optical distance measurements with mm-to-cm-level precision. SLR residuals, i.e., differences between measured and modeled ranges, serve as a common figure of merit for the quality assessment of orbits derived by radiometric tracking techniques. We discuss relevant processing standards for the modeling of SLR observations and highlight the importance of line-of-sight-dependent range corrections for the various types of laser retroreflector arrays. A 1-3 cm consistency of SLR observations and GPS-based precise orbits is demonstrated for a wide range of past and present LEO missions supported by the International Laser Ranging Service (ILRS). A parameter estimation approach is presented to investigate systematic orbit errors, and it is shown that SLR validation of LEO satellites is able to detect not only radial but also along-track and cross-track offsets. SLR residual statistics clearly depend on the employed precise orbit determination technique (kinematic vs. reduced-dynamic, float vs. fixed ambiguities) but also reveal pronounced differences in ILRS station performance. Using the residual-based parameter estimation approach, corrections to ILRS station coordinates, range biases, and timing offsets are derived. As a result, root-mean-square residuals of 5-10 mm have been achieved over a 1-year data arc in 2016 using observations from a subset of high-performance stations and ambiguity-fixed orbits of four LEO missions. As a final contribution, we demonstrate that SLR can validate not only single-satellite orbit solutions but also precise baseline solutions of formation-flying missions such as GRACE, TanDEM-X, and Swarm.
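
    The residual-based offset estimation can be sketched as an ordinary least-squares problem: range residuals are modeled as the projection of a constant radial/along-track/cross-track orbit offset onto the station line of sight, plus a range bias. The geometry below is simulated, and the real adjustment also carries station coordinates and timing offsets.

```python
# Toy version of residual-based orbit-offset estimation: residuals are the
# line-of-sight projection of a constant RTN offset plus a range bias.
# Line-of-sight vectors and noise levels are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(4)
los = rng.normal(size=(200, 3))
los /= np.linalg.norm(los, axis=1, keepdims=True)   # unit LOS in RTN frame
true = np.array([0.015, -0.008, 0.004, 0.002])      # dR, dT, dN (m), bias (m)
res = los @ true[:3] + true[3] + rng.normal(0, 0.006, 200)

H = np.hstack([los, np.ones((200, 1))])             # design matrix
est, *_ = np.linalg.lstsq(H, res, rcond=None)
print(est)   # recovers [dR, dT, dN, bias] within the noise
```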

  10. Application of GPS Measurements for Ionospheric and Tropospheric Modelling

    NASA Astrophysics Data System (ADS)

    Rajendra Prasad, P.; Abdu, M. A.; Furlan, Benedito. M. P.; Koiti Kuga, Hélio

    The Global Positioning System (GPS) was originally developed for military navigation. The DOD's primary purposes were to use the system for precision weapon delivery and to provide a capability that would help reverse the proliferation of navigation systems in the military. It was quickly realized, however, that civil and scientific use would far outstrip military use. A variety of scientific applications are uniquely suited to precise positioning capabilities; the relatively high precision, low cost, mobility, and convenience of GPS receivers make positioning attractive. Other applications include precise time measurement, surveying, and geodesy, apart from orbit and attitude determination and many user services. The system operates by transmitting radio waves from satellites to receivers on the ground, on aircraft, or on other satellites, and these signals are used to calculate location very accurately. The Standard Positioning Service (SPS) restricts access to the Coarse/Acquisition (C/A) code and carrier signals on the L1 frequency only, and the accuracy it provides falls short of most users' requirements. The upper atmosphere is ionized by ultraviolet radiation from the sun, and significant positioning errors can result when the signals are refracted and slowed by ionospheric conditions; the parameter of the ionosphere that produces most of the effect on GPS signals is the total number of electrons in the ionospheric propagation path. This integrated number of electrons, called the Total Electron Content (TEC), varies not only from day to night, with the time of year, and with the solar flux cycle, but also with geomagnetic latitude and longitude. Being a plasma, the ionosphere affects the radio waves propagating through it. The effects of scintillation on GPS satellite navigation systems operating at the L1 (1.5754 GHz) and L2 (1.2276 GHz) frequencies have not been estimated accurately. It is generally recognized that GPS navigation systems are vulnerable in the polar and especially in the equatorial regions during the solar maximum period. In the equatorial region the irregularity structures are highly elongated in the north-south direction and discrete in the east-west direction, with dimensions of several hundred km. Given such a spatial distribution of irregularities, one needs to determine how often GPS receivers fail to provide navigation aid with the available constellation. The effects of scintillation on the performance of GPS navigation systems in the equatorial region can be analyzed by commissioning a few ground receivers; incidentally, there are few GPS receivers near these latitudes. Despite recent advances in ionosphere and troposphere delay modeling for geodetic applications of GPS, the models currently used are not very precise. The conventional operational ionosphere models, viz. the Klobuchar, Bent, and IRI models, have certain limitations in providing very precise accuracies at all latitudes, and troposphere delay modeling also suffers in accuracy. The advances made in both computing power and knowledge of the atmosphere motivate an effort to upgrade some of these models to improve delay corrections in GPS navigation. The ionospheric group delay corrections for orbit determination can be minimized using dual frequencies; in single-frequency measurements, however, the group delay correction is an involved task. In this paper an investigation is carried out to estimate the model coefficients of the ionosphere along with precise orbit determination modeling using GPS measurements.
The locations of the ground-based receivers near the equator are known very precisely. Measurements from these ground stations to a precisely known satellite carrying a dual-frequency receiver are used for orbit determination, and the ionosphere model parameters can be refined using spatially distributed GPS receivers spread over Brazil. The tropospheric delay effects are not significant for the satellites if an appropriate elevation angle is chosen; however, they need to be analyzed for users such as aircraft. In this paper, a brief description of GPS data utilization, the navigation message, orbit computation, precise orbit determination, and ionosphere and troposphere models is given. The methodology for refining ionosphere model coefficients is presented, along with plots and results related to orbit determination. The study demonstrated the feasibility of estimating ionospheric group delay at specific latitudes, which could be improved by refining some of the model coefficients using GPS measurements. It is also possible to accurately determine the tropospheric delay, which may be used for an aircraft in flight without access to real-time meteorological information.

  11. The Azimuth Structure of Nuclear Collisions — I

    NASA Astrophysics Data System (ADS)

    Trainor, Thomas A.; Kettler, David T.

    We describe azimuth structure commonly associated with elliptic and directed flow in the context of 2D angular autocorrelations for the purpose of precise separation of so-called nonflow (mainly minijets) from flow. We extend the Fourier-transform description of azimuth structure to include power spectra and autocorrelations related by the Wiener-Khintchine theorem. We analyze several examples of conventional flow analysis in that context and question the relevance of reaction plane estimation to flow analysis. We introduce the 2D angular autocorrelation with examples from data analysis and describe a simulation exercise which demonstrates precise separation of flow and nonflow using the 2D autocorrelation method. We show that an alternative correlation measure based on Pearson's normalized covariance provides a more intuitive measure of azimuth structure.

  12. The critical distance in laser-induced plasmas: an operative definition

    NASA Astrophysics Data System (ADS)

    Delle Side, D.; Giuffreda, E.; Nassisi, V.

    2016-05-01

    We propose a method to estimate a precise value for the critical distance Lcr beyond which three-body recombination stops producing charge losses in an expanding laser-induced plasma. We show in particular that the total charge collected has a "reversed sigmoid" shape as a function of the target-to-detector distance. Fitting the total charge data with a logistic-related function, we take Lcr to be the intercept of the tangent to this curve at its inflection point. Furthermore, this value scales well with theoretical predictions. From the application point of view, this could be of great practical interest, since it provides a reliable way to precisely determine the geometry of the extraction system in laser ion sources.
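
    A hedged numerical sketch of the fitting construction follows: fit a reversed sigmoid to total charge versus distance and take the tangent at the inflection point. Here Lcr is read off as the tangent's intercept with the lower plateau, one plausible reading of the construction; the data and parameters are invented.

```python
# Fit a reversed-sigmoid (logistic) curve to total charge vs. distance and
# derive a critical distance from the tangent at the inflection point.
import numpy as np
from scipy.optimize import curve_fit

def rev_sigmoid(d, q_hi, q_lo, d0, k):
    return q_lo + (q_hi - q_lo) / (1.0 + np.exp((d - d0) / k))

d = np.linspace(2, 30, 15)                       # target-detector distance (cm)
q = rev_sigmoid(d, 80, 20, 12, 2.5) + np.random.default_rng(2).normal(0, 1, d.size)

(q_hi, q_lo, d0, k), _ = curve_fit(rev_sigmoid, d, q, p0=[70, 30, 10, 2])
# The tangent at the inflection d0 has slope -(q_hi - q_lo)/(4k); it meets
# the lower plateau q_lo at d0 + 2k, taken here as the critical distance.
print(f"Lcr ~ {d0 + 2 * k:.1f} cm")
```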

  13. State Estimation Using Dependent Evidence Fusion: Application to Acoustic Resonance-Based Liquid Level Measurement.

    PubMed

    Xu, Xiaobin; Li, Zhenghui; Li, Guo; Zhou, Zhe

    2017-04-21

    Estimating the state of a dynamic system from noisy sensor measurements is a common problem in sensor methods and applications. Most state estimation methods assume that measurement noise and state perturbations can be modeled as random variables with known statistical properties. In some practical applications, however, engineers can only obtain the range of the noises rather than their precise statistical distributions. Hence, in the framework of Dempster-Shafer (DS) evidence theory, a novel state estimation method is presented that fuses dependent evidence generated from the state equation, the observation equation, and the actual observations of the system states under bounded noises. It can be implemented iteratively to provide state estimates calculated from the fusion results at every time step. Finally, the proposed method is applied to a low-frequency acoustic-resonance level gauge and yields high-accuracy measurement results.

  14. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis-Hastings Markov Chain Monte Carlo algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Hongrui; Wang, Cheng; Wang, Ying; Gao, Xiong; Yu, Chen

    2017-06-01

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov chain Monte Carlo algorithm and applies this method to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean value of the daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower credible interval than the MLE confidence interval and thus a more precise estimation by using the related information from regional gage stations. The Bayesian MCMC method may therefore be more favorable for uncertainty analysis and risk management.
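
    A minimal random-walk Metropolis-Hastings sketch for a single parameter (a mean daily flow under a normal likelihood with known spread and a flat prior) is shown below. The paper's model for the Zhujiachuan data is of course richer; this only shows how the MCMC chain yields a posterior credible interval.

```python
# Random-walk Metropolis-Hastings for one parameter; data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
flows = rng.normal(25.0, 4.0, size=200)        # stand-in daily flow data

def log_post(mu, data, sigma=4.0):             # flat prior => likelihood only
    return -0.5 * np.sum((data - mu) ** 2) / sigma**2

mu, chain = flows.mean(), []
for _ in range(20000):
    prop = mu + rng.normal(0, 0.5)             # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop, flows) - log_post(mu, flows):
        mu = prop                              # accept
    chain.append(mu)

post = np.array(chain[2000:])                  # drop burn-in
print(np.percentile(post, [2.5, 50, 97.5]))    # 95% credible interval
```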

  15. Automated Semantic Indexing of Figure Captions to Improve Radiology Image Retrieval

    PubMed Central

    Kahn, Charles E.; Rubin, Daniel L.

    2009-01-01

    Objective: We explored automated concept-based indexing of unstructured figure captions to improve retrieval of images from radiology journals. Design: The MetaMap Transfer program (MMTx) was used to map the text of 84,846 figure captions from 9,004 peer-reviewed, English-language articles to concepts in three controlled vocabularies from the UMLS Metathesaurus, version 2006AA. Sampling procedures were used to estimate the standard information-retrieval metrics of precision and recall, and to evaluate the degree to which concept-based retrieval improved image retrieval. Measurements: Precision was estimated based on a sample of 250 concepts. Recall was estimated based on a sample of 40 concepts. The authors measured the impact of concept-based retrieval to improve upon keyword-based retrieval in a random sample of 10,000 search queries issued by users of a radiology image search engine. Results: Estimated precision was 0.897 (95% confidence interval, 0.857–0.937). Estimated recall was 0.930 (95% confidence interval, 0.838–1.000). In 5,535 of 10,000 search queries (55%), concept-based retrieval found results not identified by simple keyword matching; in 2,086 searches (21%), more than 75% of the results were found by concept-based search alone. Conclusion: Concept-based indexing of radiology journal figure captions achieved very high precision and recall, and significantly improved image retrieval. PMID:19261938

  16. Fat fraction bias correction using T1 estimates and flip angle mapping.

    PubMed

    Yang, Issac Y; Cui, Yifan; Wiens, Curtis N; Wade, Trevor P; Friesen-Waldner, Lanette J; McKenzie, Charles A

    2014-01-01

    To develop a new method of reducing T1 bias in proton density fat fraction (PDFF) measured with iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). PDFF maps reconstructed from high flip angle IDEAL measurements were simulated and acquired from phantoms and volunteer L4 vertebrae. T1 bias was corrected using a priori T1 values for water and fat, both with and without flip angle correction. Signal-to-noise ratio (SNR) maps were used to measure precision of the reconstructed PDFF maps. PDFF measurements acquired using small flip angles were then compared to both sets of corrected large flip angle measurements for accuracy and precision. Simulations show similar results in PDFF error between small flip angle measurements and corrected large flip angle measurements as long as T1 estimates were within one standard deviation from the true value. Compared to low flip angle measurements, phantom and in vivo measurements demonstrate better precision and accuracy in PDFF measurements if images were acquired at a high flip angle, with T1 bias corrected using T1 estimates and flip angle mapping. T1 bias correction of large flip angle acquisitions using estimated T1 values with flip angle mapping yields fat fraction measurements of similar accuracy and superior precision compared to low flip angle acquisitions.

  17. Abbreviation definition identification based on automatic precision estimates.

    PubMed

    Sohn, Sunghwan; Comeau, Donald C; Kim, Won; Wilbur, W John

    2008-09-25

    The rapid growth of biomedical literature presents challenges for automatic text processing, and one of these challenges is abbreviation identification. The presence of unrecognized abbreviations in text hinders indexing algorithms and adversely affects information retrieval and extraction. Automatic abbreviation definition identification can help resolve these issues. However, abbreviations and their definitions identified by an automatic process are of uncertain validity. Due to the size of databases such as MEDLINE, only a small fraction of abbreviation-definition pairs can be examined manually. An automatic way to estimate the accuracy of abbreviation-definition pairs extracted from text is needed. In this paper we propose an abbreviation definition identification algorithm that employs a variety of strategies to identify the most probable abbreviation definition. In addition, our algorithm produces an accuracy estimate, pseudo-precision, for each strategy without using a human-judged gold standard. The pseudo-precisions determine the order in which the algorithm applies the strategies in seeking to identify the definition of an abbreviation. On the Medstract corpus our algorithm produced 97% precision and 85% recall, which is higher than previously reported results. We also annotated 1250 randomly selected MEDLINE records as a gold standard. On this set we achieved 96.5% precision and 83.2% recall. This compares favourably with the well-known Schwartz and Hearst algorithm. We developed an algorithm for abbreviation identification that uses a variety of strategies to identify the most probable definition for an abbreviation and also produces an estimated accuracy of the result. This process is purely automatic.

  18. Real-time estimation of BDS/GPS high-rate satellite clock offsets using sequential least squares

    NASA Astrophysics Data System (ADS)

    Fu, Wenju; Yang, Yuanxi; Zhang, Qin; Huang, Guanwen

    2018-07-01

    The real-time precise satellite clock product is one of the key prerequisites for real-time Precise Point Positioning (PPP). The accuracy of the 24-hour predicted satellite clock product, with a 15 min sampling interval and an update every 6 h, provided by the International GNSS Service (IGS) is only 3 ns, which cannot meet the needs of all real-time PPP applications. Real-time estimation of high-rate satellite clock offsets is an efficient way to improve the accuracy. In this paper, a sequential least squares method for estimating real-time satellite clock offsets at a high sample rate is proposed; it improves computational speed by applying an optimized sparse matrix operation to form the normal equations and by taking full advantage of modern computing power. The method is first applied to the BeiDou Navigation Satellite System (BDS) and provides real-time estimation at a 1 s sample rate. The results show that the time taken to process a single epoch is about 0.12 s using 28 stations. The standard deviation (STD) and root mean square (RMS) of the real-time estimated BDS satellite clock offsets are 0.17 ns and 0.44 ns, respectively, when compared to the German Research Center for Geosciences (GFZ) final clock products. The positioning performance of the real-time estimated satellite clock offsets is evaluated: the RMSs of real-time BDS kinematic PPP in the east, north, and vertical components are 7.6 cm, 6.4 cm, and 19.6 cm, respectively. The method is also applied to the Global Positioning System (GPS) at a 10 s sample rate, and the computational time for most epochs is less than 1.5 s with 75 stations. The STD and RMS of the real-time estimated GPS satellite clocks are 0.11 ns and 0.27 ns, respectively, and accuracies of 5.6 cm, 2.6 cm, and 7.9 cm in the east, north, and vertical components are achieved for real-time GPS kinematic PPP.
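
    One standard formulation of a sequential least-squares measurement update is the covariance (gain) form below; the paper's implementation instead accumulates sparse normal equations, which is algebraically equivalent in this linear setting. The toy clock states and noise levels are invented.

```python
# Sequential (recursive) least-squares update for a small state vector of
# clock offsets; the full GNSS observation model and sparsity handling are
# omitted. All numbers are illustrative.
import numpy as np

def rls_update(x, P, H, z, R):
    """One epoch: measurement z = H @ x + noise (cov R); returns updated x, P."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy run: two clock offsets (ns) observed directly with noise each second.
x, P = np.zeros(2), np.eye(2) * 1e4
truth = np.array([3.2, -1.7])
rng = np.random.default_rng(1)
for _ in range(60):
    z = truth + rng.normal(0, 0.3, 2)
    x, P = rls_update(x, P, np.eye(2), z, np.eye(2) * 0.09)
print(x)   # converges toward `truth`
```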

  19. Parameter estimation in plasmonic QED

    NASA Astrophysics Data System (ADS)

    Jahromi, H. Rangani

    2018-03-01

    We address the problem of parameter estimation in the presence of plasmonic modes manipulating emitted light via localized surface plasmons in a plasmonic waveguide at the nanoscale. The emitter we discuss is the nitrogen vacancy centre (NVC) in diamond, modelled as a qubit. Our goal is to estimate the β factor, which measures the fraction of emitted energy captured by waveguide surface plasmons. The best strategy for obtaining the most accurate estimate of the parameter, in terms of the initial state of the probes and different control parameters, is investigated. In particular, for two-qubit estimation it is found that although we may achieve the best estimation at initial instants by using maximally entangled initial states, at long times the optimal estimation occurs when the initial state of the probes is a product state. We also find that decreasing the interqubit distance or increasing the propagation length of the plasmons improves the precision of the estimation. Moreover, a decrease in the spontaneous emission rate of the NVCs retards the reduction of the quantum Fisher information (QFI), which measures the precision of the estimation, so its vanishing is delayed. In addition, if the phase parameter of the initial state of the two NVCs is equal to π rad, the best estimation with the two-qubit system is achieved when the NVCs are initially maximally entangled. The one-qubit estimation is also analysed in detail; in particular, we show that using a two-qubit probe at any arbitrary time considerably enhances the precision of the estimation in comparison with one-qubit estimation.

  20. Estimating liver cancer deaths in Thailand based on verbal autopsy study.

    PubMed

    Waeto, Salwa; Pipatjaturon, Nattakit; Tongkumchum, Phattrawan; Choonpradub, Chamnein; Saelim, Rattikan; Makaje, Nifatamah

    2014-01-01

    Liver cancer mortality is high in Thailand, but the utility of the related vital statistics is limited because the national vital registration (VR) data under-report specific causes of death. Accurate methodologies and reliable supplementary data are needed to provide worthy national vital statistics. This study aimed to model liver cancer deaths based on a verbal autopsy (VA) study in 2005 to provide more accurate estimates of liver cancer deaths than those reported; the results were then used to estimate the number of liver cancer deaths during 2000-2009. The VA carried out in 2005 was based on a sample of 9,644 deaths from nine provinces and provided reliable information on cause of death by gender, age group, location of death in or outside hospital, and the cause of death recorded in the VR database. Logistic regression was used to model liver cancer deaths against these variables. The estimated probabilities from the model were applied to liver cancer deaths in the VR database for 2000-2009, yielding more accurate VA-based estimates of the numbers of liver cancer deaths. The model fits the data quite well, with a sensitivity of 0.64, and the confidence intervals from the statistical model quantify the precision of the estimates. The VA-estimated numbers of liver cancer deaths were higher than the corresponding VR counts, with inflation factors of 1.56 for males and 1.64 for females. The statistical methods used in this study can be applied to available mortality data in developing countries where national vital registration data are of low quality and reliable supplementary data are available.

  1. Consumer product chemical weight fractions from ingredient lists.

    PubMed

    Isaacs, Kristin K; Phillips, Katherine A; Biryol, Derya; Dionisio, Kathie L; Price, Paul S

    2018-05-01

    Assessing human exposures to chemicals in consumer products requires composition information. However, comprehensive composition data for products in commerce are not generally available. Many consumer products have reported ingredient lists that are constructed using specific guidelines. A probabilistic model was developed to estimate quantitative weight fraction (WF) values that are consistent with the rank of an ingredient in the list, the number of reported ingredients, and labeling rules. The model provides the mean, median, and 95% upper and lower confidence limit WFs for ingredients of any rank in lists of any length. WFs predicted by the model compared favorably with those reported on Material Safety Data Sheets. Predictions for chemicals known to provide specific functions in products were also found to reasonably agree with reported WFs. The model was applied to a selection of publicly available ingredient lists, thereby estimating WFs for 1293 unique ingredients in 1123 products in 81 product categories. Predicted WFs, although less precise than reported values, can be estimated for large numbers of product-chemical combinations and thus provide a useful source of data for high-throughput or screening-level exposure assessments.
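
    A toy version of the rank-based idea can be had by sampling random compositions that sum to one and ordering them to match the ingredient list. The published model additionally encodes labeling rules (for example, ingredients under 1% may appear in any order), which this sketch ignores.

```python
# Monte Carlo sketch: per-rank weight-fraction statistics for an ingredient
# list of given length, under a uniform-on-the-simplex assumption.
import numpy as np

def rank_wf_stats(n_ingredients, n_draws=100_000, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.dirichlet(np.ones(n_ingredients), size=n_draws)
    w.sort(axis=1)                      # ascending ...
    w = w[:, ::-1]                      # ... then descending = list order
    med = np.median(w, axis=0)
    lo, hi = np.percentile(w, [2.5, 97.5], axis=0)
    return med, lo, hi

med, lo, hi = rank_wf_stats(8)          # e.g., an 8-ingredient label
for r, (m, a, b) in enumerate(zip(med, lo, hi), 1):
    print(f"rank {r}: median WF {m:.3f}  95% interval ({a:.3f}, {b:.3f})")
```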

  2. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.

    PubMed

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-06-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue therefore remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this theoretical gap, which has stood for over a decade, we provide a unified theory showing explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.

  3. STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Zou, Hui

    2014-01-01

    Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions, and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue therefore remains: it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this theoretical gap, which has stood for over a decade, we provide a unified theory showing explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression. PMID:25598560

  4. Growth and mortality of larval sunfish in backwaters of the upper Mississippi River

    USGS Publications Warehouse

    Zigler, S.J.; Jennings, C.A.

    1993-01-01

    The authors estimated the growth and mortality of larval sunfish Lepomis spp. in backwater habitats of the upper Mississippi River with an otolith-based method and a length-based method. Fish were sampled with plankton nets at one station in Navigation Pools 8 and 14 in 1989 and at two stations in Pool 8 in 1990. For both methods, growth was modeled with an exponential equation, and instantaneous mortality was estimated by regressing the natural logarithm of fish catch for each 1-mm size-group against the estimated age of the group, which was derived from the growth equations. At two of the stations, the otolith-based method provided more precise estimates of sunfish growth than the length-based method. We were able to compare length-based and otolith-based estimates of sunfish mortality only at the two stations where we caught the largest numbers of sunfish. Estimates of mortality were similar for both methods in Pool 14, where catches were higher, but the length-based method gave significantly higher estimates in Pool 8, where the catches were lower. The otolith-based method required more laboratory analysis, but provided better estimates of growth and mortality than the length-based method when catches were low. However, the length-based method was more cost-effective for estimating growth and mortality when catches were large.
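
    The length-based mortality calculation is compact enough to sketch: assign an age to each 1-mm length group by inverting an exponential growth curve, then regress ln(catch) on age; the slope estimates -Z. The growth parameters and catches below are invented for illustration.

```python
# Catch-curve sketch: ages from an exponential growth model, then a linear
# regression of ln(catch) on age whose slope is -Z. All values are invented.
import numpy as np

L0, G = 4.0, 0.08                      # length at age 0 (mm), growth rate (1/d)
lengths = np.arange(5, 13)             # 1-mm length groups (mm)
catch   = np.array([310, 240, 175, 120, 95, 60, 41, 30])

age = np.log(lengths / L0) / G         # invert L = L0 * exp(G * age)
slope, intercept = np.polyfit(age, np.log(catch), 1)
print(f"instantaneous daily mortality Z = {-slope:.3f}")
```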

  5. Comparison of two viewing methods for estimating largemouth bass and walleye ages from sectioned otoliths and dorsal spines

    USGS Publications Warehouse

    Wegleitner, Eric J.; Isermann, Daniel A.

    2017-01-01

    Many biologists use digital images for estimating ages of fish, but the use of images could lead to differences in age estimates and precision because image capture can produce changes in light and clarity compared to directly viewing structures through a microscope. We used sectioned sagittal otoliths from 132 Largemouth Bass Micropterus salmoides and sectioned dorsal spines and otoliths from 157 Walleyes Sander vitreus to determine whether age estimates and among-reader precision were similar when annuli were enumerated directly through a microscope or from digital images. Agreement of ages between viewing methods for three readers was highest for Largemouth Bass otoliths (75–89% among readers), followed by Walleye otoliths (63–70%) and Walleye dorsal spines (47–64%). Most discrepancies (72–96%) were ±1 year, and differences were more prevalent for age-5 and older fish. With few exceptions, mean ages estimated from digital images were similar to ages estimated by directly viewing the structures through the microscope, and among-reader precision did not vary between viewing methods for each structure. However, the number of disagreements we observed suggests that biologists should assess potential differences in age structure that could arise if images of calcified structures are used in the age estimation process.

  6. [Kriging estimation and its simulated sampling of Chilo suppressalis population density].

    PubMed

    Yuan, Zheming; Bai, Lianyang; Wang, Kuiwu; Hu, Xiangyue

    2004-07-01

    In order to draw up a rational sampling plan for larval populations of Chilo suppressalis, an original population and its two derivative populations, a randomized population and a sequenced population, were sampled and compared using random sampling, gap-range-random sampling, and a new systematic sampling scheme integrating Kriging interpolation with a random origin. For the original population, whose distribution was aggregated and whose dependence range in the line direction was 115 cm (6.9 units), gap-range-random sampling in the line direction was more precise than random sampling. Correctly distinguishing the population pattern is the key to obtaining better precision: gap-range-random sampling and random sampling suit aggregated and random populations, respectively, but both are difficult to apply in practice. Therefore, a new systematic sampling scheme, termed the Kriging sample (n = 441), was developed to estimate the density of a partial sample (partial estimation, n = 441) and of the population (overall estimation, N = 1500). For the original population, the estimation precision of the Kriging sample for both the partial sample and the population was better than that of the investigation sample. As the aggregation intensity of the population increased, the Kriging sample became more effective than the investigation sample in both partial and overall estimation at the appropriate sampling gap according to the dependence range.

  7. Photoacoustic-based sO2 estimation through excised bovine prostate tissue with interstitial light delivery.

    PubMed

    Mitcham, Trevor; Taghavi, Houra; Long, James; Wood, Cayla; Fuentes, David; Stefan, Wolfgang; Ward, John; Bouchard, Richard

    2017-09-01

    Photoacoustic (PA) imaging is capable of probing blood oxygen saturation (sO2), which has been shown to correlate with tissue hypoxia, a promising cancer biomarker. However, wavelength-dependent local fluence changes can compromise sO2 estimation accuracy in tissue. This work investigates using PA imaging with interstitial irradiation and local fluence correction to assess precision and accuracy of sO2 estimation of blood samples through ex vivo bovine prostate tissue ranging from 14% to 100% sO2. Study results for bovine blood samples at distances up to 20 mm from the irradiation source show that local fluence correction improved average sO2 estimation error from 16.8% to 3.2% and maintained an average precision of 2.3% when compared to matched CO-oximeter sO2 measurements. This work demonstrates the potential for future clinical translation of using fluence-corrected and interstitially driven PA imaging to accurately and precisely assess sO2 at depth in tissue with high resolution.
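
    The estimation step can be sketched as linear spectral unmixing of fluence-corrected PA amplitudes; the two wavelengths and extinction coefficients below are illustrative placeholders, not the study's calibration:

    ```python
    import numpy as np

    # p(lambda) / F(lambda) ~ eps_Hb(lambda)*C_Hb + eps_HbO2(lambda)*C_HbO2
    E = np.array([[1405.0,  518.0],    # [eps_Hb, eps_HbO2] at 750 nm (illustrative)
                  [ 691.0, 1058.0]])   # [eps_Hb, eps_HbO2] at 850 nm (illustrative)
    p_corr = np.array([0.9, 1.1])      # fluence-corrected PA amplitudes

    c_hb, c_hbo2 = np.linalg.lstsq(E, p_corr, rcond=None)[0]
    print(f"sO2 = {c_hbo2 / (c_hb + c_hbo2):.1%}")
    ```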

  8. Using an electronic compass to determine telemetry azimuths

    USGS Publications Warehouse

    Cox, R.R.; Scalf, J.D.; Jamison, B.E.; Lutz, R.S.

    2002-01-01

    Researchers typically collect azimuths from known locations to estimate locations of radiomarked animals. Mobile, vehicle-mounted telemetry receiving systems frequently are used to gather azimuth data. Use of mobile systems typically involves estimating the vehicle's orientation to grid north (vehicle azimuth), recording an azimuth to the transmitter relative to the vehicle azimuth from a fixed rosette around the antenna mast (relative azimuth), and subsequently calculating an azimuth to the transmitter (animal azimuth). We incorporated electronic compasses into standard null-peak antenna systems by mounting the compass sensors atop the antenna masts and evaluated the precision of this configuration. This system increased efficiency by eliminating vehicle orientation and calculations to determine animal azimuths and produced estimates of precision (azimuth SD=2.6 deg., SE=0.16 deg.) similar to systems that required orienting the mobile system to grid north. Using an electronic compass increased efficiency without sacrificing precision and should produce more accurate estimates of locations when marked animals are moving or when vehicle orientation is problematic.
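
    The arithmetic the electronic compass eliminates is simple but error-prone in the field; a minimal sketch of the conventional calculation, with hypothetical readings:

    ```python
    def animal_azimuth(vehicle_az: float, relative_az: float) -> float:
        """Combine vehicle orientation (to grid north) with the rosette reading."""
        return (vehicle_az + relative_az) % 360.0

    print(animal_azimuth(350.0, 25.0))  # -> 15.0 degrees
    ```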

  9. Geometrically constrained kinematic global navigation satellite systems positioning: Implementation and performance

    NASA Astrophysics Data System (ADS)

    Asgari, Jamal; Mohammadloo, Tannaz H.; Amiri-Simkooei, Ali Reza

    2015-09-01

    GNSS kinematic techniques are capable of providing precise coordinates in extremely short observation time-spans. These methods usually determine the coordinates of an unknown station with respect to a reference one. To enhance the precision, accuracy, reliability and integrity of the estimated unknown parameters, GNSS kinematic equations are to be augmented by possible constraints. Such constraints could be derived from the geometric relation of the receiver positions in motion. This contribution presents the formulation of constrained kinematic global navigation satellite systems positioning. Constraints effectively restrict the definition domain of the unknown parameters from the three-dimensional space to a subspace defined by the equation of motion. To test the concept of the constrained kinematic positioning method, the equation of a circle is employed as a constraint. A device capable of moving on a circle was built, and the observations from 11 positions on the circle were analyzed. Relative positioning was conducted by considering the center of the circle as the reference station. The equation of the receiver's motion was rewritten in the ECEF coordinate system. Special attention is given to how a constraint is applied to kinematic positioning. Implementing the constraint in the positioning process provides much more precise results compared to the unconstrained case. This was verified using both the covariance matrix of the estimated parameters and empirical results from kinematic positioning samples. The theoretical standard deviations of the horizontal components are reduced by a factor ranging from 1.24 to 2.64. The improvement in the empirical standard deviation of the horizontal components ranges from 1.08 to 2.2.
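
    One way to realize the circle constraint is to reparameterize the horizontal solution so the estimator searches a one-dimensional subspace instead of the full plane; the sketch below does this for synthetic 2-D range-like observations, as an illustration of the idea rather than the authors' ECEF formulation:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    c, r = np.array([0.0, 0.0]), 5.0                       # known circle center, radius (m)
    anchors = np.array([[100.0, 80.0], [-90.0, 120.0],
                        [60.0, -110.0], [-70.0, -90.0]])   # synthetic "satellite" points
    true_pos = c + r * np.array([np.cos(1.0), np.sin(1.0)])
    rng = np.random.default_rng(0)
    ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.05, 4)

    def residuals(t):
        pos = c + r * np.array([np.cos(t[0]), np.sin(t[0])])  # constrained position
        return np.linalg.norm(anchors - pos, axis=1) - ranges

    t_hat = least_squares(residuals, x0=[0.0]).x[0]
    print(c + r * np.array([np.cos(t_hat), np.sin(t_hat)]))   # ~ true_pos
    ```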

  10. Validation of PROMIS ® Physical Function computerized adaptive tests for orthopaedic foot and ankle outcome research.

    PubMed

    Hung, Man; Baumhauer, Judith F; Latt, L Daniel; Saltzman, Charles L; SooHoo, Nelson F; Hunt, Kenneth J

    2013-11-01

    In 2012, the American Orthopaedic Foot & Ankle Society® established a national network for collecting and sharing data on treatment outcomes and improving patient care. One of the network's initiatives is to explore the use of computerized adaptive tests (CATs) for patient-level outcome reporting. We determined whether the CAT from the NIH Patient Reported Outcome Measurement Information System® (PROMIS®) Physical Function (PF) item bank provides efficient, reliable, valid, precise, and adequately covered point estimates of patients' physical function. After informed consent, 288 patients with a mean age of 51 years (range, 18-81 years) undergoing surgery for common foot and ankle problems completed a web-based questionnaire. Efficiency was determined by time for test administration. Reliability was assessed with person and item reliability estimates. Validity evaluation included content validity from expert review and construct validity measured against the PROMIS® Pain CAT and patient responses based on tradeoff perceptions. Precision was assessed by standard error of measurement (SEM) across patients' physical function levels. Instrument coverage was based on a person-item map. Average time of test administration was 47 seconds. Reliability was 0.96 for person and 0.99 for item. Construct validity against the Pain CAT had an r value of -0.657 (p < 0.001). Precision had an SEM of less than 3.3 (equivalent to a Cronbach's alpha of ≥ 0.90) across a broad range of function. Concerning coverage, the ceiling effect was 0.32% and there was no floor effect. The PROMIS® PF CAT appears to be an excellent method for measuring outcomes in patients undergoing foot and ankle surgery. Further validation of the PROMIS® item banks may ultimately provide a valid and reliable tool for measuring patient-reported outcomes after injuries and treatment.
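
    The SEM-to-reliability equivalence quoted above follows from classical test theory on the PROMIS T-score metric (SD = 10); a one-line check, assuming that metric:

    ```python
    def reliability_from_sem(sem: float, sd: float = 10.0) -> float:
        """Classical test theory: reliability = 1 - (SEM / SD)**2."""
        return 1.0 - (sem / sd) ** 2

    print(reliability_from_sem(3.16))  # ~0.90, the alpha >= 0.90 equivalence
    ```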

  11. An Evaluation of a Flight Deck Interval Management Algorithm Including Delayed Target Trajectories

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.; Underwood, Matthew C.; Barmore, Bryan; Leonard, Robert D.

    2014-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature air traffic management technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain precise in-trail spacing. During high demand operations, TMA-TM may produce a schedule and corresponding aircraft trajectories that include delay to ensure that a particular aircraft will be properly spaced from other aircraft at each schedule waypoint. These delayed trajectories are not communicated to the automation onboard the aircraft, forcing the IM aircraft to use the published speeds to estimate the target aircraft's time of arrival. As a result, the aircraft performing IM operations may follow an aircraft whose TMA-TM-generated trajectories have substantial speed deviations from the speeds expected by the spacing algorithm. Previous spacing algorithms were not designed to handle this magnitude of uncertainty. A simulation was conducted to examine a modified spacing algorithm with the ability to follow aircraft flying delayed trajectories. The simulation investigated the use of the new spacing algorithm with various delayed speed profiles and wind conditions, as well as several other variables designed to simulate real-life variability. The results and conclusions of this study indicate that the new spacing algorithm generally exhibits good performance; however, some types of target aircraft speed profiles can cause the spacing algorithm to command less than optimal speed control behavior.

  12. Using multiple data types and integrated population models to improve our knowledge of apex predator population dynamics.

    PubMed

    Bled, Florent; Belant, Jerrold L; Van Daele, Lawrence J; Svoboda, Nathan; Gustine, David; Hilderbrand, Grant; Barnes, Victor G

    2017-11-01

    Current management of large carnivores is informed using a variety of parameters, methods, and metrics; however, these data are typically considered independently. Sharing information among data types based on the underlying ecological processes, and recognizing observation biases, can improve estimation of individual and global parameters. We present a general integrated population model (IPM), specifically designed for brown bears (Ursus arctos), using three common data types for bear (U. spp.) populations: repeated counts, capture-mark-recapture, and litter size. We considered factors affecting ecological and observation processes for these data. We assessed the practicality of this approach on a simulated population and compared estimates from our model to values used for simulation and results from count data only. We then present a practical application of this general approach adapted to the constraints of a case study using historical data available for brown bears on Kodiak Island, Alaska, USA. The IPM provided more accurate and precise estimates than models accounting for repeated count data only, with credible intervals including the true population 94% and 5% of the time, respectively. For the Kodiak population, we estimated annual average litter size (within one year after birth) to vary between 0.45 [95% credible interval: 0.43; 0.55] and 1.59 [1.55; 1.82]. We detected a positive relationship between salmon availability and adult survival, with survival probabilities greater for females than males. Survival probabilities increased from cubs to yearlings to dependent young ≥2 years old and decreased with litter size. Linking multiple information sources based on ecological and observation mechanisms can provide more accurate and precise estimates, to better inform management. IPMs can also reduce data collection efforts by sharing information among agencies and management units. Our approach responds to an increasing need in bear population management and can be readily adapted to other large carnivores.

  13. Study of biological communities subject to imperfect detection: Bias and precision of community N-mixture abundance models in small-sample situations

    USGS Publications Warehouse

    Yamaura, Yuichi; Kery, Marc; Royle, Andy

    2016-01-01

    Community N-mixture abundance models for replicated counts provide a powerful and novel framework for drawing inferences related to species abundance within communities subject to imperfect detection. To assess the performance of these models, and to compare them to related community occupancy models in situations with marginal information, we used simulation to examine the effects of mean abundance (λ̄: 0.1, 0.5, 1, 5), detection probability (p̄: 0.1, 0.2, 0.5), and number of sampling sites (n_site: 10, 20, 40) and visits (n_visit: 2, 3, 4) on the bias and precision of species-level parameters (mean abundance and covariate effect) and a community-level parameter (species richness). Bias and imprecision of estimates decreased when any of the four variables (λ̄, p̄, n_site, n_visit) increased. Detection probability p̄ was most important for the estimates of mean abundance, while λ̄ was most influential for covariate effect and species richness estimates. For all parameters, increasing n_site was more beneficial than increasing n_visit. Minimal conditions for obtaining adequate performance of community abundance models were n_site ≥ 20, p̄ ≥ 0.2, and λ̄ ≥ 0.5. At lower abundance, the performance of community abundance and community occupancy models as species richness estimators was comparable. We then used additive partitioning analysis to reveal that raw species counts can overestimate β diversity both for species richness and the Shannon index, while community abundance models yielded better estimates. Community N-mixture abundance models thus have great potential for use in community ecology and conservation applications provided that replicated counts are available.
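
    A minimal single-species version of the N-mixture machinery these community models build on: simulate replicated counts with latent abundance N_i ~ Poisson(λ) and detections y_ij ~ Binomial(N_i, p), then maximize the marginal likelihood by summing over N. Parameter values are illustrative, not the simulation grid above:

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import poisson, binom

    rng = np.random.default_rng(42)
    n_site, n_visit, lam_true, p_true = 20, 3, 5.0, 0.5
    N = rng.poisson(lam_true, n_site)                        # latent abundance per site
    y = rng.binomial(N[:, None], p_true, (n_site, n_visit))  # replicated counts

    def negloglik(theta, K=100):
        lam, p = np.exp(theta[0]), 1.0 / (1.0 + np.exp(-theta[1]))
        Ns = np.arange(K + 1)
        ll = 0.0
        for yi in y:                                  # marginalize over N = 0..K
            lik = poisson.pmf(Ns, lam)
            for yij in yi:
                lik = lik * binom.pmf(yij, Ns, p)
            ll += np.log(lik.sum())
        return -ll

    fit = minimize(negloglik, x0=[0.0, 0.0])
    print(np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1])))  # ~ (5.0, 0.5)
    ```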

  14. Using multiple data types and integrated population models to improve our knowledge of apex predator population dynamics

    USGS Publications Warehouse

    Bled, Florent; Belant, Jerrold L.; Van Daele, Lawrence J.; Svoboda, Nathan; Gustine, David D.; Hilderbrand, Grant V.; Barnes, Victor G.

    2017-01-01

    Current management of large carnivores is informed using a variety of parameters, methods, and metrics; however, these data are typically considered independently. Sharing information among data types based on the underlying ecological processes, and recognizing observation biases, can improve estimation of individual and global parameters. We present a general integrated population model (IPM), specifically designed for brown bears (Ursus arctos), using three common data types for bear (U. spp.) populations: repeated counts, capture–mark–recapture, and litter size. We considered factors affecting ecological and observation processes for these data. We assessed the practicality of this approach on a simulated population and compared estimates from our model to values used for simulation and results from count data only. We then present a practical application of this general approach adapted to the constraints of a case study using historical data available for brown bears on Kodiak Island, Alaska, USA. The IPM provided more accurate and precise estimates than models accounting for repeated count data only, with credible intervals including the true population 94% and 5% of the time, respectively. For the Kodiak population, we estimated annual average litter size (within one year after birth) to vary between 0.45 [95% credible interval: 0.43; 0.55] and 1.59 [1.55; 1.82]. We detected a positive relationship between salmon availability and adult survival, with survival probabilities greater for females than males. Survival probabilities increased from cubs to yearlings to dependent young ≥2 years old and decreased with litter size. Linking multiple information sources based on ecological and observation mechanisms can provide more accurate and precise estimates, to better inform management. IPMs can also reduce data collection efforts by sharing information among agencies and management units. Our approach responds to an increasing need in bear population management and can be readily adapted to other large carnivores.

  15. Resolution and Orbit Reconstruction of Spectroscopic Binary Stars with the Palomar Testbed Interferometer

    NASA Astrophysics Data System (ADS)

    Boden, A. F.; Lane, B. F.; Creech-Eakman, M. J.; Queloz, D.; Koresko, C. D.

    2000-05-01

    The Palomar Testbed Interferometer (PTI) is a long-baseline near-infrared interferometer located at Palomar Observatory. For the past several years we have had an ongoing program of resolving and reconstructing the visual and physical orbits of spectroscopic binary stars with PTI, with the goal of obtaining precise dynamical mass estimates and other physical parameters. We will present a number of new visual and physical orbit determinations derived from integrated reductions of PTI visibility and archival and new spectroscopic radial velocity data. The systems for which we will discuss our orbit models are: iota Pegasi (HD 210027), 64 Psc (HD 4676), 12 Boo (HD 123999), 75 Cnc (HD 78418), 47 And (HD 8374), HD 205539, BY Draconis (HDE 234677), and 3 Boo (HD 120064). All of these systems are double-lined binary systems (SB2), and integrated astrometric/radial velocity orbit modeling provides precise fundamental parameters (mass, luminosity) and system distance determinations comparable with Hipparcos precisions.

  16. Measuring Constructs in Family Science: How Can Item Response Theory Improve Precision and Validity?

    PubMed Central

    Gordon, Rachel A.

    2014-01-01

    This article provides family scientists with an understanding of contemporary measurement perspectives and the ways in which item response theory (IRT) can be used to develop measures with desired evidence of precision and validity for research uses. The article offers a nontechnical introduction to some key features of IRT, including its orientation toward locating items along an underlying dimension and toward estimating precision of measurement for persons with different levels of that same construct. It also offers a didactic example of how the approach can be used to refine conceptualization and operationalization of constructs in the family sciences, using data from the National Longitudinal Survey of Youth 1979 (n = 2,732). Three basic models are considered: (a) the Rasch and (b) two-parameter logistic models for dichotomous items and (c) the Rating Scale Model for multicategory items. Throughout, the author highlights the potential for researchers to elevate measurement to a level on par with theorizing and testing about relationships among constructs. PMID:25663714
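
    For readers new to the models named above, their item-response forms are compact; a sketch with hypothetical parameter values (the Rating Scale Model extends the Rasch form to multicategory items with shared category thresholds):

    ```python
    import numpy as np

    def rasch(theta, b):
        """Rasch: P(X=1) depends only on person ability minus item difficulty."""
        return 1.0 / (1.0 + np.exp(-(theta - b)))

    def two_pl(theta, a, b):
        """2PL: adds an item discrimination parameter a."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    print(rasch(0.5, 0.0), two_pl(0.5, 1.7, 0.0))   # endorsement probabilities
    ```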

  17. Markov random field based automatic image alignment for electron tomography.

    PubMed

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark

    2008-03-01

    We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.

  18. Evidence-based Guidelines for Precision Risk Stratification-Based Screening (PRSBS) for Colorectal Cancer: Lessons learned from the US Armed Forces: Consensus and Future Directions

    PubMed Central

    Avital, Itzhak; Langan, Russell C.; Summers, Thomas A.; Steele, Scott R.; Waldman, Scott A.; Backman, Vadim; Yee, Judy; Nissan, Aviram; Young, Patrick; Womeldorph, Craig; Mancusco, Paul; Mueller, Renee; Noto, Khristian; Grundfest, Warren; Bilchik, Anton J.; Protic, Mladjan; Daumer, Martin; Eberhardt, John; Man, Yan Gao; Brücher, Björn LDM; Stojadinovic, Alexander

    2013-01-01

    Colorectal cancer (CRC) is the third most common cause of cancer-related death in the United States (U.S.), with estimates of 143,460 new cases and 51,690 deaths for the year 2012. Numerous organizations have published guidelines for CRC screening; however, these numerical estimates of incidence and disease-specific mortality have remained stable from years prior. Technological, genetic profiling, molecular and surgical advances in our modern era should allow us to improve risk stratification of patients with CRC and identify those who may benefit from preventive measures, early aggressive treatment, alternative treatment strategies, and/or frequent surveillance for the early detection of disease recurrence. To better negotiate future economic constraints and enhance patient outcomes, ultimately, we propose to apply the principles of personalized and precise cancer care to risk-stratify patients for CRC screening (Precision Risk Stratification-Based Screening, PRSBS). We believe that genetic, molecular, ethnic and socioeconomic disparities impact oncological outcomes in general, and those related to CRC in particular. This document highlights evidence-based screening recommendations and risk stratification methods in response to our CRC working group private-public consensus meeting held in March 2012. Our aim was to address how we could improve CRC risk stratification-based screening, and to provide a vision for the future to achieve superior survival rates for patients diagnosed with CRC. PMID:23459409

  19. Non-destructive evaluation of coating thickness using guided waves

    NASA Astrophysics Data System (ADS)

    Ostiguy, Pierre-Claude; Quaegebeur, Nicolas; Masson, Patrice

    2015-04-01

    Among existing strategies for non-destructive evaluation of coating thickness, ultrasonic methods based on the measurement of the Time-of-Flight (ToF) of high frequency bulk waves propagating through the thickness of a structure are widespread. However, these methods only provide a very localized measurement of the coating thickness, and the precision of the results is strongly affected by the surface roughness, porosity or multi-layered nature of the host structure. Moreover, since the measurement is very local, inspection of large surfaces can be time consuming. This article presents a robust methodology for coating thickness estimation based on the generation and measurement of guided waves. Guided waves have the advantage over ultrasonic bulk waves of being less sensitive to surface roughness, and of measuring an average thickness over a wider area, thus reducing the time required to inspect large surfaces. The approach is based on an analytical multi-layer model and intercorrelation of reference and measured signals. The method is first assessed numerically for an aluminum plate, where it is demonstrated that coating thickness can be measured within a precision of 5 micrometers using the S0 mode at frequencies below 500 kHz. Then, an experimental validation is conducted and results show that coating thicknesses in the range of 10 to 200 micrometers can be estimated to within 10 micrometers of the exact coating thickness on this type of structure.
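
    The intercorrelation step can be sketched as locating the cross-correlation peak between a reference and a measured guided-wave signal; the waveform, sampling rate, and delay below are synthetic placeholders:

    ```python
    import numpy as np

    fs = 5e6                                        # sampling rate (Hz)
    t = np.arange(0, 200e-6, 1 / fs)
    burst = np.sin(2 * np.pi * 400e3 * t) * np.exp(-((t - 20e-6) / 5e-6) ** 2)
    measured = np.interp(t - 12e-6, t, burst, left=0.0, right=0.0)  # 12 us delay

    xc = np.correlate(measured, burst, mode="full")
    lag = np.argmax(xc) - (len(burst) - 1)
    print(f"estimated delay: {lag / fs * 1e6:.2f} us")              # ~12.00 us
    ```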

  20. Power and Precision in Confirmatory Factor Analytic Tests of Measurement Invariance

    ERIC Educational Resources Information Center

    Meade, Adam W.; Bauer, Daniel J.

    2007-01-01

    This study investigates the effects of sample size, factor overdetermination, and communality on the precision of factor loading estimates and the power of the likelihood ratio test of factorial invariance in multigroup confirmatory factor analysis. Although sample sizes are typically thought to be the primary determinant of precision and power,…

  1. Measurement precision and noise analysis of CCD cameras

    NASA Astrophysics Data System (ADS)

    Wu, ZhenSen; Li, Zhiyang; Zhang, Ping

    1993-09-01

    The limit precision of a CCD camera with 10-bit analogue-to-digital conversion is estimated in this paper. The effect of noise on measurement precision and the noise characteristics are analyzed in detail. Methods of noise processing are also discussed, and a diagram of noise properties is given.

  2. An approach for real-time fast point positioning of the BeiDou Navigation Satellite System using augmentation information

    NASA Astrophysics Data System (ADS)

    Tu, Rui; Zhang, Rui; Zhang, Pengfei; Liu, Jinhai; Lu, Xiaochun

    2018-07-01

    This study proposes an approach to facilitate real-time fast point positioning of the BeiDou Navigation Satellite System (BDS) based on regional augmentation information. We term this the precise positioning based on augmentation information (BPP) approach. The coordinates of the reference stations were tightly constrained to extract the augmentation information, which contains not only the satellite orbit and clock errors, correlated with the satellite running state, but also the atmospheric and unmodeled errors, which are correlated with spatial and temporal states. Based on these mixed augmentation corrections, a precise point positioning (PPP) model could be used to estimate the coordinates of the user stations, and the float ambiguity could be easily fixed for single differences between satellites. Thus, this technique provided a fast, high-precision positioning service. Three datasets with small, medium, and large baselines (0.6 km, 30 km and 136 km) were used to validate the feasibility and effectiveness of the proposed BPP method. The validations showed that, using the BPP model, a 1–2 cm positioning service can be provided over a 100 km wide area after just 2 s of initialization. Because the proposed approach capitalizes on the strengths of both PPP and RTK while providing consistent application, it can be used for area augmentation positioning.

  3. Arrhenius time-scaled least squares: a simple, robust approach to accelerated stability data analysis for bioproducts.

    PubMed

    Rauk, Adam P; Guo, Kevin; Hu, Yanling; Cahya, Suntara; Weiss, William F

    2014-08-01

    Defining a suitable product presentation with an acceptable stability profile over its intended shelf-life is one of the principal challenges in bioproduct development. Accelerated stability studies are routinely used as a tool to better understand long-term stability. Data analysis often employs an overall mass action kinetics description for the degradation and the Arrhenius relationship to capture the temperature dependence of the observed rate constant. To improve predictive accuracy and precision, the current work proposes a least-squares estimation approach with a single nonlinear covariate and uses a polynomial to describe the change in a product attribute with respect to time. The approach, which will be referred to as Arrhenius time-scaled (ATS) least squares, enables accurate, precise predictions to be achieved for degradation profiles commonly encountered during bioproduct development. A Monte Carlo study is conducted to compare the proposed approach with the common method of least-squares estimation on the logarithmic form of the Arrhenius equation and nonlinear estimation of a first-order model. The ATS least squares method accommodates a range of degradation profiles, provides a simple and intuitive approach for data presentation, and can be implemented with ease. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
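
    A sketch of the ATS idea under stated assumptions (synthetic data, a first-degree polynomial in Arrhenius-scaled time): a single activation energy rescales time across temperatures, and one least-squares fit pools all storage conditions:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    R, Tref = 8.314, 298.15                                   # J/(mol K), reference T
    T = np.repeat([298.15, 313.15, 323.15], 4)                # storage temperatures (K)
    t = np.tile([0.0, 1.0, 2.0, 3.0], 3)                      # time points (months)
    scale_true = np.exp(-80e3 / R * (1.0 / T - 1.0 / Tref))   # Ea = 80 kJ/mol
    y = 100.0 - 2.0 * t * scale_true + np.random.default_rng(0).normal(0, 0.2, 12)

    def resid(params):
        Ea, b0, b1 = params
        ts = t * np.exp(-Ea / R * (1.0 / T - 1.0 / Tref))     # Arrhenius-scaled time
        return (b0 + b1 * ts) - y

    fit = least_squares(resid, x0=[70e3, 100.0, -1.0], x_scale=[1e4, 1.0, 1.0])
    print(fit.x)   # ~ [80e3, 100.0, -2.0]
    ```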

  4. A Fast Approximate Algorithm for Mapping Long Reads to Large Reference Databases.

    PubMed

    Jain, Chirag; Dilthey, Alexander; Koren, Sergey; Aluru, Srinivas; Phillippy, Adam M

    2018-04-30

    Emerging single-molecule sequencing technologies from Pacific Biosciences and Oxford Nanopore have revived interest in long-read mapping algorithms. Alignment-based seed-and-extend methods demonstrate good accuracy, but face limited scalability, while faster alignment-free methods typically trade decreased precision for efficiency. In this article, we combine a fast approximate read mapping algorithm based on minimizers with a novel MinHash identity estimation technique to achieve both scalability and precision. In contrast to prior methods, we develop a mathematical framework that defines the types of mapping targets we uncover, establish probabilistic estimates of p-value and sensitivity, and demonstrate tolerance for alignment error rates up to 20%. With this framework, our algorithm automatically adapts to different minimum length and identity requirements and provides both positional and identity estimates for each mapping reported. For mapping human PacBio reads to the hg38 reference, our method is 290 × faster than Burrows-Wheeler Aligner-MEM with a lower memory footprint and recall rate of 96%. We further demonstrate the scalability of our method by mapping noisy PacBio reads (each ≥5 kbp in length) to the complete NCBI RefSeq database containing 838 Gbp of sequence and >60,000 genomes.
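
    The identity estimate at the heart of this style of mapper converts a MinHash Jaccard estimate into a per-mapping identity via the Mash distance; a small sketch (the j and k values are hypothetical inputs):

    ```python
    import math

    def identity_from_jaccard(j: float, k: int = 16) -> float:
        """Mash distance d = -(1/k) * ln(2j / (1 + j)); identity = 1 - d."""
        if j <= 0.0:
            return 0.0
        d = -(1.0 / k) * math.log(2.0 * j / (1.0 + j))
        return 1.0 - d

    print(identity_from_jaccard(0.3))   # ~0.95: 30% shared sketch -> ~95% identity
    ```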

  5. Precision and Accuracy of Analysis for Boron in ITP Samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tovo, L.L.

    Inductively Coupled Plasma Emission Spectroscopy (ICPES) has been used by the Analytical Development Section (ADS) to measure boron in catalytic tetraphenylboron decomposition studies performed by the Waste Processing Technology (WPT) section. Analysis of these samples is complicated due to the presence of high concentrations of sodium and organic compounds. Previously, we found signal suppression in samples analyzed "as received". We suspected that the suppression was due to the high organic concentration (up to 0.01 molar organic decomposition products) in the samples. When the samples were acid digested prior to analysis, the suppression was eliminated. The precision of the reported boron concentration was estimated as 10 percent based on the known precision of the inorganic boron standard used for calibration and quality control check of the ICPES analysis. However, a precision better than 10 percent was needed to evaluate ITP process operating parameters. Therefore, the purpose of this work was (1) to measure, instead of estimating, the precision of the boron measurement on ITP samples and (2) to determine the optimum precision attainable with current instrumentation.

  6. Integration of a synthetic vision system with airborne laser range scanner-based terrain referenced navigation for precision approach guidance

    NASA Astrophysics Data System (ADS)

    Uijt de Haag, Maarten; Campbell, Jacob; van Graas, Frank

    2005-05-01

    Synthetic Vision Systems (SVS) provide pilots with a virtual visual depiction of the external environment. When using SVS for aircraft precision approach guidance, accurate positioning relative to the runway with a high level of integrity is required. Precision approach guidance systems in use today require ground-based electronic navigation components with at least one installation at each airport, and in many cases multiple installations to service approaches to all qualifying runways. A terrain-referenced approach guidance system is envisioned to provide precision guidance to an aircraft without the use of ground-based electronic navigation components installed at the airport. This autonomy makes it a good candidate for integration with an SVS. At the Ohio University Avionics Engineering Center (AEC), work has been underway in the development of such a terrain referenced navigation system. When used in conjunction with an Inertial Measurement Unit (IMU) and a high accuracy/resolution terrain database, this terrain referenced navigation system can provide navigation and guidance information to the pilot on a SVS or conventional instruments. The terrain referenced navigation system, under development at AEC, operates on similar principles as other terrain navigation systems: a ground sensing sensor (in this case an airborne laser scanner) gathers range measurements to the terrain; this data is then matched in some fashion with an onboard terrain database to find the most likely position solution and used to update an inertial sensor-based navigator. AEC's system design differs from today's common terrain navigators in its use of a high resolution terrain database (~1 meter post spacing) in conjunction with an airborne laser scanner which is capable of providing tens of thousands of independent terrain elevation measurements per second with centimeter-level accuracies. When combined with data from an inertial navigator, the high resolution terrain database and laser scanner system is capable of providing near meter-level horizontal and vertical position estimates. Furthermore, the system under development capitalizes on 1) the position and integrity benefits provided by the Wide Area Augmentation System (WAAS) to reduce the initial search space size, and 2) the availability of high accuracy/resolution databases. This paper presents results from flight tests where the terrain reference navigator is used to provide guidance cues for a precision approach.

  7. Tunka-Rex: energy reconstruction with a single antenna station

    NASA Astrophysics Data System (ADS)

    Hiller, R.; Bezyazeekov, P. A.; Budnev, N. M.; Fedorov, O.; Gress, O. A.; Haungs, A.; Huege, T.; Kazarina, Y.; Kleifges, M.; Korosteleva, E. E.; Kostunin, D.; Krömer, O.; Kungel, V.; Kuzmichev, L. A.; Lubsandorzhiev, N.; Mirgazov, R. R.; Monkhoev, R.; Osipova, E. A.; Pakhorukov, A.; Pankov, L.; Prosin, V. V.; Rubtsov, G. I.; Schröder, F. G.; Wischnewski, R.; Zagorodnikov, A.

    2017-03-01

    The Tunka-Radio extension (Tunka-Rex) is a radio detector for air showers in Siberia. From 2012 to 2014, Tunka-Rex operated exclusively together with its host experiment, the air-Cherenkov array Tunka-133, which provided trigger, data acquisition, and an independent air-shower reconstruction. It was shown that the air-shower energy can be reconstructed by Tunka-Rex with a precision of 15% for events with signal in at least 3 antennas, using the radio amplitude at a distance of 120 m from the shower axis as an energy estimator. Using the reconstruction from the host experiment Tunka-133 for the air-shower geometry (shower core and direction), the energy estimator can in principle already be obtained with measurements from a single antenna, close to the reference distance. We present a method for event selection and energy reconstruction, requiring only one antenna, and achieving a precision of about 20%. This method increases the effective detector area and lowers thresholds for zenith angle and energy, resulting in three times more events than in the standard reconstruction.

  8. Hysteresis modeling of magnetic shape memory alloy actuator based on Krasnosel'skii-Pokrovskii model.

    PubMed

    Zhou, Miaolei; Wang, Shoubin; Gao, Wei

    2013-01-01

    As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator applications. Compared with traditional actuators, the MSMA actuator offers fast response and large deformation; however, its hysteresis nonlinearity limits further improvement of control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed. To demonstrate the validity of the proposed modeling approach, simulation experiments are performed with the improved gradient correction algorithm and the variable step-size recursive least squares estimation algorithm, respectively. The results of both identification algorithms demonstrate that the proposed approach can establish an effective and accurate hysteresis model for the MSMA actuator, providing a foundation for improving its control precision.
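
    As a generic sketch of the recursive least-squares piece (with a standard forgetting factor standing in for the paper's variable step size), identifying the weights of a linear-in-parameters model y = phi(x)·w; the random basis here is a toy stand-in for the KP operators:

    ```python
    import numpy as np

    def rls_update(w, P, phi, y, lam=0.99):
        K = P @ phi / (lam + phi @ P @ phi)        # gain vector
        w = w + K * (y - phi @ w)                  # weight update
        P = (P - np.outer(K, phi) @ P) / lam       # covariance update
        return w, P

    rng = np.random.default_rng(0)
    w_true = np.array([0.5, -1.2, 2.0])
    w, P = np.zeros(3), np.eye(3) * 100.0
    for _ in range(200):
        phi = rng.normal(size=3)                   # regressor (toy operator outputs)
        y = phi @ w_true + rng.normal(0, 0.01)
        w, P = rls_update(w, P, phi, y)
    print(w)   # -> close to w_true
    ```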

  9. Hysteresis Modeling of Magnetic Shape Memory Alloy Actuator Based on Krasnosel'skii-Pokrovskii Model

    PubMed Central

    Wang, Shoubin; Gao, Wei

    2013-01-01

    As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator applications. Compared with traditional actuators, the MSMA actuator offers fast response and large deformation; however, its hysteresis nonlinearity limits further improvement of control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed. To demonstrate the validity of the proposed modeling approach, simulation experiments are performed with the improved gradient correction algorithm and the variable step-size recursive least squares estimation algorithm, respectively. The results of both identification algorithms demonstrate that the proposed approach can establish an effective and accurate hysteresis model for the MSMA actuator, providing a foundation for improving its control precision. PMID:23737730

  10. Temporal processing dysfunction in schizophrenia.

    PubMed

    Carroll, Christine A; Boggs, Jennifer; O'Donnell, Brian F; Shekhar, Anantha; Hetrick, William P

    2008-07-01

    Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Despite the growing interest and centrality of time-dependent conceptualizations of the pathophysiology of schizophrenia, there remains a paucity of research directly examining overt timing performance in the disorder. Accordingly, the present study investigated timing in schizophrenia using a well-established task of time perception. Twenty-three individuals with schizophrenia and 22 non-psychiatric control participants completed a temporal bisection task, which required participants to make temporal judgments about auditory and visually presented durations ranging from 300 to 600 ms. Both schizophrenia and control groups displayed greater visual compared to auditory timing variability, with no difference between groups in the visual modality. However, individuals with schizophrenia exhibited less temporal precision than controls in the perception of auditory durations. These findings correlated with parameter estimates obtained from a quantitative model of time estimation, and provide evidence of a fundamental deficit in temporal auditory precision in schizophrenia.

  11. Hard choices in assessing survival past dams — a comparison of single- and paired-release strategies

    USGS Publications Warehouse

    Zydlewski, Joseph D.; Stich, Daniel S.; Sigourney, Douglas B.

    2017-01-01

    Mark–recapture models are widely used to estimate survival of salmon smolts migrating past dams. Paired releases have been used to improve estimate accuracy by removing components of mortality not attributable to the dam. This method is accompanied by reduced precision because (i) sample size is reduced relative to a single, large release; and (ii) variance calculations inflate error. We modeled an idealized system with a single dam to assess trade-offs between accuracy and precision and compared methods using root mean squared error (RMSE). Simulations were run under predefined conditions (dam mortality, background mortality, detection probability, and sample size) to determine scenarios when the paired release was preferable to a single release. We demonstrate that a paired-release design provides a theoretical advantage over a single-release design only at large sample sizes and high probabilities of detection. At release numbers typical of many survival studies, paired release can result in overestimation of dam survival. Failures to meet model assumptions of a paired release may result in further overestimation of dam-related survival. Under most conditions, a single-release strategy was preferable.
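
    The trade-off has a simple core: the paired design estimates dam passage survival as the ratio of treatment to control survival, and the ratio's variance combines both components (delta method), which is what erodes precision. A sketch with hypothetical inputs:

    ```python
    import numpy as np

    def paired_release(s_t, var_t, s_c, var_c):
        """Dam survival = S_treatment / S_control, with delta-method SE."""
        s_dam = s_t / s_c
        se = s_dam * np.sqrt(var_t / s_t**2 + var_c / s_c**2)
        return s_dam, se

    print(paired_release(0.76, 4e-4, 0.90, 3e-4))   # ~0.844, SE inflated by both releases
    ```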

  12. High precision determination of the melting points of water TIP4P/2005 and water TIP4P/Ice models by the direct coexistence technique

    NASA Astrophysics Data System (ADS)

    Conde, M. M.; Rovere, M.; Gallo, P.

    2017-12-01

    An exhaustive study by molecular dynamics has been performed to analyze the factors that enhance the precision of the technique of direct coexistence for a system of ice and liquid water. The factors analyzed are the stochastic nature of the method, the finite size effects, and the influence of the initial ice configuration used. The results obtained show that the precision of estimates obtained through the technique of direct coexistence is markedly affected by the effects of finite size, requiring systems with a large number of molecules to reduce the error bar of the melting point. This increase in size causes an increase in the simulation time, but estimating the melting point with great accuracy is important, for example, in studies on the ice surface. We also verified that the choice of the initial ice Ih configuration with different proton arrangements does not significantly affect the estimate of the melting point. Importantly, this study leads us to estimate the melting point at ambient pressure of two of the most popular models of water, TIP4P/2005 and TIP4P/Ice, with the greatest precision to date.

  13. Inverse probability weighting for covariate adjustment in randomized studies

    PubMed Central

    Li, Xiaochun; Li, Lingling

    2013-01-01

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a "favorable" model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain if not more, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a "favorable" model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. PMID:24038458

  14. Inverse probability weighting for covariate adjustment in randomized studies.

    PubMed

    Shen, Changyu; Li, Xiaochun; Li, Lingling

    2014-02-20

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain if not more, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. Copyright © 2013 John Wiley & Sons, Ltd.
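
    A sketch of the two-stage idea on synthetic data: model the treatment probability from baseline covariates before outcomes are examined, then compare inverse-probability-weighted outcome means. The data and model choices are illustrative, not the authors' code:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 2000
    x = rng.normal(size=(n, 2))                       # baseline covariates
    a = rng.binomial(1, 0.5, n)                       # randomized treatment
    y = 1.0 * a + x @ np.array([0.8, -0.5]) + rng.normal(size=n)

    # Stage 1 (outcome-blind): estimate assignment probabilities from covariates.
    ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]
    # Stage 2: inverse-probability-weighted difference in outcome means.
    ate = np.average(y, weights=a / ps) - np.average(y, weights=(1 - a) / (1 - ps))
    print(ate)   # ~1.0, typically tighter than the unadjusted difference
    ```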

  15. Notes From the Field: Secondary Task Precision for Cognitive Load Estimation During Virtual Reality Surgical Simulation Training.

    PubMed

    Rasmussen, Sebastian R; Konge, Lars; Mikkelsen, Peter T; Sørensen, Mads S; Andersen, Steven A W

    2016-03-01

    Cognitive load (CL) theory suggests that working memory can be overloaded in complex learning tasks such as surgical technical skills training, which can impair learning. Valid and feasible methods for estimating the CL in specific learning contexts are necessary before the efficacy of CL-lowering instructional interventions can be established. This study aims to explore secondary task precision for the estimation of CL in virtual reality (VR) surgical simulation and also investigate the effects of CL-modifying factors such as simulator-integrated tutoring and repeated practice. Twenty-four participants were randomized for visual assistance by a simulator-integrated tutor function during the first 5 of 12 repeated mastoidectomy procedures on a VR temporal bone simulator. Secondary task precision was found to be significantly lower during simulation compared with nonsimulation baseline, p < .001. Contrary to expectations, simulator-integrated tutoring and repeated practice did not have an impact on secondary task precision. This finding suggests that even though considerable changes in CL are reflected in secondary task precision, it lacks sensitivity. In contrast, secondary task reaction time could be more sensitive, but requires substantial postprocessing of data. Therefore, future studies on the effect of CL modifying interventions should weigh the pros and cons of the various secondary task measurements. © The Author(s) 2015.

  16. Precision and accuracy of commonly used dental age estimation charts for the New Zealand population.

    PubMed

    Baylis, Stephanie; Bassed, Richard

    2017-08-01

    Little research has been undertaken for the New Zealand population in the field of dental age estimation. This research to date indicates there are differences in dental developmental rates between the New Zealand population and other global population groups, and within the New Zealand population itself. Dental age estimation methods range from dental development charts to complex biometric analysis. Dental development charts are not the most accurate method of dental age estimation, but are time saving in their use. They are an excellent screening tool, particularly for post-mortem identification purposes, and for assessing variation from population norms in living individuals. The aim of this study was to test the precision and accuracy of three dental development charts (Schour and Massler, Blenkin and Taylor, and the London Atlas), used to estimate dental age of a sample of New Zealand juveniles between the ages of 5 and 18 years old (n=875). Percentage 'best fit' to correct age category and to expected chart stage were calculated to determine which chart was the most precise for the sample. Chronological ages were compared to estimated dental ages using a two-tailed paired t-test (P<0.05) for each of the three methods. The mean differences between CA and DA were calculated to determine bias and the absolute mean differences were calculated to indicate accuracy. The results of this study show that while accuracy and precision were low for all charts tested against the New Zealand population sample, the Blenkin and Taylor Australian charts performed best overall. Copyright © 2017 Elsevier B.V. All rights reserved.
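
    The summary statistics described above reduce to a few lines: bias as the mean signed difference, accuracy as the mean absolute difference, and a paired t-test between chronological age (CA) and estimated dental age (DA). A sketch with synthetic ages, not the study's sample:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    ca = rng.uniform(5, 18, 100)                     # chronological ages (years)
    da = ca + rng.normal(-0.3, 0.8, 100)             # chart-based estimates

    bias = np.mean(da - ca)                          # signed mean difference
    accuracy = np.mean(np.abs(da - ca))              # absolute mean difference
    t, p = stats.ttest_rel(da, ca)                   # two-tailed paired t-test
    print(f"bias={bias:.2f} y, |error|={accuracy:.2f} y, p={p:.3g}")
    ```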

  17. Modeling trends from North American Breeding Bird Survey data: a spatially explicit approach

    USGS Publications Warehouse

    Bled, Florent; Sauer, John R.; Pardieck, Keith L.; Doherty, Paul; Royle, J. Andy

    2013-01-01

    Population trends, defined as interval-specific proportional changes in population size, are often used to help identify species of conservation interest. Efficient modeling of such trends depends on the consideration of the correlation of population changes with key spatial and environmental covariates. This can provide insights into causal mechanisms and allow spatially explicit summaries at scales that are of interest to management agencies. We expand the hierarchical modeling framework used in the North American Breeding Bird Survey (BBS) by developing a spatially explicit model of temporal trend using a conditional autoregressive (CAR) model. By adopting a formal spatial model for abundance, we produce spatially explicit abundance and trend estimates. Analyses based on large-scale geographic strata such as Bird Conservation Regions (BCR) can suffer from basic imbalances in spatial sampling. Our approach addresses this issue by providing an explicit weighting based on the fundamental sample allocation unit of the BBS. We applied the spatial model to three species from the BBS. Species were chosen based on their well-known population change patterns, which allows us to evaluate the quality of our model and the biological meaning of our estimates. We also compare our results with those obtained for BCRs using a nonspatial hierarchical model (Sauer and Link 2011). Overall, estimates of mean trends are consistent between the two approaches, but the spatial model provides much more precise trend estimates in regions at the edges of species ranges, which were poorly estimated in nonspatial analyses. Incorporating a spatial component in the analysis not only allows us to obtain relevant and biologically meaningful estimates of population trends, but also provides a flexible framework for obtaining trend estimates for any area.

  18. A direct-measurement technique for estimating discharge-chamber lifetime. [for ion thrusters

    NASA Technical Reports Server (NTRS)

    Beattie, J. R.; Garvin, H. L.

    1982-01-01

    The use of short-term measurement techniques for predicting the wearout of ion thrusters resulting from sputter-erosion damage is investigated. The laminar-thin-film technique is found to provide high precision erosion-rate data, although the erosion rates are generally substantially higher than those found during long-term erosion tests, so that the results must be interpreted in a relative sense. A technique for obtaining absolute measurements is developed using a masked-substrate arrangement. This new technique provides a means for estimating the lifetimes of critical discharge-chamber components based on direct measurements of sputter-erosion depths obtained during short-duration (approximately 1 hr) tests. Results obtained using the direct-measurement technique are shown to agree with sputter-erosion depths calculated for the plasma conditions of the test. The direct-measurement approach is found to be applicable to both mercury and argon discharge-plasma environments and will be useful for estimating the lifetimes of inert gas and extended performance mercury ion thrusters currently under development.

  19. Counting Jobs and Economic Impacts from Distributed Wind in the United States (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tegen, S.

    This conference poster describes the distributed wind Jobs and Economic Development Impacts (JEDI) model. The goal of this work is to provide a model that estimates jobs and other economic effects associated with the domestic distributed wind industry. The distributed wind JEDI model is a free input-output model that estimates employment and other impacts resulting from an investment in distributed wind installations. Default inputs are from installers and industry experts and are based on existing projects. User input can be minimal (use defaults) or very detailed for more precise results. JEDI can help evaluate potential scenarios, current or future; inform stakeholders and decision-makers; assist businesses in evaluating economic development impacts and estimating jobs; and assist government organizations with planning, evaluating, and developing communities.

  20. Estimating potency for the Emax-model without attaining maximal effects.

    PubMed

    Schoemaker, R C; van Gerven, J M; Cohen, A F

    1998-10-01

    The most widely applied model relating drug concentrations to effects is the Emax model. In practice, concentration-effect relationships often deviate from a simple linear relationship but without reaching a clear maximum because a further increase in concentration might be associated with unacceptable or distorting side effects. The parameters for the Emax model can only be estimated with reasonable precision if the curve shows sign of reaching a maximum, otherwise both EC50 and Emax estimates may be extremely imprecise. This paper provides a solution by introducing a new parameter (S0) equal to Emax/EC50 that can be used to characterize potency adequately even if there are no signs of a clear maximum. Simulations are presented to investigate the nature of the new parameter and published examples are used as illustration.
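
    A sketch of the reparameterization on synthetic data that never approach a plateau: S0 = Emax/EC50 is estimated directly and stays well determined even though EC50 (and hence Emax) does not:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def emax_s0(c, s0, ec50):
        """Emax model rewritten as E = S0*c / (1 + c/EC50), with S0 = Emax/EC50."""
        return s0 * c / (1.0 + c / ec50)

    c = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])        # concentrations
    e = np.array([0.0, 4.9, 9.6, 22.6, 41.5, 71.0])       # effects, far from plateau
    (s0, ec50), cov = curve_fit(emax_s0, c, e, p0=[5.0, 100.0])
    print(s0, ec50, np.sqrt(np.diag(cov)))   # S0 tight; EC50 (hence Emax) imprecise
    ```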

  1. Enhancing local health department disaster response capacity with rapid community needs assessments: validation of a computerized program for binary attribute cluster sampling.

    PubMed

    Groenewold, Matthew R

    2006-01-01

    Local health departments are among the first agencies to respond to disasters or other mass emergencies. However, they often lack the ability to handle large-scale events. Plans including locally developed and deployed tools may enhance local response. Simplified cluster sampling methods can be useful in assessing community needs after a sudden-onset, short duration event. Using an adaptation of the methodology used by the World Health Organization Expanded Programme on Immunization (EPI), a Microsoft Access-based application for two-stage cluster sampling of residential addresses in Louisville/Jefferson County Metro, Kentucky was developed. The sampling frame was derived from geographically referenced data on residential addresses and political districts available through the Louisville/Jefferson County Information Consortium (LOJIC). The program randomly selected 30 clusters, defined as election precincts, from within the area of interest, and then, randomly selected 10 residential addresses from each cluster. The program, called the Rapid Assessment Tools Package (RATP), was tested in terms of accuracy and precision using data on a dichotomous characteristic of residential addresses available from the local tax assessor database. A series of 30 samples were produced and analyzed with respect to their precision and accuracy in estimating the prevalence of the study attribute. Point estimates with 95% confidence intervals were calculated by determining the proportion of the study attribute values in each of the samples and compared with the population proportion. To estimate the design effect, corresponding simple random samples of 300 addresses were taken after each of the 30 cluster samples. The sample proportion fell within +/-10 absolute percentage points of the true proportion in 80% of the samples. In 93.3% of the samples, the point estimate fell within +/-12.5%, and 96.7% fell within +/-15%. All of the point estimates fell within +/-20% of the true proportion. Estimates of the design effect ranged from 0.926 to 1.436 (mean = 1.157, median = 1.170) for the 30 samples. Although prospective evaluation of its performance in field trials or a real emergency is required to confirm its utility, this study suggests that the RATP, a locally designed and deployed tool, may provide population-based estimates of community needs or the extent of event-related consequences that are precise enough to serve as the basis for the initial post-event decisions regarding relief efforts.
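
    The precision checks described above come down to comparing the cluster-sample variance of the estimated proportion with its simple-random-sample counterpart (the design effect); a sketch on a synthetic 30 x 10 two-stage design:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    cluster_p = np.clip(rng.normal(0.4, 0.1, 30), 0, 1)   # precinct-level prevalence
    sample = rng.binomial(1, cluster_p[:, None], (30, 10))  # 10 addresses per cluster

    p_hat = sample.mean()                                 # overall proportion estimate
    cluster_means = sample.mean(axis=1)
    var_cluster = cluster_means.var(ddof=1) / 30          # variance of p_hat (equal clusters)
    var_srs = p_hat * (1 - p_hat) / 300                   # simple-random-sample variance
    print(p_hat, var_cluster / var_srs)                   # estimate and design effect
    ```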

  2. A highly precise frequency-based method for estimating the tension of an inclined cable with unknown boundary conditions

    NASA Astrophysics Data System (ADS)

    Ma, Lin

    2017-11-01

    This paper develops a method for precisely determining the tension of an inclined cable with unknown boundary conditions. First, the nonlinear motion equation of an inclined cable is derived, and a numerical model of the motion of the cable is proposed using the finite difference method. The proposed numerical model includes the sag-extensibility, flexural stiffness, inclination angle and rotational stiffness at two ends of the cable. Second, the influence of the dynamic parameters of the cable on its frequencies is discussed in detail, and a method for precisely determining the tension of an inclined cable is proposed based on the derivatives of the eigenvalues of the matrices. Finally, a multiparameter identification method is developed that can simultaneously identify multiple parameters, including the rotational stiffness at two ends. This scheme is applicable to inclined cables with varying sag, varying flexural stiffness and unknown boundary conditions. Numerical examples indicate that the method provides good precision. Because the parameters of cables other than tension (e.g., the flexural stiffness and rotational stiffness at the ends) are not accurately known in practical engineering, the multiparameter identification method could further improve the accuracy of cable tension measurements.

  3. Unmanned Aerial System (UAS)-based phenotyping of soybean using multi-sensor data fusion and extreme learning machine

    NASA Astrophysics Data System (ADS)

    Maimaitijiang, Maitiniyazi; Ghulam, Abduwasit; Sidike, Paheding; Hartling, Sean; Maimaitiyiming, Matthew; Peterson, Kyle; Shavers, Ethan; Fishman, Jack; Peterson, Jim; Kadam, Suhas; Burken, Joel; Fritschi, Felix

    2017-12-01

    Estimating crop biophysical and biochemical parameters with high accuracy at low cost is imperative for high-throughput phenotyping in precision agriculture. Although fusion of data from multiple sensors is a common application in remote sensing, less is known about the contribution of low-cost RGB, multispectral and thermal sensors to rapid crop phenotyping. This is due to the fact that (1) simultaneous collection of multi-sensor data using satellites is rare and (2) multi-sensor data collected during a single flight have not been accessible until recent developments in Unmanned Aerial Systems (UASs) and UAS-friendly sensors that allow efficient information fusion. The objective of this study was to evaluate the power of high-spatial-resolution RGB, multispectral and thermal data fusion to estimate soybean (Glycine max) biochemical parameters, including chlorophyll content and nitrogen concentration, and biophysical parameters, including Leaf Area Index (LAI) and above-ground fresh and dry biomass. Multiple low-cost sensors integrated on UASs were used to collect RGB, multispectral, and thermal images throughout the growing season at a site established near Columbia, Missouri, USA. From these images, vegetation indices were extracted, a Crop Surface Model (CSM) was generated, and a model to extract the vegetation fraction was developed. Spectral indices/features were then combined to model and predict crop biophysical and biochemical parameters using Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Extreme Learning Machine based Regression (ELR) techniques. Results showed that: (1) for biochemical variable estimation, multispectral and thermal data fusion provided the best estimates for nitrogen concentration and chlorophyll (Chl) a content (RMSE of 9.9% and 17.1%, respectively), whereas fusion of RGB color-based indices with multispectral data exhibited the largest RMSE (22.6%); the highest accuracy for Chl a + b content estimation was obtained by fusing information from all three sensors, with an RMSE of 11.6%. (2) Among the plant biophysical variables, LAI was best predicted by RGB and thermal data fusion, while multispectral and thermal data fusion was found to be best for biomass estimation. (3) For estimation of the above-mentioned soybean traits from multi-sensor data fusion, ELR yielded promising results compared to PLSR and SVR in this study. This research indicates that fusion of low-cost multi-sensor data within a machine learning framework can provide relatively accurate estimates of plant traits and valuable insight for high-spatial-precision agriculture and plant stress assessment.
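
    Extreme learning machines are the least familiar of the three regressors, so a minimal sketch is given below: an ELM draws the hidden-layer weights at random and solves only the output weights by least squares, which is what makes it fast. The fused feature matrix, target variable, and sizes here are stand-ins, not the study's data.

        import numpy as np

        def elm_fit(X, y, n_hidden=50, seed=0):
            # Extreme learning machine: random input weights, analytic output weights
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], n_hidden))
            b = rng.normal(size=n_hidden)
            H = np.tanh(X @ W + b)                        # random nonlinear feature map
            beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights by least squares
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta

        # Hypothetical fused feature matrix: columns could be RGB indices,
        # multispectral vegetation indices, and canopy temperature
        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 8))
        y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=120)  # stand-in for LAI/biomass

        W, b, beta = elm_fit(X[:90], y[:90])
        pred = elm_predict(X[90:], W, b, beta)
        print("hold-out RMSE: %.3f" % np.sqrt(np.mean((pred - y[90:]) ** 2)))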

  4. Quantum preservation of the measurements precision using ultra-short strong pulses in exact analytical solution

    NASA Astrophysics Data System (ADS)

    Berrada, K.; Eleuch, H.

    2017-09-01

    Various schemes have been proposed to improve parameter-estimation precision. In the present work, we suggest an alternative method to preserve the estimation precision by considering a model that closely describes a realistic experimental scenario. We explore this active way to control and enhance the measurement precision for a two-level quantum system interacting with a classical electromagnetic field using ultra-short strong pulses, with an exact analytical solution, i.e. beyond the rotating wave approximation. In particular, we investigate the variation of the precision with a few-cycle pulse and a smooth phase jump over a finite time interval. We show that, by acting on the shape of the phase transient and other parameters of the considered system, the amount of information may be increased and decays more slowly at long times. These features make two-level systems driven by ultra-short, off-resonant pulses with gradually changing phase good candidates for the implementation of schemes for quantum computation and coherent information processing.

  5. Measurements of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes.

    PubMed

    Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G

    2016-05-09

    The aim of this study was to evaluate the suitability of statistics as experimental precision degree measures for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics that were estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) to evaluate experimental precision in trials with cowpea genotypes.
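
    As a worked illustration, the sketch below computes two of these statistics from a trial's genotype F-value, assuming the commonly used relations h2 = 1 - 1/F (heritability on an entry-mean basis) and selective accuracy SA = sqrt(h2); the F-values are hypothetical, and the paper's exact class limits are not reproduced.

        import numpy as np

        def precision_stats(f_genotype):
            # Assumed relations for trial precision: h2 = 1 - 1/F, SA = sqrt(h2)
            f = np.asarray(f_genotype, dtype=float)
            h2 = 1.0 - 1.0 / f
            sa = np.sqrt(np.clip(h2, 0.0, None))
            return h2, sa

        f_values = [1.5, 4.0, 12.0]          # hypothetical genotype F-values
        h2, sa = precision_stats(f_values)
        for f, h, s in zip(f_values, h2, sa):
            print("F = %5.1f  ->  h2 = %.2f, selective accuracy = %.2f" % (f, h, s))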

  6. THE IMPACT OF POINT-SOURCE SUBTRACTION RESIDUALS ON 21 cm EPOCH OF REIONIZATION ESTIMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trott, Cathryn M.; Wayth, Randall B.; Tingay, Steven J., E-mail: cathryn.trott@curtin.edu.au

    Precise subtraction of foreground sources is crucial for detecting and estimating 21 cm H I signals from the Epoch of Reionization (EoR). We quantify how imperfect point-source subtraction due to limitations of the measurement data set yields structured residual signal in the data set. We use the Cramer-Rao lower bound, as a metric for quantifying the precision with which a parameter may be measured, to estimate the residual signal in a visibility data set due to imperfect point-source subtraction. We then propagate these residuals into two metrics of interest for 21 cm EoR experiments - the angular power spectrum and two-dimensional power spectrum - using a combination of full analytic covariant derivation, analytic variant derivation, and covariant Monte Carlo simulations. This methodology differs from previous work in two ways: (1) it uses information theory to set the point-source position error, rather than assuming a global rms error, and (2) it describes a method for propagating the errors analytically, thereby obtaining the full correlation structure of the power spectra. The methods are applied to two upcoming low-frequency instruments that are proposing to perform statistical EoR experiments: the Murchison Widefield Array and the Precision Array for Probing the Epoch of Reionization. In addition to the actual antenna configurations, we apply the methods to minimally redundant and maximally redundant configurations. We find that for peeling sources above 1 Jy, the amplitude of the residual signal, and its variance, will be smaller than the contribution from thermal noise for the observing parameters proposed for upcoming EoR experiments, and that optimal subtraction of bright point sources will not be a limiting factor for EoR parameter estimation. We then use the formalism to provide an ab initio analytic derivation motivating the 'wedge' feature in the two-dimensional power spectrum, complementing previous discussion in the literature.
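
    The Cramer-Rao machinery can be illustrated with a deliberately simple one-parameter example. The sketch below bounds the precision of a point-source position estimated from noisy samples of a 1-D Gaussian profile; it is not the paper's visibility-domain derivation, and all numbers are illustrative.

        import numpy as np

        # Toy Cramer-Rao bound: precision of a point-source position estimated
        # from noisy samples of a 1-D Gaussian beam profile (illustration only)
        x = np.linspace(-2.0, 2.0, 41)       # sample positions (beam widths)
        sigma_beam = 1.0
        amp, noise = 10.0, 1.0               # per-sample S/N of 10

        def model(x0):
            return amp * np.exp(-0.5 * (x - x0) ** 2 / sigma_beam ** 2)

        # Fisher information for the position parameter x0 at x0 = 0:
        # I = sum_i (d model_i / d x0)^2 / noise^2
        d_model = model(0.0) * (x - 0.0) / sigma_beam ** 2
        fisher = np.sum(d_model ** 2) / noise ** 2
        crlb = 1.0 / fisher                  # lower bound on position variance
        print("position error bound: %.4f beam widths rms" % np.sqrt(crlb))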

  7. Formations of Tethered Spacecraft as Stable Platforms for Far IR and Sub-mm Astronomy

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco B.; Hadaegh, Fred Y.; Shao, Michael; Lorenzini, Enrico C.

    2004-01-01

    In this paper we describe current research in tethered formations for interferometry, and a roadmap to demonstrating the required key technologies via on-ground and in-orbit testing. We propose an integrated kilometer-size tethered spacecraft formation flying concept which enables far IR and sub-mm astronomy observations from space. A rather general model is used to predict the dynamics, control, and estimation performance of formations of spacecraft connected by tethers in LEO and deep space. These models include the orbital and tethered formation dynamics, environmental models, and models of the formation estimator/controller/commander. Both centralized and decentralized control/sensing/estimation schemes are possible, and dynamic ranges of interest for sensing/control are described. Key component/subsystem technologies are described which need both ground-based and in-orbit demonstration prior to their utilization in precision space interferometry missions using tethered formations. Defining an orbiting formation as an ensemble of orbiting spacecraft performing a cooperative task, recent work has demonstrated the validity of tethering the spacecraft to provide the required formation rigidity and to satisfy formation reconfiguration needs such as interferometer baseline control. In our concept, several vehicles are connected to and move along the tether, so that repositioning them requires varying the lengths of the connecting tether links. This feature enables variable and precise baseline control while the system spins around the boresight. The control architecture features an interferometer configuration composed of one central combiner spacecraft and two aligned collector spacecraft. The combiner spacecraft acts as the formation leader and is also where the centralized sensing and estimation functions reside. Some of the issues analyzed with the model are: dynamic modes of deformation of the distributed structure, architecture of the formation sensor, and sources of dynamical perturbation that need to be mitigated for precision operation in space. Examples from numerical simulation of an envisioned scenario in heliocentric orbit demonstrate the potential of the concept for space interferometry.

  8. The theory precision analyse of RFM localization of satellite remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Jianqing; Xv, Biao

    2009-11-01

    The traditional method of assessing the precision of the Rational Function Model (RFM) uses a large number of check points, computing the mean square error by comparing calculated coordinates with known coordinates. This approach is rooted in probability theory: the mean square error is estimated statistically from a large sample, and the estimate can be considered to approach the true value when the sample is large enough. This paper instead approaches the problem from the perspective of survey adjustment, taking the law of propagation of error as its theoretical basis to calculate the theoretical precision of RFM localization. Using SPOT5 three-line array imagery as experimental data, the results of the traditional method and the method described in this paper are compared; the comparison confirms the feasibility of the traditional method and answers the question of its theoretical precision from the survey-adjustment perspective.
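
    The law of propagation of error referred to here takes a compact matrix form: if the ground coordinates are a function of the image measurements, the output covariance is J*Sigma*J^T, with J the Jacobian of the mapping. A minimal sketch with hypothetical partial derivatives and an assumed 0.5-pixel measurement noise:

        import numpy as np

        def propagate(jacobian, cov_in):
            # Law of propagation of error: Sigma_out = J Sigma_in J^T
            return jacobian @ cov_in @ jacobian.T

        J = np.array([[0.8, 0.1],            # hypothetical partial derivatives of
                      [0.2, 0.9]])           # (X, Y) w.r.t. the image measurements
        cov_obs = np.diag([0.5**2, 0.5**2])  # 0.5-pixel measurement noise
        cov_xy = propagate(J, cov_obs)
        print("sigma_X = %.2f, sigma_Y = %.2f" % tuple(np.sqrt(np.diag(cov_xy))))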

  9. Information's role in the estimation of chaotic signals

    NASA Astrophysics Data System (ADS)

    Drake, Daniel Fred

    1998-11-01

    Researchers have proposed several methods designed to recover chaotic signals from noise-corrupted observations. While the methods vary, their qualitative performance does not: in low levels of noise all methods effectively recover the underlying signal; in high levels of noise no method can recover the underlying signal to any meaningful degree of accuracy. Of the methods proposed to date, all represent sub-optimal estimators. So: Is the inability to recover the signal in high noise levels simply a consequence of estimator sub-optimality? Or is estimator failure actually a manifestation of some intrinsic property of chaos itself? These questions are answered by deriving an optimal estimator for a class of chaotic systems and noting that it, too, fails in high levels of noise. An exact, closed-form expression for the estimator is obtained for a class of chaotic systems whose signals are solutions to a set of linear (but noncausal) difference equations. The existence of this linear description circumvents the difficulties normally encountered when manipulating the nonlinear (but causal) expressions that govern chaotic behavior. The reason why even the optimal estimator fails to recover underlying chaotic signals in high levels of noise has its roots in information theory. At such noise levels, the mutual information linking the corrupted observations to the underlying signal is essentially nil, reducing the estimator to a simple guessing strategy based solely on a priori statistics. Entropy, long the common bond between information theory and dynamical systems, is actually one aspect of a far more complete characterization of information sources: the rate distortion function. Determining the rate distortion function associated with the class of chaotic systems considered in this work provides bounds on estimator performance in high levels of noise. Finally, a slight modification of the linear description leads to a method of synthesizing, on limited-precision platforms, "pseudo-chaotic" sequences that mimic true chaotic behavior to any finite degree of precision and duration. The use of such a technique in spread-spectrum communications is considered.

  10. Accurate Visual Heading Estimation at High Rotation Rate Without Oculomotor or Static-Depth Cues

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Perrone, John A.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    It has been claimed that either oculomotor or static depth cues are necessary to provide the signals about self-rotation required for accurate heading estimation at rotation rates of approximately 1 deg/s or more. We tested this hypothesis by simulating self-motion along a curved path with the eyes fixed in the head (plus or minus 16 deg/s of rotation). Curvilinear motion offers two advantages: 1) heading remains constant in retinotopic coordinates, and 2) there is no visual-oculomotor conflict (both actual and simulated eye position remain stationary). We simulated 400 ms of rotation combined with 16 m/s of translation at fixed angles with respect to gaze towards two vertical planes of random dots initially 12 and 24 m away, with a field of view of 45 degrees. Four subjects were asked to fixate a central cross and to respond whether they were translating to the left or right of straight-ahead gaze. From the psychometric curves, heading bias (mean) and precision (semi-interquartile range) were derived. The mean bias over 2-5 runs was 3.0, 4.0, -2.0, and -0.4 deg for the first author and three naive subjects, respectively (positive indicating towards the rotation direction). The mean precision was 2.0, 1.9, 3.1, and 1.6 deg, respectively. The ability of observers to make relatively accurate and precise heading judgments, despite the large rotational flow component, refutes the view that extra-flow-field information is necessary for human visual heading estimation at high rotation rates. Our results support models that process combined translational/rotational flow to estimate heading, but should not be construed to suggest that other cues do not play an important role when they are available to the observer.

  11. Perceptual precision of passive body tilt is consistent with statistically optimal cue integration

    PubMed Central

    Karmali, Faisal; Nicoucar, Keyvan; Merfeld, Daniel M.

    2017-01-01

    When making perceptual decisions, humans have been shown to optimally integrate independent noisy multisensory information, matching maximum-likelihood (ML) limits. Such ML estimators provide a theoretic limit to perceptual precision (i.e., minimal thresholds). However, how the brain combines two interacting (i.e., not independent) sensory cues remains an open question. To study the precision achieved when combining interacting sensory signals, we measured perceptual roll tilt and roll rotation thresholds between 0 and 5 Hz in six normal human subjects. Primary results show that roll tilt thresholds between 0.2 and 0.5 Hz were significantly lower than predicted by a ML estimator that includes only vestibular contributions that do not interact. In this paper, we show how other cues (e.g., somatosensation) and an internal representation of sensory and body dynamics might independently contribute to the observed performance enhancement. In short, a Kalman filter was combined with an ML estimator to match human performance, whereas the potential contribution of nonvestibular cues was assessed using published bilateral loss patient data. Our results show that a Kalman filter model including previously proven canal-otolith interactions alone (without nonvestibular cues) can explain the observed performance enhancements as can a model that includes nonvestibular contributions. NEW & NOTEWORTHY We found that human whole body self-motion direction-recognition thresholds measured during dynamic roll tilts were significantly lower than those predicted by a conventional maximum-likelihood weighting of the roll angular velocity and quasistatic roll tilt cues. Here, we show that two models can each match this “apparent” better-than-optimal performance: 1) inclusion of a somatosensory contribution and 2) inclusion of a dynamic sensory interaction between canal and otolith cues via a Kalman filter model. PMID:28179477
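
    The maximum-likelihood benchmark used in this comparison has a simple closed form for two independent cues: each cue is weighted by its inverse variance, and the combined variance is 1/(1/sigma1^2 + 1/sigma2^2). A minimal sketch with hypothetical single-cue thresholds:

        import numpy as np

        # Maximum-likelihood combination of two independent noisy cues:
        # weights are inverse variances, and the combined variance is never
        # larger than that of the best single cue.
        sigma_canal, sigma_otolith = 2.0, 3.0   # hypothetical single-cue thresholds (deg)

        w1 = 1 / sigma_canal**2
        w2 = 1 / sigma_otolith**2
        sigma_combined = np.sqrt(1.0 / (w1 + w2))
        print("ML-combined threshold: %.2f deg" % sigma_combined)   # ~1.66 deg

        # A measured threshold reliably below this value suggests extra
        # information (another cue, or exploitable cue interactions).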

  12. Surface characterization protocol for precision aspheric optics

    NASA Astrophysics Data System (ADS)

    Sarepaka, RamaGopal V.; Sakthibalan, Siva; Doodala, Somaiah; Panwar, Rakesh S.; Kotaria, Rajendra

    2017-10-01

    In advanced optical instrumentation, aspherics provide an effective performance alternative. Aspheric fabrication and surface metrology, followed by aspheric design, are complementary iterative processes for precision aspheric development. As in fabrication, a holistic approach to aspheric surface characterization is adopted to evaluate the actual surface error and to aim at the delivery of aspheric optics with the desired surface quality. Precision optical surfaces are characterized by profilometry or by interferometry. Aspheric profiles are characterized by contact profilometers, through linear surface scans, to analyze their form, figure, and finish errors. One must ensure that the surface characterization procedure does not add to the resident profile errors (generated during aspheric surface fabrication). This presentation examines the errors introduced after surface generation and during profilometry of aspheric profiles, with the aim of identifying sources of error and optimizing the metrology process. The sources of error during profilometry may be due to: profilometer settings, work-piece placement on the profilometer stage, selection of zenith/nadir points of aspheric profiles, metrology protocols, clear aperture - diameter analysis, computational limitations of the profiler, software issues, etc. At OPTICA, a PGI 1200 FTS contact profilometer (Taylor-Hobson make) is used for this study. Precision optics of various profiles are studied, with due attention to possible sources of errors during characterization, using a multi-directional scan approach for uniformity and repeatability of error estimation. This study provides insight into aspheric surface characterization and helps in optimal aspheric surface production methodology.

  13. The CO2 stimulus for cerebrovascular reactivity: Fixing inspired concentrations vs. targeting end-tidal partial pressures.

    PubMed

    Fisher, Joseph A

    2016-06-01

    Cerebrovascular reactivity (CVR) studies have elucidated the physiology and pathophysiology of cerebral blood flow regulation. A non-invasive, high-spatial-resolution approach uses carbon dioxide (CO2) as the vasoactive stimulus and magnetic resonance techniques to estimate the cerebral blood flow response. CVR is assessed as the ratio of the response change to the stimulus change. Precise control of the stimulus is sought to minimize CVR variability between tests and to show functional differences. Computerized methods targeting end-tidal CO2 partial pressures are precise but expensive. Simpler, improvised methods that fix the inspired CO2 concentrations have been recommended as less expensive, and so more widely accessible. However, these methods have drawbacks that have not been previously presented by those who advocate their use, or those who employ them in their studies. As one of the developers of a computerized method, I provide my perspective on the trade-offs between these two methods. The main concern is that declaring the precision of a fixed inspired concentration of CO2 is misleading: it does not, as implied, translate to precise control of the actual vasoactive stimulus - the arterial partial pressure of CO2. The inherent test-to-test, and therefore subject-to-subject, variability precludes clinical application of findings. Moreover, improvised methods imply widespread duplication of development, assembly time, and costs, yet lack uniformity and quality control. A tabular comparison between the approaches is provided.

  14. A multiplexed magnetic tweezer with precision particle tracking and bi-directional force control.

    PubMed

    Johnson, Keith C; Clemmens, Emilie; Mahmoud, Hani; Kirkpatrick, Robin; Vizcarra, Juan C; Thomas, Wendy E

    2017-01-01

    In the past two decades, methods have been developed to measure the mechanical properties of single biomolecules. One of these methods, magnetic tweezers, is amenable to acquisition of data on many single molecules simultaneously, but to take full advantage of this "multiplexing" ability, it is necessary to simultaneously incorporate many capabilities that have only been demonstrated separately. Our custom-built magnetic tweezer combines high multiplexing, precision bead tracking, and bi-directional force control into a flexible and stable platform for examining single-molecule behavior. This was accomplished using electromagnets, which provide high temporal control of force while achieving force levels similar to permanent magnets via large paramagnetic beads. Here we describe the instrument and its ability to apply 2-260 pN of force on up to 120 beads simultaneously, with a maximum spatial precision of 12 nm, using a variety of bead sizes and experimental techniques. We also demonstrate a novel method for increasing the precision of force estimations on heterogeneous paramagnetic beads using a combination of density separation and bi-directional force correlation, which reduces the coefficient of variation of force from 27% to 6%. We then use the instrument to examine the force dependence of the uncoiling and recoiling velocity of type 1 fimbriae from Escherichia coli (E. coli) bacteria, and find results similar to those of previous studies. This platform provides a simple, effective, and flexible method for efficiently gathering single-molecule force spectroscopy measurements.

  15. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French space agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and it still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits are discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0 cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  16. Differential porosimetry and permeametry for random porous media.

    PubMed

    Hilfer, R; Lemmer, A

    2015-07-01

    Accurate determination of geometrical and physical properties of natural porous materials is notoriously difficult. Continuum multiscale modeling has provided carefully calibrated realistic microstructure models of reservoir rocks with floating point accuracy. Previous measurements using synthetic microcomputed tomography (μ-CT) were based on extrapolation of resolution-dependent properties for discrete digitized approximations of the continuum microstructure. This paper reports continuum measurements of volume and specific surface with full floating point precision. It also corrects an incomplete description of rotations in earlier publications. More importantly, the methods of differential permeametry and differential porosimetry are introduced as precision tools. The continuum microstructure chosen to exemplify the methods is a homogeneous, carefully calibrated and characterized model for Fontainebleau sandstone. The sample has been publicly available since 2010 on the worldwide web as a benchmark for methodical studies of correlated random media. High-precision porosimetry gives the volume and internal surface area of the sample with floating point accuracy. Continuum results with floating point precision are compared to discrete approximations. Differential porosities and differential surface area densities allow geometrical fluctuations to be discriminated from discretization effects and numerical noise. Differential porosimetry and Fourier analysis reveal subtle periodic correlations. The findings uncover small oscillatory correlations with a period of roughly 850μm, thus implying that the sample is not strictly stationary. The correlations are attributed to the deposition algorithm that was used to ensure the grain overlap constraint. Differential permeabilities are introduced and studied. Differential porosities and permeabilities provide scale-dependent information on geometry fluctuations, thereby allowing quantitative error estimates.

  17. Analysis of 14C and 13C in teeth provides precise birth dating and clues to geographical origin

    PubMed Central

    K, Alkass; BA, Buchholz; H, Druid; KL, Spalding

    2011-01-01

    The identification of human bodies in situations when there are no clues as to the person’s identity from circumstantial data, poses a difficult problem to investigators. The determination of age and sex of the body can be crucial in order to limit the search to individuals that are a possible match. We analyzed the proportion of bomb pulse derived carbon-14 (14C) incorporated in the enamel of teeth from individuals from different geographical locations. The ‘bomb pulse’ refers to a significant increase in 14C levels in the atmosphere caused by above ground test detonations of nuclear weapons during the cold war (1955-1963). By comparing 14C levels in enamel with 14C atmospheric levels systematically recorded over time, high precision birth dating of modern biological material is possible. Above ground nuclear bomb testing was largely restricted to a couple of locations in the northern hemisphere, producing differences in atmospheric 14C levels at various geographical regions, particularly in the early phase. Therefore, we examined the precision of 14C birth dating of enamel as a function of time of formation and geographical location. We also investigated the use of the stable isotope 13C as an indicator of geographical origin of an individual. Dental enamel was isolated from 95 teeth extracted from 84 individuals to study the precision of the 14C method along the bomb spike. For teeth formed before 1955 (N = 17), all but one tooth showed negative Δ14C values. Analysis of enamel from teeth formed during the rising part of the bomb-spike (1955-1963, N = 12) and after the peak (>1963, N = 66) resulted in an average absolute date of birth estimation error of 1.9 ±1.4 and 1.3 ± 1.0 years, respectively. Geographical location of an individual had no adverse effect on the precision of year of birth estimation using radiocarbon dating. In 46 teeth, measurement of 13C was also performed. Scandinavian teeth showed a substantially greater depression in average δ13C (−14.8) than teeth from subjects raised in Japan (−13.5), Middle East and North Africa (−12.7) and South America (−10.9). In summary, isotopic analysis of carbon in enamel from a single tooth can give a good estimate of the year of birth of an individual and also provide information about the geographical origin of the individual. This strategy can assist police and forensic authorities when attempting to solve unidentified homicide cases and may facilitate the identification work associated with mass disasters. PMID:21208756
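
    The dating step amounts to intersecting a measured Δ14C value with the atmospheric bomb curve; because the pulse rises and then falls, a single measurement can match two dates. A minimal sketch with coarse illustrative curve values (not a calibration dataset):

        import numpy as np

        # Illustrative bomb-curve values only; real work uses a calibration dataset
        years = np.array([1950, 1955, 1958, 1963, 1970, 1980, 1990, 2000, 2010])
        d14c  = np.array([   0,   20,  200,  700,  550,  280,  150,   80,   40])

        measured = 300.0   # hypothetical enamel measurement (per mil)

        # The pulse rises then falls, so a value can match two dates; report both
        rising = np.interp(measured, d14c[:4], years[:4])
        falling = np.interp(measured, d14c[3:][::-1], years[3:][::-1])
        print("candidate formation years: %.0f (rising limb) or %.0f (falling limb)"
              % (rising, falling))
        # 13C (diet-driven) or knowledge of present-day levels resolves the ambiguity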

  18. Analysis of 14C and 13C in teeth provides precise birth dating and clues to geographical origin.

    PubMed

    Alkass, K; Buchholz, B A; Druid, H; Spalding, K L

    2011-06-15

    The identification of human bodies in situations when there are no clues as to the person's identity from circumstantial data, poses a difficult problem to the investigators. The determination of age and sex of the body can be crucial in order to limit the search to individuals that are a possible match. We analyzed the proportion of bomb pulse derived carbon-14 ((14)C) incorporated in the enamel of teeth from individuals from different geographical locations. The 'bomb pulse' refers to a significant increase in (14)C levels in the atmosphere caused by above ground test detonations of nuclear weapons during the cold war (1955-1963). By comparing (14)C levels in enamel with (14)C atmospheric levels systematically recorded over time, high precision birth dating of modern biological material is possible. Above ground nuclear bomb testing was largely restricted to a couple of locations in the northern hemisphere, producing differences in atmospheric (14)C levels at various geographical regions, particularly in the early phase. Therefore, we examined the precision of (14)C birth dating of enamel as a function of time of formation and geographical location. We also investigated the use of the stable isotope (13)C as an indicator of geographical origin of an individual. Dental enamel was isolated from 95 teeth extracted from 84 individuals to study the precision of the (14)C method along the bomb spike. For teeth formed before 1955 (N=17), all but one tooth showed negative Δ(14)C values. Analysis of enamel from teeth formed during the rising part of the bomb-spike (1955-1963, N=12) and after the peak (>1963, N=66) resulted in an average absolute date of birth estimation error of 1.9±1.4 and 1.3±1.0 years, respectively. Geographical location of an individual had no adverse effect on the precision of year of birth estimation using radiocarbon dating. In 46 teeth, measurement of (13)C was also performed. Scandinavian teeth showed a substantially greater depression in average δ(13)C (-14.8) than teeth from subjects raised in Japan (-13.5), Middle East and North Africa (-12.7) and South America (-10.9). In summary, isotopic analysis of carbon in enamel from a single tooth can give a good estimate of the year of birth of an individual and also provide information about the geographical origin of the individual. This strategy can assist police and forensic authorities when attempting to solve unidentified homicide cases and may facilitate the identification work associated with mass disasters. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  19. Quantum-enhanced metrology for multiple phase estimation with noise

    PubMed Central

    Yue, Jie-Dong; Zhang, Yu-Ran; Fan, Heng

    2014-01-01

    We present a general quantum metrology framework to study the simultaneous estimation of multiple phases in the presence of noise as a discretized model for phase imaging. This approach can lead to nontrivial bounds on the precision of multiphase estimation. Our results show that simultaneous estimation (SE) of multiple phases is always better than individual estimation (IE) of each phase, even in a noisy environment. The utility of the bounds for multiple phase estimation under photon loss channels is exemplified explicitly. When noise is low, these bounds possess the Heisenberg scaling, showing quantum-enhanced precision with an O(d) advantage for SE, where d is the number of phases. However, this O(d) advantage of the SE scheme in the estimation variance may disappear asymptotically when photon loss becomes significant, leaving only a constant advantage over the IE scheme. Potential applications of these results are presented. PMID:25090445

  20. Bayesian forecasting and uncertainty quantifying of stream flows using Metropolis–Hastings Markov Chain Monte Carlo algorithm

    DOE PAGES

    Wang, Hongrui; Wang, Cheng; Wang, Ying; ...

    2017-04-05

    This paper presents a Bayesian approach using the Metropolis-Hastings Markov chain Monte Carlo (MCMC) algorithm and applies it to daily river flow rate forecasting and uncertainty quantification for the Zhujiachuan River, using data collected from Qiaotoubao Gage Station and 13 other gage stations in the Zhujiachuan watershed in China. The proposed method is also compared with conventional maximum likelihood estimation (MLE) for parameter estimation and quantification of the associated uncertainties. While the Bayesian method performs similarly in estimating the mean daily flow rate, it outperforms the conventional MLE method in uncertainty quantification, providing a narrower credible interval than the MLE confidence interval and thus more precise estimation by using related information from regional gage stations. As a result, the Bayesian MCMC method may be more favorable for uncertainty analysis and risk management.
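
    For reference, a minimal random-walk Metropolis-Hastings sampler is sketched below on a toy one-parameter posterior standing in for the flow-rate model; the proposal scale, prior, and data are illustrative assumptions, not those of the study.

        import numpy as np

        def metropolis_hastings(log_post, theta0, n_iter=20000, step=0.5, seed=0):
            # Random-walk Metropolis-Hastings sampler (minimal sketch)
            rng = np.random.default_rng(seed)
            theta = np.asarray(theta0, dtype=float)
            lp = log_post(theta)
            samples = np.empty((n_iter, theta.size))
            for i in range(n_iter):
                prop = theta + step * rng.normal(size=theta.size)  # symmetric proposal
                lp_prop = log_post(prop)
                if np.log(rng.random()) < lp_prop - lp:            # accept/reject
                    theta, lp = prop, lp_prop
                samples[i] = theta
            return samples

        # Toy stand-in for a flow-rate posterior: Gaussian likelihood, vague prior
        obs = np.array([3.1, 2.8, 3.4, 3.0, 2.9])
        log_post = lambda th: -0.5 * np.sum((obs - th[0]) ** 2) - 0.5 * th[0] ** 2 / 100.0

        burned = metropolis_hastings(log_post, [0.0])[5000:, 0]    # drop burn-in
        print("posterior mean %.2f, 95%% credible interval (%.2f, %.2f)"
              % (burned.mean(), *np.percentile(burned, [2.5, 97.5])))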

  1. An evaluation of study design for estimating a time-of-day noise weighting

    NASA Technical Reports Server (NTRS)

    Fields, J. M.

    1986-01-01

    The relative importance of daytime and nighttime noise of the same noise level is represented by a time-of-day weight in noise annoyance models. The high correlations between daytime and nighttime noise were regarded as a major reason that previous social surveys of noise annoyance could not accurately estimate the value of the time-of-day weight. Study designs which would reduce the correlation between daytime and nighttime noise are described. It is concluded that designs based on short term variations in nighttime noise levels would not be able to provide valid measures of response to nighttime noise. The accuracy of the estimate of the time-of-day weight is predicted for designs which are based on long term variations in nighttime noise levels. For these designs it is predicted that it is not possible to form satisfactorily precise estimates of the time-of-day weighting.

  2. Bottom Pressure Tides Along a Line in the Southeast Atlantic Ocean and Comparisons with Satellite Altimetry

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.; Byrne, Deidre A.

    2010-01-01

    Seafloor pressure records, collected at 11 stations aligned along a single ground track of the Topex/Poseidon and Jason satellites, are analyzed for their tidal content. With very low background noise levels and approximately 27 months of high-quality records, tidal constituents can be estimated with unusually high precision. This includes many high-frequency lines up through the seventh-diurnal band. The station deployment provides a unique opportunity to compare with tides estimated from satellite altimetry, point by point along the satellite track, in a region of moderately high mesoscale variability. That variability can significantly corrupt altimeter-based tide estimates, even with 17 years of data. A method to improve the along-track altimeter estimates by correcting the data for nontidal variability is found to yield much better agreement with the bottom-pressure data. The technique should prove useful in certain demanding applications, such as altimetric studies of internal tides.

  3. Combining path integration and remembered landmarks when navigating without vision.

    PubMed

    Kalia, Amy A; Schrater, Paul R; Legge, Gordon E

    2013-01-01

    This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information.

  4. Spatio-temporal water quality mapping from satellite images using geographically and temporally weighted regression

    NASA Astrophysics Data System (ADS)

    Chu, Hone-Jay; Kong, Shish-Jeng; Chang, Chih-Hua

    2018-03-01

    The turbidity (TB) of a water body varies with time and space. Water quality is traditionally estimated via linear regression based on satellite images. However, estimating and mapping water quality require a spatio-temporal nonstationary model, while TB mapping necessitates the use of geographically and temporally weighted regression (GTWR) and geographically weighted regression (GWR) models, both of which are more precise than linear regression. Given the temporal nonstationary models for mapping water quality, GTWR offers the best option for estimating regional water quality. Compared with GWR, GTWR provides highly reliable information for water quality mapping, boasts a relatively high goodness of fit, improves the explanation of variance from 44% to 87%, and shows a sufficient space-time explanatory power. The seasonal patterns of TB and the main spatial patterns of TB variability can be identified using the estimated TB maps from GTWR and by conducting an empirical orthogonal function (EOF) analysis.
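
    The core of GTWR is a locally weighted least-squares fit, with weights decaying in both spatial and temporal distance from the prediction point. A minimal sketch, assuming a Gaussian kernel with hypothetical bandwidths and synthetic calibration data (in practice the bandwidths are tuned, e.g., by cross-validation):

        import numpy as np

        def gtwr_predict(X, y, coords, times, x0, coord0, t0, h_s=1.0, h_t=1.0):
            # GTWR at one space-time location: Gaussian kernel weights over a
            # combined space-time distance, then weighted least squares
            d2 = (np.sum((coords - coord0) ** 2, axis=1) / h_s**2
                  + (times - t0) ** 2 / h_t**2)
            w = np.exp(-0.5 * d2)
            Xd = np.column_stack([np.ones(len(X)), X])      # local intercept + slopes
            WX = Xd * w[:, None]
            beta = np.linalg.solve(Xd.T @ WX, WX.T @ y)
            return beta[0] + beta[1:] @ np.atleast_1d(x0)

        # Hypothetical calibration data: satellite band ratio -> turbidity (NTU)
        rng = np.random.default_rng(3)
        n = 200
        coords = rng.uniform(0, 10, (n, 2))
        times = rng.uniform(0, 12, n)                        # month of acquisition
        band = rng.uniform(0, 1, n)
        tb = 5 + 20 * band + 2 * np.sin(times) + rng.normal(0, 1, n)

        print("local TB estimate: %.1f NTU"
              % gtwr_predict(band[:, None], tb, coords, times,
                             0.5, np.array([5.0, 5.0]), 6.0))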

  5. Combining Path Integration and Remembered Landmarks When Navigating without Vision

    PubMed Central

    Kalia, Amy A.; Schrater, Paul R.; Legge, Gordon E.

    2013-01-01

    This study investigated the interaction between remembered landmark and path integration strategies for estimating current location when walking in an environment without vision. We asked whether observers navigating without vision only rely on path integration information to judge their location, or whether remembered landmarks also influence judgments. Participants estimated their location in a hallway after viewing a target (remembered landmark cue) and then walking blindfolded to the same or a conflicting location (path integration cue). We found that participants averaged remembered landmark and path integration information when they judged that both sources provided congruent information about location, which resulted in more precise estimates compared to estimates made with only path integration. In conclusion, humans integrate remembered landmarks and path integration in a gated fashion, dependent on the congruency of the information. Humans can flexibly combine information about remembered landmarks with path integration cues while navigating without visual information. PMID:24039742

  6. Filtering Methods for Error Reduction in Spacecraft Attitude Estimation Using Quaternion Star Trackers

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil

    2011-01-01

    Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors, 1) extended Kalman filter (EKF) augmented with Markov states, 2) Unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
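
    The first proposal, augmenting the filter with Markov states, can be illustrated in one dimension: a first-order Gauss-Markov state absorbs the slowly varying star-tracker systematic error so that it is estimated rather than treated as white noise. The sketch below filters a simulated residual; all time constants and variances are illustrative assumptions, not flight values.

        import numpy as np

        # One-dimensional sketch: the Markov-state portion of such an EKF,
        # estimating a slowly varying ST bias from measurement residuals
        # z = bias + white noise (illustration only, not the flight filter).
        dt, tau = 1.0, 200.0                  # sample time, bias correlation time (s)
        q = (2e-3) ** 2                       # stationary bias variance (rad^2)
        r = (5e-3) ** 2                       # ST white-noise variance (rad^2)
        phi = np.exp(-dt / tau)               # Gauss-Markov state transition
        qd = q * (1 - phi**2)                 # discrete process noise

        rng = np.random.default_rng(0)
        bias_true, x, p = 4e-3, 0.0, q        # truth, estimate, estimate variance
        for k in range(2000):
            bias_true = phi * bias_true + rng.normal(0, np.sqrt(qd))
            z = bias_true + rng.normal(0, np.sqrt(r))
            x, p = phi * x, phi**2 * p + qd   # propagate
            kgain = p / (p + r)               # update with the ST residual
            x, p = x + kgain * (z - x), (1 - kgain) * p
        print("bias estimate %.4f rad, truth %.4f rad, sigma %.4f"
              % (x, bias_true, np.sqrt(p)))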

  7. Full Bayes Poisson gamma, Poisson lognormal, and zero inflated random effects models: Comparing the precision of crash frequency estimates.

    PubMed

    Aguero-Valverde, Jonathan

    2013-01-01

    In recent years, complex statistical modeling approaches have been proposed to handle the unobserved heterogeneity and the excess of zeros frequently found in crash data, including random effects and zero inflated models. This research compares random effects, zero inflated, and zero inflated random effects models using a full Bayes hierarchical approach. The models are compared not just in terms of goodness-of-fit measures but also in terms of the precision of posterior crash frequency estimates, since the precision of these estimates is vital for the ranking of sites for engineering improvement. Fixed-over-time random effects models are also compared to independent-over-time random effects models. For the crash dataset analyzed, it was found that once random effects are included in the zero inflated models, the probability of being in the zero state is drastically reduced, and the zero inflated models degenerate to their non-zero-inflated counterparts. Fixing the random effects over time also significantly increases the fit of the models and the precision of the crash frequency estimates. The rankings of the fixed-over-time random effects models were found to be very consistent with one another. In addition, the results show that by fixing the random effects over time, the standard errors of the crash frequency estimates are significantly reduced for the majority of the segments at the top of the ranking.
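
    The Poisson-lognormal structure at the heart of these models is easy to simulate: site-level lognormal random effects multiply the Poisson mean, producing the overdispersion (variance greater than the mean) seen in crash data. A minimal sketch with hypothetical coefficients:

        import numpy as np

        # Minimal Poisson-lognormal crash-frequency simulation (illustration only)
        rng = np.random.default_rng(7)
        n_sites, years = 200, 5
        beta0, beta_aadt, sigma_re = -1.0, 0.5, 0.6

        log_aadt = rng.normal(0, 1, n_sites)             # standardized exposure
        u = rng.normal(0, sigma_re, n_sites)             # fixed-over-time random effect
        lam = np.exp(beta0 + beta_aadt * log_aadt + u)   # site mean crashes/year
        crashes = rng.poisson(lam, size=(years, n_sites))

        print("mean %.2f, variance %.2f per site-year" % (crashes.mean(), crashes.var()))
        # variance > mean: the lognormal random effects capture the
        # overdispersion that motivates these models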

  8. In-flight performance analysis of MEMS GPS receiver and its application to precise orbit determination of APOD-A satellite

    NASA Astrophysics Data System (ADS)

    Gu, Defeng; Liu, Ye; Yi, Bin; Cao, Jianfeng; Li, Xie

    2017-12-01

    An experimental satellite mission termed atmospheric density detection and precise orbit determination (APOD) was developed by China and launched on 20 September 2015. The micro-electro-mechanical system (MEMS) GPS receiver provides the basis for precise orbit determination (POD) within the range of a few decimetres. The in-flight performance of the MEMS GPS receiver was assessed. The average number of tracked GPS satellites is 10.7. However, only 5.1 GPS satellites are available for dual-frequency navigation because of the loss of many L2 observations at low elevations. The variations in the multipath error for C1 and P2 were estimated, and the maximum multipath error could reach up to 0.8 m. The average code noises are 0.28 m (C1) and 0.69 m (P2). Using the MEMS GPS receiver, the orbit of the APOD nanosatellite (APOD-A) was precisely determined. Two types of orbit solutions are proposed: a dual-frequency solution and a single-frequency solution. The antenna phase center variations (PCVs) and code residual variations (CRVs) were estimated, and the maximum value of the PCVs is 4.0 cm. After correcting the antenna PCVs and CRVs, the final orbit precision for the dual-frequency and single-frequency solutions were 7.71 cm and 12.91 cm, respectively, validated using the satellite laser ranging (SLR) data, which were significantly improved by 3.35 cm and 25.25 cm. The average RMS of the 6-h overlap differences in the dual-frequency solution between two consecutive days in three dimensions (3D) is 4.59 cm. The MEMS GPS receiver is the Chinese indigenous onboard receiver, which was successfully used in the POD of a nanosatellite. This study has important reference value for improving the MEMS GPS receiver and its application in other low Earth orbit (LEO) nanosatellites.

  9. High resolution remote sensing for reducing uncertainties in urban forest carbon offset life cycle assessments.

    PubMed

    Tigges, Jan; Lakes, Tobia

    2017-10-04

    Urban forests reduce greenhouse gas emissions by storing and sequestering considerable amounts of carbon. However, few studies have considered the local scale of urban forests to effectively evaluate their potential long-term carbon offset. The lack of precise, consistent and up-to-date forest details is challenging for long-term prognoses. Therefore, this review aims to identify uncertainties in urban forest carbon offset assessment and discuss the extent to which such uncertainties can be reduced by recent progress in high resolution remote sensing. We do this by performing an extensive literature review and a case study combining remote sensing and life cycle assessment of urban forest carbon offset in Berlin, Germany. Recent progress in high resolution remote sensing and methods is adequate for delivering more precise details on the urban tree canopy, individual tree metrics, species, and age structures compared to conventional land use/cover class approaches. These area-wide consistent details can update life cycle inventories for more precise future prognoses. Additional improvements in classification accuracy can be achieved by a higher number of features derived from remote sensing data of increasing resolution, but first studies on this subject indicated that a smart selection of features already provides sufficient data that avoids redundancies and enables more efficient data processing. Our case study from Berlin could use remotely sensed individual tree species as consistent inventory of a life cycle assessment. However, a lack of growth, mortality and planting data forced us to make assumptions, therefore creating uncertainty in the long-term prognoses. Regarding temporal changes and reliable long-term estimates, more attention is required to detect changes of gradual growth, pruning and abrupt changes in tree planting and mortality. As such, precise long-term urban ecological monitoring using high resolution remote sensing should be intensified, especially due to increasing climate change effects. This is important for calibrating and validating recent prognoses of urban forest carbon offset, which have so far scarcely addressed longer timeframes. Additionally, higher resolution remote sensing of urban forest carbon estimates can improve upscaling approaches, which should be extended to reach a more precise global estimate for the first time. Urban forest carbon offset can be made more relevant by making more standardized assessments available for science and professional practitioners, and the increasing availability of high resolution remote sensing data and the progress in data processing allows for precisely that.

  10. Attosecond-resolution Hong-Ou-Mandel interferometry.

    PubMed

    Lyons, Ashley; Knee, George C; Bolduc, Eliot; Roger, Thomas; Leach, Jonathan; Gauger, Erik M; Faccio, Daniele

    2018-05-01

    When two indistinguishable photons are each incident on separate input ports of a beamsplitter, they "bunch" deterministically, exiting via the same port as a direct consequence of their bosonic nature. This two-photon interference effect has long held the potential for application in precision measurement of time delays, such as those induced by transparent specimens with unknown thickness profiles. However, the technique has never achieved resolutions significantly better than the few-femtosecond (micrometer) scale other than in a common-path geometry that severely limits applications. We develop the precision of Hong-Ou-Mandel interferometry toward the ultimate limits dictated by statistical estimation theory, achieving few-attosecond (or nanometer path length) scale resolutions in a dual-arm geometry, thus providing access to length scales pertinent to cell biology and monoatomic-layer two-dimensional materials.
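
    The underlying estimation problem can be sketched as fitting the coincidence dip and reading off the delay and its standard error; the Gaussian dip shape and count rates below are illustrative assumptions, not the paper's apparatus or analysis.

        import numpy as np
        from scipy.optimize import curve_fit

        # Recover a time delay from the Hong-Ou-Mandel coincidence dip by
        # fitting its (assumed Gaussian) shape (illustration only)
        def hom_dip(tau, delay, width, rate, visibility):
            return rate * (1 - visibility * np.exp(-((tau - delay) / width) ** 2))

        rng = np.random.default_rng(0)
        tau = np.linspace(-300, 300, 61)                   # scanned delay (fs)
        true = dict(delay=12.0, width=100.0, rate=1000.0, visibility=0.9)
        counts = rng.poisson(hom_dip(tau, **true))         # Poisson counting noise

        popt, pcov = curve_fit(hom_dip, tau, counts, p0=[0.0, 80.0, 900.0, 0.8])
        print("delay = %.2f +/- %.2f fs" % (popt[0], np.sqrt(pcov[0, 0])))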

  11. A Flexure-Based Mechanism for Precision Adjustment of National Ignition Facility Target Shrouds in Three Rotational Degrees of Freedom

    DOE PAGES

    Boehm, K. -J.; Gibson, C. R.; Hollaway, J. R.; ...

    2016-09-01

    This study presents the design of a flexure-based mount allowing adjustment in three rotational degrees of freedom (DOFs) through high-precision set-screw actuators. The requirements of the application called for small but controlled angular adjustments for mounting a cantilevered beam. The proposed design is based on an array of parallel beams to provide sufficiently high stiffness in the translational directions while allowing angular adjustment through the actuators. A simplified physical model in combination with standard beam theory was applied to estimate the deflection profile and maximum stresses in the beams. A finite element model was built to calculate the stresses and beam profiles for scenarios in which the flexure is simultaneously actuated in more than one DOF.

  12. Precision Departure Release Capability (PDRC) Overview and Results: NASA to FAA Research Transition

    NASA Technical Reports Server (NTRS)

    Engelland, Shawn; Davis, Tom.

    2013-01-01

    NASA researchers developed the Precision Departure Release Capability (PDRC) concept to improve the tactical departure scheduling process. The PDRC system comprises: 1) a surface automation system that computes ready time predictions and departure runway assignments, 2) an en route scheduling automation tool that uses this information to estimate ascent trajectories to the merge point and computes release times, and 3) an interface that provides two-way communication between the two systems. To minimize technology transfer issues and facilitate its adoption by TMCs and Frontline Managers (FLMs), NASA developed the PDRC prototype using the Surface Decision Support System (SDSS) as the Tower surface automation tool, a research version of the FAA TMA (RTMA) as the en route automation tool, and a digital interface between the two DSTs to facilitate coordination.

  13. An extension of the Saltykov method to quantify 3D grain size distributions in mylonites

    NASA Astrophysics Data System (ADS)

    Lopez-Sanchez, Marco A.; Llana-Fúnez, Sergio

    2016-12-01

    The estimation of 3D grain size distributions (GSDs) in mylonites is key to understanding the rheological properties of crystalline aggregates and to constraining dynamic recrystallization models. This paper investigates whether a common stereological method, the Saltykov method, is appropriate for the study of GSDs in mylonites. In addition, we present a new stereological method, named the two-step method, which estimates a lognormal probability density function describing the 3D GSD. Both methods are tested for reproducibility and accuracy using natural and synthetic data sets. The main conclusion is that both methods are accurate and simple enough to be systematically used in recrystallized aggregates with near-equant grains. The Saltykov method is particularly suitable for estimating the volume percentage of particular grain-size fractions with an absolute uncertainty of ±5 in the estimates. The two-step method is suitable for quantifying the shape of the actual 3D GSD in recrystallized rocks using a single value, the multiplicative standard deviation (MSD) parameter, and providing a precision in the estimate typically better than 5%. The novel method provides a MSD value in recrystallized quartz that differs from previous estimates based on apparent 2D GSDs, highlighting the inconvenience of using apparent GSDs for such tasks.
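
    The lognormal description used by the two-step method can be illustrated directly: the 3D GSD is summarized by its median and the multiplicative standard deviation MSD = exp(sigma), where sigma is the shape parameter of the fitted lognormal. A minimal sketch with synthetic grain sizes:

        import numpy as np
        from scipy import stats

        # Fit a lognormal GSD and report median and MSD (illustration only)
        rng = np.random.default_rng(0)
        true_median, true_msd = 40.0, 1.8                 # microns; MSD = exp(sigma)
        grain_sizes = rng.lognormal(np.log(true_median), np.log(true_msd), 500)

        # Fit a lognormal with the location parameter fixed at zero
        shape, _, scale = stats.lognorm.fit(grain_sizes, floc=0)
        print("median = %.1f um, MSD = %.2f (true 40.0, 1.80)"
              % (scale, np.exp(shape)))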

  14. Evaluation for Water Conservation in Agriculture: Using a Multi-Method Econometric Approach

    NASA Astrophysics Data System (ADS)

    Ramirez, A.; Eaton, D. J.

    2012-12-01

    Since the 1960's, farmers have implemented new irrigation technology to increase crop production and planting acreage. At that time, technology responded to the increasing demand for food due to world population growth. Currently, the problem of decreased water supply threatens to limit agricultural production. Uncertain precipitation patterns, from prolonged droughts to irregular rains, will continue to hamper planting operations, and farmers are further limited by an increased competition for water from rapidly growing urban areas. Irrigation technology promises to reduce water usage while maintaining or increasing farm yields. The challenge for water managers and policy makers is to quantify and redistribute these efficiency gains as a source of 'new water.' Using conservation in farming as a source of 'new water' requires accurately quantifying the efficiency gains of irrigation technology under farmers' actual operations and practices. From a water resource management and policy perspective, the efficiency gains from conservation in farming can be redistributed to municipal, industrial and recreational uses. This paper presents a methodology that water resource managers can use to statistically verify the water savings attributable to conservation technology. The specific conservation technology examined in this study is precision leveling, and the study includes a mixed-methods approach using four different econometric models: Ordinary Least Squares, Fixed Effects, Propensity Score Matching, and Hierarchical Linear Models. These methods are used for ex-post program evaluation where random assignment is not possible, and they could be employed to evaluate agricultural conservation programs, where participation is often self-selected. The principal method taken in this approach is Hierarchical Linear Models (HLM), a useful model for agriculture because it incorporates the hierarchical nature of the data (fields, tenants, and landowners) as well as crop rotation (fields in and out of production). The other three methods provide verification of the accuracy of the HLM model and create a robust comparison of the water savings estimates. Seventeen factors were used to isolate the effect of precision leveling from variations in climate, investments in other irrigation improvements, and farmers' management skills. These statistical analyses yield accurate water savings estimates because they consider farmers' actual irrigation technology and practices. Results suggest that savings from water conservation technology under farmers' actual production systems and management are less than those reported by experimental field studies. These water savings measure the 'in situ' effect of the technology, considering farmers' actual irrigation practices and technology. In terms of the accuracy of the models, HLM provides the most precise estimate of the impact of precision leveling on a field's water usage. The HLM estimate was within the 95% confidence interval of the other three models, thus verifying the accuracy and robustness of the statistical findings and model.

  15. Skinfold Prediction Equations Fail to Provide an Accurate Estimate of Body Composition in Elite Rugby Union Athletes of Caucasian and Polynesian Ethnicity.

    PubMed

    Zemski, Adam J; Broad, Elizabeth M; Slater, Gary J

    2018-01-01

    Body composition in elite rugby union athletes is routinely assessed using surface anthropometry, which can be used to provide estimates of absolute body composition via regression equations. This study assesses the ability of available skinfold equations to estimate body composition in elite rugby union athletes, who have unique physique traits and divergent ethnicity. The development of sport-specific and ethnicity-sensitive equations was also pursued. Forty-three male international Australian rugby union athletes of Caucasian and Polynesian descent underwent surface anthropometry and dual-energy X-ray absorptiometry (DXA) assessment. Body fat percentage (BF%) was estimated using five previously developed equations and compared to DXA measures. Novel sport- and ethnicity-sensitive prediction equations were developed using forward-selection multiple regression analysis. Existing skinfold equations provided unsatisfactory estimates of BF% in elite rugby union athletes, with all equations demonstrating a 95% prediction interval in excess of 5%. The equations tended to underestimate BF% at low levels of adiposity and overestimate it at higher levels, regardless of ethnicity. The novel equations explained a similar amount of variance to those previously developed (Caucasians 75%, Polynesians 90%). The use of skinfold equations, including the newly created ones, cannot be supported for estimating absolute body composition. Until a population-specific equation can be validated to estimate body composition precisely, it is advocated to use a proven method such as DXA when absolute measures of lean and fat mass are desired, and to use raw anthropometry data routinely to track body composition change.
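
    The agreement analysis the abstract relies on can be illustrated with a Bland-Altman-style comparison of skinfold-predicted BF% against DXA. The data below are synthetic, and the "skinfold estimate" is a generic biased, noisy transform, not one of the five equations evaluated.

    ```python
    # Sketch: bias and 95% limits of agreement between a skinfold BF%
    # estimate and DXA. All numbers synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    dxa_bf = rng.normal(15, 4, 43)                    # DXA BF% (synthetic)
    skinfold_bf = dxa_bf + rng.normal(1.0, 2.5, 43)   # biased, noisy estimate

    diff = skinfold_bf - dxa_bf
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                     # 95% limits of agreement
    print(f"bias = {bias:+.1f} BF%, limits of agreement = +/-{loa:.1f} BF%")
    # Limits wider than +/-5 BF% would mirror the paper's conclusion that
    # the equations are too imprecise for individual athletes.
    ```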

  16. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    NASA Astrophysics Data System (ADS)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables accurate and precise joint estimation of the 3D location and photon counts of SMs using various PSFs under conditions of high molecular density and low SBR.
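
    For orientation, here is a minimal sketch of the generic estimation problem RoSE addresses: jointly fitting an emitter's position and photon count to a pixelated image under Poisson noise. This uses a plain Gaussian-PSF maximum-likelihood fit, not the RoSE algorithm itself; the grid size, PSF width, and photon numbers are arbitrary choices.

    ```python
    # Sketch: joint position/photon-count estimation for one emitter
    # via Poisson maximum likelihood. Not the RoSE algorithm.
    import numpy as np
    from scipy.optimize import minimize

    def psf(x0, y0, sigma, grid):
        """Pixelated Gaussian PSF, normalized to unit total intensity."""
        xx, yy = grid
        g = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
        return g / g.sum()

    rng = np.random.default_rng(7)
    grid = np.meshgrid(np.arange(15), np.arange(15))
    x_t, y_t, n_t, b_t = 7.3, 6.8, 1000.0, 20.0   # true x, y, photons, background
    lam = n_t * psf(x_t, y_t, 1.3, grid) + b_t / 225
    img = rng.poisson(lam)

    def nll(p):
        """Poisson negative log-likelihood of the image given parameters."""
        x0, y0, n, b = p
        mu = np.maximum(n * psf(x0, y0, 1.3, grid) + b / 225, 1e-12)
        return np.sum(mu - img * np.log(mu))

    fit = minimize(nll, x0=[7.0, 7.0, 800.0, 10.0], method="Nelder-Mead")
    print("x, y, photons, background =", np.round(fit.x, 2))
    ```

    Robust methods such as RoSE go beyond this single-emitter fit by handling overlapping emitters and structured background, which is where plain per-emitter MLE breaks down.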

  17. A study of attitude control concepts for precision-pointing non-rigid spacecraft

    NASA Technical Reports Server (NTRS)

    Likins, P. W.

    1975-01-01

    Attitude control concepts for use onboard structurally nonrigid spacecraft that must be pointed with great precision are examined. The task of determining the eigenproperties of a system of linear time-invariant equations (in terms of hybrid coordinates) representing the attitude motion of a flexible spacecraft is discussed. Literal characteristics are developed for the associated eigenvalues and eigenvectors of the system. A method is presented for determining the poles and zeros of the transfer function describing the attitude dynamics of a flexible spacecraft characterized by hybrid coordinate equations. Alterations are made to linear regulator and observer theory to accommodate modeling errors. The results show that a model error vector, which evolves from an error system, can be added to a reduced system model, estimated by an observer, and used by the control law to render the system less sensitive to uncertain magnitudes and phase relations of truncated modes and to external disturbance effects. A hybrid coordinate formulation based on assumed mode shapes, rather than the usual finite element approach, is also provided.
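
    The eigenvalue analysis described above can be sketched on a toy model: a rigid body coupled to one lightly damped flexible mode, written in state-space form. The inertia, modal frequency, damping, and coupling coefficient below are illustrative values, not parameters from the report.

    ```python
    # Sketch: poles of a rigid-body-plus-one-flexible-mode attitude model.
    # All parameter values are assumed for illustration.
    import numpy as np

    I = 100.0               # spacecraft inertia (kg m^2), assumed
    wn, zeta = 2.0, 0.005   # flexible-mode frequency (rad/s) and damping
    c = 0.3                 # rigid-flex coupling coefficient, assumed
    # State: [theta, theta_dot, eta, eta_dot] (attitude + modal coordinate)
    A = np.array([[0, 1, 0, 0],
                  [0, 0, c * wn**2 / I, 2 * zeta * wn * c / I],
                  [0, 0, 0, 1],
                  [0, 0, -wn**2, -2 * zeta * wn]])
    poles = np.linalg.eigvals(A)
    print(np.round(poles, 4))   # double pole at 0 (rigid body) plus a
                                # lightly damped flexible pair near +/- j*wn
    ```

    Truncating such modes from the design model is exactly what the model error vector in the abstract compensates for: the controller sees only the rigid-body poles, and the observer estimates the residual effect of the discarded flexible dynamics.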

  18. Nonlinear unbiased minimum-variance filter for Mars entry autonomous navigation under large uncertainties and unknown measurement bias.

    PubMed

    Xiao, Mengli; Zhang, Yongbo; Fu, Huimin; Wang, Zhihua

    2018-05-01

    A high-precision navigation algorithm is essential for future Mars pinpoint landing missions. The unknown inputs caused by large uncertainties in atmospheric density and aerodynamic coefficients, as well as unknown measurement biases, can cause large estimation errors in conventional Kalman filters. This paper proposes a derivative-free version of the nonlinear unbiased minimum-variance filter for Mars entry navigation. The filter estimates the state and the unknown measurement biases simultaneously while remaining derivative-free, yielding a high-precision algorithm for Mars entry navigation. Integrated IMU/radio-beacon navigation is used in the simulation, and the results show that, with or without radio blackout, the proposed filter achieves more accurate state estimates than the conventional unscented Kalman filter, demonstrating its suitability for high-precision Mars entry navigation.
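
    The core idea of joint state-and-bias estimation can be shown with a toy linear Kalman filter whose state vector is augmented with an unknown constant measurement bias. This is a generic illustration of bias augmentation, not the paper's derivative-free minimum-variance filter or its Mars-entry dynamics; all matrices and noise levels are assumed.

    ```python
    # Sketch: Kalman filter with the state augmented by a measurement
    # bias, so state and bias are estimated jointly. Toy 1D system.
    import numpy as np

    rng = np.random.default_rng(3)
    F = np.array([[1.0, 0.1], [0.0, 1.0]])           # position/velocity model
    Fa = np.block([[F, np.zeros((2, 1))],
                   [np.zeros((1, 2)), np.eye(1)]])   # bias modeled as constant
    H = np.array([[1.0, 0.0, 1.0]])                  # measurement sees pos + bias
    Q = np.diag([1e-4, 1e-4, 1e-8])
    R = np.array([[0.04]])

    x = np.zeros(3); P = np.eye(3)
    true_x = np.array([0.0, 1.0]); bias = 0.5        # unknown to the filter
    for _ in range(100):
        true_x = F @ true_x
        z = true_x[0] + bias + rng.normal(0, 0.2)
        x = Fa @ x; P = Fa @ P @ Fa.T + Q            # predict
        S = H @ P @ H.T + R                          # update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (z - H @ x)).ravel()
        P = (np.eye(3) - K @ H) @ P
    print("estimated bias =", round(x[2], 3))        # converges toward 0.5
    ```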

  19. Evaluation of consumer monitors to measure particulate matter.

    PubMed

    Sousan, Sinan; Koehler, Kirsten; Hallett, Laura; Peters, Thomas M

    2017-05-01

    Recently, inexpensive (<$300) consumer aerosol monitors (CAMs) targeted for use in homes have become available. We evaluated the accuracy, bias, and precision of three CAMs (Foobot from Airoxlab, Speck from Carnegie Mellon University, and AirBeam from HabitatMap) for measuring mass concentrations in occupational settings. In a laboratory study, PM2.5 measured with the CAMs and a medium-cost aerosol photometer (personal DataRAM 1500, Thermo Scientific) was compared to that from reference instruments for three aerosols (salt, welding fume, and Arizona road dust, ARD) at concentrations up to 8,500 µg/m3. Three units of each type of CAM were included to estimate precision. Compared to reference instruments, mass concentrations measured with the Foobot (r = 0.99) and the medium-cost photometer (r = 0.99) were strongly correlated, whereas those from the Speck (r = 0.88-0.99) and AirBeam (r = 0.70-0.96) were less correlated. Foobot bias was -12% for ARD, with measurements similar to the medium-cost instrument, but below -46% for salt and welding fume aerosols. Speck bias was 18% for salt and ARD and -86% for welding fume. AirBeam bias was -36% for salt and -83% for welding fume. All three photometers had a bias below -82% for welding fume. Precision was excellent for the Foobot (coefficient of variation: 5% to 8%) and AirBeam (2% to 9%) but poorer for the Speck (8% to 25%). These findings suggest that the Foobot, with a linear response to different aerosol types and good precision, can provide reasonable estimates of PM2.5 in the workplace after site-specific calibration to account for particle size and composition.
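
    The two metrics reported throughout the abstract, percent bias against a reference instrument and coefficient of variation across duplicate monitors, are straightforward to compute. The sketch below uses made-up numbers, not the study's data.

    ```python
    # Sketch: percent bias and precision (CV) for three duplicate
    # monitors against a reference instrument. Values are synthetic.
    import numpy as np

    reference = np.array([100.0, 500.0, 1000.0, 4000.0])   # µg/m3
    monitors = np.array([[90, 460,  930, 3600],            # unit 1
                         [95, 480,  960, 3700],            # unit 2
                         [88, 450,  910, 3550]])           # unit 3

    mean_cam = monitors.mean(axis=0)
    bias_pct = 100 * (mean_cam - reference) / reference
    cv_pct = 100 * monitors.std(axis=0, ddof=1) / mean_cam  # precision

    print("bias (%):", np.round(bias_pct, 1))
    print("CV (%):  ", np.round(cv_pct, 1))
    ```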

  20. Results of external quality-assurance program for the National Atmospheric Deposition Program and National Trends Network during 1985

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.

    1988-01-01

    External quality-assurance monitoring of the National Atmospheric Deposition Program (NADP) and National Trends Network (NTN) was performed by the U.S. Geological Survey during 1985. The monitoring consisted of three primary programs: (1) an intersite comparison program designed to assess the precision and accuracy of onsite pH and specific-conductance measurements made by NADP and NTN site operators; (2) a blind audit sample program designed to assess the effect of routine field handling on the precision and bias of NADP and NTN wet deposition data; and (3) an interlaboratory comparison program designed to compare analytical data from the laboratory processing NADP and NTN samples with data produced by other laboratories routinely analyzing wet deposition samples, and to provide estimates of individual laboratory precision. An average of 94% of site operators participated in the four voluntary intersite comparisons during 1985. A larger percentage of participating site operators met the accuracy goal for specific-conductance measurements (average, 87%) than for pH measurements (average, 67%). Overall precision depended on the actual specific conductance of the test solution and was independent of its pH. Data from the blind audit sample program indicated slight positive biases resulting from routine field handling for all analytes except specific conductance; these biases were not large enough to be significant for most data users. The blind audit data also indicated that decreases in hydrogen-ion concentration were accompanied by decreases in specific conductance. Precision estimates derived from the blind audit sample program indicate that the major source of uncertainty in wet deposition data is the routine field handling that each sample receives. Results of the interlaboratory comparison program were similar to those of previous years' evaluations, indicating that the participating laboratories produced comparable data when analyzing identical wet deposition samples, and that the laboratory processing NADP and NTN samples achieved the best analyte precision of the participating laboratories.
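
    The laboratory-precision estimates described above boil down to pooling the within-sample variability of replicate analyses of identical solutions. A minimal sketch, with synthetic values standing in for NADP/NTN data:

    ```python
    # Sketch: pooled standard deviation from replicate analyses of
    # identical audit solutions. Analyte and values are hypothetical.
    import numpy as np

    rng = np.random.default_rng(5)
    # Ten audit solutions, each analyzed in triplicate (e.g., sulfate, mg/L)
    true_conc = rng.uniform(1.0, 3.0, 10)
    replicates = true_conc[:, None] + rng.normal(0, 0.05, (10, 3))

    # Pooled within-solution standard deviation = precision estimate
    within_var = replicates.var(axis=1, ddof=1)
    pooled_sd = np.sqrt(within_var.mean())
    bias = (replicates.mean(axis=1) - true_conc).mean()  # handling bias
    print(f"precision (pooled SD) = {pooled_sd:.3f} mg/L, "
          f"bias = {bias:+.3f} mg/L")
    ```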
