Sample records for case-by-case error estimates

  1. Evaluating EIV, OLS, and SEM Estimators of Group Slope Differences in the Presence of Measurement Error: The Single-Indicator Case

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2012-01-01

    Measurement error significantly biases interaction effects and distorts researchers' inferences regarding interactive hypotheses. This article focuses on the single-indicator case and shows how to accurately estimate group slope differences by disattenuating interaction effects with errors-in-variables (EIV) regression. New analytic findings were…
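
    The disattenuation idea can be illustrated with a toy simulation: under classical measurement error, the OLS slope shrinks by the reliability of the observed predictor, so dividing each group's slope by an assumed known reliability approximately recovers the true slope difference. This is only a minimal sketch of the attenuation/correction logic, not the article's full EIV estimator; all numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
reliability = 0.7          # assumed known: var(X) / var(X + e)

def noisy_slope(beta, n, reliability):
    """OLS slope of y on an error-contaminated regressor W = X + e."""
    x = rng.normal(size=n)                       # true predictor, variance 1
    err_var = (1 - reliability) / reliability    # so var(x)/var(w) = reliability
    w = x + rng.normal(scale=np.sqrt(err_var), size=n)
    y = beta * x + rng.normal(scale=0.5, size=n)
    return np.polyfit(w, y, 1)[0]

b_group1 = noisy_slope(1.0, n, reliability)      # group 1 true slope 1.0
b_group2 = noisy_slope(0.6, n, reliability)      # group 2 true slope 0.6

naive_diff = b_group1 - b_group2                 # attenuated toward 0
eiv_diff = (b_group1 - b_group2) / reliability   # reliability-corrected difference
print(f"naive slope difference:      {naive_diff:.3f}")   # ≈ 0.4 * 0.7 = 0.28
print(f"disattenuated difference:    {eiv_diff:.3f}")     # ≈ 0.4
```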

  2. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters

    PubMed Central

    Park, Chan Gook

    2018-01-01

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms. PMID:29690539

  3. Aniseikonia quantification: error rate of rule of thumb estimation.

    PubMed

    Lubkin, V; Shippman, S; Bennett, G; Meininger, D; Kramer, P; Poppinga, P

    1999-01-01

    To find the error rate in quantifying aniseikonia by using "Rule of Thumb" estimation in comparison with proven space eikonometry. Study 1: 24 adult pseudophakic individuals were measured for anisometropia and astigmatic interocular difference. Rule of Thumb quantification for prescription was calculated and compared with aniseikonia measurement by the classical Essilor Projection Space Eikonometer. Study 2: parallel analysis was performed on 62 consecutive phakic patients from our strabismus clinic group. Frequency of error: For Group 1 (24 cases): 5 (or 21%) were equal (i.e., 1% or less difference); 16 (or 67%) were greater (more than 1% different); and 3 (13%) were less by Rule of Thumb calculation in comparison to aniseikonia determined on the Essilor eikonometer. For Group 2 (62 cases): 45 (or 73%) were equal (1% or less); 10 (or 16%) were greater; and 7 (or 11%) were lower in the Rule of Thumb calculations in comparison to Essilor eikonometry. Magnitude of error: In Group 1, in 10/24 (29%) aniseikonia by Rule of Thumb estimation was 100% or more greater than by space eikonometry, and in 6 of those ten by 200% or more. In Group 2, in 4/62 (6%) aniseikonia by Rule of Thumb estimation was 200% or more greater than by space eikonometry. The frequency and magnitude of apparent clinical errors of Rule of Thumb estimation are disturbingly large. This problem is greatly magnified by the time, effort, and cost of prescribing and executing an aniseikonic correction for a patient. The higher the refractive error, the greater the anisometropia, and the worse the errors in Rule of Thumb estimation of aniseikonia. Accurate eikonometric methods and devices should be employed in all cases where such measurements can be made. Rule of Thumb estimations should be limited to cases where such subjective testing and measurement cannot be performed, as in infants after unilateral cataract surgery.

  4. Analyzing self-controlled case series data when case confirmation rates are estimated from an internal validation sample.

    PubMed

    Xu, Stanley; Clarke, Christina L; Newcomer, Sophia R; Daley, Matthew F; Glanz, Jason M

    2018-05-16

    Vaccine safety studies are often electronic health record (EHR)-based observational studies. These studies often face significant methodological challenges, including confounding and misclassification of adverse events. Vaccine safety researchers use the self-controlled case series (SCCS) study design to handle confounding and employ medical chart review to ascertain cases that are identified using EHR data. However, for common adverse events, limited resources often make it impossible to adjudicate all adverse events observed in electronic data. In this paper, we considered four approaches for analyzing SCCS data with confirmation rates estimated from an internal validation sample: (1) observed cases, (2) confirmed cases only, (3) known confirmation rate, and (4) multiple imputation (MI). We conducted a simulation study to evaluate these four approaches using type I error rates, percent bias, and empirical power. Our simulation results suggest that when misclassification of adverse events is present, approaches such as observed cases, confirmed cases only, and known confirmation rate may inflate the type I error, yield biased point estimates, and affect statistical power. The multiple imputation approach considers the uncertainty of estimated confirmation rates from an internal validation sample, and yields a proper type I error rate, a largely unbiased point estimate, a proper variance estimate, and adequate statistical power. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
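
    As a schematic of the multiple-imputation idea only (not the authors' implementation), the sketch below draws a confirmation rate from the Beta posterior implied by a hypothetical internal validation sample, thins the observed event counts accordingly, computes a simplified two-window log incidence rate ratio, and pools the imputations with Rubin's rules. All counts and person-times are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: observed (unadjudicated) event counts and person-time
a_obs, t_risk = 120, 30.0      # events, person-years in the post-exposure risk window
b_obs, t_ctrl = 300, 120.0     # events, person-years in the control window
k_conf, n_val = 45, 60         # internal validation sample: 45 of 60 reviewed events confirmed

M = 50                          # number of imputations
est, var = [], []
for _ in range(M):
    # draw a confirmation rate from its Beta posterior (uniform prior)
    p = rng.beta(k_conf + 1, n_val - k_conf + 1)
    # impute confirmed event counts by binomial thinning of the observed counts
    a = rng.binomial(a_obs, p)
    b = rng.binomial(b_obs, p)
    if a == 0 or b == 0:
        continue
    log_irr = np.log((a / t_risk) / (b / t_ctrl))   # simplified log incidence rate ratio
    est.append(log_irr)
    var.append(1.0 / a + 1.0 / b)                   # usual Poisson variance approximation

# Rubin's rules: combine within- and between-imputation variability
est, var = np.array(est), np.array(var)
q_bar = est.mean()
W = var.mean()
B = est.var(ddof=1)
T = W + (1 + 1 / len(est)) * B
print(f"pooled log IRR = {q_bar:.3f}, total SE = {np.sqrt(T):.3f}")
```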

  5. Errors in Representing Regional Acid Deposition with Spatially Sparse Monitoring: Case Studies of the Eastern US Using Model Predictions

    EPA Science Inventory

    The current study uses case studies of model-estimated regional precipitation and wet ion deposition to estimate errors in corresponding regional values derived from the means of site-specific values within regions of interest located in the eastern US. The mean of model-estimate...

  6. Approximation of Bit Error Rates in Digital Communications

    DTIC Science & Technology

    2007-06-01

    This report (DSTO-TN-0761) investigates the estimation of bit error rates in digital communications, motivated by … recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase…
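
    The report's differentially coherent QPSK analysis is not reproduced in the indexed snippet above, so as a generic illustration of what estimating a bit error rate involves, the hedged sketch below compares the closed-form BPSK error rate over an AWGN channel with a Monte Carlo estimate.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def bpsk_ber_theory(ebno_db):
    """Exact BPSK bit error rate over AWGN: Q(sqrt(2 Eb/N0)) = 0.5 erfc(sqrt(Eb/N0))."""
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebno))

def bpsk_ber_simulated(ebno_db, n_bits=1_000_000):
    """Monte Carlo estimate of the same error rate."""
    ebno = 10 ** (ebno_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                                  # map {0,1} -> {-1,+1}, Eb = 1
    noise = rng.normal(scale=np.sqrt(1 / (2 * ebno)), size=n_bits)
    return np.mean(((symbols + noise) > 0) != bits)

for snr_db in (0, 4, 8):
    print(snr_db, bpsk_ber_theory(snr_db), bpsk_ber_simulated(snr_db))
```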

  7. Use of Quality Controlled AIRS Temperature Soundings to Improve Forecast Skill

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Reale, Oreste; Iredell, Lena

    2010-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU-A are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. Also included are the clear column radiances used to derive these products which are representative of the radiances AIRS would have seen if there were no clouds in the field of view. All products also have error estimates. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1K, and layer precipitable water with an rms error of 20 percent, in cases with up to 90 percent effective cloud cover. The products are designed for data assimilation purposes for the improvement of numerical weather prediction, as well as for the study of climate and meteorological processes. With regard to data assimilation, one can use either the products themselves or the clear column radiances from which the products were derived. The AIRS Version 5 retrieval algorithm is now being used operationally at the Goddard DISC in the routine generation of geophysical parameters derived from AIRS/AMSU data. A major innovation in Version 5 is the ability to generate case-by-case level-by-level error estimates for retrieved quantities and clear column radiances, and the use of these error estimates for Quality Control. The temperature profile error estimates are used to determine a case-by-case characteristic pressure p_best, down to which the profile is considered acceptable for data assimilation purposes. The characteristic pressure p_best is determined by comparing the case-dependent error estimate δT(p) to the threshold values ΔT(p). The AIRS Version 5 data set provides error estimates of T(p) at all levels, and also profile-dependent values of p_best based on use of a Standard profile-dependent threshold ΔT(p). These Standard thresholds were designed as a compromise between optimal use for data assimilation purposes, which require the highest accuracy (tighter Quality Control), and climate purposes, which require more spatial coverage (looser Quality Control). Subsequent research using Version 5 soundings and error estimates showed that tighter Quality Control performs better for data assimilation purposes, while looser Quality Control (better spatial coverage) performs better for climate purposes. We conducted a number of data assimilation experiments using the NASA GEOS-5 Data Assimilation System as a step toward finding an optimum balance of spatial coverage and sounding accuracy with regard to improving forecast skill. The model was run at a horizontal resolution of 0.5 degree latitude x 0.67 degree longitude with 72 vertical levels. These experiments were run during four different seasons, each using a different year. The AIRS temperature profiles were presented to the GEOS-5 analysis as rawinsonde profiles, and the profile error estimates δT(p) were used as the uncertainty for each measurement in the data assimilation process.
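
    A schematic of the quality-control rule described above, under the assumption that the case-dependent error-estimate profile δT(p) is simply compared level by level against a threshold profile ΔT(p), accepting the retrieval down to the deepest pressure that still passes; function and variable names are hypothetical and the numbers are illustrative only.

```python
import numpy as np

def find_p_best(pressure, delta_t_case, delta_t_threshold):
    """Deepest pressure level down to which the case-dependent error estimate
    stays within the threshold profile. `pressure` is ordered top of
    atmosphere -> surface (increasing hPa)."""
    p_best = pressure[0]                    # at minimum, accept the topmost level
    for p, err, thr in zip(pressure, delta_t_case, delta_t_threshold):
        if err <= thr:
            p_best = p                      # still acceptable at this level
        else:
            break                           # first rejected level; stop descending
    return p_best

# hypothetical illustration (pressure in hPa, errors/thresholds in K)
p   = np.array([100, 200, 300, 500, 700, 850, 1000])
err = np.array([0.6, 0.7, 0.8, 1.1, 1.4, 1.9, 2.5])
thr = np.array([1.0, 1.0, 1.0, 1.2, 1.5, 1.7, 1.7])
print(find_p_best(p, err, thr))             # -> 700
```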

  8. The effect of misclassification errors on case mix measurement.

    PubMed

    Sutherland, Jason M; Botz, Chas K

    2006-12-01

    Case mix systems have been implemented for hospital reimbursement and performance measurement across Europe and North America. Case mix categorizes patients into discrete groups based on clinical information obtained from patient charts in an attempt to identify clinical or cost differences among these groups. The diagnosis related group (DRG) case mix system is the most common methodology, with variants adopted in many countries. External validation studies of coding quality have confirmed that widespread variability exists between originally recorded diagnoses and re-abstracted clinical information. DRG assignment errors in hospitals that share patient-level cost data for the purpose of establishing cost weights affect cost weight accuracy. The purpose of this study is to estimate bias in cost weights due to measurement error of reported clinical information. DRG assignment error rates are simulated based on recent clinical re-abstraction study results. Our simulation study estimates that 47% of cost weights representing the least severe cases are overweighted by 10%, while 32% of cost weights representing the most severe cases are underweighted by 10%. Applying the simulated weights to a cross-section of hospitals, we find that teaching hospitals tend to be underweighted. Since inaccurate cost weights challenge the ability of case mix systems to accurately reflect patient mix and may lead to distortions in hospital funding, bias in hospital case mix measurement highlights the role that clinical data quality plays in hospital funding in countries that use DRG-type case mix systems. The quality of clinical information from hospitals that contribute financial data for establishing cost weights should therefore be carefully assessed.

  9. The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.

    PubMed

    Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik

    2014-11-11

    Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.

  10. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error.

    PubMed

    Carroll, Raymond J; Delaigle, Aurore; Hall, Peter

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  11. Estimation of attitude sensor timetag biases

    NASA Technical Reports Server (NTRS)

    Sedlak, J.

    1995-01-01

    This paper presents an extended Kalman filter for estimating attitude sensor timing errors. Spacecraft attitude is determined by finding the mean rotation from a set of reference vectors in inertial space to the corresponding observed vectors in the body frame. Any timing errors in the observations can lead to attitude errors if either the spacecraft is rotating or the reference vectors themselves vary with time. The state vector here consists of the attitude quaternion, timetag biases, and, optionally, gyro drift rate biases. The filter models the timetags as random walk processes: their expectation values propagate as constants and white noise contributes to their covariance. Thus, this filter is applicable to cases where the true timing errors are constant or slowly varying. The observability of the state vector is studied first through an examination of the algebraic observability condition and then through several examples with simulated star tracker timing errors. The examples use both simulated and actual flight data from the Extreme Ultraviolet Explorer (EUVE). The flight data come from times when EUVE had a constant rotation rate, while the simulated data feature large angle attitude maneuvers. The tests include cases with timetag errors on one or two sensors, both constant and time-varying, and with and without gyro bias errors. Due to EUVE's sensor geometry, the observability of the state vector is severely limited when the spacecraft rotation rate is constant. In the absence of attitude maneuvers, the state elements are highly correlated, and the state estimate is unreliable. The estimates are particularly sensitive to filter mistuning in this case. The EUVE geometry, though, is a degenerate case having coplanar sensors and rotation vector. Observability is much improved and the filter performs well when the rate is either varying or noncoplanar with the sensors, as during a slew. Even with bad geometry and constant rates, if gyro biases are independently known, the timetag error for a single sensor can be accurately estimated as long as its boresight is not too close to the spacecraft rotation axis.

  12. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration

    PubMed Central

    Doss, Hani; Tan, Aixin

    2017-01-01

    In the classical biased sampling problem, we have k densities π1(·), …, πk(·), each known up to a normalizing constant, i.e. for l = 1, …, k, πl(·) = νl(·)/ml, where νl(·) is a known function and ml is an unknown constant. For each l, we have an iid sample from πl, and the problem is to estimate the ratios ml/ms for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the πl's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case. PMID:28706463
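
    The basic identity underlying such estimates in the iid case is E_π2[ν1(X)/ν2(X)] = m1/m2, an importance-sampling average of the ratio of unnormalized densities. The sketch below illustrates only that identity with a naive iid standard error; the paper's estimator for Markov chain samples and its regeneration-based standard errors are considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(3)

# two unnormalized densities with known normalizing constants (for checking)
nu1 = lambda x: np.exp(-0.5 * x**2)   # m1 = sqrt(2*pi)  (standard normal kernel)
nu2 = lambda x: np.exp(-np.abs(x))    # m2 = 2           (Laplace(0,1) kernel)

# iid draws from pi_2 = Laplace(0, 1)
x = rng.laplace(loc=0.0, scale=1.0, size=200_000)

# importance-sampling identity: E_{pi2}[nu1(X)/nu2(X)] = m1/m2
w = nu1(x) / nu2(x)
ratio_hat = w.mean()
se_iid = w.std(ddof=1) / np.sqrt(len(w))   # naive iid standard error

print(ratio_hat, se_iid)                   # true ratio = sqrt(2*pi)/2 ≈ 1.2533
```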

  13. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration.

    PubMed

    Doss, Hani; Tan, Aixin

    2014-09-01

    In the classical biased sampling problem, we have k densities π1(·), …, πk(·), each known up to a normalizing constant, i.e. for l = 1, …, k, πl(·) = νl(·)/ml, where νl(·) is a known function and ml is an unknown constant. For each l, we have an iid sample from πl, and the problem is to estimate the ratios ml/ms for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the πl's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case.

  14. Rheumatoid arthritis prevalence in Quebec.

    PubMed

    Bernatsky, Sasha; Dekis, Alaa; Hudson, Marie; Pineau, Christian A; Boire, Gilles; Fortin, Paul R; Bessette, Louis; Jean, Sonia; Chetaille, Ann L; Belisle, Patrick; Bergeron, Louise; Feldman, Debbie Ehrmann; Joseph, Lawrence

    2014-12-19

    The objective was to estimate rheumatoid arthritis (RA) prevalence in Quebec using administrative health data, comparing across regions. Cases of RA were ascertained from physician billing and hospitalization data, 1992-2008. We used three case definitions: 1) ≥ 2 billing diagnoses, submitted by any physician, ≥ 2 months apart, but within 2 years; 2) ≥ 1 diagnosis, by a rheumatologist; 3) ≥ 1 hospitalization diagnosis (all based on ICD-9 code 714, and ICD-10 code M05). We combined data across these three case definitions, using Bayesian hierarchical latent class models to estimate RA prevalence, adjusting for the imperfect sensitivity and specificity of the data. We compared urban versus rural regions. Using our case definitions and no adjustment for error, we defined 75,760 cases for an overall RA prevalence of 9.9 per 1,000 residents. After adjusting for the imperfect sensitivity and specificity of our case definition algorithms, we estimated Quebec RA prevalence at 5.6 per 1,000 females and 4.1 per 1,000 males. The adjusted RA prevalence estimates for older females were the highest for any demographic group (9.9 cases per 1,000), and were similar in rural and urban regions. In younger males and females, and in older males, RA prevalence estimates were lower in rural versus urban areas. Without adjustment for error inherent in administrative databases, RA prevalence in Quebec was approximately 1%, while adjusted estimates are approximately half that. The lower prevalence in rural areas, seen for most demographic groups, may suggest either true regional variations in RA risk, or under-ascertainment of cases in rural Quebec.
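
    The paper adjusts prevalence with Bayesian hierarchical latent class models; as a much simpler illustration of the same misclassification-correction idea, the sketch below applies the classical Rogan-Gladen formula to a hypothetical apparent prevalence, with assumed (not estimated) sensitivity and specificity for the case-finding algorithm.

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """Classical correction of an apparent prevalence for imperfect
    sensitivity/specificity of the case-finding algorithm."""
    adj = (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(adj, 0.0), 1.0)   # clamp to [0, 1]

# hypothetical values, loosely in the range suggested by the abstract
apparent = 0.0099            # ~9.9 cases per 1,000 before adjustment
print(rogan_gladen(apparent, sensitivity=0.80, specificity=0.995))  # ~6 per 1,000
```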

  15. Estimating error rates for firearm evidence identifications in forensic science.

    PubMed

    Song, John; Vorburger, Theodore V; Chu, Wei; Yen, James; Soons, Johannes A; Ott, Daniel B; Zhang, Nien Fan

    2018-03-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. Published by Elsevier B.V.
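
    A rough sketch of how a CMC count might be computed from per-cell registration results, assuming each cell pair yields a correlation score plus a registration rotation and offsets, and a cell is declared a CMC when it passes both the similarity threshold and congruency with the consensus registration; the thresholds and data below are hypothetical, not the published parameters.

```python
import numpy as np

def count_cmc(ccf_max, theta, dx, dy,
              ccf_thresh=0.5, theta_tol=6.0, offset_tol=20.0):
    """Count congruent matching cells (CMCs): cells whose pairwise correlation
    exceeds a similarity threshold AND whose registration (rotation, offsets)
    agrees with the consensus pattern within tolerances. Thresholds hypothetical."""
    similar = ccf_max >= ccf_thresh
    # consensus registration estimated from the similar cells (median is robust)
    t0 = np.median(theta[similar])
    x0, y0 = np.median(dx[similar]), np.median(dy[similar])
    congruent = (np.abs(theta - t0) <= theta_tol) & \
                (np.abs(dx - x0) <= offset_tol) & (np.abs(dy - y0) <= offset_tol)
    return int(np.sum(similar & congruent))

# hypothetical per-cell registration results for one image pair (6x6 grid -> 36 cells)
rng = np.random.default_rng(10)
ccf = rng.uniform(0.2, 0.9, 36)
theta = rng.normal(0.0, 3.0, 36)
dx, dy = rng.normal(0.0, 8.0, 36), rng.normal(0.0, 8.0, 36)
print(count_cmc(ccf, theta, dx, dy), "CMCs; a declared match requires enough of them")
```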

  16. Accuracy of patient specific organ-dose estimates obtained using an automated image segmentation algorithm

    NASA Astrophysics Data System (ADS)

    Gilat-Schmidt, Taly; Wang, Adam; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-03-01

    The overall goal of this work is to develop a rapid, accurate and fully automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using a deterministic Boltzmann Transport Equation solver and automated CT segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. The investigated algorithm uses a combination of feature-based and atlas-based methods. A multiatlas approach was also investigated. We hypothesize that the auto-segmentation algorithm is sufficiently accurate to provide organ dose estimates since random errors at the organ boundaries will average out when computing the total organ dose. To test this hypothesis, twenty head-neck CT scans were expertly segmented into nine regions. A leave-one-out validation study was performed, where every case was automatically segmented with each of the remaining cases used as the expert atlas, resulting in nineteen automated segmentations for each of the twenty datasets. The segmented regions were applied to gold-standard Monte Carlo dose maps to estimate mean and peak organ doses. The results demonstrated that the fully automated segmentation algorithm estimated the mean organ dose to within 10% of the expert segmentation for regions other than the spinal canal, with median error for each organ region below 2%. In the spinal canal region, the median error was 7% across all data sets and atlases, with a maximum error of 20%. The error in peak organ dose was below 10% for all regions, with a median error below 4% for all organ regions. The multiple-case atlas reduced the variation in the dose estimates and additional improvements may be possible with more robust multi-atlas approaches. Overall, the results support potential feasibility of an automated segmentation algorithm to provide accurate organ dose estimates.
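
    The evaluation step described above reduces to masking a voxel dose map with a segmentation and comparing summary doses; a minimal numpy sketch (with synthetic data and hypothetical masks) is given below.

```python
import numpy as np

def organ_dose(dose_map, mask):
    """Mean and peak dose within a binary organ mask on a voxel dose map."""
    voxels = dose_map[mask]
    return voxels.mean(), voxels.max()

def percent_error(auto, expert):
    return 100.0 * (auto - expert) / expert

# hypothetical 3-D dose map plus expert and automated masks for one organ region
rng = np.random.default_rng(4)
dose_map = rng.gamma(shape=2.0, scale=5.0, size=(64, 64, 32))   # synthetic dose values
expert_mask = np.zeros(dose_map.shape, dtype=bool)
expert_mask[20:40, 20:40, 10:20] = True
auto_mask = np.zeros_like(expert_mask)
auto_mask[21:41, 19:39, 10:20] = True            # slightly shifted segmentation

mean_e, peak_e = organ_dose(dose_map, expert_mask)
mean_a, peak_a = organ_dose(dose_map, auto_mask)
print(f"mean-dose error: {percent_error(mean_a, mean_e):.1f}%")
print(f"peak-dose error: {percent_error(peak_a, peak_e):.1f}%")
```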

  17. Design considerations for case series models with exposure onset measurement error.

    PubMed

    Mohammed, Sandra M; Dalrymple, Lorien S; Sentürk, Damla; Nguyen, Danh V

    2013-02-28

    The case series model allows for estimation of the relative incidence of events, such as cardiovascular events, within a pre-specified time window after an exposure, such as an infection. The method requires only cases (individuals with events) and controls for all fixed/time-invariant confounders. The measurement error case series model extends the original case series model to handle imperfect data, where the timing of an infection (exposure) is not known precisely. In this work, we propose a method for power/sample size determination for the measurement error case series model. Extensive simulation studies are used to assess the accuracy of the proposed sample size formulas. We also examine the magnitude of the relative loss of power due to exposure onset measurement error, compared with the ideal situation where the time of exposure is measured precisely. To facilitate the design of case series studies, we provide publicly available web-based tools for determining power/sample size for both the measurement error case series model as well as the standard case series model. Copyright © 2012 John Wiley & Sons, Ltd.

  18. Estimating error rates for firearm evidence identifications in forensic science

    PubMed Central

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680

  19. Impact of random and systematic recall errors and selection bias in case-control studies on mobile phone use and brain tumors in adolescents (CEFALO study).

    PubMed

    Aydin, Denis; Feychting, Maria; Schüz, Joachim; Andersen, Tina Veje; Poulsen, Aslak Harbo; Prochazka, Michaela; Klaeboe, Lars; Kuehni, Claudia E; Tynes, Tore; Röösli, Martin

    2011-07-01

    Whether the use of mobile phones is a risk factor for brain tumors in adolescents is currently being studied. Case-control studies investigating this possible relationship are prone to recall error and selection bias. We assessed the potential impact of random and systematic recall error and selection bias on odds ratios (ORs) by performing simulations based on real data from an ongoing case-control study of mobile phones and brain tumor risk in children and adolescents (CEFALO study). Simulations were conducted for two mobile phone exposure categories: regular and heavy use. Our choice of levels of recall error was guided by a validation study that compared objective network operator data with the self-reported amount of mobile phone use in CEFALO. In our validation study, cases overestimated their number of calls by 9% on average and controls by 34%. Cases also overestimated their duration of calls by 52% on average and controls by 163%. The participation rates in CEFALO were 83% for cases and 71% for controls. In a variety of scenarios, the combined impact of recall error and selection bias on the estimated ORs was complex. These simulations are useful for the interpretation of previous case-control studies on brain tumor and mobile phone use in adults as well as for the interpretation of future studies on adolescents. Copyright © 2011 Wiley-Liss, Inc.
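
    A schematic of how such a recall-error simulation can work: true exposure is drawn identically for cases and controls (true OR near 1), reported exposure is inflated by the differential factors quoted above, and the odds ratio is recomputed from the distorted heavy-use classification. The cutoff and sample sizes are hypothetical, and selection bias is ignored in this sketch.

```python
import numpy as np

rng = np.random.default_rng(5)

def odds_ratio(exposed_cases, cases, exposed_controls, controls):
    a, b = exposed_cases, cases - exposed_cases
    c, d = exposed_controls, controls - exposed_controls
    return (a * d) / (b * c)

n_cases, n_controls = 350, 650
# true weekly call duration (minutes), same distribution in both groups -> true OR ≈ 1
true_cases = rng.lognormal(mean=3.0, sigma=1.0, size=n_cases)
true_controls = rng.lognormal(mean=3.0, sigma=1.0, size=n_controls)

# differential recall: cases over-report duration less than controls (factors from the abstract)
reported_cases = true_cases * 1.52
reported_controls = true_controls * 2.63

cutoff = 60.0   # hypothetical threshold defining "heavy use"
or_true = odds_ratio((true_cases > cutoff).sum(), n_cases,
                     (true_controls > cutoff).sum(), n_controls)
or_reported = odds_ratio((reported_cases > cutoff).sum(), n_cases,
                         (reported_controls > cutoff).sum(), n_controls)
print(f"OR with true exposure:     {or_true:.2f}")
print(f"OR with recalled exposure: {or_reported:.2f}")   # biased away from 1
```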

  20. Real-time hydraulic interval state estimation for water transport networks: a case study

    NASA Astrophysics Data System (ADS)

    Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.

    2018-03-01

    Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.

  1. Attitude determination using vector observations: A fast optimal matrix algorithm

    NASA Technical Reports Server (NTRS)

    Markley, F. Landis

    1993-01-01

    The attitude matrix minimizing Wahba's loss function is computed directly by a method that is competitive with the fastest known algorithm for finding this optimal estimate. The method also provides an estimate of the attitude error covariance matrix. Analysis of the special case of two vector observations identifies those cases for which the TRIAD or algebraic method minimizes Wahba's loss function.
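
    The paper's fast direct-matrix algorithm and its covariance estimate are not reproduced here; for orientation, the sketch below shows the standard SVD solution of Wahba's problem, the kind of optimal estimate against which such fast methods are benchmarked.

```python
import numpy as np

def wahba_svd(body_vectors, ref_vectors, weights):
    """Attitude matrix A minimizing Wahba's loss
    L(A) = 1/2 * sum_i w_i * ||b_i - A r_i||^2, via the SVD method."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vectors, ref_vectors))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)       # enforce a proper rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# two-observation example: rotate known reference vectors and recover the rotation
rng = np.random.default_rng(6)
angle = np.deg2rad(30.0)
A_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
refs = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
bodies = [A_true @ r + 1e-3 * rng.normal(size=3) for r in refs]
A_est = wahba_svd(bodies, refs, weights=[1.0, 1.0])
print(np.round(A_est @ A_true.T, 4))   # ≈ identity
```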

  2. A pharmacometric case study regarding the sensitivity of structural model parameter estimation to error in patient reported dosing times.

    PubMed

    Knights, Jonathan; Rohatagi, Shashank

    2015-12-01

    Although there is a body of literature focused on minimizing the effect of dosing inaccuracies on pharmacokinetic (PK) parameter estimation, most of the work centers on missing doses. No attempt has been made to specifically characterize the effect of error in reported dosing times. Additionally, existing work has largely dealt with cases in which the compound of interest is dosed at an interval no less than its terminal half-life. This work provides a case study investigating how error in patient reported dosing times might affect the accuracy of structural model parameter estimation under sparse sampling conditions when the dosing interval is less than the terminal half-life of the compound, and the underlying kinetics are monoexponential. Additional effects due to noncompliance with dosing events are not explored and it is assumed that the structural model and reasonable initial estimates of the model parameters are known. Under the conditions of our simulations, with structural model CV % ranging from ~20 to 60 %, parameter estimation inaccuracy derived from error in reported dosing times was largely controlled around 10 % on average. Given that no observed dosing was included in the design and sparse sampling was utilized, we believe these error results represent a practical ceiling given the variability and parameter estimates for the one-compartment model. The findings suggest additional investigations may be of interest and are noteworthy given the inability of current PK software platforms to accommodate error in dosing times.

  3. The multicategory case of the sequential Bayesian pixel selection and estimation procedure

    NASA Technical Reports Server (NTRS)

    Pore, M. D.; Dennis, T. B. (Principal Investigator)

    1980-01-01

    A Bayesian technique for stratified proportion estimation and a sampling procedure based on minimizing the mean squared error of this estimator were developed and tested on LANDSAT multispectral scanner data, using the beta density function to model the prior distribution in the two-class case. An extension of this procedure to the k-class case is considered. A generalization of the beta function is shown to be a density function for the general case, which allows the procedure to be extended.
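
    The two-class building block is ordinary Beta-prior proportion estimation, where the posterior mean is the Bayes estimate under squared-error loss (the criterion mentioned above); a minimal sketch with hypothetical counts follows. The report's k-class extension replaces the beta prior with its multiclass generalization.

```python
def beta_posterior_mean(successes, trials, alpha=2.0, beta=2.0):
    """Posterior mean of a class proportion under a Beta(alpha, beta) prior;
    this is the Bayes estimate under squared-error loss."""
    return (alpha + successes) / (alpha + beta + trials)

# hypothetical labelled sample of pixels: 37 of 120 belong to the class of interest
print(beta_posterior_mean(37, 120))          # ≈ 0.315 (shrunk toward the prior mean 0.5)
print(37 / 120)                              # ≈ 0.308 (raw sample proportion)
```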

  4. Recall bias in the assessment of exposure to mobile phones.

    PubMed

    Vrijheid, Martine; Armstrong, Bruce K; Bédard, Daniel; Brown, Julianne; Deltour, Isabelle; Iavarone, Ivano; Krewski, Daniel; Lagorio, Susanna; Moore, Stephen; Richardson, Lesley; Giles, Graham G; McBride, Mary; Parent, Marie-Elise; Siemiatycki, Jack; Cardis, Elisabeth

    2009-05-01

    Most studies of mobile phone use are case-control studies that rely on participants' reports of past phone use for their exposure assessment. Differential errors in recalled phone use are a major concern in such studies. INTERPHONE, a multinational case-control study of brain tumour risk and mobile phone use, included validation studies to quantify such errors and evaluate the potential for recall bias. Mobile phone records of 212 cases and 296 controls were collected from network operators in three INTERPHONE countries over an average of 2 years, and compared with mobile phone use reported at interview. The ratio of reported to recorded phone use was analysed as measure of agreement. Mean ratios were virtually the same for cases and controls: both underestimated number of calls by a factor of 0.81 and overestimated call duration by a factor of 1.4. For cases, but not controls, ratios increased with increasing time before the interview; however, these trends were based on few subjects with long-term data. Ratios increased by level of use. Random recall errors were large. In conclusion, there was little evidence for differential recall errors overall or in recent time periods. However, apparent overestimation by cases in more distant time periods could cause positive bias in estimates of disease risk associated with mobile phone use.

  5. A fully redundant double difference algorithm for obtaining minimum variance estimates from GPS observations

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.

    1986-01-01

    In double differencing a regression system obtained from concurrent Global Positioning System (GPS) observation sequences, one either undersamples the system to avoid introducing colored measurement statistics, or one fully samples the system incurring the resulting non-diagonal covariance matrix for the differenced measurement errors. A suboptimal estimation result will be obtained in the undersampling case and will also be obtained in the fully sampled case unless the color noise statistics are taken into account. The latter approach requires a least squares weighting matrix derived from inversion of a non-diagonal covariance matrix for the differenced measurement errors instead of inversion of the customary diagonal one associated with white noise processes. Presented is the so-called fully redundant double differencing algorithm for generating a weighted double differenced regression system that yields equivalent estimation results, but features for certain cases a diagonal weighting matrix even though the differenced measurement error statistics are highly colored.
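
    The weighting issue described above is generalized least squares in miniature: with correlated (colored) differenced-measurement errors, the minimum-variance estimate uses the inverse of the full error covariance as the weight matrix. The sketch below is a generic GLS example, not the paper's GPS double-differencing formulation.

```python
import numpy as np

def gls(A, y, C):
    """Minimum-variance linear estimate with correlated measurement errors:
    x_hat = (A^T C^-1 A)^-1 A^T C^-1 y, with C the error covariance."""
    W = np.linalg.inv(C)                     # full, generally non-diagonal weight
    N = A.T @ W @ A
    x_hat = np.linalg.solve(N, A.T @ W @ y)
    cov_x = np.linalg.inv(N)                 # formal covariance of the estimate
    return x_hat, cov_x

# small synthetic example with correlated (colored) differenced errors
rng = np.random.default_rng(7)
A = rng.normal(size=(20, 3))
x_true = np.array([1.0, -2.0, 0.5])
C = 0.1 * np.eye(20) + 0.05 * np.ones((20, 20))   # correlated error covariance
L = np.linalg.cholesky(C)
y = A @ x_true + L @ rng.normal(size=20)
x_hat, cov_x = gls(A, y, C)
print(x_hat)
```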

  6. Maximum likelihood estimation for Cox's regression model under nested case-control sampling.

    PubMed

    Scheike, Thomas H; Juul, Anders

    2004-04-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used to obtain information additional to the relative risk estimates of covariates.

  7. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless as to the source of the uncertainty. Also, in its most straight forward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problems, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range only measurements. 
Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
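
    The paper's literal reinterpretation of the weighted least squares equations is not reproduced exactly here; as a related and simpler construction in the same spirit, the sketch below rescales the formal covariance of a batch WLS fit by the average weighted residual variance, so that the reported uncertainty reflects the actual residuals rather than only the assumed measurement noise.

```python
import numpy as np

def wls_with_empirical_covariance(A, y, w):
    """Weighted least squares with the formal covariance rescaled by the
    average weighted residual variance (a reduced chi-square factor)."""
    W = np.diag(w)
    N = A.T @ W @ A
    x_hat = np.linalg.solve(N, A.T @ W @ y)
    r = y - A @ x_hat
    m, n = A.shape
    s2 = (r @ W @ r) / (m - n)          # average weighted residual variance
    formal_cov = np.linalg.inv(N)       # reflects only the assumed noise model
    empirical_cov = s2 * formal_cov     # inflated/deflated by the actual residuals
    return x_hat, formal_cov, empirical_cov

# synthetic example where the assumed noise sigma is optimistic by a factor of 3
rng = np.random.default_rng(8)
A = np.column_stack([np.ones(50), np.linspace(0, 1, 50)])
y = A @ np.array([2.0, -1.0]) + rng.normal(scale=0.3, size=50)
w = np.full(50, 1 / 0.1**2)             # weights assume sigma = 0.1 (too small)
x_hat, P_formal, P_emp = wls_with_empirical_covariance(A, y, w)
print(np.sqrt(np.diag(P_formal)))       # understates the uncertainty
print(np.sqrt(np.diag(P_emp)))          # closer to the actual scatter
```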

  8. Recent Theoretical Advances in Analysis of AIRS/AMSU Sounding Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2007-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. This paper describes the AIRS Science Team Version 5.0 retrieval algorithm. Starting in early 2007, the Goddard DAAC will use this algorithm to analyze near real time AIRS/AMSU observations. These products are then made available to the scientific community for research purposes. The products include twice daily measurements of the Earth's three dimensional global temperature, water vapor, and ozone distribution as well as cloud cover. In addition, accurate twice daily measurements of the Earth's land and ocean temperatures are derived and reported. Scientists use this important set of observations for two major applications. They provide important information for climate studies of global and regional variability and trends of different aspects of the Earth's atmosphere. They also provide information for researchers to improve the skill of weather forecasting. A very important new product of the AIRS Version 5 algorithm is accurate case-by-case error estimates of the retrieved products. This heightens their utility for use in both weather and climate applications. These error estimates are also used directly for quality control of the retrieved products. Version 5 also allows for accurate quality controlled AIRS only retrievals, called "Version 5 AO retrievals", which can be used as a backup methodology if AMSU fails. Examples of the accuracy of error estimates and quality controlled retrieval products of the AIRS/AMSU Version 5 and Version 5 AO algorithms are given, and shown to be significantly better than the previously used Version 4 algorithm. Assimilation of Version 5 retrievals is also shown to significantly improve forecast skill, especially when the case-by-case error estimates are utilized in the data assimilation process.

  9. Phase correction system for automatic focusing of synthetic aperture radar

    DOEpatents

    Eichel, Paul H.; Ghiglia, Dennis C.; Jakowatz, Jr., Charles V.

    1990-01-01

    A phase gradient autofocus system for use in synthetic aperture imaging accurately compensates for arbitrary phase errors in each imaged frame by locating highlighted areas and determining the phase disturbance or image spread associated with each of these highlight areas. An estimate of the image spread for each highlighted area in a line in the case of one dimensional processing or in a sector, in the case of two-dimensional processing, is determined. The phase error is determined using phase gradient processing. The phase error is then removed from the uncorrected image and the process is iteratively performed to substantially eliminate phase errors which can degrade the image.

  10. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.

  11. The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence

    NASA Astrophysics Data System (ADS)

    Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo

    2018-05-01

    The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the question that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with the approach adopted by us, that we call the relaxed filtering method, and that takes into account the occurrence of the Hurst phenomenon, are larger than both the filtering method and the classical Lumley-Panofsky estimates. Finally, we found that there is no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
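
    For concreteness, a compact version of the classical rescaled-range estimator (the H_R of the abstract) is sketched below; the filtering-based estimator H_p is not reproduced. The window sizes and test signal are arbitrary choices.

```python
import numpy as np

def hurst_rescaled_range(x, min_window=16):
    """Estimate the Hurst coefficient H from the slope of log(R/S) versus log(n)."""
    x = np.asarray(x, dtype=float)
    ns, rs = [], []
    n = min_window
    while n <= len(x) // 2:
        m = len(x) // n
        vals = []
        for chunk in x[: m * n].reshape(m, n):
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()              # range of cumulative deviations
            s = chunk.std(ddof=1)                  # standard deviation of the chunk
            if s > 0:
                vals.append(r / s)
        ns.append(n)
        rs.append(np.mean(vals))
        n *= 2
    slope, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return slope

rng = np.random.default_rng(9)
white = rng.normal(size=2**14)
print(hurst_rescaled_range(white))      # ≈ 0.5 for uncorrelated noise (small-sample bias aside)
```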

  12. Improved estimation of heavy rainfall by weather radar after reflectivity correction and accounting for raindrop size distribution variability

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2015-04-01

    Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error and only 30% of the precipitation observed by rain gauges was estimated. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in The Netherlands. In general weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z-R) and radar reflectivity-specific attenuation (Z-k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. Using the disdrometer information, the best results were obtained in case no differentiation between precipitation type (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates further improve and closely resemble observations in case one differentiates between precipitation type. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable if single disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far away from the main rain event, which was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need to be cautious using locally obtained disdrometer measurements.
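
    The Z-R step referred to above is a power-law inversion; as a minimal illustration, the sketch below converts reflectivity in dBZ to rain rate using the generic Marshall-Palmer coefficients rather than the DSD-derived parameters used in the study.

```python
import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert the power law Z = a * R**b (Z in mm^6 m^-3, R in mm/h).
    a, b here are the generic Marshall-Palmer values, not the DSD-fitted ones."""
    z_linear = 10.0 ** (np.asarray(dbz) / 10.0)
    return (z_linear / a) ** (1.0 / b)

print(rain_rate_from_dbz([20, 35, 50]))   # light, moderate, heavy rain (mm/h)
```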

  13. The impact of reflectivity correction and accounting for raindrop size distribution variability to improve precipitation estimation by weather radar for an extreme low-land mesoscale convective system

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2014-11-01

    Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands, locally giving rise to rainfall accumulations exceeding 150 mm. Correctly measuring the amount of precipitation during such an extreme event is important, both from a hydrological and meteorological perspective. Unfortunately, the operational weather radar measurements were affected by multiple sources of error and only 30% of the precipitation observed by rain gauges was estimated. Such an underestimation of heavy rainfall, albeit generally less strong than in this extreme case, is typical for operational weather radar in The Netherlands. In general weather radar measurement errors can be subdivided into two groups: (1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, radar calibration, vertical profile of reflectivity) and (2) errors resulting from variations in the raindrop size distribution that in turn result in incorrect rainfall intensity and attenuation estimates from observed reflectivity measurements. A stepwise procedure to correct for the first group of errors leads to large improvements in the quality of the estimated precipitation, increasing the radar rainfall accumulations to about 65% of those observed by gauges. To correct for the second group of errors, a coherent method is presented linking the parameters of the radar reflectivity-rain rate (Z - R) and radar reflectivity-specific attenuation (Z - k) relationships to the normalized drop size distribution (DSD). Two different procedures were applied. First, normalized DSD parameters for the whole event and for each precipitation type separately (convective, stratiform and undefined) were obtained using local disdrometer observations. Second, 10,000 randomly generated plausible normalized drop size distributions were used for rainfall estimation, to evaluate whether this Monte Carlo method would improve the quality of weather radar rainfall products. Using the disdrometer information, the best results were obtained in case no differentiation between precipitation type (convective, stratiform and undefined) was made, increasing the event accumulations to more than 80% of those observed by gauges. For the randomly optimized procedure, radar precipitation estimates further improve and closely resemble observations in case one differentiates between precipitation type. However, the optimal parameter sets are very different from those derived from disdrometer observations. It is therefore questionable if single disdrometer observations are suitable for large-scale quantitative precipitation estimation, especially if the disdrometer is located relatively far away from the main rain event, which was the case in this study. In conclusion, this study shows the benefit of applying detailed error correction methods to improve the quality of the weather radar product, but also confirms the need to be cautious using locally obtained disdrometer measurements.

  14. 3-D direct current resistivity anisotropic modelling by goal-oriented adaptive finite element methods

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Qiu, Lewen; Tang, Jingtian; Wu, Xiaoping; Xiao, Xiao; Zhou, Zilong

    2018-01-01

    Although accurate numerical solvers for 3-D direct current (DC) isotropic resistivity models are currently available even for complicated models with topography, reliable numerical solvers for the anisotropic case remain an open question. This study aims to develop a novel and optimal numerical solver for accurately calculating the DC potentials for complicated models with arbitrary anisotropic conductivity structures in the Earth. First, a secondary potential boundary value problem is derived by considering the topography and the anisotropic conductivity. Then, two a posteriori error estimators, one using the gradient-recovery technique and one measuring the discontinuity of the normal component of current density, are developed for the anisotropic cases. Combining the goal-oriented and non-goal-oriented mesh refinements with these two error estimators, four different solving strategies are developed for complicated DC anisotropic forward modelling problems. A synthetic anisotropic two-layer model with analytic solutions verifies the accuracy of our algorithms. A half-space model with a buried anisotropic cube and a mountain-valley model are adopted to test the convergence rates of these four solving strategies. We found that the error estimator based on the discontinuity of current density shows better performance than the gradient-recovery based a posteriori error estimator for anisotropic models with conductivity contrasts. Both error estimators working together with goal-oriented concepts can offer optimal mesh density distributions and highly accurate solutions.
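
    The flavour of an error estimator based on the discontinuity of the normal current density can be shown in one dimension. The Python sketch below is a hypothetical 1-D analogue, not the authors' 3-D goal-oriented solver: it solves -(sigma u')' = f with linear finite elements, uses the jump of sigma*u' across element interfaces as an element-wise indicator, and bisects the worst elements.

        import numpy as np

        def solve_fem_1d(x, sigma, f=1.0):
            """Linear FEM for -(sigma u')' = f on (0,1), u(0)=u(1)=0.
            x: node coordinates (sorted); sigma: conductivity per element."""
            n, h = len(x), np.diff(x)
            K, b = np.zeros((n, n)), np.zeros(n)
            for e in range(n - 1):
                ke = sigma[e] / h[e] * np.array([[1.0, -1.0], [-1.0, 1.0]])
                K[e:e+2, e:e+2] += ke
                b[e:e+2] += f * h[e] / 2.0
            K[0, :] = 0.0; K[0, 0] = 1.0; b[0] = 0.0      # Dirichlet BCs
            K[-1, :] = 0.0; K[-1, -1] = 1.0; b[-1] = 0.0
            return np.linalg.solve(K, b)

        def flux_jump_indicator(x, u, sigma):
            """Element indicator: jump of the current density sigma*u' at the
            element's interior endpoints (1-D analogue of the
            normal-current-discontinuity estimator)."""
            flux = sigma * np.diff(u) / np.diff(x)        # constant per element
            jumps = np.abs(np.diff(flux))                 # at interior nodes
            eta = np.zeros(len(x) - 1)
            eta[:-1] += 0.5 * jumps
            eta[1:] += 0.5 * jumps
            return eta * np.diff(x)

        # Adaptive loop: refine roughly the 30% worst elements by bisection
        x = np.linspace(0.0, 1.0, 6)
        sigma = np.where(np.arange(len(x) - 1) < 2, 1.0, 10.0)   # conductivity contrast
        for _ in range(4):
            u = solve_fem_1d(x, sigma)
            eta = flux_jump_indicator(x, u, sigma)
            marked = eta >= np.quantile(eta, 0.7)
            mids = 0.5 * (x[:-1] + x[1:])[marked]
            sigma = np.repeat(sigma, np.where(marked, 2, 1))     # children inherit sigma
            x = np.sort(np.concatenate([x, mids]))
        print(len(x), "nodes after refinement")

    Elements next to the conductivity contrast carry the largest flux jumps, so the mesh is refined exactly where the anisotropy (here, a simple contrast) degrades the solution most.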

  15. Automatic Error Analysis Using Intervals

    ERIC Educational Resources Information Center

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
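
    The ERIC record truncates the abstract, but the core idea, propagating measurement uncertainty through a formula with interval arithmetic rather than first-order error propagation, can be sketched in a few lines. The tiny Interval class below is a hand-rolled stand-in for illustration; it is not the INTLAB toolbox mentioned in the abstract.

        from dataclasses import dataclass

        @dataclass
        class Interval:
            lo: float
            hi: float

            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)

            def __sub__(self, other):
                return Interval(self.lo - other.hi, self.hi - other.lo)

            def __mul__(self, other):
                products = [self.lo * other.lo, self.lo * other.hi,
                            self.hi * other.lo, self.hi * other.hi]
                return Interval(min(products), max(products))

            def __truediv__(self, other):
                if other.lo <= 0.0 <= other.hi:
                    raise ZeroDivisionError("divisor interval contains zero")
                return self * Interval(1.0 / other.hi, 1.0 / other.lo)

        # Resistance from a measured voltage and current, each known to a tolerance
        V = Interval(11.8, 12.2)     # volts
        I = Interval(1.95, 2.05)     # amperes
        R = V / I
        print(f"R lies in [{R.lo:.3f}, {R.hi:.3f}] ohm")   # rigorous enclosure of V/I

    The result is a guaranteed enclosure of the true value, which is the property that makes the interval approach attractive when formulas become too complicated for hand-derived error propagation.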

  16. How common are cognitive errors in cases presented at emergency medicine resident morbidity and mortality conferences?

    PubMed

    Chu, David; Xiao, Jane; Shah, Payal; Todd, Brett

    2018-06-20

    Cognitive errors are a major contributor to medical error. Traditionally, medical errors at teaching hospitals are analyzed in morbidity and mortality (M&M) conferences. We aimed to describe the frequency of cognitive errors in relation to the occurrence of diagnostic and other error types, in cases presented at an emergency medicine (EM) resident M&M conference. We conducted a retrospective study of all cases presented at a suburban US EM residency monthly M&M conference from September 2011 to August 2016. Each case was reviewed using the electronic medical record (EMR) and notes from the M&M case by two EM physicians. Each case was categorized by type of primary medical error that occurred as described by Okafor et al. When a diagnostic error occurred, the case was reviewed for contributing cognitive and non-cognitive factors. Finally, when a cognitive error occurred, the case was classified into faulty knowledge, faulty data gathering or faulty synthesis, as described by Graber et al. Disagreements in error type were mediated by a third EM physician. A total of 87 M&M cases were reviewed; the two reviewers agreed on 73 cases, and 14 cases required mediation by a third reviewer. Forty-eight cases involved diagnostic errors, 47 of which were cognitive errors. Of these 47 cases, 38 involved faulty synthesis, 22 involved faulty data gathering and only 11 involved faulty knowledge. Twenty cases contained more than one type of cognitive error. Twenty-nine cases involved both a resident and an attending physician, while 17 cases involved only an attending physician. Twenty-one percent of the resident cases involved all three cognitive errors, while none of the attending cases involved all three. Forty-one percent of the resident cases and only 6% of the attending cases involved faulty knowledge. One hundred percent of the resident cases and 94% of the attending cases involved faulty synthesis. Our review of 87 EM M&M cases revealed that cognitive errors are commonly involved in cases presented, and that these errors are less likely due to deficient knowledge and more likely due to faulty synthesis. M&M conferences may therefore provide an excellent forum to discuss cognitive errors and how to reduce their occurrence.

  17. Reliability and Validity in Hospital Case-Mix Measurement

    PubMed Central

    Pettengill, Julian; Vertrees, James

    1982-01-01

    There is widespread interest in the development of a measure of hospital output. This paper describes the problem of measuring the expected cost of the mix of inpatient cases treated in a hospital (hospital case-mix) and a general approach to its solution. The solution is based on a set of homogeneous groups of patients, defined by a patient classification system, and a set of estimated relative cost weights corresponding to the patient categories. This approach is applied to develop a summary measure of the expected relative costliness of the mix of Medicare patients treated in 5,576 participating hospitals. The Medicare case-mix index is evaluated by estimating a hospital average cost function. This provides a direct test of the hypothesis that the relationship between Medicare case-mix and Medicare cost per case is proportional. The cost function analysis also provides a means of simulating the effects of classification error on our estimate of this relationship. Our results indicate that this general approach to measuring hospital case-mix provides a valid and robust measure of the expected cost of a hospital's case-mix. PMID:10309909
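
    At its core, a summary case-mix measure of this kind is a discharge-weighted average of relative cost weights. The sketch below shows a hypothetical computation; the category names, counts, and weights are invented for illustration and are not the Medicare data used in the paper.

        def case_mix_index(case_counts, cost_weights):
            """Case-mix index = discharge-weighted mean of relative cost weights.

            case_counts : {patient_category: number of cases at this hospital}
            cost_weights: {patient_category: estimated relative cost weight,
                           where 1.0 represents an average-cost case}
            """
            total_cases = sum(case_counts.values())
            weighted = sum(n * cost_weights[cat] for cat, n in case_counts.items())
            return weighted / total_cases

        # Hypothetical hospital: mostly routine cases plus some costly ones
        counts = {"routine": 800, "surgical": 150, "complex": 50}
        weights = {"routine": 0.85, "surgical": 1.40, "complex": 3.10}
        print(round(case_mix_index(counts, weights), 3))   # expected relative cost per case

    Classification error enters exactly here: if cases are placed in the wrong category, the weighted average shifts, which is the effect the cost-function simulation in the paper quantifies.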

  18. Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE

    NASA Astrophysics Data System (ADS)

    Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.

    2015-12-01

    Groundwater change is difficult to monitor over large scales. One of the most successful approaches is the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater variations should demonstrate less variability than the overlying soil moisture state does, as groundwater has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable versus time-constant errors, across-scale versus across-model errors, and error spectral content (across scales and across models). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing for fairer use in management applications and for better integration of GRACE-based measurements with observations from other sources.

  19. Estimation of sensible and latent heat flux from natural sparse vegetation surfaces using surface renewal

    NASA Astrophysics Data System (ADS)

    Zapata, N.; Martínez-Cob, A.

    2001-12-01

    This paper reports a study undertaken to evaluate the feasibility of the surface renewal method to accurately estimate long-term evaporation from the playa and margins of an endorheic salty lagoon (Gallocanta lagoon, Spain) under semiarid conditions. High-frequency temperature readings were taken for two time lags (r) and three measurement heights (z) in order to obtain surface renewal sensible heat flux (HSR) values. These values were compared against eddy covariance sensible heat flux (HEC) values for a calibration period (25-30 July 2000). Error analysis statistics (index of agreement, IA; root mean square error, RMSE; and systematic mean square error, MSEs) showed that the agreement between HSR and HEC improved as measurement height decreased and time lag increased. Calibration factors α were obtained for all analyzed cases. The best results were obtained for the z=0.9 m (r=0.75 s) case, for which α=1.0 was observed. In this case, uncertainty was about 10% in terms of relative error (RE). Latent heat flux values were obtained by solving the energy balance equation for both the surface renewal (LESR) and the eddy covariance (LEEC) methods, using HSR and HEC, respectively, and measurements of net radiation and soil heat flux. For the calibration period, error analysis statistics for LESR were quite similar to those for HSR, although errors were mostly random. LESR uncertainty was less than 9%. Calibration factors were applied to a validation data subset (30 July-4 August 2000) for which meteorological conditions were somewhat different (higher temperatures and wind speed and lower solar and net radiation). Error analysis statistics for both HSR and LESR were quite good for all cases, confirming the adequacy of the calibration factors. Nevertheless, the results obtained for the z=0.9 m (r=0.75 s) case were still the best ones.
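
    The calibration step described above reduces to comparing surface renewal fluxes with the eddy covariance reference and computing agreement statistics. The Python snippet below, with invented flux series standing in for the Gallocanta measurements, computes RMSE, Willmott's index of agreement IA, and a calibration factor alpha obtained as a regression through the origin of HEC on HSR.

        import numpy as np

        def error_stats(h_sr, h_ec):
            """RMSE, index of agreement (Willmott) and calibration factor alpha
            such that alpha * H_SR best matches H_EC (regression through origin)."""
            h_sr, h_ec = np.asarray(h_sr, float), np.asarray(h_ec, float)
            rmse = np.sqrt(np.mean((h_sr - h_ec) ** 2))
            mean_obs = h_ec.mean()
            ia = 1.0 - np.sum((h_sr - h_ec) ** 2) / np.sum(
                (np.abs(h_sr - mean_obs) + np.abs(h_ec - mean_obs)) ** 2)
            alpha = np.sum(h_sr * h_ec) / np.sum(h_sr ** 2)
            return rmse, ia, alpha

        # Hypothetical half-hourly sensible heat fluxes (W m-2)
        h_ec = np.array([20., 80., 150., 210., 180., 90., 30.])
        h_sr = np.array([25., 85., 140., 200., 190., 95., 28.])
        rmse, ia, alpha = error_stats(h_sr, h_ec)
        print(f"RMSE={rmse:.1f} W/m2  IA={ia:.3f}  alpha={alpha:.2f}")

    An alpha close to 1.0, as reported for the z=0.9 m case, means the surface renewal fluxes can be used with essentially no rescaling.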

  20. [Determination of the numbers of monitoring medical institutions necessary for estimating incidence rates in the surveillance of infectious diseases in Japan].

    PubMed

    Hashimoto, S; Murakami, Y; Taniguchi, K; Nagai, M

    1999-12-01

    Our purpose was to determine the number of monitoring stations (medical institutions) necessary for estimating incidence rates in the surveillance system of infectious diseases in Japan. Infectious diseases were selected by the type of monitoring station: 15 diseases in pediatrics stations, influenza in influenza stations, 3 diseases in ophthalmology stations and 5 diseases in stations for sexually transmitted diseases (STD). For each type of monitoring station, five candidate numbers of monitoring stations per health center were considered, including the number determined from presently established standards and the actual number in 1997. It was assumed that monitoring stations were randomly selected among the medical institutions within each health center. For each infectious disease, each candidate number and each type of monitoring station, standard error rates of the estimated numbers of incident cases in the whole country were calculated for 1993-1997 using data from the surveillance of infectious diseases. Among the five candidates, the one satisfying the condition that these standard error rates were lower than the critical values was selected. The critical values were 5% in pediatrics and influenza stations, and 10% in ophthalmology and STD stations. The numbers of monitoring stations in the selected cases were 3,000 for pediatrics stations, 5,000 for influenza stations (including all pediatrics stations), 605 for ophthalmology stations and 900 for STD stations.
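
    The underlying calculation can be mimicked with a small Monte Carlo: sample a given number of institutions at random, scale their reported cases up to the full population of institutions, and examine the relative standard error of that national estimate. The counts below are invented for illustration; they are not the Japanese surveillance data.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical weekly case counts at every medical institution in the country
        n_institutions = 30_000
        true_counts = rng.poisson(lam=2.0, size=n_institutions)
        true_total = true_counts.sum()

        def relative_standard_error(n_stations, n_trials=2000):
            """Relative SE (%) of the estimated national total when n_stations
            institutions are sampled at random and their counts scaled up."""
            estimates = np.empty(n_trials)
            for t in range(n_trials):
                sample = rng.choice(true_counts, size=n_stations, replace=False)
                estimates[t] = sample.mean() * n_institutions
            return 100.0 * estimates.std() / true_total

        for n in (500, 1000, 3000, 5000):
            print(n, "stations ->", round(relative_standard_error(n), 2), "% RSE")

    The required number of stations is then the smallest candidate whose relative standard error falls below the chosen critical value (5% or 10% in the paper).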

  1. Insight From the Statistics of Nothing: Estimating Limits of Change Detection Using Inferred No-Change Areas in DEM Difference Maps and Application to Landslide Hazard Studies

    NASA Astrophysics Data System (ADS)

    Haneberg, W. C.

    2017-12-01

    Remote characterization of new landslides or areas of ongoing movement using differences in high resolution digital elevation models (DEMs) created through time, for example before and after major rains or earthquakes, is an attractive proposition. In the case of large catastrophic landslides, changes may be apparent enough that simple subtraction suffices. In other cases, statistical noise can obscure landslide signatures and place practical limits on detection. In ideal cases on land, GPS surveys of representative areas at the time of DEM creation can quantify the inherent errors. In less-than-ideal terrestrial cases and virtually all submarine cases, it may be impractical or impossible to independently estimate the DEM errors. Examining DEM difference statistics for areas reasonably inferred to have no change, however, can provide insight into the limits of detectability. Data from inferred no-change areas of airborne LiDAR DEM difference maps of the 2014 Oso, Washington landslide and landslide-prone colluvium slopes along the Ohio River valley in northern Kentucky show that DEM difference maps can have non-zero mean and slope-dependent error components consistent with published studies of DEM errors. Statistical thresholds derived from DEM difference error and slope data can help to distinguish DEM differences that are likely real, and which may indicate landsliding, from those that are likely spurious or irrelevant. This presentation describes and compares two different approaches, one based upon a heuristic assumption about the proportion of the study area likely covered by new landslides and another based upon the amount of change necessary to ensure a difference at a specified level of probability.
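
    A minimal version of the second approach, thresholding on the amount of change needed to be significant at a chosen probability given the noise statistics of inferred no-change areas, might look like the following. The DEM difference arrays here are synthetic placeholders, not the Oso or Kentucky data.

        import numpy as np
        from scipy import stats

        def change_mask(dem_diff, no_change_values, prob=0.95):
            """Flag cells whose DEM difference is unlikely to be noise.

            dem_diff        : 2-D array of (later DEM - earlier DEM), metres
            no_change_values: 1-D sample of differences from areas inferred
                              to have no real change (the noise sample)
            prob            : required confidence that a flagged cell changed
            """
            mu = no_change_values.mean()                 # systematic (non-zero mean) error
            sigma = no_change_values.std(ddof=1)
            z = stats.norm.ppf(0.5 + prob / 2.0)         # two-sided threshold
            return np.abs(dem_diff - mu) > z * sigma

        # Synthetic example: 1 m of real lowering in one corner plus noise
        rng = np.random.default_rng(1)
        diff = rng.normal(0.05, 0.15, size=(100, 100))   # 5 cm bias, 15 cm noise
        diff[:20, :20] -= 1.0                            # "landslide" signal
        noise_sample = diff[50:, 50:].ravel()            # inferred no-change area
        mask = change_mask(diff, noise_sample)
        print(mask[:20, :20].mean(), mask[50:, 50:].mean())  # detection vs false-alarm rate

    A slope-dependent version would simply estimate mu and sigma separately per slope class before applying the same test.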

  2. Semiparametric Bayesian analysis of gene-environment interactions with error in measurement of environmental covariates and missing genetic data.

    PubMed

    Lobach, Iryna; Mallick, Bani; Carroll, Raymond J

    2011-01-01

    Case-control studies are widely used to detect gene-environment interactions in the etiology of complex diseases. Many variables that are of interest to biomedical researchers are difficult to measure on an individual level, e.g. nutrient intake, cigarette smoking exposure, long-term toxic exposure. Measurement error causes bias in parameter estimates, thus masking key features of the data and leading to loss of power and spurious or masked associations. We develop a Bayesian methodology for the analysis of case-control studies for the case when measurement error is present in an environmental covariate and the genetic variable has missing data. This approach offers several advantages. It allows prior information to enter the model to make estimation and inference more precise. The environmental covariates measured exactly are modeled completely nonparametrically. Further, information about the probability of disease can be incorporated in the estimation procedure to improve the quality of parameter estimates, which cannot be done in conventional case-control studies. A unique feature of the procedure under investigation is that the analysis is based on a pseudo-likelihood function; therefore, conventional Bayesian techniques may not be technically correct. We propose an approach using Markov chain Monte Carlo sampling as well as a computationally simple method based on an asymptotic posterior distribution. Simulation experiments demonstrated that our method produces parameter estimates that are nearly unbiased even for small sample sizes. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma development.

  3. Queuing Time Prediction Using WiFi Positioning Data in an Indoor Scenario.

    PubMed

    Shu, Hua; Song, Ci; Pei, Tao; Xu, Lianming; Ou, Yang; Zhang, Libin; Li, Tao

    2016-11-22

    Queuing is common in urban public places. Automatically monitoring and predicting queuing time can not only help individuals to reduce their wait time and alleviate anxiety but also help managers to allocate resources more efficiently and enhance their ability to address emergencies. This paper proposes a novel method to estimate and predict queuing time in indoor environments based on WiFi positioning data. First, we use a series of parameters to identify the trajectories that can be used as representatives of queuing time. Next, we divide the day into equal time slices and estimate individuals' average queuing time during specific time slices. Finally, we build a nonstandard autoregressive (NAR) model trained using the previous day's WiFi estimation results and actual queuing time to predict the queuing time in the upcoming time slice. A case study comparing two other time series analysis models shows that the NAR model has better precision. Random topological errors caused by the drift phenomenon of WiFi positioning technology (locations determined by a WiFi positioning system may drift accidentally) and systematic topological errors caused by the positioning system are the main factors that affect the estimation precision. Therefore, we optimize the deployment strategy during the positioning system deployment phase and propose a drift ratio parameter pertaining to the trajectory screening phase to alleviate the impact of topological errors and improve estimates. The WiFi positioning data from an eight-day case study conducted at the T3-C entrance of Beijing Capital International Airport show that the mean absolute estimation error is 147 s, which is approximately 26.92% of the actual queuing time. For predictions using the NAR model, the proportion is approximately 27.49%. The theoretical predictions and the empirical case study indicate that the NAR model is an effective method to estimate and predict queuing time in indoor public areas.
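
    The "nonstandard autoregressive" model is specific to the paper, but the basic mechanics of predicting the next time slice from previous slices can be illustrated with an ordinary least-squares autoregression. The sketch below is a simplification under that assumption, not the authors' NAR formulation, and the queue-time series is invented.

        import numpy as np

        def fit_ar(series, order=3):
            """Fit y_t = c + a1*y_{t-1} + ... + ap*y_{t-p} by least squares."""
            y = np.asarray(series, float)
            rows = [np.r_[1.0, y[t - order:t][::-1]] for t in range(order, len(y))]
            X, target = np.array(rows), y[order:]
            coefs, *_ = np.linalg.lstsq(X, target, rcond=None)
            return coefs

        def predict_next(series, coefs):
            order = len(coefs) - 1
            return float(coefs @ np.r_[1.0, np.asarray(series, float)[-order:][::-1]])

        # Hypothetical average queuing time (seconds) per 30-min slice from WiFi traces
        queue = [310, 340, 420, 520, 610, 590, 480, 390, 360, 400, 470, 560]
        coefs = fit_ar(queue, order=3)
        print(round(predict_next(queue, coefs)), "s predicted for the next slice")

    In the paper's setting, the inputs would be the previous day's WiFi-derived estimates rather than a single day's own history, but the regression machinery is the same.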

  4. Queuing Time Prediction Using WiFi Positioning Data in an Indoor Scenario

    PubMed Central

    Shu, Hua; Song, Ci; Pei, Tao; Xu, Lianming; Ou, Yang; Zhang, Libin; Li, Tao

    2016-01-01

    Queuing is common in urban public places. Automatically monitoring and predicting queuing time can not only help individuals to reduce their wait time and alleviate anxiety but also help managers to allocate resources more efficiently and enhance their ability to address emergencies. This paper proposes a novel method to estimate and predict queuing time in indoor environments based on WiFi positioning data. First, we use a series of parameters to identify the trajectories that can be used as representatives of queuing time. Next, we divide the day into equal time slices and estimate individuals’ average queuing time during specific time slices. Finally, we build a nonstandard autoregressive (NAR) model trained using the previous day’s WiFi estimation results and actual queuing time to predict the queuing time in the upcoming time slice. A case study comparing two other time series analysis models shows that the NAR model has better precision. Random topological errors caused by the drift phenomenon of WiFi positioning technology (locations determined by a WiFi positioning system may drift accidentally) and systematic topological errors caused by the positioning system are the main factors that affect the estimation precision. Therefore, we optimize the deployment strategy during the positioning system deployment phase and propose a drift ratio parameter pertaining to the trajectory screening phase to alleviate the impact of topological errors and improve estimates. The WiFi positioning data from an eight-day case study conducted at the T3-C entrance of Beijing Capital International Airport show that the mean absolute estimation error is 147 s, which is approximately 26.92% of the actual queuing time. For predictions using the NAR model, the proportion is approximately 27.49%. The theoretical predictions and the empirical case study indicate that the NAR model is an effective method to estimate and predict queuing time in indoor public areas. PMID:27879663

  5. Comparison of undulation difference accuracies using gravity anomalies and gravity disturbances. [for ocean geoid

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1980-01-01

    Errors in the outer zone contribution to oceanic undulation differences computed from a finite set of potential coefficients based on satellite measurements of gravity anomalies and gravity disturbances are analyzed. Equations are derived for the truncation errors resulting from the lack of high-degree coefficients and the commission errors arising from errors in the available lower-degree coefficients, and it is assumed that the inner zone (spherical cap) is sufficiently covered by surface gravity measurements in conjunction with altimetry or by gravity anomaly data. Numerical computations of error for various observational conditions reveal undulation difference errors ranging from 13 to 15 cm and from 6 to 36 cm in the cases of gravity anomaly and gravity disturbance data, respectively, for a cap radius of 10 deg and mean anomalies accurate to 10 mgal, with a reduction of errors in both cases to less than 10 cm as mean anomaly accuracy is increased to 1 mgal. In the absence of a spherical cap, both cases yield error estimates of 68 cm for an accuracy of 1 mgal and between 93 and 160 cm for the lesser accuracy, which can be reduced to about 110 cm by the introduction of a perfect 30-deg reference field.

  6. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

    An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable gridsearch method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method itself qualifies by reliability and rigorous geometric modelling of the orbital error signal but does not consider interfering large scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
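
    The network idea, in which each interferogram constrains only the difference of the errors of its two images and a minimum-norm least-squares solution turns these relative constraints into quasi-absolute per-image corrections, can be sketched with a toy linear model. The numbers below are synthetic; real estimates come from residual fringe patterns, not from the random values used here.

        import numpy as np

        rng = np.random.default_rng(2)
        n_images = 6
        pairs = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 4), (3, 4), (3, 5), (4, 5)]

        # Synthetic "true" per-image baseline errors (e.g. mm of cross-range error)
        true_err = rng.normal(0.0, 5.0, n_images)

        # Each interferogram (i, j) observes e_j - e_i plus estimation noise
        obs = np.array([true_err[j] - true_err[i] for i, j in pairs])
        obs += rng.normal(0.0, 0.5, len(pairs))

        # Design matrix of the network: one row per interferogram
        A = np.zeros((len(pairs), n_images))
        for row, (i, j) in enumerate(pairs):
            A[row, i], A[row, j] = -1.0, 1.0

        # lstsq returns the minimum-norm least-squares solution, which fixes the
        # datum (the network only determines errors up to a common constant)
        est, *_ = np.linalg.lstsq(A, obs, rcond=None)
        print(np.round(est - est.mean() - (true_err - true_err.mean()), 2))  # residuals

    The redundancy of the overdetermined network is what allows outlying interferograms (e.g. with unwrapping errors) to be detected and iteratively removed, as described above.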

  7. Improved Soundings and Error Estimates using AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2006-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm, which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case-by-case error estimates for retrieved geophysical parameters and for the channel-by-channel cloud cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described, as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.

  8. Population size estimation in Yellowstone wolves with error-prone noninvasive microsatellite genotypes.

    PubMed

    Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas

    2003-07-01

    Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias.
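
    Both the overestimation mechanism and the flavour of a remedy can be mimicked in a few lines: introduce per-locus genotyping errors into repeated faecal samples of the same individuals, count distinct multilocus genotypes (the naive estimate), and then merge genotypes that differ at only one locus. The single-allele coding and the "differ at one locus" rule are simplifications standing in for the paper's matching approach, and all numbers are invented.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(3)
        n_wolves, n_loci, samples_per_wolf, err_rate = 30, 8, 4, 0.05

        # True multilocus genotypes (one allele code per locus, for simplicity)
        truth = rng.integers(0, 6, size=(n_wolves, n_loci))

        # Faecal samples: copies of the true genotype with per-locus errors
        samples = np.repeat(truth, samples_per_wolf, axis=0)
        errors = rng.random(samples.shape) < err_rate
        samples[errors] = rng.integers(0, 6, size=errors.sum())

        naive = {tuple(g) for g in samples}
        print("true population:", n_wolves, " naive genotype count:", len(naive))

        # Matching: collapse genotypes that differ at <= 1 locus (union-find)
        genos = [np.array(g) for g in naive]
        parent = list(range(len(genos)))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for a, b in combinations(range(len(genos)), 2):
            if (genos[a] != genos[b]).sum() <= 1:
                parent[find(a)] = find(b)
        print("after matching:", len({find(i) for i in range(len(genos))}))

    Even a 5% per-locus error rate inflates the naive genotype count well above the true population size, which is the overestimation (up to 5.5-fold in the study) that the matching approach is designed to remove.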

  9. Wind estimates from cloud motions: Phase 1 of an in situ aircraft verification experiment

    NASA Technical Reports Server (NTRS)

    Hasler, A. F.; Shenk, W. E.; Skillman, W.

    1974-01-01

    An initial experiment was conducted to verify geostationary satellite derived cloud motion wind estimates with in situ aircraft wind velocity measurements. Case histories of one-half hour to two hours were obtained for 3-10km diameter cumulus cloud systems on 6 days. Also, one cirrus cloud case was obtained. In most cases the clouds were discrete enough that both the cloud motion and the ambient wind could be measured with the same aircraft Inertial Navigation System (INS). Since the INS drift error is the same for both the cloud motion and wind measurements, the drift error subtracts out of the relative motion determinations. The magnitude of the vector difference between the cloud motion and the ambient wind at the cloud base averaged 1.2 m/sec. The wind vector at higher levels in the cloud layer differed by about 3 m/sec to 5 m/sec from the cloud motion vector.

  10. Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures

    NASA Technical Reports Server (NTRS)

    Kaneko, Hideaki; Bey, Kim S.

    2004-01-01

    The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.

  11. Effects of structural error on the estimates of parameters of dynamical systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1986-01-01

    In this paper, the notion of 'near-equivalence in probability' is introduced for identifying a system in the presence of several error sources. Following some basic definitions, necessary and sufficient conditions for the identifiability of parameters are given. The effects of structural error on the parameter estimates for both the deterministic and stochastic cases are considered.

  12. Improved error estimates of a discharge algorithm for remotely sensed river measurements: Test cases on Sacramento and Garonne Rivers

    NASA Astrophysics Data System (ADS)

    Yoon, Yeosang; Garambois, Pierre-André; Paiva, Rodrigo C. D.; Durand, Michael; Roux, Hélène; Beighley, Edward

    2016-01-01

    We present an improvement to a previously presented algorithm that used a Bayesian Markov Chain Monte Carlo method for estimating river discharge from remotely sensed observations of river height, width, and slope. We also present an error budget for discharge calculations from the algorithm. The algorithm may be utilized by the upcoming Surface Water and Ocean Topography (SWOT) mission. We present a detailed evaluation of the method using synthetic SWOT-like observations (i.e., SWOT and AirSWOT, an airborne version of SWOT). The algorithm is evaluated using simulated AirSWOT observations over the Sacramento and Garonne Rivers that have differing hydraulic characteristics. The algorithm is also explored using SWOT observations over the Sacramento River. SWOT and AirSWOT height, width, and slope observations are simulated by corrupting the "true" hydraulic modeling results with instrument error. Algorithm discharge root mean square error (RMSE) was 9% for the Sacramento River and 15% for the Garonne River for the AirSWOT case using expected observation error. The discharge uncertainty calculated from Manning's equation was 16.2% and 17.1%, respectively. For the SWOT scenario, the RMSE and uncertainty of the discharge estimate for the Sacramento River were 15% and 16.2%, respectively. A method based on the Kalman filter to correct errors of discharge estimates was shown to improve algorithm performance. From the error budget, the primary source of uncertainty was the a priori uncertainty of bathymetry and roughness parameters. Sensitivity to measurement errors was found to be a function of river characteristics. For example, the steeper Garonne River is less sensitive to slope errors than the flatter Sacramento River.
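
    The uncertainty figures quoted from Manning's equation come from propagating height, width, slope, and roughness errors through Q = (1/n) A R^{2/3} S^{1/2}. A bare-bones Monte Carlo version of that propagation for a wide rectangular reach is sketched below; the nominal values and error levels are illustrative placeholders, not the Sacramento or Garonne numbers.

        import numpy as np

        def manning_discharge(width, depth, slope, n_rough):
            """Q = (1/n) * A * R**(2/3) * sqrt(S) for a rectangular channel."""
            area = width * depth
            radius = area / (width + 2.0 * depth)      # hydraulic radius
            return area * radius ** (2.0 / 3.0) * np.sqrt(slope) / n_rough

        rng = np.random.default_rng(4)
        n_mc = 20_000

        # Nominal reach properties with assumed (illustrative) 1-sigma errors
        width = rng.normal(120.0, 6.0, n_mc)           # m, SWOT-like width
        depth = rng.normal(4.0, 0.3, n_mc)             # m, height minus bathymetry prior
        slope = rng.normal(2.0e-4, 2.0e-5, n_mc)       # m/m
        n_rough = rng.normal(0.03, 0.004, n_mc)        # Manning roughness prior

        q = manning_discharge(width, depth, slope, n_rough)
        q_true = manning_discharge(120.0, 4.0, 2.0e-4, 0.03)
        rmse_pct = 100.0 * np.sqrt(np.mean((q - q_true) ** 2)) / q_true
        print(f"nominal Q = {q_true:.0f} m3/s, Monte Carlo RMSE = {rmse_pct:.0f}%")

    Because the roughness and bathymetry (depth) priors dominate the spread, this toy budget reproduces the paper's qualitative conclusion about the primary sources of uncertainty.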

  13. Analysis of binary responses with outcome-specific misclassification probability in genome-wide association studies.

    PubMed

    Rekaya, Romdhane; Smith, Shannon; Hay, El Hamidi; Farhat, Nourhene; Aggrey, Samuel E

    2016-01-01

    Errors in the binary status of some response traits are frequent in human, animal, and plant applications. These error rates tend to differ between cases and controls because diagnostic and screening tests have different sensitivity and specificity. This increases the inaccuracies of classifying individuals into correct groups, giving rise to both false-positive and false-negative cases. The analysis of these noisy binary responses due to misclassification will undoubtedly reduce the statistical power of genome-wide association studies (GWAS). A threshold model that accommodates varying diagnostic errors between cases and controls was investigated. A simulation study was carried out where several binary data sets (case-control) were generated with varying effects for the most influential single nucleotide polymorphisms (SNPs) and different diagnostic error rate for cases and controls. Each simulated data set consisted of 2000 individuals. Ignoring misclassification resulted in biased estimates of true influential SNP effects and inflated estimates for true noninfluential markers. A substantial reduction in bias and increase in accuracy ranging from 12% to 32% was observed when the misclassification procedure was invoked. In fact, the majority of influential SNPs that were not identified using the noisy data were captured using the proposed method. Additionally, truly misclassified binary records were identified with high probability using the proposed method. The superiority of the proposed method was maintained across different simulation parameters (misclassification rates and odds ratios) attesting to its robustness.

  14. An alternative to FASTSIM for tangential solution of the wheel-rail contact

    NASA Astrophysics Data System (ADS)

    Sichani, Matin Sh.; Enblom, Roger; Berg, Mats

    2016-06-01

    In most rail vehicle dynamics simulation packages, tangential solution of the wheel-rail contact is gained by means of Kalker's FASTSIM algorithm. While 5-25% error is expected for creep force estimation, the errors of the shear stress distribution, needed for wheel-rail damage analysis, may rise above 30% due to the parabolic traction bound. Therefore, a novel algorithm named FaStrip is proposed as an alternative to FASTSIM. It is based on the strip theory which extends the two-dimensional rolling contact solution to three-dimensional contacts. To form FaStrip, the original strip theory is amended to obtain accurate estimates for any contact ellipse size, and it is combined with a numerical algorithm to handle spin. The comparison between the two algorithms shows that using FaStrip improves the accuracy of the estimated shear stress distribution and the creep force estimation in all studied cases. In combined lateral creepage and spin cases, for instance, the error in force estimation reduces from 18% to less than 2%. The estimation of the slip velocities in the slip zone, needed for wear analysis, is also studied. Since FaStrip is as fast as FASTSIM, it can be an alternative for tangential solution of the wheel-rail contact in simulation packages.

  15. Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT

    NASA Astrophysics Data System (ADS)

    Ubaidulla, P.; Chockalingam, A.

    2009-12-01

    We present robust joint nonlinear transceiver designs for multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for interuser interference precancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform nonrobust designs as well as robust linear transceiver designs reported in the recent literature.

  16. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    PubMed

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.

  17. Iterative updating of model error for Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew

    2018-02-01

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.

  18. A stopping criterion for the iterative solution of partial differential equations

    NASA Astrophysics Data System (ADS)

    Rao, Kaustubh; Malan, Paul; Perot, J. Blair

    2018-01-01

    A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
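
    A common way to turn successive solution changes into an error estimate is to assume roughly linear convergence: if the change shrinks by a factor rho each iteration, the remaining error is approximately the latest change times rho/(1 - rho). The sketch below applies that idea to a Jacobi iteration on a small linear system; it is a generic illustration of the principle, not the specific estimator proposed in the paper.

        import numpy as np

        def jacobi_with_error_estimate(A, b, tol=1e-8, max_iter=500):
            """Jacobi iteration that stops on an estimated *solution* error,
            inferred from the sizes of successive solution changes."""
            x = np.zeros_like(b)
            d = np.diag(A)
            changes = []
            for k in range(max_iter):
                x_new = (b - (A @ x - d * x)) / d
                changes.append(np.linalg.norm(x_new - x))
                x = x_new
                if len(changes) >= 3:
                    rho = changes[-1] / changes[-2]          # estimated contraction rate
                    if rho < 1.0:
                        err_est = changes[-1] * rho / (1.0 - rho)
                        if err_est < tol:
                            return x, err_est, k + 1
            return x, float("nan"), max_iter

        # Diagonally dominant test system with a known exact solution
        rng = np.random.default_rng(5)
        n = 50
        A = rng.standard_normal((n, n)) + np.diag(np.full(n, 4.0 * n))
        x_exact = rng.standard_normal(n)
        b = A @ x_exact
        x, err_est, iters = jacobi_with_error_estimate(A, b)
        print(iters, "iterations, estimated error", f"{err_est:.2e}",
              "actual error", f"{np.linalg.norm(x - x_exact):.2e}")

    The paper's criterion additionally falls back to a singular-value-based bound when the solution changes are noisy or stagnating, a case this simple geometric-series estimate does not handle.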

  19. Worst-Case Flutter Margins from F/A-18 Aircraft Aeroelastic Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty

    1997-01-01

    An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, μ, computes a stability margin which directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The μ margins are robust margins which indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 SRA using uncertainty sets generated by flight data analysis. The robust margins demonstrate flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.

  20. Mammalian cell culture process for monoclonal antibody production: nonlinear modelling and parameter estimation.

    PubMed

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad; Roman, Monica

    2015-01-01

    Monoclonal antibodies (mAbs) are at present one of the fastest growing products of pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAbs production processes is predominantly based on empirical knowledge, the improvements being achieved by using trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. By using a dynamical model of such kind of processes, an optimization-based technique for estimation of kinetic parameters in the model of mammalian cell culture process is developed. The estimation is achieved as a result of minimizing an error function by a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed in this work by using a particular model of mammalian cell culture, as a case study, but is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies.
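
    A generic particle swarm optimization loop for kinetic parameter estimation is sketched below. It fits a simple saturating-growth model to synthetic "measurements" rather than the actual mammalian cell culture model, which is not given in the abstract; the model, parameter names, and bounds are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(6)

        # Synthetic "measurements" of biomass X(t) = Xmax*(1 - exp(-mu*t)) + noise
        t = np.linspace(0.0, 10.0, 25)
        true_params = np.array([2.5, 0.6])                      # [Xmax, mu]
        model = lambda p: p[0] * (1.0 - np.exp(-p[1] * t))
        data = model(true_params) + rng.normal(0.0, 0.05, t.size)
        error = lambda p: np.sum((model(p) - data) ** 2)        # objective to minimise

        # Standard PSO with inertia and cognitive/social terms
        n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
        lo, hi = np.array([0.1, 0.01]), np.array([10.0, 5.0])   # parameter bounds
        pos = rng.uniform(lo, hi, (n_particles, 2))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([error(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([error(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("estimated [Xmax, mu]:", np.round(gbest, 3), " true:", true_params)

    The attraction of PSO in this setting is that it needs only objective evaluations, so the same loop applies when the model is a full nonlinear bioprocess simulation instead of a closed-form curve.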

  1. Mammalian Cell Culture Process for Monoclonal Antibody Production: Nonlinear Modelling and Parameter Estimation

    PubMed Central

    Selişteanu, Dan; Șendrescu, Dorin; Georgeanu, Vlad

    2015-01-01

    Monoclonal antibodies (mAbs) are at present one of the fastest growing products of pharmaceutical industry, with widespread applications in biochemistry, biology, and medicine. The operation of mAbs production processes is predominantly based on empirical knowledge, the improvements being achieved by using trial-and-error experiments and precedent practices. The nonlinearity of these processes and the absence of suitable instrumentation require an enhanced modelling effort and modern kinetic parameter estimation strategies. The present work is dedicated to nonlinear dynamic modelling and parameter estimation for a mammalian cell culture process used for mAb production. By using a dynamical model of such kind of processes, an optimization-based technique for estimation of kinetic parameters in the model of mammalian cell culture process is developed. The estimation is achieved as a result of minimizing an error function by a particle swarm optimization (PSO) algorithm. The proposed estimation approach is analyzed in this work by using a particular model of mammalian cell culture, as a case study, but is generic for this class of bioprocesses. The presented case study shows that the proposed parameter estimation technique provides a more accurate simulation of the experimentally observed process behaviour than reported in previous studies. PMID:25685797

  2. SU-E-J-92: Validating Dose Uncertainty Estimates Produced by AUTODIRECT, An Automated Program to Evaluate Deformable Image Registration Accuracy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, H; Chen, J; Pouliot, J

    2015-06-15

    Purpose: Deformable image registration (DIR) is a powerful tool with the potential to deformably map dose from one computed-tomography (CT) image to another. Errors in the DIR, however, will produce errors in the transferred dose distribution. We have proposed a software tool, called AUTODIRECT (automated DIR evaluation of confidence tool), which predicts voxel-specific dose mapping errors on a patient-by-patient basis. This work validates the effectiveness of AUTODIRECT to predict dose mapping errors with virtual and physical phantom datasets. Methods: AUTODIRECT requires 4 inputs: moving and fixed CT images and two noise scans of a water phantom (for noise characterization). Then, AUTODIRECT uses algorithms to generate test deformations and applies them to the moving and fixed images (along with processing) to digitally create sets of test images, with known ground-truth deformations that are similar to the actual one. The clinical DIR algorithm is then applied to these test image sets (currently 4). From these tests, AUTODIRECT generates spatial and dose uncertainty estimates for each image voxel based on a Student’s t distribution. This work compares these uncertainty estimates to the actual errors made by the Velocity Deformable Multi Pass algorithm on 11 virtual and 1 physical phantom datasets. Results: For 11 of the 12 tests, the predicted dose error distributions from AUTODIRECT are well matched to the actual error distributions within 1–6% for 10 virtual phantoms, and 9% for the physical phantom. For one of the cases though, the predictions underestimated the errors in the tail of the distribution. Conclusion: Overall, the AUTODIRECT algorithm performed well on the 12 phantom cases for Velocity and was shown to generate accurate estimates of dose warping uncertainty. AUTODIRECT is able to automatically generate patient-, organ-, and voxel-specific DIR uncertainty estimates. This ability would be useful for patient-specific DIR quality assurance.

  3. Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise

    NASA Technical Reports Server (NTRS)

    Sedlak, J.; Hashmall, J.

    1997-01-01

    Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth magnetic field model. The Kalman filter accounts for the random component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.

  4. Probability of misclassifying biological elements in surface waters.

    PubMed

    Loga, Małgorzata; Wierzchołowska-Dziedzic, Anna

    2017-11-24

    Measurement uncertainties are inherent to assessment of biological indices of water bodies. The effect of these uncertainties on the probability of misclassification of ecological status is the subject of this paper. Four Monte-Carlo (M-C) models were applied to simulate the occurrence of random errors in the measurements of metrics corresponding to four biological elements of surface waters: macrophytes, phytoplankton, phytobenthos, and benthic macroinvertebrates. Long series of error-prone measurement values of these metrics, generated by M-C models, were used to identify cases in which values of any of the four biological indices lay outside of the "true" water body class, i.e., outside the class assigned from the actual physical measurements. Fraction of such cases in the M-C generated series was used to estimate the probability of misclassification. The method is particularly useful for estimating the probability of misclassification of the ecological status of surface water bodies in the case of short sequences of measurements of biological indices. The results of the Monte-Carlo simulations show a relatively high sensitivity of this probability to measurement errors of the river macrophyte index (MIR) and high robustness to measurement errors of the benthic macroinvertebrate index (MMI). The proposed method of using Monte-Carlo models to estimate the probability of misclassification has significant potential for assessing the uncertainty of water body status reported to the EC by the EU member countries according to WFD. The method can be readily applied also in risk assessment of water management decisions before adopting the status dependent corrective actions.
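
    The core of the Monte-Carlo procedure, perturbing a measured index with its assumed error distribution and counting how often the perturbed value lands in a different status class, is easy to sketch. The class boundaries and error levels below are placeholders, not the WFD metrics analysed in the paper.

        import numpy as np

        def misclassification_probability(index_value, sigma, class_bounds,
                                          n_sim=100_000, seed=0):
            """Probability that a measured biological index, perturbed by
            N(0, sigma) measurement error, falls outside its 'true' class.

            class_bounds: ascending boundaries between classes, e.g. five classes
                          need four boundaries on the 0-1 index scale.
            """
            rng = np.random.default_rng(seed)
            true_class = np.searchsorted(class_bounds, index_value)
            simulated = index_value + rng.normal(0.0, sigma, n_sim)
            sim_class = np.searchsorted(class_bounds, simulated)
            return np.mean(sim_class != true_class)

        bounds = [0.2, 0.4, 0.6, 0.8]          # hypothetical bad/poor/moderate/good/high
        for value, sigma in [(0.62, 0.05), (0.62, 0.12), (0.45, 0.05)]:
            p = misclassification_probability(value, sigma, bounds)
            print(f"index={value}, sigma={sigma}: P(misclassification) = {p:.2f}")

    The probability depends on both the measurement error and the distance of the measured value from the nearest class boundary, which is why indices with larger errors (such as MIR in the study) and values near a boundary are the most fragile.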

  5. Matched samples logistic regression in case-control studies with missing values: when to break the matches.

    PubMed

    Hansson, Lisbeth; Khamis, Harry J

    2008-12-01

    Simulated data sets are used to evaluate conditional and unconditional maximum likelihood estimation in an individual case-control design with continuous covariates when there are different rates of excluded cases and different levels of other design parameters. The effectiveness of the estimation procedures is measured by method bias, variance of the estimators, root mean square error (RMSE) for logistic regression and the percentage of explained variation. Conditional estimation leads to higher RMSE than unconditional estimation in the presence of missing observations, especially for 1:1 matching. The RMSE is higher for the smaller stratum size, especially for the 1:1 matching. The percentage of explained variation appears to be insensitive to missing data, but is generally higher for the conditional estimation than for the unconditional estimation. It is particularly good for the 1:2 matching design. For minimizing RMSE, a high matching ratio is recommended; in this case, conditional and unconditional logistic regression models yield comparable levels of effectiveness. For maximizing the percentage of explained variation, the 1:2 matching design with the conditional logistic regression model is recommended.

  6. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    DOE PAGES

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  7. Filtering Methods for Error Reduction in Spacecraft Attitude Estimation Using Quaternion Star Trackers

    NASA Technical Reports Server (NTRS)

    Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil

    2011-01-01

    Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors, 1) extended Kalman filter (EKF) augmented with Markov states, 2) Unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.

  8. Tissue resistivity estimation in the presence of positional and geometrical uncertainties.

    PubMed

    Baysal, U; Eyüboğlu, B M

    2000-08-01

    Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
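
    The contrast drawn above between a statistically constrained minimum mean squared error estimator and plain least squares can be reproduced on a toy linear problem y = A x + n: the MMSE (Bayesian linear) estimator uses prior and noise covariances, the conventional least squares estimator does not. The matrices below are random placeholders, not a thorax model.

        import numpy as np

        rng = np.random.default_rng(7)
        n_meas, n_unknown = 40, 10

        A = rng.standard_normal((n_meas, n_unknown))        # sensitivity matrix
        C_x = np.diag(rng.uniform(0.5, 2.0, n_unknown))     # prior covariance (resistivity range)
        C_n = 0.5 * np.eye(n_meas)                          # instrumentation noise covariance
        x_true = rng.multivariate_normal(np.zeros(n_unknown), C_x)
        y = A @ x_true + rng.multivariate_normal(np.zeros(n_meas), C_n)

        # Statistically constrained MMSE estimator: x = C_x A^T (A C_x A^T + C_n)^-1 y
        G = C_x @ A.T @ np.linalg.inv(A @ C_x @ A.T + C_n)
        x_mmse = G @ y

        # Conventional least squares estimator
        x_lse, *_ = np.linalg.lstsq(A, y, rcond=None)

        err = lambda x: 100.0 * np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
        print(f"MMSE error {err(x_mmse):.1f}%  vs  LSE error {err(x_lse):.1f}%")

    Geometrical uncertainty can be folded into the same framework by inflating C_n (or adding a linearization-error covariance), which is essentially how the additional a priori constraints described in the abstract enter the estimator.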

  9. Linear Parameter Varying Control Synthesis for Actuator Failure, Based on Estimated Parameter

    NASA Technical Reports Server (NTRS)

    Shin, Jong-Yeob; Wu, N. Eva; Belcastro, Christine

    2002-01-01

    The design of a linear parameter varying (LPV) controller for an aircraft in actuator failure cases is presented. The controller synthesis for actuator failure cases is formulated into linear matrix inequality (LMI) optimizations based on an estimated failure parameter with pre-defined estimation error bounds. The inherent conservatism of an LPV control synthesis methodology is reduced using a scaling factor on the uncertainty block which represents estimated parameter uncertainties. The fault parameter is estimated using the two-stage Kalman filter. The simulation results of the designed LPV controller for a HiMAT (Highly Maneuverable Aircraft Technology) vehicle with the on-line estimator show that the desired performance and robustness objectives are achieved for actuator failure cases.

  10. Predicting protein concentrations with ELISA microarray assays, monotonic splines and Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Daly, Don S.; Anderson, Kevin K.; White, Amanda M.

    Background: A microarray of enzyme-linked immunosorbent assays, or ELISA microarray, predicts simultaneously the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Making sound biological inferences as well as improving the ELISA microarray process require both concentration predictions and credible estimates of their errors. Methods: We present a statistical method based on monotonic spline statistical models, penalized constrained least squares fitting (PCLS) and Monte Carlo simulation (MC) to predict concentrations and estimate prediction errors in ELISA microarray. PCLS restrains the flexible spline to a fit of assay intensity that is a monotone function of protein concentration. With MC, both modeling and measurement errors are combined to estimate prediction error. The spline/PCLS/MC method is compared to a common method using simulated and real ELISA microarray data sets. Results: In contrast to the rigid logistic model, the flexible spline model gave credible fits in almost all test cases including troublesome cases with left and/or right censoring, or other asymmetries. For the real data sets, 61% of the spline predictions were more accurate than their comparable logistic predictions, especially the spline predictions at the extremes of the prediction curve. The relative errors of 50% of comparable spline and logistic predictions differed by less than 20%. Monte Carlo simulation rendered acceptable asymmetric prediction intervals for both spline and logistic models, while propagation of error produced symmetric intervals that diverged unrealistically as the standard curves approached horizontal asymptotes. Conclusions: The spline/PCLS/MC method is a flexible, robust alternative to a logistic/NLS/propagation-of-error method to reliably predict protein concentrations and estimate their errors. The spline method simplifies model selection and fitting, and reliably estimates believable prediction errors. For the 50% of the real data sets fit well by both methods, spline and logistic predictions are practically indistinguishable, varying in accuracy by less than 15%. The spline method may be useful when automated prediction across simultaneous assays of numerous proteins must be applied routinely with minimal user intervention.
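
    A hedged sketch of the spline-plus-Monte-Carlo idea follows: fit a monotone curve to a standard series, then invert it for a new sample while resampling both calibration and measurement noise. PCHIP interpolation is used here only as a convenient monotone stand-in for the penalized constrained least-squares spline, and all data are synthetic.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Minimal sketch: monotone standard-curve fit plus Monte Carlo inverse prediction.
# The standards, noise levels, and new-sample reading are all assumed values.
rng = np.random.default_rng(1)
conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])                 # known standards
intensity = 100 * conc / (5.0 + conc) + rng.normal(0, 1.5, conc.size)   # noisy readings

y_new, sigma_y = 60.0, 1.5      # new sample intensity and its assumed noise level
grid = np.linspace(conc.min(), conc.max(), 2000)
draws = []
for _ in range(2000):
    # Perturb the calibration points (modeling error) and refit the monotone curve.
    f = PchipInterpolator(conc, intensity + rng.normal(0, 1.5, conc.size))
    # Perturb the new measurement (measurement error) and invert on a fine grid.
    y_draw = y_new + rng.normal(0, sigma_y)
    draws.append(grid[np.argmin(np.abs(f(grid) - y_draw))])

lo, med, hi = np.percentile(draws, [2.5, 50, 97.5])
print(f"predicted concentration {med:.1f} (95% interval {lo:.1f}-{hi:.1f})")
```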

  11. Data Envelopment Analysis in the Presence of Measurement Error: Case Study from the National Database of Nursing Quality Indicators® (NDNQI®)

    PubMed Central

    Gajewski, Byron J.; Lee, Robert; Dunton, Nancy

    2012-01-01

    Data Envelopment Analysis (DEA) is the most commonly used approach for evaluating healthcare efficiency (Hollingsworth, 2008), but a long-standing concern is that DEA assumes that data are measured without error. This is quite unlikely, and DEA and other efficiency analysis techniques may yield biased efficiency estimates if this measurement error is not accounted for (Gajewski, Lee, Bott, Piamjariyakul and Taunton, 2009; Ruggiero, 2004). We propose to address measurement error systematically using a Bayesian method (Bayesian DEA). We will apply Bayesian DEA to data from the National Database of Nursing Quality Indicators® (NDNQI®) to estimate nursing units’ efficiency. Several external reliability studies inform the posterior distribution of the measurement error on the DEA variables. We will discuss the case of generalizing the approach to situations where an external reliability study is not feasible. PMID:23328796

  12. Quantification of construction waste prevented by BIM-based design validation: Case studies in South Korea.

    PubMed

    Won, Jongsung; Cheng, Jack C P; Lee, Ghang

    2016-03-01

    Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste since construction waste is mainly generated due to improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied in this paper, with 381 and 136 design errors detected, respectively, during the BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of the construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  14. The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error

    PubMed Central

    Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G

    2012-01-01

    Objective To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design Adverse events were identified among the original and reabstracted records using ICD10-CA (Canadian adaptation of ICD10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908

  15. Robust Flutter Margin Analysis that Incorporates Flight Data

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Martin J.

    1998-01-01

    An approach for computing worst-case flutter margins has been formulated in a robust stability framework. Uncertainty operators are included with a linear model to describe modeling errors and flight variations. The structured singular value, mu, computes a stability margin that directly accounts for these uncertainties. This approach introduces a new method of computing flutter margins and an associated new parameter for describing these margins. The mu margins are robust margins that indicate worst-case stability estimates with respect to the defined uncertainty. Worst-case flutter margins are computed for the F/A-18 Systems Research Aircraft using uncertainty sets generated by flight data analysis. The robust margins demonstrate that flight conditions for flutter may lie closer to the flight envelope than previously estimated by p-k analysis.

  16. Finite-error metrological bounds on multiparameter Hamiltonian estimation

    NASA Astrophysics Data System (ADS)

    Kura, Naoto; Ueda, Masahito

    2018-01-01

    Estimation of multiple parameters in an unknown Hamiltonian is investigated. We present upper and lower bounds on the time required to complete the estimation within a prescribed error tolerance δ. The lower bound is given on the basis of the Cramér-Rao inequality, where the quantum Fisher information is bounded by the squared evolution time. The upper bound is obtained by an explicit construction of estimation procedures. By comparing the cases with different numbers of Hamiltonian channels, we also find that the few-channel procedure with adaptive feedback and the many-channel procedure with entanglement are equivalent in the sense that they require the same amount of time resource up to a constant factor.

  17. Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes

    NASA Astrophysics Data System (ADS)

    Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science and Technology Team

    We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECC). We consider the case of a qubit dephasing channel where the dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15,1,3]] shortened Reed-Muller code, which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis, which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.

  18. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies

    PubMed Central

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-01-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations, the Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
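
    As a hedged illustration of one of the compared approaches, the sketch below computes bootstrap standard errors for a linear two-stage residual inclusion estimator on simulated data; it is not the authors' analysis, and the data-generating values are assumptions.

```python
import numpy as np

# Minimal sketch: bootstrap standard error for a linear TSRI (control function)
# estimator. The genotype instrument, confounder, and effect size are simulated.
rng = np.random.default_rng(2)
n = 2000
g = rng.binomial(2, 0.3, n)                 # genotype instrument
u = rng.normal(size=n)                      # unobserved confounder
x = 0.5 * g + u + rng.normal(size=n)        # exposure
y = 0.3 * x + u + rng.normal(size=n)        # outcome, true causal effect 0.3

def tsri(idx):
    gi, xi, yi = g[idx], x[idx], y[idx]
    b1 = np.polyfit(gi, xi, 1)              # stage 1: exposure on instrument
    r = xi - np.polyval(b1, gi)             # stage-1 residuals
    X = np.column_stack([np.ones_like(xi), xi, r])
    beta = np.linalg.lstsq(X, yi, rcond=None)[0]   # stage 2: include residuals
    return beta[1]                          # causal effect estimate

est = tsri(np.arange(n))
boot = [tsri(rng.integers(0, n, n)) for _ in range(500)]
print(f"TSRI estimate {est:.3f}, bootstrap SE {np.std(boot, ddof=1):.3f}")
```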

  19. The importance of scientific evaluation of biological evidence--data from eight years of case review.

    PubMed

    Coyle, Heather Miller

    2012-12-01

    In 2009, the National Research Council published a report stating that the addition of more science and technology into the field of forensic science in the United States would be of great benefit to the judicial system. As a starting point to address this NRC report, one needs to make an assessment of the system. One factor that is continuously requested is an estimate of an error rate. In any given scientific area of forensics that is difficult to quantitate except by external review and audits. After eight years of requested defense review of cases with biological and DNA evidence, most cases appear to be scientifically sound in test methods and procedures. However, there were some cases where errors in the forensic science process did occur. This article takes information compiled from those eight years of defense review and summarizes the cases where errors have been discovered and discusses the scientific implications of these errors. The scope of this article is limited to crime scene collection and forensic science laboratory testing of biological materials for body fluid identification and DNA individualization to a source. The greatest value of defense review comes from (a) providing effective balance and independent oversight to the judicial process and (b) collecting data into a format that can be useful as a guide in training programs. Copyright © 2012 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  20. Prediction and error of baldcypress stem volume from stump diameter

    Treesearch

    Bernard R. Parresol

    1998-01-01

    The need to estimate the volume of removals occurs for many reasons, such as in trespass cases, severance tax reports, and post-harvest assessments. A logarithmic model is presented for prediction of baldcypress total stem cubic foot volume using stump diameter as the independent variable. Because the error of prediction is as important as the volume estimate, the...

  1. Use of attribute association error probability estimates to evaluate quality of medical record geocodes.

    PubMed

    Klaus, Christian A; Carrasco, Luis E; Goldberg, Daniel W; Henry, Kevin A; Sherman, Recinda L

    2015-09-15

    The utility of patient attributes associated with the spatiotemporal analysis of medical records lies not just in their values but also in the strength of association between them. Estimating the extent to which a hierarchy of conditional probability exists between patient attribute associations such as patient identifying fields, patient and date of diagnosis, and patient and address at diagnosis is fundamental to estimating the strength of association between patient and geocode, and patient and enumeration area. We propose a hierarchy for the attribute associations within medical records that enable spatiotemporal relationships. We also present a set of metrics that store attribute association error probability (AAEP), to estimate error probability for all attribute associations upon which certainty in a patient geocode depends. A series of experiments was undertaken to understand how error estimation could be operationalized within health data and what levels of AAEP reveal themselves in real data when these methods are applied. Specifically, the goals of this evaluation were to (1) assess if the concept of our error assessment techniques could be implemented by a population-based cancer registry; (2) apply the techniques to real data from a large health data agency and characterize the observed levels of AAEP; and (3) demonstrate how detected AAEP might impact spatiotemporal health research. We present an evaluation of AAEP metrics generated for cancer cases in a North Carolina county. We show examples of how we estimated AAEP for selected attribute associations and circumstances. We demonstrate the distribution of AAEP in our case sample across attribute associations, and demonstrate ways in which disease registry specific operations influence the prevalence of AAEP estimates for specific attribute associations. The effort to detect and store estimates of AAEP is worthwhile because of the increase in confidence fostered by the attribute association level approach to the assessment of uncertainty in patient geocodes, relative to existing geocoding related uncertainty metrics.

  2. A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

    NASA Astrophysics Data System (ADS)

    Cheung, KW; So, HC; Ma, W.-K.; Chan, YT

    2006-12-01

    The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all of the above-described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and approximately attain the Cramér-Rao lower bound when the measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
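
    The sketch below illustrates only the unconstrained weighted least-squares baseline for the TOA case, solved by Gauss-Newton iteration; the constrained (CWLS) formulation of the paper is not reproduced here, and the geometry and noise level are assumed.

```python
import numpy as np

# Minimal sketch: weighted least-squares TOA positioning via Gauss-Newton.
# Base-station layout, true position, and range noise are assumed values.
rng = np.random.default_rng(3)
bs = np.array([[0.0, 0.0], [2000.0, 0.0], [0.0, 2000.0], [2000.0, 2000.0]])  # stations [m]
p_true = np.array([700.0, 1200.0])
sigma = 5.0                                        # range error std [m] (assumed small)
r = np.linalg.norm(bs - p_true, axis=1) + rng.normal(0, sigma, len(bs))

W = np.eye(len(bs)) / sigma**2                     # weights = inverse error variances
p = np.array([1000.0, 1000.0])                     # initial guess
for _ in range(10):
    d = np.linalg.norm(bs - p, axis=1)
    J = (p - bs) / d[:, None]                      # Jacobian of predicted ranges w.r.t. position
    res = r - d
    p = p + np.linalg.solve(J.T @ W @ J, J.T @ W @ res)
print("estimated position:", p, " error [m]:", np.linalg.norm(p - p_true))
```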

  3. Exploring Discretization Error in Simulation-Based Aerodynamic Databases

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2010-01-01

    This work examines the level of discretization error in simulation-based aerodynamic databases and introduces strategies for error control. Simulations are performed using a parallel, multi-level Euler solver on embedded-boundary Cartesian meshes. Discretization errors in user-selected outputs are estimated using the method of adjoint-weighted residuals, and we use adaptive mesh refinement to reduce these errors to specified tolerances. Using this framework, we examine the behavior of discretization error throughout a token database computed for a NACA 0012 airfoil consisting of 120 cases. We compare the cost and accuracy of two approaches for aerodynamic database generation. In the first approach, mesh adaptation is used to compute all cases in the database to a prescribed level of accuracy. The second approach conducts all simulations using the same computational mesh without adaptation. We quantitatively assess the error landscape and computational costs in both databases. This investigation highlights sensitivities of the database under a variety of conditions. The presence of transonic shocks or the stiffness in the governing equations near the incompressible limit are shown to dramatically increase discretization error, requiring additional mesh resolution to control. Results show that such pathologies lead to error levels that vary by over a factor of 40 when using a fixed mesh throughout the database. Alternatively, controlling this sensitivity through mesh adaptation leads to mesh sizes which span two orders of magnitude. We propose strategies to minimize simulation cost in sensitive regions and discuss the role of error estimation in database quality.

  4. On the error propagation of semi-Lagrange and Fourier methods for advection problems

    PubMed Central

    Einkemmer, Lukas; Ostermann, Alexander

    2015-01-01

    In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018

  5. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    NASA Astrophysics Data System (ADS)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.

  6. Influence of hypo- and hyperthermia on death time estimation - A simulation study.

    PubMed

    Muggenthaler, H; Hubig, M; Schenkl, S; Mall, G

    2017-09-01

    Numerous physiological and pathological mechanisms can cause elevated or lowered body core temperatures. Deviations from the physiological level of about 37°C can influence temperature-based death time estimations. However, it has not been investigated by means of thermodynamics to what extent hypo- and hyperthermia bias death time estimates. Using numerical simulation, the present study investigates the errors inherent in temperature-based death time estimation in the case of elevated or lowered body core temperatures before death. The most considerable errors with regard to the normothermic model occur in the first few hours post-mortem. With decreasing body core temperature and increasing post-mortem time the error diminishes and stagnates at a nearly constant level. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Minimum-error quantum distinguishability bounds from matrix monotone functions: A comment on 'Two-sided estimates of minimum-error distinguishability of mixed quantum states via generalized Holevo-Curlander bounds' [J. Math. Phys. 50, 032106 (2009)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyson, Jon

    2009-06-15

    Matrix monotonicity is used to obtain upper bounds on minimum-error distinguishability of arbitrary ensembles of mixed quantum states. This generalizes one direction of a two-sided bound recently obtained by the author [J. Tyson, J. Math. Phys. 50, 032106 (2009)]. It is shown that the previously obtained special case has unique properties.

  8. Worst-error analysis of batch filter and sequential filter in navigation problems. [in spacecraft trajectory estimation

    NASA Technical Reports Server (NTRS)

    Nishimura, T.

    1975-01-01

    This paper proposes a worst-error analysis for dealing with problems of estimation of spacecraft trajectories in deep space missions. Navigation filters in use assume either constant or stochastic (Markov) models for their estimated parameters. When the actual behavior of these parameters does not follow the pattern of the assumed model, the filters sometimes result in very poor performance. To prepare for such pathological cases, the worst errors of both batch and sequential filters are investigated based on incremental sensitivity studies of these filters. By finding critical switching instances of non-gravitational accelerations, intensive tracking can be carried out around those instances. Also, the worst errors in the target plane provide a measure for assignment of the propellant budget for trajectory corrections. Thus the worst-error study presents useful information as well as practical criteria for establishing the maneuver and tracking strategy of spacecraft missions.

  9. Number-counts slope estimation in the presence of Poisson noise

    NASA Technical Reports Server (NTRS)

    Schmitt, Juergen H. M. M.; Maccacaro, Tommaso

    1986-01-01

    The slope determination of a power-law number-flux relationship is examined for the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on the integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with a signal-to-noise ratio of five or greater are considered. However, if sources with a signal-to-noise ratio of five or less are included, the derived bias corrections depend sensitively on the shape of the error distribution.

  10. Bayesian adjustment for measurement error in continuous exposures in an individually matched case-control study.

    PubMed

    Espino-Hernandez, Gabriela; Gustafson, Paul; Burstyn, Igor

    2011-05-14

    In epidemiological studies, explanatory variables are frequently subject to measurement error. The aim of this paper is to develop a Bayesian method to correct for measurement error in multiple continuous exposures in individually matched case-control studies. This is a topic that has not been widely investigated. The new method is illustrated using data from an individually matched case-control study of the association between thyroid hormone levels during pregnancy and exposure to perfluorinated acids. The objective of the motivating study was to examine the risk of maternal hypothyroxinemia due to exposure to three perfluorinated acids measured on a continuous scale. Results from the proposed method are compared with those obtained from a naive analysis. Using a Bayesian approach, the developed method considers a classical measurement error model for the exposures, as well as the conditional logistic regression likelihood as the disease model, together with a random-effect exposure model. Proper and diffuse prior distributions are assigned, and results from a quality control experiment are used to estimate the perfluorinated acids' measurement error variability. As a result, posterior distributions and 95% credible intervals of the odds ratios are computed. A sensitivity analysis of the method's performance in this particular application with different measurement error variability was performed. The proposed Bayesian method to correct for measurement error is feasible and can be implemented using statistical software. For the study on perfluorinated acids, a comparison of the inferences which are corrected for measurement error to those which ignore it indicates that little adjustment is manifested for the level of measurement error actually exhibited in the exposures. Nevertheless, a sensitivity analysis shows that more substantial adjustments arise if larger measurement errors are assumed. In individually matched case-control studies, the use of conditional logistic regression likelihood as a disease model in the presence of measurement error in multiple continuous exposures can be justified by having a random-effect exposure model. The proposed method can be successfully implemented in WinBUGS to correct individually matched case-control studies for several mismeasured continuous exposures under a classical measurement error model.

  11. Reference-free error estimation for multiple measurement methods.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga

    2018-01-01

    We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in a good agreement with the corresponding least squares regression estimates against a reference.

  12. Skin-deep diagnosis: affective bias and zebra retreat complicating the diagnosis of systemic sclerosis.

    PubMed

    Miller, Chad S

    2013-01-01

    Nearly half of medical errors can be attributed to an error of clinical reasoning or decision making. It is estimated that the correct diagnosis is missed or delayed in between 5% and 14% of acute hospital admissions. Through understanding why and how physicians make these errors, it is hoped that strategies can be developed to decrease the number of these errors. In the present case, a patient presented with dyspnea, gastrointestinal symptoms and weight loss; the diagnosis was initially missed because the treating physicians took mental shortcuts and relied on heuristics. Heuristics have an inherent bias that can lead to faulty reasoning or conclusions, especially in complex or difficult cases. Affective bias, which is the overinvolvement of emotion in clinical decision making, limited the available information for diagnosis because of the hesitancy to acquire a full history and perform a complete physical examination in this patient. Zebra retreat, another type of bias, is when a rare diagnosis figures prominently on the differential diagnosis but the physician retreats from it for various reasons. Zebra retreat also factored in the delayed diagnosis. Through the description of these clinical reasoning errors in an actual case, it is hoped that future errors can be prevented or that inspiration for additional research in this area will develop.

  13. TH-AB-202-04: Auto-Adaptive Margin Generation for MLC-Tracked Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glitzner, M; Lagendijk, J; Raaymakers, B

    Purpose: To develop an auto-adaptive margin generator for MLC tracking. The generator is able to estimate errors arising in image guided radiotherapy, particularly on an MR-Linac, which depend on the latencies of machine and image processing, as well as on patient motion characteristics. From the estimated error distribution, a segment margin is generated, able to compensate errors up to a user-defined confidence. Method: In every tracking control cycle (TCC, 40ms), the desired aperture D(t) is compared to the actual aperture A(t), a delayed and imperfect representation of D(t). Thus an error e(t)=A(t)-D(t) is measured every TCC. Applying kernel-density-estimation (KDE), the cumulative distribution (CDF) of e(t) is estimated. With CDF-confidence limits, upper and lower error limits are extracted for motion axes along and perpendicular to leaf-travel direction and applied as margins. To test the dosimetric impact, two representative motion traces were extracted from fast liver-MRI (10Hz). The traces were applied onto a 4D-motion platform and continuously tracked by an Elekta Agility 160 MLC using an artificially imposed tracking delay. Gafchromic film was used to detect dose exposition for static, tracked, and error-compensated tracking cases. The margin generator was parameterized to cover 90% of all tracking errors. Dosimetric impact was rated by calculating the ratio between underexposed points (>5% underdosage) and the total number of points inside the FWHM of the static exposure. Results: Without imposing adaptive margins, tracking experiments showed a ratio of underexposed points of 17.5% and 14.3% for two motion cases with imaging delays of 200ms and 300ms, respectively. Activating the margin generator yielded total suppression (<1%) of underdosed points. Conclusion: We showed that auto-adaptive error compensation using machine error statistics is possible for MLC tracking. The error compensation margins are calculated on-line, without the need of assuming motion or machine models. Further strategies to reduce consequential overdosages are currently under investigation. This work was funded by the SoRTS consortium, which includes the industry partners Elekta, Philips and Technolution.
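
    A minimal sketch of the margin-generation step follows: the distribution of recent tracking errors e(t) is estimated with a kernel density estimate, and confidence limits of its CDF are taken as margins. The error trace, units, and confidence level are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Minimal sketch: KDE of tracking errors e(t) = A(t) - D(t), margins from CDF
# confidence limits. The error history below is synthetic, not MLC log data.
rng = np.random.default_rng(4)
errors_mm = rng.normal(0.4, 0.8, 500) + 0.3 * np.sin(np.linspace(0, 20, 500))  # assumed [mm]

kde = gaussian_kde(errors_mm)
grid = np.linspace(errors_mm.min() - 2, errors_mm.max() + 2, 2000)
pdf = kde(grid)
cdf = np.cumsum(pdf)
cdf /= cdf[-1]

confidence = 0.90                        # cover 90% of observed tracking errors
lower = grid[np.searchsorted(cdf, (1 - confidence) / 2)]
upper = grid[np.searchsorted(cdf, 1 - (1 - confidence) / 2)]
print(f"adaptive margins: {lower:.2f} mm (trailing), {upper:.2f} mm (leading)")
```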

  14. First-order approximation error analysis of Risley-prism-based beam directing system.

    PubMed

    Zhao, Yanyan; Yuan, Yan

    2014-12-01

    To improve the performance of a Risley-prism system for optical detection and measuring applications, it is necessary to be able to determine the direction of the outgoing beam with high accuracy. In previous works, error sources and their impact on the performance of the Risley-prism system have been analyzed, but their numerical approximation accuracy was not high. Besides, pointing error analysis of the Risley-prism system has provided results for the case when the component errors, prism orientation errors, and assembly errors are certain. In this work, the prototype of a Risley-prism system was designed. The first-order approximations of the error analysis were derived and compared with the exact results. The directing errors of a Risley-prism system associated with wedge-angle errors, prism mounting errors, and bearing assembly errors were analyzed based on the exact formula and the first-order approximation. The comparisons indicated that our first-order approximation is accurate. In addition, the combined errors produced by the wedge-angle errors and mounting errors of the two prisms together were derived and in both cases were proved to be the sum of errors caused by the first and the second prism separately. Based on these results, the system error of our prototype was estimated. The derived formulas can be implemented to evaluate beam directing errors of any Risley-prism beam directing system with a similar configuration.

  15. Systemic autoimmune rheumatic disease prevalence in Canada: updated analyses across 7 provinces.

    PubMed

    Broten, Laurel; Aviña-Zubieta, J Antonio; Lacaille, Diane; Joseph, Lawrence; Hanly, John G; Lix, Lisa; O'Donnell, Siobhan; Barnabe, Cheryl; Fortin, Paul R; Hudson, Marie; Jean, Sonia; Peschken, Christine; Edworthy, Steven M; Svenson, Larry; Pineau, Christian A; Clarke, Ann E; Smith, Mark; Bélisle, Patrick; Badley, Elizabeth M; Bergeron, Louise; Bernatsky, Sasha

    2014-04-01

    To estimate systemic autoimmune rheumatic disease (SARD) prevalence across 7 Canadian provinces using population-based administrative data evaluating both regional variations and the effects of age and sex. Using provincial physician billing and hospitalization data, cases of SARD (systemic lupus erythematosus, scleroderma, primary Sjögren syndrome, polymyositis/dermatomyositis) were ascertained. Three case definitions (rheumatology billing, 2-code physician billing, and hospital diagnosis) were combined to derive a SARD prevalence estimate for each province, categorized by age, sex, and rural/urban status. A hierarchical Bayesian latent class regression model was fit to account for the imperfect sensitivity and specificity of each case definition. The model also provided sensitivity estimates of different case definition approaches. Prevalence estimates for overall SARD ranged between 2 and 5 cases per 1000 residents across provinces. Similar demographic trends were evident across provinces, with greater prevalence in women and in persons over 45 years old. SARD prevalence in women over 45 was close to 1%. Overall sensitivity was poor, but estimates for each of the 3 case definitions improved within older populations and were slightly higher for men compared to women. Our results are consistent with previous estimates and other North American findings, and provide results from coast to coast, as well as useful information about the degree of regional and demographic variations that can be seen within a single country. Our work demonstrates the usefulness of using multiple data sources, adjusting for the error in each, and providing estimates of the sensitivity of different case definition approaches.

  16. Improving tree age estimates derived from increment cores: a case study of red pine

    Treesearch

    Shawn Fraver; John B. Bradford; Brian J. Palik

    2011-01-01

    Accurate tree ages are critical to a range of forestry and ecological studies. However, ring counts from increment cores, if not corrected for the years between the root collar and coring height, can produce sizeable age errors. The magnitude of errors is influenced by both the height at which the core is extracted and the growth rate. We destructively sampled saplings...

  17. Estimation of chaotic coupled map lattices using symbolic vector dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Pei, Wenjiang; Cheung, Yiu-ming; Shen, Yi; He, Zhenya

    2010-01-01

    In [K. Wang, W.J. Pei, Z.Y. He, Y.M. Cheung, Phys. Lett. A 367 (2007) 316], an original symbolic vector dynamics based method was proposed for initial condition estimation in an additive white Gaussian noise environment. The precision of this estimation method is determined by the symbolic errors of the symbolic vector sequence obtained by symbolizing the received signal. This Letter further develops the symbolic vector dynamical estimation method. We correct symbolic errors with the backward vector and the estimated values by using different symbols, and thus the estimation precision can be improved. Both theoretical and experimental results show that this algorithm enables us to recover the initial condition of a coupled map lattice exactly in both noisy and noise-free cases. Therefore, we provide novel analytical techniques for understanding turbulences in coupled map lattices.

  18. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focused on increasing their accuracy and/or their continuity, both for improved stress prediction and especially for error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structures (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  19. Derivation of error sources for experimentally derived heliostat shapes

    NASA Astrophysics Data System (ADS)

    Cumpston, Jeff; Coventry, Joe

    2017-06-01

    Data gathered using photogrammetry that represents the surface and structure of a heliostat mirror panel is investigated in detail. A curve-fitting approach that allows the retrieval of four distinct mirror error components, while prioritizing the best fit possible to paraboloidal terms in the curve fitting equation, is presented. The angular errors associated with each of the four surfaces are calculated, and the relative magnitude for each of them is given. It is found that in this case, the mirror had a significant structural twist, and an estimate of the improvement to the mirror surface quality in the case of no twist was made.

  20. Stereo pair design for cameras with a fovea

    NASA Technical Reports Server (NTRS)

    Chettri, Samir R.; Keefe, Michael; Zimmerman, John R.

    1992-01-01

    We describe the methodology for the design and selection of a stereo pair when the cameras have a greater concentration of sensing elements in the center of the image plane (fovea). Binocular vision is important for the purpose of depth estimation, which in turn is important in a variety of applications such as gaging and autonomous vehicle guidance. We assume that one camera has square pixels of size dv and the other has pixels of size rdv, where r is between 0 and 1. We then derive results for the average error, the maximum error, and the error distribution in the depth determination of a point. These results can be shown to be a general form of the results for the case when the cameras have equal sized pixels. We discuss the behavior of the depth estimation error as we vary r and the tradeoffs between the extra processing time and increased accuracy. Knowing these results makes it possible to study the case when we have a pair of cameras with a fovea.
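
    The depth-error behavior described above can be sketched from the standard disparity relation Z = f·b/d, which gives a first-order depth error |ΔZ| ≈ Z²/(f·b)·|Δd|. The numbers below (focal length, baseline, pixel sizes) are assumed for illustration, and the half-pixel quantization model is a simplification of the paper's error analysis.

```python
import numpy as np

# Minimal sketch: worst-case depth error for a stereo pair in which one camera
# has finer pixels (a fovea) than the other. All optical parameters are assumed.
f = 8e-3            # focal length [m]
b = 0.12            # baseline [m]
dv = 10e-6          # pixel size of camera 1 [m]
r = 0.5             # camera 2 pixel size is r*dv, with 0 < r <= 1

Z = np.linspace(0.5, 5.0, 10)          # depths of interest [m]
# Worst-case disparity quantization error: half a pixel per camera.
delta_d = 0.5 * dv + 0.5 * r * dv
dZ = Z**2 / (f * b) * delta_d          # first-order depth error
for z, e in zip(Z, dZ):
    print(f"Z = {z:4.1f} m  worst-case depth error ~ {e*100:5.2f} cm")
```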

  1. Human leader and robot follower team: correcting leader's position from follower's heading

    NASA Astrophysics Data System (ADS)

    Borenstein, Johann; Thomas, David; Sights, Brandon; Ojeda, Lauro; Bankole, Peter; Fellars, Donald

    2010-04-01

    In multi-agent scenarios, there can be a disparity in the quality of position estimation amongst the various agents. Here, we consider the case of two agents - a leader and a follower - following the same path, in which the follower has a significantly better estimate of position and heading. This may be applicable to many situations, such as a robotic "mule" following a soldier. Another example is that of a convoy, in which only one vehicle (not necessarily the leading one) is instrumented with precision navigation instruments while all other vehicles use lower-precision instruments. We present an algorithm, called Follower-derived Heading Correction (FDHC), which substantially improves estimates of the leader's heading and, subsequently, position. Specifically, FDHC produces a very accurate estimate of heading errors caused by slow-changing errors (e.g., those caused by drift in gyros) of the leader's navigation system and corrects those errors.

  2. Discrete-time state estimation for stochastic polynomial systems over polynomial observations

    NASA Astrophysics Data System (ADS)

    Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.

    2018-07-01

    This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noises. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.

  3. Validation of self-reported start year of mobile phone use in a Swedish case-control study on radiofrequency fields and acoustic neuroma risk.

    PubMed

    Pettersson, David; Bottai, Matteo; Mathiesen, Tiit; Prochazka, Michaela; Feychting, Maria

    2015-01-01

    The possible effect of radiofrequency exposure from mobile phones on tumor risk has been studied since the late 1990s. Yet, empirical information about recall of the start of mobile phone use among adult cases and controls has never been reported. Limited knowledge about recall errors hampers interpretations of the epidemiological evidence. We used network operator data to validate the self-reported start year of mobile phone use in a case-control study of mobile phone use and acoustic neuroma risk. The answers of 96 (29%) cases and 111 (22%) controls could be included in the validation. The larger proportion of cases reflects a more complete and detailed reporting of subscription history. Misclassification was substantial, with large random errors, small systematic errors, and no significant differences between cases and controls. The average difference between self-reported and operator start year was -0.62 (95% confidence interval: -1.42, 0.17) years for cases and -0.71 (-1.50, 0.07) years for controls, standard deviations were 3.92 and 4.17 years, respectively. Agreement between self-reported and operator-recorded data categorized into short, intermediate and long-term use was moderate (kappa statistic: 0.42). Should an association exist, dilution of risk estimates and distortion of exposure-response patterns for time since first mobile phone use could result from the large random errors in self-reported start year. Retrospective collection of operator data likely leads to a selection of "good reporters", with a higher proportion of cases. Thus, differential recall cannot be entirely excluded.

  4. Utility of a novel error-stepping method to improve gradient-based parameter identification by increasing the smoothness of the local objective surface: a case-study of pulmonary mechanics.

    PubMed

    Docherty, Paul D; Schranz, Christoph; Chase, J Geoffrey; Chiew, Yeong Shiong; Möller, Knut

    2014-05-01

    Accurate model parameter identification relies on accurate forward model simulations to guide convergence. However, some forward simulation methodologies lack the precision required to properly define the local objective surface and can cause failed parameter identification. The role of objective surface smoothness in identification of a pulmonary mechanics model was assessed using forward simulation from a novel error-stepping method and a proprietary Runge-Kutta method. The objective surfaces were compared via the identified parameter discrepancy generated in a Monte Carlo simulation and the local smoothness of the objective surfaces they generate. The error-stepping method generated significantly smoother error surfaces in each of the cases tested (p<0.0001) and more accurate model parameter estimates than the Runge-Kutta method in three of the four cases tested (p<0.0001), despite a 75% reduction in computational cost. Of note, parameter discrepancy in most cases was limited to a particular oblique plane, indicating that a non-intuitive multi-parameter trade-off was occurring. The error-stepping method consistently improved or equalled the outcomes of the Runge-Kutta time-integration method for forward simulations of the pulmonary mechanics model. This study indicates that accurate parameter identification relies on accurate definition of the local objective function, and that parameter trade-off can occur on oblique planes, resulting in prematurely halted parameter convergence. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Uncharted territory: measuring costs of diagnostic errors outside the medical record.

    PubMed

    Schwartz, Alan; Weiner, Saul J; Weaver, Frances; Yudkowsky, Rachel; Sharma, Gunjan; Binns-Calvey, Amy; Preyss, Ben; Jordan, Neil

    2012-11-01

    In a past study using unannounced standardised patients (USPs), substantial rates of diagnostic and treatment errors were documented among internists. Because the authors know the correct disposition of these encounters and obtained the physicians' notes, they can identify necessary treatment that was not provided and unnecessary treatment. They can also discern which errors can be identified exclusively from a review of the medical records. The objective was to estimate the avoidable direct costs incurred by physicians making errors in our previous study. In the study, USPs visited 111 internal medicine attending physicians. They presented variants of four previously validated cases that jointly manipulate the presence or absence of contextual and biomedical factors that could lead to errors in management if overlooked. For example, in a patient with worsening asthma symptoms, a complicating biomedical factor was the presence of reflux disease and a complicating contextual factor was inability to afford the currently prescribed inhaler. Costs of missed or unnecessary services were computed using Medicare cost-based reimbursement data. The settings comprised fourteen practice locations, including two academic clinics, two community-based primary care networks with multiple sites, a core safety net provider, and three Veterans Administration government facilities. The main outcome measure was the contribution of errors to costs of care. Overall, errors in care resulted in predicted costs of approximately $174,000 across 399 visits, of which only $8745 was discernible from a review of the medical records alone (without knowledge of the correct diagnoses). The median cost of error per visit with an incorrect care plan differed by case and by presentation variant within case. Chart reviews alone underestimate costs of care because they typically reflect appropriate treatment decisions conditional on (potentially erroneous) diagnoses. Important information about patient context is often entirely missing from medical records. Experimental methods, including the use of USPs, reveal the substantial costs of these errors.

  6. A new multistage groundwater transport inverse method: presentation, evaluation, and implications

    USGS Publications Warehouse

    Anderman, Evan R.; Hill, Mary C.

    1999-01-01

    More computationally efficient methods of using concentration data are needed to estimate groundwater flow and transport parameters. This work introduces and evaluates a three‐stage nonlinear‐regression‐based iterative procedure in which trial advective‐front locations link decoupled flow and transport models. Method accuracy and efficiency are evaluated by comparing results to those obtained when flow‐ and transport‐model parameters are estimated simultaneously. The new method is evaluated as conclusively as possible by using a simple test case that includes distinct flow and transport parameters, but does not include any approximations that are problem dependent. The test case is analytical; the only flow parameter is a constant velocity, and the transport parameters are longitudinal and transverse dispersivity. Any difficulties detected using the new method in this ideal situation are likely to be exacerbated in practical problems. Monte‐Carlo analysis of observation error ensures that no specific error realization obscures the results. Results indicate that, while this, and probably other, multistage methods do not always produce optimal parameter estimates, the computational advantage may make them useful in some circumstances, perhaps as a precursor to using a simultaneous method.

  7. Graphical Evaluation of the Ridge-Type Robust Regression Estimators in Mixture Experiments

    PubMed Central

    Erkoc, Ali; Emiroglu, Esra

    2014-01-01

    In mixture experiments, estimation of the parameters is generally based on ordinary least squares (OLS). However, in the presence of multicollinearity and outliers, OLS can result in very poor estimates. In this case, effects due to the combined outlier-multicollinearity problem can be reduced to a certain extent by using alternative approaches. One of these approaches is to use biased-robust regression techniques for the estimation of parameters. In this paper, we evaluate various ridge-type robust estimators in cases where there are multicollinearity and outliers during the analysis of mixture experiments. Also, for selection of the biasing parameter, we use fraction of design space plots to evaluate the effect of the ridge-type robust estimators with respect to the scaled mean squared error of prediction. The suggested graphical approach is illustrated on the Hald cement data set. PMID:25202738
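
    As a hedged illustration of a ridge-type robust fit, the sketch below compares OLS with scikit-learn's HuberRegressor, whose alpha parameter adds an L2 (ridge-like) penalty; the collinear, outlier-contaminated data are simulated, not the Hald cement data used in the paper.

```python
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

# Minimal sketch: OLS versus a robust fit with an L2 penalty on collinear data
# containing gross outliers. All data and coefficient values are simulated.
rng = np.random.default_rng(5)
n = 60
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)      # nearly collinear with x1
X = np.column_stack([x1, x2])
y = 2.0 * x1 + 1.0 * x2 + rng.normal(scale=0.5, size=n)
y[:5] += 15.0                                  # a few gross outliers

ols = LinearRegression().fit(X, y)
robust_ridge = HuberRegressor(alpha=1.0, epsilon=1.35).fit(X, y)
print("OLS coefficients:        ", ols.coef_)
print("Huber+ridge coefficients:", robust_ridge.coef_)
```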

  9. Worst case estimation of homology design by convex analysis

    NASA Technical Reports Server (NTRS)

    Yoshikawa, N.; Elishakoff, Isaac; Nakagiri, S.

    1998-01-01

    The methodology of homology design is investigated for the optimum design of advanced structures, for which the achievement of delicate tasks with the aid of an active control system is demanded. The proposed formulation of homology design, based on finite element sensitivity analysis, necessarily requires the specification of external loadings. A formulation to evaluate the worst case for homology design caused by uncertain fluctuation of the loadings is presented by means of the convex model of uncertainty, in which uncertainty variables are assigned to the discretized nodal forces and are confined within a conceivable convex hull given as a hyperellipse. The worst case of the distortion from the objective homologous deformation is estimated by the Lagrange multiplier method, searching for the point that maximizes the error index on the boundary of the convex hull. The validity of the proposed method is demonstrated in a numerical example using an eleven-bar truss structure.
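
    A minimal numerical sketch of the worst-case step described above, under the common formulation in which the error index is a quadratic form J(d) = d^T A d and the load uncertainty d is confined to the hyperellipse d^T W d <= 1: the Lagrange-multiplier stationarity condition reduces the boundary maximization to a generalized eigenvalue problem. The matrices A and W below are invented placeholders, not data from the paper.

      import numpy as np
      from scipy.linalg import eigh

      # Error index J(d) = d^T A d (distortion from the homologous deformation),
      # with the load uncertainty d confined to the hyperellipse d^T W d <= 1.
      # The Lagrange condition A d = lambda W d turns the boundary maximization
      # into a generalized eigenvalue problem; the worst case is the largest pair.
      rng = np.random.default_rng(1)
      M = rng.standard_normal((4, 4))
      A = M @ M.T                          # symmetric positive semi-definite index
      W = np.diag([4.0, 2.0, 1.0, 0.5])    # shape of the convex hull (hyperellipse)

      lam, V = eigh(A, W)                  # solves A v = lam W v, eigenvalues ascending
      worst_value = lam[-1]                # maximum of J on the boundary
      worst_load = V[:, -1]
      worst_load /= np.sqrt(worst_load @ W @ worst_load)   # place it on the boundary
      print(worst_value, worst_load)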

  10. A comparative study of satellite estimation for solar insolation in Albania with ground measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitrushi, Driada, E-mail: driadamitrushi@yahoo.com; Berberi, Pëllumb, E-mail: pellumb.berberi@gmail.com; Muda, Valbona, E-mail: vmuda@hotmail.com

    The main objective of this study is to compare data provided by the NASA database with available ground data for regions covered by the national meteorological network. NASA estimates that its measurements of average daily solar radiation have a root-mean-square deviation (RMSD) of 35 W/m² (roughly 20% inaccuracy). Unfortunately, valid data from meteorological stations for the regions of interest are quite rare in Albania. In these cases, use of the NASA solar radiation database would be a satisfactory solution for different case studies. A statistical method allows one to determine the most probable margins between the two sources of data. Comparison of mean insolation data provided by NASA with ground data of mean insolation provided by meteorological stations shows that ground data for mean insolation are, in all cases, underestimated compared with the data provided by NASA. The converting factor is 1.149.
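
    A minimal sketch of the kind of comparison described (root-mean-square deviation and a single converting factor between a satellite-derived and a ground-measured insolation series), with invented monthly values; the study's own data and its exact statistical method are not reproduced here.

      import numpy as np

      # Hypothetical monthly mean insolation (kWh/m^2/day) from two sources.
      ground = np.array([3.1, 4.0, 4.8, 5.6, 6.2, 6.5, 6.3, 5.7, 4.9, 3.9, 3.2, 2.9])
      nasa   = np.array([3.6, 4.5, 5.5, 6.4, 7.1, 7.5, 7.3, 6.6, 5.6, 4.5, 3.7, 3.3])

      rmsd = np.sqrt(np.mean((nasa - ground) ** 2))
      # A regression through the origin gives a single converting factor,
      # nasa ~= factor * ground (the study reports a factor of 1.149).
      factor = np.sum(nasa * ground) / np.sum(ground ** 2)
      print(f"RMSD = {rmsd:.2f} kWh/m^2/day, converting factor = {factor:.3f}")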

  11. Variation in center of mass estimates for extant sauropsids and its importance for reconstructing inertial properties of extinct archosaurs.

    PubMed

    Allen, Vivian; Paxton, Heather; Hutchinson, John R

    2009-09-01

    Inertial properties of animal bodies and segments are critical input parameters for biomechanical analysis of standing and moving, and thus are important for paleobiological inquiries into the broader behaviors, ecology and evolution of extinct taxa such as dinosaurs. But how accurately can these be estimated? Computational modeling was used to estimate the inertial properties including mass, density, and center of mass (COM) for extant crocodiles (adult and juvenile Crocodylus johnstoni) and birds (Gallus gallus; junglefowl and broiler chickens), to identify the chief sources of variation and methodological errors, and their significance. High-resolution computed tomography scans were segmented into 3D objects and imported into inertial property estimation software that allowed for the examination of variable body segment densities (e.g., air spaces such as lungs, and deformable body outlines). Considerable biological variation of inertial properties was found within groups due to ontogenetic changes as well as evolutionary changes between chicken groups. COM positions shift in variable directions during ontogeny in different groups. Our method was repeatable and the resolution was sufficient for accurate estimations of mass and density in particular. However, we also found considerable potential methodological errors for COM related to (1) assumed body segment orientation, (2) what frames of reference are used to normalize COM for size-independent comparisons among animals, and (3) assumptions about tail shape. Methods and assumptions are suggested to minimize these errors in the future and thereby improve estimation of inertial properties for extant and extinct animals. In the best cases, 10%-15% errors in these estimates are unavoidable, but particularly for extinct taxa errors closer to 50% should be expected, and therefore, cautiously investigated. Nonetheless in the best cases these methods allow rigorous estimation of inertial properties. (c) 2009 Wiley-Liss, Inc.
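
    As a rough illustration of the kind of computation behind such estimates (not the authors' software or segment definitions), the sketch below computes mass and centre of mass for a toy voxelized segment with a variable-density region standing in for an air space; the grid size, densities and geometry are invented for the example.

      import numpy as np

      # Toy voxelized segment: a soft-tissue ellipsoid containing an air-filled
      # cavity (standing in for a lung), on a 2 mm isotropic grid.
      zz, yy, xx = np.indices((100, 80, 80))
      body   = ((xx - 40) / 35.0) ** 2 + ((yy - 40) / 35.0) ** 2 + ((zz - 50) / 45.0) ** 2 <= 1.0
      cavity = ((xx - 40) / 14.0) ** 2 + ((yy - 40) / 14.0) ** 2 + ((zz - 65) / 18.0) ** 2 <= 1.0

      density = np.zeros(zz.shape)
      density[body] = 1000.0               # soft tissue, kg/m^3
      density[body & cavity] = 1.2         # air, kg/m^3

      voxel_vol = (2e-3) ** 3              # 2 mm voxels, in m^3
      mass = density.sum() * voxel_vol     # kg
      coords = np.stack([zz, yy, xx], axis=-1) * 2e-3                   # voxel centres (m)
      com = (density[..., None] * coords).reshape(-1, 3).sum(axis=0) * voxel_vol / mass
      print(f"mass = {mass:.2f} kg, centre of mass (z, y, x) = {np.round(com, 3)} m")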

  12. Evaluation of a Teleform-based data collection system: a multi-center obesity research case study.

    PubMed

    Jenkins, Todd M; Wilson Boyce, Tawny; Akers, Rachel; Andringa, Jennifer; Liu, Yanhong; Miller, Rosemary; Powers, Carolyn; Ralph Buncher, C

    2014-06-01

    Utilizing electronic data capture (EDC) systems in data collection and management allows automated validation programs to preemptively identify and correct data errors. For our multi-center, prospective study we chose to use TeleForm, paper-based data capture software that uses recognition technology to create case report forms (CRFs) with functionality similar to EDC, including custom scripts to identify entry errors. We quantified the accuracy of the optimized system through a data audit of CRFs and the study database, examining selected critical variables for all subjects in the study, as well as an audit of all variables for 25 randomly selected subjects. Overall we found 6.7 errors per 10,000 fields, with similar estimates for critical (6.9/10,000) and non-critical (6.5/10,000) variables; these values fall below the acceptable quality threshold of 50 errors per 10,000 established by the Society for Clinical Data Management. However, error rates were found to vary widely by type of data field, with the highest rate observed for open text fields. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Estimation of three-dimensional radar tracking using modified extended kalman filter

    NASA Astrophysics Data System (ADS)

    Aditya, Prima; Apriliani, Erna; Khusnul Arif, Didik; Baihaqi, Komar

    2018-03-01

    The Kalman filter is an estimation method that combines measurement data with a mathematical model; the extended Kalman filter was developed to handle nonlinear systems. Three-dimensional radar tracking is one example of a nonlinear system. In this paper, a modified extended Kalman filter is developed directly from the three-dimensional radar tracking case, in which the target is measured by the radar in terms of range r, azimuth angle θ, and elevation angle ϕ. The artificial covariance and mean are adjusted directly on the three-dimensional radar system. Simulation results show that the proposed formulation is effective in the calculation of the nonlinear measurement compared with the standard extended Kalman filter, with errors between 0.77% and 1.15%.
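
    To make the nonlinear measurement model concrete, the sketch below implements a generic spherical radar measurement function (range, azimuth, elevation) with its Jacobian and a single extended-Kalman-filter update; it is not the authors' modified filter, and the state, covariances, and noise levels are illustrative assumptions.

      import numpy as np

      def h(x):
          """Map a Cartesian position (x, y, z) to radar measurements
          (range r, azimuth theta, elevation phi)."""
          px, py, pz = x
          r = np.sqrt(px ** 2 + py ** 2 + pz ** 2)
          return np.array([r, np.arctan2(py, px), np.arcsin(pz / r)])

      def H_jacobian(x):
          """Jacobian of h with respect to the position components."""
          px, py, pz = x
          r2 = px ** 2 + py ** 2 + pz ** 2
          r = np.sqrt(r2)
          rho2 = px ** 2 + py ** 2
          rho = np.sqrt(rho2)
          return np.array([
              [px / r, py / r, pz / r],
              [-py / rho2, px / rho2, 0.0],
              [-px * pz / (r2 * rho), -py * pz / (r2 * rho), rho / r2],
          ])

      # One EKF measurement update with the spherical radar model.
      x = np.array([1000.0, 2000.0, 500.0])         # prior position estimate (m)
      P = np.eye(3) * 100.0                         # prior covariance
      R = np.diag([5.0 ** 2, 1e-4, 1e-4])           # noise in (r, theta, phi)
      z = h(np.array([1010.0, 1990.0, 505.0]))      # simulated measurement

      Hj = H_jacobian(x)
      K = P @ Hj.T @ np.linalg.inv(Hj @ P @ Hj.T + R)
      x_post = x + K @ (z - h(x))
      P_post = (np.eye(3) - K @ Hj) @ P
      print(x_post)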

  14. Discrepancy-based error estimates for Quasi-Monte Carlo III. Error distributions and central limits

    NASA Astrophysics Data System (ADS)

    Hoogland, Jiri; Kleiss, Ronald

    1997-04-01

    In Quasi-Monte Carlo integration, the integration error is believed to be generally smaller than in classical Monte Carlo with the same number of integration points. Using an appropriate definition of an ensemble of quasi-random point sets, we derive various results on the probability distribution of the integration error, which can be compared to the standard Central Limit Theorem for normal stochastic sampling. In many cases, a Gaussian error distribution is obtained.
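
    A small numerical illustration of the ensemble idea: an ensemble of quasi-random point sets is formed by randomly shifting a one-dimensional van der Corput sequence, and the spread of its integration errors is compared with plain Monte Carlo on a smooth test integrand. The integrand, sample size and shift construction are choices made for this sketch, not the paper's setting.

      import numpy as np

      def van_der_corput(n, base=2):
          """First n points of the base-2 van der Corput low-discrepancy sequence."""
          pts = np.empty(n)
          for i in range(n):
              q, denom, k = 0.0, 1.0, i + 1
              while k > 0:
                  denom *= base
                  k, rem = divmod(k, base)
                  q += rem / denom
              pts[i] = q
          return pts

      f = lambda x: x ** 2                  # test integrand on [0, 1]; exact integral 1/3
      N, trials = 256, 2000
      rng = np.random.default_rng(0)
      qmc_base = van_der_corput(N)

      mc_err, qmc_err = [], []
      for _ in range(trials):
          x_mc = rng.random(N)
          x_qmc = (qmc_base + rng.random()) % 1.0    # random shift defines the QMC ensemble
          mc_err.append(f(x_mc).mean() - 1 / 3)
          qmc_err.append(f(x_qmc).mean() - 1 / 3)

      print("MC  error std:", np.std(mc_err))
      print("QMC error std:", np.std(qmc_err))       # typically far smaller than MC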

  15. The statistical validity of nursing home survey findings.

    PubMed

    Woolley, Douglas C

    2011-11-01

    The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Statistical analysis of the case under study and of alternative hypothetical cases. A skilled nursing home affiliated with a local medical school. The nursing home administrators and the medical director. Observational study. The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations, and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the State Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches in survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.

  16. Analytical estimation of laser phase noise induced BER floor in coherent receiver with digital signal processing.

    PubMed

    Vanin, Evgeny; Jacobsen, Gunnar

    2010-03-01

    The Bit-Error-Ratio (BER) floor caused by the laser phase noise in the optical fiber communication system with differential quadrature phase shift keying (DQPSK) and coherent detection followed by digital signal processing (DSP) is analytically evaluated. An in-phase and quadrature (I&Q) receiver with a carrier phase recovery using DSP is considered. The carrier phase recovery is based on a phase estimation of a finite sum (block) of the signal samples raised to the power of four and the phase unwrapping at transitions between blocks. It is demonstrated that errors generated at block transitions cause the dominating contribution to the system BER floor when the impact of the additive noise is negligibly small in comparison with the effect of the laser phase noise. Even the BER floor in the case when the phase unwrapping is omitted is analytically derived and applied to emphasize the crucial importance of this signal processing operation. The analytical results are verified by full Monte Carlo simulations. The BER for another type of DQPSK receiver operation, which is based on differential phase detection, is also obtained in the analytical form using the principle of conditional probability. The principle of conditional probability is justified in the case of differential phase detection due to statistical independency of the laser phase noise induced signal phase error and the additive noise contributions. Based on the achieved analytical results the laser linewidth tolerance is calculated for different system cases.

  17. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

    This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n), where p can increase exponentially with n. Finally, we show the n^(1/2-d)-consistency of Lasso, along with the oracle property of the adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above-mentioned model assumptions. We study these estimators in both the fixed dimension and high dimensional sparse setups; in the latter, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish the model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
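
    A rough simulation sketch of the second-chapter setting: regression errors generated as a truncated long-memory moving average via the fractional-differencing recursion psi_0 = 1, psi_k = psi_{k-1}(k-1+d)/k, with a scikit-learn Lasso fit used to check support recovery. The dimensions, memory parameter d, sparsity pattern and penalty level are all choices made for this illustration, not the dissertation's experiments.

      import numpy as np
      from sklearn.linear_model import Lasso

      def long_memory_noise(n, d=0.3, burn=500, seed=0):
          """Truncated long-memory moving average e_t = sum_k psi_k z_{t-k},
          with fractional-differencing weights psi_0 = 1, psi_k = psi_{k-1}(k-1+d)/k."""
          rng = np.random.default_rng(seed)
          z = rng.standard_normal(n + burn)
          psi = np.empty(burn)
          psi[0] = 1.0
          for k in range(1, burn):
              psi[k] = psi[k - 1] * (k - 1 + d) / k
          return np.array([psi @ z[t + burn - 1::-1][:burn] for t in range(n)])

      n, p, d = 200, 50, 0.3
      rng = np.random.default_rng(1)
      X = rng.standard_normal((n, p))
      beta = np.zeros(p)
      beta[:3] = [3.0, -2.0, 1.5]                    # sparse truth
      y = X @ beta + long_memory_noise(n, d)

      fit = Lasso(alpha=0.2).fit(X, y)
      print("recovered support:", np.flatnonzero(np.abs(fit.coef_) > 1e-6))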

  18. The impact of precise robotic lesion length measurement on stent length selection: ramifications for stent savings.

    PubMed

    Campbell, Paul T; Kruse, Kevin R; Kroll, Christopher R; Patterson, Janet Y; Esposito, Michele J

    2015-09-01

    Coronary stent deployment outcomes can be negatively impacted by inaccurate lesion measurement and inappropriate stent length selection (SLS). We compared visual estimate of these parameters to those provided by the CorPath 200® Robotic PCI System. Sixty consecutive patients who underwent coronary stent placement utilizing the CorPath System were evaluated. The treating physician assessed orthogonal images and provided visual estimates of lesion length and SLS. The robotic system was then used for the same measures. SLS was considered to be accurate when visual estimate and robotic measures were in agreement. Visual estimate SLSs were considered to be "short" or "long" if they were below or above the robotic-selected stents, respectively. Only 35% (21/60) of visually estimated lesions resulted in accurate SLS, whereas 33% (20/60) and 32% (19/60) of the visually estimated SLSs were long and short, respectively. In 5 cases (8.3%), 1 less stent was placed based on the robotic lesion measurement being shorter than the visual estimate. Visual estimate assessment of lesion length and SLS is highly variable with 65% of the cases being inaccurately measured when compared to objective measures obtained from the robotic system. The 32% of the cases where lesions were visually estimated to be short represents cases that often require the use of extra stents after the full lesion is not covered by 1 stent [longitudinal geographic miss (LGM)]. Further, these data showed that the use of the robotic system prevented the use of extra stents in 8.3% of the cases. Measurement of lesions with robotic PCI may reduce measurement errors, need for extra stents, and LGM. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. A Monte-Carlo Bayesian framework for urban rainfall error modelling

    NASA Astrophysics Data System (ADS)

    Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian

    2016-04-01

    Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting rainfall input requirements for urban hydrology (including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records), rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial-temporal resolution). Moreover, rainfall error models have been mostly developed for and tested at large scales. Studies at urban scales are mostly limited to analyses of the propagation of errors in rain-gauge-only records through urban drainage models and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models (originally developed for large scales) have been tested at urban scales [2], and they have been shown to fail to capture small-scale storm dynamics well, including storm peaks, which are of utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data sources (in this case radar and rain gauge estimates typically available at present), while at the same time enabling dynamic combination of these data sources (thus not only quantifying uncertainty, but also reducing it). This model generates an ensemble of merged rainfall estimates, which can then be used as input to urban drainage models in order to examine how uncertainties in rainfall estimates propagate to urban runoff estimates. The proposed model is tested using as a case study a detailed rainfall and flow dataset, and a carefully verified urban drainage model, of a small (~9 km2) pilot catchment in North-East London. The model has been shown to characterise well the residual errors in rainfall data at urban scales (those which remain after the merging), leading to improved runoff estimates. In fact, the majority of measured flow peaks are bounded within the uncertainty area produced by the runoff ensembles generated with the ensemble rainfall inputs. REFERENCES: [1] Ciach, G. J. & Krajewski, W. F. (1999). On the estimation of radar rainfall error variance. Advances in Water Resources, 22 (6), 585-595. [2] Rico-Ramirez, M. A., Liguori, S. & Schellart, A. N. A. (2015). Quantifying radar-rainfall uncertainties in urban drainage flow modelling. Journal of Hydrology, 528, 17-28.

  20. Influence of model errors in optimal sensor placement

    NASA Astrophysics Data System (ADS)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near optimal sensor placements for structural health monitoring (SHM) and modal testing. The near optimal set of measurement locations is obtained by the Information Entropy theory; the results of the placement process depend considerably on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are first assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case study, the effect of model uncertainties on the results is described, and the reliability and robustness of the proposed correlation function in the case of model errors are tested with reference to 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure on a real 5-span steel footbridge are described. The proposed method also allows higher modes to be estimated better when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor positions when uncertainties occur.

  1. PHOTOMETRIC ORBITS OF EXTRASOLAR PLANETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Robert A.

    We define and analyze the photometric orbit (PhO) of an extrasolar planet observed in reflected light. In our definition, the PhO is a Keplerian entity with six parameters: semimajor axis, eccentricity, mean anomaly at some particular time, argument of periastron, inclination angle, and effective radius, which is the square root of the geometric albedo times the planetary radius. Preliminarily, we assume a Lambertian phase function. We study in detail the case of short-period giant planets (SPGPs) and observational parameters relevant to the Kepler mission: 20 ppm photometry with normal errors, 6.5 hr cadence, and three-year duration. We define a relevant 'planetary population of interest' in terms of probability distributions of the PhO parameters. We perform Monte Carlo experiments to estimate the ability to detect planets and to recover PhO parameters from light curves. We calibrate the completeness of a periodogram search technique, and find structure caused by degeneracy. We recover full orbital solutions from synthetic Kepler data sets and estimate the median errors in recovered PhO parameters. We treat in depth a case of a Jupiter body-double. For the stated assumptions, we find that Kepler should obtain orbital solutions for many of the 100-760 SPGPs that Jenkins and Doyle estimate Kepler will discover. Because most or all of these discoveries will be followed up by ground-based radial velocity observations, the estimates of inclination angle from the PhO may enable the calculation of true companion masses: Kepler photometry may break the 'm sin i' degeneracy. PhO observations may be difficult. There is uncertainty about how low the albedos of SPGPs actually are, about their phase functions, and about a possible noise floor due to systematic errors from instrumental and stellar sources. Nevertheless, simple detection of SPGPs in reflected light should be robust in the regime of Kepler photometry, and estimates of all six orbital parameters may be feasible in at least a subset of cases.
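
    A brief sketch of the reflected-light signal implied by the Lambertian assumption: the planet-to-star flux ratio is (R_eff/r)^2 * Phi_L(alpha), with Phi_L(alpha) = [sin(alpha) + (pi - alpha) cos(alpha)] / pi. The circular orbit, albedo, radius, period and inclination below are illustrative choices, not the paper's population model.

      import numpy as np

      def lambert_phase(alpha):
          """Lambertian phase function; alpha is the star-planet-observer angle (rad)."""
          return (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi

      # Illustrative short-period giant planet on a circular orbit.
      albedo, radius_m = 0.3, 7.1e7               # geometric albedo, planetary radius (m)
      r_eff = np.sqrt(albedo) * radius_m          # effective radius
      a_m = 0.05 * 1.496e11                       # semimajor axis: 0.05 AU in metres
      inc = np.radians(85.0)                      # orbital inclination
      period = 3.0                                # days

      t = np.linspace(0.0, 3 * period, 1000)      # three orbits
      cos_alpha = np.sin(inc) * np.cos(2 * np.pi * t / period)   # circular-orbit phase angle
      flux_ratio = (r_eff / a_m) ** 2 * lambert_phase(np.arccos(cos_alpha))
      print(f"peak planet/star flux ratio: {flux_ratio.max():.2e}")   # of order tens of ppm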

  2. Robust ridge regression estimators for nonlinear models with applications to high throughput screening assay data.

    PubMed

    Lim, Changwon

    2015-03-30

    Nonlinear regression is often used to evaluate the toxicity of a chemical or a drug by fitting data from a dose-response study. Toxicologists and pharmacologists may draw a conclusion about whether a chemical is toxic by testing the significance of the estimated parameters. However, sometimes the null hypothesis cannot be rejected even though the fit is quite good. One possible reason for such cases is that the estimated standard errors of the parameter estimates are extremely large. In this paper, we propose robust ridge regression estimation procedures for nonlinear models to solve this problem. The asymptotic properties of the proposed estimators are investigated; in particular, their mean squared errors are derived. The performances of the proposed estimators are compared with several standard estimators using simulation studies. The proposed methodology is also illustrated using high throughput screening assay data obtained from the National Toxicology Program. Copyright © 2014 John Wiley & Sons, Ltd.

  3. Sensitivity of planetary cruise navigation to earth orientation calibration errors

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Folkner, W. M.

    1995-01-01

    A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.

  4. [Responsibility due to medication errors in France: a study based on SHAM insurance data].

    PubMed

    Theissen, A; Orban, J-C; Fuz, F; Guerin, J-P; Flavin, P; Albertini, S; Maricic, S; Saquet, D; Niccolai, P

    2015-03-01

    The safe use of medications in hospital is a major public health concern. The drug supply chain is a complex process and a potential source of errors and harm to the patient. SHAM is the largest French provider of medical liability insurance and a relevant source of data on healthcare complications. The main objective of the study was to analyze the type and cause of medication errors declared to SHAM that led to a conviction by a court. We performed a retrospective study of insurance claims provided by SHAM involving a medication error and leading to a conviction over a 6-year period (between 2005 and 2010). Thirty-one cases were analysed, 21 for scheduled activity and 10 for emergency activity. Consequences of the claims were mostly serious (12 deaths, 14 serious complications, 5 simple complications). The types of medication errors were drug monitoring errors (11 cases), administration errors (5 cases), overdoses (6 cases), allergies (4 cases), contraindications (3 cases) and omissions (2 cases). The intravenous route of administration was involved in 19 of 31 cases (61%). The causes identified by the court expert were errors related to service organization (11), to medical practice (11) or to nursing practice (13). Only one claim was due to the hospital pharmacy. Claims related to the drug supply chain are infrequent but potentially serious. These data should help strengthen the quality approach in risk management. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  5. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  6. Gains in accuracy from averaging ratings of abnormality

    NASA Astrophysics Data System (ADS)

    Swensson, Richard G.; King, Jill L.; Gur, David; Good, Walter F.

    1999-05-01

    Six radiologists used continuous scales to rate 529 chest-film cases for likelihood of five separate types of abnormalities (interstitial disease, nodules, pneumothorax, alveolar infiltrates and rib fractures) in each of six replicated readings, yielding 36 separate ratings of each case for the five abnormalities. Analyses for each type of abnormality estimated the relative gains in accuracy (area below the ROC curve) obtained by averaging the case-ratings across: (1) six independent replications by each reader (30% gain), (2) six different readers within each replication (39% gain) or (3) all 36 readings (58% gain). Although accuracy differed among both readers and abnormalities, ROC curves for the median ratings showed similar relative gains in accuracy. From a latent-variable model for these gains, we estimate that about 51% of a reader's total decision variance consisted of random (within-reader) errors that were uncorrelated between replications, another 14% came from that reader's consistent (but idiosyncratic) responses to different cases, and only about 35% could be attributed to systematic variations among the sampled cases that were consistent across different readers.
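
    A back-of-the-envelope illustration of the latent-variable decomposition quoted above: treating the case component as signal and the within-reader and idiosyncratic reader components as noise, the sketch below computes how the noise-to-signal variance ratio shrinks under each averaging scheme. The mapping from this ratio to the reported ROC-area gains is not reproduced; the only inputs are the variance fractions stated in the abstract.

      # Variance components quoted above (fractions of one reader's decision variance).
      v_within = 0.51   # random within-reader error; independent across replications
      v_reader = 0.14   # consistent but idiosyncratic reader-by-case component
      v_case   = 0.35   # systematic case variation shared by all readers ("signal")

      def noise_over_signal(n_readers, n_reps):
          """Error variance relative to the case signal after averaging the ratings
          of n_readers readers over n_reps replications each."""
          noise = v_within / (n_readers * n_reps) + v_reader / n_readers
          return noise / v_case

      schemes = {"single reading": (1, 1),
                 "6 replications, 1 reader": (1, 6),
                 "6 readers, 1 replication": (6, 1),
                 "all 36 readings": (6, 6)}
      for label, (n_readers, n_reps) in schemes.items():
          print(f"{label:26s} noise/signal = {noise_over_signal(n_readers, n_reps):.2f}")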

  7. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds.

    PubMed

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error ("risk") in the worst case. We construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. This makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.

  8. New Methods for Assessing and Reducing Uncertainty in Microgravity Studies

    NASA Astrophysics Data System (ADS)

    Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.

    2017-12-01

    Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present here a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, error in drift estimations and timing errors. We find that some error sources that are typically ignored can have a significant impact on the total error budget, and it is therefore likely that some gravity signals may have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches for drift estimation and free-air correction depending on the survey set-up. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, which include Kilauea in Hawaii and Askja in Iceland.

  9. Estimation of true incidence of polio: overcoming misclassification errors due to stool culture insensitivity.

    PubMed

    Srinivas, V; Puliyel, Jacob M

    2007-08-01

    The diagnosis of polio depends on culturing the virus from stool samples of children with acute flaccid paralysis (AFP). Using data obtained under the "Right to Information Act" on instances where only one of the two samples was positive for polio, it was possible to estimate the sensitivity of the system to detect cases of polio. The calculations suggest that there were 1625 (95% CI 1528 to 1725) cases of polio in India in 2006 rather than the 674 reported widely!
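
    A toy version of the underlying arithmetic, under the simplifying assumption that each of the two stool samples independently detects a true case with the same per-sample sensitivity: the split between one-positive and two-positive results then identifies that sensitivity and lets the reported count be corrected upward. The counts below are invented (only their total matches the 674 reported cases), so the output merely illustrates how a corrected estimate of the published order can arise; it is not the paper's calculation.

      # Hypothetical split of the 674 reported cases by stool-sample result
      # (invented numbers; only the total matches the reported figure).
      n_both = 90       # both samples positive
      n_one  = 584      # exactly one sample positive
      reported = n_both + n_one

      # With independent samples of per-sample sensitivity s:
      #   P(both +) = s^2,  P(exactly one +) = 2 s (1 - s)
      # so n_one / n_both ~= 2 (1 - s) / s  =>  s = 2 / (n_one / n_both + 2)
      s = 2.0 / (n_one / n_both + 2.0)
      p_detect = 1.0 - (1.0 - s) ** 2       # chance a true case is detected at all
      true_cases = reported / p_detect
      print(f"per-sample sensitivity ~ {s:.2f}, corrected case count ~ {true_cases:.0f}")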

  10. Colored noise effects on batch attitude accuracy estimates

    NASA Technical Reports Server (NTRS)

    Bilanow, Stephen

    1991-01-01

    The effects of colored noise on the accuracy of batch least squares parameter estimates with applications to attitude determination cases are investigated. The standard approaches used for estimating the accuracy of a computed attitude commonly assume uncorrelated (white) measurement noise, while in actual flight experience measurement noise often contains significant time correlations and thus is colored. For example, horizon scanner measurements from low Earth orbit were observed to show correlations over many minutes in response to large scale atmospheric phenomena. A general approach to the analysis of the effects of colored noise is investigated, and interpretation of the resulting equations provides insight into the effects of any particular noise color and the worst case noise coloring for any particular parameter estimate. It is shown that for certain cases, the effects of relatively short term correlations can be accommodated by a simple correction factor. The errors in the predicted accuracy assuming white noise and the reduced accuracy due to the suboptimal nature of estimators that do not take into account the noise color characteristics are discussed. The appearance of a variety of sample noise color characteristics is demonstrated through simulation, and their effects are discussed for sample estimation cases. Based on the analysis, options for dealing with the effects of colored noise are discussed.
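
    A minimal numerical illustration of the central point: for a batch least-squares fit with design matrix H, the covariance actually achieved under colored noise with covariance R is (H^T H)^{-1} H^T R H (H^T H)^{-1}, which can differ substantially from the white-noise prediction sigma^2 (H^T H)^{-1}. The bias-plus-drift model and exponential noise correlation below are assumptions for the example, not the paper's attitude-determination setup.

      import numpy as np

      n = 200
      t = np.linspace(0.0, 1.0, n)
      H = np.column_stack([np.ones(n), t])         # estimate a bias and a drift rate

      sigma, tau = 1.0, 0.2                        # noise level and correlation time
      R_color = sigma ** 2 * np.exp(-np.abs(t[:, None] - t[None, :]) / tau)

      HtH_inv = np.linalg.inv(H.T @ H)
      cov_white_pred = sigma ** 2 * HtH_inv                 # what a white-noise analysis reports
      cov_actual = HtH_inv @ H.T @ R_color @ H @ HtH_inv    # true covariance of the same estimator

      print("predicted std (white):", np.sqrt(np.diag(cov_white_pred)))
      print("actual std (colored):", np.sqrt(np.diag(cov_actual)))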

  11. Incorporation of prior information on parameters into nonlinear regression groundwater flow models: 2. Applications

    USGS Publications Warehouse

    Cooley, Richard L.

    1983-01-01

    This paper investigates factors influencing the degree of improvement in estimates of parameters of a nonlinear regression groundwater flow model by incorporating prior information of unknown reliability. Consideration of expected behavior of the regression solutions and results of a hypothetical modeling problem lead to several general conclusions. First, if the parameters are properly scaled, linearized expressions for the mean square error (MSE) in parameter estimates of a nonlinear model will often behave very nearly as if the model were linear. Second, by using prior information, the MSE in properly scaled parameters can be reduced greatly over the MSE of ordinary least squares estimates of parameters. Third, plots of estimated MSE and the estimated standard deviation of MSE versus an auxiliary parameter (the ridge parameter) specifying the degree of influence of the prior information on regression results can help determine the potential for improvement of parameter estimates. Fourth, proposed criteria can be used to make appropriate choices for the ridge parameter and another parameter expressing degree of overall bias in the prior information. Results of a case study of Truckee Meadows, Reno-Sparks area, Washoe County, Nevada, conform closely to the results of the hypothetical problem. In the Truckee Meadows case, incorporation of prior information did not greatly change the parameter estimates from those obtained by ordinary least squares. However, the analysis showed that both sets of estimates are more reliable than suggested by the standard errors from ordinary least squares.

  12. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size to those of a stochastic physics ensemble.

  13. Developing Performance Estimates for High Precision Astrometry with TMT

    NASA Astrophysics Data System (ADS)

    Schoeck, Matthias; Do, Tuan; Ellerbroek, Brent; Herriot, Glen; Meyer, Leo; Suzuki, Ryuji; Wang, Lianqi; Yelda, Sylvana

    2013-12-01

    Adaptive optics on Extremely Large Telescopes will open up many new science cases or expand existing science into regimes unattainable with the current generation of telescopes. One example of this is high-precision astrometry, which has requirements in the range from 10 to 50 micro-arc-seconds for some instruments and science cases. Achieving these requirements imposes stringent constraints on the design of the entire observatory, but also on the calibration procedures, observing sequences and the data analysis techniques. This paper summarizes our efforts to develop a top down astrometry error budget for TMT. It is predominantly developed for the first-light AO system, NFIRAOS, and the IRIS instrument, but many terms are applicable to other configurations as well. Astrometry error sources are divided into 5 categories: Reference source and catalog errors, atmospheric refraction correction errors, other residual atmospheric effects, opto-mechanical errors and focal plane measurement errors. Results are developed in parametric form whenever possible. However, almost every error term in the error budget depends on the details of the astrometry observations, such as whether absolute or differential astrometry is the goal, whether one observes a sparse or crowded field, what the time scales of interest are, etc. Thus, it is not possible to develop a single error budget that applies to all science cases and separate budgets are developed and detailed for key astrometric observations. Our error budget is consistent with the requirements for differential astrometry of tens of micro-arc-seconds for certain science cases. While no show stoppers have been found, the work has resulted in several modifications to the NFIRAOS optical surface specifications and reference source design that will help improve the achievable astrometry precision even further.

  14. The input ambiguity hypothesis and case blindness: an account of cross-linguistic and intra-linguistic differences in case errors.

    PubMed

    Pelham, Sabra D

    2011-03-01

    English-acquiring children frequently make pronoun case errors, while German-acquiring children rarely do. Nonetheless, German-acquiring children frequently make article case errors. It is proposed that when child-directed speech contains a high percentage of case-ambiguous forms, case errors are common in child language; when percentages are low, case errors are rare. Input to English and German children was analyzed for percentage of case-ambiguous personal pronouns on adult tiers of corpora from 24 English-acquiring and 24 German-acquiring children. Also analyzed for German was the percentage of case-ambiguous articles. Case-ambiguous pronouns averaged 63·3% in English, compared with 7·6% in German. The percentage of case-ambiguous articles in German was 77·0%. These percentages align with the children's errors reported in the literature. It appears children may be sensitive to levels of ambiguity such that low ambiguity may aid error-free acquisition, while high ambiguity may blind children to case distinctions, resulting in errors.

  15. Re-Assessing Poverty Dynamics and State Protections in Britain and the US: The Role of Measurement Error

    ERIC Educational Resources Information Center

    Worts, Diana; Sacker, Amanda; McDonough, Peggy

    2010-01-01

    This paper addresses a key methodological challenge in the modeling of individual poverty dynamics--the influence of measurement error. Taking the US and Britain as case studies and building on recent research that uses latent Markov models to reduce bias, we examine how measurement error can affect a range of important poverty estimates. Our data…

  16. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  17. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    NASA Astrophysics Data System (ADS)

    He, Bin; Frey, Eric C.

    2010-06-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is the accuracy of, and variability in, the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were linear in the shift for both the QSPECT and QPlanar methods. QPlanar was less sensitive to object definition perturbations than QSPECT, especially for dilation and erosion cases. Up to 1 voxel of misregistration or misdefinition resulted in up to 8% error in organ activity estimates, with the largest errors for small or low-uptake organs. Both types of VOI definition errors produced larger errors in activity estimates for a small, low-uptake organ (i.e. -7.5% to 5.3% for the left kidney) than for a large, high-uptake organ (i.e. -2.9% to 2.1% for the liver). We observed that misregistration generally had larger effects than misdefinition, with errors ranging from -7.2% to 8.4%. The different imaging methods evaluated responded differently to the errors from misregistration and misdefinition. We found that QSPECT was more sensitive to misdefinition errors, but less sensitive to misregistration errors, as compared to the QPlanar method. Thus, sensitivity to VOI definition errors should be an important criterion in evaluating quantitative imaging methods.
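
    A toy numpy/scipy version of the misregistration experiment described above: a synthetic activity map, a VOI defined on the unshifted anatomy, and the relative error in the VOI activity total as the image is shifted by up to one voxel. The phantom, uptake values and shift range are invented; the point is only that the error grows roughly linearly with the shift, as reported.

      import numpy as np
      from scipy.ndimage import shift

      # Toy 3D activity map: a bright ellipsoidal "organ" on a low background.
      zz, yy, xx = np.indices((64, 64, 64))
      organ = ((xx - 32) / 10.0) ** 2 + ((yy - 32) / 14.0) ** 2 + ((zz - 32) / 8.0) ** 2 <= 1.0
      activity = np.where(organ, 10.0, 0.5)

      voi = organ.copy()                           # VOI defined on the unshifted anatomy
      true_total = activity[voi].sum()

      for dx in [0.0, 0.25, 0.5, 0.75, 1.0]:       # misregistration along x, in voxels
          shifted = shift(activity, (0.0, 0.0, dx), order=1)    # linear interpolation
          err = 100.0 * (shifted[voi].sum() - true_total) / true_total
          print(f"shift {dx:4.2f} voxel: relative error {err:+.1f}%")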

  18. Genotype-Based Association Mapping of Complex Diseases: Gene-Environment Interactions with Multiple Genetic Markers and Measurement Error in Environmental Exposures

    PubMed Central

    Lobach, Iryna; Fan, Ruzong; Carroll, Raymond J.

    2011-01-01

    With the advent of dense single nucleotide polymorphism genotyping, population-based association studies have become the major tools for identifying human disease genes and for fine gene mapping of complex traits. We develop a genotype-based approach for association analysis of case-control studies of gene-environment interactions in the case when environmental factors are measured with error and genotype data are available on multiple genetic markers. To directly use the observed genotype data, we propose two genotype-based models: genotype effect and additive effect models. Our approach offers several advantages. First, the proposed risk functions can directly incorporate the observed genotype data while modeling the linkage disequilibrium information in the regression coefficients, thus eliminating the need to infer haplotype phase. Compared with the haplotype-based approach, an estimating procedure based on the proposed methods can be much simpler and significantly faster. In addition, there is no potential risk due to haplotype phase estimation. Further, by fitting the proposed models, it is possible to analyze the risk alleles/variants of complex diseases, including their dominant or additive effects. To model measurement error, we adopt the pseudo-likelihood method by Lobach et al. [2008]. Performance of the proposed method is examined using simulation experiments. An application of our method is illustrated using a population-based case-control study of association between calcium intake and the risk of colorectal adenoma development. PMID:21031455

  19. Safety and Performance Analysis of the Non-Radar Oceanic/Remote Airspace In-Trail Procedure

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Munoz, Cesar A.

    2007-01-01

    This document presents a safety and performance analysis of the nominal case for the In-Trail Procedure (ITP) in a non-radar oceanic/remote airspace. The analysis estimates the risk of collision between the aircraft performing the ITP and a reference aircraft. The risk of collision is only estimated for the ITP maneuver and it is based on nominal operating conditions. The analysis does not consider human error, communication error conditions, or the normal risk of flight present in current operations. The hazards associated with human error and communication errors are evaluated in an Operational Hazards Analysis presented elsewhere.

  20. Position estimation and driving of an autonomous vehicle by monocular vision

    NASA Astrophysics Data System (ADS)

    Hanan, Jay C.; Kayathi, Pavan; Hughlett, Casey L.

    2007-04-01

    Automatic adaptive tracking in real-time for target recognition provided autonomous control of a scale model electric truck. The two-wheel drive truck was modified as an autonomous rover test-bed for vision based guidance and navigation. Methods were implemented to monitor tracking error and ensure a safe, accurate arrival at the intended science target. Some methods are situation independent relying only on the confidence error of the target recognition algorithm. Other methods take advantage of the scenario of combined motion and tracking to filter out anomalies. In either case, only a single calibrated camera was needed for position estimation. Results from real-time autonomous driving tests on the JPL simulated Mars yard are presented. Recognition error was often situation dependent. For the rover case, the background was in motion and may be characterized to provide visual cues on rover travel such as rate, pitch, roll, and distance to objects of interest or hazards. Objects in the scene may be used as landmarks, or waypoints, for such estimations. As objects are approached, their scale increases and their orientation may change. In addition, particularly on rough terrain, these orientation and scale changes may be unpredictable. Feature extraction combined with the neural network algorithm was successful in providing visual odometry in the simulated Mars environment.

  1. Data-Driven Model Uncertainty Estimation in Hydrologic Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S.; Moradkhani, H.; Marshall, L.; Sharma, A.; Geenens, G.

    2018-02-01

    The increasing availability of earth observations necessitates mathematical methods to optimally combine such data with hydrologic models. Several algorithms exist for such purposes, under the umbrella of data assimilation (DA). However, DA methods are often applied in a suboptimal fashion for complex real-world problems, due largely to several practical implementation issues. One such issue is error characterization, which is known to be critical for a successful assimilation. Mischaracterized errors lead to suboptimal forecasts, and in the worst case, to degraded estimates even compared to the no assimilation case. Model uncertainty characterization has received little attention relative to other aspects of DA science. Traditional methods rely on subjective, ad hoc tuning factors or parametric distribution assumptions that may not always be applicable. We propose a novel data-driven approach (named SDMU) to model uncertainty characterization for DA studies where (1) the system states are partially observed and (2) minimal prior knowledge of the model error processes is available, except that the errors display state dependence. It includes an approach for estimating the uncertainty in hidden model states, with the end goal of improving predictions of observed variables. The SDMU is therefore suited to DA studies where the observed variables are of primary interest. Its efficacy is demonstrated through a synthetic case study with low-dimensional chaotic dynamics and a real hydrologic experiment for one-day-ahead streamflow forecasting. In both experiments, the proposed method leads to substantial improvements in the hidden states and observed system outputs over a standard method involving perturbation with Gaussian noise.

  2. Brain Volume Estimation Enhancement by Morphological Image Processing Tools.

    PubMed

    Zeinali, R; Keshtkar, A; Zamani, A; Gharehaghaji, N

    2017-12-01

    Volume estimation of the brain is important for many neurological applications. It is necessary for measuring brain growth and changes in the brain of normal and abnormal patients. Thus, accurate brain volume measurement is very important. Magnetic resonance imaging (MRI) is the method of choice for volume quantification due to excellent levels of image resolution and between-tissue contrast. The stereology method is a good method for estimating volume, but it requires segmenting enough MRI slices at good resolution. In this study, the aim is to enhance the stereology method so that brain volume can be estimated from fewer MRI slices at lower resolution. A program for calculating volume using the stereology method is introduced. A morphological operation, dilation, was applied to enhance the stereology method. For the evaluation of this method, we used T1-weighted MR images from the digital phantom in BrainWeb, which has a ground truth. The volumes of 20 normal brains extracted from BrainWeb were calculated. The volumes of white matter, gray matter and cerebrospinal fluid with given dimensions were estimated correctly. Volume calculations with the stereology method were made in different cases, and the Root Mean Square Error (RMSE) was measured in three cases: Case I with T=5, d=5; Case II with T=10, d=10; and Case III with T=20, d=20 (T = slice thickness, d = resolution, as stereology parameters). Comparing the results of the two methods shows that the RMSE values for our proposed method are smaller than those of the standard stereology method. Using the morphological dilation operation allows the stereology volume estimation method to be enhanced; in cases with fewer MRI slices and fewer test points, the proposed method works much better than the standard stereology method.
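
    A compact sketch of the point-counting (Cavalieri) volume estimate that stereology is based on, V ≈ T · d^2 · (number of test points hitting the structure), with an optional morphological dilation of each slice mask standing in for the enhancement step described above. The synthetic slice masks and parameter values are assumptions for the example; this is not the authors' program.

      import numpy as np
      from scipy.ndimage import binary_dilation

      def cavalieri_volume(masks, T, d, dilate=False):
          """Point-counting (Cavalieri) volume estimate from segmented slices.
          masks: 2D boolean masks (one per slice), 1 mm pixels assumed;
          T: slice spacing (mm); d: test-point grid spacing (mm).
          V ~= T * d^2 * (number of grid points falling inside the structure)."""
          step = int(d)
          count = 0
          for m in masks:
              if dilate:
                  m = binary_dilation(m)     # where the morphological enhancement plugs in
              count += m[::step, ::step].sum()
          return T * d * d * count

      # Synthetic "brain" built from circular cross-sections, 1 mm slice spacing.
      yy, xx = np.mgrid[:128, :128]
      radii = [20, 35, 45, 50, 45, 35, 20]
      masks = [((xx - 64) ** 2 + (yy - 64) ** 2) <= r ** 2 for r in radii]
      true_vol = sum(np.pi * r ** 2 for r in radii)     # slice areas x 1 mm spacing

      for d in (1, 5, 10):
          est = cavalieri_volume(masks, T=1.0, d=d)
          print(f"d = {d:2d} mm: estimate {est:.0f} mm^3 (true ~ {true_vol:.0f} mm^3)")
      print("d = 10 mm with dilation:", f"{cavalieri_volume(masks, 1.0, 10, dilate=True):.0f} mm^3")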

  3. Integrating Solar PV in Utility System Operations: Analytical Framework and Arizona Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Jing; Botterud, Audun; Mills, Andrew

    2015-06-01

    A systematic framework is proposed to estimate the impact on operating costs due to uncertainty and variability in renewable resources. The framework quantifies the integration costs associated with subhourly variability and uncertainty as well as day-ahead forecasting errors in solar PV (photovoltaics) power. A case study illustrates how changes in system operations may affect these costs for a utility in the southwestern United States (Arizona Public Service Company). We conduct an extensive sensitivity analysis under different assumptions about balancing reserves, system flexibility, fuel prices, and forecasting errors. We find that high solar PV penetrations may lead to operational challenges, particularly during low-load and high solar periods. Increased system flexibility is essential for minimizing integration costs and maintaining reliability. In a set of sensitivity cases where such flexibility is provided, in part, by flexible operations of nuclear power plants, the estimated integration costs vary between $1.0 and $4.4/MWh-PV for a PV penetration level of 17%. The integration costs are primarily due to higher needs for hour-ahead balancing reserves to address the increased sub-hourly variability and uncertainty in the PV resource. (C) 2015 Elsevier Ltd. All rights reserved.

  4. Reliability of clinical impact grading by healthcare professionals of common prescribing error and optimisation cases in critical care patients.

    PubMed

    Bourne, Richard S; Shulman, Rob; Tomlin, Mark; Borthwick, Mark; Berry, Will; Mills, Gary H

    2017-04-01

    To identify between- and within-profession rater reliability of clinical impact grading for common critical care prescribing error and optimisation cases. To identify representative clinical impact grades for each individual case. Electronic questionnaire. 5 UK NHS Trusts. 30 critical care healthcare professionals (doctors, pharmacists and nurses). Participants graded severity of clinical impact (5-point categorical scale) of 50 error and 55 optimisation cases. Case between- and within-profession rater reliability and modal clinical impact grading. Between- and within-profession rater reliability analyses used a linear mixed model and intraclass correlation, respectively. The majority of error and optimisation cases (both 76%) had a modal clinical severity grade of moderate or higher. Error cases: doctors graded clinical impact significantly lower than pharmacists (-0.25; P < 0.001) and nurses (-0.53; P < 0.001), with nurses significantly higher than pharmacists (0.28; P < 0.001). Optimisation cases: doctors graded clinical impact significantly lower than nurses and pharmacists (-0.39 and -0.5; P < 0.001, respectively). Within-profession reliability grading was excellent for pharmacists (0.88 and 0.89; P < 0.001) and doctors (0.79 and 0.83; P < 0.001) but only fair to good for nurses (0.43 and 0.74; P < 0.001), for optimisation and error cases, respectively. Representative clinical impact grades for over 100 common prescribing error and optimisation cases are reported for potential clinical practice and research application. The between-profession variability highlights the importance of multidisciplinary perspectives in assessment of medication error and optimisation cases in clinical practice and research. © The Author 2017. Published by Oxford University Press in association with the International Society for Quality in Health Care. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  5. Accurate acceleration of kinetic Monte Carlo simulations through the modification of rate constants.

    PubMed

    Chatterjee, Abhijit; Voter, Arthur F

    2010-05-21

    We present a novel computational algorithm called the accelerated superbasin kinetic Monte Carlo (AS-KMC) method that enables a more efficient study of rare-event dynamics than the standard KMC method while maintaining control over the error. In AS-KMC, the rate constants for processes that are observed many times are lowered during the course of a simulation. As a result, rare processes are observed more frequently than in KMC and the time progresses faster. We first derive error estimates for AS-KMC when the rate constants are modified. These error estimates are next employed to develop a procedure for lowering process rates with control over the maximum error. Finally, numerical calculations are performed to demonstrate that the AS-KMC method captures the correct dynamics, while providing significant CPU savings over KMC in most cases. We show that the AS-KMC method can be employed with any KMC model, even when no time scale separation is present (although in such cases no computational speed-up is observed), without requiring the knowledge of various time scales present in the system.
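
    The toy loop below shows the basic mechanism of lowering the rate constants of frequently executed processes inside a KMC simulation so that rare processes are visited more often; the observation threshold and lowering factor are arbitrary placeholders and do not implement the error-controlled procedure derived in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def kmc_with_rate_lowering(rates, n_steps, obs_threshold=50, lowering_factor=0.5):
        """Toy KMC loop: a process executed more than obs_threshold times has its
        rate constant lowered, so rarer processes get sampled more frequently."""
        rates = np.array(rates, dtype=float)
        counts = np.zeros_like(rates)
        t = 0.0
        for _ in range(n_steps):
            total = rates.sum()
            t += rng.exponential(1.0 / total)            # KMC time increment
            k = rng.choice(len(rates), p=rates / total)  # select the next process
            counts[k] += 1
            if counts[k] > obs_threshold:                # placeholder "superbasin" rule
                rates[k] *= lowering_factor
                counts[k] = 0
        return t, rates

    t, final_rates = kmc_with_rate_lowering([1e6, 1e6, 1.0], n_steps=2000)
    print(f"simulated time {t:.3e}, final rates {final_rates}")
    ```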

  6. Reduced backscattering cross section (Sigma degree) data from the Skylab S-193 radar altimeter

    NASA Technical Reports Server (NTRS)

    Brown, G. S.

    1975-01-01

    Backscattering cross section per unit scattering area data, reduced from measurements made by the Skylab S-193 radar altimeter over the ocean surface, are presented. Descriptions of the altimeter are given where applicable to the measurement process. Analytical solutions are obtained for the flat surface impulse response for the case of a nonsymmetrical antenna pattern. Formulations are developed for converting altimeter AGC outputs into values for the backscattering cross section. Reduced data are presented for Missions SL-2, 3, and 4 for all modes of the altimeter where sufficient calibration existed. The problem of interpreting land scatter data is also discussed. Finally, a comprehensive error analysis of the measurement is presented and worst-case random and bias errors are estimated.

  7. A method to estimate groundwater depletion from confining layers

    USGS Publications Warehouse

    Konikow, Leonard F.; Neuzil, Christopher E.

    2007-01-01

    Although depletion of storage in low‐permeability confining layers is the source of much of the groundwater produced from many confined aquifer systems, it is all too frequently overlooked or ignored. This makes effective management of groundwater resources difficult by masking how much water has been derived from storage and, in some cases, the total amount of water that has been extracted from an aquifer system. Analyzing confining layer storage is viewed as troublesome because of the additional computational burden and because the hydraulic properties of confining layers are poorly known. In this paper we propose a simplified method for computing estimates of confining layer depletion, as well as procedures for approximating confining layer hydraulic conductivity (K) and specific storage (Ss) using geologic information. The latter makes the technique useful in developing countries and other settings where minimal data are available or when scoping calculations are needed. As such, our approach may be helpful for estimating the global transfer of groundwater to surface water. A test of the method on a synthetic system suggests that the computational errors will generally be small. Larger errors will probably result from inaccuracy in confining layer property estimates, but these may be no greater than errors in more sophisticated analyses. The technique is demonstrated by application to two aquifer systems: the Dakota artesian aquifer system in South Dakota and the coastal plain aquifer system in Virginia. In both cases, depletion from confining layers was substantially larger than depletion from the aquifers.

  8. Experimental Validation of Pulse Phase Tracking for X-Ray Pulsar Based Navigation

    NASA Technical Reports Server (NTRS)

    Anderson, Kevin

    2012-01-01

    Pulsars are a form of variable celestial source that have been shown to be usable as aids for autonomous, deep space navigation. Sources emitting in the X-ray band are particularly well suited for navigation because they allow smaller detector sizes. In this paper, X-ray photons arriving from a pulsar are modeled as a non-homogeneous Poisson process. The method of pulse phase tracking is then investigated as a technique to measure the radial distance traveled by a spacecraft over an observation interval. A maximum-likelihood phase estimator (MLE) is used for the case where the observed signal frequency is constant. For the varying signal frequency case, an algorithm is used in which the observation window is broken up into smaller blocks over which an MLE is used. The outputs of this phase estimation process are then passed through a digital phase-locked loop (DPLL) in order to reduce the errors and produce estimates of the Doppler frequency. These phase tracking algorithms were tested both in a computer simulation environment and using the NASA Goddard Space Flight Center X-ray Navigation Laboratory Testbed (GXLT). This provided an experimental validation with photons being emitted by a modulated X-ray source and detected by a silicon-drift detector. Models of the Crab pulsar and the pulsar B1821-24 were used to generate test scenarios. Three different simulated detector trajectories were tracked by the phase tracking algorithm: a stationary case, one with constant velocity, and one with constant acceleration. All three were performed in one dimension along the line of sight to the pulsar. The first two had a constant signal frequency and the third had a time-varying frequency. All of the constant frequency cases were processed using the MLE, and it was shown that they tracked the initial phase to within 0.15% for the simulations and 2.5% in the experiments, based on an average of ten runs. The MLE-DPLL cascade version of the phase tracking algorithm was used in the varying frequency case. This resulted in tracking of the phase and frequency by the DPLL outputs in both the simulation and experimental environments. The Crab pulsar was also tested experimentally with a higher-acceleration trajectory. In this case the phase error tended toward zero as the observation extended to 250 seconds, and the Doppler frequency error tended to zero in under 100 seconds.
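
    A simplified sketch of maximum-likelihood phase estimation for the constant-frequency case, assuming a raised-cosine pulse profile, a grid search over trial phases, and photon arrivals simulated by thinning a Poisson process; it illustrates the general approach rather than the GXLT processing chain.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def template(phase):
        """Assumed pulse profile (normalized rate shape): a raised cosine."""
        return 1.0 + 0.8 * np.cos(2 * np.pi * phase)

    def mle_initial_phase(arrival_times, freq, n_grid=2000):
        """Grid-search MLE of the initial pulse phase for a constant-frequency signal."""
        trial_phases = np.linspace(0.0, 1.0, n_grid, endpoint=False)
        # Non-homogeneous Poisson log-likelihood, up to a phase-independent constant.
        loglike = [np.sum(np.log(template((freq * arrival_times + p0) % 1.0)))
                   for p0 in trial_phases]
        return trial_phases[int(np.argmax(loglike))]

    # Simulate photon arrivals by thinning a homogeneous Poisson process.
    freq, true_phi0, duration, max_rate = 29.6, 0.3, 50.0, 1.8   # Crab-like spin frequency (Hz)
    t = np.cumsum(rng.exponential(1.0 / (100 * max_rate), size=200000))
    t = t[t < duration]
    keep = rng.uniform(0, max_rate, size=t.size) < template((freq * t + true_phi0) % 1.0)
    photons = t[keep]
    print(f"estimated initial phase {mle_initial_phase(photons, freq):.3f} (true {true_phi0})")
    ```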

  9. Scleroderma prevalence: demographic variations in a population-based sample.

    PubMed

    Bernatsky, S; Joseph, L; Pineau, C A; Belisle, P; Hudson, M; Clarke, A E

    2009-03-15

    To estimate the prevalence of systemic sclerosis (SSc) using population-based administrative data, and to assess the sensitivity of case ascertainment approaches. We ascertained SSc cases from Quebec physician billing and hospitalization databases (covering approximately 7.5 million individuals). Three case definition algorithms were compared, and statistical methods accounting for imperfect case ascertainment were used to estimate SSc prevalence and case ascertainment sensitivity. A hierarchical Bayesian latent class regression model that accounted for possible between-test dependence conditional on disease status estimated the effect of patient characteristics on SSc prevalence and the sensitivity of the 3 ascertainment algorithms. Accounting for error inherent in both the billing and the hospitalization data, we estimated SSc prevalence in 2003 at 74.4 cases per 100,000 women (95% credible interval [95% CrI] 69.3-79.7) and 13.3 cases per 100,000 men (95% CrI 11.1-16.1). Prevalence was higher for older individuals, particularly in urban women (161.2 cases per 100,000, 95% CrI 148.6-175.0). Prevalence was lowest in young men (in rural areas, as low as 2.8 cases per 100,000, 95% CrI 1.4-4.8). In general, no single algorithm was very sensitive, with point estimates for sensitivity ranging from 20-73%. We found marked differences in SSc prevalence according to age, sex, and region. In general, no single case ascertainment approach was very sensitive for SSc. Therefore, using data from multiple sources, with adjustment for the imperfect nature of each, is an important strategy in population-based studies of SSc and similar conditions.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
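
    As a highly simplified illustration of combining "random" and "systematic" components into an error bar for a total mass over several items, the sketch below adds independent random components in quadrature and treats the systematic component as fully correlated across items; the combination rule and the numbers are textbook-style assumptions, not the specific NDA or GUM procedure discussed in the record.

    ```python
    import math

    def total_mass_uncertainty(masses, u_random, u_systematic):
        """Combine per-item standard uncertainties for a total-mass estimate.

        Random components are assumed independent between items (add in quadrature);
        the systematic component is assumed fully correlated (adds linearly)."""
        total = sum(masses)
        u_rand_total = math.sqrt(sum(u ** 2 for u in u_random))
        u_sys_total = sum(u_systematic)
        return total, math.sqrt(u_rand_total ** 2 + u_sys_total ** 2)

    # Hypothetical three-item example (kg U-235) with per-item random and systematic uncertainties.
    total, u = total_mass_uncertainty([1.2, 0.8, 1.5], [0.03, 0.02, 0.04], [0.01, 0.01, 0.01])
    print(f"total = {total:.2f} kg, combined standard uncertainty = {u:.3f} kg")
    ```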

  11. Estimation of chromatic errors from broadband images for high contrast imaging

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Belikov, Ruslan

    2015-09-01

    Usage of an internal coronagraph with an adaptive optical system for wavefront correction for direct imaging of exoplanets is currently being considered for many mission concepts, including as an instrument addition to the WFIRST-AFTA mission to follow the James Webb Space Telescope. The main technical challenge associated with direct imaging of exoplanets with an internal coronagraph is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, wavefront errors are usually estimated using probes on the DM. To date, most broadband lab demonstrations use narrowband filters to estimate the chromaticity of the wavefront error, but this reduces the photon flux per filter and requires a filter system. Here, we propose a method to estimate the chromaticity of wavefront errors using only a broadband image. This is achieved by using special DM probes that have sufficient chromatic diversity. As a case example, we simulate the retrieval of the spectrum of the central wavelength from broadband images for a simple shaped-pupil coronagraph with a conjugate DM and compute the resulting estimation error.

  12. Recall bias in melanoma risk factors and measurement error effects: a nested case-control study within the Norwegian Women and Cancer Study.

    PubMed

    Parr, Christine L; Hjartåker, Anette; Laake, Petter; Lund, Eiliv; Veierød, Marit B

    2009-02-01

    Case-control studies of melanoma have the potential for recall bias after much public information about the relation with ultraviolet radiation. Recall bias has been investigated in few studies and only for some risk factors. A nested case-control study of recall bias was conducted in 2004 within the Norwegian Women and Cancer Study: 208 melanoma cases and 2,080 matched controls were invited. Data were analyzed for 162 cases (response, 78%) and 1,242 controls (response, 77%). Questionnaire responses to several host factors and ultraviolet exposures collected at enrollment in 1991-1997 and in 2004 were compared stratified on case-control status. Shifts in responses were observed among both cases and controls, but a shift in cases was observed only for skin color after chronic sun exposure, and a larger shift in cases was observed for nevi. Weighted kappa was lower for cases than for controls for most age intervals of sunburn, sunbathing vacations, and solarium use. Differences in odds ratio estimates of melanoma based on prospective and retrospective measurements indicate measurement error that is difficult to characterize. The authors conclude that indications of recall bias were found in this sample of Norwegian women, but that the results were inconsistent for the different exposures.

  13. European option pricing under the Student's t noise with jumps

    NASA Astrophysics Data System (ADS)

    Wang, Xiao-Tian; Li, Zhe; Zhuang, Le

    2017-03-01

    In this paper we present a new approach to pricing European options under Student's t noise with jumps. Through the conditional delta hedging strategy and minimal mean-square-error hedging, a closed-form solution for the European option value is obtained in the incomplete-information case. In particular, we propose a Value-at-Risk-type procedure to estimate the volatility parameter σ such that the pricing error is in accord with the risk preferences of investors. In addition, our numerical results show that, in an incomplete-information market, options are not priced in some cases.

  14. Data Assimilation Experiments using Quality Controlled AIRS Version 5 Temperature Soundings

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2008-01-01

    The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. The AIRS Science Team Version 5 retrieval algorithm contains two significant improvements over Version 4: 1) Improved physics allows for use of AIRS observations in the entire 4.3 μm CO2 absorption band in the retrieval of temperature profile T(p) during both day and night. Tropospheric sounding 15 μm CO2 observations are now used primarily in the generation of cloud cleared radiances Ri. This approach allows for the generation of accurate values of Ri and T(p) under most cloud conditions. 2) Another very significant improvement in Version 5 is the ability to generate accurate case-by-case, level-by-level error estimates for the atmospheric temperature profile, as well as channel-by-channel error estimates for Ri. These error estimates are used for quality control of the retrieved products. We have conducted forecast impact experiments assimilating AIRS temperature profiles with different levels of quality control using the NASA GEOS-5 data assimilation system. Assimilation of quality controlled T(p) resulted in significantly improved forecast skill compared to that obtained from analyses when all data used operationally by NCEP, except for AIRS data, are assimilated. We also conducted an experiment assimilating AIRS radiances uncontaminated by clouds, as done operationally by ECMWF and NCEP. Forecasts resulting from assimilated AIRS radiances were of poorer quality than those obtained assimilating AIRS temperatures.

  15. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1991-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator: it yields higher rms estimation errors but keeps the probability of the frequency estimation error exceeding one-half of the sampling frequency relatively small, and it provides coarse estimates of the frequency and its derivatives. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall estimate of the phase along with a more refined estimate of the frequency, and in the process also reduces the number of cycle slips.

  16. Multistage estimation of received carrier signal parameters under very high dynamic conditions of the receiver

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1990-01-01

    A multistage estimator is provided for the parameters of a received carrier signal possibly phase-modulated by unknown data and experiencing very high Doppler, Doppler rate, etc., as may arise, for example, in the case of Global Positioning Systems (GPS), where the signal parameters are directly related to the position, velocity and jerk of the GPS ground-based receiver. In a two-stage embodiment of the more general multistage scheme, the first stage, a modified least squares algorithm referred to as differential least squares (DLS), operates as a coarse estimator: it yields higher rms estimation errors but keeps the probability of the frequency estimation error exceeding one-half of the sampling frequency relatively small, and it provides coarse estimates of the frequency and its derivatives. The second stage of the estimator, an extended Kalman filter (EKF), operates on the error signal available from the first stage, refining the overall estimate of the phase along with a more refined estimate of the frequency, and in the process also reduces the number of cycle slips.

  17. A Generalized Least Squares Regression Approach for Computing Effect Sizes in Single-Case Research: Application Examples

    ERIC Educational Resources Information Center

    Maggin, Daniel M.; Swaminathan, Hariharan; Rogers, Helen J.; O'Keeffe, Breda V.; Sugai, George; Horner, Robert H.

    2011-01-01

    A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of…
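
    A brief sketch of the general idea: fit a level-change model to simulated AB-design data with AR(1) errors by iterated GLS (statsmodels GLSAR) and scale the estimated change by the residual standard deviation. The simulated data, the effect-size scaling, and the choice of GLSAR are illustrative assumptions rather than the authors' exact procedure.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.regression.linear_model import GLSAR

    rng = np.random.default_rng(3)

    # Simulated single-case AB data: 10 baseline and 15 intervention sessions with AR(1) errors.
    phase = np.r_[np.zeros(10), np.ones(15)]
    e = np.zeros(phase.size)
    for i in range(1, phase.size):
        e[i] = 0.4 * e[i - 1] + rng.normal(0, 1)       # autocorrelated measurement error
    y = 20 + 8 * phase + e                              # true level change of 8 units

    X = sm.add_constant(phase)
    fit = GLSAR(y, X, rho=1).iterative_fit(maxiter=10)  # estimate AR(1) rho and GLS coefficients
    level_change = fit.params[1]
    effect_size = level_change / np.sqrt(fit.scale)     # level change scaled by residual SD
    print(f"estimated level change {level_change:.2f}, standardized effect size {effect_size:.2f}")
    ```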

  18. How Angular Velocity Features and Different Gyroscope Noise Types Interact and Determine Orientation Estimation Accuracy.

    PubMed

    Pasciuto, Ilaria; Ligorio, Gabriele; Bergamini, Elena; Vannozzi, Giuseppe; Sabatini, Angelo Maria; Cappozzo, Aurelio

    2015-09-18

    In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms.
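
    A one-dimensional toy comparison (not the authors' simulation framework) of how a constant bias and white noise propagate differently into the orientation obtained by integrating a gyroscope signal; the sampling rate, reference signal, and noise levels are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    fs, duration = 100.0, 60.0                          # sampling rate (Hz) and length (s), assumed
    t = np.arange(0, duration, 1 / fs)
    omega_true = 20 * np.sin(2 * np.pi * 0.5 * t)       # deg/s, a walking-like oscillation

    bias = 0.5                                          # deg/s constant bias
    white = rng.normal(0, 0.5, t.size)                  # deg/s white noise

    def integrate(omega):
        """Integrate angular velocity to orientation (simple rectangle rule)."""
        return np.cumsum(omega) / fs

    err_bias = integrate(omega_true + bias) - integrate(omega_true)
    err_white = integrate(omega_true + white) - integrate(omega_true)
    print(f"orientation error after {duration:.0f} s: bias -> {err_bias[-1]:.1f} deg, "
          f"white noise -> {err_white[-1]:.2f} deg (grows only as sqrt(time))")
    ```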

  19. How Angular Velocity Features and Different Gyroscope Noise Types Interact and Determine Orientation Estimation Accuracy

    PubMed Central

    Pasciuto, Ilaria; Ligorio, Gabriele; Bergamini, Elena; Vannozzi, Giuseppe; Sabatini, Angelo Maria; Cappozzo, Aurelio

    2015-01-01

    In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms. PMID:26393606

  20. Does exposure to simulated patient cases improve accuracy of clinicians' predictive value estimates of diagnostic test results? A within-subjects experiment at St Michael's Hospital, Toronto, Canada.

    PubMed

    Armstrong, Bonnie; Spaniol, Julia; Persaud, Nav

    2018-02-13

    Clinicians often overestimate the probability of a disease given a positive test result (positive predictive value; PPV) and the probability of no disease given a negative test result (negative predictive value; NPV). The purpose of this study was to investigate whether experiencing simulated patient cases (ie, an 'experience format') would promote more accurate PPV and NPV estimates compared with a numerical format. Participants were presented with information about three diagnostic tests for the same fictitious disease and were asked to estimate the PPV and NPV of each test. Tests varied with respect to sensitivity and specificity. Information about each test was presented once in the numerical format and once in the experience format. The study used a 2 (format: numerical vs experience) × 3 (diagnostic test: gold standard vs low sensitivity vs low specificity) within-subjects design. The study was completed online, via Qualtrics (Provo, Utah, USA). 50 physicians (12 clinicians and 38 residents) from the Department of Family and Community Medicine at St Michael's Hospital in Toronto, Canada, completed the study. All participants had completed at least 1 year of residency. Estimation accuracy was quantified by the mean absolute error (MAE; absolute difference between estimate and true predictive value). PPV estimation errors were larger in the numerical format (MAE=32.6%, 95% CI 26.8% to 38.4%) compared with the experience format (MAE=15.9%, 95% CI 11.8% to 20.0%, d =0.697, P<0.001). Likewise, NPV estimation errors were larger in the numerical format (MAE=24.4%, 95% CI 14.5% to 34.3%) than in the experience format (MAE=11.0%, 95% CI 6.5% to 15.5%, d =0.303, P=0.015). Exposure to simulated patient cases promotes accurate estimation of predictive values in clinicians. This finding carries implications for diagnostic training and practice. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
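
    The predictive values that participants were asked to estimate follow directly from Bayes' rule; the short sketch below, with illustrative numbers rather than the study's fictitious tests, shows why PPV is easy to overestimate when prevalence is low.

    ```python
    def predictive_values(sensitivity, specificity, prevalence):
        """Bayes' rule for the positive and negative predictive values."""
        tp = sensitivity * prevalence
        fp = (1 - specificity) * (1 - prevalence)
        fn = (1 - sensitivity) * prevalence
        tn = specificity * (1 - prevalence)
        return tp / (tp + fp), tn / (tn + fn)

    # Illustrative numbers: a fairly good test applied to a low-prevalence disease.
    ppv, npv = predictive_values(sensitivity=0.9, specificity=0.9, prevalence=0.05)
    print(f"PPV = {ppv:.2f}, NPV = {npv:.3f}")   # PPV is only about 0.32 despite 90%/90% accuracy
    ```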

  1. Evaluation of a method of estimating low-flow frequencies from base-flow measurements at Indiana streams

    USGS Publications Warehouse

    Wilson, John Thomas

    2000-01-01

    A mathematical technique of estimating low-flow frequencies from base-flow measurements was evaluated by using data for streams in Indiana. Low-flow frequencies at low-flow partial-record stations were estimated by relating base-flow measurements to concurrent daily flows at nearby streamflow-gaging stations (index stations) for which low-flow frequency curves had been developed. A network of long-term streamflow-gaging stations in Indiana provided a sample of sites with observed low-flow frequencies. Observed values of 7-day, 10-year low flow and 7-day, 2-year low flow were compared to predicted values to evaluate the accuracy of the method. Five test cases were used to evaluate the method under a variety of conditions in which the location of the index station and its drainage area varied relative to the partial-record station. A total of 141 pairs of streamflow-gaging stations were used in the five test cases. Four of the test cases used one index station; the fifth test case used two index stations. The number of base-flow measurements was varied for each test case to see if the accuracy of the method was affected by the number of measurements used. The most accurate and least variable results were produced when two index stations on the same stream or tributaries of the partial-record station were used. All but one value of the predicted 7-day, 10-year low flow were within 15 percent of the values observed for the long-term continuous record, and all of the predicted values of the 7-day, 2-year low flow were within 15 percent of the observed values. This apparent accuracy, to some extent, may be a result of the small sample set of 15. Of the four test cases that used one index station, the most accurate and least variable results were produced in the test case where the index station and partial-record station were on the same stream or on streams tributary to each other and where the index station had a larger drainage area than the partial-record station. In that test case, the method tended to overpredict, based on the median relative error. In 23 of 28 test pairs, the predicted 7-day, 10-year low flow was within 15 percent of the observed value; in 26 of 28 test pairs, the predicted 7-day, 2-year low flow was within 15 percent of the observed value. When the index station and partial-record station were on the same stream or streams tributary to each other and the index station had a smaller drainage area than the partial-record station, the method tended to underpredict the low-flow frequencies. Nineteen of 28 predicted values of the 7-day, 10-year low flow were within 15 percent of the observed values. Twenty-five of 28 predicted values of the 7-day, 2-year low flow were within 15 percent of the observed values. When the index station and the partial-record station were on different streams, the method tended to underpredict regardless of whether the index station had a larger or smaller drainage area than that of the partial-record station. Also, the variability of the relative error of estimate was greatest for the test cases that used index stations and partial-record stations from different streams. This variability, in part, may be caused by using more streamflow-gaging stations with small low-flow frequencies in these test cases. A small difference in the predicted and observed values can equate to a large relative error when dealing with stations that have small low-flow frequencies.
In the test cases that used one index station, the method tended to predict smaller low-flow frequencies as the number of base-flow measurements was reduced from 20 to 5. Overall, the average relative error of estimate and the variability of the predicted values increased as the number of base-flow measurements was reduced.

  2. Computational estimation of errors generated by lumping of physiologically-based pharmacokinetic (PBPK) interaction models of inhaled complex chemical mixtures

    EPA Science Inventory

    Many cases of environmental contamination result in concurrent or sequential exposure to more than one chemical. However, limitations of available resources make it unlikely that experimental toxicology will provide health risk information about all the possible mixtures to which...

  3. Estimation of population mean under systematic sampling

    NASA Astrophysics Data System (ADS)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study, we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of the generalized estimator using different combinations of the coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to establish the efficiency of the proposed estimators. A numerical illustration using three populations is included to support the results.
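
    For orientation, the sketch below applies the classical single-auxiliary, full-response ratio estimator to a 1-in-k systematic sample; the simulated population is hypothetical, and the generalized estimator under non-response proposed in the record is not reproduced here.

    ```python
    import numpy as np

    def ratio_estimate(y_sample, x_sample, x_pop_mean):
        """Classical ratio estimator of the population mean of y, using an auxiliary
        variable x whose population mean is known."""
        r = np.mean(y_sample) / np.mean(x_sample)
        return r * x_pop_mean

    rng = np.random.default_rng(5)
    N, k = 1000, 20                                    # population size and sampling interval
    x = rng.uniform(10, 50, N)                         # auxiliary variable
    y = 2.0 * x + rng.normal(0, 5, N)                  # study variable, correlated with x

    start = rng.integers(k)                            # random start for the systematic sample
    idx = np.arange(start, N, k)                       # 1-in-k systematic sample
    print(f"systematic sample mean: {y[idx].mean():.2f}")
    print(f"ratio estimate of mean: {ratio_estimate(y[idx], x[idx], x.mean()):.2f}")
    print(f"true population mean  : {y.mean():.2f}")
    ```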

  4. The role of misclassification in estimating proportions and an estimator of misclassification probability

    Treesearch

    Patrick L. Zimmerman; Greg C. Liknes

    2010-01-01

    Dot grids are often used to estimate the proportion of land cover belonging to some class in an aerial photograph. Interpreter misclassification is an often-ignored source of error in dot-grid sampling that has the potential to significantly bias proportion estimates. For the case when the true class of items is unknown, we present a maximum-likelihood estimator of...

  5. Cross sections for H(-) and Cl(-) production from HCl by dissociative electron attachment

    NASA Technical Reports Server (NTRS)

    Orient, O. J.; Srivastava, S. K.

    1985-01-01

    A crossed target beam-electron beam collision geometry and a quadrupole mass spectrometer have been used to conduct dissociative electron attachment cross section measurements for the case of H(-) and Cl(-) production from HCl. The relative flow technique is used to determine the absolute values of cross sections. A tabulation is given of the attachment energies corresponding to various cross section maxima. Error sources contributing to total errors are also estimated.

  6. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging

    NASA Astrophysics Data System (ADS)

    Nair, S. P.; Righetti, R.

    2015-05-01

    Recent elastography techniques focus on imaging information on properties of materials which can be modeled as viscoelastic or poroelastic. These techniques often require the fitting of temporal strain data, acquired from either a creep or stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain-versus-time behavior of tissues undergoing creep compression is non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method to quantify the reliability of non-linear LSE parameter estimates, which we call Resimulation of Noise (RoN). RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
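
    A minimal sketch of a resimulation-of-noise style precision estimate for a creep-like exponential model: fit once, estimate the residual noise level, repeatedly re-add simulated noise to the fitted curve and refit, and report the spread of the time-constant estimates. The model, the Gaussian noise assumption, and the number of resimulations are illustrative choices, not the authors' exact algorithm.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(6)

    def strain_model(t, a, tau):
        """Simple creep-like model: exponential rise toward a plateau."""
        return a * (1 - np.exp(-t / tau))

    def ron_precision(t, y, n_resim=200):
        """Resimulation-of-noise style precision estimate for the time constant tau."""
        popt, _ = curve_fit(strain_model, t, y, p0=(y.max(), t.mean()))
        sigma = np.std(y - strain_model(t, *popt))      # residual noise level from the single fit
        taus = []
        for _ in range(n_resim):
            y_sim = strain_model(t, *popt) + rng.normal(0, sigma, t.size)
            p_sim, _ = curve_fit(strain_model, t, y_sim, p0=popt)
            taus.append(p_sim[1])
        return popt[1], np.std(taus)

    t = np.linspace(0, 10, 100)
    y = strain_model(t, 1.0, 2.5) + rng.normal(0, 0.05, t.size)
    tau_hat, tau_spread = ron_precision(t, y)
    print(f"tau = {tau_hat:.3f} +/- {tau_spread:.3f} (resimulation-based precision)")
    ```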

  7. Re-use of pilot data and interim analysis of pivotal data in MRMC studies: a simulation study

    NASA Astrophysics Data System (ADS)

    Chen, Weijie; Samuelson, Frank; Sahiner, Berkman; Petrick, Nicholas

    2017-03-01

    Novel medical imaging devices are often evaluated with multi-reader multi-case (MRMC) studies in which radiologists read images of patient cases for a specified clinical task (e.g., cancer detection). A pilot study is often used to measure the effect size and variance parameters that are necessary for sizing a pivotal study (including sizing readers, non-diseased and diseased cases). Due to the practical difficulty of collecting patient cases or recruiting clinical readers, some investigators attempt to include the pilot data as part of their pivotal study. In other situations, some investigators attempt to perform an interim analysis of their pivotal study data, based upon which the sample sizes may be re-estimated. Re-use of the pilot data or interim analyses of the pivotal data may inflate the type I error of the pivotal study. In this work, we use the Roe and Metz model to simulate MRMC data under the null hypothesis (i.e., two devices have equal diagnostic performance) and investigate the type I error rate for several practical designs involving re-use of pilot data or interim analysis of pivotal data. Our preliminary simulation results indicate that, under the simulation conditions we investigated, there is no or only marginal inflation of the type I error for some design strategies (e.g., re-use of patient data without re-using readers, and size re-estimation without using the effect size estimated in the interim analysis). Upon further verification, these are potentially useful design methods in that they may help make a study less burdensome and have a better chance of succeeding without substantial loss of statistical rigor.

  8. A probability model for evaluating the bias and precision of influenza vaccine effectiveness estimates from case-control studies.

    PubMed

    Haber, M; An, Q; Foppa, I M; Shay, D K; Ferdinands, J M; Orenstein, W A

    2015-05-01

    As influenza vaccination is now widely recommended, randomized clinical trials are no longer ethical in many populations. Therefore, observational studies on patients seeking medical care for acute respiratory illnesses (ARIs) are a popular option for estimating influenza vaccine effectiveness (VE). We developed a probability model for evaluating and comparing the bias and precision of estimates of VE against symptomatic influenza from two commonly used case-control study designs: the test-negative design and the traditional case-control design. We show that when vaccination does not affect the probability of developing non-influenza ARI, then VE estimates from test-negative design studies are unbiased even if vaccinees and non-vaccinees have different probabilities of seeking medical care for ARI, as long as the ratio of these probabilities is the same for illnesses resulting from influenza and non-influenza infections. Our numerical results suggest that, in general, estimates from the test-negative design have smaller bias compared to estimates from the traditional case-control design as long as the probability of non-influenza ARI is similar among vaccinated and unvaccinated individuals. We did not find consistent differences between the standard errors of the estimates from the two study designs.
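
    In the test-negative design, VE is commonly estimated as one minus the odds ratio of vaccination among influenza-positive versus influenza-negative ARI patients; a minimal sketch with hypothetical counts follows.

    ```python
    def vaccine_effectiveness(flu_vacc, flu_unvacc, neg_vacc, neg_unvacc):
        """Test-negative design estimate: VE = 1 - odds ratio of vaccination
        among influenza-positive versus influenza-negative patients."""
        odds_ratio = (flu_vacc / flu_unvacc) / (neg_vacc / neg_unvacc)
        return 1 - odds_ratio

    # Hypothetical clinic counts (not from the paper): vaccinated/unvaccinated by test result.
    print(f"VE = {vaccine_effectiveness(40, 160, 300, 500):.2f}")   # 1 - (0.25 / 0.60) = 0.58
    ```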

  9. Potential lost productivity resulting from the global burden of uncorrected refractive error.

    PubMed

    Smith, T S T; Frick, K D; Holden, B A; Fricke, T R; Naidoo, K S

    2009-06-01

    To estimate the potential global economic productivity loss associated with the existing burden of visual impairment from uncorrected refractive error (URE). Conservative assumptions and national population, epidemiological and economic data were used to estimate the purchasing power parity-adjusted gross domestic product (PPP-adjusted GDP) loss for all individuals with impaired vision and blindness, and for individuals with normal sight who provide them with informal care. An estimated 158.1 million cases of visual impairment resulted from uncorrected or undercorrected refractive error in 2007; of these, 8.7 million were blind. We estimated the global economic productivity loss in international dollars (I$) associated with this burden at I$ 427.7 billion before, and I$ 268.8 billion after, adjustment for country-specific labour force participation and employment rates. With the same adjustment, but assuming no economic productivity for individuals aged > 50 years, we estimated the potential productivity loss at I$ 121.4 billion. Even under the most conservative assumptions, the total estimated productivity loss, in I$, associated with visual impairment from URE is approximately a thousand times greater than the global number of cases. The cost of scaling up existing refractive services to meet this burden is unknown, but if each affected individual were to be provided with appropriate eyeglasses for less than I$ 1000, a net economic gain may be attainable.

  10. Potential lost productivity resulting from the global burden of uncorrected refractive error

    PubMed Central

    Frick, KD; Holden, BA; Fricke, TR; Naidoo, KS

    2009-01-01

    Abstract Objective To estimate the potential global economic productivity loss associated with the existing burden of visual impairment from uncorrected refractive error (URE). Methods Conservative assumptions and national population, epidemiological and economic data were used to estimate the purchasing power parity-adjusted gross domestic product (PPP-adjusted GDP) loss for all individuals with impaired vision and blindness, and for individuals with normal sight who provide them with informal care. Findings An estimated 158.1 million cases of visual impairment resulted from uncorrected or undercorrected refractive error in 2007; of these, 8.7 million were blind. We estimated the global economic productivity loss in international dollars (I$) associated with this burden at I$ 427.7 billion before, and I$ 268.8 billion after, adjustment for country-specific labour force participation and employment rates. With the same adjustment, but assuming no economic productivity for individuals aged ≥ 50 years, we estimated the potential productivity loss at I$ 121.4 billion. Conclusion Even under the most conservative assumptions, the total estimated productivity loss, in I$, associated with visual impairment from URE is approximately a thousand times greater than the global number of cases. The cost of scaling up existing refractive services to meet this burden is unknown, but if each affected individual were to be provided with appropriate eyeglasses for less than I$ 1000, a net economic gain may be attainable. PMID:19565121

  11. Robust logistic regression to narrow down the winner's curse for rare and recessive susceptibility variants.

    PubMed

    Kesselmeier, Miriam; Lorenzo Bermejo, Justo

    2017-11-01

    Logistic regression is the most common technique used for genetic case-control association studies. A disadvantage of standard maximum likelihood estimators of the genotype relative risk (GRR) is their strong dependence on outlier subjects, for example, patients diagnosed at unusually young age. Robust methods are available to constrain outlier influence, but they are scarcely used in genetic studies. This article provides a non-intimidating introduction to robust logistic regression, and investigates its benefits and limitations in genetic association studies. We applied the bounded Huber and extended the R package 'robustbase' with the re-descending Hampel functions to down-weight outlier influence. Computer simulations were carried out to assess the type I error rate, mean squared error (MSE) and statistical power according to major characteristics of the genetic study and investigated markers. Simulations were complemented with the analysis of real data. Both standard and robust estimation controlled type I error rates. Standard logistic regression showed the highest power but standard GRR estimates also showed the largest bias and MSE, in particular for associated rare and recessive variants. For illustration, a recessive variant with a true GRR=6.32 and a minor allele frequency=0.05 investigated in a 1000 case/1000 control study by standard logistic regression resulted in power=0.60 and MSE=16.5. The corresponding figures for Huber-based estimation were power=0.51 and MSE=0.53. Overall, Hampel- and Huber-based GRR estimates did not differ much. Robust logistic regression may represent a valuable alternative to standard maximum likelihood estimation when the focus lies on risk prediction rather than identification of susceptibility variants. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Identification of the release history of a groundwater contaminant in non-uniform flow field through the minimum relative entropy method

    NASA Astrophysics Data System (ADS)

    Cupola, F.; Tanda, M. G.; Zanini, A.

    2014-12-01

    The interest in approaches that allow the estimation of pollutant source release in groundwater has increased exponentially over the last decades. This is due to the large number of groundwater reclamation procedures that have been carried out: remediation is expensive, and the costs can be more easily shared among the different actors if the release history is known. Moreover, a reliable release history can be a useful tool for predicting the plume evolution and for minimizing the harmful effects of the contamination. In this framework, Woodbury and Ulrych (1993, 1996) adopted and improved the minimum relative entropy (MRE) method to solve linear inverse problems for the recovery of the pollutant release history in an aquifer. In this work, the MRE method has been improved to detect the source release history in a 2-D aquifer characterized by a non-uniform flow field. The approach has been tested on two cases: a 2-D homogeneous conductivity field and a strongly heterogeneous one (the hydraulic conductivity varies over three orders of magnitude). In the latter case the transfer function could not be described with an analytical formulation; thus, the transfer functions were estimated by means of the method developed by Butera et al. (2006). In order to demonstrate its scope, the method was applied with two different datasets: observations collected at the same time at 20 different monitoring points, and observations collected at 2 monitoring points at different times (15-25 monitoring points). The observations were assumed to be affected by a random error. These study cases were carried out considering a boxcar and a Gaussian function as the expected value of the prior distribution of the release history. The agreement between the true and the estimated release history has been evaluated through the normalized root mean square error (nRMSE), which shows the ability of the method to recover the release history even in the most severe cases. Finally, a forward simulation has been carried out using the estimated release history in order to compare the true data with the estimated ones: the best agreement was obtained in the homogeneous case, although the nRMSE was also acceptable in the heterogeneous case.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Y; Macq, B; Bondar, L

    Purpose: To quantify the accuracy in predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of errors. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for the nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitudes ranging, respectively, between −3 and 3 mm, −5 and 5 degrees, and −5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of the PG profiles. The shifts were calculated between the planned (reference) position and the position given by the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for 1 case that combined translation and CT errors (p<0.0001, R ranged between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios the average BP shift ranged between −8±6.5 mm and 3±1.1 mm. Scenarios that considered combinations of the largest treatment errors were associated with large BP shifts. Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide reliable estimation of the Bragg peak position for complex error scenarios. Yafei Xing and Luiza Bondar are funded by BEWARE grants from the Walloon Region. The work presents simulation results for a prompt gamma camera prototype developed by IBA.

  14. Corrected score estimation in the proportional hazards model with misclassified discrete covariates

    PubMed Central

    Zucker, David M.; Spiegelman, Donna

    2013-01-01

    SUMMARY We consider Cox proportional hazards regression when the covariate vector includes error-prone discrete covariates along with error-free covariates, which may be discrete or continuous. The misclassification in the discrete error-prone covariates is allowed to be of any specified form. Building on the work of Nakamura and his colleagues, we present a corrected score method for this setting. The method can handle all three major study designs (internal validation design, external validation design, and replicate measures design), both functional and structural error models, and time-dependent covariates satisfying a certain ‘localized error’ condition. We derive the asymptotic properties of the method and indicate how to adjust the covariance matrix of the regression coefficient estimates to account for estimation of the misclassification matrix. We present the results of a finite-sample simulation study under Weibull survival with a single binary covariate having known misclassification rates. The performance of the method described here was similar to that of related methods we have examined in previous works. Specifically, our new estimator performed as well as or, in a few cases, better than the full Weibull maximum likelihood estimator. We also present simulation results for our method for the case where the misclassification probabilities are estimated from an external replicate measures study. Our method generally performed well in these simulations. The new estimator has a broader range of applicability than many other estimators proposed in the literature, including those described in our own earlier work, in that it can handle time-dependent covariates with an arbitrary misclassification structure. We illustrate the method on data from a study of the relationship between dietary calcium intake and distal colon cancer. PMID:18219700

  15. Hedonic price models with omitted variables and measurement errors: a constrained autoregression-structural equation modeling approach with application to urban Indonesia

    NASA Astrophysics Data System (ADS)

    Suparman, Yusep; Folmer, Henk; Oud, Johan H. L.

    2014-01-01

    Omitted variables and measurement errors in explanatory variables frequently occur in hedonic price models. Ignoring these problems leads to biased estimators. In this paper, we develop a constrained autoregression-structural equation model (ASEM) to handle both types of problems. Standard panel data models to handle omitted variables bias are based on the assumption that the omitted variables are time-invariant. ASEM allows handling of both time-varying and time-invariant omitted variables by constrained autoregression. In the case of measurement error, standard approaches require additional external information which is usually difficult to obtain. ASEM exploits the fact that panel data are repeatedly measured which allows decomposing the variance of a variable into the true variance and the variance due to measurement error. We apply ASEM to estimate a hedonic housing model for urban Indonesia. To get insight into the consequences of measurement error and omitted variables, we compare the ASEM estimates with the outcomes of (1) a standard SEM, which does not account for omitted variables, (2) a constrained autoregression model, which does not account for measurement error, and (3) a fixed effects hedonic model, which ignores measurement error and time-varying omitted variables. The differences between the ASEM estimates and the outcomes of the three alternative approaches are substantial.

  16. Bayes and empirical Bayes methods for reduced rank regression models in matched case-control studies.

    PubMed

    Satagopan, Jaya M; Sen, Ananda; Zhou, Qin; Lan, Qing; Rothman, Nathaniel; Langseth, Hilde; Engel, Lawrence S

    2016-06-01

    Matched case-control studies are popular designs used in epidemiology for assessing the effects of exposures on binary traits. Modern studies increasingly enjoy the ability to examine a large number of exposures in a comprehensive manner. However, several risk factors often tend to be related in a nontrivial way, undermining efforts to identify the risk factors using standard analytic methods due to inflated type-I errors and possible masking of effects. Epidemiologists often use data reduction techniques by grouping the prognostic factors using a thematic approach, with themes deriving from biological considerations. We propose shrinkage-type estimators based on Bayesian penalization methods to estimate the effects of the risk factors using these themes. The properties of the estimators are examined using extensive simulations. The methodology is illustrated using data from a matched case-control study of polychlorinated biphenyls in relation to the etiology of non-Hodgkin's lymphoma. © 2015, The International Biometric Society.

  17. Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones

    PubMed Central

    Wang, Zhen; Jin, Bingwen; Geng, Weidong

    2017-01-01

    The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of respectively 0.47 and 5.6 degrees on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry. PMID:28397765

  18. Studies on time of death estimation in the early post mortem period -- application of a method based on eyeball temperature measurement to human bodies.

    PubMed

    Kaliszan, Michał

    2013-09-01

    This paper presents a verification of the thermodynamic model allowing an estimation of the time of death (TOD) by calculating the post mortem interval (PMI) based on a single eyeball temperature measurement at the death scene. The study was performed on 30 cases with known PMI, ranging from 1h 35min to 5h 15min, using pin probes connected to a high precision electronic thermometer (Dostmann-electronic). The measured eye temperatures ranged from 20.2 to 33.1°C. Rectal temperature was measured at the same time and ranged from 32.8 to 37.4°C. Ambient temperatures, which ranged from -1 to 24°C, environmental conditions (still air to light wind) and the amount of hair on the head were also recorded every time. PMI was calculated using a formula based on Newton's law of cooling, previously derived and successfully tested in comprehensive studies on pigs and a few human cases. Thanks to the significantly faster post mortem decrease of eye temperature, the residual or nonexistent plateau effect in the eye, and the practically negligible influence of body mass, TOD in the human death cases could be estimated with good accuracy. Over post mortem intervals of up to around 5h, the largest TOD estimation errors were 1h 16min, 1h 14min and 1h 03min in three of the 30 cases, while for the remaining 27 cases the error did not exceed 47min. The mean error for all 30 cases was ±31min. These results indicate that the proposed method offers good precision in the early post mortem period, with an accuracy of ±1h for a 95% confidence interval. On the basis of the presented method, TOD can also be calculated at the death scene with the use of a proposed portable electronic device (TOD-meter). Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
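
    The PMI calculation rests on inverting Newton's law of cooling; a minimal sketch of that inversion follows, with a placeholder cooling constant rather than the value calibrated in the pig and human studies.

        # Sketch: invert Newton's law of cooling, T(t) = T_amb + (T_0 - T_amb)*exp(-k*t),
        # to get the post mortem interval t from a single eyeball temperature reading.
        # The cooling constant k below is a placeholder, not the study's calibrated value.
        import math

        def pmi_hours(t_eye, t_ambient, t_initial=37.0, k=0.35):
            """Post mortem interval in hours from one eye-temperature measurement."""
            if not (t_ambient < t_eye <= t_initial):
                raise ValueError("eye temperature must lie between ambient and initial values")
            return math.log((t_initial - t_ambient) / (t_eye - t_ambient)) / k

        # Example: eye at 28.5 C, ambient 18.0 C -> roughly 1.7 h with k = 0.35 1/h.
        print(f"estimated PMI: {pmi_hours(28.5, 18.0):.1f} h")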

  19. Error correcting coding-theory for structured light illumination systems

    NASA Astrophysics Data System (ADS)

    Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben

    2017-06-01

    Intensity discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error correcting code is advantageous in many ways: it reduces the effect of random intensity noise, it corrects outliers near the border of the fringe that are commonly present when using intensity discrete patterns, and it provides robustness in case of severe measurement errors (even for burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in critical safety applications, e.g., the monitoring of deformations of components in nuclear power plants, where high reliability is ensured even in case of short measurement disruptions. A special form of burst error is the so-called salt and pepper noise, which can largely be removed with error correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
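
    To make the error-correction idea concrete, the sketch below applies a generic Hamming(7,4) single-error-correcting code to a 4-bit grey-level code word; the specific code and pattern design used by the authors are not reproduced here.

        # Sketch: Hamming(7,4) encoding and single-bit error correction, as a generic
        # stand-in for the error-correcting codes applied to per-pixel grey-level
        # sequences in structured light (not the authors' specific code design).
        import numpy as np

        G = np.array([[1, 0, 0, 0, 1, 1, 0],      # generator matrix (data | parity)
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
        H = np.array([[1, 1, 0, 1, 1, 0, 0],      # parity-check matrix
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])

        def encode(bits4):
            return (np.array(bits4) @ G) % 2

        def correct(word7):
            syndrome = (H @ word7) % 2
            if syndrome.any():                     # locate the flipped bit via the syndrome
                col = np.where((H.T == syndrome).all(axis=1))[0][0]
                word7 = word7.copy()
                word7[col] ^= 1
            return word7[:4]                       # systematic code: data bits come first

        code = encode([1, 0, 1, 1])
        noisy = code.copy()
        noisy[5] ^= 1                              # flip one bit (e.g., an intensity outlier)
        print("decoded:", correct(noisy))          # -> [1 0 1 1]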

  20. The Influence of Screening for Precancerous Lesions on Family-Based Genetic Association Tests: An Example of Colorectal Polyps and Cancer.

    PubMed

    Schmit, Stephanie L; Figueiredo, Jane C; Cortessis, Victoria K; Thomas, Duncan C

    2015-10-15

    Unintended consequences of secondary prevention include potential introduction of bias into epidemiologic studies estimating genotype-disease associations. To better understand such bias, we simulated a family-based study of colorectal cancer (CRC), which can be prevented by resecting screen-detected polyps. We simulated genes related to CRC development through risk of polyps (G1), risk of CRC but not polyps (G2), and progression from polyp to CRC (G3). Then, we examined 4 analytical strategies for studying diseases subject to secondary prevention, comparing the following: 1) CRC cases with all controls, without adjusting for polyp history; 2) CRC cases with controls, adjusting for polyp history; 3) CRC cases with only polyp-free controls; and 4) cases with either CRC or polyps with controls having neither. Strategy 1 yielded estimates of association between CRC and each G that were not substantially biased. Strategies 2-4 yielded biased estimates varying in direction according to analysis strategy and gene type. Type I errors were correct, but strategy 1 provided greater power for estimating associations with G2 and G3. We also applied each strategy to case-control data from the Colon Cancer Family Registry (1997-2007). Generally, the best analytical option balancing bias and power is to compare all CRC cases with all controls, ignoring polyps. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. Bradley Fighting Vehicle Gunnery: An Analysis of Engagement Strategies for the M242 25-mm Automatic Gun

    DTIC Science & Technology

    1993-03-01

    source for this estimate of eight rounds per BMP target. According to analyst Donna Quirido, AMSAA does not provide or support any such estimate (30...engagement or in the case of. the Bradley, stabilization inaccuracies. According to Helgert: These errors give rise to aim-wander, a term that derives from...the same area. (6:14_5) The resulting approximation to the truncated normal integral has a maximum relative error of 0.0075. Using Polya -Williams, an

  2. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002

  3. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  4. Fatal overdoses involving hydromorphone and morphine among inpatients: a case series

    PubMed Central

    Lowe, Amanda; Hamilton, Michael; Greenall BScPhm MHSc, Julie; Ma, Jessica; Dhalla, Irfan; Persaud, Nav

    2017-01-01

    Background: Opioids have narrow therapeutic windows, and errors in ordering or administration can be fatal. The purpose of this study was to describe deaths involving hydromorphone and morphine, which have similar-sounding names, but different potencies. Methods: In this case series, we describe deaths of patients admitted to hospital or residents of long-term care facilities that involved hydromorphone and morphine. We searched for deaths referred to the Patient Safety Review Committee of the Office of the Chief Coroner for Ontario between 2007 and 2012, and subsequently reviewed by 2014. We reviewed each case to identify intervention points where errors could have been prevented. Results: We identified 8 cases involving decedents aged 19 to 91 years. The cases involved errors in prescribing, order processing and transcription, dispensing, administration and monitoring. For 7 of the 8 cases, there were multiple (2 or more) possible intervention points. Six cases may have been prevented by additional patient monitoring, and 5 cases involved dispensing errors. Interpretation: Opioid toxicity deaths in patients living in institutions can be prevented at multiple points in the prescribing and dispensing processes. Interventions aimed at preventing errors in hydromorphone and morphine prescribing, administration and patient monitoring should be implemented and rigorously evaluated. PMID:28401133

  5. Fatal overdoses involving hydromorphone and morphine among inpatients: a case series.

    PubMed

    Lowe, Amanda; Hamilton, Michael; Greenall BScPhm MHSc, Julie; Ma, Jessica; Dhalla, Irfan; Persaud, Nav

    2017-01-01

    Opioids have narrow therapeutic windows, and errors in ordering or administration can be fatal. The purpose of this study was to describe deaths involving hydromorphone and morphine, which have similar-sounding names, but different potencies. In this case series, we describe deaths of patients admitted to hospital or residents of long-term care facilities that involved hydromorphone and morphine. We searched for deaths referred to the Patient Safety Review Committee of the Office of the Chief Coroner for Ontario between 2007 and 2012, and subsequently reviewed by 2014. We reviewed each case to identify intervention points where errors could have been prevented. We identified 8 cases involving decedents aged 19 to 91 years. The cases involved errors in prescribing, order processing and transcription, dispensing, administration and monitoring. For 7 of the 8 cases, there were multiple (2 or more) possible intervention points. Six cases may have been prevented by additional patient monitoring, and 5 cases involved dispensing errors. Opioid toxicity deaths in patients living in institutions can be prevented at multiple points in the prescribing and dispensing processes. Interventions aimed at preventing errors in hydromorphone and morphine prescribing, administration and patient monitoring should be implemented and rigorously evaluated.

  6. Changes to Hospital Inpatient Volume After Newspaper Reporting of Medical Errors.

    PubMed

    Fukuda, Haruhisa

    2017-06-30

    The aim of this study was to investigate the influence of medical error case reporting by national newspapers on inpatient volume at acute care hospitals. A case-control study was conducted using the article databases of 3 major Japanese newspapers with nationwide circulation between fiscal years 2012 and 2013. Data on inpatient volume at acute care hospitals were obtained from a Japanese government survey between fiscal years 2011 and 2014. Panel data were constructed and analyzed using a difference-in-differences design. The setting was acute care hospitals in Japan. Hospitals named in articles that included the terms "medical error" and "hospital" were designated case hospitals and were matched with control hospitals on location, nurse-to-patient ratio, and bed number. The exposure was medical error case reporting in newspapers, and the main outcome was the change in hospital inpatient volume after the error reports. The sample comprised 40 case hospitals and 40 control hospitals. Difference-in-differences analyses indicated that newspaper reporting of medical errors was not significantly associated (P = 0.122) with overall inpatient volume. Medical error case reporting by newspapers showed no influence on inpatient volume. Hospitals therefore have little incentive to respond adequately and proactively to medical errors. There may be a need for government intervention to improve the post-error response and encourage better health care safety.

  7. Obstetric Neuraxial Drug Administration Errors: A Quantitative and Qualitative Analytical Review.

    PubMed

    Patel, Santosh; Loveridge, Robert

    2015-12-01

    Drug administration errors in obstetric neuraxial anesthesia can have devastating consequences. Although fully recognizing that they represent "only the tip of the iceberg," published case reports/series of these errors were reviewed in detail with the aim of estimating the frequency and the nature of these errors. We identified case reports and case series from MEDLINE and performed a quantitative analysis of the involved drugs, error setting, source of error, the observed complications, and any therapeutic interventions. We subsequently performed a qualitative analysis of the human factors involved and proposed modifications to practice. Twenty-nine cases were identified. Various drugs were given in error, but no direct effects on the course of labor, mode of delivery, or neonatal outcome were reported. Four maternal deaths from the accidental intrathecal administration of tranexamic acid were reported, all occurring after delivery of the fetus. A range of hemodynamic and neurologic signs and symptoms were noted, but the most commonly reported complication was the failure of the intended neuraxial anesthetic technique. Several human factors were present; the most common were drug storage issues and similar drug appearance. Four practice recommendations were identified as being likely to have prevented the errors. The reported errors exposed latent conditions within health care systems. We suggest that the implementation of the following processes may decrease the risk of these types of drug errors: (1) Careful reading of the label on any drug ampule or syringe before the drug is drawn up or injected; (2) labeling all syringes; (3) checking labels with a second person or a device (such as a barcode reader linked to a computer) before the drug is drawn up or administered; and (4) use of non-Luer lock connectors on all epidural/spinal/combined spinal-epidural devices. Further study is required to determine whether routine use of these processes will reduce drug error.

  8. Comparisons of two moments‐based estimators that utilize historical and paleoflood data for the log Pearson type III distribution

    USGS Publications Warehouse

    England, John F.; Salas, José D.; Jarrett, Robert D.

    2003-01-01

    The expected moments algorithm (EMA) [Cohn et al., 1997] and the Bulletin 17B [Interagency Committee on Water Data, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed‐threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution. EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed‐threshold exceedance cases. EMA performed comparatively much better in other fixed‐threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV‐simulated data confirmed that EMA performed better than B17H. Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.
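
    For context, the sketch below fits a log Pearson type III distribution to synthetic log-transformed peak flows by ordinary sample moments and reads off the 100-year quantile X100; it illustrates only the basic moment fit, not the EMA or B17H historical-weighting procedures compared in the study.

        # Sketch: log Pearson type III quantile from sample moments of log10 peak flows.
        # This is the plain moment fit only, not the EMA or B17H historical weighting.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        log_q = rng.normal(loc=3.0, scale=0.25, size=60)        # synthetic log10 annual peaks
        mean, std = log_q.mean(), log_q.std(ddof=1)
        skew = stats.skew(log_q, bias=False)                    # station skew of the logs

        # scipy's pearson3 is parameterized by (skew, loc, scale) of the fitted variable.
        x100_log = stats.pearson3.ppf(1.0 - 1.0 / 100.0, skew, loc=mean, scale=std)
        print(f"estimated 100-year peak flow X100 ~ {10 ** x100_log:,.0f} (flow units)")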

  9. Comparisons of two moments-based estimators that utilize historical and paleoflood data for the log Pearson type III distribution

    NASA Astrophysics Data System (ADS)

    England, John F.; Salas, José D.; Jarrett, Robert D.

    2003-09-01

    The expected moments algorithm (EMA) [Cohn et al., 1997] and the Bulletin 17B [Interagency Committee on Water Data, 1982] historical weighting procedure (B17H) for the log Pearson type III distribution are compared by Monte Carlo computer simulation for cases in which historical and/or paleoflood data are available. The relative performance of the estimators was explored for three cases: fixed-threshold exceedances, a fixed number of large floods, and floods generated from a different parent distribution. EMA can effectively incorporate four types of historical and paleoflood data: floods where the discharge is explicitly known, unknown discharges below a single threshold, floods with unknown discharge that exceed some level, and floods with discharges described in a range. The B17H estimator can utilize only the first two types of historical information. Including historical/paleoflood data in the simulation experiments significantly improved the quantile estimates in terms of mean square error and bias relative to using gage data alone. EMA performed significantly better than B17H in nearly all cases considered. B17H performed as well as EMA for estimating X100 in some limited fixed-threshold exceedance cases. EMA performed comparatively much better in other fixed-threshold situations, for the single large flood case, and in cases when estimating extreme floods equal to or greater than X500. B17H did not fully utilize historical information when the historical period exceeded 200 years. Robustness studies using GEV-simulated data confirmed that EMA performed better than B17H. Overall, EMA is preferred to B17H when historical and paleoflood data are available for flood frequency analysis.

  10. Degradation analysis in the estimation of photometric redshifts from non-representative training sets

    NASA Astrophysics Data System (ADS)

    Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.

    2018-07-01

    We perform an analysis of photometric redshifts estimated by using non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations and in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, using either magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split into samples by cuts in the r band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single-point z-phot estimates in the complete training set case; however, the photometric redshifts estimated with the ANNz2 algorithm allow us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte Carlo process, we manage to define a photometric estimator which fits the spectroscopic distribution of galaxies in the mock testing set well, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via the density of galaxies in a field by using the photometric redshifts obtained with a non-representative training set.
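
    The colour-space training idea can be illustrated with a generic regressor on synthetic photometry, as in the sketch below; this is not ANNz2 or GPz, and the bands, colours, and redshift relation are fabricated for illustration.

        # Sketch: train a generic regressor on colours (magnitude differences) to predict
        # redshift. Synthetic data; not the ANNz2/GPz algorithms used in the study.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(2)
        n = 3000
        z = rng.uniform(0.0, 1.2, size=n)                       # "spectroscopic" redshifts
        slopes = np.array([2.5, 2.0, 1.6, 1.3, 1.1])            # different z-dependence per band
        mags = 22.0 + np.outer(z, slopes) + rng.normal(0, 0.05, size=(n, 5))
        colours = -np.diff(mags, axis=1)                        # adjacent-band colours track z

        X_train, X_test, z_train, z_test = train_test_split(colours, z, test_size=0.3, random_state=0)
        model = GradientBoostingRegressor().fit(X_train, z_train)
        z_phot = model.predict(X_test)
        sigma = np.std((z_phot - z_test) / (1.0 + z_test))
        print(f"normalized scatter std((z_phot - z_spec)/(1+z_spec)) = {sigma:.3f}")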

  11. Degradation analysis in the estimation of photometric redshifts from non-representative training sets

    NASA Astrophysics Data System (ADS)

    Rivera, J. D.; Moraes, B.; Merson, A. I.; Jouvel, S.; Abdalla, F. B.; Abdalla, M. C. B.

    2018-04-01

    We perform an analysis of photometric redshifts estimated by using a non-representative training sets in magnitude space. We use the ANNz2 and GPz algorithms to estimate the photometric redshift both in simulations as well as in real data from the Sloan Digital Sky Survey (DR12). We show that for the representative case, the results obtained by using both algorithms have the same quality, either using magnitudes or colours as input. In order to reduce the errors when estimating the redshifts with a non-representative training set, we perform the training in colour space. We estimate the quality of our results by using a mock catalogue which is split samples cuts in the r-band between 19.4 < r < 20.8. We obtain slightly better results with GPz on single point z-phot estimates in the complete training set case, however the photometric redshifts estimated with ANNz2 algorithm allows us to obtain mildly better results in deeper r-band cuts when estimating the full redshift distribution of the sample in the incomplete training set case. By using a cumulative distribution function and a Monte-Carlo process, we manage to define a photometric estimator which fits well the spectroscopic distribution of galaxies in the mock testing set, but with a larger scatter. To complete this work, we perform an analysis of the impact on the detection of clusters via density of galaxies in a field by using the photometric redshifts obtained with a non-representative training set.

  12. Fusion of magnetometer and gradiometer sensors of MEG in the presence of multiplicative error.

    PubMed

    Mohseni, Hamid R; Woolrich, Mark W; Kringelbach, Morten L; Luckhoo, Henry; Smith, Penny Probert; Aziz, Tipu Z

    2012-07-01

    Novel neuroimaging techniques have provided unprecedented information on the structure and function of the living human brain. Multimodal fusion of data from different sensors promises to radically improve this understanding, yet optimal methods have not been developed. Here, we demonstrate a novel method for combining multichannel signals. We show how this method can be used to fuse signals from the magnetometer and gradiometer sensors used in magnetoencephalography (MEG), and through extensive experiments using simulation, head phantom and real MEG data, show that it is both robust and accurate. This new approach works by assuming that the lead fields have multiplicative error. The criterion to estimate the error is given within a spatial filter framework such that the estimated power is minimized in the worst case scenario. The method is compared to, and found better than, existing approaches. The closed-form solution and the conditions under which the multiplicative error can be optimally estimated are provided. This novel approach can also be employed for multimodal fusion of other multichannel signals such as MEG and EEG. Although the multiplicative error is estimated based on beamforming, other methods for source analysis can equally be used after the lead-field modification.
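
    The spatial-filter criterion referred to above can be illustrated with a standard unit-gain minimum-variance beamformer weight computation; the sketch below omits the multiplicative lead-field error and the worst-case minimization that constitute the paper's contribution.

        # Sketch: unit-gain minimum-variance beamformer weights w = C^-1 l / (l^T C^-1 l)
        # for a single source, given a sensor covariance C and a lead-field vector l.
        # The multiplicative lead-field error handled in the paper is not modelled here.
        import numpy as np

        rng = np.random.default_rng(3)
        n_sensors = 32
        A = rng.normal(size=(n_sensors, n_sensors))
        C = A @ A.T / n_sensors + 0.1 * np.eye(n_sensors)       # regularized data covariance
        l = rng.normal(size=n_sensors)                          # lead field of the target source

        Cinv_l = np.linalg.solve(C, l)
        w = Cinv_l / (l @ Cinv_l)                               # spatial filter weights
        print("unit gain on the modelled source:", np.round(w @ l, 6))
        print("output power estimate:", 1.0 / (l @ Cinv_l))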

  13. Improving Forecast Skill by Assimilation of AIRS Temperature Soundings

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Reale, Oreste

    2010-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU-A are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The AIRS Version 5 retrieval algorithm is now being used operationally at the Goddard DISC in the routine generation of geophysical parameters derived from AIRS/AMSU data. A major innovation in Version 5 is the ability to generate case-by-case, level-by-level error estimates delta T(p) for retrieved quantities and the use of these error estimates for Quality Control. We conducted a number of data assimilation experiments using the NASA GEOS-5 Data Assimilation System as a step toward finding an optimum balance of spatial coverage and sounding accuracy with regard to improving forecast skill. The model was run at a horizontal resolution of 0.5 deg latitude x 0.67 deg longitude with 72 vertical levels. These experiments were run during four different seasons, each using a different year. The AIRS temperature profiles were presented to the GEOS-5 analysis as rawinsonde profiles, and the profile error estimates delta T(p) were used as the uncertainty for each measurement in the data assimilation process. We compared forecasts generated from analyses produced by assimilating AIRS temperature profiles under three different sets of Quality Control thresholds: Standard, Medium, and Tight. Assimilation of Quality Controlled AIRS temperature profiles significantly improved 5-7 day forecast skill compared to that obtained without the benefit of AIRS data in all of the cases studied. In addition, assimilation of Quality Controlled AIRS temperature soundings performs better than assimilation of AIRS observed radiances. Based on the experiments shown, Tight Quality Control of AIRS temperature profiles performs best on average from the perspective of improving global 7-day forecast skill.

  14. Representing radar rainfall uncertainty with ensembles based on a time-variant geostatistical error modelling approach

    NASA Astrophysics Data System (ADS)

    Cecinati, Francesca; Rico-Ramirez, Miguel Angel; Heuvelink, Gerard B. M.; Han, Dawei

    2017-05-01

    The application of radar quantitative precipitation estimation (QPE) to hydrology and water quality models can be preferred to interpolated rainfall point measurements because of the wide coverage that radars can provide, together with good spatio-temporal resolution. Nonetheless, it is often limited by the proneness of radar QPE to a multitude of errors. Although radar errors have been widely studied and techniques have been developed to correct most of them, residual errors are still intrinsic in radar QPE. An estimation of the uncertainty of radar QPE and an assessment of uncertainty propagation in modelling applications are important to quantify the relative importance of the uncertainty associated with the radar rainfall input in the overall modelling uncertainty. A suitable tool for this purpose is the generation of radar rainfall ensembles. An ensemble is the representation of the rainfall field and its uncertainty through a collection of possible alternative rainfall fields, produced according to the observed errors, their spatial characteristics, and their probability distribution. The errors are derived from a comparison between radar QPE and ground point measurements. The novelty of the proposed ensemble generator is that it is based on a geostatistical approach that ensures fast and robust generation of synthetic error fields, based on the time-variant characteristics of the errors. The method is developed to meet the requirements of operational application to large datasets. The method is applied to a case study in Northern England, using the UK Met Office NIMROD radar composites at 1 km resolution and at 1 h accumulation on an area of 180 km by 180 km. The errors are estimated using a network of 199 tipping bucket rain gauges from the Environment Agency. 183 of the rain gauges are used for the error modelling, while 16 are kept apart for validation. The validation is done by comparing the radar rainfall ensemble with the values recorded by the validation rain gauges. The validated ensemble is then tested on a hydrological case study, to show the advantage of probabilistic rainfall for uncertainty propagation. The ensemble spread only partially captures the mismatch between the modelled and the observed flow. The residual uncertainty can be attributed to other sources of uncertainty, in particular to model structural uncertainty, parameter identification uncertainty, uncertainty in other inputs, and uncertainty in the observed flow.
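
    A minimal sketch of simulating spatially correlated error fields is given below, using an assumed exponential covariance and a Cholesky factorization on a small grid; the paper's generator is additionally time-variant and conditioned on gauge-radar error statistics, which is not reproduced here.

        # Sketch: generate spatially correlated synthetic error fields on a small grid
        # from an exponential covariance model (Cholesky simulation). The operational
        # generator in the paper is time-variant and fitted to gauge-radar errors.
        import numpy as np

        n, dx, range_km, sill = 30, 1.0, 15.0, 1.0              # 30x30 km grid, 1 km cells
        xy = np.array([(i * dx, j * dx) for i in range(n) for j in range(n)])
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        C = sill * np.exp(-d / range_km)                        # exponential covariance
        L = np.linalg.cholesky(C + 1e-8 * np.eye(n * n))        # jitter for numerical stability

        rng = np.random.default_rng(4)
        ensemble = [(L @ rng.standard_normal(n * n)).reshape(n, n) for _ in range(5)]
        print("one ensemble member, field std ~", round(float(ensemble[0].std()), 2))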

  15. Using Fault Trees to Advance Understanding of Diagnostic Errors.

    PubMed

    Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep

    2017-11-01

    Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
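
    For independent basic events, the quantitative side of FTA reduces to simple AND/OR gate algebra; the toy sketch below uses made-up probabilities purely for illustration.

        # Sketch: top-event probability for a toy diagnostic-error fault tree with
        # independent basic events (probabilities are illustrative, not empirical).
        from math import prod

        def and_gate(*p):                 # all inputs must fail
            return prod(p)

        def or_gate(*p):                  # at least one input fails
            return 1.0 - prod(1.0 - q for q in p)

        p_history_missed   = 0.05
        p_test_not_ordered = 0.10
        p_result_not_seen  = 0.02
        p_follow_up_failed = 0.08

        # Error occurs if information gathering fails OR the result is lost AND follow-up fails.
        p_diagnostic_error = or_gate(p_history_missed,
                                     p_test_not_ordered,
                                     and_gate(p_result_not_seen, p_follow_up_failed))
        print(f"top-event probability ~ {p_diagnostic_error:.3f}")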

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morley, Steven

    The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides a generic skill score and percent better. Robust measures of scale including the median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
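
    For orientation, a few of the continuous-predictand metrics listed above can be computed directly with NumPy, as in the sketch below; it deliberately avoids guessing the PyForecastTools API, and the median symmetric accuracy and symmetric signed bias formulas follow the commonly cited definitions rather than the package's exact implementation.

        # Sketch: a few of the listed verification metrics computed with plain NumPy.
        import numpy as np

        obs  = np.array([3.1, 4.0, 5.2, 2.8, 6.1])
        pred = np.array([2.9, 4.6, 4.8, 3.3, 7.0])
        err = pred - obs

        bias_me = err.mean()                                    # mean error (bias)
        mae     = np.abs(err).mean()                            # mean absolute error
        rmse    = np.sqrt((err ** 2).mean())                    # root mean squared error
        log_q   = np.log(pred / obs)                            # requires positive data
        msa     = 100.0 * (np.exp(np.median(np.abs(log_q))) - 1.0)   # median symmetric accuracy, %
        ssb     = 100.0 * np.sign(np.median(log_q)) * (np.exp(np.abs(np.median(log_q))) - 1.0)

        print(f"ME={bias_me:+.2f}  MAE={mae:.2f}  RMSE={rmse:.2f}  MSA={msa:.1f}%  SSB={ssb:+.1f}%")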

  17. Improving Forecast Skill by Assimilation of Quality Controlled AIRS Version 5 Temperature Soundings

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Reale, Oreste

    2009-01-01

    The AIRS Science Team Version 5 retrieval algorithm has been finalized and is now operational at the Goddard DAAC in the processing (and reprocessing) of all AIRS data. The AIRS Science Team Version 5 retrieval algorithm contains two significant improvements over Version 4: 1) Improved physics allows for use of AIRS observations in the entire 4.3 micron CO2 absorption band in the retrieval of temperature profile T(p) during both day and night. Tropospheric sounding 15 micron CO2 observations are now used primarily in the generation of cloud cleared radiances R(sub i). This approach allows for the generation of accurate values of R(sub i) and T(p) under most cloud conditions. 2) Another very significant improvement in Version 5 is the ability to generate accurate case-by-case, level-by-level error estimates for the atmospheric temperature profile, as well as channel-by-channel error estimates for R(sub i). These error estimates are used for Quality Control of the retrieved products. We have conducted forecast impact experiments assimilating AIRS temperature profiles with different levels of Quality Control using the NASA GEOS-5 data assimilation system. Assimilation of Quality Controlled T(p) resulted in significantly improved forecast skill compared to that obtained from analyses in which all data used operationally by NCEP, except for AIRS data, are assimilated. We also conducted an experiment assimilating AIRS radiances uncontaminated by clouds, as done operationally by ECMWF and NCEP. Forecasts resulting from assimilation of AIRS radiances were of poorer quality than those obtained by assimilating AIRS temperatures.

  18. Minimax Quantum Tomography: Estimators and Relative Entropy Bounds

    DOE PAGES

    Ferrie, Christopher; Blume-Kohout, Robin

    2016-03-04

    A minimax estimator has the minimum possible error (“risk”) in the worst case. Here we construct the first minimax estimators for quantum state tomography with relative entropy risk. The minimax risk of nonadaptive tomography scales as O(1/√N), in contrast to that of classical probability estimation, which is O(1/N), where N is the number of copies of the quantum state used. We trace this deficiency to sampling mismatch: future observations that determine risk may come from a different sample space than the past data that determine the estimate. Lastly, this makes minimax estimators very biased, and we propose a computationally tractable alternative with similar behavior in the worst case, but superior accuracy on most states.

  19. Analysis of case-only studies accounting for genotyping error.

    PubMed

    Cheng, K F

    2007-03-01

    The case-only design provides one approach to assess possible interactions between genetic and environmental factors. It has been shown that if these factors are conditionally independent, then a case-only analysis is not only valid but also very efficient. However, a drawback of the case-only approach is that its conclusions may be biased by genotyping errors. In this paper, our main aim is to propose a method for analysis of case-only studies when these errors occur. We show that the bias can be adjusted through the use of internal validation data, which are obtained by genotyping some sampled individuals twice. Our analysis is based on a simple and yet highly efficient conditional likelihood approach. Simulation studies considered in this paper confirm that the new method has acceptable performance under genotyping errors.
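
    As background, the basic (error-free) case-only interaction estimate is obtained by regressing genotype on exposure among cases only; the sketch below shows this on simulated data and does not implement the paper's validation-data correction for genotyping error.

        # Sketch: the basic case-only estimator of gene-environment interaction, i.e. a
        # logistic regression of genotype on exposure among cases only (no genotyping-
        # error correction, which is the paper's actual contribution). Simulated data.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n_cases = 2000
        exposure = rng.binomial(1, 0.4, n_cases)
        # Among cases, genotype prevalence depends on exposure when an interaction exists.
        p_g = 1.0 / (1.0 + np.exp(-(-1.0 + 0.5 * exposure)))     # true interaction log-OR = 0.5
        genotype = rng.binomial(1, p_g)

        X = sm.add_constant(exposure.astype(float))
        fit = sm.Logit(genotype, X).fit(disp=0)
        print(f"estimated interaction log-OR = {fit.params[1]:+.3f} (true 0.5)")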

  20. Accommodating Sensor Bias in MRAC for State Tracking

    NASA Technical Reports Server (NTRS)

    Patre, Parag; Joshi, Suresh M.

    2011-01-01

    The problem of accommodating unknown sensor bias is considered in a direct model reference adaptive control (MRAC) setting for state tracking using state feedback. Sensor faults can occur during operation, and if the biased state measurements are directly used with a standard MRAC control law, neither closed-loop signal boundedness, nor asymptotic tracking can be guaranteed and the resulting tracking errors may be unbounded or unacceptably large. A modified MRAC law is proposed, which combines a bias estimator with control gain adaptation, and it is shown that signal boundedness can be accomplished, although the tracking error may not go to zero. Further, for the case wherein an asymptotically stable sensor bias estimator is available, an MRAC control law is proposed to accomplish asymptotic tracking and signal boundedness. Such a sensor bias estimator can be designed if additional sensor measurements are available, as illustrated for the case wherein bias is present in the rate gyro and airspeed measurements. Numerical example results are presented to illustrate each of the schemes.
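
    To fix ideas, the following is a minimal simulation of a standard first-order MRAC state-tracking law with clean (unbiased) state measurements, using Lyapunov-based adaptation and forward-Euler integration; the sensor-bias estimator that is the subject of the paper is not included, and all plant and gain values are illustrative.

        # Sketch: standard first-order MRAC for state tracking with clean measurements.
        # Plant: xdot = a*x + b*u (a unknown, b > 0); reference model: xmdot = -am*xm + am*r.
        # Lyapunov-based adaptation: thx_dot = -g*e*x, thr_dot = -g*e*r, with e = x - xm.
        # The sensor-bias accommodation proposed in the paper is NOT modelled here.
        import numpy as np

        a, b, am, gamma, dt, T = 1.0, 2.0, 3.0, 5.0, 1e-3, 10.0
        x = xm = 0.0
        thx = thr = 0.0                                   # adaptive gains

        for k in range(int(T / dt)):
            t = k * dt
            r = 1.0 if (t % 4.0) < 2.0 else -1.0          # square-wave reference
            e = x - xm                                    # state tracking error
            u = thx * x + thr * r                         # adaptive control law
            x_dot, xm_dot = a * x + b * u, -am * xm + am * r
            thx_dot, thr_dot = -gamma * e * x, -gamma * e * r
            x, xm = x + dt * x_dot, xm + dt * xm_dot      # forward-Euler integration
            thx, thr = thx + dt * thx_dot, thr + dt * thr_dot

        print(f"final tracking error |e| = {abs(x - xm):.4f}")
        print(f"adapted gains thx = {thx:+.3f} (matching value {-(am + a) / b:+.3f}), "
              f"thr = {thr:+.3f} (matching value {am / b:+.3f})")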

  1. A method of bias correction for maximal reliability with dichotomous measures.

    PubMed

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  2. Estimation of geopotential differences over intercontinental locations using satellite and terrestrial measurements. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Pavlis, Nikolaos K.

    1991-01-01

    An error analysis study was conducted in order to assess the current accuracies and the future anticipated improvements in the estimation of geopotential differences over intercontinental locations. An observation/estimation scheme was proposed and studied, whereby gravity disturbance measurements on the Earth's surface, in caps surrounding the estimation points, are combined with corresponding data in caps directly over these points at the altitude of a low orbiting satellite, for the estimation of the geopotential difference between the terrestrial stations. The mathematical modeling required to relate the primary observables to the parameters to be estimated, was studied for the terrestrial data and the data at altitude. Emphasis was placed on the examination of systematic effects and on the corresponding reductions that need to be applied to the measurements to avoid systematic errors. The error estimation for the geopotential differences was performed using both truncation theory and least squares collocation with ring averages, in case observations on the Earth's surface only are used. The error analysis indicated that with the currently available global geopotential model OSU89B and with gravity disturbance data in 2 deg caps surrounding the estimation points, the error of the geopotential difference arising from errors in the reference model and the cap data is about 23 kgal cm, for 30 deg station separation.

  3. The Impact of Patient Safety Training on Oral and Maxillofacial Surgery Residents' Attitudes and Knowledge: A Mixed Method Case Study

    ERIC Educational Resources Information Center

    Buhrow, Suzanne

    2013-01-01

    It is estimated that in the United States, more than 40,000 patients are injured each day because of preventable medical errors. Patient safety experts and graduate medical education accreditation leaders recognize that medical education reform must include the integration of safety training focused on error causation, system engineering, and…

  4. A feasibility study in adapting Shamos Bickel and Hodges Lehman estimator into T-Method for normalization

    NASA Astrophysics Data System (ADS)

    Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan

    2018-03-01

    The T-Method is one of the techniques within the Mahalanobis-Taguchi System developed specifically for multivariate prediction. Prediction with the T-Method is possible even with a very limited sample size. Users of the T-Method must clearly understand the trend in the population data, since the method does not account for the effect of outliers. Outliers may cause apparent non-normality, under which classical methods break down. Robust parameter estimates exist that provide satisfactory results both when the data contain outliers and when they do not; among them are the robust location and scale estimators known as the Hodges-Lehmann (HL) and Shamos-Bickel (SB) estimators, which serve as counterparts to the classical mean and standard deviation. Embedding these estimators into the normalization stage of the T-Method may enhance its accuracy and allows the robustness of the T-Method itself to be analysed. For the larger-sample case study, the T-Method gave the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB gave the lowest error percentage (4.67%) on data without extreme outliers, with only small differences relative to the T-Method. The trend in prediction error is reversed for the smaller-sample case study. The results show that with a minimal sample size, where the risk posed by outliers is low, the T-Method performs best, and with a larger sample size containing extreme outliers the T-Method also predicts better than the alternatives. For the case studies conducted in this research, normalization with the T-Method gives satisfactory results, and adapting HL and SB (or the ordinary mean and standard deviation) into it is not worthwhile, since doing so changes the error percentages only marginally. Normalization using the T-Method is still considered to carry a lower risk with respect to the effect of outliers.
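
    For reference, the two robust estimators named above can be written in a few lines; the sketch below uses the usual textbook definitions (median of Walsh averages for the Hodges-Lehmann location, median pairwise absolute difference with a normal-consistency factor for the Shamos scale).

        # Sketch: Hodges-Lehmann location estimate (median of Walsh averages) and the
        # Shamos scale estimate (median of pairwise absolute differences, rescaled to
        # be consistent with the standard deviation under normality).
        import numpy as np
        from itertools import combinations

        def hodges_lehmann(x):
            walsh = [(a + b) / 2.0 for a, b in combinations(x, 2)] + list(x)
            return float(np.median(walsh))

        def shamos_scale(x):
            diffs = [abs(a - b) for a, b in combinations(x, 2)]
            return 1.048358 * float(np.median(diffs))   # 1/(sqrt(2)*Phi^-1(0.75)) scaling

        x = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.1, 30.0])    # one gross outlier
        print("mean / std  :", round(x.mean(), 2), round(x.std(ddof=1), 2))
        print("HL / Shamos :", round(hodges_lehmann(x), 2), round(shamos_scale(x), 2))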

  5. Performability modeling based on real data: A case study

    NASA Technical Reports Server (NTRS)

    Hsueh, M. C.; Iyer, R. K.; Trivedi, K. S.

    1988-01-01

    Described is a measurement-based performability model based on error and resource usage data collected on a multiprocessor system. A method for identifying the model structure is introduced and the resulting model is validated against real data. Model development from the collection of raw data to the estimation of the expected reward is described. Both normal and error behavior of the system are characterized. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model system behavior. A reward function, based on the service rate and the error rate in each state, is then defined in order to estimate the performability of the system and to depict the cost of apparent types of errors.

  6. Empirically Estimable Classification Bounds Based on a Nonparametric Divergence Measure

    PubMed Central

    Berisha, Visar; Wisler, Alan; Hero, Alfred O.; Spanias, Andreas

    2015-01-01

    Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. PMID:26807014

  7. Calibration of Ocean Forcing with satellite Flux Estimates (COFFEE)

    NASA Astrophysics Data System (ADS)

    Barron, Charlie; Jan, Dastugue; Jackie, May; Rowley, Clark; Smith, Scott; Spence, Peter; Gremes-Cordero, Silvia

    2016-04-01

    Predicting the evolution of ocean temperature in regional ocean models depends on estimates of surface heat fluxes and upper-ocean processes over the forecast period. Within the COFFEE project (Calibration of Ocean Forcing with satellite Flux Estimates), real-time satellite observations are used to estimate shortwave, longwave, sensible, and latent air-sea heat flux corrections to a background estimate from the prior day's regional or global model forecast. These satellite-corrected fluxes are used to prepare a corrected ocean hindcast and to estimate flux error covariances to project the heat flux corrections for a 3-5 day forecast. In this way, satellite remote sensing is applied to not only inform the initial ocean state but also to mitigate errors in surface heat flux and model representations affecting the distribution of heat in the upper ocean. While traditional assimilation of sea surface temperature (SST) observations re-centers ocean models at the start of each forecast cycle, COFFEE endeavors to appropriately partition and reduce error among various surface heat flux and ocean dynamics sources. A suite of experiments in the southern California Current demonstrates a range of COFFEE capabilities, showing the impact on forecast error relative to a baseline three-dimensional variational (3DVAR) assimilation using operational global or regional atmospheric forcing. Experiment cases combine different levels of flux calibration with assimilation alternatives. The cases use the original fluxes, apply full satellite corrections during the forecast period, or extend hindcast corrections into the forecast period. Assimilation is either baseline 3DVAR or standard strong-constraint 4DVAR, with work proceeding to add a 4DVAR expanded to include a weak constraint treatment of the surface flux errors. Covariance of flux errors is estimated from the recent time series of forecast and calibrated flux terms. While the California Current examples are shown, the approach is equally applicable to other regions. These approaches within a 3DVAR application are anticipated to be useful for global and larger regional domains where a full 4DVAR methodology may be cost-prohibitive.

  8. Estimation of regression laws for ground motion parameters using as case of study the Amatrice earthquake

    NASA Astrophysics Data System (ADS)

    Tiberi, Lara; Costa, Giovanni

    2017-04-01

    Directly associating damage with ground motion parameters is a long-standing challenge, in particular for civil protection agencies. A ground motion parameter estimated in near real time that can express the damage occurring after an earthquake is fundamental for organizing first assistance after an event. The aim of this work is to contribute to identifying the ground motion parameter that best describes the observed intensity immediately after an event. This is done by calculating, for each ground motion parameter estimated in near real time, a regression law that correlates the parameter to the observed macroseismic intensity. The estimation is based on high-quality accelerometric data collected in the near field and filtered at different frequency steps. The regression laws are calculated using two different techniques: the nonlinear least-squares (NLLS) Marquardt-Levenberg algorithm and orthogonal distance regression (ODR). The limits of the first methodology are the need for initial values of the parameters a and b (set to 1.0 in this study) and the constraint that the independent variable must be known with greater accuracy than the dependent variable. The second algorithm is instead based on errors measured perpendicular to the fitted line rather than vertically: vertical errors are errors in the 'y' direction only, i.e., in the dependent variable, whereas perpendicular errors account for errors in both the dependent and independent variables. This also makes it possible to invert the relation directly, so that the a and b values can be used to express the ground motion parameters as functions of I. For each law the standard deviation and R2 value are estimated in order to test the quality and reliability of the relation. The Amatrice earthquake of 24 August 2016 is used as a case study to test the goodness of the calculated regression laws.
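
    A hedged sketch of the two fitting routes on synthetic data follows, using SciPy's Levenberg-Marquardt curve_fit for the NLLS fit and scipy.odr for the orthogonal-distance fit; the functional form I = a + b*log10(GMP) and all numbers are placeholders, not the study's regression laws.

        # Sketch: fit I = a + b*log10(gmp) by (1) Levenberg-Marquardt least squares and
        # (2) orthogonal distance regression, which admits errors in both variables.
        # Synthetic data; the actual regression laws of the study are not reproduced.
        import numpy as np
        from scipy.optimize import curve_fit
        from scipy import odr

        rng = np.random.default_rng(6)
        log_gmp = rng.uniform(-1.0, 2.0, 80)                     # e.g. log10(PGV)
        intensity = 3.0 + 1.8 * log_gmp + rng.normal(0, 0.3, 80) # observed macroseismic I
        log_gmp_obs = log_gmp + rng.normal(0, 0.1, 80)           # the predictor is noisy too

        # (1) NLLS with the Levenberg-Marquardt algorithm (errors on I only).
        f = lambda x, a, b: a + b * x
        (a_lm, b_lm), _ = curve_fit(f, log_gmp_obs, intensity, p0=[1.0, 1.0], method="lm")

        # (2) ODR: perpendicular errors, i.e. uncertainty on both I and log10(gmp).
        model = odr.Model(lambda beta, x: beta[0] + beta[1] * x)
        data = odr.RealData(log_gmp_obs, intensity, sx=np.full(80, 0.1), sy=np.full(80, 0.3))
        out = odr.ODR(data, model, beta0=[1.0, 1.0]).run()

        print(f"LM : a={a_lm:.2f}, b={b_lm:.2f}")
        print(f"ODR: a={out.beta[0]:.2f}, b={out.beta[1]:.2f}")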

  9. Uncertainty quantification in application of the enrichment meter principle for nondestructive assay of special nuclear material

    DOE PAGES

    Burr, Tom; Croft, Stephen; Jarman, Kenneth D.

    2015-09-05

    The various methods of nondestructive assay (NDA) of special nuclear material (SNM) have applications in nuclear nonproliferation, including detection and identification of illicit SNM at border crossings, and quantifying SNM at nuclear facilities for safeguards. No assay method is complete without “error bars,” which provide one way of expressing confidence in the assay result. Consequently, NDA specialists typically quantify total uncertainty in terms of “random” and “systematic” components, and then specify error bars for the total mass estimate in multiple items. Uncertainty quantification (UQ) for NDA has always been important, but it is recognized that greater rigor is needed and achievable using modern statistical methods. To this end, we describe the extent to which the guideline for expressing uncertainty in measurements (GUM) can be used for NDA. Also, we propose improvements over GUM for NDA by illustrating UQ challenges that it does not address, including calibration with errors in predictors, model error, and item-specific biases. A case study is presented using low-resolution NaI spectra and applying the enrichment meter principle to estimate the U-235 mass in an item. The case study illustrates how to update the current American Society for Testing and Materials guide for application of the enrichment meter principle using gamma spectra from a NaI detector.
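
    A first-order, GUM-style propagation of uncertainty for a simplified enrichment-meter relation is sketched below; the measurement equation E = k*R and all numeric values are placeholders chosen for illustration, not values from the case study.

        # Sketch: first-order (GUM-style) uncertainty propagation for a toy enrichment-
        # meter relation E = k * R, where R is the net 186-keV peak count rate and k a
        # calibration constant. All numbers are illustrative placeholders.
        import math

        k, u_k = 0.012, 0.0004          # calibration constant and its standard uncertainty
        R, u_R = 350.0, 6.0             # net count rate (1/s) and its standard uncertainty

        E = k * R                       # estimated enrichment
        # Combined standard uncertainty via quadrature of relative uncertainties,
        # treating k and R as uncorrelated (the GUM first-order approximation).
        u_E = E * math.sqrt((u_k / k) ** 2 + (u_R / R) ** 2)

        print(f"E = {E:.3f} +/- {u_E:.3f} (k = 2 expanded: +/- {2 * u_E:.3f})")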

  10. Estimating the Standard Error of the Impact Estimator in Individually Randomized Trials with Clustering

    ERIC Educational Resources Information Center

    Weiss, Michael J.; Lockwood, J. R.; McCaffrey, Daniel F.

    2016-01-01

    In the "individually randomized group treatment" (IRGT) experimental design, individuals are first randomly assigned to a treatment arm or a control arm, but then within each arm, are grouped together (e.g., within classrooms/schools, through shared case managers, in group therapy sessions, through shared doctors, etc.) to receive…

  11. Modeling misidentification errors that result from use of genetic tags in capture-recapture studies

    USGS Publications Warehouse

    Yoshizaki, J.; Brownie, C.; Pollock, K.H.; Link, W.A.

    2011-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture-recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored. © 2009 Springer Science+Business Media, LLC.
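
    The toy two-occasion simulation below shows the direction of the bias described above: misidentification creates 'ghost' identities and inflates a conventional Lincoln-Petersen estimate. It is not the authors' estimator, and all rates are arbitrary.

        # Sketch: two-occasion capture-recapture with genetic-tag misidentification.
        # A misread sample becomes a "ghost" identity, so recaptures are lost and the
        # Lincoln-Petersen estimate inflates. Toy model, not the paper's estimator.
        import numpy as np

        rng = np.random.default_rng(7)
        N_true, p_capture, p_misid = 500, 0.3, 0.15

        def survey():
            caught = rng.random(N_true) < p_capture
            ids = set()
            for animal in np.flatnonzero(caught):
                if rng.random() < p_misid:
                    ids.add(f"ghost-{rng.integers(10**9)}")       # wrong, never-seen-again identity
                else:
                    ids.add(f"animal-{animal}")
            return ids

        occ1, occ2 = survey(), survey()
        n1, n2, m = len(occ1), len(occ2), len(occ1 & occ2)
        print(f"Lincoln-Petersen estimate: {n1 * n2 / m:.0f} (true N = {N_true})")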

  12. Detecting and quantifying stellar magnetic fields. Sparse Stokes profile approximation using orthogonal matching pursuit

    NASA Astrophysics Data System (ADS)

    Carroll, T. A.; Strassmeier, K. G.

    2014-03-01

    Context. In recent years, we have seen a rapidly growing number of stellar magnetic field detections for various types of stars. Many of these magnetic fields are estimated from spectropolarimetric observations (Stokes V) by using the so-called center-of-gravity (COG) method. Unfortunately, the accuracy of this method rapidly deteriorates with increasing noise and thus calls for a more robust procedure that combines signal detection and field estimation. Aims: We introduce an estimation method that provides not only the effective or mean longitudinal magnetic field from an observed Stokes V profile but also uses the net absolute polarization of the profile to obtain an estimate of the apparent (i.e., velocity resolved) absolute longitudinal magnetic field. Methods: By combining the COG method with an orthogonal-matching-pursuit (OMP) approach, we were able to decompose observed Stokes profiles with an overcomplete dictionary of wavelet-basis functions to reliably reconstruct the observed Stokes profiles in the presence of noise. The elementary wave functions of the sparse reconstruction process were utilized to estimate the effective longitudinal magnetic field and the apparent absolute longitudinal magnetic field. A multiresolution analysis complements the OMP algorithm to provide a robust detection and estimation method. Results: An extensive Monte-Carlo simulation confirms the reliability and accuracy of the magnetic OMP approach, with a mean error of under 2%. Its full potential is obtained for heavily noise-corrupted Stokes profiles with signal-to-noise variance ratios down to unity. In this case a conventional COG method yields a mean error for the effective longitudinal magnetic field of up to 50%, whereas the OMP method gives a maximum error of 18%. It is, moreover, shown that even in the case of very small residual noise on a level between 10^-3 and 10^-5, a regime reached by current multiline reconstruction techniques, the conventional COG method incorrectly interprets a large portion of the residual noise as a magnetic field, with values of up to 100 G. The magnetic OMP method, on the other hand, remains largely unaffected by the noise; regardless of the noise level, the maximum error is no greater than 0.7 G.
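
    The sparse-reconstruction step can be illustrated with a generic orthogonal matching pursuit over a cosine dictionary applied to a synthetic noisy profile; the overcomplete wavelet dictionary, the COG field estimates, and the multiresolution analysis of the paper are not reproduced here.

        # Sketch: denoise a synthetic Stokes-V-like profile by orthogonal matching
        # pursuit over a DCT-style cosine dictionary (a stand-in for the paper's
        # overcomplete wavelet dictionary; no COG field estimation is done here).
        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        n = 256
        t = np.linspace(-1.0, 1.0, n)
        clean = t * np.exp(-25.0 * t ** 2)                       # antisymmetric, Stokes-V-like shape
        rng = np.random.default_rng(8)
        noisy = clean + rng.normal(0, 0.05, n)

        # Cosine dictionary: atom k is cos(pi*k*(i+0.5)/n), normalized to unit norm.
        k = np.arange(n)
        D = np.cos(np.pi * np.outer(np.arange(n) + 0.5, k) / n)
        D /= np.linalg.norm(D, axis=0)

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=12).fit(D, noisy)
        recon = D @ omp.coef_ + omp.intercept_
        print("rms residual vs clean profile:",
              round(float(np.sqrt(np.mean((recon - clean) ** 2))), 4))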

  13. An Anisotropic A posteriori Error Estimator for CFD

    NASA Astrophysics Data System (ADS)

    Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando

    In this article, a robust anisotropic adaptive algorithm is presented, to solve compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The association includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several choices available (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina) giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case, shocks, using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio elements (stretching). To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly-adapted meshes in few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed amount of elements.

  14. On the multivariate total least-squares approach to empirical coordinate transformations. Three algorithms

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard; Felus, Yaron A.

    2008-06-01

    The multivariate total least-squares (MTLS) approach aims at estimating a matrix of parameters, Ξ, from a linear model (Y - E_Y = (X - E_X) · Ξ) that includes an observation matrix, Y, another observation matrix, X, and matrices of randomly distributed errors, E_Y and E_X. Two special cases of the MTLS approach include the standard multivariate least-squares approach where only the observation matrix, Y, is perturbed by random errors and, on the other hand, the data least-squares approach where only the coefficient matrix X is affected by random errors. In a previous contribution, the authors derived an iterative algorithm to solve the MTLS problem by using the nonlinear Euler-Lagrange conditions. In this contribution, new lemmas are developed to analyze the iterative algorithm, modify it, and compare it with a new ‘closed form’ solution that is based on the singular-value decomposition. For an application, the total least-squares approach is used to estimate the affine transformation parameters that convert cadastral data from the old to the new Israeli datum. Technical aspects of this approach, such as scaling the data and fixing the columns in the coefficient matrix are investigated. This case study illuminates the issue of “symmetry” in the treatment of two sets of coordinates for identical point fields, a topic that had already been emphasized by Teunissen (1989, Festschrift to Torben Krarup, Geodetic Institute Bull no. 58, Copenhagen, Denmark, pp 335-342). The differences between the standard least-squares and the TLS approach are analyzed in terms of the estimated variance component and a first-order approximation of the dispersion matrix of the estimated parameters.
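
    The ‘closed form’ SVD solution referred to above has a compact expression: stack the two observation matrices, take the SVD, and build Ξ from the right-singular-vector blocks associated with the smallest singular values. The sketch below is a minimal illustration on a synthetic affine example without a translation term (which would raise the fixed-column issue mentioned in the abstract); the noise levels and distortion matrix are invented.

      import numpy as np

      def mtls(X, Y):
          """SVD-based multivariate total least-squares estimate of Xi in Y - E_Y = (X - E_X) Xi."""
          n, m = X.shape
          d = Y.shape[1]
          Z = np.hstack([X, Y])
          _, _, Vt = np.linalg.svd(Z, full_matrices=False)
          V = Vt.T
          V12 = V[:m, m:]          # top-right block (m x d), columns of the smallest singular values
          V22 = V[m:, m:]          # bottom-right block (d x d)
          return -V12 @ np.linalg.inv(V22)

      # synthetic coordinate-transformation example with errors in both coordinate sets
      rng = np.random.default_rng(0)
      n = 100
      P = rng.uniform(0, 1000, size=(n, 2))                 # "old datum" coordinates
      A_true = np.array([[1.001, 0.002], [-0.002, 0.999]])  # small affine distortion (assumed)
      Q = P @ A_true + rng.normal(0, 0.05, size=(n, 2))     # noisy "new datum" coordinates
      P_obs = P + rng.normal(0, 0.05, size=(n, 2))          # old coordinates are noisy too
      print(mtls(P_obs, Q))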

  15. Application of Consider Covariance to the Extended Kalman Filter

    NASA Technical Reports Server (NTRS)

    Lundberg, John B.

    1996-01-01

    The extended Kalman filter (EKF) is the basis for many applications of filtering theory to real-time problems where estimates of the state of a dynamical system are to be computed based upon some set of observations. The form of the EKF may vary somewhat from one application to another, but the fundamental principles are typically unchanged among these various applications. As is the case in many filtering applications, models of the dynamical system (differential equations describing the state variables) and models of the relationship between the observations and the state variables are created. These models typically employ a set of constants whose values are established by means of theory or experimental procedure. Since the estimates of the state are formed assuming that the models are perfect, any modeling errors will affect the accuracy of the computed estimates. Note that the modeling errors may be errors of commission (errors in terms included in the model) or omission (errors in terms excluded from the model). Consequently, it becomes imperative when evaluating the performance of real-time filters to evaluate the effect of modeling errors on the estimates of the state.
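
    The point about errors of omission can be made concrete with a small simulation. The sketch below uses a plain linear Kalman filter (not a full EKF or a consider-covariance formulation) with an assumed unmodeled constant acceleration and invented noise levels; it shows how an omitted term makes the filter-reported covariance optimistic compared with the actual estimation error, which is precisely the effect a consider analysis is meant to quantify.

      import numpy as np

      rng = np.random.default_rng(2)
      dt, steps = 1.0, 200
      q, r = 0.01, 1.0          # assumed process / measurement noise variances
      bias = 0.05               # unmodeled constant acceleration acting on the true system

      F = np.array([[1.0, dt], [0.0, 1.0]])
      H = np.array([[1.0, 0.0]])
      Q = q * np.eye(2)
      R = np.array([[r]])

      x_true = np.zeros(2)
      x_hat, P = np.zeros(2), np.eye(2)
      err, sig = [], []
      for _ in range(steps):
          x_true = F @ x_true + np.array([0.0, dt * bias])   # truth includes the bias
          z = H @ x_true + rng.normal(0, np.sqrt(r), 1)
          x_hat = F @ x_hat                                  # prediction omits the bias
          P = F @ P @ F.T + Q
          S = H @ P @ H.T + R                                # measurement update
          K = P @ H.T @ np.linalg.inv(S)
          x_hat = x_hat + (K @ (z - H @ x_hat)).ravel()
          P = (np.eye(2) - K @ H) @ P
          err.append(x_true[0] - x_hat[0])
          sig.append(np.sqrt(P[0, 0]))

      print(f"RMS position error:      {np.sqrt(np.mean(np.square(err))):.2f}")
      print(f"filter-reported 1-sigma: {np.mean(sig):.2f}")   # optimistic when the bias is ignored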

  16. Temporal uncertainty analysis of human errors based on interrelationships among multiple factors: a case of Minuteman III missile accident.

    PubMed

    Rong, Hao; Tian, Jin; Zhao, Tingdi

    2016-01-01

    In traditional approaches of human reliability assessment (HRA), the definition of the error producing conditions (EPCs) and the supporting guidance are such that some of the conditions (especially organizational or managerial conditions) can hardly be included, and thus the analysis is burdened with incomprehensiveness without reflecting the temporal trend of human reliability. A method based on system dynamics (SD), which highlights interrelationships among technical and organizational aspects that may contribute to human errors, is presented to facilitate quantitatively estimating the human error probability (HEP) and its related variables changing over time in a long period. Taking the Minuteman III missile accident in 2008 as a case, the proposed HRA method is applied to assess HEP during missile operations over 50 years by analyzing the interactions among the variables involved in human-related risks; also the critical factors are determined in terms of impact that the variables have on risks in different time periods. It is indicated that both technical and organizational aspects should be focused on to minimize human errors in a long run. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  17. Security proof of a three-state quantum-key-distribution protocol without rotational symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fung, C.-H.F.; Lo, H.-K.

    2006-10-15

    Standard security proofs of quantum-key-distribution (QKD) protocols often rely on symmetry arguments. In this paper, we prove the security of a three-state protocol that does not possess rotational symmetry. The three-state QKD protocol we consider involves three qubit states, where the first two states |0_z> and |1_z> can contribute to key generation, and the third state |+> = (|0_z> + |1_z>)/√2 is for channel estimation. This protocol has been proposed and implemented experimentally in some frequency-based QKD systems where the three states can be prepared easily. Thus, by building on the security of this three-state protocol, we prove that these QKD schemes are, in fact, unconditionally secure against any attacks allowed by quantum mechanics. The main task in our proof is to upper bound the phase error rate of the qubits given the bit error rates observed. Unconditional security can then be proved not only for the ideal case of a single-photon source and perfect detectors, but also for the realistic case of a phase-randomized weak coherent light source and imperfect threshold detectors. Our result in the phase error rate upper bound is independent of the loss in the channel. Also, we compare the three-state protocol with the Bennett-Brassard 1984 (BB84) protocol. For the single-photon source case, our result proves that the BB84 protocol strictly tolerates a higher quantum bit error rate than the three-state protocol, while for the coherent-source case, the BB84 protocol achieves a higher key generation rate and secure distance than the three-state protocol when a decoy-state method is used.

  18. Using digital inpainting to estimate incident light intensity for the calculation of red blood cell oxygen saturation from microscopy images.

    PubMed

    Sové, Richard J; Drakos, Nicole E; Fraser, Graham M; Ellis, Christopher G

    2018-05-25

    Red blood cell oxygen saturation is an important indicator of oxygen supply to tissues in the body. Oxygen saturation can be measured by taking advantage of spectroscopic properties of hemoglobin. When this technique is applied to transmission microscopy, the calculation of saturation requires determination of incident light intensity at each pixel occupied by the red blood cell; this value is often approximated from a sequence of images as the maximum intensity over time. This method often fails when the red blood cells are moving too slowly, or if hematocrit is too large since there is not a large enough gap between the cells to accurately calculate the incident intensity value. A new method of approximating incident light intensity is proposed using digital inpainting. This novel approach estimates incident light intensity with an average percent error of approximately 3%, which exceeds the accuracy of the maximum intensity based method in most cases. The error in incident light intensity corresponds to a maximum error of approximately 2% saturation. Therefore, though this new method is computationally more demanding than the traditional technique, it can be used in cases where the maximum intensity-based method fails (e.g. stationary cells), or when higher accuracy is required. This article is protected by copyright. All rights reserved.
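
    A hedged sketch of the inpainting step is shown below using OpenCV's standard inpainting routine; the file names, mask, inpainting radius and the single-wavelength optical-density calculation are placeholders (the actual saturation calculation needs at least two wavelengths), so this only illustrates how an incident-intensity image I0 can be synthesized under the cell pixels.

      import cv2
      import numpy as np

      # hypothetical inputs: an 8-bit grayscale transmission frame and a mask that is
      # nonzero where red blood cells were segmented
      frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
      cell_mask = cv2.imread("cell_mask.png", cv2.IMREAD_GRAYSCALE)

      # paint over the cell pixels from the surrounding background -> estimate of incident light I0
      I0 = cv2.inpaint(frame, cell_mask, 5, cv2.INPAINT_TELEA)

      # optical density at the cell pixels, the quantity the saturation calculation builds on
      eps = 1.0  # avoid log of zero
      od = -np.log10((frame.astype(float) + eps) / (I0.astype(float) + eps))
      od[cell_mask == 0] = 0.0
      print("mean OD over cell pixels:", od[cell_mask > 0].mean())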

  19. A study of pilot modeling in multi-controller tasks

    NASA Technical Reports Server (NTRS)

    Whitbeck, R. F.; Knight, J. R.

    1972-01-01

    A modeling approach, which utilizes a matrix of transfer functions to describe the human pilot in multiple input, multiple output control situations, is studied. The approach used was to extend a well established scalar Wiener-Hopf minimization technique to the matrix case and then study, via a series of experiments, the data requirements when only finite record lengths are available. One of these experiments was a two-controller roll tracking experiment designed to force the pilot to use rudder in order to coordinate and reduce the effects of aileron yaw. One model was computed for the case where the signals used to generate the spectral matrix are error and bank angle while another model was computed for the case where error and yaw angle are the inputs. Several anomalies were observed to be present in the experimental data. These are defined by the descriptive terms roll up, break up, and roll down. Due to these algorithm induced anomalies, the frequency band over which reliable estimates of power spectra can be achieved is considerably less than predicted by the sampling theorem.

  20. A Method for Calculating the Mean Orbits of Meteor Streams

    NASA Astrophysics Data System (ADS)

    Voloshchuk, Yu. I.; Kashcheev, B. L.

    An examination of the published catalogs of orbits of meteor streams and of a large number of works devoted to the selection of streams, their analysis and interpretation, showed that elements of stream orbits are calculated, as a rule, as arithmetical (sometimes, weighted) sample means. On the basis of these means, a search for parent bodies, a study of the evolution of swarms generating these streams, an analysis of one-dimensional and multidimensional distributions of these elements, etc., are performed. We show that systematic errors in the estimates of elements of the mean orbits are present in each of the catalogs. These errors are caused by the formal averaging of orbital elements over the sample, while ignoring the fact that they represent not only correlated, but dependent quantities, with nonlinear, in most cases, interrelations between them. Numerous examples are given of such inaccuracies, in particular, the cases where the "mean orbit of the stream" recorded by ground-based techniques does not cross the Earth's orbit. We suggest a computation algorithm, in which the averaging over the sample is carried out at the initial stage of the calculation of the mean orbit, and only for the variables required for subsequent calculations. After this, the known astrometric formulas are used to sequentially calculate all other parameters of the stream, considered now as a standard orbit. Variance analysis is used to estimate the errors in orbital elements of the streams, in the case that their orbits are obtained by averaging the orbital elements of meteoroids forming the stream, without taking into account their interdependence. The results obtained in this analysis indicate the behavior of systematic errors in the elements of orbits of meteor streams. As an example, the effect of the incorrect computation method on the distribution of elements of the stream orbits close to the orbits of asteroids of the Apollo, Aten, and Amor groups (AAA asteroids) is examined.

  1. Evaluation of Approaches to Deal with Low-Frequency Nuisance Covariates in Population Pharmacokinetic Analyses.

    PubMed

    Lagishetty, Chakradhar V; Duffull, Stephen B

    2015-11-01

    Clinical studies include occurrences of rare variables, like genotypes, which due to their frequency and strength render their effects difficult to estimate from a dataset. Variables that influence the estimated value of a model-based parameter are termed covariates. It is often difficult to determine if such an effect is significant, since type I error can be inflated when the covariate is rare. Their presence may have either an insubstantial effect on the parameters of interest, hence are ignorable, or conversely they may be influential and therefore non-ignorable. In the case that these covariate effects cannot be estimated due to power and are non-ignorable, then these are considered nuisance, in that they have to be considered but due to type 1 error are of limited interest. This study assesses methods of handling nuisance covariate effects. The specific objectives include (1) calibrating the frequency of a covariate that is associated with type 1 error inflation, (2) calibrating its strength that renders it non-ignorable and (3) evaluating methods for handling these non-ignorable covariates in a nonlinear mixed effects model setting. Type 1 error was determined for the Wald test. Methods considered for handling the nuisance covariate effects were case deletion, Box-Cox transformation and inclusion of a specific fixed effects parameter. Non-ignorable nuisance covariates were found to be effectively handled through addition of a fixed effect parameter.

  2. Bayesian assessment of overtriage and undertriage at a level I trauma centre.

    PubMed

    DiDomenico, Paul B; Pietzsch, Jan B; Paté-Cornell, M Elisabeth

    2008-07-13

    We analysed the trauma triage system at a specific level I trauma centre to assess rates of over- and undertriage and to support recommendations for system improvements. The triage process is designed to estimate the severity of patient injury and allocate resources accordingly, with potential errors of overestimation (overtriage) consuming excess resources and underestimation (undertriage) potentially leading to medical errors. We first modelled the overall trauma system using risk analysis methods to understand interdependencies among the actions of the participants. We interviewed six experienced trauma surgeons to obtain their expert opinion of the over- and undertriage rates occurring in the trauma centre. We then assessed actual over- and undertriage rates in a random sample of 86 trauma cases collected over a six-week period at the same centre. We employed Bayesian analysis to quantitatively combine the data with the prior probabilities derived from expert opinion in order to obtain posterior distributions. The results were estimates of overtriage and undertriage in 16.1 and 4.9% of patients, respectively. This Bayesian approach, which provides a quantitative assessment of the error rates using both case data and expert opinion, provides a rational means of obtaining a best estimate of the system's performance. The overall approach that we describe in this paper can be employed more widely to analyse complex health care delivery systems, with the objective of reduced errors, patient risk and excess costs.
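
    The simplest version of this kind of expert-prior-plus-data update is the conjugate Beta-binomial calculation sketched below. The prior parameters and the split of the 86 audited cases are invented for illustration; the study itself used a richer risk-analysis model rather than this textbook special case.

      from scipy import stats

      # prior for the overtriage rate elicited from surgeon opinion (assumed Beta(4, 16), mean 0.20)
      a0, b0 = 4, 16
      # audited sample (hypothetical split of the 86 reviewed cases): 13 judged overtriaged
      n, k = 86, 13

      a_post, b_post = a0 + k, b0 + (n - k)      # conjugate Beta-binomial update
      post = stats.beta(a_post, b_post)
      print(f"posterior mean overtriage rate: {post.mean():.3f}")
      print(f"95% credible interval: {post.ppf([0.025, 0.975]).round(3)}")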

  3. Angular and Seasonal Variation of Spectral Surface Reflectance Ratios: Implications for the Remote Sensing of Aerosol over Land

    NASA Technical Reports Server (NTRS)

    Remer, L. A.; Wald, A. E.; Kaufman, Y. J.

    1999-01-01

    We obtain valuable information on the angular and seasonal variability of surface reflectance using a hand-held spectrometer from a light aircraft. The data is used to test a procedure that allows us to estimate visible surface reflectance from the longer wavelength 2.1 micrometer channel (mid-IR). Estimating or avoiding surface reflectance in the visible is a vital first step in most algorithms that retrieve aerosol optical thickness over land targets. The data indicate that specular reflection found when viewing targets from the forward direction can severely corrupt the relationships between the visible and 2.1 micrometer reflectance that were derived from nadir data. There is a month by month variation in the ratios between the visible and the mid-IR, weakly correlated to the Normalized Difference Vegetation Index (NDVI). If specular reflection is not avoided, the errors resulting from estimating surface reflectance from the mid-IR exceed the acceptable limit of Δρ ≈ 0.01 in roughly 40% of the cases, using the current algorithm. This is reduced to 25% of the cases if specular reflection is avoided. An alternative method that uses path radiance rather than explicitly estimating visible surface reflectance results in similar errors. The two methods have different strengths and weaknesses that require further study.

  4. Robust radio interferometric calibration using the t-distribution

    NASA Astrophysics Data System (ADS)

    Kazemi, S.; Yatawatta, S.

    2013-10-01

    A major stage of radio interferometric data processing is calibration or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration such as the loss in flux or coherence, and the appearance of spurious sources, could be attributed to the deviations of the underlying noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike Gaussian-noise-model-based calibration, traditional least-squares minimization would not directly extend to a case when we have a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either algorithm, when we have a Student's t-noise model and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
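
    The down-weighting behaviour of a Student's t noise model can be illustrated with a much simpler problem than interferometric calibration. The sketch below fits a linear model by an EM-style iteratively re-weighted least squares in which each residual receives the weight (nu+1)/(nu + r^2/sigma^2); outliers (standing in for unmodelled sources or interference) are suppressed relative to ordinary least squares. The data, degrees of freedom and noise levels are invented, and the real method applies this idea inside a nonlinear ECME/Levenberg-Marquardt solver.

      import numpy as np

      def robust_t_fit(A, y, nu=3.0, sigma2=1.0, iters=50):
          """Linear fit under Student's-t noise via EM-style re-weighted least squares."""
          x = np.linalg.lstsq(A, y, rcond=None)[0]
          for _ in range(iters):
              r = y - A @ x
              w = (nu + 1.0) / (nu + r**2 / sigma2)      # E-step: outliers get small weights
              sw = np.sqrt(w)
              x = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)[0]   # M-step
              sigma2 = np.sum(w * r**2) / len(y)         # update the scale as well
          return x

      rng = np.random.default_rng(3)
      n = 200
      A = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])
      x_true = np.array([2.0, -1.5])
      y = A @ x_true + 0.1 * rng.standard_normal(n)
      y[:10] += 5.0                                       # a few gross outliers
      print("ordinary LS :", np.linalg.lstsq(A, y, rcond=None)[0])
      print("t-noise fit :", robust_t_fit(A, y))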

  5. Estimating age of sea otters with cementum layers in the first premolar

    USGS Publications Warehouse

    Bodkin, James L.; Ames, J.A.; Jameson, R.J.; Johnson, A.M.; Matson, G.M.

    1997-01-01

    We assessed sources of variation in the use of tooth cementum layers to determine age by comparing counts in premolar tooth sections to known ages of 20 sea otters (Enhydra lutris). Three readers examined each sample 3 times, and the 3 readings of each sample were averaged by reader to provide the mean estimated age. The mean (SE) of the known-age sample was 5.2 years (1.0) and the 3 mean estimated ages were 7.0 (1.0), 5.9 (1.1), and 4.4 (0.8). The proportions of estimates accurate to within ±1 year were 0.25, 0.55, and 0.65, and to within ±2 years, 0.65, 0.80, and 0.70, by reader. The proportions of samples estimated with >3 years error were 0.20, 0.10, and 0.05. Errors as large as 7, 6, and 5 years were made among readers. In few instances did all readers uniformly provide accurate (error ≤1 yr) counts. In most cases (0.85), 1 or 2 of the readers provided accurate counts. Coefficients of determination (R^2) between known ages and mean estimated ages were 0.81, 0.87, and 0.87, by reader. The results of this study suggest that cementum layers within sea otter premolar teeth likely are deposited annually and can be used for age estimation. However, criteria used in interpreting layers apparently varied by reader, occasionally resulting in large errors, which were not consistent among readers. While large errors were evident for some individual otters, there were no differences between the known and estimated age-class distribution generated by each reader. Until accuracy can be improved, application of this ageing technique should be limited to sample sizes of at least 6-7 individuals within age classes of ≥1 year.

  6. Intercontinental height datum connection with GOCE and GPS-levelling data

    NASA Astrophysics Data System (ADS)

    Gruber, T.; Gerlach, C.; Haagmans, R.

    2012-12-01

    In this study an attempt is made to establish height system datum connections based upon a gravity field and steady-state ocean circulation explorer (GOCE) gravity field model and a set of global positioning system (GPS) and levelling data. The procedure applied in principle is straightforward. First, local geoid heights are obtained pointwise from GPS and levelling data. Then the mean of these geoid heights is computed for regions nominally referring to the same height datum. Subsequently, these local mean geoid heights are compared with a mean global geoid from GOCE for the same region. This way one can identify an offset of the local to the global geoid per region. This procedure is applied to a number of regions distributed worldwide. Results show that the vertical datum offset estimates strongly depend on the nature of the omission error, i.e. the signal not represented in the GOCE model. For a smooth gravity field the commission error of GOCE, the quality of the GPS and levelling data and the averaging control the accuracy of the vertical datum offset estimates. In case the omission error does not cancel out in the mean value computation, because of a sub-optimal point distribution or a characteristic behaviour of the omitted part of the geoid signal, one needs to estimate a correction for the omission error from other sources. For areas with dense and high quality ground observations the EGM2008 global model is a good choice to estimate the omission error correction in these cases. Relative intercontinental height datum offsets are estimated by applying this procedure between the United States of America (USA), Australia and Germany. These are compared to historical values provided in the literature and computed with the same procedure. The results obtained in this study agree at a level of 10 cm with the historical results. The changes can mainly be attributed to the new global geoid information from GOCE, rather than to the ellipsoidal heights or the levelled heights. These historical levelling data are still in use in many countries. This conclusion is supported by other results on the validation of the GOCE models.
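
    The per-region computation can be written in a few lines: geometric geoid heights come from h - H, and the datum offset is the mean difference to the GOCE geoid at the same points. The sketch below uses synthetic numbers constructed around an assumed 0.45 m offset and ignores the omission-error correction discussed above.

      import numpy as np

      rng = np.random.default_rng(0)
      npts = 500
      N_true, offset_true = 30.0, 0.45                       # assumed geoid height and datum offset
      N_goce = N_true + rng.normal(0, 0.05, npts)            # GOCE geoid heights (with commission error)
      H = 100.0 + rng.normal(0, 0.01, npts)                  # levelled heights in the local datum
      h = N_true + offset_true + 100.0 + rng.normal(0, 0.03, npts)   # GPS ellipsoidal heights

      N_local = h - H                                        # pointwise geometric geoid heights
      diff = N_local - N_goce
      offset = diff.mean()                                   # regional datum offset w.r.t. the global geoid
      stderr = diff.std(ddof=1) / np.sqrt(npts)
      print(f"estimated datum offset: {offset:.3f} m +/- {stderr:.3f} m")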

  7. Constraining the thickness of polar ice deposits on Mercury using the Mercury Laser Altimeter and small craters in permanently shadowed regions

    NASA Astrophysics Data System (ADS)

    Deutsch, Ariel N.; Head, James W.; Chabot, Nancy L.; Neumann, Gregory A.

    2018-05-01

    Radar-bright deposits at the poles of Mercury are located in permanently shadowed regions, which provide thermally stable environments for hosting and retaining water ice on the surface or in the near subsurface for geologic timescales. While the areal distribution of these radar-bright deposits is well characterized, their thickness, and thus their total mass and volume, remain poorly constrained. Here we derive thickness estimates for selected water-ice deposits using small, simple craters visible within the permanently shadowed, radar-bright deposits. We examine two endmember scenarios: in Case I, these craters predate the emplacement of the ice, and in Case II, these craters postdate the emplacement of the ice. In Case I, we find the difference between estimated depths of the original unfilled craters and the measured depths of the craters to find the estimated infill of material. The average estimated infilled material for 9 craters assumed to be overlain with water ice is ∼41 (+30/−14) m, where the 1-σ standard error of the mean is reported as uncertainty. Reported uncertainties are for statistical errors only. Additional systematic uncertainty may stem from georeferencing the images and topographic datasets, from the radial accuracy of the altimeter measurements, or from assumptions in our models including (1) ice is flat in the bowl-shaped crater and (2) there is negligible ice at the crater rims. In Case II, we derive crater excavation depths to investigate the thickness of the ice layer that may have been penetrated by the impact. While the absence of excavated regolith associated with the small craters observed suggests that impacts generally do not penetrate through the ice deposit, the spatial resolution and complex illumination geometry of images may limit the observations. Therefore, it is not possible to conclude whether the small craters in this study penetrate through the ice deposit, and thus Case II does not provide a constraint on the ice thickness. For Mercury's polar deposits, we argue that Case I of the small craters predating the emplacement of the ice deposits is more likely, given other geologic evidence that suggests that these ice deposits are relatively young. Using the ice thickness estimates from Case I to calculate the total amount of water ice currently contained in Mercury's polar deposits results in a value of ∼10^14-10^15 kg. This is equivalent to ∼100-1000 km^3 of ice in volume. This volume of water ice is consistent with delivery via micrometeorite bombardment, Jupiter-family comets, or potentially a single impactor.
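
    The Case I arithmetic reduces to "expected fresh depth minus measured depth". The sketch below assumes a fresh depth-to-diameter ratio of 0.2 for simple craters and uses invented diameters and depths; the actual study works from altimeter profiles and considers crater degradation and rim effects.

      import numpy as np

      # hypothetical crater measurements inside a radar-bright deposit: diameters and present depths [m]
      D = np.array([450.0, 600.0, 380.0, 520.0])
      d_meas = np.array([55.0, 80.0, 40.0, 65.0])

      k = 0.2                         # assumed fresh depth/diameter ratio for simple craters
      d_fresh = k * D                 # expected unfilled depth (Case I: crater predates the ice)
      infill = d_fresh - d_meas       # apparent thickness of infilling material

      mean = infill.mean()
      sem = infill.std(ddof=1) / np.sqrt(infill.size)   # 1-sigma standard error of the mean
      print(f"mean infill thickness: {mean:.0f} m  (SEM {sem:.0f} m)")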

  8. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
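
    The single-batch idea can be illustrated with a one-parameter toy problem: observation-minus-forecast innovations have variance equal to the sum of the forecast- and observation-error variances, so with the observation-error variance known, the forecast-error variance can be estimated by maximizing the likelihood of one batch of innovations. The sketch below assumes uncorrelated innovations and invented variances; the actual scheme handles several parameters and spatial correlation structure.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(4)
      sigma_o2 = 1.0                    # known observation-error variance
      sigma_f2_true = 2.5               # forecast-error variance to be recovered
      d = rng.normal(0.0, np.sqrt(sigma_f2_true + sigma_o2), size=2000)   # one batch of innovations

      def neg_log_lik(sigma_f2):
          s = sigma_f2 + sigma_o2       # innovation variance under the model
          return 0.5 * np.sum(np.log(2 * np.pi * s) + d**2 / s)

      res = minimize_scalar(neg_log_lik, bounds=(1e-6, 50.0), method="bounded")
      print(f"ML estimate of forecast-error variance: {res.x:.2f} (true {sigma_f2_true})")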

  9. Satellite-based Calibration of Heat Flux at the Ocean Surface

    NASA Astrophysics Data System (ADS)

    Barron, C. N.; Dastugue, J. M.; May, J. C.; Rowley, C. D.; Smith, S. R.; Spence, P. L.; Gremes-Cordero, S.

    2016-02-01

    Model forecasts of upper ocean heat content and variability on diurnal to daily scales are highly dependent on estimates of heat flux through the air-sea interface. Satellite remote sensing is applied to not only inform the initial ocean state but also to mitigate errors in surface heat flux and model representations affecting the distribution of heat in the upper ocean. Traditional assimilation of sea surface temperature (SST) observations re-centers ocean models at the start of each forecast cycle. Subsequent evolution depends on estimates of surface heat fluxes and upper-ocean processes over the forecast period. The COFFEE project (Calibration of Ocean Forcing with satellite Flux Estimates) endeavors to correct ocean forecast bias through a responsive error partition among surface heat flux and ocean dynamics sources. A suite of experiments in the southern California Current demonstrates a range of COFFEE capabilities, showing the impact on forecast error relative to a baseline three-dimensional variational (3DVAR) assimilation using Navy operational global or regional atmospheric forcing. COFFEE addresses satellite-calibration of surface fluxes to estimate surface error covariances and links these to the ocean interior. Experiment cases combine different levels of flux calibration with different assimilation alternatives. The cases may use the original fluxes, apply full satellite corrections during the forecast period, or extend hindcast corrections into the forecast period. Assimilation is either baseline 3DVAR or standard strong-constraint 4DVAR, with work proceeding to add a 4DVAR expanded to include a weak constraint treatment of the surface flux errors. Covariance of flux errors is estimated from the recent time series of forecast and calibrated flux terms. While the California Current examples are shown, the approach is equally applicable to other regions. These approaches within a 3DVAR application are anticipated to be useful for global and larger regional domains where a full 4DVAR methodology may be cost-prohibitive.

  10. Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2015-11-01

    The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers leads to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.

  11. Development of advanced techniques for rotorcraft state estimation and parameter identification

    NASA Technical Reports Server (NTRS)

    Hall, W. E., Jr.; Bohn, J. G.; Vincent, J. H.

    1980-01-01

    An integrated methodology for rotorcraft system identification consists of rotorcraft mathematical modeling, three distinct data processing steps, and a technique for designing inputs to improve the identifiability of the data. These elements are as follows: (1) a Kalman filter smoother algorithm which estimates states and sensor errors from error corrupted data. Gust time histories and statistics may also be estimated; (2) a model structure estimation algorithm for isolating a model which adequately explains the data; (3) a maximum likelihood algorithm for estimating the parameters and estimates for the variance of these estimates; and (4) an input design algorithm, based on a maximum likelihood approach, which provides inputs to improve the accuracy of parameter estimates. Each step is discussed with examples to both flight and simulated data cases.

  12. A continuous optimization approach for inferring parameters in mathematical models of regulatory networks.

    PubMed

    Deng, Zhimin; Tian, Tianhai

    2014-07-29

    The advances of systems biology have raised a large number of sophisticated mathematical models for describing the dynamic property of complex biological systems. One of the major steps in developing mathematical models is to estimate unknown parameters of the model based on experimentally measured quantities. However, experimental conditions limit the amount of data that is available for mathematical modelling. The number of unknown parameters in mathematical models may be larger than the number of observation data. The imbalance between the number of experimental data and number of unknown parameters makes reverse-engineering problems particularly challenging. To address the issue of inadequate experimental data, we propose a continuous optimization approach for making reliable inference of model parameters. This approach first uses a spline interpolation to generate continuous functions of system dynamics as well as the first and second order derivatives of continuous functions. The expanded dataset is the basis to infer unknown model parameters using various continuous optimization criteria, including the error of simulation only, error of both simulation and the first derivative, or error of simulation as well as the first and second derivatives. We use three case studies to demonstrate the accuracy and reliability of the proposed new approach. Compared with the corresponding discrete criteria using experimental data at the measurement time points only, numerical results of the ERK kinase activation module show that the continuous absolute-error criteria using both function and high order derivatives generate estimates with better accuracy. This result is also supported by the second and third case studies for the G1/S transition network and the MAP kinase pathway, respectively. This suggests that the continuous absolute-error criteria lead to more accurate estimates than the corresponding discrete criteria. We also study the robustness property of these three models to examine the reliability of estimates. Simulation results show that the models with estimated parameters using continuous fitness functions have better robustness properties than those using the corresponding discrete fitness functions. The inference studies and robustness analysis suggest that the proposed continuous optimization criteria are effective and robust for estimating unknown parameters in mathematical models.
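
    A minimal version of the spline-based continuous criterion is sketched below on a one-parameter exponential-decay model (a stand-in for the regulatory modules studied in the paper, with invented data): a cubic spline supplies a continuous surrogate of the measurements and its first derivative, and the objective combines the simulation error with the derivative error.

      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.optimize import least_squares
      from scipy.integrate import solve_ivp

      k_true = 0.7
      t_obs = np.linspace(0, 5, 8)                       # sparse measurement times
      rng = np.random.default_rng(5)
      x_obs = np.exp(-k_true * t_obs) + 0.01 * rng.standard_normal(t_obs.size)

      spline = CubicSpline(t_obs, x_obs)                 # continuous surrogate of the data
      t_dense = np.linspace(0, 5, 100)                   # expanded dataset
      x_dense, dx_dense = spline(t_dense), spline(t_dense, 1)

      def residuals(theta):
          k = theta[0]
          sol = solve_ivp(lambda t, x: -k * x, (0, 5), [x_obs[0]], t_eval=t_dense)
          sim = sol.y[0]
          # continuous criterion: simulation error plus error of the first derivative
          return np.concatenate([sim - x_dense, -k * sim - dx_dense])

      fit = least_squares(residuals, x0=[0.2], bounds=(0.0, 5.0))
      print(f"estimated k = {fit.x[0]:.3f} (true {k_true})")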

  13. Random Measurement Error Does Not Bias the Treatment Effect Estimate in the Regression-Discontinuity Design. II. When an Interaction Effect Is Present.

    ERIC Educational Resources Information Center

    Trochim, William M. K.; And Others

    1991-01-01

    The regression-discontinuity design involving a treatment interaction effect (TIE), pretest-posttest functional form specification, and choice of point-of-estimation of the TIE are examined. Formulas for controlling the magnitude of TIE in simulations can be used for simulating the randomized experimental case where estimation is not at the…

  14. Sensor failure detection for jet engines

    NASA Technical Reports Server (NTRS)

    Beattie, E. C.; Laprad, R. F.; Akhter, M. M.; Rock, S. M.

    1983-01-01

    Revisions to the advanced sensor failure detection, isolation, and accommodation (DIA) algorithm, developed under the sensor failure detection system program, were studied to eliminate the steady-state errors due to estimation filter biases. Three algorithm revisions were formulated and one revision was chosen for detailed evaluation. The selected version modifies the DIA algorithm to feed back the actual sensor outputs to the integral portion of the control for the no-failure case. In the case of a failure, the estimate of the failed sensor output is fed back to the integral portion. The estimator outputs are fed back to the linear regulator portion of the control at all times. The revised algorithm is evaluated and compared to the baseline algorithm developed previously.

  15. Influence of eye micromotions on spatially resolved refractometry

    NASA Astrophysics Data System (ADS)

    Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Osipova, Irina Y.

    2001-01-01

    The influence of eye micromotions on the accuracy of estimation of Zernike coefficients from eye transverse aberration measurements was investigated. By computer modeling, the following eye aberrations have been examined: defocusing, primary astigmatism, spherical aberration of the 3rd and the 5th orders, as well as their combinations. It was determined that the standard deviation of estimated Zernike coefficients is proportional to the standard deviation of angular eye movements. Eye micromotions cause estimation errors in the Zernike coefficients of the aberrations that are present and produce the appearance of Zernike coefficients of aberrations absent from the eye. When solely defocusing is present, the biggest errors caused by eye micromotions are obtained for aberrations like coma and astigmatism. In comparison with other aberrations, spherical aberration of the 3rd and the 5th orders evokes the greatest increase in the standard deviation of other Zernike coefficients.

  16. Coupling finite element and spectral methods: First results

    NASA Technical Reports Server (NTRS)

    Bernardi, Christine; Debit, Naima; Maday, Yvon

    1987-01-01

    A Poisson equation on a rectangular domain is solved by coupling two methods: the domain is divided in two squares, a finite element approximation is used on the first square and a spectral discretization is used on the second one. Two kinds of matching conditions on the interface are presented and compared. In both cases, error estimates are proved.

  17. Prediction of true test scores from observed item scores and ancillary data.

    PubMed

    Haberman, Shelby J; Yao, Lili; Sinharay, Sandip

    2015-05-01

    In many educational tests which involve constructed responses, a traditional test score is obtained by adding together item scores obtained through holistic scoring by trained human raters. For example, this practice was used until 2008 in the case of GRE(®) General Analytical Writing and until 2009 in the case of TOEFL(®) iBT Writing. With use of natural language processing, it is possible to obtain additional information concerning item responses from computer programs such as e-rater(®). In addition, available information relevant to examinee performance may include scores on related tests. We suggest application of standard results from classical test theory to the available data to obtain best linear predictors of true traditional test scores. In performing such analysis, we require estimation of variances and covariances of measurement errors, a task which can be quite difficult in the case of tests with limited numbers of items and with multiple measurements per item. As a consequence, a new estimation method is suggested based on samples of examinees who have taken an assessment more than once. Such samples are typically not random samples of the general population of examinees, so that we apply statistical adjustment methods to obtain the needed estimated variances and covariances of measurement errors. To examine practical implications of the suggested methods of analysis, applications are made to GRE General Analytical Writing and TOEFL iBT Writing. Results obtained indicate that substantial improvements are possible both in terms of reliability of scoring and in terms of assessment reliability. © 2015 The British Psychological Society.
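
    The single-score special case of the best linear predictor described above is Kelley's classical formula, which shrinks an observed score toward the group mean by the score reliability; it is the ingredient that makes the error-variance estimation discussed in the abstract necessary. A tiny sketch with invented numbers:

      # classical-test-theory best linear predictor of the true score from one observed score
      # (Kelley's formula); rho is the score reliability, mu the mean observed score in the group
      def kelley_true_score(x, rho, mu):
          return rho * x + (1.0 - rho) * mu

      # hypothetical values: essay score 4.0, reliability 0.62, group mean 3.3
      print(kelley_true_score(4.0, rho=0.62, mu=3.3))   # shrinks the observed score toward the mean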

  18. Inference of reactive transport model parameters using a Bayesian multivariate approach

    NASA Astrophysics Data System (ADS)

    Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick

    2014-08-01

    Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)) where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation for high residual correlation levels whereas its influence for predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.

  19. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
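
    A minimal Monte Carlo propagation of non-sampling uncertainty is sketched below. The calculation structure and all distributions are invented placeholders (they are not the foodborne-illness estimate analysed in the paper); the point is only that each uncertain input is sampled from a distribution and the resulting spread of the output is reported as an interval.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 100_000

      # hypothetical incidence calculation: cases = population * p_exposed * p_ill * underreport
      population = 3.0e8
      p_exposed = rng.triangular(0.10, 0.15, 0.25, size=n)        # expert-judgement range
      p_ill = rng.lognormal(mean=np.log(0.002), sigma=0.4, size=n)
      underreport = rng.uniform(1.5, 3.0, size=n)                 # correction for undercounting

      cases = population * p_exposed * p_ill * underreport
      lo, mid, hi = np.percentile(cases, [2.5, 50, 97.5])
      print(f"median {mid:,.0f}, 95% uncertainty interval {lo:,.0f} - {hi:,.0f}")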

  20. MP estimation applied to platykurtic sets of geodetic observations

    NASA Astrophysics Data System (ADS)

    Wiśniewski, Zbigniew

    2017-06-01

    MP estimation is a method which concerns estimating of the location parameters when the probabilistic models of observations differ from the normal distributions in the kurtosis or asymmetry. The system of Pearson's distributions is the probabilistic basis for the method. So far, such a method was applied and analyzed mostly for leptokurtic or mesokurtic distributions (Pearson's distributions of types IV or VII), which predominate practical cases. The analyses of geodetic or astronomical observations show that we may also deal with sets which have moderate asymmetry or small negative excess kurtosis. Asymmetry might result from the influence of many small systematic errors, which were not eliminated during preprocessing of data. The excess kurtosis can be related with bigger or smaller (in relations to the Hagen hypothesis) frequency of occurrence of the elementary errors which are close to zero. Considering that fact, this paper focuses on the estimation with application of the Pearson platykurtic distributions of types I or II. The paper presents the solution of the corresponding optimization problem and its basic properties. Although platykurtic distributions are rare in practice, it was an interesting issue to find out what results can be provided by MP estimation in the case of such observation distributions. The numerical tests which are presented in the paper are rather limited; however, they allow us to draw some general conclusions.

  1. Peripheral dysgraphia characterized by the co-occurrence of case substitutions in uppercase and letter substitutions in lowercase writing.

    PubMed

    Di Pietro, M; Schnider, A; Ptak, R

    2011-10-01

    Patients with peripheral dysgraphia due to impairment at the allographic level produce writing errors that affect the letter-form and are characterized by case confusions or the failure to write in a specific case or style (e.g., cursive). We studied the writing errors of a patient with pure peripheral dysgraphia who had entirely intact oral spelling, but produced many well-formed letter errors in written spelling. The comparison of uppercase print and lowercase cursive spelling revealed an uncommon pattern: while most uppercase errors were case substitutions (e.g., A - a), almost all lowercase errors were letter substitutions (e.g., n - r). Analyses of the relationship between target letters and substitution errors showed that errors were neither influenced by consonant-vowel status nor by letter frequency, though word length affected error frequency in lowercase writing. Moreover, while graphomotor similarity did not predict either the occurrence of uppercase or lowercase errors, visuospatial similarity was a significant predictor of lowercase errors. These results suggest that lowercase representations of cursive letter-forms are based on a description of entire letters (visuospatial features) and are not - as previously found for uppercase letters - specified in terms of strokes (graphomotor features). Copyright © 2010 Elsevier Srl. All rights reserved.

  2. Decorrelation of the true and estimated classifier errors in high-dimensional settings.

    PubMed

    Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R

    2007-01-01

    The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity which refers to the precision of error estimation is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three error estimators commonly used (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.
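
    A small simulation in the spirit of this study (not a reproduction of it) is sketched below for the known-feature-set scenario: repeatedly draw a small training sample, compute the "true" error of an LDA classifier on a very large test set, compute the leave-one-out estimate on the training sample, and look at the correlation between the two across repetitions. Sample sizes, dimension and class separation are invented.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import LeaveOneOut, cross_val_score

      rng = np.random.default_rng(7)
      n_train, n_test, p = 30, 5000, 5          # small sample, known feature set
      delta = 0.8                               # class-mean separation on the first feature

      true_err, cv_err = [], []
      for _ in range(200):                      # repeated sampling of the training set
          y = rng.integers(0, 2, n_train)
          X = rng.standard_normal((n_train, p)); X[:, 0] += delta * y
          yt = rng.integers(0, 2, n_test)
          Xt = rng.standard_normal((n_test, p)); Xt[:, 0] += delta * yt
          clf = LinearDiscriminantAnalysis().fit(X, y)
          true_err.append(1.0 - clf.score(Xt, yt))                       # "true" error, huge test set
          cv = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
          cv_err.append(1.0 - cv.mean())                                 # leave-one-out estimate

      print("corr(true error, LOO estimate):", np.corrcoef(true_err, cv_err)[0, 1].round(2))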

  3. Validation of Student and Parent Report Data on the Basic Grant Application Form. Final Report. Volume II, Estimated Income: CRT Look-Up Study, Individual Case Follow-Up Study, IRS Sample Study.

    ERIC Educational Resources Information Center

    Novalis, Carol

    The use of estimated income to analyze financial need of applicants to the Basic Educational Opportunity Grant (BEOG) program was investigated. Attention was focused on: how well applicants estimate their income; reasons for errors in estimation, and whether applicants supplying income tax returns supply true versions. For 1,547 eligible BEOG…

  4. Numerical Analysis of an H^1-Galerkin Mixed Finite Element Method for Time Fractional Telegraph Equation

    PubMed Central

    Wang, Jinfeng; Zhao, Meng; Zhang, Min; Liu, Yang; Li, Hong

    2014-01-01

    We discuss and analyze an H^1-Galerkin mixed finite element (H^1-GMFE) method to look for the numerical solution of the time fractional telegraph equation. We introduce an auxiliary variable to reduce the original equation into lower-order coupled equations and then formulate an H^1-GMFE scheme with two important variables. We discretize the Caputo time fractional derivatives using finite difference methods and approximate the spatial direction by applying the H^1-GMFE method. Based on the discussion of the theoretical error analysis in the L^2-norm for the scalar unknown and its gradient in the one-dimensional case, we obtain the optimal order of convergence in the space-time direction. Further, we also derive the optimal error results for the scalar unknown in the H^1-norm. Moreover, we derive and analyze the stability of the H^1-GMFE scheme and give the results of a priori error estimates in two- or three-dimensional cases. In order to verify our theoretical analysis, we give some results of numerical calculation by using the Matlab procedure. PMID:25184148

  5. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    PubMed

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on the nurses' work experience in relation to the risk and type of medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to the type and the individual and system contributory factors were made. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (k) was computed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found; the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found; the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common in less experienced nurses. The experienced nurses were more prone to "Practice beyond scope of practice" and to making errors in spite of "Lack of adequate access to guidelines or unclear organisational routines". Medication errors regarded as malpractice in Sweden were of the same character as medication errors worldwide. A complex interplay between individual and system factors often contributed to the errors.

  6. Application of spectral decomposition algorithm for mapping water quality in a turbid lake (Lake Kasumigaura, Japan) from Landsat TM data

    NASA Astrophysics Data System (ADS)

    Oyama, Youichi; Matsushita, Bunkei; Fukushima, Takehiko; Matsushige, Kazuo; Imai, Akio

    The remote sensing of Case 2 water has been far less successful than that of Case 1 water, due mainly to the complex interactions among optically active substances (e.g., phytoplankton, suspended sediments, colored dissolved organic matter, and water) in the former. To address this problem, we developed a spectral decomposition algorithm (SDA), based on a spectral linear mixture modeling approach. Through a tank experiment, we found that the SDA-based models were superior to conventional empirical models (e.g. using single band, band ratio, or arithmetic calculation of band) for accurate estimates of water quality parameters. In this paper, we develop a method for applying the SDA to Landsat-5 TM data on Lake Kasumigaura, a eutrophic lake in Japan characterized by high concentrations of suspended sediment, for mapping chlorophyll-a (Chl-a) and non-phytoplankton suspended sediment (NPSS) distributions. The results show that the SDA-based estimation model can be obtained by a tank experiment. Moreover, by combining this estimation model with satellite-SRSs (standard reflectance spectra: i.e., spectral end-members) derived from bio-optical modeling, we can directly apply the model to a satellite image. The same SDA-based estimation model for Chl-a concentration was applied to two Landsat-5 TM images, one acquired in April 1994 and the other in February 2006. The average Chl-a estimation error between the two was 9.9%, a result that indicates the potential robustness of the SDA-based estimation model. The average estimation error of NPSS concentration from the 2006 Landsat-5 TM image was 15.9%. The key point for successfully applying the SDA-based estimation model to satellite data is the method used to obtain a suitable satellite-SRS for each end-member.
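
    The linear-unmixing step at the heart of a spectral decomposition approach can be illustrated with a generic non-negative least-squares fit of one pixel spectrum to a small set of end-member spectra. The matrix below is an invented placeholder for standard reflectance spectra (water, phytoplankton, non-phytoplankton sediment); this is only the generic unmixing idea, not the authors' specific SDA or its satellite-SRS derivation.

      import numpy as np
      from scipy.optimize import nnls

      # hypothetical end-member spectra in four sensor bands, one column per end-member
      E = np.array([[0.020, 0.05, 0.09],
                    [0.030, 0.08, 0.14],
                    [0.015, 0.06, 0.17],
                    [0.010, 0.03, 0.12]])
      r = np.array([0.022, 0.055, 0.060, 0.038])   # observed reflectance of one pixel, same bands

      abundances, resid = nnls(E, r)               # non-negative decomposition of the pixel spectrum
      print("end-member fractions:", abundances.round(3), " residual:", round(resid, 4))
      # a water-quality model (e.g. Chl-a vs. the phytoplankton fraction) calibrated from the
      # tank experiment would then be applied to the decomposed fractions of every pixel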

  7. Disclosing harmful medical errors to patients: tackling three tough cases.

    PubMed

    Gallagher, Thomas H; Bell, Sigall K; Smith, Kelly M; Mello, Michelle M; McDonald, Timothy B

    2009-09-01

    A gap exists between recommendations to disclose errors to patients and current practice. This gap may reflect important, yet unanswered questions about implementing disclosure principles. We explore some of these unanswered questions by presenting three real cases that pose challenging disclosure dilemmas. The first case involves a pancreas transplant that failed due to the pancreas graft being discarded, an error that was not disclosed partly because the family did not ask clarifying questions. Relying on patient or family questions to determine the content of disclosure is problematic. We propose a standard of materiality that can help clinicians to decide what information to disclose. The second case involves a fatal diagnostic error that the patient's widower was unaware had happened. The error was not disclosed out of concern that disclosure would cause the widower more harm than good. This case highlights how institutions can overlook patients' and families' needs following errors and emphasizes that benevolent deception has little role in disclosure. Institutions should consider whether involving neutral third parties could make disclosures more patient centered. The third case presents an intraoperative cardiac arrest due to a large air embolism where uncertainty around the clinical event was high and complicated the disclosure. Uncertainty is common to many medical errors but should not deter open conversations with patients and families about what is and is not known about the event. Continued discussion within the medical profession about applying disclosure principles to real-world cases can help to better meet patients' and families' needs following medical errors.

  8. Local error estimates for discontinuous solutions of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Tadmor, Eitan

    1989-01-01

    Let u(x,t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u_ε(x,t) is the solution of an approximate viscosity regularization, where ε > 0 is the small viscosity amplitude. It is shown that by post-processing the small viscosity approximation u_ε, pointwise values of u and its derivatives can be recovered with an error as close to ε as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W^{1,∞} energy estimate for the discontinuous backward transport equation; this, in turn, leads one to an ε-uniform estimate on moments of the error u_ε − u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.

  9. Error mitigation for CCSDS compressed imager data

    NASA Astrophysics Data System (ADS)

    Gladkova, Irina; Grossberg, Michael; Gottipati, Srikanth; Shahriar, Fazlul; Bonev, George

    2009-08-01

    To efficiently use the limited bandwidth available on the downlink from satellite to ground station, imager data is usually compressed before transmission. Transmission introduces unavoidable errors, which are only partially removed by forward error correction and packetization. In the case of the commonly used CCSDS Rice-based compression, these errors result in a contiguous sequence of dummy values along scan lines in a band of the imager data. We have developed a method capable of using the image statistics to provide a principled estimate of the missing data. Our method outperforms interpolation yet can be performed fast enough to provide uninterrupted data flow. The estimation of the lost data provides significant value to end users who may use only part of the data, may not have statistical tools, or lack the expertise to mitigate the impact of the lost data. Since the locations of the lost data will be clearly marked as metadata in the HDF or NetCDF header, experts who prefer to handle error mitigation themselves will be free to use or ignore our estimates as they see fit.

  10. Estimating multivariate response surface model with data outliers, case study in enhancing surface layer properties of an aircraft aluminium alloy

    NASA Astrophysics Data System (ADS)

    Widodo, Edy; Kariyam

    2017-03-01

    Response Surface Methodology (RSM) is used to determine the input variable settings that achieve the optimal compromise among the response variables. There are three primary steps in an RSM problem, namely data collection, modelling, and optimization. This study focuses on the establishment of response surface models, under the assumption that the data collected are correct. Usually the response surface model parameters are estimated by OLS. However, this method is highly sensitive to outliers. Outliers can generate substantial residuals and often affect the estimated models. The resulting estimators can be biased and can lead to errors in the determination of the optimal point, so that the main purpose of RSM is not reached. Meanwhile, in real applications, the collected data often contain several response variables for a common set of independent variables. Treating each response separately and applying single-response procedures can result in the wrong interpretation. A model for the multi-response case is therefore needed, namely a multivariate response surface model that is resistant to outliers. As an alternative, this study discusses M-estimation as a parameter estimator for multivariate response surface models containing outliers. As an illustration, a case study is presented on experimental results for enhancing the surface layer of an aircraft aluminium alloy by shot peening.
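
    The paper's multivariate M-estimator is not reproduced here; as a minimal sketch of the basic idea, the snippet below fits a second-order response surface in two coded factors by OLS and by a Huber M-estimator after injecting one gross outlier, and compares the coefficients. The design, true coefficients, and noise level are all made up for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Two coded input variables on a 5 x 5 grid
x1, x2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1, x2 = x1.ravel(), x2.ravel()

# Second-order response surface design matrix: 1, x1, x2, x1*x2, x1^2, x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta_true = np.array([10.0, 2.0, -1.5, 0.5, -3.0, -2.0])

y = X @ beta_true + rng.normal(scale=0.2, size=x1.size)
y[3] += 8.0   # inject a gross outlier

ols = sm.OLS(y, X).fit()
rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # Huber M-estimation

print("OLS coefficients        :", np.round(ols.params, 2))
print("M-estimator coefficients:", np.round(rlm.params, 2))
# The M-estimator stays close to beta_true; OLS is pulled by the outlier.
```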

  11. 45 CFR 98.102 - Content of Error Rate Reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Funds and State Matching and Maintenance-of-Effort (MOE Funds): (1) Percentage of cases with an error... cases in the sample with an error compared to the total number of cases in the sample; (2) Percentage of cases with an improper payment (both over and under payments), expressed as the total number of cases in...

  12. Global identification of stochastic dynamical systems under different pseudo-static operating conditions: The functionally pooled ARMAX case

    NASA Astrophysics Data System (ADS)

    Sakellariou, J. S.; Fassois, S. D.

    2017-01-01

    The identification of a single global model for a stochastic dynamical system operating under various conditions is considered. Each operating condition is assumed to have a pseudo-static effect on the dynamics and be characterized by a single measurable scheduling variable. Identification is accomplished within a recently introduced Functionally Pooled (FP) framework, which offers a number of advantages over Linear Parameter Varying (LPV) identification techniques. The focus of the work is on the extension of the framework to include the important FP-ARMAX model case. Compared to their simpler FP-ARX counterparts, FP-ARMAX models are much more general and offer improved flexibility in describing various types of stochastic noise, but at the same time lead to a more complicated, non-quadratic, estimation problem. Prediction Error (PE), Maximum Likelihood (ML), and multi-stage estimation methods are postulated, and the PE estimator optimality, in terms of consistency and asymptotic efficiency, is analytically established. The postulated estimators are numerically assessed via Monte Carlo experiments, while the effectiveness of the approach and its superiority over its FP-ARX counterpart are demonstrated via an application case study pertaining to simulated railway vehicle suspension dynamics under various mass loading conditions.
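
    The FP-ARMAX estimators involve a non-quadratic prediction-error problem and are not reproduced here; the sketch below only illustrates the functional-pooling idea in its simpler FP-ARX form, where the AR and X coefficients are expanded on basis functions of a scheduling variable and estimated jointly across operating conditions by linear least squares. The system, basis, and scheduling values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def basis(k):
    """Functional basis in the scheduling variable k (here: 1, k, k^2)."""
    return np.array([1.0, k, k**2])

def simulate(k, n=500):
    """Simulate an ARX(1,1) system whose coefficients vary smoothly with k."""
    a = -0.5 + 0.3 * k          # true a1(k)
    b = 1.0 + 0.5 * k**2        # true b1(k)
    u = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = -a * y[t - 1] + b * u[t - 1] + 0.05 * rng.normal()
    return y, u

# Pool data from several operating conditions into one regression problem
rows, targets = [], []
for k in [0.0, 0.25, 0.5, 0.75, 1.0]:
    y, u = simulate(k)
    g = basis(k)
    for t in range(1, len(y)):
        # regressors: -y[t-1]*g_j and u[t-1]*g_j for every basis function g_j
        rows.append(np.concatenate([-y[t - 1] * g, u[t - 1] * g]))
        targets.append(y[t])

theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
a_coeffs, b_coeffs = theta[:3], theta[3:]
print("a1(k) projection coefficients:", np.round(a_coeffs, 3))  # ~[-0.5, 0.3, 0.0]
print("b1(k) projection coefficients:", np.round(b_coeffs, 3))  # ~[ 1.0, 0.0, 0.5]
```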

  13. Validation of prostate-specific antigen laboratory values recorded in Surveillance, Epidemiology, and End Results registries.

    PubMed

    Adamo, Margaret Peggy; Boten, Jessica A; Coyle, Linda M; Cronin, Kathleen A; Lam, Clara J K; Negoita, Serban; Penberthy, Lynne; Stevens, Jennifer L; Ward, Kevin C

    2017-02-15

    Researchers have used prostate-specific antigen (PSA) values collected by central cancer registries to evaluate tumors for potential aggressive clinical disease. An independent study collecting PSA values suggested a high error rate (18%) related to implied decimal points. To evaluate the error rate in the Surveillance, Epidemiology, and End Results (SEER) program, a comprehensive review of PSA values recorded across all SEER registries was performed. Consolidated PSA values for eligible prostate cancer cases in SEER registries were reviewed and compared with text documentation from abstracted records. Four types of classification errors were identified: implied decimal point errors, abstraction or coding implementation errors, nonsignificant errors, and changes related to "unknown" values. A total of 50,277 prostate cancer cases diagnosed in 2012 were reviewed. Approximately 94.15% of cases did not have meaningful changes (85.85% correct, 5.58% with a nonsignificant change of <1 ng/mL, and 2.80% with no clinical change). Approximately 5.70% of cases had meaningful changes (1.93% due to implied decimal point errors, 1.54% due to abstract or coding errors, and 2.23% due to errors related to unknown categories). Only 419 of the original 50,277 cases (0.83%) resulted in a change in disease stage due to a corrected PSA value. The implied decimal error rate was only 1.93% of all cases in the current validation study, with a meaningful error rate of 5.81%. The reasons for the lower error rate in SEER are likely due to ongoing and rigorous quality control and visual editing processes by the central registries. The SEER program currently is reviewing and correcting PSA values back to 2004 and will re-release these data in the public use research file. Cancer 2017;123:697-703. © 2016 American Cancer Society. © 2016 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.

  14. Population viability analysis with species occurrence data from museum collections.

    PubMed

    Skarpaas, Olav; Stabbetorp, Odd E

    2011-06-01

    The most comprehensive data on many species come from scientific collections. Thus, we developed a method of population viability analysis (PVA) in which this type of occurrence data can be used. In contrast to classical PVA, our approach accounts for the inherent observation error in occurrence data and allows the estimation of the population parameters needed for viability analysis. We tested the sensitivity of the approach to spatial resolution of the data, length of the time series, sampling effort, and detection probability with simulated data and conducted PVAs for common, rare, and threatened species. We compared the results of these PVAs with results of standard method PVAs in which observation error is ignored. Our method provided realistic estimates of population growth terms and quasi-extinction risk in cases in which the standard method without observation error could not. For low values of any of the sampling variables we tested, precision decreased, and in some cases biased estimates resulted. The results of our PVAs with the example species were consistent with information in the literature on these species. Our approach may facilitate PVA for a wide range of species of conservation concern for which demographic data are lacking but occurrence data are readily available. ©2011 Society for Conservation Biology.

  15. Error analysis of speed of sound reconstruction in ultrasound limited angle transmission tomography.

    PubMed

    Jintamethasawat, Rungroj; Lee, Won-Mean; Carson, Paul L; Hooi, Fong Ming; Fowlkes, J Brian; Goodsitt, Mitchell M; Sampson, Richard; Wenisch, Thomas F; Wei, Siyuan; Zhou, Jian; Chakrabarti, Chaitali; Kripfgans, Oliver D

    2018-04-07

    We have investigated limited angle transmission tomography to estimate speed of sound (SOS) distributions for breast cancer detection. This requires both accurate delineation of major tissues, in this case by segmentation of prior B-mode images, and calibration of the relative positions of the opposed transducers. Experimental sensitivity evaluation of the reconstructions with respect to segmentation and calibration errors is difficult with our current system. Therefore, parametric studies of SOS errors in our bent-ray reconstructions were simulated. They included mis-segmentation of an object of interest or a nearby object, and miscalibration of the relative transducer positions in 3D. Close correspondence of reconstruction accuracy was verified in the simplest case, a cylindrical object in a homogeneous background with induced segmentation and calibration inaccuracies. Simulated mis-segmentation in object size and lateral location produced maximum SOS errors of 6.3% within a 10 mm diameter change and 9.1% within a 5 mm shift, respectively. Modest errors in the assumed transducer separation produced the largest SOS errors among the miscalibrations (57.3% within a 5 mm shift); still, correction of this type of error can easily be achieved in the clinic. This study should aid in designing adequate transducer mounts and calibration procedures, and in the specification of B-mode image quality and segmentation algorithms for limited angle transmission tomography relying on ray tracing algorithms. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Modeling misidentification errors in capture-recapture studies using photographic identification of evolving marks

    USGS Publications Warehouse

    Yoshizaki, J.; Pollock, K.H.; Brownie, C.; Webster, R.A.

    2009-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) are used to identify individual animals in a capture-recapture study. Photographic identification (photoID) typically uses photographic images of animals' naturally existing features as tags (photographic tags) and is subject to two main causes of identification errors: those related to quality of photographs (non-evolving natural tags) and those related to changes in natural marks (evolving natural tags). The conventional methods for analysis of capture-recapture data do not account for identification errors, and to do so requires a detailed understanding of the misidentification mechanism. Focusing on the situation where errors are due to evolving natural tags, we propose a misidentification mechanism and outline a framework for modeling the effect of misidentification in closed population studies. We introduce methods for estimating population size based on this model. Using a simulation study, we show that conventional estimators can seriously overestimate population size when errors due to misidentification are ignored, and that, in comparison, our new estimators have better properties except in cases with low capture probabilities (<0.2) or low misidentification rates (<2.5%). © 2009 by the Ecological Society of America.
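
    The authors' closed-population estimators are not reproduced here; the toy simulation below only illustrates the direction of the bias they describe: when an evolving natural tag changes between two occasions, a recaptured animal is recorded as a new individual (a "ghost"), which deflates matched recaptures and inflates a naive Lincoln-Petersen estimate. All rates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

N, p, alpha = 500, 0.3, 0.10   # true size, capture prob., misidentification prob.
estimates = []
for _ in range(5000):
    cap1 = rng.random(N) < p               # captured on occasion 1
    cap2 = rng.random(N) < p               # captured on occasion 2
    both = cap1 & cap2
    # With probability alpha the evolving tag has changed, so the occasion-2
    # capture is not matched to occasion 1 and creates a "ghost" individual.
    ghost = both & (rng.random(N) < alpha)
    n1 = cap1.sum()
    n2 = cap2.sum()                        # ghosts still count as occasion-2 captures
    m2 = (both & ~ghost).sum()             # matched recaptures only
    if m2 > 0:
        estimates.append(n1 * n2 / m2)     # naive Lincoln-Petersen estimator

print("true N:", N, " mean naive estimate:", round(np.mean(estimates), 1))
# The naive estimator overestimates N, as reported for conventional estimators.
```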

  17. Correcting AUC for Measurement Error.

    PubMed

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
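
    The proposed correction itself does not rely on normality and is not reproduced here; as a point of reference, the sketch below shows the classical binormal version of the problem: how measurement error attenuates the AUC of a biomarker and how, if the error variance is known (e.g., from replicates), the AUC can be disattenuated. All parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# True biomarker: controls ~ N(0, 1), cases ~ N(1, 1); measurement error N(0, 0.8^2)
n, delta, sigma, sigma_e = 20000, 1.0, 1.0, 0.8
truth = np.concatenate([rng.normal(0, sigma, n), rng.normal(delta, sigma, n)])
labels = np.concatenate([np.zeros(n), np.ones(n)])
observed = truth + rng.normal(0, sigma_e, truth.size)

auc_true = roc_auc_score(labels, truth)
auc_obs = roc_auc_score(labels, observed)          # attenuated towards 0.5

# Binormal model: AUC = Phi(delta / sqrt(var_controls + var_cases)), so rescale
# the probit of the observed AUC by the ratio of total SD to error-free SD.
ratio = np.sqrt((2 * sigma**2 + 2 * sigma_e**2) / (2 * sigma**2))
auc_corrected = norm.cdf(norm.ppf(auc_obs) * ratio)

print(f"true {auc_true:.3f}  observed {auc_obs:.3f}  corrected {auc_corrected:.3f}")
```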

  18. A game theory approach to target tracking in sensor networks.

    PubMed

    Gu, Dongbing

    2011-02-01

    In this paper, we investigate a moving-target tracking problem with sensor networks. Each sensor node has a sensor to observe the target and a processor to estimate the target position. It also has wireless communication capability but with limited range and can only communicate with neighbors. The moving target is assumed to be an intelligent agent, which is "smart" enough to escape from the detection by maximizing the estimation error. This adversary behavior makes the target tracking problem more difficult. We formulate this target estimation problem as a zero-sum game in this paper and use a minimax filter to estimate the target position. The minimax filter is a robust filter that minimizes the estimation error by considering the worst case noise. Furthermore, we develop a distributed version of the minimax filter for multiple sensor nodes. The distributed computation is implemented via modeling the information received from neighbors as measurements in the minimax filter. The simulation results show that the target tracking algorithm proposed in this paper provides a satisfactory result.

  19. The systematic component of phylogenetic error as a function of taxonomic sampling under parsimony.

    PubMed

    Debry, Ronald W

    2005-06-01

    The effect of taxonomic sampling on phylogenetic accuracy under parsimony is examined by simulating nucleotide sequence evolution. Random error is minimized by using very large numbers of simulated characters. This allows estimation of the consistency behavior of parsimony, even for trees with up to 100 taxa. Data were simulated on 8 distinct 100-taxon model trees and analyzed as stratified subsets containing either 25 or 50 taxa, in addition to the full 100-taxon data set. Overall accuracy decreased in a majority of cases when taxa were added. However, the magnitude of change in the cases in which accuracy increased was larger than the magnitude of change in the cases in which accuracy decreased, so, on average, overall accuracy increased as more taxa were included. A stratified sampling scheme was used to assess accuracy for an initial subsample of 25 taxa. The 25-taxon analyses were compared to 50- and 100-taxon analyses that were pruned to include only the original 25 taxa. On average, accuracy for the 25 taxa was improved by taxon addition, but there was considerable variation in the degree of improvement among the model trees and across different rates of substitution.

  20. Ensemble-Based Parameter Estimation in a Coupled General Circulation Model

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-09-10

    Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.

  1. CME Velocity and Acceleration Error Estimates Using the Bootstrap Method

    NASA Technical Reports Server (NTRS)

    Michalek, Grzegorz; Gopalswamy, Nat; Yashiro, Seiji

    2017-01-01

    The bootstrap method is used to determine errors of basic attributes of coronal mass ejections (CMEs) visually identified in images obtained by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) instruments. The basic parameters of CMEs are stored, among others, in a database known as the SOHO/LASCO CME catalog and are widely employed for many research studies. The basic attributes of CMEs (e.g. velocity and acceleration) are obtained from manually generated height-time plots. The subjective nature of manual measurements introduces random errors that are difficult to quantify. In many studies the impact of such measurement errors is overlooked. In this study we present a new possibility to estimate measurement errors in the basic attributes of CMEs. This approach is a computer-intensive method because it requires repeating the original data analysis procedure several times using replicate datasets. This is also commonly called the bootstrap method in the literature. We show that the bootstrap approach can be used to estimate the errors of the basic attributes of CMEs having moderately large numbers of height-time measurements. The velocity errors are small in the vast majority of cases and depend mostly on the number of height-time points measured for a particular event. In the case of acceleration, the errors are significant, and for more than half of all CMEs, they are larger than the acceleration itself.
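
    The catalog's fitting procedure is not reproduced here; as a minimal sketch of the bootstrap idea, the snippet below fits a quadratic to hypothetical height-time points, resamples the points with replacement, refits, and reports the spread of the resulting velocity and acceleration estimates. The data values and units are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical height-time measurements for one CME (time in hours,
# height in solar radii); values are illustrative only.
t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
h = np.array([3.1, 4.0, 5.2, 6.1, 7.4, 8.4, 9.8, 11.0])

def fit(ts, hs):
    # h(t) = h0 + v*t + 0.5*a*t^2  -> quadratic polynomial fit
    c2, c1, c0 = np.polyfit(ts, hs, deg=2)
    return c1, 2.0 * c2              # velocity at t = 0, acceleration

v0, a0 = fit(t, h)

# Bootstrap: resample the height-time points with replacement and refit
boot = []
for _ in range(2000):
    idx = rng.integers(0, t.size, t.size)
    if np.unique(t[idx]).size < 3:   # need at least 3 distinct times
        continue
    boot.append(fit(t[idx], h[idx]))
boot = np.array(boot)

v_err, a_err = boot.std(axis=0)
print(f"velocity     {v0:.2f} +/- {v_err:.2f} (arbitrary units)")
print(f"acceleration {a0:.2f} +/- {a_err:.2f}")
```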

  2. Robust estimation of thermodynamic parameters (ΔH, ΔS and ΔCp) for prediction of retention time in gas chromatography - Part II (Application).

    PubMed

    Claumann, Carlos Alberto; Wüst Zibetti, André; Bolzan, Ariovaldo; Machado, Ricardo A F; Pinto, Leonel Teixeira

    2015-12-18

    In this work, an analysis of parameter estimation for the retention factor model in GC was performed, considering two different criteria: the sum of squared errors and the maximum error in absolute value; relevant statistics are described for each case. The main contribution of this work is the implementation of a specialized initialization scheme for the estimated parameters, which features fast convergence (low computational time) and is based on knowledge of the surface of the error criterion. In an application to a series of alkanes, the specialized initialization resulted in a significant reduction in the number of evaluations of the objective function (reducing computational time) in the parameter estimation. The reduction obtained was between one and two orders of magnitude compared with simple random initialization. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Improving Forecast Skill by Assimilation of AIRS Cloud Cleared Radiances RiCC

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Rosenberg, Robert I.; Iredell, Lena

    2015-01-01

    ECMWF, NCEP, and GMAO routinely assimilate radiosonde and other in-situ observations along with satellite IR and MW sounder radiance observations. NCEP and GMAO use the NCEP GSI Data Assimilation System (DAS). The GSI DAS assimilates AIRS, CrIS, and IASI channel radiances Ri on a channel-by-channel, case-by-case basis, only for those channels i thought to be unaffected by cloud cover. This test excludes Ri for most tropospheric sounding channels under partial cloud cover conditions. AIRS Version-6 RiCC is a derived quantity representative of what AIRS channel i would have seen if the AIRS field of regard (FOR) were cloud free. All values of RiCC have case-by-case error estimates associated with them. Our experiments present QCd values of AIRS RiCC to the GSI in place of AIRS Ri observations. The GSI DAS assimilates only those values of RiCC it considers to be cloud free. This potentially allows for better coverage of assimilated QCd values of RiCC as compared to Ri.

  4. Bayesian design criteria: computation, comparison, and application to a pharmacokinetic and a pharmacodynamic model.

    PubMed

    Merlé, Y; Mentré, F

    1995-02-01

    In this paper, 3 criteria to design experiments for Bayesian estimation of the parameters of models that are nonlinear with respect to their parameters, when a prior distribution is available, are presented: the determinant of the Bayesian information matrix, the determinant of the pre-posterior covariance matrix, and the expected information provided by an experiment. A procedure to simplify the computation of these criteria is proposed for the case of continuous prior distributions and is compared with the criterion obtained from a linearization of the model about the mean of the prior distribution of the parameters. This procedure is applied to two models commonly encountered in the area of pharmacokinetics and pharmacodynamics: the one-compartment open model with bolus intravenous single-dose injection and the Emax model. Both involve two parameters. Additive as well as multiplicative Gaussian measurement errors are considered, with normal prior distributions. Various combinations of the variances of the prior distribution and of the measurement error are studied. Our attention is restricted to designs with limited numbers of measurements (1 or 2 measurements), a situation that often occurs in practice when Bayesian estimation is performed. The optimal Bayesian designs that result vary with the variances of the parameter distribution and with the measurement error. The two-point optimal designs sometimes differ from the D-optimal designs for the mean of the prior distribution and may consist of replicated measurements. For the studied cases, the determinant of the Bayesian information matrix and its linearized form lead to the same optimal designs. In some cases, the pre-posterior covariance matrix can be far from its lower bound, namely the inverse of the Bayesian information matrix, especially for the Emax model and a multiplicative measurement error. The expected information provided by the experiment and the determinant of the pre-posterior covariance matrix generally lead to the same designs, except for the Emax model with multiplicative measurement error. These results show that the criteria can be easily computed and could be incorporated into modules for designing experiments.
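
    As a rough worked illustration of the first criterion only, the sketch below evaluates one common formulation of the Bayesian information matrix, the prior expectation of the Fisher information plus the prior precision, for the one-compartment bolus model with additive error, and grid-searches the single sampling time maximizing its determinant. The dose, error SD, and prior moments are hypothetical, and the exact criterion used in the paper may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(4)

dose, sigma = 100.0, 0.5              # bolus dose and additive error SD (hypothetical)
prior_mean = np.array([20.0, 0.3])    # [V (L), k (1/h)]
prior_cov = np.diag([4.0**2, 0.06**2])
prior_prec = np.linalg.inv(prior_cov)

def grad_conc(theta, t):
    """Gradient of C(t) = (D/V) exp(-k t) with respect to (V, k)."""
    V, k = theta
    c = dose / V * np.exp(-k * t)
    return np.array([-c / V, -c * t])

def bayes_info(t, ndraws=2000):
    """M_B = E_prior[Fisher information at a single time t] + prior precision."""
    draws = rng.multivariate_normal(prior_mean, prior_cov, size=ndraws)
    draws = draws[(draws > 0).all(axis=1)]          # keep physically meaningful draws
    fims = [np.outer(g, g) / sigma**2 for g in (grad_conc(th, t) for th in draws)]
    return np.mean(fims, axis=0) + prior_prec

times = np.linspace(0.25, 24.0, 96)
crit = [np.linalg.det(bayes_info(t)) for t in times]
best = times[int(np.argmax(crit))]
print(f"one-point Bayesian D-optimal sampling time ~ {best:.2f} h")
```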

  5. Quantum algorithm for association rules mining

    NASA Astrophysics Data System (ADS)

    Yu, Chao-Hua; Gao, Fei; Wang, Qing-Le; Wen, Qiao-Yan

    2016-10-01

    Association rules mining (ARM) is one of the most important problems in knowledge discovery and data mining. Given a transaction database that has a large number of transactions and items, the task of ARM is to acquire consumption habits of customers by discovering the relationships between itemsets (sets of items). In this paper, we address ARM in the quantum settings and propose a quantum algorithm for the key part of ARM, finding frequent itemsets from the candidate itemsets and acquiring their supports. Specifically, for the case in which there are M_f^(k) frequent k-itemsets among the M_c^(k) candidate k-itemsets (M_f^(k) ≤ M_c^(k)), our algorithm can efficiently mine these frequent k-itemsets and estimate their supports by using parallel amplitude estimation and amplitude amplification with complexity O(k√(M_c^(k) M_f^(k))/ε), where ε is the error for estimating the supports. Compared with the classical counterpart, i.e., the classical sampling-based algorithm, whose complexity is O(k M_c^(k)/ε²), our quantum algorithm quadratically improves the dependence on both ε and M_c^(k) in the best case when M_f^(k) ≪ M_c^(k), and on ε alone in the worst case when M_f^(k) ≈ M_c^(k).

  6. 45 CFR 98.100 - Error Rate Report.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND... rates, which is defined as the percentage of cases with an error (expressed as the total number of cases with an error compared to the total number of cases); the percentage of cases with an improper payment...

  7. Robust Vision-Based Pose Estimation Algorithm for an UAV with Known Gravity Vector

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2016-06-01

    Accurate estimation of camera external orientation with respect to a known object is one of the central problems in photogrammetry and computer vision. In recent years this problem has been gaining increasing attention in the field of UAV autonomous flight. Such applications require real-time performance and robustness of the external orientation estimation algorithm. The accuracy of the solution is strongly dependent on the number of reference points visible in the given image. The problem has an analytical solution only if 3 or more reference points are visible. However, in limited visibility conditions it is often necessary to perform external orientation with only 2 visible reference points. In such a case the solution can be found if the direction of the gravity vector in the camera coordinate system is known. A number of algorithms for external orientation estimation for the case of 2 known reference points and a gravity vector have been developed to date. Most of these algorithms provide an analytical solution in the form of a polynomial equation that is subject to large errors in the case of complex reference point configurations. This paper is focused on the development of a new computationally efficient and robust algorithm for external orientation based on the positions of 2 known reference points and a gravity vector. The algorithm's implementation for guidance of a Parrot AR.Drone 2.0 micro-UAV is discussed. The experimental evaluation of the algorithm proved its computational efficiency and robustness against errors in reference point positions and complex configurations.

  8. Random Versus Nonrandom Peer Review: A Case for More Meaningful Peer Review.

    PubMed

    Itri, Jason N; Donithan, Adam; Patel, Sohil H

    2018-05-10

    Random peer review programs are not optimized to discover cases with diagnostic error and thus have inherent limitations with respect to educational and quality improvement value. Nonrandom peer review offers an alternative approach in which diagnostic error cases are targeted for collection during routine clinical practice. The objective of this study was to compare error cases identified through random and nonrandom peer review approaches at an academic center. During the 1-year study period, the number of discrepancy cases and score of discrepancy were determined from each approach. The nonrandom peer review process collected 190 cases, of which 60 were scored as 2 (minor discrepancy), 94 as 3 (significant discrepancy), and 36 as 4 (major discrepancy). In the random peer review process, 1,690 cases were reviewed, of which 1,646 were scored as 1 (no discrepancy), 44 were scored as 2 (minor discrepancy), and none were scored as 3 or 4. Several teaching lessons and quality improvement measures were developed as a result of analysis of error cases collected through the nonrandom peer review process. Our experience supports the implementation of nonrandom peer review as a replacement to random peer review, with nonrandom peer review serving as a more effective method for collecting diagnostic error cases with educational and quality improvement value. Copyright © 2018 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  9. Generally objective measurement of human temperature and reading ability: some corollaries.

    PubMed

    Stenner, A Jackson; Stone, Mark

    2010-01-01

    We argue that a goal of measurement is general objectivity: point estimates of a person's measure (height, temperature, and reader ability) should be independent of the instrument and independent of the sample in which the person happens to find herself. In contrast, Rasch's concept of specific objectivity requires only differences (i.e., comparisons) between person measures to be independent of the instrument. We present a canonical case in which there is no overlap between instruments and persons: each person is measured by a unique instrument. We then show what is required to estimate measures in this degenerate case. The canonical case encourages a simplification and reconceptualization of validity and reliability. Not surprisingly, this reconceptualization looks a lot like the way physicists and chemometricians think about validity and measurement error. We animate this presentation with a technology that blurs the distinction between instruction, assessment, and generally objective measurement of reader ability. We encourage adaptation of this model to health outcomes measurement.

  10. A nudging data assimilation algorithm for the identification of groundwater pumping

    NASA Astrophysics Data System (ADS)

    Cheng, Wei-Chen; Kendall, Donald R.; Putti, Mario; Yeh, William W.-G.

    2009-08-01

    This study develops a nudging data assimilation algorithm for estimating unknown pumping from private wells in an aquifer system using measured data of hydraulic head. The proposed algorithm treats the unknown pumping as an additional sink term in the governing equation of groundwater flow and provides a consistent physical interpretation for pumping rate identification. The algorithm identifies the unknown pumping and, at the same time, reduces the forecast error in hydraulic heads. We apply the proposed algorithm to the Las Posas Groundwater Basin in southern California. We consider the following three pumping scenarios: constant pumping rates, spatially varying pumping rates, and temporally varying pumping rates. We also study the impact of head measurement errors on the proposed algorithm. In the case study we seek to estimate the six unknown pumping rates from private wells using head measurements from four observation wells. The results show an excellent rate of convergence for pumping estimation. The case study demonstrates the applicability, accuracy, and efficiency of the proposed data assimilation algorithm for the identification of unknown pumping in an aquifer system.
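
    The basin-scale groundwater model is not reproduced here; the toy example below only illustrates the nudging idea on a single linear-reservoir aquifer cell: the unknown pumping is carried as an extra sink term whose estimate is updated from the head innovation while the simulated head is nudged toward the observations. The dynamics, gains, and noise level are hypothetical and chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(10)

dt, a, recharge = 1.0, 0.05, 0.8      # time step, recession coefficient, recharge
q_true = 0.5                          # unknown pumping to be identified

def step(h, q):
    """One explicit step of a linear-reservoir head model with pumping q."""
    return h + dt * (-a * h + recharge - q)

# Generate synthetic "observed" heads from the true pumping (+ noise)
h_true, obs = 10.0, []
for _ in range(200):
    h_true = step(h_true, q_true)
    obs.append(h_true + rng.normal(0, 0.02))

# Nudging assimilation: propagate the model, nudge heads toward observations,
# and update the pumping estimate from the head innovation.
G, K = 0.3, 0.2                       # nudging gain and pumping-update gain
h_model, q_est = 10.0, 0.0
for h_obs in obs:
    h_model = step(h_model, q_est)
    innovation = h_obs - h_model
    h_model += G * innovation         # nudge the state toward the observation
    q_est -= K * innovation           # modeled head too high -> pumping too low

print(f"true pumping {q_true:.2f}, estimated pumping {q_est:.2f}")
```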

  11. A nudging data assimilation algorithm for the identification of groundwater pumping

    NASA Astrophysics Data System (ADS)

    Cheng, W.; Kendall, D. R.; Putti, M.; Yeh, W. W.

    2008-12-01

    This study develops a nudging data assimilation algorithm for estimating unknown pumping from private wells in an aquifer system using measured data of hydraulic head. The proposed algorithm treats the unknown pumping as an additional sink term in the governing equation of groundwater flow and provides a consistent physical interpretation for pumping rate identification. The algorithm identifies unknown pumping and, at the same time, reduces the forecast error in hydraulic heads. We apply the proposed algorithm to the Las Posas Groundwater Basin in southern California. We consider the following three pumping scenarios: constant pumping rates, spatially varying pumping rates, and temporally varying pumping rates. We also study the impact of head measurement errors on the proposed algorithm. In the case study, we seek to estimate the six unknown pumping rates from private wells using head measurements from four observation wells. The results show an excellent rate of convergence for pumping estimation. The case study demonstrates the applicability, accuracy, and efficiency of the proposed data assimilation algorithm for the identification of unknown pumping in an aquifer system.

  12. Expectation maximization-based likelihood inference for flexible cure rate models with Weibull lifetimes.

    PubMed

    Balakrishnan, Narayanaswamy; Pal, Suvra

    2016-08-01

    Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored and expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with a real data on cancer recurrence. © The Author(s) 2013.
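
    The EM steps for the Conway-Maxwell-Poisson cure rate model are not reproduced here; as a minimal building block of the same estimation problem, the sketch below maximizes the right-censored Weibull log-likelihood directly (no cure fraction) and obtains approximate standard errors from the inverse Hessian. The simulated data and parameter values are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)

# Simulated right-censored Weibull lifetimes (shape = 1.5, scale = 2.0)
shape_true, scale_true, n = 1.5, 2.0, 400
t = scale_true * rng.weibull(shape_true, n)
c = rng.uniform(0, 4, n)                  # censoring times
obs = np.minimum(t, c)
event = (t <= c).astype(float)            # 1 = event observed, 0 = right censored

def negloglik(params):
    k, lam = np.exp(params)               # log-parameterisation keeps k, lam > 0
    z = (obs / lam) ** k
    logf = np.log(k / lam) + (k - 1) * np.log(obs / lam) - z   # log density
    logS = -z                                                  # log survival
    return -np.sum(event * logf + (1 - event) * logS)

fit = minimize(negloglik, x0=np.log([1.0, 1.0]), method="BFGS")
k_hat, lam_hat = np.exp(fit.x)

# Standard errors on the log scale from the BFGS inverse-Hessian approximation,
# transformed back by the delta method.
se_log = np.sqrt(np.diag(fit.hess_inv))
print(f"shape {k_hat:.2f} (SE~{k_hat*se_log[0]:.2f}), "
      f"scale {lam_hat:.2f} (SE~{lam_hat*se_log[1]:.2f})")
```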

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y.; Liu, Z.; Zhang, S.

    Parameter estimation provides a potentially powerful approach to reduce model bias for complex climate models. Here, in a twin experiment framework, the authors perform the first parameter estimation in a fully coupled ocean–atmosphere general circulation model using an ensemble coupled data assimilation system facilitated with parameter estimation. The authors first perform single-parameter estimation and then multiple-parameter estimation. In the case of the single-parameter estimation, the error of the parameter [solar penetration depth (SPD)] is reduced by over 90% after ~40 years of assimilation of the conventional observations of monthly sea surface temperature (SST) and salinity (SSS). The results of multiple-parameter estimation are less reliable than those of single-parameter estimation when only the monthly SST and SSS are assimilated. Assimilating additional observations of atmospheric data of temperature and wind improves the reliability of multiple-parameter estimation. The errors of the parameters are reduced by 90% in ~8 years of assimilation. Finally, the improved parameters also improve the model climatology. With the optimized parameters, the bias of the climatology of SST is reduced by ~90%. Altogether, this study suggests the feasibility of ensemble-based parameter estimation in a fully coupled general circulation model.

  14. How recalibration method, pricing, and coding affect DRG weights

    PubMed Central

    Carter, Grace M.; Rogowski, Jeannette A.

    1992-01-01

    We compared diagnosis-related group (DRG) weights calculated using the hospital-specific relative-value (HSR V) methodology with those calculated using the standard methodology for each year from 1985 through 1989 and analyzed differences between the two methods in detail for 1989. We provide evidence suggesting that classification error and subsidies of higher weighted cases by lower weighted cases caused compression in the weights used for payment as late as the fifth year of the prospective payment system. However, later weights calculated by the standard method are not compressed because a statistical correlation between high markups and high case-mix indexes offsets the cross-subsidization. HSR V weights from the same files are compressed because this methodology is more sensitive to cross-subsidies. However, both sets of weights produce equally good estimates of hospital-level costs net of those expenses that are paid by outlier payments. The greater compression of the HSR V weights is counterbalanced by the fact that more high-weight cases qualify as outliers. PMID:10127456

  15. New analysis strategies for micro aspheric lens metrology

    NASA Astrophysics Data System (ADS)

    Gugsa, Solomon Abebe

    Effective characterization of an aspheric micro lens is critical for understanding and improving processing in micro-optic manufacturing. Since most microlenses are plano-convex, where the convex geometry is a conic surface, current practice is often limited to obtaining an estimate of the lens conic constant, which averages out the surface geometry that departs from an exact conic surface and any additional surface irregularities. We have developed a comprehensive approach to estimating the best-fit conic and its uncertainty, and in addition propose an alternative analysis that focuses on surface errors rather than the best-fit conic constant. We describe our new analysis strategy based on the two most dominant micro lens metrology methods in use today, namely, scanning white light interferometry (SWLI) and phase shifting interferometry (PSI). We estimate several parameters from the measurement. The major uncertainty contributors for SWLI are the estimates of the base radius of curvature, the aperture of the lens, the sag of the lens, noise in the measurement, and the center of the lens. In the case of PSI the dominant uncertainty contributors are noise in the measurement, the radius of curvature, and the aperture. Our best-fit conic procedure uses least squares minimization to extract a best-fit conic value, which is then subjected to a Monte Carlo analysis to capture combined uncertainty. In our surface errors analysis procedure, we consider the surface errors as the difference between the measured geometry and the best-fit conic surface or as the difference between the measured geometry and the design specification for the lens. We focus on a Zernike polynomial description of the surface error, and again a Monte Carlo analysis is used to estimate a combined uncertainty, which in this case is an uncertainty for each Zernike coefficient. Our approach also allows us to investigate the effect of individual uncertainty parameters and measurement noise on both the best-fit conic constant analysis and the surface errors analysis, and to compare the individual contributions to the overall uncertainty.
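
    The full metrology pipeline is not reproduced here; as a minimal sketch of the best-fit conic step and its Monte Carlo uncertainty, the snippet below fits the conic sag equation to a simulated profile, then repeatedly refits with resampled noise and a perturbed radius value to obtain a spread for the conic constant. All instrument numbers (aperture, radius, noise, radius uncertainty) are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(9)

def conic_sag(r, R, K):
    """Sag of a conic surface with vertex radius R and conic constant K."""
    return r**2 / (R * (1.0 + np.sqrt(1.0 - (1.0 + K) * r**2 / R**2)))

# Simulated micro lens profile: 0.4 mm aperture radius, R = 1.2 mm, K = -0.6,
# with 20 nm RMS measurement noise (all values hypothetical; units in mm).
r = np.linspace(0, 0.4, 200)
z = conic_sag(r, 1.2, -0.6) + rng.normal(0, 20e-6, r.size)

bounds = ([0.8, -1.5], [3.0, 0.5])
popt, _ = curve_fit(conic_sag, r, z, p0=[1.0, -0.5], bounds=bounds)

# Monte Carlo: re-fit with resampled noise and a perturbed radius "truth"
# (0.5% standard uncertainty) to propagate both into the conic constant.
K_draws = []
for _ in range(1000):
    z_mc = conic_sag(r, 1.2 * (1 + rng.normal(0, 0.005)), -0.6) \
           + rng.normal(0, 20e-6, r.size)
    p_mc, _ = curve_fit(conic_sag, r, z_mc, p0=[1.0, -0.5], bounds=bounds)
    K_draws.append(p_mc[1])

print(f"best-fit K = {popt[1]:.3f}, Monte Carlo std(K) = {np.std(K_draws):.3f}")
```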

  16. Evaluation of a load cell model for dynamic calibration of the rotor systems research aircraft

    NASA Technical Reports Server (NTRS)

    Duval, R. W.; Bahrami, H.; Wellman, B.

    1985-01-01

    The Rotor Systems Research Aircraft uses load cells to isolate the rotor/transmission system from the fuselage. An analytical model of the relationship between applied rotor loads and the resulting load cell measurements is derived by applying a force-and-moment balance to the isolated rotor/transmission system. The model is then used to estimate the applied loads from measured load cell data, as obtained from a ground-based shake test. Using nominal design values for the parameters, the estimation errors, for the case of lateral forcing, were shown to be on the order of the sensor measurement noise in all but the roll axis. An unmodeled external load appears to be the source of the error in this axis.

  17. Per-pixel bias-variance decomposition of continuous errors in data-driven geospatial modeling: A case study in environmental remote sensing

    NASA Astrophysics Data System (ADS)

    Gao, Jing; Burt, James E.

    2017-12-01

    This study investigates the usefulness of a per-pixel bias-variance error decomposition (BVD) for understanding and improving spatially-explicit data-driven models of continuous variables in environmental remote sensing (ERS). BVD is a model evaluation method originating from machine learning that has not yet been examined for ERS applications. Demonstrated with a showcase regression tree model mapping land imperviousness (0-100%) using Landsat images, our results show that BVD can reveal sources of estimation error, map how these sources vary across space, reveal the effects of various model characteristics on estimation accuracy, and enable in-depth comparison of different error metrics. Specifically, BVD bias maps can help analysts identify and delineate model spatial non-stationarity; BVD variance maps can indicate potential effects of ensemble methods (e.g. bagging), and inform efficient training sample allocation - training samples should capture the full complexity of the modeled process, and more samples should be allocated to regions with more complex underlying processes rather than regions covering larger areas. Through examining the relationships between model characteristics and their effects on estimation accuracy revealed by BVD for both absolute and squared errors (i.e. error is the absolute or the squared value of the difference between observation and estimate), we found that the two error metrics embody different diagnostic emphases, can lead to different conclusions about the same model, and may suggest different solutions for performance improvement. We emphasize BVD's strength in revealing the connection between model characteristics and estimation accuracy, as understanding this relationship empowers analysts to effectively steer performance through model adjustments.
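
    The paper's BVD of the imperviousness model is not reproduced here; the sketch below shows the generic bootstrap recipe such a decomposition builds on: train many regression trees on bootstrap resamples, then split each test sample's squared error into squared bias and variance of the ensemble predictions. The data are synthetic stand-ins for pixel-level samples.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)

# Synthetic stand-in for a pixel-level regression problem
def make_data(n):
    X = rng.uniform(-2, 2, size=(n, 3))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.2, n)
    return X, y

X_train, y_train = make_data(2000)
X_test, y_test = make_data(500)

# Train trees on bootstrap resamples and collect per-sample predictions
B = 50
preds = np.empty((B, len(X_test)))
for b in range(B):
    idx = rng.integers(0, len(X_train), len(X_train))
    tree = DecisionTreeRegressor(max_depth=6, random_state=b)
    tree.fit(X_train[idx], y_train[idx])
    preds[b] = tree.predict(X_test)

mean_pred = preds.mean(axis=0)
bias_sq = (mean_pred - y_test) ** 2        # per-sample squared bias (incl. noise)
variance = preds.var(axis=0)               # per-sample variance across resamples

print("mean squared error :", round(((preds - y_test) ** 2).mean(), 3))
print("mean bias^2        :", round(bias_sq.mean(), 3))
print("mean variance      :", round(variance.mean(), 3))
# In the geospatial setting, bias_sq and variance would be mapped per pixel.
```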

  18. Effects of Including Misidentified Sharks in Life History Analyses: A Case Study on the Grey Reef Shark Carcharhinus amblyrhynchos from Papua New Guinea

    PubMed Central

    Smart, Jonathan J.; Chin, Andrew; Baje, Leontine; Green, Madeline E.; Appleyard, Sharon A.; Tobin, Andrew J.; Simpfendorfer, Colin A.; White, William T.

    2016-01-01

    Fisheries observer programs are used around the world to collect crucial information and samples that inform fisheries management. However, observer error may misidentify similar-looking shark species. This raises questions about the level of error that species misidentifications could introduce to estimates of species’ life history parameters. This study addressed these questions using the Grey Reef Shark Carcharhinus amblyrhynchos as a case study. Observer misidentification rates were quantified by validating species identifications using diagnostic photographs taken on board supplemented with DNA barcoding. Length-at-age and maturity ogive analyses were then estimated and compared with and without the misidentified individuals. Vertebrae were retained from a total of 155 sharks identified by observers as C. amblyrhynchos. However, 22 (14%) of these sharks were misidentified by the observers and were subsequently re-identified based on photographs and/or DNA barcoding. Of the 22 individuals misidentified as C. amblyrhynchos, 16 (73%) were detected using photographs and a further 6 via genetic validation. If misidentified individuals had been included, substantial error would have been introduced to both the length-at-age and the maturity estimates. Thus validating the species identification increased the accuracy of estimated life history parameters for C. amblyrhynchos. From the corrected sample a multi-model inference approach was used to estimate growth for C. amblyrhynchos using three candidate models. The model averaged length-at-age parameters for C. amblyrhynchos with the sexes combined were L∞ = 159 cm TL and L0 = 72 cm TL. Females mature at a greater length (l50 = 136 cm TL) and older age (A50 = 9.1 years) than males (l50 = 123 cm TL; A50 = 5.9 years). The inclusion of techniques to reduce misidentification in observer programs will improve the results of life history studies and ultimately improve management through the use of more accurate data for assessments. PMID:27058734

  19. Effects of Including Misidentified Sharks in Life History Analyses: A Case Study on the Grey Reef Shark Carcharhinus amblyrhynchos from Papua New Guinea.

    PubMed

    Smart, Jonathan J; Chin, Andrew; Baje, Leontine; Green, Madeline E; Appleyard, Sharon A; Tobin, Andrew J; Simpfendorfer, Colin A; White, William T

    2016-01-01

    Fisheries observer programs are used around the world to collect crucial information and samples that inform fisheries management. However, observer error may misidentify similar-looking shark species. This raises questions about the level of error that species misidentifications could introduce to estimates of species' life history parameters. This study addressed these questions using the Grey Reef Shark Carcharhinus amblyrhynchos as a case study. Observer misidentification rates were quantified by validating species identifications using diagnostic photographs taken on board supplemented with DNA barcoding. Length-at-age and maturity ogive analyses were then estimated and compared with and without the misidentified individuals. Vertebrae were retained from a total of 155 sharks identified by observers as C. amblyrhynchos. However, 22 (14%) of these sharks were misidentified by the observers and were subsequently re-identified based on photographs and/or DNA barcoding. Of the 22 individuals misidentified as C. amblyrhynchos, 16 (73%) were detected using photographs and a further 6 via genetic validation. If misidentified individuals had been included, substantial error would have been introduced to both the length-at-age and the maturity estimates. Thus validating the species identification increased the accuracy of estimated life history parameters for C. amblyrhynchos. From the corrected sample a multi-model inference approach was used to estimate growth for C. amblyrhynchos using three candidate models. The model averaged length-at-age parameters for C. amblyrhynchos with the sexes combined were L∞ = 159 cm TL and L0 = 72 cm TL. Females mature at a greater length (l50 = 136 cm TL) and older age (A50 = 9.1 years) than males (l50 = 123 cm TL; A50 = 5.9 years). The inclusion of techniques to reduce misidentification in observer programs will improve the results of life history studies and ultimately improve management through the use of more accurate data for assessments.
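
    The multi-model averaging itself is not reproduced here; as a minimal sketch of one typical candidate model, the snippet below fits a von Bertalanffy growth function (L0 parameterisation) to hypothetical length-at-age data with scipy and reports L∞ and L0 with their standard errors. The data are simulated for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)

def vbgf(age, Linf, L0, k):
    """von Bertalanffy growth function, L0 parameterisation."""
    return Linf - (Linf - L0) * np.exp(-k * age)

# Hypothetical length-at-age data (age in years, length in cm TL)
age = rng.uniform(0, 18, 150)
length = vbgf(age, 159.0, 72.0, 0.18) + rng.normal(0, 6.0, age.size)

popt, pcov = curve_fit(vbgf, age, length, p0=[150.0, 60.0, 0.2])
perr = np.sqrt(np.diag(pcov))
for name, est, se in zip(["Linf", "L0", "k"], popt, perr):
    print(f"{name:>4} = {est:6.1f} +/- {se:.1f}")
```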

  20. Door-to-door capture of incident and prevalent stroke cases in Durango, Mexico: the Brain Attack Surveillance in Durango Study.

    PubMed

    Cantu-Brito, Carlos; Majersik, Jennifer J; Sánchez, Brisa N; Ruano, Angel; Becerra-Mendoza, Daniela; Wing, Jeffrey J; Morgenstern, Lewis B

    2011-03-01

    Stroke incidence and prevalence estimates in developing countries should include stroke cases not presenting to hospital. We performed door-to-door stroke case ascertainment in Durango Municipality, Mexico, to estimate stroke incidence and prevalence and to determine the error made by only ascertaining hospital cases. Between September 2008 and March 2009, 1996 housing units were randomly sampled to screen for stroke in Durango Municipality residents 35 years of age and older. Field workers utilized a validated screening tool. Those screening positive were referred to a neurologist for history and examination and a head CT scan. Prevalence and cumulative incidence from the door-to-door surveillance were calculated and compared with previously reported hospitalization rates during the same defined time. Respondents included 2437 subjects from 1419 homes. The refusal rate was 3.8%. Twenty subjects had verified or probable stroke. The prevalence of probable or verified stroke was 7.7 per 1000 (95% CI, 4.3 per 1000-11.2 per 1000). Five patients had a stroke during the time of the hospital surveillance, yielding a cumulative incidence of 232.3 per 100 000 (95% CI, 27.8-436.9). Two of the 5 cases were captured by door-to-door surveillance but not by hospital surveillance. This study provides the first community-based stroke prevalence and incidence estimates in Mexico. The wide confidence intervals, despite the large number of surveyed housing units, suggest the need for more advanced sampling strategies for stroke surveillance in the developing world.

  1. Door-to-door capture of incident and prevalent stroke cases in Durango, Mexico – the Brain Attack Surveillance in Durango (BASID) Study

    PubMed Central

    Cantu-Brito, Carlos; Majersik, Jennifer J; Sánchez, Brisa N; Ruano, Angel; Becerra-Mendoza, Daniela; Wing, Jeffrey J; Morgenstern, Lewis B

    2011-01-01

    Background and Purpose Stroke incidence and prevalence estimates in developing countries should include stroke cases not presenting to hospital. We performed door-to-door stroke case ascertainment in Durango Municipality, Mexico to estimate stroke incidence and prevalence, and to determine the error made by only ascertaining hospital cases. Methods Between September 2008 and March 2009, 1996 housing units were randomly sampled to screen for stroke in Durango Municipality residents ≥35 years of age. Field workers utilized a validated screening tool. Those screening positive were referred to a neurologist for history and examination and if confirmed, a head CT scan. Prevalence and cumulative incidence from the door-to-door surveillance were calculated and compared with previously reported hospitalization rates during the same defined time. Results Respondents included 2437 subjects from 1419 homes. The refusal rate was 3.8%. Twenty subjects had verified or probable stroke. The prevalence of probable or verified stroke was 7.7/1000 (95% CI: 4.3/1000, 11.2/1000). Five patients had a stroke during the time of the hospital surveillance, yielding a cumulative incidence of 232.3/100,000 (95% CI: 27.8, 436.9). Two of the 5 cases were captured by door-to-door surveillance but not by hospital surveillance. Conclusions This study provides the first community-based stroke prevalence and incidence estimates in Mexico. The wide confidence intervals, despite the large number of surveyed housing units, suggest the need for more advanced sampling strategies for stroke surveillance in the developing world. PMID:21212398

  2. Modeling habitat dynamics accounting for possible misclassification

    USGS Publications Warehouse

    Veran, Sophie; Kleiner, Kevin J.; Choquet, Remi; Collazo, Jaime; Nichols, James D.

    2012-01-01

    Land cover data are widely used in ecology because land cover change is a major component of changes affecting ecological systems. Landscape change estimates are characterized by classification errors. Researchers have used error matrices to adjust estimates of areal extent, but estimation of land cover change is more challenging because error in classification can be confused with true change. We modeled land cover dynamics for a discrete set of habitat states. The approach accounts for state uncertainty to produce unbiased estimates of habitat transition probabilities, using ground information to inform error rates. We consider the case when true and observed habitat states are available for the same geographic unit (pixel) and when true and observed states are obtained at one level of resolution but transition probabilities are estimated at a different level of resolution (aggregations of pixels). Simulation results showed a strong bias when estimating transition probabilities if misclassification was not accounted for. Scaling up does not necessarily decrease the bias and can even increase it. Analyses of land cover data in the Southeast region of the USA showed that land change patterns appeared distorted if misclassification was not accounted for: the rate of habitat turnover was artificially increased and habitat composition appeared more homogeneous. Not properly accounting for land cover misclassification can produce misleading inferences about habitat state and dynamics and also misleading predictions about species distributions based on habitat. Our models that explicitly account for state uncertainty should be useful in obtaining more accurate inferences about change from data that include errors.

  3. `Inverse Crime' and Model Integrity in Lightcurve Inversion applied to unresolved Space Object Identification

    NASA Astrophysics Data System (ADS)

    Henderson, Laura S.; Subbarao, Kamesh

    2017-12-01

    This work presents a case wherein the selection of models used to produce synthetic light curves affects the estimation of the size of unresolved space objects. Through this case, "inverse crime" (using the same model for the generation of synthetic data and for data inversion) is illustrated. This is done by using two models to produce the synthetic light curve and later invert it. It is shown here that the choice of model indeed affects the estimation of the shape/size parameters. When a higher fidelity model (henceforth, the one that results in the smallest error residuals after the crime is committed) is used both to create and to invert the light curve model, the estimates of the shape/size parameters are significantly better than those obtained when a lower fidelity model (in comparison) is implemented for the estimation. It is therefore of utmost importance to consider the choice of models when producing synthetic data that will later be inverted, as the results might be misleadingly optimistic.

  4. Explicit approximations to estimate the perturbative diffusivity in the presence of convectivity and damping. III. Cylindrical approximations for heat waves traveling inwards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berkel, M. van; Fellow of the Japan Society for the Promotion of Science; FOM Institute DIFFER-Dutch Institute for Fundamental Energy Research, Association EURATOM-FOM, Trilateral Euregio Cluster, P.O. Box 1207, 3430 BE Nieuwegein

    In this paper, a number of new explicit approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on the heat equation in cylindrical geometry using the symmetry (Neumann) boundary condition at the plasma center. This means that the approximations derived here should be used only to estimate transport coefficients between the plasma center and the off-axis perturbative source. If the effect of cylindrical geometry is small, it is also possible to use semi-infinite domain approximations presented in Part I and Part II of this series. A number of new approximations are derived in this part, Part III, based upon continued fractions of the modified Bessel function of the first kind and the confluent hypergeometric function of the first kind. These approximations together with the approximations based on semi-infinite domains are compared for heat waves traveling towards the center. The relative error for the different derived approximations is presented for different values of the frequency, transport coefficients, and dimensionless radius. Moreover, it is shown how combinations of different explicit formulas can be used to estimate the transport coefficients over a large parameter range for cases without convection and damping, cases with damping only, and cases with convection and damping. The relative error between the approximation and its underlying model is below 2% for the case where only diffusivity and damping are considered. If also convectivity is considered, the diffusivity can be estimated well in a large region, but there is also a large region in which no suitable approximation is found. This paper is the third part (Part III) of a series of three papers. In Part I, the semi-infinite slab approximations have been treated. In Part II, cylindrical approximations are treated for heat waves traveling towards the plasma edge assuming a semi-infinite domain.

  5. Evaluation of the accuracy of brain optical properties estimation at different ages using the frequency-domain multi-distance method

    NASA Astrophysics Data System (ADS)

    Dehaes, Mathieu; Grant, P. Ellen; Sliva, Danielle D.; Roche-Labarbe, Nadège; Pienaar, Rudolph; Boas, David A.; Franceschini, Maria Angela; Selb, Juliette

    2011-03-01

    NIRS is safe, non-invasive and offers the possibility to record local hemodynamic parameters at the bedside, avoiding the transportation of neonates and critically ill patients. In this work, we evaluate the accuracy of the frequency-domain multi-distance (FD-MD) method to retrieve brain optical properties from neonate to adult. Realistic measurements are simulated using 3D Monte Carlo modeling of light propagation. Eight different ages were investigated: a term newborn of 38 weeks gestational age, two infants of 6 and 12 months of age, a toddler of 2 years (yr) of age, two children of 5 and 10 yr of age, a teenager of 14 yr of age, and an adult. Measurements are generated at multiple distances on the right parietal area of the head models and fitted to a homogeneous FD-MD model to estimate the brain optical properties. In the newborn, infant, toddler and 5 yr old child models, the error was dominated by the head curvature, while it was dominated by the superficial layer in the 10 yr old child, teenager and adult heads. The influence of the CSF is also evaluated; in this case, the absorption coefficients suffer from an additional error. In all cases, measurements at 5 mm provided the worst estimates because of the breakdown of the diffusion approximation at such a short distance.

  6. An assessment of envelope-based demodulation in case of proximity of carrier and modulation frequencies

    NASA Astrophysics Data System (ADS)

    Shahriar, Md Rifat; Borghesani, Pietro; Randall, R. B.; Tan, Andy C. C.

    2017-11-01

    Demodulation is a necessary step in the field of diagnostics to reveal faults whose signatures appear as an amplitude and/or frequency modulation. The Hilbert transform has conventionally been used for the calculation of the analytic signal required in the demodulation process. However, the carrier and modulation frequencies must meet the conditions set by the Bedrosian identity for the Hilbert transform to be applicable for demodulation. This condition, which basically requires the carrier frequency to be sufficiently higher than the frequencies of the modulation harmonics, is usually satisfied in many traditional diagnostic applications (e.g. vibration analysis of gear and bearing faults) owing to the order-of-magnitude ratio between the carrier and modulation frequencies. However, the diversification of diagnostic approaches and applications shows cases (e.g. electrical signature analysis-based diagnostics) where the carrier frequency is in close proximity to the modulation frequency, thus challenging the applicability of the Bedrosian theorem. This work presents an analytic study to quantify the error introduced by Hilbert transform-based demodulation when the Bedrosian identity is not satisfied and proposes a mitigation strategy to combat the error. An experimental study is also carried out to verify the analytical results. The outcome of the error analysis sets a confidence limit on the estimated modulation (both shape and magnitude) obtained through Hilbert transform-based demodulation when the Bedrosian theorem is violated. The proposed mitigation strategy is found to be effective in combating the demodulation error arising in this scenario, thus extending the applicability of Hilbert transform-based demodulation.
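
    The effect can be illustrated with a short, hedged sketch (the frequencies below are made up, not taken from the study): the Hilbert-transform envelope is accurate when the carrier is well above the modulation frequency and degrades when the two are close, i.e. when the Bedrosian condition is violated.

    ```python
    # Hedged sketch with made-up frequencies: envelope recovery via the Hilbert
    # transform for a well-separated carrier versus a carrier close to the
    # modulation frequency (Bedrosian condition violated).
    import numpy as np
    from scipy.signal import hilbert

    fs = 10_000.0                                   # sampling rate (Hz)
    t = np.arange(0.0, 1.0, 1.0 / fs)
    f_mod = 20.0                                    # modulation frequency (Hz)
    true_env = 1.0 + 0.5 * np.cos(2 * np.pi * f_mod * t)

    for f_car in (1000.0, 35.0):                    # far from vs. close to f_mod
        x = true_env * np.cos(2 * np.pi * f_car * t)
        env = np.abs(hilbert(x))                    # analytic-signal envelope
        rms_err = np.sqrt(np.mean((env - true_env) ** 2))
        print(f"carrier {f_car:6.0f} Hz -> envelope RMS error {rms_err:.3f}")
    ```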

  7. Impact of fitting algorithms on errors of parameter estimates in dynamic contrast-enhanced MRI

    NASA Astrophysics Data System (ADS)

    Debus, C.; Floca, R.; Nörenberg, D.; Abdollahi, A.; Ingrisch, M.

    2017-12-01

    Parameter estimation in dynamic contrast-enhanced MRI (DCE MRI) is usually performed by non-linear least squares (NLLS) fitting of a pharmacokinetic model to a measured concentration-time curve. The two-compartment exchange model (2CXM) describes the compartments ‘plasma’ and ‘interstitial volume’ and their exchange in terms of plasma flow and capillary permeability. The model function can be defined by either a system of two coupled differential equations or a closed-form analytical solution. The aim of this study was to compare these two representations in terms of accuracy, robustness and computation speed, depending on parameter combination and temporal sampling. The impact on parameter estimation errors was investigated by fitting the 2CXM to simulated concentration-time curves. Parameter combinations representing five tissue types were used, together with two arterial input functions, one measured and one theoretical population-based, to generate 4D concentration images at three different temporal resolutions. Images were fitted by NLLS techniques, where the sum of squared residuals was calculated by either numeric integration with the Runge-Kutta method or convolution. Furthermore, two example cases, a prostate carcinoma and a glioblastoma multiforme patient, were analyzed in order to investigate the validity of our findings in real patient data. The convolution approach yields improved precision and robustness of the determined parameters. Precision and stability are limited in curves with low blood flow. The model parameter ve shows great instability and little reliability in all cases. Decreased temporal resolution results in significant errors for the differential equation approach in several curve types. The convolution approach excelled in computational speed by three orders of magnitude. Uncertainties in parameter estimation at low temporal resolution cannot be compensated for by use of the differential equations. Fitting with the convolution approach is superior in computational time, with better stability and accuracy at the same time.
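
    The convolution-based NLLS idea can be sketched as follows. This is a hedged, simplified illustration: the impulse response is written as a generic biexponential with amplitudes/rates (A1, k1, A2, k2), the arterial input function is synthetic, and the mapping of these parameters onto the physiological 2CXM parameters reported in the study is omitted.

    ```python
    # Hedged sketch: convolution-based NLLS fit of a biexponential tissue impulse
    # response to a simulated concentration-time curve. AIF, parameters and noise
    # level are hypothetical; mapping (A1, k1, A2, k2) to 2CXM parameters is omitted.
    import numpy as np
    from scipy.optimize import curve_fit

    dt = 2.0                                        # temporal resolution (s), assumed
    t = np.arange(0.0, 300.0, dt)
    aif = 5.0 * t * np.exp(-t / 12.0)               # synthetic arterial input function

    def model(t, A1, k1, A2, k2):
        irf = A1 * np.exp(-k1 * t) + A2 * np.exp(-k2 * t)     # tissue impulse response
        return np.convolve(aif, irf)[: t.size] * dt           # tissue concentration

    rng = np.random.default_rng(1)
    truth = (0.02, 0.05, 0.01, 0.005)
    data = model(t, *truth) + rng.normal(0.0, 0.5, t.size)    # noisy "measurement"

    popt, pcov = curve_fit(model, t, data, p0=(0.01, 0.1, 0.01, 0.01),
                           bounds=(0.0, np.inf))
    print("estimated parameters:", np.round(popt, 4))
    print("standard errors     :", np.round(np.sqrt(np.diag(pcov)), 4))
    ```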

  8. Estimating ages of white-tailed deer: Age and sex patterns of error using tooth wear-and-replacement and consistency of cementum annuli

    USGS Publications Warehouse

    Samuel, Michael D.; Storm, Daniel J.; Rolley, Robert E.; Beissel, Thomas; Richards, Bryan J.; Van Deelen, Timothy R.

    2014-01-01

    The age structure of harvested animals provides the basis for many demographic analyses. Ages of harvested white-tailed deer (Odocoileus virginianus) and other ungulates often are estimated by evaluating replacement and wear patterns of teeth, which is subjective and error-prone. Few previous studies, however, have examined age- and sex-specific error rates. Counting cementum annuli of incisors is an alternative, more accurate method of estimating age, but the factors that influence consistency of cementum annuli counts are poorly known. We estimated the age of 1,261 adult (≥1.5 yr old) white-tailed deer harvested in Wisconsin and Illinois (USA; 2005–2008) using both wear-and-replacement and cementum annuli. We compared cementum annuli with wear-and-replacement estimates to assess misclassification rates by sex and age. Wear-and-replacement for estimating ages of white-tailed deer resulted in substantial misclassification compared with cementum annuli. Age classes of females were consistently underestimated, while those of males were underestimated for younger age classes but overestimated for older age classes. Misclassification gave the impression of a younger age structure than was actually the case. Additionally, we obtained paired age estimates from cementum annuli for 295 deer. Consistency of paired cementum annuli age estimates decreased with age, was lower in females than males, and decreased as age estimates became less certain. Our results indicate that errors in the wear-and-replacement technique are substantial and could affect demographic analyses that use age-structure information.

  9. A New Stratified Sampling Procedure which Decreases Error Estimation of Varroa Mite Number on Sticky Boards.

    PubMed

    Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y

    2015-06-01

    A new procedure of stratified sampling is proposed in order to establish an accurate estimation of Varroa destructor populations on the sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends using regular grid stratification in the case of spatially structured processes. Because the distribution of varroa mites on the sticky board is observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are presented on the basis of a large sample of simulated sticky boards (n=20,000) which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in varroa mite number estimation is then measured by the percentage of counts with an error greater than a given level. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
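
    A rough, hypothetical sketch of the grid-based sampling idea (board size, mite count and circle radius are invented): mites are counted only inside circles centred on a regular grid and the total is scaled up by the sampled-area fraction.

    ```python
    # Hypothetical illustration of grid-based circle sampling on a sticky board:
    # the total mite count is estimated from counts inside circles on a regular grid.
    import numpy as np

    rng = np.random.default_rng(2)
    board_w, board_h = 50.0, 40.0                   # board size (cm), invented
    n_mites = 2000

    # Spatially structured fall pattern: mites cluster under hypothetical frames.
    frame_x = rng.choice(np.arange(5.0, 50.0, 5.0), size=n_mites)
    x = np.clip(frame_x + rng.normal(0.0, 2.5, n_mites), 0.0, board_w)
    y = rng.uniform(0.0, board_h, n_mites)

    # Regular grid of circular sub-samples.
    r = 2.0                                         # circle radius (cm)
    gx, gy = np.meshgrid(np.arange(2.5, board_w, 5.0), np.arange(2.5, board_h, 5.0))
    centres = np.column_stack([gx.ravel(), gy.ravel()])

    counts = [np.sum((x - cx) ** 2 + (y - cy) ** 2 <= r ** 2) for cx, cy in centres]
    sampled_area = len(centres) * np.pi * r ** 2
    estimate = np.sum(counts) * (board_w * board_h) / sampled_area

    print(f"true total: {n_mites}, grid-sample estimate: {estimate:.0f}")
    ```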

  10. Estimating Ocean Currents from Automatic Identification System Based Ship Drift Measurements

    NASA Astrophysics Data System (ADS)

    Jakub, Thomas D.

    Ship drift is a technique that has been used over the last century and a half to estimate ocean currents. Shortcomings of the ship drift technique include obtaining the data from multiple ships, the time delay in getting those ship positions to a data center for processing, and the limited resolution set by the amount of time between position measurements. These shortcomings can be overcome through the use of the Automatic Identification System (AIS). AIS enables more precise ocean current estimates, the option of finer resolution and more timely estimates. In this work, a demonstration of the use of AIS to compute ocean currents is performed. A corresponding error and sensitivity analysis is performed to help identify the conditions under which errors will be smaller. A case study in San Francisco Bay with constant AIS message updates was compared against high frequency radar and demonstrated ocean current magnitude residuals of 19 cm/s for ship tracks in a high signal-to-noise environment. These ship tracks were only minutes long, compared with the 12 to 24 hour tracks normally used. The Gulf of Mexico case study demonstrated the ability to estimate ocean currents over longer baselines and identified the dependency of the estimates on the accuracy of time measurements. Ultimately, AIS measurements when combined with ship drift can provide another method of estimating ocean currents, particularly when other measurement techniques are not available.
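
    The basic ship-drift computation can be sketched with made-up numbers: the current is taken as the difference between the over-ground velocity implied by two AIS position reports and the velocity the ship should have had through still water, given its heading and speed through the water.

    ```python
    # Hedged sketch with invented numbers: a ship-drift current estimate from two
    # AIS position reports plus the ship's heading and speed through the water.
    import numpy as np

    dt = 600.0                                      # time between AIS reports (s)
    lat1, lon1 = 37.8000, -122.4000                 # first report
    lat2, lon2 = 37.8228, -122.3930                 # second report, 10 minutes later
    heading_deg = 20.0                              # course steered through the water
    stw = 4.0                                       # speed through water (m/s)

    # Observed over-ground velocity on a local flat-Earth approximation.
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * np.cos(np.radians(lat1))
    v_obs = np.array([(lon2 - lon1) * m_per_deg_lon,
                      (lat2 - lat1) * m_per_deg_lat]) / dt      # (east, north) m/s

    # Velocity the ship would have had through still water.
    v_water = stw * np.array([np.sin(np.radians(heading_deg)),
                              np.cos(np.radians(heading_deg))])

    current = v_obs - v_water                       # residual drift attributed to current
    print(f"current (east, north) = {np.round(current, 2)} m/s, "
          f"speed = {np.linalg.norm(current):.2f} m/s")
    ```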

  11. A Computer Simulation Study of Coherent Optical Fibre Communication Systems

    NASA Astrophysics Data System (ADS)

    Urey, Zafer

    Available from UMI in association with The British Library. A computer simulation study of coherent optical fibre communication systems is presented in this thesis. The Wiener process is proposed as the simulation model of laser phase noise and verified to be a good one. This model is included in the simulation experiments along with the other noise sources (i.e. shot noise, thermal noise and laser intensity noise) and the models that represent the various waveform processing blocks in a system, such as filtering, demodulation, etc. A novel mixed-semianalytical simulation procedure is designed and successfully applied for the estimation of bit error rates as low as 10^-10. In this technique the noise processes and the ISI effects at the decision time are characterized from simulation experiments, but the calculation of the probability of error is obtained by numerically integrating the noise statistics over the error region using analytical expressions. Simulation of only 4096 bits is found to give estimates of BERs corresponding to received optical power within 1 dB of the theoretical calculations using this approach. This number is very small when compared with pure simulation techniques. Hence, the technique proves to be very efficient in terms of computation time and memory requirements. A command-driven simulation software package which runs on a DEC VAX computer under the UNIX operating system was written by the author and a series of simulation experiments was carried out using this software. In particular, the effects of IF filtering on the performance of PSK heterodyne receivers with synchronous demodulation are examined when both the phase noise and the shot noise are included in the simulations. The BER curves of this receiver are estimated for the first time for various cases of IF filtering using the mixed-semianalytical approach. At a power penalty of 1 dB, the IF linewidth requirement of this receiver with the matched filter is estimated to be less than 650 kHz at a modulation rate of 1 Gbps and a BER of 10^-9. The IF linewidth requirements for the other IF filtering cases are also estimated. The results are not found to be much different from the matched filter case. Therefore, it is concluded that IF filtering does not have any effect on the reduction of phase noise in PSK heterodyne systems with synchronous demodulation.
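
    The mixed-semianalytical idea can be sketched as follows (the receiver model, ISI term and noise level are hypothetical): a short simulation characterises the decision statistic for each transmitted bit value, and the error probability is then obtained from a Gaussian tail integral rather than by counting simulated errors, which would almost surely be zero in only 4096 bits.

    ```python
    # Hedged sketch of the mixed-semianalytical idea with a toy binary receiver:
    # characterise the decision statistic from a short simulation, then integrate
    # Gaussian tails analytically instead of counting (rare) simulated errors.
    import numpy as np
    from scipy.special import erfc

    rng = np.random.default_rng(3)
    n_bits = 4096                                   # short simulation run
    bits = rng.integers(0, 2, n_bits)

    # Hypothetical receiver: decision statistic = signal level + ISI + noise.
    signal = np.where(bits == 1, 1.0, -1.0)
    isi = 0.1 * np.roll(signal, 1)                  # crude intersymbol interference
    noise_sigma = 0.18
    z = signal + isi + noise_sigma * rng.normal(size=n_bits)

    # Semianalytical step: conditional mean/std estimated from the simulation,
    # Gaussian tail beyond the 0 threshold integrated analytically (Q via erfc).
    # (The full method characterises the ISI statistics more carefully.)
    ber = 0.0
    for b in (0, 1):
        zi = z[bits == b]
        mu, sigma = zi.mean(), zi.std(ddof=1)
        ber += 0.5 * 0.5 * erfc(abs(mu) / (np.sqrt(2.0) * sigma))

    print(f"semianalytical BER estimate: {ber:.2e}")   # far below 1/4096
    ```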

  12. Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; Smith, Mark S.

    2010-01-01

    Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.

  13. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

    Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and from baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
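
    A minimal sketch of the final step, with invented correlation times and amplitudes: each first-order Markov (Gauss-Markov) process is simulated as an AR(1) recursion and the five are summed; in the reference, the parameters would instead be tuned so that the summed power spectral density matches the one derived from the Allan variance.

    ```python
    # Minimal sketch with invented parameters: the clock error is approximated as
    # the sum of five first-order (Gauss-Markov) processes, each an AR(1) recursion.
    import numpy as np

    rng = np.random.default_rng(4)
    dt, n = 1.0, 50_000                             # time step (s), number of samples

    # Hypothetical correlation times and steady-state standard deviations; in the
    # reference these are chosen so the summed PSD matches the Allan-variance model.
    taus = np.array([1.0, 10.0, 100.0, 1_000.0, 10_000.0])          # s
    sigmas = np.array([1.0, 0.8, 0.6, 0.4, 0.3]) * 1e-9             # s (clock error)

    phi = np.exp(-dt / taus)                        # AR(1) coefficients
    q = sigmas * np.sqrt(1.0 - phi ** 2)            # driving-noise std per process

    x = np.zeros((n, taus.size))
    w = rng.normal(size=(n, taus.size)) * q
    for k in range(1, n):
        x[k] = phi * x[k - 1] + w[k]                # first-order Markov recursion

    clock_error = x.sum(axis=1)                     # summed process approximating the clock
    print(f"sample std of summed process: {clock_error.std():.2e} s")
    ```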

  14. Image-Guided Intraoperative Cortical Deformation Recovery Using Game Theory: Application to Neocortical Epilepsy Surgery

    PubMed Central

    DeLorenzo, Christine; Papademetris, Xenophon; Staib, Lawrence H.; Vives, Kenneth P.; Spencer, Dennis D.; Duncan, James S.

    2010-01-01

    During neurosurgery, nonrigid brain deformation prevents preoperatively-acquired images from accurately depicting the intraoperative brain. Stereo vision systems can be used to track intraoperative cortical surface deformation and update preoperative brain images in conjunction with a biomechanical model. However, these stereo systems are often plagued with calibration error, which can corrupt the deformation estimation. In order to decouple the effects of camera calibration from the surface deformation estimation, a framework that can solve for disparate and often competing variables is needed. Game theory, which was developed to handle decision making in this type of competitive environment, has been applied to various fields from economics to biology. In this paper, game theory is applied to cortical surface tracking during neocortical epilepsy surgery and used to infer information about the physical processes of brain surface deformation and image acquisition. The method is successfully applied to eight in vivo cases, resulting in an 81% decrease in mean surface displacement error. This includes a case in which some of the initial camera calibration parameters had errors of 70%. Additionally, the advantages of using a game theoretic approach in neocortical epilepsy surgery are clearly demonstrated in its robustness to initial conditions. PMID:20129844

  15. Adaptive Offset Correction for Intracortical Brain Computer Interfaces

    PubMed Central

    Homer, Mark L.; Perge, János A.; Black, Michael J.; Harrison, Matthew T.; Cash, Sydney S.; Hochberg, Leigh R.

    2014-01-01

    Intracortical brain computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user’s ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called MOCA, was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ±10.1%; p<0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs. PMID:24196868

  16. Adaptive offset correction for intracortical brain-computer interfaces.

    PubMed

    Homer, Mark L; Perge, Janos A; Black, Michael J; Harrison, Matthew T; Cash, Sydney S; Hochberg, Leigh R

    2014-03-01

    Intracortical brain-computer interfaces (iBCIs) decode intended movement from neural activity for the control of external devices such as a robotic arm. Standard approaches include a calibration phase to estimate decoding parameters. During iBCI operation, the statistical properties of the neural activity can depart from those observed during calibration, sometimes hindering a user's ability to control the iBCI. To address this problem, we adaptively correct the offset terms within a Kalman filter decoder via penalized maximum likelihood estimation. The approach can handle rapid shifts in neural signal behavior (on the order of seconds) and requires no knowledge of the intended movement. The algorithm, called multiple offset correction algorithm (MOCA), was tested using simulated neural activity and evaluated retrospectively using data collected from two people with tetraplegia operating an iBCI. In 19 clinical research test cases, where a nonadaptive Kalman filter yielded relatively high decoding errors, MOCA significantly reduced these errors (10.6 ± 10.1%; p < 0.05, pairwise t-test). MOCA did not significantly change the error in the remaining 23 cases where a nonadaptive Kalman filter already performed well. These results suggest that MOCA provides more robust decoding than the standard Kalman filter for iBCIs.

  17. Discrepancies in medication entries between anesthetic and pharmacy records using electronic databases.

    PubMed

    Vigoda, Michael M; Gencorelli, Frank J; Lubarsky, David A

    2007-10-01

    Accurate recording of the disposition of controlled substances is required by regulatory agencies. Linking anesthesia information management systems (AIMS) with medication dispensing systems may facilitate automated reconciliation of medication discrepancies. In this retrospective investigation at a large academic hospital, we reviewed 11,603 cases (spanning an 8-month period) comparing records of medications (i.e., narcotics, benzodiazepines, ketamine, and thiopental) recorded as removed from our automated medication dispensing system with medications recorded as administered in our AIMS. In 15% of cases, we found discrepancies between dispensed and administered medications. Discrepancies occurred in both the AIMS (8% of cases) and the medication dispensing system (10% of cases). Although there were many different types of user errors, nearly 75% of them resulted from either an error in the amount of drug waste documented in the medication dispensing system (35%) or an error in documenting the medication in the AIMS (40%). A significant percentage of cases contained data entry errors in both the automated dispensing system and the AIMS. This error rate limits the current practicality of automating the necessary reconciliation. An electronic interface between an AIMS and a medication dispensing system could alert users to medication entry errors prior to finalizing a case, thus reducing the time (and cost) of reconciling discrepancies.

  18. An alternative subspace approach to EEG dipole source localization

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
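
    The noise-subspace projection principle shared by MUSIC and FINES can be illustrated with a generic, hedged example on a simple uniform linear array (not an EEG lead field, and using the full noise subspace as in classic MUSIC rather than a FINES vector set):

    ```python
    # Hedged, generic illustration of the noise-subspace idea (classic MUSIC on a
    # 1-D uniform linear array, not an EEG lead field or the FINES vector set).
    import numpy as np
    from scipy.signal import find_peaks

    rng = np.random.default_rng(5)
    m, n_snap = 8, 400                              # sensors, snapshots
    d = 0.5                                         # element spacing (wavelengths)
    true_doas = np.array([20.0, 30.0])              # two closely spaced sources (deg)

    def steering(theta_deg):
        k = 2.0 * np.pi * d * np.sin(np.radians(theta_deg))
        return np.exp(1j * k * np.arange(m))

    A = np.column_stack([steering(t) for t in true_doas])
    S = (rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))) / np.sqrt(2)
    N = 0.2 * (rng.normal(size=(m, n_snap)) + 1j * rng.normal(size=(m, n_snap))) / np.sqrt(2)
    X = A @ S + N

    R = X @ X.conj().T / n_snap                     # sample covariance
    w, V = np.linalg.eigh(R)                        # eigenvalues in ascending order
    En = V[:, : m - 2]                              # estimated noise-only subspace

    grid = np.arange(0.0, 90.0, 0.25)
    spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                         for t in grid])            # peaks where steering is near-orthogonal
                                                    # to the noise subspace

    peaks, _ = find_peaks(spectrum)
    top = peaks[np.argsort(spectrum[peaks])[-2:]]
    print("estimated DOAs (deg):", np.sort(grid[top]))
    ```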

  19. Catastrophic photometric redshift errors: Weak-lensing survey requirements

    DOE PAGES

    Bernstein, Gary; Huterer, Dragan

    2010-01-11

    We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number N_spec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below the statistical errors of weak lensing tomography. While the straightforward estimate of N_spec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in N_spec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of the spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the z_s – z_p distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. In conclusion, the cross-correlation method is unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.

  20. Causal Inference for fMRI Time Series Data with Systematic Errors of Measurement in a Balanced On/Off Study of Social Evaluative Threat.

    PubMed

    Sobel, Michael E; Lindquist, Martin A

    2014-07-01

    Functional magnetic resonance imaging (fMRI) has facilitated major advances in understanding human brain function. Neuroscientists are interested in using fMRI to study the effects of external stimuli on brain activity and causal relationships among brain regions, but have not stated what is meant by causation or defined the effects they purport to estimate. Building on Rubin's causal model, we construct a framework for causal inference using blood oxygenation level dependent (BOLD) fMRI time series data. In the usual statistical literature on causal inference, potential outcomes, assumed to be measured without systematic error, are used to define unit and average causal effects. However, in general the potential BOLD responses are measured with stimulus dependent systematic error. Thus we define unit and average causal effects that are free of systematic error. In contrast to the usual case of a randomized experiment, where adjustment for intermediate outcomes leads to biased estimates of treatment effects (Rosenbaum, 1984), here the failure to adjust for task dependent systematic error leads to biased estimates. We therefore adjust for systematic error using measured "noise covariates", employing a linear mixed model to estimate the effects and the systematic error. Our results are important for neuroscientists, who typically do not adjust for systematic error. They should also prove useful to researchers in other areas where responses are measured with error and in fields where large amounts of data are collected on relatively few subjects. To illustrate our approach, we re-analyze data from a social evaluative threat task, comparing the findings with results that ignore systematic error.

  1. The Vulnerability of People to Landslides: A Case Study on the Relationship between the Casualties and Volume of Landslides in China.

    PubMed

    Lin, Qigen; Wang, Ying; Liu, Tianxue; Zhu, Yingqi; Sui, Qi

    2017-02-21

    The lack of a detailed landslide inventory makes research on the vulnerability of people to landslides highly limited. In this paper, the authors collected information on the landslides that have caused casualties in China and established the Landslides Casualties Inventory of China. 100 landslide cases from 2003 to 2012 were utilized to develop an empirical relationship between the volume of a landslide event and the casualties caused by the occurrence of the event. Error bars were used to describe the uncertainty of casualties resulting from landslides and to establish a threshold curve of casualties caused by landslides in China. The threshold curve was then applied to the landslide cases that occurred in 2013 and 2014. The validation results show that the estimated casualties from the threshold curve were in good agreement with the real casualties, with a small deviation. Therefore, the threshold curve can be used for estimating potential casualties and landslide vulnerability, which is meaningful for emergency rescue operations after landslides occur and for risk assessment research.
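
    A volume-casualty relationship of this kind is typically obtained by linear regression in log-log space; the sketch below uses invented (volume, casualties) pairs purely to illustrate the fitting step, not the inventory data.

    ```python
    # Hedged sketch with invented (volume, casualties) pairs: a power-law relation
    # fitted by linear regression in log-log space.
    import numpy as np

    volume = np.array([5e2, 2e3, 8e3, 3e4, 1e5, 6e5, 2e6, 1e7])     # m^3, invented
    casualties = np.array([1, 2, 4, 6, 15, 30, 70, 180])            # invented

    slope, intercept = np.polyfit(np.log10(volume), np.log10(casualties), 1)

    def predicted_casualties(v_m3):
        """Central estimate; the scatter of the residuals would set the error bars."""
        return 10.0 ** (intercept + slope * np.log10(v_m3))

    print(f"casualties ~ {10.0**intercept:.3f} * V^{slope:.2f}")
    print("predicted casualties for a 5e5 m^3 landslide:",
          round(predicted_casualties(5e5)))
    ```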

  2. The Vulnerability of People to Landslides: A Case Study on the Relationship between the Casualties and Volume of Landslides in China

    PubMed Central

    Lin, Qigen; Wang, Ying; Liu, Tianxue; Zhu, Yingqi; Sui, Qi

    2017-01-01

    The lack of a detailed landslide inventory makes research on the vulnerability of people to landslides highly limited. In this paper, the authors collected information on the landslides that have caused casualties in China and established the Landslides Casualties Inventory of China. 100 landslide cases from 2003 to 2012 were utilized to develop an empirical relationship between the volume of a landslide event and the casualties caused by the occurrence of the event. Error bars were used to describe the uncertainty of casualties resulting from landslides and to establish a threshold curve of casualties caused by landslides in China. The threshold curve was then applied to the landslide cases that occurred in 2013 and 2014. The validation results show that the estimated casualties from the threshold curve were in good agreement with the real casualties, with a small deviation. Therefore, the threshold curve can be used for estimating potential casualties and landslide vulnerability, which is meaningful for emergency rescue operations after landslides occur and for risk assessment research. PMID:28230810

  3. Improving precision of forage yield trials: A case study

    USDA-ARS?s Scientific Manuscript database

    Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to several facto...

  4. Exploring Situational Awareness in Diagnostic Errors in Primary Care

    PubMed Central

    Singh, Hardeep; Giardina, Traber Davis; Petersen, Laura A.; Smith, Michael; Wilson, Lindsey; Dismukes, Key; Bhagwath, Gayathri; Thomas, Eric J.

    2013-01-01

    Objective Diagnostic errors in primary care are harmful but poorly studied. To facilitate understanding of diagnostic errors in real-world primary care settings using electronic health records (EHRs), this study explored the use of the Situational Awareness (SA) framework from aviation human factors research. Methods A mixed-methods study was conducted involving reviews of EHR data followed by semi-structured interviews of selected providers from two institutions in the US. The study population included 380 consecutive patients with colorectal and lung cancers diagnosed between February 2008 and January 2009. Using a pre-tested data collection instrument, trained physicians identified diagnostic errors, defined as lack of timely action on one or more established indications for diagnostic work-up for lung and colorectal cancers. Twenty-six providers involved in cases with and without errors were interviewed. Interviews probed for providers' lack of SA and how this may have influenced the diagnostic process. Results Of 254 cases meeting inclusion criteria, errors were found in 30 (32.6%) of 92 lung cancer cases and 56 (33.5%) of 167 colorectal cancer cases. Analysis of interviews related to error cases revealed evidence of lack of one of four levels of SA applicable to primary care practice: information perception, information comprehension, forecasting future events, and choosing appropriate action based on the first three levels. In cases without error, the application of the SA framework provided insight into processes involved in attention management. Conclusions A framework of SA can help analyze and understand diagnostic errors in primary care settings that use EHRs. PMID:21890757

  5. A Compact VLSI System for Bio-Inspired Visual Motion Estimation.

    PubMed

    Shi, Cong; Luo, Gang

    2018-04-01

    This paper proposes a bio-inspired visual motion estimation algorithm based on motion energy, along with its compact very-large-scale integration (VLSI) architecture using low-cost embedded systems. The algorithm mimics motion perception functions of retina, V1, and MT neurons in a primate visual system. It involves operations of ternary edge extraction, spatiotemporal filtering, motion energy extraction, and velocity integration. Moreover, we propose the concept of confidence map to indicate the reliability of estimation results on each probing location. Our algorithm involves only additions and multiplications during runtime, which is suitable for low-cost hardware implementation. The proposed VLSI architecture employs multiple (frame, pixel, and operation) levels of pipeline and massively parallel processing arrays to boost the system performance. The array unit circuits are optimized to minimize hardware resource consumption. We have prototyped the proposed architecture on a low-cost field-programmable gate array platform (Zynq 7020) running at 53-MHz clock frequency. It achieved 30-frame/s real-time performance for velocity estimation on 160 × 120 probing locations. A comprehensive evaluation experiment showed that the estimated velocity by our prototype has relatively small errors (average endpoint error < 0.5 pixel and angular error < 10°) for most motion cases.

  6. Estimation of Right-Lobe Graft Weight From Computed Tomographic Volumetry for Living Donor Liver Transplantation.

    PubMed

    Yang, X; Chu, C W; Yang, J D; Yang, K H; Yu, H C; Cho, B H; You, H

    2017-03-01

    The objective of the study was to establish a right-lobe graft weight (GW) estimation formula for living donor liver transplantation (LDLT) from the right-lobe graft volume without veins (GV_w/o_veins), where the excluded veins include the portal vein and hepatic vein, measured by computed tomographic (CT) volumetry, and to compare its estimation accuracy with those of existing formulas. Right-lobe GW estimation formulas established with the use of graft volume with veins (GV_w_veins) sacrifice accuracy because the GW measured intra-operatively excludes the weight of blood in the veins. Right-lobe GW estimation formulas have been established with the use of right-lobe GV_w/o_veins, but a more accurate formula must be developed. The present study developed right-lobe GW estimation formulas based on GV_w/o_veins as well as GV_w_veins, using 40 cases of Korean donors: GW = 29.1 + 0.943 × GV_w/o_veins (adjusted R² = 0.94) and GW = 74.7 + 0.773 × GV_w_veins (adjusted R² = 0.87). The proposed GW estimation formulas were compared with existing GV_w_veins- and GV_w/o_veins-based models, using 43 cases additionally obtained from two medical centers for cross-validation. The GV_w/o_veins-based formula developed in the present study was most preferred (absolute error = 21.5 ± 16.5 g and percentage of absolute error = 3.0 ± 2.3%). The GV_w/o_veins-based formula is preferred to the GV_w_veins-based formula in GW estimation. Accurate CT volumetry and alignment between planned and actual surgical cutting lines are crucial in the establishment of a better GW estimation formula. Copyright © 2016 Elsevier Inc. All rights reserved.
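
    Applying the reported formulas is a one-line computation; the helpers below simply encode the two regression equations quoted in the abstract (the 700 mL input is an arbitrary example value, not a case from the study).

    ```python
    # The two regression formulas quoted above, wrapped as helper functions.
    def gw_from_gv_without_veins(gv_ml: float) -> float:
        """Right-lobe graft weight (g) from CT graft volume excluding veins (mL)."""
        return 29.1 + 0.943 * gv_ml

    def gw_from_gv_with_veins(gv_ml: float) -> float:
        """Right-lobe graft weight (g) from CT graft volume including veins (mL)."""
        return 74.7 + 0.773 * gv_ml

    # Arbitrary example volume (700 mL), not a value from the study.
    print(f"estimated GW: {gw_from_gv_without_veins(700.0):.0f} g")
    ```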

  7. Estimation of particulate nutrient load using turbidity meter.

    PubMed

    Yamamoto, K; Suetsugi, T

    2006-01-01

    The "Nutrient Load Hysteresis Coefficient" was proposed to evaluate the hysteresis of the nutrient loads to flow rate quantitatively. This could classify the runoff patterns of nutrient load into 15 patterns. Linear relationships between the turbidity and the concentrations of particulate nutrients were observed. It was clarified that the linearity was caused by the influence of the particle size on turbidity output and accumulation of nutrients on smaller particles (diameter < 23 microm). The L-Q-Turb method, which is a new method for the estimation of runoff loads of nutrients using a regression curve between the turbidity and the concentrations of particulate nutrients, was developed. This method could raise the precision of the estimation of nutrient loads even if they had strong hysteresis to flow rate. For example, as for the runoff load of total phosphorus load on flood events in a total of eight cases, the averaged error of estimation of total phosphorus load by the L-Q-Turb method was 11%, whereas the averaged estimation error by the regression curve between flow rate and nutrient load was 28%.

  8. A Galerkin approximation for linear elastic shallow shells

    NASA Astrophysics Data System (ADS)

    Figueiredo, I. N.; Trabucho, L.

    1992-03-01

    This work is a generalization to shallow shell models of previous results for plates by B. Miara (1989). Using the same basis functions as in the plate case, we construct a Galerkin approximation of the three-dimensional linearized elasticity problem, and establish some error estimates as a function of the thickness, the curvature, the geometry of the shell, the forces and the Lamé constants.

  9. Global cost of correcting vision impairment from uncorrected refractive error.

    PubMed

    Fricke, T R; Holden, B A; Wilson, D A; Schlenther, G; Naidoo, K S; Resnikoff, S; Frick, K D

    2012-10-01

    To estimate the global cost of establishing and operating the educational and refractive care facilities required to provide care to all individuals who currently have vision impairment resulting from uncorrected refractive error (URE). The global cost of correcting URE was estimated using data on the population, the prevalence of URE and the number of existing refractive care practitioners in individual countries, the cost of establishing and operating educational programmes for practitioners and the cost of establishing and operating refractive care facilities. The assumptions made ensured that costs were not underestimated and an upper limit to the costs was derived using the most expensive extreme for each assumption. There were an estimated 158 million cases of distance vision impairment and 544 million cases of near vision impairment caused by URE worldwide in 2007. Approximately 47 000 additional full-time functional clinical refractionists and 18 000 ophthalmic dispensers would be required to provide refractive care services for these individuals. The global cost of educating the additional personnel and of establishing, maintaining and operating the refractive care facilities needed was estimated to be around 20 000 million United States dollars (US$) and the upper-limit cost was US$ 28 000 million. The estimated loss in global gross domestic product due to distance vision impairment caused by URE was US$ 202 000 million annually. The cost of establishing and operating the educational and refractive care facilities required to deal with vision impairment resulting from URE was a small proportion of the global loss in productivity associated with that vision impairment.

  10. An Improved Unsupervised Image Segmentation Evaluation Approach Based on Under- and Over-Segmentation Aware

    NASA Astrophysics Data System (ADS)

    Su, Tengfei

    2018-04-01

    In this paper, an unsupervised evaluation scheme for remote sensing image segmentation is developed. The new approach builds on a method called under- and over-segmentation aware (UOA) and improves it by overcoming a defect in the part that estimates over-segmentation error. Two cases in which this error-prone defect arises are listed, and edge strength is employed to devise a solution to the issue. Two subsets of high resolution remote sensing images were used to test the proposed algorithm, and the experimental results indicate its superior performance, which is attributed to its improved over-segmentation error (OSE) detection model.

  11. The modal surface interpolation method for damage localization

    NASA Astrophysics Data System (ADS)

    Pina Limongelli, Maria

    2017-05-01

    The Interpolation Method (IM) has been previously proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in the original and in the damaged configurations. If the latter circumstance fails to occur, for example when the structure is subjected to unknown input(s) or when the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is herein investigated. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequency values, hence the relevant ODSs are estimated with higher reliability. Several methods have been proposed to reliably estimate modal shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. In order to reduce or eliminate the drawbacks related to the estimation of the ODSs in the case of noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error relevant only to the modal shapes, rather than to all the operational shapes in the significant frequency range. The comparison between the results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) and in the newly proposed version (with the estimation of the interpolation error limited to the modal shapes) is reported herein.
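
    The interpolation-error damage feature can be sketched on a synthetic mode shape (the beam length, sensor spacing and size of the local perturbation are invented): each sensor value is re-predicted from a spline through the remaining sensors, and the increase of this error with respect to the pristine baseline points to the damage location.

    ```python
    # Hedged sketch of the interpolation-error feature on a synthetic mode shape:
    # each sensor value is re-predicted by a cubic spline through the other sensors.
    import numpy as np
    from scipy.interpolate import CubicSpline

    x = np.linspace(0.0, 10.0, 21)                  # sensor positions (arbitrary units)
    mode = np.sin(np.pi * x / 10.0)                 # pristine first mode shape
    damaged = mode.copy()
    damaged[12] -= 0.02                             # local smoothness reduction near x = 6

    def interp_error(shape):
        err = np.zeros_like(shape)
        for i in range(1, shape.size - 1):          # leave-one-out at interior sensors
            keep = np.ones(shape.size, dtype=bool)
            keep[i] = False
            spline = CubicSpline(x[keep], shape[keep])
            err[i] = abs(spline(x[i]) - shape[i])
        return err

    feature = interp_error(damaged) - interp_error(mode)   # increase over the baseline
    print("most likely damage location: x =", x[np.argmax(feature)])
    ```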

  12. Correcting for deformation in skin-based marker systems.

    PubMed

    Alexander, E J; Andriacchi, T P

    2001-03-01

    A new technique is described that reduces error due to skin movement artifact in the opto-electronic measurement of in vivo skeletal motion. This work builds on a previously described point cluster technique marker set and estimation algorithm by extending the transformation equations to the general deformation case using a set of activity-dependent deformation models. Skin deformation during activities of daily living is modeled as consisting of a functional form defined over the observation interval (the deformation model) plus additive noise (modeling error). The method is described as an interval deformation technique. The method was tested using simulation trials with systematic and random components of deformation error introduced into the marker position vectors. The technique was found to substantially outperform methods that require rigid-body assumptions. The method was tested in vivo on a patient fitted with an external fixation device (Ilizarov). Simultaneous measurements from markers placed on the Ilizarov device (fixed to bone) were compared to measurements derived from skin-based markers. The interval deformation technique reduced the errors in the limb segment pose estimate by 33 and 25% compared to the classic rigid-body technique for position and orientation, respectively. This newly developed method has demonstrated that by accounting for the changing shape of the limb segment, a substantial improvement in the estimates of in vivo skeletal movement can be achieved.

  13. SU-E-J-243: Possibility of Exposure Dose Reduction of Cone-Beam Computed Tomography in An Image Guided Patient Positioning System by Using Various Noise Suppression Filters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamezawa, H; Fujimoto General Hospital, Miyakonojo, Miyazaki; Arimura, H

    Purpose: To investigate the possibility of exposure dose reduction of the cone-beam computed tomography (CBCT) in an image guided patient positioning system by using 6 noise suppression filters. Methods: First, reference dose (RD) and low-dose (LD)-CBCT (X-ray volume imaging system, Elekta Co.) images were acquired with a reference dose of 86.2 mGy (weighted CT dose index: CTDIw) and various low doses of 1.4 to 43.1 mGy, respectively. Second, an automated rigid registration for three axes was performed for estimating setup errors between a planning CT image and the LD-CBCT images, which were processed by 6 noise suppression filters, i.e., averaging filter (AF), median filter (MF), Gaussian filter (GF), bilateral filter (BF), edge preserving smoothing filter (EPF) and adaptive partial median filter (AMF). Third, residual errors representing the patient positioning accuracy were calculated as a Euclidean distance between the setup error vectors estimated using the LD-CBCT image and the RD-CBCT image. Finally, the relationships between the residual error and CTDIw were obtained for the 6 noise suppression filters, and then the CTDIw values for LD-CBCT images processed by the noise suppression filters were measured at the same residual error as obtained with the RD-CBCT. This approach was applied to an anthropomorphic pelvic phantom and two cancer patients. Results: For the phantom, the exposure dose could be reduced from 61% (GF) to 78% (AMF) by applying the noise suppression filters to the CBCT images. The exposure dose in a prostate cancer case could be reduced from 8% (AF) to 61% (AMF), and the exposure dose in a lung cancer case could be reduced from 9% (AF) to 37% (AMF). Conclusion: Using noise suppression filters, particularly an adaptive partial median filter, could be feasible to decrease the additional exposure dose to patients in image guided patient positioning systems.

  14. Parallel Processing of Broad-Band PPM Signals

    NASA Technical Reports Server (NTRS)

    Gray, Andrew; Kang, Edward; Lay, Norman; Vilnrotter, Victor; Srinivasan, Meera; Lee, Clement

    2010-01-01

    A parallel-processing algorithm and a hardware architecture to implement the algorithm have been devised for timeslot synchronization in the reception of pulse-position-modulated (PPM) optical or radio signals. As in the cases of some prior algorithms and architectures for parallel, discrete-time, digital processing of signals other than PPM, an incoming broadband signal is divided into multiple parallel narrower-band signals by means of sub-sampling and filtering. The number of parallel streams is chosen so that the frequency content of the narrower-band signals is low enough to enable processing by relatively-low speed complementary metal oxide semiconductor (CMOS) electronic circuitry. The algorithm and architecture are intended to satisfy requirements for time-varying time-slot synchronization and post-detection filtering, with correction of timing errors independent of estimation of timing errors. They are also intended to afford flexibility for dynamic reconfiguration and upgrading. The architecture is implemented in a reconfigurable CMOS processor in the form of a field-programmable gate array. The algorithm and its hardware implementation incorporate three separate time-varying filter banks for three distinct functions: correction of sub-sample timing errors, post-detection filtering, and post-detection estimation of timing errors. The design of the filter bank for correction of timing errors, the method of estimating timing errors, and the design of a feedback-loop filter are governed by a host of parameters, the most critical one, with regard to processing very broadband signals with CMOS hardware, being the number of parallel streams (equivalently, the rate-reduction parameter).

  15. Impact of miscommunication in medical dispute cases in Japan.

    PubMed

    Aoki, Noriaki; Uda, Kenji; Ohta, Sachiko; Kiuchi, Takahiro; Fukui, Tsuguya

    2008-10-01

    Medical disputes between physicians and patients can occur in non-negligent circumstances and may even result in compensation. We reviewed medical dispute cases to investigate the impact of miscommunication, especially in non-negligent situations. Systematic review of medical dispute records was done to identify the presence of the adverse events, the type of medical error, preventability, the perception of miscommunication by patients and the amount of compensation. The study was performed in Kyoto, Japan. We analyzed 155 medical dispute cases. We compared (i) frequency of miscommunication cases between negligent and non-negligent cases, and (ii) proportions of positive compensation between non-miscommunication and miscommunication cases stratified according to the existence of negligence. Multivariate logistic analysis was conducted to assess the independent factors related to positive compensation. Approximately 40% of the medical disputes (59/155) did not involve medical error (i.e. non-negligent). In the non-negligent cases, 64.4% (38/59) involved miscommunication, whereas in dispute cases with errors, 21.9% (21/96) involved miscommunications. (P

  16. Peak flow estimation in ungauged basins by means of water level data analysis

    NASA Astrophysics Data System (ADS)

    Corato, G.; Moramarco, T.; Tucciarelli, T.

    2009-04-01

    Discharge hydrograph estimation in rivers is usually carried out by means of water level measurements and the use of a water depth - discharge relationship. The water depth - discharge curve is obtained by integrating local velocities measured in a given section at specified water depth values. Building up such a curve is very expensive and very often the highest points, used for the peak flow estimation, are the result of rough extrapolation of points corresponding to much lower water depths. Recently, discharge estimation methodologies based only on the analysis of synchronous water level data recorded in two different river sections located some kilometers apart have been developed. These methodologies are based only on the analysis of the water levels, the knowledge of the river bed elevations within the two sections, and the use of a diffusive flow routing numerical model. The bed roughness estimation, in terms of the average Manning coefficient, is carried out along with the discharge hydrograph estimation. The 1D flow routing model is given by the following Saint Venant equations, simplified according to the diffusive hypothesis: ∂σ/∂t + ∂q/∂x = 0 (1) and ∂h/∂x + (Sf − S0) = 0 (2), where q(x,t) is the discharge, h(x,t) is the water depth, Sf is the energy slope and S0 is the bed slope. The energy slope is related to the average Manning coefficient n by the Chezy relationship Sf = q²n²/(σ²ℜ^(4/3)) (3), where ℜ is the hydraulic radius and σ the flow area of the river section. The upstream boundary condition of the flow routing model is given by the measured upstream water level hydrograph. The computational domain is extended some kilometers downstream of the second measurement section and the downstream boundary condition is properly approximated. This avoids the use of the downstream measured data for the solution of the system (1)-(3) and limits the model error even in the case of subcritical flow. The optimal average Manning coefficient is obtained by fitting the water level data available in the downstream measurement section with the computed ones. The optimal discharge hydrograph estimated in the upstream measurement section is given by the function q(0,t) computed in the first section (where x = 0) using the optimal Manning coefficient. Two different fitting quality criteria are compared and their practical implications are discussed; the first one is the equality of the computed and the measured peak time lag between the first and the second measurement section; the second one is the minimization of the total square error between the measured and the computed downstream water level hydrographs. The uniqueness and identifiability properties of the associated inverse problem are analyzed, and a model error analysis is carried out addressing the most relevant sources of error arising from the adopted approximations. Three case studies previously used for the validation of the proposed methodology are reviewed. The first two are water level hydrographs collected in two sections of the Arno river (Tuscany, Italy) and the Tiber river (Umbria, Italy). Water level and discharge hydrographs recorded during many storm events were available in both cases. The optimal average Manning coefficient has been estimated in both cases using the data of a single event, properly selected among all the available ones. In the third case, concerning historical data collected in a small tributary of the Tanagro river (Campania, Italy), three water level hydrographs were measured in three different sections of the channel.
This allowed the discharge estimation to be carried out using the data collected in only two of the three sections, with the data of the third one used for validation. The results obtained in the three test cases highlight the advantages and the limits of the adopted analysis. The advantage is the simplicity of the hardware required for the data acquisition, which can easily be performed continuously in time, even during very bad weather conditions, and operated by remote control. A first limit is the assumption of negligible inflow between the two measurement sections. Because the distance between the two sections must be large enough to measure the time lag between the two hydrographs, this limit can make the selection of the measurement sections difficult. A second limit is the real heterogeneity of the bed roughness, which yields a water level hydrograph shape different from the computed one. Preliminary results of a new, multiparametric data analysis are finally presented.

  17. The Significance of the Record Length in Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Senarath, S. U.

    2013-12-01

    Of all potential natural hazards, floods are the most costly in many regions of the world. For example, floods cause over a third of Europe's average annual catastrophe losses and affect about two thirds of the people impacted by natural catastrophes. Increased attention is being paid to determining flow estimates associated with pre-specified return periods so that flood-prone areas can be adequately protected against floods of particular magnitudes or return periods. Flood frequency analysis, which is conducted by fitting an appropriate probability density function to the observed annual maximum flow data, is frequently used for obtaining these flow estimates. Consequently, flood frequency analysis plays an integral role in determining the flood risk in flood-prone watersheds. A long annual maximum flow record is vital for obtaining accurate estimates of discharges associated with high return period flows. However, in many areas of the world, flood frequency analysis is conducted with limited flow data or short annual maximum flow records. These inevitably lead to flow estimates that are subject to error. This is especially the case with high return period flow estimates. In this study, several statistical techniques are used to identify errors caused by short annual maximum flow records. The flow estimates used in the error analysis are obtained by fitting a log-Pearson III distribution to the flood time series. These errors can then be used to better evaluate the return period flows in data-limited streams. The study findings, therefore, have important implications for hydrologists, water resources engineers and floodplain managers.
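
    A hedged sketch of the fitting step (synthetic record, and maximum-likelihood fitting via scipy for brevity, whereas standard guidelines normally use a method-of-moments procedure on the log flows): a log-Pearson III distribution is fitted to a 30-year annual maximum series and the 100-year flow is read off and compared with the value implied by the assumed population.

    ```python
    # Hedged sketch: log-Pearson III fit to a synthetic 30-year annual maximum
    # series (MLE via scipy for brevity; guidelines normally use method of moments).
    import numpy as np
    from scipy import stats

    # Hypothetical "true" population in log10(flow) space.
    true_skew, true_loc, true_scale = 0.2, 2.8, 0.25
    record = 10.0 ** stats.pearson3.rvs(true_skew, loc=true_loc, scale=true_scale,
                                        size=30, random_state=7)

    log_q = np.log10(record)
    skew, loc, scale = stats.pearson3.fit(log_q)          # fit in log space
    q100 = 10.0 ** stats.pearson3.ppf(0.99, skew, loc=loc, scale=scale)
    q100_true = 10.0 ** stats.pearson3.ppf(0.99, true_skew, loc=true_loc,
                                           scale=true_scale)

    print(f"100-year flow from the 30-year record : {q100:.0f}")
    print(f"100-year flow from the true population: {q100_true:.0f}")
    ```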

  18. Valid statistical inference methods for a case-control study with missing data.

    PubMed

    Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun

    2018-04-01

    The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion by the Wald test for testing independence under the two existing sampling distributions could be completely different (even contradictory) from the Wald test for testing the equality of the success probabilities in control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.

  19. Incorporating approximation error in surrogate based Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.; Li, W.; Wu, L.

    2015-12-01

    There is increasing interest in applying surrogates in inverse Bayesian modeling to reduce repetitive evaluations of the original model, so that computational cost can be saved. However, the approximation error of the surrogate model is usually overlooked, partly because it is difficult to evaluate for many surrogates. Previous studies have shown that the direct combination of surrogates and Bayesian methods (e.g., Markov Chain Monte Carlo, MCMC) may lead to biased estimations when the surrogate cannot emulate the highly nonlinear original system. This problem can be alleviated by implementing MCMC in a two-stage manner, but the computational cost remains high since a relatively large number of original model simulations are still required. In this study, we illustrate the importance of incorporating approximation error in inverse Bayesian modeling. A Gaussian process (GP) is chosen to construct the surrogate for its convenience in approximation error evaluation. Numerical cases of Bayesian experimental design and parameter estimation for contaminant source identification are used to illustrate this idea. It is shown that, once the surrogate approximation error is well incorporated into the Bayesian framework, promising results can be obtained even when the surrogate is used directly, and no further original model simulations are required.
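    A minimal sketch of the idea, assuming a one-parameter toy forward model: the GP's predictive standard deviation is added to the observation-error variance inside the likelihood, so the sampler automatically discounts regions where the surrogate is poor. Function names and numbers are illustrative, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)

    # Hypothetical 1-D forward model (e.g. a concentration response to a source parameter).
    def forward(theta):
        return np.sin(3.0 * theta) + 0.5 * theta

    theta_train = rng.uniform(0, 2, size=12).reshape(-1, 1)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-8)
    gp.fit(theta_train, forward(theta_train).ravel())

    obs = forward(np.array([1.3])) + rng.normal(0, 0.05)   # noisy observation
    sigma_obs = 0.05

    def log_likelihood(theta):
        mu, gp_std = gp.predict(np.atleast_2d(theta), return_std=True)
        # Key step: inflate the data-error variance by the surrogate's own
        # approximation variance instead of treating the surrogate as exact.
        var = sigma_obs**2 + gp_std[0]**2
        return -0.5 * ((obs[0] - mu[0])**2 / var + np.log(2 * np.pi * var))

    # Simple Metropolis sampler over theta (uniform prior on [0, 2]) with the error-aware likelihood.
    theta, chain = 1.0, []
    for _ in range(5000):
        prop = theta + rng.normal(0, 0.1)
        if 0.0 <= prop <= 2.0 and np.log(rng.uniform()) < log_likelihood(prop) - log_likelihood(theta):
            theta = prop
        chain.append(theta)
    print("posterior mean theta ~", np.mean(chain[1000:]))
    ```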

  20. Are volcanic seismic b-values high, and if so when?

    NASA Astrophysics Data System (ADS)

    Roberts, Nick S.; Bell, Andrew F.; Main, Ian G.

    2015-12-01

    The Gutenberg-Richter exponent b is a measure of the relative proportion of large and small earthquakes. It is commonly used to infer material properties such as heterogeneity, or mechanical properties such as the state of stress from earthquake populations. It is 'well known' that the b-value tends to be high or very high for volcanic earthquake populations relative to b = 1 for those of tectonic earthquakes, and that b varies significantly with time during periods of unrest. We first review the supporting evidence from 34 case studies, and identify weaknesses in this argument due predominantly to small sample size, the narrow bandwidth of magnitude scales available, variability in the methods used to assess the minimum or cutoff magnitude Mc, and to infer b. Informed by this, we use synthetic realisations to quantify the effect of choice of the cutoff magnitude on maximum likelihood estimates of b, and suggest a new work flow for this choice. We present the first quantitative estimate of the error in b introduced by uncertainties in estimating Mc, as a function of the number of events and the b-value itself. This error can significantly exceed the commonly-quoted statistical error in the estimated b-value, especially for the case that the underlying b-value is high. We apply the new methods to data sets from recent periods of unrest in El Hierro and Mount Etna. For El Hierro we confirm significantly high b-values of 1.5-2.5 prior to the 10 October 2011 eruption. For Mount Etna the b-values are indistinguishable from b = 1 within error, except during the flank eruptions at Mount Etna in 2001-2003, when 1.5 < b < 2.0. For the time period analysed, they are rarely lower than b = 1. Our results confirm that these volcano-tectonic earthquake populations can have systematically high b-values, especially when associated with eruptions. At other times they can be indistinguishable from those of tectonic earthquakes within the total error. The results have significant implications for operational forecasting informed by b-value variability, in particular in assessing the significance of b-value variations identified by sample sizes with fewer than 200 events above the completeness threshold.
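    The cutoff-magnitude effect can be reproduced with a few lines using the standard Aki-Utsu maximum-likelihood estimator and its b/sqrt(N) error; the catalogue below is synthetic and the 50% detection rate below completeness is an arbitrary assumption for the demo, not the paper's workflow.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def b_value(binned_mags, mc, dm=0.1):
        """Aki-Utsu maximum-likelihood b-value for binned magnitudes >= mc,
        with the standard b/sqrt(N) error estimate."""
        m = binned_mags[binned_mags >= mc - 1e-9]
        b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
        return b, b / np.sqrt(len(m)), len(m)

    # Synthetic Gutenberg-Richter catalogue with a true b of 1.5 (a 'volcanic' value).
    b_true, dm = 1.5, 0.1
    mags = np.round(rng.exponential(np.log10(np.e) / b_true, size=20000), 1)

    # Crude incompleteness: below magnitude 1.0 only half of the events are recorded.
    detected = (mags >= 1.0) | (rng.uniform(size=mags.size) < 0.5)
    catalogue = mags[detected]

    # Choosing Mc below the completeness magnitude biases b low; raising Mc
    # restores the estimate at the price of a larger statistical error.
    for mc in (0.6, 1.0, 1.4, 1.8):
        b, sigma, n = b_value(catalogue, mc, dm)
        print(f"Mc = {mc:.1f}: b = {b:.2f} +/- {sigma:.2f} (N = {n})")
    ```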

  1. Comparison of Two Methods for Estimating the Sampling-Related Uncertainty of Satellite Rainfall Averages Based on a Large Radar Data Set

    NASA Technical Reports Server (NTRS)

    Lau, William K. M. (Technical Monitor); Bell, Thomas L.; Steiner, Matthias; Zhang, Yu; Wood, Eric F.

    2002-01-01

    The uncertainty of rainfall estimated from averages of discrete samples collected by a satellite is assessed using a multi-year radar data set covering a large portion of the United States. The sampling-related uncertainty of rainfall estimates is evaluated for all combinations of 100 km, 200 km, and 500 km space domains, 1 day, 5 day, and 30 day rainfall accumulations, and regular sampling time intervals of 1 h, 3 h, 6 h, 8 h, and 12 h. These extensive analyses are combined to characterize the sampling uncertainty as a function of space and time domain, sampling frequency, and rainfall characteristics by means of a simple scaling law. Moreover, it is shown that both parametric and non-parametric statistical techniques of estimating the sampling uncertainty produce comparable results. Sampling uncertainty estimates, however, do depend on the choice of technique for obtaining them. They can also vary considerably from case to case, reflecting the great variability of natural rainfall, and should therefore be expressed in probabilistic terms. Rainfall calibration errors are shown to affect comparison of results obtained by studies based on data from different climate regions and/or observation platforms.

  2. Optimally weighted least-squares steganalysis

    NASA Astrophysics Data System (ADS)

    Ker, Andrew D.

    2007-02-01

    Quantitative steganalysis aims to estimate the amount of payload in a stego object, and such estimators seem to arise naturally in steganalysis of Least Significant Bit (LSB) replacement in digital images. However, as with all steganalysis, the estimators are subject to errors, and their magnitude seems heavily dependent on properties of the cover. In very recent work we have given the first derivation of estimation error, for a certain method of steganalysis (the Least-Squares variant of Sample Pairs Analysis) of LSB replacement steganography in digital images. In this paper we make use of our theoretical results to find an improved estimator and detector. We also extend the theoretical analysis to another (more accurate) steganalysis estimator (Triples Analysis) and hence derive an improved version of that estimator too. Experimental results show that the new steganalyzers have improved accuracy, particularly in the difficult case of never-compressed covers.

  3. A Likelihood-Based Framework for Association Analysis of Allele-Specific Copy Numbers.

    PubMed

    Hu, Y J; Lin, D Y; Sun, W; Zeng, D

    2014-10-01

    Copy number variants (CNVs) and single nucleotide polymorphisms (SNPs) co-exist throughout the human genome and jointly contribute to phenotypic variations. Thus, it is desirable to consider both types of variants, as characterized by allele-specific copy numbers (ASCNs), in association studies of complex human diseases. Current SNP genotyping technologies capture the CNV and SNP information simultaneously via fluorescent intensity measurements. The common practice of calling ASCNs from the intensity measurements and then using the ASCN calls in downstream association analysis has important limitations. First, the association tests are prone to false-positive findings when differential measurement errors between cases and controls arise from differences in DNA quality or handling. Second, the uncertainties in the ASCN calls are ignored. We present a general framework for the integrated analysis of CNVs and SNPs, including the analysis of total copy numbers as a special case. Our approach combines the ASCN calling and the association analysis into a single step while allowing for differential measurement errors. We construct likelihood functions that properly account for case-control sampling and measurement errors. We establish the asymptotic properties of the maximum likelihood estimators and develop EM algorithms to implement the corresponding inference procedures. The advantages of the proposed methods over the existing ones are demonstrated through realistic simulation studies and an application to a genome-wide association study of schizophrenia. Extensions to next-generation sequencing data are discussed.

  4. A fiber Bragg grating sensor system for estimating the large deflection of a lightweight flexible beam

    NASA Astrophysics Data System (ADS)

    Peng, Te; Yang, Yangyang; Ma, Lina; Yang, Huayong

    2016-10-01

    A sensor system based on fiber Bragg gratings (FBGs) is presented to estimate the deflection of a lightweight flexible beam, including the tip position and the tip rotation angle. In this paper, the classical problem of the deflection of a lightweight flexible beam of linear elastic material is analysed. We present the differential equation governing the behavior of the physical system and show that this equation, although straightforward in appearance, is in fact rather difficult to solve due to the presence of a non-linear term. We used epoxy glue to attach the FBG sensors at specific locations on the upper and lower surfaces of the beam in order to obtain local strain measurements. A quasi-distributed FBG static strain sensor network is designed and established. The estimation results from the FBG sensors are compared to reference displacements from ANSYS simulations and to experimental results obtained in the laboratory in the static case. The estimation errors of the FBG sensors are analysed for further error correction and design refinement. When the load weight is 20 g, the precision is highest: the position errors ex and ey are 0.19% and 0.14%, respectively, and the rotation error eθ is 1.23%.

  5. Information Content and Sensitivity of the 3β+2α Lidar Measurement System for Microphysical Retrievals

    NASA Astrophysics Data System (ADS)

    Burton, S. P.; Liu, X.; Chemyakin, E.; Hostetler, C. A.; Stamnes, S.; Moore, R.; Sawamura, P.; Ferrare, R. A.; Knobelspiesse, K. D.

    2015-12-01

    There is considerable interest in retrieving aerosol effective radius, number concentration and refractive index from lidar measurements of extinction and backscatter at several wavelengths. The 3 backscatter + 2 extinction (3β+2α) combination is particularly important since the planned NASA Aerosol-Clouds-Ecosystem (ACE) mission recommends this combination of measurements. The 2nd-generation NASA Langley airborne High Spectral Resolution Lidar (HSRL-2) has been making 3β+2α measurements since 2012. Here we develop a deeper understanding of the information content and sensitivities of the 3β+2α system in terms of aerosol microphysical parameters of interest. We determine best case results using a retrieval-free methodology. We calculate information content and uncertainty metrics from Optimal Estimation techniques using only a simplified forward model look-up table, with no explicit inversion. Simplifications include spherical particles, mono-modal log-normal size distributions, and wavelength-independent refractive indices. Since we only use the forward model with no retrieval, our results are applicable as a best case for all existing retrievals. Retrieval-dependent errors due to mismatch between the assumptions and true atmospheric aerosols are not included. The sensitivity metrics allow for identifying (1) information content of the measurements versus a priori information; (2) best-case error bars on the retrieved parameters; and (3) potential sources of cross-talk or "compensating" errors wherein different retrieval parameters are not independently captured by the measurements. These results suggest that even in the best case, this retrieval system is underdetermined. Recommendations are given for addressing cross-talk between effective radius and number concentration. A potential solution to the under-determination problem is a combined active (lidar) and passive (polarimeter) retrieval, which is the subject of a new funded NASA project by our team.
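    For readers unfamiliar with the Optimal Estimation diagnostics used here, the sketch below computes the posterior covariance, averaging kernel, and degrees of freedom for signal from a Jacobian K, measurement covariance Se and prior covariance Sa; the matrices are invented placeholders, not the HSRL-2 look-up-table values.

    ```python
    import numpy as np

    # Retrieval-free information-content metrics in the Optimal Estimation sense.
    def oe_diagnostics(K, Se, Sa):
        Se_inv = np.linalg.inv(Se)
        Sa_inv = np.linalg.inv(Sa)
        S_post = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # best-case posterior covariance
        A = S_post @ K.T @ Se_inv @ K                        # averaging kernel
        return S_post, A, np.trace(A)                        # trace(A) = degrees of freedom for signal

    # 5 measurements (3 backscatter + 2 extinction), 3 state parameters
    # (e.g. effective radius, number concentration, refractive index) -- all values hypothetical.
    K = np.array([[0.8, 0.3, 0.1],
                  [0.7, 0.4, 0.2],
                  [0.5, 0.6, 0.2],
                  [0.3, 0.9, 0.3],
                  [0.2, 0.8, 0.4]])
    Se = np.diag([0.05**2] * 3 + [0.15**2] * 2)   # 5% backscatter, 15% extinction errors
    Sa = np.diag([1.0, 1.0, 0.5])                 # loose prior

    S_post, A, dofs = oe_diagnostics(K, Se, Sa)
    print("posterior std devs:", np.sqrt(np.diag(S_post)))
    print("degrees of freedom for signal:", round(dofs, 2))
    # Off-diagonal averaging-kernel terms flag cross-talk between retrieved parameters.
    print("averaging kernel:\n", np.round(A, 2))
    ```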

  6. Improved Atmospheric Soundings and Error Estimates from Analysis of AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2007-01-01

    The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007 generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case by case product error estimates which are in turn used for quality control; and 3) development of an accurate AIRS only cloud clearing and retrieval system. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as has been done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm-1 to 750 cm-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm-1 to 2395 cm-1 are used for temperature sounding purposes. The new methodology for improved error estimates and their use in quality control is described briefly and results are shown indicative of their accuracy. Results are also shown of forecast impact experiments assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS 5 Data Assimilation System using different quality control thresholds.

  7. Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.

    ERIC Educational Resources Information Center

    Yuan, Ke-Hai; Bentler, Peter M.

    2000-01-01

    Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)

  8. Maximum likelihood phase-retrieval algorithm: applications.

    PubMed

    Nahrstedt, D A; Southwell, W H

    1984-12-01

    The maximum likelihood estimator approach is shown to be effective in determining the wave front aberration in systems involving laser and flow field diagnostics and optical testing. The robustness of the algorithm enables convergence even in cases of severe wave front error and real, nonsymmetrical, obscured amplitude distributions.

  9. Intuitive theories of information: beliefs about the value of redundancy.

    PubMed

    Soll, J B

    1999-03-01

    In many situations, quantity estimates from multiple experts or diagnostic instruments must be collected and combined. Normatively, and all else equal, one should value information sources that are nonredundant, in the sense that correlation in forecast errors should be minimized. Past research on the preference for redundancy has been inconclusive. While some studies have suggested that people correctly place higher value on uncorrelated inputs when collecting estimates, others have shown that people either ignore correlation or, in some cases, even prefer it. The present experiments show that the preference for redundancy depends on one's intuitive theory of information. The most common intuitive theory identified is the Error Tradeoff Model (ETM), which explicitly distinguishes between measurement error and bias. According to ETM, measurement error can only be averaged out by consulting the same source multiple times (normatively false), and bias can only be averaged out by consulting different sources (normatively true). As a result, ETM leads people to prefer redundant estimates when the ratio of measurement error to bias is relatively high. Other participants favored different theories. Some adopted the normative model, while others were reluctant to mathematically average estimates from different sources in any circumstance. In a post hoc analysis, science majors were more likely than others to subscribe to the normative model. While tentative, this result lends insight into how intuitive theories might develop and also has potential ramifications for how statistical concepts such as correlation might best be learned and internalized. Copyright 1999 Academic Press.
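    The normative point about redundancy can be made concrete with a short simulation (values are illustrative): the error of an average of two unbiased estimates shrinks with the correlation rho of their errors according to sigma^2 (1 + rho) / 2.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Averaging two forecasts of the same quantity: correlated (redundant) sources help less.
    sigma, n = 10.0, 200000
    for rho in (0.0, 0.5, 0.9):
        cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
        errors = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        avg_error = errors.mean(axis=1)
        print(f"rho = {rho:.1f}: RMSE of averaged estimate = {avg_error.std():.2f} "
              f"(theory {sigma * np.sqrt((1 + rho) / 2):.2f})")
    ```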

  10. Prescribing Errors Involving Medication Dosage Forms

    PubMed Central

    Lesar, Timothy S

    2002-01-01

    CONTEXT Prescribing errors involving medication dose formulations have been reported to occur frequently in hospitals. No systematic evaluations of the characteristics of errors related to medication dosage formulation have been performed. OBJECTIVE To quantify the characteristics, frequency, and potential adverse patient effects of prescribing errors involving medication dosage forms. DESIGN Evaluation of all detected medication prescribing errors involving or related to medication dosage forms in a 631-bed tertiary care teaching hospital. MAIN OUTCOME MEASURES Type, frequency, and potential for adverse effects of prescribing errors involving or related to medication dosage forms. RESULTS A total of 1,115 clinically significant prescribing errors involving medication dosage forms were detected during the 60-month study period. The annual number of detected errors increased throughout the study period. Detailed analysis of the 402 errors detected during the last 16 months of the study demonstrated the most common errors to be: failure to specify controlled release formulation (total of 280 cases; 69.7%) both when prescribing using the brand name (148 cases; 36.8%) and when prescribing using the generic name (132 cases; 32.8%); and prescribing controlled delivery formulations to be administered per tube (48 cases; 11.9%). The potential for adverse patient outcome was rated as potentially “fatal or severe” in 3 cases (0.7%), and “serious” in 49 cases (12.2%). Errors most commonly involved cardiovascular agents (208 cases; 51.7%). CONCLUSIONS Hospitalized patients are at risk for adverse outcomes due to prescribing errors related to inappropriate use of medication dosage forms. This information should be considered in the development of strategies to prevent adverse patient outcomes resulting from such errors. PMID:12213138

  11. The Role of Inflation and Price Escalation Adjustments in Properly Estimating Program Costs: F-35 Case Study

    DTIC Science & Technology

    2016-04-30

    costs of new defense systems. An inappropriate price index can introduce errors in both development of cost estimating relationships (CERs) and in ... indexes derived from CERs. These indexes isolate changes in price due to factors other than changes in quality over time. We develop a “Baseline” CER ... The hedonic index application has commonalities with cost estimating relationships (CERs), which also model system costs as a function of quality

  12. Extending "Deep Blue" aerosol retrieval coverage to cases of absorbing aerosols above clouds: Sensitivity analysis and first case studies

    NASA Astrophysics Data System (ADS)

    Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Lee, J.; Redemann, J.; Schmid, B.; Shinozuka, Y.

    2016-05-01

    Cases of absorbing aerosols above clouds (AACs), such as smoke or mineral dust, are omitted from most routinely processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar sensors, for incorporation into a future version of the "Deep Blue" AOD data product. Detailed retrieval simulations suggest that these sensors should be able to determine AAC AOD with a typical level of uncertainty ~25-50% (with lower uncertainties for more strongly absorbing aerosol types) and COD with an uncertainty ~10-20%, if an appropriate aerosol optical model is known beforehand. Errors are larger, particularly if the aerosols are only weakly absorbing, if the aerosol optical properties are not known, and the appropriate model to use must also be retrieved. Actual retrieval errors are also compared to uncertainty envelopes obtained through the optimal estimation (OE) technique; OE-based uncertainties are found to be generally reasonable for COD but larger than actual retrieval errors for AOD, due in part to difficulties in quantifying the degree of spectral correlation of forward model error. The algorithm is also applied to two MODIS scenes (one smoke and one dust) for which near-coincident NASA Ames Airborne Tracking Sun photometer (AATS) data were available to use as a ground truth AOD data source, and found to be in good agreement, demonstrating the validity of the technique with real observations.

  13. Microionization chamber for reference dosimetry in IMRT verification: clinical implications on OAR dosimetric errors

    NASA Astrophysics Data System (ADS)

    Sánchez-Doblado, Francisco; Capote, Roberto; Leal, Antonio; Roselló, Joan V.; Lagares, Juan I.; Arráns, Rafael; Hartmann, Günther H.

    2005-03-01

    Intensity modulated radiotherapy (IMRT) has become a treatment of choice in many oncological institutions. Small fields or beamlets with sizes of 1 to 5 cm2 are now routinely used in IMRT delivery. Therefore small ionization chambers (IC) with sensitive volumes <= 0.1 cm3 are generally used for dose verification of an IMRT treatment. The measurement conditions during verification may be quite different from reference conditions normally encountered in clinical beam calibration, so dosimetry of these narrow photon beams pertains to the so-called non-reference conditions for beam calibration. This work aims at estimating the error made when measuring the organ at risk's (OAR) absolute dose by a micro ion chamber (μIC) in a typical IMRT treatment. The dose error comes from the assumption that the dosimetric parameters determining the absolute dose are the same as for the reference conditions. We have selected two clinical cases, treated by IMRT, for our dose error evaluations. Detailed geometrical simulation of the μIC and the dose verification set-up was performed. The Monte Carlo (MC) simulation allows us to calculate the dose measured by the chamber as a dose averaged over the air cavity within the ion-chamber active volume (Dair). The absorbed dose to water (Dwater) is derived as the dose deposited inside the same volume, in the same geometrical position, filled and surrounded by water in the absence of the ion chamber. Therefore, the Dwater/Dair dose ratio is the MC estimator of the total correction factor needed to convert the absorbed dose in air into the absorbed dose in water. The dose ratio was calculated for the μIC located at the isocentre within the OARs for both clinical cases. The clinical impact of the calculated dose error was found to be negligible for the studied IMRT treatments.

  14. The next organizational challenge: finding and addressing diagnostic error.

    PubMed

    Graber, Mark L; Trowbridge, Robert; Myers, Jennifer S; Umscheid, Craig A; Strull, William; Kanter, Michael H

    2014-03-01

    Although health care organizations (HCOs) are intensely focused on improving the safety of health care, efforts to date have almost exclusively targeted treatment-related issues. The literature confirms that the approaches HCOs use to identify adverse medical events are not effective in finding diagnostic errors, so the initial challenge is to identify cases of diagnostic error. WHY HEALTH CARE ORGANIZATIONS NEED TO GET INVOLVED: HCOs are preoccupied with many quality- and safety-related operational and clinical issues, including performance measures. The case for paying attention to diagnostic errors, however, is based on the following four points: (1) diagnostic errors are common and harmful, (2) high-quality health care requires high-quality diagnosis, (3) diagnostic errors are costly, and (4) HCOs are well positioned to lead the way in reducing diagnostic error. FINDING DIAGNOSTIC ERRORS: Current approaches to identifying diagnostic errors, such as occurrence screens, incident reports, autopsy, and peer review, were not designed to detect diagnostic issues (or problems of omission in general) and/or rely on voluntary reporting. The realization that the existing tools are inadequate has spurred efforts to identify novel tools that could be used to discover diagnostic errors or breakdowns in the diagnostic process that are associated with errors. New approaches--Maine Medical Center's case-finding of diagnostic errors by facilitating direct reports from physicians and Kaiser Permanente's electronic health record--based reports that detect process breakdowns in the followup of abnormal findings--are described in case studies. By raising awareness and implementing targeted programs that address diagnostic error, HCOs may begin to play an important role in addressing the problem of diagnostic error.

  15. Interaction force and motion estimators facilitating impedance control of the upper limb rehabilitation robot.

    PubMed

    Mancisidor, Aitziber; Zubizarreta, Asier; Cabanes, Itziar; Bengoa, Pablo; Jung, Je Hyung

    2017-07-01

    In order to enhance the performance of rehabilitation robots, it is imperative to know both the force and the motion caused by the interaction between user and robot. However, the common approach of directly measuring both signals through force and motion sensors not only increases the complexity of the system but also impedes its affordability. As an alternative to direct measurement, in this work we present new force and motion estimators for the proper control of the upper-limb rehabilitation Universal Haptic Pantograph (UHP) robot. The estimators are based on the kinematic and dynamic model of the UHP and the use of signals measured by means of common low-cost sensors. In order to demonstrate the effectiveness of the estimators, several experimental tests were carried out. The force and impedance control of the UHP was first implemented by directly measuring the interaction force using accurate extra sensors, and the robot performance was compared to the case where the proposed estimators replace the directly measured values. The experimental results reveal that the controller based on the estimators has similar performance to that using direct measurement (less than 1 N difference in root mean square error between the two cases), indicating that the proposed force and motion estimators can facilitate the implementation of an interactive controller for the UHP in robot-mediated rehabilitation trainings.

  16. Satellite angular velocity estimation based on star images and optical flow techniques.

    PubMed

    Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele

    2013-09-25

    An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus also be used to deliver angular rate information when attitude determination is not possible, as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial-off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than that of the other two components.
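    As a rough illustration of the final least-squares step only (the paper's full pipeline also includes region-based optical flow, sensor calibration and the quaternion Poisson equation, which are omitted here), the sketch below recovers the angular velocity from simulated star-direction rates using du/dt = -omega x u; all values are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def skew(v):
        """Cross-product (skew-symmetric) matrix of a 3-vector."""
        return np.array([[0, -v[2], v[1]],
                         [v[2], 0, -v[0]],
                         [-v[1], v[0], 0]])

    # For inertially fixed stars seen from a rotating sensor, each unit direction u
    # evolves as du/dt = -omega x u = [u]_x omega, which is linear in omega and can
    # be solved by stacking one 3x3 block per tracked star in a least-squares system.
    def estimate_omega(dirs, flow):
        A = np.vstack([skew(u) for u in dirs])
        b = np.hstack(flow)
        return np.linalg.lstsq(A, b, rcond=None)[0]

    # Synthetic test: 20 random star directions, a true rate, and noisy direction rates.
    omega_true = np.array([0.01, -0.02, 0.005])          # rad/s
    dirs = rng.normal(size=(20, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    flow = [skew(u) @ omega_true + rng.normal(0, 1e-4, 3) for u in dirs]

    print("estimated omega:", np.round(estimate_omega(dirs, flow), 4))
    ```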

  17. Satellite Angular Velocity Estimation Based on Star Images and Optical Flow Techniques

    PubMed Central

    Fasano, Giancarmine; Rufino, Giancarlo; Accardo, Domenico; Grassi, Michele

    2013-01-01

    An optical flow-based technique is proposed to estimate spacecraft angular velocity based on sequences of star-field images. It does not require star identification and can thus also be used to deliver angular rate information when attitude determination is not possible, as during platform de-tumbling or slewing. Region-based optical flow calculation is carried out on successive star images preprocessed to remove background. Sensor calibration parameters, the Poisson equation, and a least-squares method are then used to estimate the angular velocity vector components in the sensor rotating frame. A theoretical error budget is developed to estimate the expected angular rate accuracy as a function of camera parameters and star distribution in the field of view. The effectiveness of the proposed technique is tested by using star field scenes generated by a hardware-in-the-loop testing facility and acquired by a commercial-off-the-shelf camera sensor. Simulated cases comprise rotations at different rates. Experimental results are presented which are consistent with theoretical estimates. In particular, very accurate angular velocity estimates are generated at lower slew rates, while in all cases the achievable accuracy in the estimation of the angular velocity component along boresight is about one order of magnitude worse than that of the other two components. PMID:24072023

  18. A new method of hybrid frequency hopping signals selection and blind parameter estimation

    NASA Astrophysics Data System (ADS)

    Zeng, Xiaoyu; Jiao, Wencheng; Sun, Huixian

    2018-04-01

    Frequency hopping communication is widely used in military communications at home and abroad. In the case of single-channel reception, few methods can process multiple frequency hopping signals both effectively and simultaneously. A method of hybrid FH signal selection and blind parameter estimation is proposed. The method makes use of spectral transformation, spectral entropy calculation and the basic theory of PRI transformation to realize the sorting and parameter estimation of the components in the hybrid frequency hopping signal. The simulation results show that this method can correctly classify the frequency hopping component signals, and that the estimation error of the frequency hopping period is about 5% and the estimation error of the frequency hopping frequency is less than 1% when the SNR is 10 dB. However, the performance of this method deteriorates seriously at low SNR.

  19. Simplified Estimation and Testing in Unbalanced Repeated Measures Designs.

    PubMed

    Spiess, Martin; Jordan, Pascal; Wendt, Mike

    2018-05-07

    In this paper we propose a simple estimator for unbalanced repeated measures design models where each unit is observed at least once in each cell of the experimental design. The estimator does not require a model of the error covariance structure. Thus, circularity of the error covariance matrix and estimation of correlation parameters and variances are not necessary. Together with a weak assumption about the reason for the varying number of observations, the proposed estimator and its variance estimator are unbiased. As an alternative to confidence intervals based on the normality assumption, a bias-corrected and accelerated bootstrap technique is considered. We also propose the naive percentile bootstrap for Wald-type tests where the standard Wald test may break down when the number of observations is small relative to the number of parameters to be estimated. In a simulation study we illustrate the properties of the estimator and the bootstrap techniques to calculate confidence intervals and conduct hypothesis tests in small and large samples under normality and non-normality of the errors. The results imply that the simple estimator is only slightly less efficient than an estimator that correctly assumes a block structure of the error correlation matrix, a special case of which is an equi-correlation matrix. Application of the estimator and the bootstrap technique is illustrated using data from a task-switching experiment based on a within-subjects design with 32 cells and 33 participants.

  20. Quality of Impressions and Work Authorizations Submitted by Dental Students Supervised by Prosthodontists and General Dentists.

    PubMed

    Imbery, Terence A; Diaz, Nicholas; Greenfield, Kristy; Janus, Charles; Best, Al M

    2016-10-01

    Preclinical fixed prosthodontics is taught by Department of Prosthodontics faculty members at Virginia Commonwealth University School of Dentistry; however, 86% of all clinical cases in academic year 2012 were staffed by faculty members from the Department of General Practice. The aims of this retrospective study were to quantify the quality of impressions, accuracy of laboratory work authorizations, and most common errors and to determine if there were differences between the rate of errors in cases supervised by the prosthodontists and the general dentists. A total of 346 Fixed Prosthodontic Laboratory Tracking Sheets for the 2012 academic year were reviewed. The results showed that, overall, 73% of submitted impressions were acceptable at initial evaluation, 16% had to be poured first and re-evaluated for quality prior to pindexing, 7% had multiple impressions submitted for transfer dies, and 4% were rejected for poor quality. There were higher acceptance rates for impressions and work authorizations for cases staffed by prosthodontists than by general dentists, but the differences were not statistically significant (p=0.0584 and p=0.0666, respectively). Regarding the work authorizations, 43% overall did not provide sufficient information or had technical errors that delayed prosthesis fabrication. The most common errors were incorrect mountings, absence of solid casts, inadequate description of margins for porcelain fused to metal crowns, inaccurate die trimming, and margin marking. The percentages of errors in cases supervised by general dentists and prosthodontists were similar for 17 of the 18 types of errors identified; only for margin description was the percentage of errors statistically significantly higher for general dentist-supervised than prosthodontist-supervised cases. These results highlighted the ongoing need for faculty development and calibration to ensure students receive the highest quality education from all faculty members teaching fixed prosthodontics.

  1. Prediction of optimum sorption isotherm: comparison of linear and non-linear method.

    PubMed

    Kumar, K Vasanth; Sivanesan, S

    2005-11-11

    Equilibrium parameters for the sorption of Bismarck brown onto rice husk were estimated by the linear least-squares method and by a trial-and-error non-linear method using the Freundlich, Langmuir and Redlich-Peterson isotherms. A comparison between the linear and non-linear methods of estimating the isotherm parameters is reported. The best-fitting isotherms were the Langmuir and the Redlich-Peterson equations. The results show that the non-linear method could be a better way to obtain the parameters. The Redlich-Peterson isotherm is a special case of the Langmuir isotherm when the Redlich-Peterson isotherm constant g is unity.
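    A small, self-contained comparison of the two estimation routes on synthetic Langmuir data (the concentration values and constants below are invented, not the paper's measurements):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(5)

    def langmuir(ce, qmax, kl):
        return qmax * kl * ce / (1.0 + kl * ce)

    # Hypothetical equilibrium data (Ce in mg/L, qe in mg/g) with multiplicative noise.
    qmax_true, kl_true = 40.0, 0.25
    ce = np.array([2, 5, 10, 20, 40, 80, 120], dtype=float)
    qe = langmuir(ce, qmax_true, kl_true) * (1 + rng.normal(0, 0.05, ce.size))

    # Linear method: regress Ce/qe on Ce, since Ce/qe = Ce/qmax + 1/(KL*qmax).
    slope, intercept = np.polyfit(ce, ce / qe, 1)
    qmax_lin, kl_lin = 1.0 / slope, slope / intercept

    # Non-linear method: fit the isotherm directly by least squares.
    (qmax_nl, kl_nl), _ = curve_fit(langmuir, ce, qe, p0=[30.0, 0.1])

    print(f"linear fit:     qmax = {qmax_lin:.1f}, KL = {kl_lin:.3f}")
    print(f"non-linear fit: qmax = {qmax_nl:.1f}, KL = {kl_nl:.3f}")
    ```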

  2. Influence of an independent quarterly audit on publicly reported vancomycin-resistant enterococci bacteremia data in Ontario, Canada.

    PubMed

    Prematunge, Chatura; Policarpio, Michelle E; Johnstone, Jennie; Adomako, Kwaku; Nadolny, Emily; Lam, Freda; Li, Ye; Brown, Kevin A; Garber, Gary

    2018-04-13

    All Ontario hospitals are mandated to self-report vancomycin-resistant enterococci (VRE) bacteremias to Ontario's Ministry of Health and Long-term Care for public reporting purposes. Independent quarterly audits of publicly reported VRE bacteremias between September 2013 and June 2015 were carried out by Public Health Ontario. VRE bacteremia case-reporting errors between January 2009 and August 2013 were identified by a single retrospective audit. Employing a quasiexperimental pre-post study design, the relative risk of VRE bacteremia reporting errors before and after quarterly audits was modeled using Poisson regression, adjusting for hospital type and case counts reported to the Ministry of Health and Long-term Care, and accounting for autocorrelation via generalized estimating equations. Overall, 24.5% (126 out of 514) of VRE bacteremias were reported in error; 114 out of 367 (31%) VRE bacteremias reported before quarterly audits and 12 out of 147 (8.1%) reported after audits were found to be incorrect. In adjusted analysis, quarterly audits of VRE bacteremias were associated with significant reductions in reporting errors when compared with the period before quarterly auditing (relative risk, 0.17; 95% confidence interval, 0.05-0.63). The risk of reporting errors among community hospitals was greater than among acute teaching hospitals of the region (relative risk, 4.39; 95% confidence interval, 3.07-5.70). This study found independent quarterly audits of publicly reported VRE bacteremias to be associated with significant reductions in reporting errors. Public reporting systems should consider adopting routine data audits and hospital-targeted training to improve data accuracy. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
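    A hedged sketch of the kind of model described (Poisson GEE with exchangeable within-hospital correlation and an offset for reported cases), using simulated counts rather than the Ontario data; the statsmodels calls follow its documented GEE interface, but all column names and effect sizes are invented.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(6)

    # Hypothetical quarterly error counts for 30 hospitals in a pre-post audit design.
    hospitals = np.repeat(np.arange(30), 16)            # 16 quarters per hospital
    post = np.tile(np.r_[np.zeros(8), np.ones(8)], 30)  # audits start at quarter 9
    teaching = np.repeat(rng.integers(0, 2, 30), 16)
    rate = np.exp(-1.0 - 1.5 * post - 0.5 * teaching)   # true audit RR ~ exp(-1.5) ~ 0.22
    reported = rng.poisson(20, size=hospitals.size)     # cases reported per quarter
    errors = rng.poisson(rate * reported)

    df = pd.DataFrame({"errors": errors, "post": post, "teaching": teaching,
                       "hospital": hospitals, "reported": reported})

    # Poisson GEE with exchangeable within-hospital correlation and an offset for
    # the number of reported cases, analogous to an adjusted relative-risk model.
    model = sm.GEE.from_formula("errors ~ post + teaching", groups="hospital", data=df,
                                family=sm.families.Poisson(),
                                cov_struct=sm.cov_struct.Exchangeable(),
                                offset=np.log(df["reported"]))
    result = model.fit()
    print(np.exp(result.params))   # rate ratios; 'post' estimates the audit effect
    ```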

  3. Life cycle assessment based environmental impact estimation model for pre-stressed concrete beam bridge in the early design phase

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Kyong Ju, E-mail: kjkim@cau.ac.kr; Yun, Won Gun, E-mail: ogun78@naver.com; Cho, Namho, E-mail: nhc51@cau.ac.kr

    The recent rise in global concern for environmental issues such as global warming and air pollution is accentuating the need for environmental assessments in the construction industry. Promptly evaluating the environmental loads of the various design alternatives during the early stages of a construction project and adopting the most environmentally sustainable candidate is therefore of great importance. Yet, research on the early evaluation of a construction project's environmental load in order to aid the decision making process is hitherto lacking. In light of this dilemma, this study proposes a model for estimating the environmental load by employing only the most basic information accessible during the early design phases of a project for the pre-stressed concrete (PSC) beam bridge, the most common bridge structure. Firstly, a life cycle assessment (LCA) was conducted on the data from 99 bridges by integrating the bills of quantities (BOQ) with a life cycle inventory (LCI) database. The processed data was then utilized to construct a case based reasoning (CBR) model for estimating the environmental load. The accuracy of the estimation model was then validated using five test cases; the model's mean absolute error rate (MAER) for the total environmental load was calculated as 7.09%. These test results were shown to be superior compared to those obtained from a multiple-regression based model and a slab area base-unit analysis model. Henceforth, application of this model during the early stages of a project is expected to highly complement environmentally friendly designs and construction by facilitating the swift evaluation of the environmental load from multiple standpoints. - Highlights: • This study is to develop the model of assessing the environmental impacts on LCA. • Bills of quantity from completed designs of PSC Beam were linked with the LCI DB. • Previous cases were used to estimate the environmental load of new case by CBR model. • CBR model produces more accurate estimations (7.09%) than other conventional models. • This study supports decision making process in the early stage of a new construction case.
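    As a loose analogue of the case-based reasoning step (a k-nearest-neighbour regressor standing in for the CBR engine, with invented bridge attributes in place of the BOQ/LCI database), the snippet below also shows how the MAER validation metric is computed on held-out cases.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(7)

    # Hypothetical early-design attributes for 99 PSC beam bridges: [length m, width m, spans]
    # and their LCA-derived environmental loads (arbitrary units).
    X = np.column_stack([rng.uniform(100, 600, 99),
                         rng.uniform(8, 25, 99),
                         rng.integers(2, 12, 99)])
    load = 50.0 * X[:, 0] * X[:, 1] * (1 + 0.02 * X[:, 2]) * rng.normal(1.0, 0.05, 99)

    # k-nearest-neighbour regression as a simple proxy for case-based reasoning:
    # a new case is scored from the most similar past bridges.
    cbr = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X[:94], load[:94])

    # Validate on five held-out cases with the mean absolute error rate (MAER).
    pred = cbr.predict(X[94:])
    maer = np.mean(np.abs(pred - load[94:]) / load[94:]) * 100
    print(f"MAER on the five test cases: {maer:.2f}%")
    ```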

  4. Performance of two predictive uncertainty estimation approaches for conceptual Rainfall-Runoff Model: Bayesian Joint Inference and Hydrologic Uncertainty Post-processing

    NASA Astrophysics Data System (ADS)

    Hernández-López, Mario R.; Romero-Cuéllar, Jonathan; Camilo Múnera-Estrada, Juan; Coccia, Gabriele; Francés, Félix

    2017-04-01

    It is particularly important to emphasize the role of uncertainty when model forecasts are used to support decision-making and water management. This research compares two approaches for the evaluation of predictive uncertainty in hydrological modeling. The first approach is the Bayesian Joint Inference of the hydrological and error models. The second approach is carried out through the Model Conditional Processor using the Truncated Normal Distribution in the transformed space. The comparison is focused on the reliability of the predictive distribution. The case study is applied to two basins included in the Model Parameter Estimation Experiment (MOPEX). These two basins, which have different hydrological complexity, are the French Broad River (North Carolina) and the Guadalupe River (Texas). The results indicate that, generally, both approaches are able to provide similar predictive performances. However, differences between them can arise in basins with complex hydrology (e.g. ephemeral basins). This is because the results obtained with Bayesian Joint Inference are strongly dependent on the suitability of the hypothesized error model. Similarly, the results in the case of the Model Conditional Processor are mainly influenced by the selected model of the tails, or even by the selected full probability distribution model of the data in the real space, and by the definition of the Truncated Normal Distribution in the transformed space. In summary, the different hypotheses that the modeler chooses in each of the two approaches are the main cause of the different results. This research also explores a proper combination of both methodologies, which could be useful to achieve less biased hydrological parameter estimation. In this approach, the predictive distribution is first obtained through the Model Conditional Processor. This predictive distribution is then used to derive the corresponding additive error model, which is employed for the hydrological parameter estimation with the Bayesian Joint Inference methodology.

  5. Robust Tests for Additive Gene-Environment Interaction in Case-Control Studies Using Gene-Environment Independence.

    PubMed

    Liu, Gang; Mukherjee, Bhramar; Lee, Seunggeun; Lee, Alice W; Wu, Anna H; Bandera, Elisa V; Jensen, Allan; Rossing, Mary Anne; Moysich, Kirsten B; Chang-Claude, Jenny; Doherty, Jennifer A; Gentry-Maharaj, Aleksandra; Kiemeney, Lambertus; Gayther, Simon A; Modugno, Francesmary; Massuger, Leon; Goode, Ellen L; Fridley, Brooke L; Terry, Kathryn L; Cramer, Daniel W; Ramus, Susan J; Anton-Culver, Hoda; Ziogas, Argyrios; Tyrer, Jonathan P; Schildkraut, Joellen M; Kjaer, Susanne K; Webb, Penelope M; Ness, Roberta B; Menon, Usha; Berchuck, Andrew; Pharoah, Paul D; Risch, Harvey; Pearce, Celeste Leigh

    2018-02-01

    There have been recent proposals advocating the use of additive gene-environment interaction instead of the widely used multiplicative scale, as a more relevant public health measure. Using gene-environment independence enhances statistical power for testing multiplicative interaction in case-control studies. However, under departure from this assumption, substantial bias in the estimates and inflated type I error in the corresponding tests can occur. In this paper, we extend the empirical Bayes (EB) approach previously developed for multiplicative interaction, which trades off between bias and efficiency in a data-adaptive way, to the additive scale. An EB estimator of the relative excess risk due to interaction is derived, and the corresponding Wald test is proposed with a general regression setting under a retrospective likelihood framework. We study the impact of gene-environment association on the resultant test with case-control data. Our simulation studies suggest that the EB approach uses the gene-environment independence assumption in a data-adaptive way and provides a gain in power compared with the standard logistic regression analysis and better control of type I error when compared with the analysis assuming gene-environment independence. We illustrate the methods with data from the Ovarian Cancer Association Consortium. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Multiple regression technique for Pth degree polynomials with and without linear cross products

    NASA Technical Reports Server (NTRS)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products, which evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; they show the output formats and typical plots comparing computer results to each set of input data.

  7. Detecting non-orthology in the COGs database and other approaches grouping orthologs using genome-specific best hits.

    PubMed

    Dessimoz, Christophe; Boeckmann, Brigitte; Roth, Alexander C J; Gonnet, Gaston H

    2006-01-01

    Correct orthology assignment is a critical prerequisite of numerous comparative genomics procedures, such as function prediction, construction of phylogenetic species trees and genome rearrangement analysis. We present an algorithm for the detection of non-orthologs that arise by mistake in current orthology classification methods based on genome-specific best hits, such as the COGs database. The algorithm works with pairwise distance estimates, rather than computationally expensive and error-prone tree-building methods. The accuracy of the algorithm is evaluated through verification of the distribution of predicted cases, case-by-case phylogenetic analysis and comparisons with predictions from other projects using independent methods. Our results show that a very significant fraction of the COG groups include non-orthologs: using conservative parameters, the algorithm detects non-orthology in a third of all COG groups. Consequently, sequence analysis sensitive to correct orthology assignments will greatly benefit from these findings.

  8. Characterizing Protease Specificity: How Many Substrates Do We Need?

    PubMed Central

    Schauperl, Michael; Fuchs, Julian E.; Waldner, Birgit J.; Huber, Roland G.; Kramer, Christian; Liedl, Klaus R.

    2015-01-01

    Calculation of cleavage entropies makes it possible to quantify, map and compare protease substrate specificity by an information entropy based approach. The metric intrinsically depends on the number of experimentally determined substrates (data points). Thus a statistical analysis of its numerical stability is crucial to estimate the systematic error made by estimating specificity based on a limited number of substrates. In this contribution, we show the mathematical basis for estimating the uncertainty in cleavage entropies. Sets of cleavage entropies are calculated using experimental cleavage data and modeled extreme cases. By analyzing the underlying mathematics and applying statistical tools, a linear dependence of the metric with respect to 1/n was found. This allows us to extrapolate the values to an infinite number of samples and to estimate the errors. Analyzing the errors, a minimum number of 30 substrates was found to be necessary to characterize substrate specificity, in terms of amino acid variability, for a protease (S4-S4’) with an uncertainty of 5 percent. Therefore, we encourage experimental researchers in the protease field to record specificity profiles of novel proteases aiming to identify at least 30 peptide substrates of maximum sequence diversity. We expect a full characterization of protease specificity to be helpful to rationalize biological functions of proteases and to assist rational drug design. PMID:26559682
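    A compact illustration of the 1/n behaviour: plug-in Shannon entropies computed from small substrate samples are biased low roughly in proportion to 1/n, so a linear fit against 1/n extrapolates to the large-sample value. The toy protease below (specific P1 site, unspecific flanks) and the eight-residue window are assumptions for the demo only, not the paper's data or code.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    AA = list("ACDEFGHIKLMNPQRSTVWY")

    def cleavage_entropy(substrates):
        """Mean normalised Shannon entropy over subsite positions of aligned substrates."""
        arr = np.array([list(s) for s in substrates])
        ent = []
        for pos in range(arr.shape[1]):
            counts = np.array([np.sum(arr[:, pos] == a) for a in AA], dtype=float)
            p = counts[counts > 0] / counts.sum()
            ent.append(-(p * np.log2(p)).sum() / np.log2(len(AA)))
        return float(np.mean(ent))

    # Hypothetical protease with a specific P1 site (only K/R) and unspecific flanks.
    def random_substrate():
        return "".join(rng.choice(AA, 3)) + rng.choice(["K", "R"]) + "".join(rng.choice(AA, 4))

    pool = [random_substrate() for _ in range(500)]

    # Entropy estimated from n substrates is roughly linear in 1/n, so the intercept
    # of a fit against 1/n extrapolates to n -> infinity.
    ns = np.array([10, 20, 30, 50, 100, 200])
    est = [np.mean([cleavage_entropy(rng.choice(pool, n, replace=False)) for _ in range(50)]) for n in ns]
    slope, intercept = np.polyfit(1.0 / ns, est, 1)
    print("extrapolated entropy (n -> inf):", round(intercept, 3),
          " vs. full pool:", round(cleavage_entropy(pool), 3))
    ```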

  9. Rain radar measurement error estimation using data assimilation in an advection-based nowcasting system

    NASA Astrophysics Data System (ADS)

    Merker, Claire; Ament, Felix; Clemens, Marco

    2017-04-01

    The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. Especially a detailed description of the spatial and temporal structure of errors is beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars is assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.

  10. Variance estimation when using inverse probability of treatment weighting (IPTW) with survival analysis.

    PubMed

    Austin, Peter C

    2016-12-30

    Propensity score methods are used to reduce the effects of observed confounding when using observational data to estimate the effects of treatments or exposures. A popular method of using the propensity score is inverse probability of treatment weighting (IPTW). When using this method, a weight is calculated for each subject that is equal to the inverse of the probability of receiving the treatment that was actually received. These weights are then incorporated into the analyses to minimize the effects of observed confounding. Previous research has found that these methods result in unbiased estimation when estimating the effect of treatment on survival outcomes. However, conventional methods of variance estimation were shown to result in biased estimates of standard error. In this study, we conducted an extensive set of Monte Carlo simulations to examine different methods of variance estimation when using a weighted Cox proportional hazards model to estimate the effect of treatment. We considered three variance estimation methods: (i) a naïve model-based variance estimator; (ii) a robust sandwich-type variance estimator; and (iii) a bootstrap variance estimator. We considered estimation of both the average treatment effect and the average treatment effect in the treated. We found that the use of a bootstrap estimator resulted in approximately correct estimates of standard errors and confidence intervals with the correct coverage rates. The other estimators resulted in biased estimates of standard errors and confidence intervals with incorrect coverage rates. Our simulations were informed by a case study examining the effect of statin prescribing on mortality. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
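    The bootstrap recommendation amounts to resampling subjects and re-running the whole pipeline, propensity model included. The sketch below does this for an IPTW estimate of the average treatment effect, with a simple continuous outcome standing in for the weighted Cox model used in the study; data and effect sizes are simulated.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(9)

    # Simulated observational data: confounder x, treatment z, outcome y.
    n = 2000
    x = rng.normal(size=n)
    z = rng.binomial(1, 1 / (1 + np.exp(-0.8 * x)))
    y = 1.0 * z + 2.0 * x + rng.normal(size=n)

    def iptw_ate(x, z, y):
        """ATE estimate: fit a propensity model, weight by 1/e(x) or 1/(1-e(x))."""
        e = LogisticRegression().fit(x.reshape(-1, 1), z).predict_proba(x.reshape(-1, 1))[:, 1]
        w = np.where(z == 1, 1.0 / e, 1.0 / (1.0 - e))
        return np.average(y[z == 1], weights=w[z == 1]) - np.average(y[z == 0], weights=w[z == 0])

    # Bootstrap the *whole* procedure (propensity model + weighted contrast), which is
    # what yields approximately correct standard errors in the setting described above.
    boot, idx = [], np.arange(n)
    for _ in range(500):
        s = rng.choice(idx, size=n, replace=True)
        boot.append(iptw_ate(x[s], z[s], y[s]))

    print(f"ATE = {iptw_ate(x, z, y):.3f}, bootstrap SE = {np.std(boot):.3f}")
    ```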

  11. Isohaline position as a habitat indicator for estuarine populations

    USGS Publications Warehouse

    Jassby, Alan D.; Kimmerer, W.J.; Monismith, Stephen G.; Armor, C.; Cloern, James E.; Powell, T.M.; Vedlinski, Timothy J.

    1995-01-01

    The striped bass survival data were also used to illustrate a related important point: incorporating additional explanatory variables may decrease the prediction error for a population or process, but it can increase the uncertainty in parameter estimates and management strategies based on these estimates. Even in cases where the uncertainty is currently too large to guide management decisions, an uncertainty analysis can identify the most practical direction for future data acquisition.

  12. Assessment and Calibration of Ultrasonic Measurement Errors in Estimating Weathering Index of Stone Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Keehm, Y.

    2011-12-01

    Estimating the degree of weathering in stone cultural heritage, such as pagodas and statues, is very important to plan conservation and restoration. Ultrasonic measurement is one of the most commonly used techniques to evaluate the weathering index of stone cultural properties, since it is easy to use and non-destructive. Typically we use a portable ultrasonic device, PUNDIT, with exponential sensors. However, there are many factors that cause errors in measurements, such as operators, sensor layouts, or measurement directions. In this study, we carried out a variety of measurements with different operators (male and female), different sensor layouts (direct and indirect), and different sensor directions (anisotropy). For operator bias, we found that there were no significant differences related to the operator's sex, while the pressure an operator exerts can create larger errors in measurements; calibrating with a standard sample for each operator is essential in this case. For the sensor layout, we found that the indirect measurement (commonly used for cultural properties, since the direct measurement is difficult in most cases) gives lower velocity than the real one. We found that the correction coefficient is slightly different for different types of rocks: 1.50 for granite and sandstone and 1.46 for marble. For the sensor directions, we found that many rocks have slight anisotropy in their ultrasonic velocity, though they are considered isotropic at the macroscopic scale. Thus averaging four different directional measurements (0°, 45°, 90°, 135°) gives much smaller measurement errors (the variance is 2-3 times smaller). In conclusion, we quantitatively reported the errors in ultrasonic measurement of stone cultural properties from various sources and suggested the amount of correction and procedures to calibrate the measurements. Acknowledgement: This study, which forms a part of the project, has been achieved with the support of a national R&D project hosted by the National Research Institute of Cultural Heritage of the Cultural Heritage Administration (No. NRICH-1107-B01F).

  13. The juvenile face as a suitable age indicator in child pornography cases: a pilot study on the reliability of automated and visual estimation approaches.

    PubMed

    Ratnayake, M; Obertová, Z; Dose, M; Gabriel, P; Bröker, H M; Brauckmann, M; Barkus, A; Rizgeliene, R; Tutkuviene, J; Ritz-Timme, S; Marasciuolo, L; Gibelli, D; Cattaneo, C

    2014-09-01

    In cases of suspected child pornography, the age of the victim represents a crucial factor for legal prosecution. The conventional methods for age estimation provide unreliable age estimates, particularly if teenage victims are concerned. In this pilot study, the potential of age estimation for screening purposes is explored for juvenile faces. In addition to a visual approach, an automated procedure is introduced, which has the ability to rapidly scan through large numbers of suspicious image data in order to trace juvenile faces. Age estimations were performed by experts, non-experts and the Demonstrator of the developed software on frontal facial images of 50 females aged 10-19 years from Germany, Italy, and Lithuania. To test the accuracy, the mean absolute error (MAE) between the estimates and the real ages was calculated for each examiner and the Demonstrator. The Demonstrator achieved the lowest MAE (1.47 years) for the 50 test images. Decreased image quality had no significant impact on the performance and classification results. The experts delivered a slightly higher MAE (1.63 years). Throughout the tested age range, both the manual and the automated approach led to reliable age estimates within the limits of natural biological variability. The visual analysis of the face produces reasonably accurate age estimates up to the age of 18 years, which is the legally relevant age threshold for victims in cases of pedo-pornography. This approach can be applied in conjunction with the conventional methods for a preliminary age estimation of juveniles depicted on images.

  14. Potential loss of revenue due to errors in clinical coding during the implementation of the Malaysia diagnosis related group (MY-DRG®) Casemix system in a teaching hospital in Malaysia.

    PubMed

    Zafirah, S A; Nur, Amrizal Muhammad; Puteh, Sharifa Ezat Wan; Aljunid, Syed Mohamed

    2018-01-25

    The accuracy of clinical coding is crucial in the assignment of Diagnosis Related Groups (DRGs) codes, especially if the hospital is using the Casemix System as a tool for resource allocation and efficiency monitoring. The aim of this study was to estimate the potential loss of income due to errors in clinical coding during the implementation of the Malaysia Diagnosis Related Group (MY-DRG ® ) Casemix System in a teaching hospital in Malaysia. Four hundred and sixty-four (464) coded medical records were selected, re-examined and re-coded by an independent senior coder (ISC), who re-examined and re-coded the codes originally entered by the hospital coders. The pre- and post-coding results were compared, and if there was any disagreement, the codes by the ISC were considered the accurate codes. The cases were then re-grouped using a MY-DRG ® grouper to assess and compare the changes in the DRG assignment and the hospital tariff assignment. The outcomes were then verified by a casemix expert. Coding errors were found in 89.4% (415/464) of the selected patient medical records. Coding errors in secondary diagnoses were the highest, at 81.3% (377/464), followed by secondary procedures at 58.2% (270/464), principal procedures at 50.9% (236/464) and primary diagnoses at 49.8% (231/464). The coding errors resulted in the assignment of different MY-DRG ® codes in 74.0% (307/415) of the cases. From this result, 52.1% (160/307) of the cases had a lower assigned hospital tariff. In total, the potential loss of income due to changes in the assignment of the MY-DRG ® code was RM654,303.91. The quality of coding is a crucial aspect of implementing casemix systems. Intensive re-training and the close monitoring of coder performance in the hospital should be performed to prevent the potential loss of hospital income.

  15. Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bishop, Joseph E.; Brown, Judith Alice

    In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) A transversely-isotropic plate with hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.

  16. Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques

    DOE PAGES

    Bishop, Joseph E.; Brown, Judith Alice

    2018-06-15

    In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) A transversely-isotropic plate with hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.

  17. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    NASA Astrophysics Data System (ADS)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as the test statistic, which can be applied whether the noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detecting and removing outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51% and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated observation noise are decreased by 39% and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is reduced by only about 4%, which is a consequence of the sparse distribution of data points in this case study. The influence of gross errors on LSC prediction results is also investigated by lower-cutoff CVEs. After elimination of outliers, the RMS of this type of error is also reduced, by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumably different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for the groups with fewer noisy data points.
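
    As an illustration of direct (non-element-wise) CVE computation, the sketch below uses the classical leave-one-out identity for collocation/kriging, e_i = (C^{-1} l)_i / (C^{-1})_{ii}; whether this matches the exact formula derived in the paper is an assumption.

```python
# Hedged sketch: direct leave-one-out cross-validation errors for least squares
# collocation, using the classic closed form e_i = (C^{-1} l)_i / (C^{-1})_{ii}.
import numpy as np

def loo_cv_errors(C, l):
    """C: full covariance of the observations (signal + noise); l: observation vector."""
    Cinv = np.linalg.inv(C)
    return (Cinv @ l) / np.diag(Cinv)          # vector of cross-validation errors

def standardized_cv_errors(C, l):
    """CVEs scaled by their formal standard deviation, usable as a blunder-test statistic."""
    Cinv = np.linalg.inv(C)
    e = (Cinv @ l) / np.diag(Cinv)
    return e * np.sqrt(np.diag(Cinv))          # e_i / sqrt(1 / (C^{-1})_{ii})
```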

  18. Evaluating Precipitation from Orbital Data Products of TRMM and GPM over the Indian Subcontinent

    NASA Astrophysics Data System (ADS)

    Jayaluxmi, I.; Kumar, D. N.

    2015-12-01

    The rapidly growing records of microwave-based precipitation data made available from various earth observation satellites have instigated a pressing need towards evaluating the associated uncertainty, which arises from different sources such as retrieval error, spatial/temporal sampling error and sensor-dependent error. Pertaining to microwave remote sensing, most studies in the literature focus on gridded data products; fewer studies evaluate the uncertainty inherent in orbital data products. Evaluation of the latter is essential as they potentially cause large uncertainties during real-time flood forecasting studies, especially at the watershed scale. The present study evaluates the uncertainty of precipitation data derived from the orbital data products of the Tropical Rainfall Measuring Mission (TRMM) satellite, namely the 2A12, 2A25 and 2B31 products. Case study results over the flood-prone basin of Mahanadi, India, are analyzed for precipitation uncertainty through four facets, viz.: a) uncertainty quantification using the volumetric metrics from the contingency table [Aghakouchak and Mehran 2014]; b) error characterization using additive and multiplicative error models; c) error decomposition to identify systematic and random errors; d) comparative assessment with the orbital data from the GPM mission. The homoscedastic random errors from the multiplicative error models justify a better representation of precipitation estimates by the 2A12 algorithm. It can be concluded that although the radiometer-derived 2A12 precipitation data are known to suffer from many sources of uncertainty, spatial analysis over the case study region of India testifies that they are in excellent agreement with the reference estimates for the data period considered [Indu and Kumar 2015]. References: A. AghaKouchak and A. Mehran (2014), Extended contingency table: Performance metrics for satellite observations and climate model simulations, Water Resources Research, vol. 49, 7144-7149; J. Indu and D. Nagesh Kumar (2015), Evaluation of Precipitation Retrievals from Orbital Data Products of TRMM over a Subtropical basin in India, IEEE Transactions on Geoscience and Remote Sensing, in press, doi: 10.1109/TGRS.2015.2440338.
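
    A hedged sketch of facets (b) and (c) above: a multiplicative error model fitted in log space and a decomposition of the mean-square error into systematic and random parts. Thresholds and variable names are illustrative and do not reproduce the study's processing.

```python
# Hedged sketch of a multiplicative error model and a systematic/random error split.
import numpy as np

def multiplicative_error_model(sat, ref, eps=0.1):
    """Fit log(sat) = a + b*log(ref) + residual over pairs where both report rain."""
    m = (sat > eps) & (ref > eps)
    b, a = np.polyfit(np.log(ref[m]), np.log(sat[m]), 1)
    resid = np.log(sat[m]) - (a + b * np.log(ref[m]))
    return a, b, resid.std()

def error_decomposition(sat, ref):
    """Split the MSE against the reference into a systematic part (captured by a
    linear fit of sat on ref) and a random part (scatter about that fit)."""
    b, a = np.polyfit(ref, sat, 1)
    fitted = a + b * ref
    mse_systematic = np.mean((fitted - ref) ** 2)
    mse_random = np.mean((sat - fitted) ** 2)
    return mse_systematic, mse_random
```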

  19. Effects of measurement resolution on the analysis of temperature time series for stream-aquifer flux estimation

    NASA Astrophysics Data System (ADS)

    Soto-López, Carlos D.; Meixner, Thomas; Ferré, Ty P. A.

    2011-12-01

    From its inception in the mid-1960s, the use of temperature time series (thermographs) to estimate vertical fluxes has found increasing application in the hydrologic community. Beginning in 2000, researchers have examined the impacts of measurement and parameter uncertainty on the estimates of vertical fluxes. To date, the effects of temperature measurement discretization (resolution), a characteristic of all digital temperature loggers, on the determination of vertical fluxes have not been considered. In this technical note we expand the analysis of recently published work to include the effects of temperature measurement resolution on estimates of vertical fluxes using temperature amplitude and phase shift information. We show that errors in thermal front velocity estimation introduced by discretizing thermographs differ when amplitude or phase shift data are used to estimate vertical fluxes. We also show that under similar circumstances sensor resolution limits the range over which vertical velocities are accurately reproduced more than uncertainty in temperature measurements, uncertainty in sensor separation distance, and uncertainty in the thermal diffusivity combined. These effects represent the baseline error present, and thus the best-case scenario, when discrete temperature measurements are used to infer vertical fluxes. The errors associated with measurement resolution can be minimized by using the highest-resolution sensors available, but thoughtful experimental design could allow users to select the most cost-effective temperature sensors to fit their measurement needs.
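
    The resolution effect can be illustrated with a toy experiment: quantize a synthetic diel temperature signal to a given logger resolution and compare the recovered amplitude with the true one. All parameter values below are illustrative, not those of the published analysis.

```python
# Hedged sketch: effect of logger resolution on the recovered diel amplitude.
import numpy as np

def amplitude_at_period(t_hours, temp, period_hours=24.0):
    """Least-squares amplitude of the sinusoid at the given period."""
    w = 2 * np.pi / period_hours
    A = np.column_stack([np.cos(w * t_hours), np.sin(w * t_hours), np.ones_like(t_hours)])
    coef, *_ = np.linalg.lstsq(A, temp, rcond=None)
    return np.hypot(coef[0], coef[1])

t = np.arange(0, 10 * 24, 0.25)                      # 10 days, 15-minute sampling
true_amp = 0.8                                       # degrees C, illustrative
temp = 20 + true_amp * np.sin(2 * np.pi * t / 24)
for resolution in (0.001, 0.0625, 0.25):             # logger resolutions in degrees C
    quantized = np.round(temp / resolution) * resolution
    est = amplitude_at_period(t, quantized)
    print(f"resolution {resolution:7.4f} C -> amplitude error {est - true_amp:+.4f} C")
```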

  20. Fuzzy-Estimation Control for Improvement Microwave Connection for Iraq Electrical Grid

    NASA Astrophysics Data System (ADS)

    Hoomod, Haider K.; Radi, Mohammed

    2018-05-01

    The demand for broadband wireless services (internet, radio broadcast, TV, etc.) is increasing day by day, and the communication channels that carry them are often problematic, so it is necessary to exploit the usable part of the available bandwidth. In this paper, we propose an estimation technique that predicts channel availability at the current and next moments and the error in the bandwidth channel, in order to control data transfer through the channel. The proposed estimation is based on a combination of the least mean square (LMS) algorithm, the standard Kalman filter, and a modified Kalman filter. The estimated channel error is used as a control parameter in fuzzy rules that adjust the rate and size of data sent through the network channel and rearrange the priorities of the buffered data (workstation control parameters, texts, phone calls, images, and camera video) for the worst cases of channel error. The proposed system is designed to manage data communications through the channels connecting the Iraqi electrical grid stations. The results show that the modified Kalman filter gives the best results in time and noise estimation (0.1109 for 5% noise estimation to 0.3211 for 90% noise estimation), and the packet loss rate is reduced, with ratios from 35% to 385%.

  1. The cost of adherence mismeasurement in serious mental illness: a claims-based analysis.

    PubMed

    Shafrin, Jason; Forma, Felicia; Scherer, Ethan; Hatch, Ainslie; Vytlacil, Edward; Lakdawalla, Darius

    2017-05-01

    To quantify how adherence mismeasurement affects the estimated impact of adherence on inpatient costs among patients with serious mental illness (SMI). Proportion of days covered (PDC) is a common claims-based measure of medication adherence. Because PDC does not measure medication ingestion, however, it may inaccurately measure adherence. We derived a formula to correct the bias that occurs in adherence-utilization studies resulting from errors in claims-based measures of adherence. We conducted a literature review to identify the correlation between gold-standard and claims-based adherence measures. We derived a bias-correction methodology to address claims-based medication adherence measurement error. We then applied this methodology to a case study of patients with SMI who initiated atypical antipsychotics in 2 large claims databases. Our literature review identified 6 studies of interest. The 4 most relevant ones measured correlations between 0.38 and 0.91. Our preferred estimate implies that the effect of adherence on inpatient spending estimated from claims data would understate the true effect by a factor of 5.3, if there were no other sources of bias. Although our procedure corrects for measurement error, such error also may amplify or mitigate other potential biases. For instance, if adherent patients are healthier than nonadherent ones, measurement error makes the resulting bias worse. On the other hand, if adherent patients are sicker, measurement error mitigates the other bias. Measurement error due to claims-based adherence measures is worth addressing, alongside other more widely emphasized sources of bias in inference.
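
    One plausible reading of the quoted correction factor is the classical errors-in-variables attenuation argument: if the claims-based measure equals true adherence plus independent noise, the estimated effect is attenuated by the squared correlation with the gold standard, so the correction is 1/ρ². The worked example below uses an illustrative correlation of 0.435 (within the 0.38-0.91 range reported above); treating this as the paper's exact derivation is an assumption.

```python
# Hedged worked example of classical attenuation bias and its correction factor.
rho = 0.435                       # illustrative correlation: claims-based vs gold-standard adherence
attenuation = rho ** 2            # plim(beta_hat) = attenuation * beta under classical measurement error
correction_factor = 1 / attenuation
print(round(correction_factor, 1))   # ~5.3, matching the magnitude quoted in the abstract
```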

  2. Event-Based Sensing and Control for Remote Robot Guidance: An Experimental Case

    PubMed Central

    Santos, Carlos; Martínez-Rey, Miguel; Santiso, Enrique

    2017-01-01

    This paper describes the theoretical and practical foundations for remote control of a mobile robot for nonlinear trajectory tracking using an external localisation sensor. It constitutes a classical networked control system, whereby event-based techniques for both control and state estimation contribute to efficient use of communications and reduce sensor activity. Measurement requests are dictated by an event-based state estimator by setting an upper bound to the estimation error covariance matrix. The rest of the time, state prediction is carried out with the Unscented transformation. This prediction method makes it possible to select the appropriate instants at which to perform actuations on the robot so that guidance performance does not degrade below a certain threshold. Ultimately, we obtained a combined event-based control and estimation solution that drastically reduces communication accesses. The magnitude of this reduction is set according to the tracking error margin of a P3-DX robot following a nonlinear trajectory, remotely controlled with a mini PC and whose pose is detected by a camera sensor. PMID:28878144

  3. Quantifying uncertainty in geoacoustic inversion. II. Application to broadband, shallow-water data.

    PubMed

    Dosso, Stan E; Nielsen, Peter L

    2002-01-01

    This paper applies the new method of fast Gibbs sampling (FGS) to estimate the uncertainties of seabed geoacoustic parameters in a broadband, shallow-water acoustic survey, with the goal of interpreting the survey results and validating the method for experimental data. FGS applies a Bayesian approach to geoacoustic inversion based on sampling the posterior probability density to estimate marginal probability distributions and parameter covariances. This requires knowledge of the statistical distribution of the data errors, including both measurement and theory errors, which is generally not available. Invoking the simplifying assumption of independent, identically distributed Gaussian errors allows a maximum-likelihood estimate of the data variance and leads to a practical inversion algorithm. However, it is necessary to validate these assumptions, i.e., to verify that the parameter uncertainties obtained represent meaningful estimates. To this end, FGS is applied to a geoacoustic experiment carried out at a site off the west coast of Italy where previous acoustic and geophysical studies have been performed. The parameter uncertainties estimated via FGS are validated by comparison with: (i) the variability in the results of inverting multiple independent data sets collected during the experiment; (ii) the results of FGS inversion of synthetic test cases designed to simulate the experiment and data errors; and (iii) the available geophysical ground truth. Comparisons are carried out for a number of different source bandwidths, ranges, and levels of prior information, and indicate that FGS provides reliable and stable uncertainty estimates for the geoacoustic inverse problem.

  4. Generalized Fisher matrices

    NASA Astrophysics Data System (ADS)

    Heavens, A. F.; Seikel, M.; Nord, B. D.; Aich, M.; Bouffanais, Y.; Bassett, B. A.; Hobson, M. P.

    2014-12-01

    The Fisher Information Matrix formalism (Fisher 1935) is extended to cases where the data are divided into two parts (X, Y), where the expectation value of Y depends on X according to some theoretical model, and X and Y both have errors with arbitrary covariance. In the simplest case, (X, Y) represent data pairs of abscissa and ordinate, in which case the analysis deals with the case of data pairs with errors in both coordinates, but X can be any measured quantities on which Y depends. The analysis applies for arbitrary covariance, provided all errors are Gaussian, and provided the errors in X are small, both in comparison with the scale over which the expected signal Y changes, and with the width of the prior distribution. This generalizes the Fisher Matrix approach, which normally only considers errors in the `ordinate' Y. In this work, we include errors in X by marginalizing over latent variables, effectively employing a Bayesian hierarchical model, and deriving the Fisher Matrix for this more general case. The methods here also extend to likelihood surfaces which are not Gaussian in the parameter space, and so techniques such as DALI (Derivative Approximation for Likelihoods) can be generalized straightforwardly to include arbitrary Gaussian data error covariances. For simple mock data and theoretical models, we compare to Markov Chain Monte Carlo experiments, illustrating the method with cosmological supernova data. We also include the new method in the FISHER4CAST software.
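
    A minimal sketch of the simplest special case, a straight-line model with independent Gaussian errors in both coordinates, using the effective-variance approximation C_eff = C_y + (∂f/∂x) C_x (∂f/∂x)ᵀ. The paper's formalism is more general (arbitrary covariances, latent-variable marginalization, non-Gaussian likelihood surfaces) and includes terms omitted here, such as contributions from the parameter dependence of the effective covariance.

```python
# Hedged sketch: Fisher matrix for y = m*x + c with Gaussian errors in both x and y,
# using the effective-variance approximation (errors in x folded into the y variance).
import numpy as np

def fisher_line_xy_errors(x, m, sigma_y, sigma_x):
    """2x2 Fisher matrix for parameters (m, c)."""
    var_eff = sigma_y**2 + (m * sigma_x)**2          # effective variance per data point
    J = np.column_stack([x, np.ones_like(x)])        # derivatives of the model w.r.t. (m, c)
    return J.T @ (J / var_eff[:, None])

x = np.linspace(0.0, 1.0, 50)
F = fisher_line_xy_errors(x, m=2.0, sigma_y=np.full(50, 0.1), sigma_x=np.full(50, 0.05))
param_cov = np.linalg.inv(F)                         # forecast parameter covariance
```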

  5. A decade of Australian methotrexate dosing errors.

    PubMed

    Cairns, Rose; Brown, Jared A; Lynch, Ann-Maree; Robinson, Jeff; Wylie, Carol; Buckley, Nicholas A

    2016-06-06

    Accidental daily dosing of methotrexate can result in life-threatening toxicity. We investigated methotrexate dosing errors reported to the National Coronial Information System (NCIS), the Therapeutic Goods Administration Database of Adverse Event Notifications (TGA DAEN) and Australian Poisons Information Centres (PICs). A retrospective review of coronial cases in the NCIS (2000-2014), and of reports to the TGA DAEN (2004-2014) and Australian PICs (2004-2015). Cases were included if dosing errors were accidental, with evidence of daily dosing on at least 3 consecutive days. Events per year, dose, consecutive days of methotrexate administration, reasons for the error, clinical features. Twenty-two deaths linked with methotrexate were identified in the NCIS, including seven cases in which erroneous daily dosing was documented. Methotrexate medication error was listed in ten cases in the DAEN, including two deaths. Australian PIC databases contained 92 cases, with a worrying increase seen during 2014-2015. Reasons for the errors included patient misunderstanding and incorrect packaging of dosette packs by pharmacists. The recorded clinical effects of daily dosage were consistent with those previously reported for methotrexate toxicity. Dosing errors with methotrexate can be lethal and continue to occur despite a number of safety initiatives in the past decade. Further strategies to reduce these preventable harms need to be implemented and evaluated. Recent suggestions include further changes in packet size, mandatory weekly dosing labelling on packaging, improving education, and including alerts in prescribing and dispensing software.

  6. Correct consideration of the index of refraction using blackbody radiation.

    PubMed

    Hartmann, Jurgen

    2006-09-04

    The correct consideration of the index of refraction when using blackbody radiators as standard sources for optical radiation is derived and discussed. It is shown that simply using the index of refraction of air at laboratory conditions is not sufficient. A combination of the index of refraction of the media used inside the blackbody radiator and for the optical path between blackbody and detector has to be used instead. A worst-case approximation for the error introduced when neglecting these effects is presented, showing that the error is below 0.1% for wavelengths above 200 nm. Nevertheless, for the determination of the spectral radiance for the purpose of radiation temperature measurements, the correct consideration of the refractive index is mandatory. The worst-case estimation reveals that the introduced error in temperature at a blackbody temperature of 3000 degrees C can be as high as 400 mK at a wavelength of 650 nm and even higher at longer wavelengths.

  7. Data error and highly parameterized groundwater models

    USGS Publications Warehouse

    Hill, M.C.

    2008-01-01

    Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright © 2008 IAHS Press.

  8. Maximum-Likelihood Estimation for Frequency-Modulated Continuous-Wave Laser Ranging Using Photon-Counting Detectors

    DTIC Science & Technology

    2013-01-01

    are calculated from coherently detected fields, e.g., coherent Doppler lidar. Our CRB results reveal that the best-case mean-square error scales as 1…

  9. Definition of Linear Color Models in the RGB Vector Color Space to Detect Red Peaches in Orchard Images Taken under Natural Illumination

    PubMed Central

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image is then performed by comparing the color distance from each pixel to the different previously defined linear color models. The methodology proposed has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%; 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates. PMID:22969369
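
    The core classification step, the distance from each pixel to a linear colour model (a line through RGB space), can be sketched as below; the model point, direction and threshold are placeholders rather than the values fitted in the paper.

```python
# Hedged sketch: distance from RGB pixels to a linear colour model and thresholding.
import numpy as np

def distance_to_color_line(pixels, p0, v):
    """pixels: (N, 3) RGB array; p0: a point on the colour line; v: its direction."""
    v = v / np.linalg.norm(v)
    d = pixels.astype(float) - p0
    proj = d @ v                                   # scalar projection onto the line
    perp = d - np.outer(proj, v)                   # component perpendicular to the line
    return np.linalg.norm(perp, axis=1)

def classify_red_peach(pixels, p0, v, threshold=25.0):
    """Label pixels whose colour distance to the 'red peach' line is below a threshold."""
    return distance_to_color_line(pixels, p0, v) < threshold
```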

  10. Definition of linear color models in the RGB vector color space to detect red peaches in orchard images taken under natural illumination.

    PubMed

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-01-01

    This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image is then performed by comparing the color distance from each pixel to the different previously defined linear color models. The methodology proposed has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%; 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates.

  11. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

    PubMed Central

    Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar

    2015-01-01

    Background A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. Methods This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. Results The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). Conclusion The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in the PTW system. Some suggestions to reduce the likelihood of errors, especially regarding modification of the performance shaping factors and of the dependencies among tasks, are provided.
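
    The SPAR-H arithmetic referred to above reduces to a nominal human error probability multiplied by the composite of the performance shaping factor (PSF) multipliers, with an adjustment when several PSFs are unfavourable. The sketch below follows the general SPAR-H formulation rather than this study's specific worksheets, and the multiplier values are illustrative.

```python
# Hedged sketch of the SPAR-H human error probability (HEP) calculation.
def spar_h_hep(nominal_hep, psf_multipliers, negative_psf_count):
    """nominal_hep: e.g. 0.01 for diagnosis or 0.001 for action tasks in SPAR-H."""
    composite = 1.0
    for m in psf_multipliers:
        composite *= m
    if negative_psf_count >= 3:                      # standard SPAR-H adjustment factor
        return nominal_hep * composite / (nominal_hep * (composite - 1.0) + 1.0)
    return nominal_hep * composite

# e.g. a diagnosis task with unfavourable stress and complexity ratings (illustrative)
print(spar_h_hep(0.01, [2.0, 2.0, 1.0], negative_psf_count=2))   # 0.04
```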

  12. [Refractive errors in patients with cerebral palsy].

    PubMed

    Mrugacz, Małgorzata; Bandzul, Krzysztof; Kułak, Wojciech; Poppe, Ewa; Jurowski, Piotr

    2013-04-01

    Ocular changes are common in patients with cerebral palsy (CP), occurring in about 50% of cases. The most common are refractive errors and strabismus. The aim of the paper was to assess the relationship between refractive errors and neurological pathologies in patients with selected types of CP. MATERIAL AND METHODS. The analysis covered refractive errors in patients within two groups of CP, diplegia spastica and tetraparesis, with nervous system pathologies taken into account. RESULTS. This study demonstrated correlations between refractive errors and both the type of CP and the severity of CP classified on the GMFCS scale. Refractive errors were more common in patients with tetraparesis than with diplegia spastica. Myopia and astigmatism were more common in the group with diplegia spastica, whereas hyperopia was more common in tetraparesis.

  13. Neural networks for tracking of unknown SISO discrete-time nonlinear dynamic systems.

    PubMed

    Aftab, Muhammad Saleheen; Shafiq, Muhammad

    2015-11-01

    This article presents a Lyapunov function based neural network tracking (LNT) strategy for single-input, single-output (SISO) discrete-time nonlinear dynamic systems. The proposed LNT architecture is composed of two feedforward neural networks operating as controller and estimator. A Lyapunov function based back propagation learning algorithm is used for online adjustment of the controller and estimator parameters. The controller and estimator error convergence and closed-loop system stability analysis is performed by Lyapunov stability theory. Moreover, two simulation examples and one real-time experiment are investigated as case studies. The achieved results successfully validate the controller performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Error-Trellis Construction for Convolutional Codes Using Shifted Error/Syndrome-Subsequences

    NASA Astrophysics Data System (ADS)

    Tajima, Masato; Okino, Koji; Miyagoshi, Takashi

    In this paper, we extend the conventional error-trellis construction for convolutional codes to the case where a given check matrix H(D) has a factor D^l in some column (row). In the first case, there is a possibility that the size of the state space can be reduced using shifted error-subsequences, whereas in the second case, the size of the state space can be reduced using shifted syndrome-subsequences. The construction presented in this paper is based on the adjoint-obvious realization of the corresponding syndrome former H^T(D). In the case where all the columns and rows of H(D) are delay free, the proposed construction is reduced to the conventional one of Schalkwijk et al. We also show that the proposed construction can equally realize the state-space reduction shown by Ariel et al. Moreover, we clarify the difference between their construction and ours using examples.

  15. A generalized least squares regression approach for computing effect sizes in single-case research: application examples.

    PubMed

    Maggin, Daniel M; Swaminathan, Hariharan; Rogers, Helen J; O'Keeffe, Breda V; Sugai, George; Horner, Robert H

    2011-06-01

    A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of treatment effect from baseline to treatment phases in standard deviation units. In this paper, the method is applied to two published examples using common single case designs (i.e., withdrawal and multiple-baseline). The results from these studies are described, and the method is compared to ten desirable criteria for single-case effect sizes. Based on the results of this application, we conclude with observations about the use of GLS as a support to visual analysis, provide recommendations for future research, and describe implications for practice. Copyright © 2011 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
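
    A hedged sketch of the underlying estimation machinery: generalized least squares with an AR(1) error covariance for a two-phase (baseline/treatment) series, returning the phase effect scaled by the residual standard deviation. The exact scaling used in the published effect size is only approximated here.

```python
# Hedged sketch: GLS phase-effect estimate for a single-case series with AR(1) errors.
import numpy as np

def gls_phase_effect(y, phase, rho):
    """y: outcome series; phase: 0/1 baseline/treatment indicator; rho: AR(1) autocorrelation."""
    n = len(y)
    X = np.column_stack([np.ones(n), phase])
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    V = rho ** lags                                    # AR(1) correlation matrix
    Vinv = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)
    resid = y - X @ beta
    sigma = np.sqrt(resid @ Vinv @ resid / (n - 2))    # error scale under the AR(1) model
    return beta[1] / sigma                             # treatment-phase effect in SD units
```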

  16. An investigation into false-negative transthoracic fine needle aspiration and core biopsy specimens.

    PubMed

    Minot, Douglas M; Gilman, Elizabeth A; Aubry, Marie-Christine; Voss, Jesse S; Van Epps, Sarah G; Tuve, Delores J; Sciallis, Andrew P; Henry, Michael R; Salomao, Diva R; Lee, Peter; Carlson, Stephanie K; Clayton, Amy C

    2014-12-01

    Transthoracic fine needle aspiration (TFNA)/core needle biopsy (CNB) under computed tomography (CT) guidance has proved useful in the assessment of pulmonary nodules. We sought to determine the TFNA false-negative (FN) rate at our institution and identify potential causes of FN diagnoses. Medical records were reviewed from 1,043 consecutive patients who underwent CT-guided TFNA with or without CNB of lung nodules over a 5-year time period (2003-2007). Thirty-seven FN cases of "negative" TFNA/CNB with malignant outcome were identified with 36 cases available for review, of which 35 had a corresponding CNB. Cases were reviewed independently (blinded to original diagnosis) by three pathologists with 15 age- and sex-matched positive and negative controls. Diagnosis (i.e., nondiagnostic, negative or positive for malignancy, atypical or suspicious) and qualitative assessments were recorded. Consensus diagnosis was suspicious or positive in 10 (28%) of 36 TFNA cases and suspicious in 1 (3%) of 35 CNB cases, indicating potential interpretive errors. Of the 11 interpretive errors (including both suspicious and positive cases), 8 were adenocarcinomas, 1 squamous cell carcinoma, 1 metastatic renal cell carcinoma, and 1 lymphoma. The remaining 25 FN cases (69.4%) were considered sampling errors and consisted of 7 adenocarcinomas, 3 nonsmall cell carcinomas, 3 lymphomas, 2 squamous cell carcinomas, and 2 renal cell carcinomas. Interpretive and sampling error cases were more likely to abut the pleura, while histopathologically, they tended to be necrotic and air-dried. The overall FN rate in this patient cohort is 3.5% (1.1% interpretive and 2.4% sampling errors). © 2014 Wiley Periodicals, Inc.

  17. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  18. Drought Persistence Errors in Global Climate Models

    NASA Astrophysics Data System (ADS)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM model simulations to observation-based data sets. For doing so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates for drought persistence, where a dry status is defined as negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
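
    The persistence metric described above can be computed directly from a precipitation series; the minimal version below defines a dry month as one below the series mean and ignores the seasonal cycle, which the published analysis may treat differently.

```python
# Hedged sketch: dry-to-dry transition probability of a precipitation series.
import numpy as np

def dry_to_dry_probability(precip):
    """precip: 1-D array of monthly (or annual) precipitation totals."""
    dry = precip < precip.mean()                       # negative anomaly -> dry step
    prev, nxt = dry[:-1], dry[1:]
    n_dry = np.count_nonzero(prev)
    return np.count_nonzero(prev & nxt) / n_dry if n_dry else np.nan
```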

  19. Development, implementation and outcome analysis of semi-automated alerts for metformin dose adjustment in hospitalized patients with renal impairment.

    PubMed

    Niedrig, David; Krattinger, Regina; Jödicke, Annika; Gött, Carmen; Bucklar, Guido; Russmann, Stefan

    2016-10-01

    Overdosing of the oral antidiabetic metformin in impaired renal function is an important contributory cause to life-threatening lactic acidosis. The presented project aimed to quantify and prevent this avoidable medication error in clinical practice. We developed and implemented an algorithm into a hospital's clinical information system that prospectively identifies metformin prescriptions if the estimated glomerular filtration rate is below 60 mL/min. Resulting real-time electronic alerts are sent to clinical pharmacologists and pharmacists, who validate cases in electronic medical records and contact prescribing physicians with recommendations if necessary. The screening algorithm has been used in routine clinical practice for 3 years and generated 2145 automated alerts (about 2 per day). Validated expert recommendations regarding metformin therapy, i.e., dose reduction or stop, were issued for 381 patients (about 3 per week). Follow-up was available for 257 cases, and prescribers' compliance with recommendations was 79%. Furthermore, during 3 years, we identified eight local cases of lactic acidosis associated with metformin therapy in renal impairment that could not be prevented, e.g., because metformin overdosing had occurred before hospitalization. Automated sensitive screening followed by specific expert evaluation and personal recommendations can prevent metformin overdosing in renal impairment with high efficiency and efficacy. Repeated cases of metformin-associated lactic acidosis in renal impairment underline the clinical relevance of this medication error. Our locally developed and customized alert system is a successful proof of concept for a proactive clinical drug safety program that is now expanded to other clinically and economically relevant medication errors. Copyright © 2016 John Wiley & Sons, Ltd.
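
    A hedged sketch of the screening rule described above: flag active metformin prescriptions whenever the latest estimated glomerular filtration rate falls below 60 mL/min. The record fields and the 30 mL/min stop threshold are illustrative assumptions, not taken from the hospital's actual implementation.

```python
# Hedged sketch of an eGFR-based metformin screening rule (fields are illustrative).
def metformin_alerts(prescriptions, latest_egfr_ml_min):
    """prescriptions: list of dicts with keys 'drug', 'active', 'patient_id'."""
    alerts = []
    for rx in prescriptions:
        if rx["drug"].lower() == "metformin" and rx["active"] and latest_egfr_ml_min < 60:
            alerts.append({
                "patient_id": rx["patient_id"],
                "egfr": latest_egfr_ml_min,
                # illustrative decision rule for the expert to validate, not a clinical protocol
                "recommendation": "consider dose reduction" if latest_egfr_ml_min >= 30
                                  else "consider stopping metformin",
            })
    return alerts
```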

  20. Monitoring forest areas from continental to territorial levels using a sample of medium spatial resolution satellite imagery

    NASA Astrophysics Data System (ADS)

    Eva, Hugh; Carboni, Silvia; Achard, Frédéric; Stach, Nicolas; Durieux, Laurent; Faure, Jean-François; Mollicone, Danilo

    A global systematic sampling scheme has been developed by the UN FAO and the EC TREES project to estimate rates of deforestation at global or continental levels at intervals of 5 to 10 years. This global scheme can be intensified to produce results at the national level. In this paper, using surrogate observations, we compare the deforestation estimates derived from these two levels of sampling intensities (one, the global, for the Brazilian Amazon; the other, national, for French Guiana) to estimates derived from the official inventories. We also report the precisions that are achieved due to sampling errors and, in the case of French Guiana, compare such precision with the official inventory precision. We extract nine sample data sets from the official wall-to-wall deforestation map derived from satellite interpretations produced for the Brazilian Amazon for the years 2002 to 2003. This global sampling scheme estimate gives 2.81 million ha of deforestation (mean from nine simulated replicates) with a standard error of 0.10 million ha. This compares with the full population estimate from the wall-to-wall interpretations of 2.73 million ha deforested, which is within one standard error of our sampling test estimate. The relative difference between the mean estimate from the sampling approach and the full population estimate is 3.1%, and the standard error represents 4.0% of the full population estimate. This global sampling is then intensified to a territorial level with a case study over French Guiana to estimate deforestation between the years 1990 and 2006. For the historical reference period, 1990, Landsat-5 Thematic Mapper data were used. A coverage of SPOT-HRV imagery at 20 m × 20 m resolution acquired at the Cayenne receiving station in French Guiana was used for the year 2006. Our estimates from the intensified global sampling scheme over French Guiana are compared with those produced by the national authority to report on deforestation rates under the Kyoto protocol rules for its overseas department. The latter estimates come from a sample of nearly 17,000 plots analyzed from the same spatial imagery acquired between 1990 and 2006. This sampling scheme is derived from the traditional forest inventory methods carried out by IFN (Inventaire Forestier National). Our intensified global sampling scheme leads to an estimate of 96,650 ha deforested between 1990 and 2006, which is within the 95% confidence interval of the IFN sampling scheme; the latter gives an estimate of 91,722 ha, representing a relative difference from the IFN of 5.4%. These results demonstrate that the intensification of the global sampling scheme can provide forest area change estimates close to those achieved by official forest inventories (<6%), with precisions of between 4% and 7%, although we only estimate errors from sampling, not from the use of surrogate data. Such methods could be used by developing countries to demonstrate that they are fulfilling requirements for reducing emissions from deforestation in the framework of a REDD (Reducing Emissions from Deforestation in Developing Countries) mechanism under discussion within the United Nations Framework Convention on Climate Change (UNFCCC). Monitoring systems at national levels in tropical countries can also benefit from pan-tropical and regional observations, to ensure consistency between different national monitoring systems.

  1. Inversion of In Situ Light Absorption and Attenuation Measurements to Estimate Constituent Concentrations in Optically Complex Shelf Seas

    NASA Astrophysics Data System (ADS)

    Ramírez-Pérez, M.; Twardowski, M.; Trees, C.; Piera, J.; McKee, D.

    2018-01-01

    A deconvolution approach is presented to use spectral light absorption and attenuation data to estimate the concentration of the major nonwater compounds in complex shelf sea waters. The inversion procedure requires knowledge of local material-specific inherent optical properties (SIOPs) which are determined from natural samples using a bio-optical model that differentiates between Case I and Case II waters and uses least squares linear regression analysis to provide optimal SIOP values. A synthetic data set is used to demonstrate that the approach is fundamentally consistent and to test the sensitivity to injection of controlled levels of artificial noise into the input data. Self-consistency of the approach is further demonstrated by application to field data collected in the Ligurian Sea, with chlorophyll (Chl), the nonbiogenic component of total suspended solids (TSSnd), and colored dissolved organic material (CDOM) retrieved with RMSE of 0.61 mg m⁻³, 0.35 g m⁻³, and 0.02 m⁻¹, respectively. The utility of the approach is finally demonstrated by application to depth profiles of in situ absorption and attenuation data resulting in profiles of optically significant constituents with associated error bar estimates. The advantages of this procedure lie in the simple input requirements, the avoidance of error amplification, full exploitation of the available spectral information from both absorption and attenuation channels, and the reasonably successful retrieval of constituent concentrations in an optically complex shelf sea.
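
    The inversion step can be sketched as a non-negative least squares problem once the local SIOPs are known; the SIOP matrix and the ordering of constituents below are placeholders for locally derived values, and the paper's own procedure may differ in detail.

```python
# Hedged sketch: invert a non-water absorption spectrum for constituent concentrations
# given a matrix of material-specific inherent optical properties (SIOPs).
import numpy as np
from scipy.optimize import nnls

def invert_constituents(a_nw, siop_matrix):
    """
    a_nw: non-water absorption spectrum, shape (n_wavelengths,)
    siop_matrix: columns are the specific absorption spectra of Chl, TSSnd and CDOM,
                 shape (n_wavelengths, 3)
    returns: concentrations [Chl, TSSnd, CDOM] and the spectral residual norm
    """
    conc, resid = nnls(siop_matrix, a_nw)
    return conc, resid
```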

  2. Characteristics associated with requests by pathologists for second opinions on breast biopsies

    PubMed Central

    Geller, Berta M; Nelson, Heidi D; Weaver, Donald L; Frederick, Paul D; Allison, Kimberly H; Onega, Tracy; Carney, Patricia A; Tosteson, Anna N A; Elmore, Joann G

    2018-01-01

    Aims Second opinions in pathology improve patient safety by reducing diagnostic errors, leading to more appropriate clinical treatment decisions. Little objective data are available regarding the factors triggering a request for second opinion despite second opinion consultations being part of the diagnostic system of pathology. Therefore we sought to assess breast biopsy case and interpreting pathologist characteristics associated with second opinion requests. Methods Pathologist surveys and their interpretations of 60 test set cases were collected and used to explore the relationships between case characteristics, pathologist characteristics and case perceptions, and requests for second opinions. Data were evaluated by logistic regression and generalised estimating equations. Results 115 pathologists provided 6900 assessments; pathologists requested second opinions on 70% (4827/6900) of their assessments; 36% (1731/4827) of these would not have been required by policy. All associations between case characteristics and requesting second opinions were statistically significant, including diagnostic category, breast density, biopsy type, and number of diagnoses noted per case. Exclusive of institutional policies, pathologists wanted second opinions most frequently for atypia (66%) and least frequently for invasive cancer (20%). Second opinion rates were higher when the pathologist had lower assessment confidence, in cases with higher perceived difficulty, and cases with borderline diagnoses. Conclusions Pathologists request second opinions for challenging cases, particularly those with atypia, high breast density, core needle biopsies, or many co-existing diagnoses. Further studies should evaluate whether the case characteristics identified in this study could be used as clinical criteria to prompt system-level strategies for mandating second opinions. PMID:28465449

  3. Extending "Deep Blue" Aerosol Retrieval Coverage to Cases of Absorbing Aerosols Above Clouds: Sensitivity Analysis and First Case Studies

    NASA Technical Reports Server (NTRS)

    Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Lee, J.; Redemann, J.; Schmid, B.; Shinozuka, Y.

    2016-01-01

    Cases of absorbing aerosols above clouds (AACs), such as smoke or mineral dust, are omitted from most routinely processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar sensors, for incorporation into a future version of the "Deep Blue" AOD data product. Detailed retrieval simulations suggest that these sensors should be able to determine AAC AOD with a typical uncertainty of approximately 25-50 percent (with lower uncertainties for more strongly absorbing aerosol types) and COD with an uncertainty of approximately 10-20 percent, if an appropriate aerosol optical model is known beforehand. Errors are larger, particularly if the aerosols are only weakly absorbing, if the aerosol optical properties are not known, and the appropriate model to use must also be retrieved. Actual retrieval errors are also compared to uncertainty envelopes obtained through the optimal estimation (OE) technique; OE-based uncertainties are found to be generally reasonable for COD but larger than actual retrieval errors for AOD, due in part to difficulties in quantifying the degree of spectral correlation of forward model error. The algorithm is also applied to two MODIS scenes (one smoke and one dust) for which near-coincident NASA Ames Airborne Tracking Sun photometer (AATS) data were available to use as a ground truth AOD data source, and the retrievals were found to be in good agreement, demonstrating the validity of the technique with real observations.

  4. Radiative flux and forcing parameterization error in aerosol-free clear skies

    DOE PAGES

    Pincus, Robert; Mlawer, Eli J.; Oreopoulos, Lazaros; ...

    2015-07-03

    This article reports on the accuracy in aerosol- and cloud-free conditions of the radiation parameterizations used in climate models. Accuracy is assessed relative to observationally validated reference models for fluxes under present-day conditions and forcing (flux changes) from quadrupled concentrations of carbon dioxide. Agreement among reference models is typically within 1 W/m², while parameterized calculations are roughly half as accurate in the longwave and even less accurate, and more variable, in the shortwave. Absorption of shortwave radiation is underestimated by most parameterizations in the present day and has relatively large errors in forcing. Error in present-day conditions is essentially unrelated to error in forcing calculations. Recent revisions to parameterizations have reduced error in most cases. As a result, a dependence on atmospheric conditions, including integrated water vapor, means that global estimates of parameterization error relevant for the radiative forcing of climate change will require much more ambitious calculations.

  5. Improving Forecasts Through Realistic Uncertainty Estimates: A Novel Data Driven Method for Model Uncertainty Quantification in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.

    2016-12-01

    Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(x_t | x_{t-1}). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.

  6. Model uncertainty of various settlement estimation methods in shallow tunnels excavation; case study: Qom subway tunnel

    NASA Astrophysics Data System (ADS)

    Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb

    2017-10-01

    In addition to numerous planning and executive challenges, underground excavation in urban areas is always accompanied by certain destructive effects, especially on the ground surface. Ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the amount of surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted amounts with the actual data from instrumentation was employed to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched the reality, and the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.

  7. Uncertainty Forecasts Improve Weather-Related Decisions and Attenuate the Effects of Forecast Error

    ERIC Educational Resources Information Center

    Joslyn, Susan L.; LeClerc, Jared E.

    2012-01-01

    Although uncertainty is inherent in weather forecasts, explicit numeric uncertainty estimates are rarely included in public forecasts for fear that they will be misunderstood. Of particular concern are situations in which precautionary action is required at low probabilities, often the case with severe events. At present, a categorical weather…

  8. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage

    PubMed Central

    Lee, Kyuman; Baek, Hoki; Lim, Jaesung

    2016-01-01

    The airborne relay-based positioning system (ARPS), which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its requirement through comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. Furthermore, the service coverage is expanded by using direct measurements of reference stations for user positioning. PMID:27529252
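
    The core user-position step can be pictured as an iterative least-squares fix from range-type measurements to known anchor positions; the sketch below shows only that generic step (relay positioning, clock bias and the virtual-measurement logic of the actual ARPS algorithm are omitted), with invented geometry.

    ```python
    # Illustrative Gauss-Newton position fix from ranges to known anchors
    # (airborne relays or reference stations). Not the ARPS algorithm itself.
    import numpy as np

    def solve_position(anchors, ranges, x0=np.zeros(3), iters=10):
        """anchors: (N, 3) known positions; ranges: (N,) measured distances."""
        x = x0.astype(float)
        for _ in range(iters):
            diff = x - anchors                      # (N, 3)
            pred = np.linalg.norm(diff, axis=1)     # predicted ranges
            H = diff / pred[:, None]                # Jacobian of range w.r.t. x
            dx, *_ = np.linalg.lstsq(H, ranges - pred, rcond=None)
            x += dx
        return x

    anchors = np.array([[0.0, 0.0, 10e3], [20e3, 0.0, 10e3],
                        [0.0, 20e3, 10e3], [20e3, 20e3, 12e3]])
    truth = np.array([7e3, 5e3, 0.0])
    rng = np.random.default_rng(2)
    ranges = np.linalg.norm(anchors - truth, axis=1) + rng.normal(0.0, 5.0, 4)
    print(solve_position(anchors, ranges))   # near `truth`; vertical error largest
    ```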

  9. Enhanced Positioning Algorithm of ARPS for Improving Accuracy and Expanding Service Coverage.

    PubMed

    Lee, Kyuman; Baek, Hoki; Lim, Jaesung

    2016-08-12

    The airborne relay-based positioning system (ARPS), which employs the relaying of navigation signals, was proposed as an alternative positioning system. However, the ARPS has limitations, such as relatively large vertical error and service restrictions, because firstly, the user position is estimated based on airborne relays that are located in one direction, and secondly, the positioning is processed using only relayed navigation signals. In this paper, we propose an enhanced positioning algorithm to improve the performance of the ARPS. The main idea of the enhanced algorithm is the adaptable use of either virtual or direct measurements of reference stations in the calculation process based on the structural features of the ARPS. Unlike the existing two-step algorithm for airborne relay and user positioning, the enhanced algorithm is divided into two cases based on whether the required number of navigation signals for user positioning is met. In the first case, where the number of signals is greater than four, the user first estimates the positions of the airborne relays and its own initial position. Then, the user position is re-estimated by integrating a virtual measurement of a reference station that is calculated using the initial estimated user position and known reference positions. To prevent performance degradation, the re-estimation is performed after determining its requirement through comparing the expected position errors. If the navigation signals are insufficient, such as when the user is outside of airborne relay coverage, the user position is estimated by additionally using direct signal measurements of the reference stations in place of absent relayed signals. The simulation results demonstrate that a higher accuracy level can be achieved because the user position is estimated based on the measurements of airborne relays and a ground station. Furthermore, the service coverage is expanded by using direct measurements of reference stations for user positioning.

  10. Two-step reconstruction method using global optimization and conjugate gradient for ultrasound-guided diffuse optical tomography.

    PubMed

    Tavakoli, Behnoosh; Zhu, Quing

    2013-01-01

    Ultrasound-guided diffuse optical tomography (DOT) is a promising method for characterizing malignant and benign lesions in the female breast. We introduce a new two-step algorithm for DOT inversion in which the optical parameters are estimated with a global optimization method, the genetic algorithm. The estimation result is applied as an initial guess to the conjugate gradient (CG) optimization method to obtain the absorption and scattering distributions simultaneously. Simulations and phantom experiments have shown that the maximum absorption and reduced scattering coefficients are reconstructed with less than 10% and 25% errors, respectively. This is in contrast with the CG method alone, which generates about 20% error for the absorption coefficient and does not accurately recover the scattering distribution. A new measure of scattering contrast has been introduced to characterize benign and malignant breast lesions. The results of 16 clinical cases reconstructed with the two-step method demonstrate that, on average, the absorption coefficient and scattering contrast of malignant lesions are about 1.8 and 3.32 times higher than those of the benign cases, respectively.
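
    The two-step structure (global search for an initial guess, then conjugate-gradient refinement) can be sketched with off-the-shelf optimizers; here scipy's differential evolution stands in for the genetic algorithm and a toy quadratic misfit stands in for the DOT forward model, so none of the numbers are meaningful.

    ```python
    # Hedged sketch of the two-step inversion idea, not the authors' code.
    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    true_params = np.array([0.08, 9.0])     # e.g. absorption, reduced scattering

    def misfit(p):
        # Placeholder data-fit term; a real DOT inversion would compare measured
        # and modelled boundary measurements here.
        return float(np.sum(((p - true_params) / np.array([0.01, 1.0])) ** 2))

    bounds = [(0.01, 0.3), (2.0, 20.0)]

    # Step 1: global search to avoid local minima and obtain an initial guess.
    global_fit = differential_evolution(misfit, bounds, seed=0)

    # Step 2: local conjugate-gradient refinement started from the global estimate.
    local_fit = minimize(misfit, global_fit.x, method="CG")
    print(global_fit.x, local_fit.x)
    ```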

  11. Anticoagulation therapy advisor: a decision-support system for heparin therapy during ECMO.

    PubMed Central

    Peverini, R. L.; Sale, M.; Rhine, W. D.; Fagan, L. M.; Lenert, L. A.

    1992-01-01

    We present a case study describing our development of a mathematical model to control a clinical parameter in a patient--in this case, the degree of anticoagulation during extracorporeal membrane oxygenation (ECMO) support. During ECMO therapy, an anticoagulant agent (heparin) is administered to prevent thrombosis. Under- or over-coagulation can have grave consequences. To improve control of anticoagulation, we developed a pharmacokinetic-pharmacodynamic (PK-PD) model that predicts activated clotting times (ACT) using the NONMEM program. We then integrated this model into a decision-support system, and validated it with an independent data set. The population model had a mean absolute error of prediction for ACT values of 33.5 seconds, with a mean bias in estimation of -14.3 seconds. Individualization of model-parameter estimates using nonlinear regression improved the absolute error of prediction to 25.5 seconds, and lowered the mean bias to -3.1 seconds. The PK-PD model is coupled with software for heuristic interpretation of model results to provide a complete environment for the management of anticoagulation. PMID:1482937
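
    The two error summaries quoted above (mean absolute error and mean bias of predicted activated clotting times) are simple to compute; the sketch below uses made-up ACT values purely to show the definitions.

    ```python
    # Mean absolute error and mean bias of predicted vs. measured ACT values.
    # The numbers are illustrative only.
    import numpy as np

    act_measured  = np.array([180.0, 210.0, 195.0, 250.0, 230.0])   # seconds
    act_predicted = np.array([160.0, 220.0, 175.0, 235.0, 210.0])   # seconds

    residual = act_predicted - act_measured
    mae  = np.mean(np.abs(residual))    # mean absolute error of prediction
    bias = np.mean(residual)            # mean bias (negative = under-prediction)
    print(f"MAE = {mae:.1f} s, bias = {bias:+.1f} s")
    ```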

  12. Case Mis-Conceptualization in Psychological Treatment: An Enduring Clinical Problem.

    PubMed

    Ridley, Charles R; Jeffrey, Christina E; Roberson, Richard B

    2017-04-01

    Case conceptualization, an integral component of mental health treatment, aims to facilitate therapeutic gains by formulating a clear picture of a client's psychological presentation. However, despite numerous attempts to improve this clinical activity, it remains unclear how well existing methods achieve their purported purpose. Case formulation is inconsistently defined in the literature and implemented in practice, with many methods varying in complexity, theoretical grounding, and empirical support. In addition, many of the methods demand a precise clinical acumen that is easily influenced by judgmental and inferential errors. These errors occur regardless of clinicians' level of training or amount of clinical experience. Overall, the lack of a consensus definition, a diversity of methods, and susceptibility of clinicians to errors are manifestations of the state of crisis in case conceptualization. This article, the 2nd in a series of 5 on thematic mapping, argues the need for more reliable and valid models of case conceptualization. © 2017 Wiley Periodicals, Inc.

  13. Modeling the Effects of the Local Environment on a Received GNSS Signal

    DTIC Science & Technology

    2012-12-18

    conducted in modern GNSS receivers, a major source of error is typically attributed to the impact that features in the local environment have on...signals, rather than signals impacted by real-world environmental effects. Because of this discrepancy between simulated and real-world signal...considered data segments will impact the estimation of parameters from the data segment currently being considered, as may be the case when using an

  14. Optimum Array Processing for Detecting Binary Signals Corrupted by Directional Interference.

    DTIC Science & Technology

    1972-12-01

    specific cases. Two different series representations of a vector random process are discussed in Van Trees [3]. These two methods both require the... spacing d, etc.) its detection error represents a lower bound for the performance that might be obtained with other types of array processing (such...Middleton, Introduction to Statistical Communication Theory, New York: McGraw-Hill, 1960. 3. H.L. Van Trees, Detection, Estimation, and Modulation Theory

  15. Estimating Coastal Digital Elevation Model (DEM) Uncertainty

    NASA Astrophysics Data System (ADS)

    Amante, C.; Mesick, S.

    2017-12-01

    Integrated bathymetric-topographic digital elevation models (DEMs) are representations of the Earth's solid surface and are fundamental to the modeling of coastal processes, including tsunami, storm surge, and sea-level rise inundation. Deviations in elevation values from the actual seabed or land surface constitute errors in DEMs, which originate from numerous sources, including: (i) the source elevation measurements (e.g., multibeam sonar, lidar), (ii) the interpolative gridding technique (e.g., spline, kriging) used to estimate elevations in areas unconstrained by source measurements, and (iii) the datum transformation used to convert bathymetric and topographic data to common vertical reference systems. The magnitude and spatial distribution of the errors from these sources are typically unknown, and the lack of knowledge regarding these errors represents the vertical uncertainty in the DEM. The National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Information (NCEI) has developed DEMs for more than 200 coastal communities. This study presents a methodology developed at NOAA NCEI to derive accompanying uncertainty surfaces that estimate DEM errors at the individual cell-level. The development of high-resolution (1/9th arc-second), integrated bathymetric-topographic DEMs along the southwest coast of Florida serves as the case study for deriving uncertainty surfaces. The estimated uncertainty can then be propagated into the modeling of coastal processes that utilize DEMs. Incorporating the uncertainty produces more reliable modeling results, and in turn, better-informed coastal management decisions.

  16. Multiple Imputation of a Randomly Censored Covariate Improves Logistic Regression Analysis.

    PubMed

    Atem, Folefac D; Qian, Jing; Maye, Jacqueline E; Johnson, Keith A; Betensky, Rebecca A

    2016-01-01

    Randomly censored covariates arise frequently in epidemiologic studies. The most commonly used methods, including complete case and single imputation or substitution, suffer from inefficiency and bias. They make strong parametric assumptions or they consider limit of detection censoring only. We employ multiple imputation, in conjunction with semi-parametric modeling of the censored covariate, to overcome these shortcomings and to facilitate robust estimation. We develop a multiple imputation approach for randomly censored covariates within the framework of a logistic regression model. We use the non-parametric estimate of the covariate distribution or the semiparametric Cox model estimate in the presence of additional covariates in the model. We evaluate this procedure in simulations, and compare its operating characteristics to those from the complete case analysis and a survival regression approach. We apply the procedures to an Alzheimer's study of the association between amyloid positivity and maternal age of onset of dementia. Multiple imputation achieves lower standard errors and higher power than the complete case approach under heavy and moderate censoring and is comparable under light censoring. The survival regression approach achieves the highest power among all procedures, but does not produce interpretable estimates of association. Multiple imputation offers a favorable alternative to complete case analysis and ad hoc substitution methods in the presence of randomly censored covariates within the framework of logistic regression.

  17. Bivariate least squares linear regression: Towards a unified analytic formalism. I. Functional models

    NASA Astrophysics Data System (ADS)

    Caimmi, R.

    2011-08-01

    Concerning bivariate least squares linear regression, the classical approach pursued for functional models in earlier attempts (York, 1966, 1969) is reviewed using a new formalism in terms of deviation (matrix) traces which, for unweighted data, reduce to the usual quantities leaving aside an unessential (but dimensional) multiplicative factor. Within the framework of classical error models, the dependent variable relates to the independent variable according to the usual additive model. The classes of linear models considered are regression lines in the general case of correlated errors in X and in Y for weighted data, and in the opposite limiting situations of (i) uncorrelated errors in X and in Y, and (ii) completely correlated errors in X and in Y. The special case of (C) generalized orthogonal regression is considered in detail together with well-known subcases, namely: (Y) errors in X negligible (ideally null) with respect to errors in Y; (X) errors in Y negligible (ideally null) with respect to errors in X; (O) genuine orthogonal regression; (R) reduced major-axis regression. In the limit of unweighted data, the results determined for functional models are compared with their counterparts related to extreme structural models, i.e. models in which the instrumental scatter is negligible (ideally null) with respect to the intrinsic scatter (Isobe et al., 1990; Feigelson and Babu, 1992). While regression line slope and intercept estimators for functional and structural models necessarily coincide, the contrary holds for the related variance estimators even if the residuals obey a Gaussian distribution, with the exception of Y models. An example of astronomical application is considered, concerning the [O/H]-[Fe/H] empirical relations deduced from five samples related to different stars and/or different methods of oxygen abundance determination. For selected samples and assigned methods, different regression models yield consistent results within the errors (±σ) for both heteroscedastic and homoscedastic data. Conversely, samples related to different methods produce discrepant results, due to the presence of (still undetected) systematic errors, which implies that no definitive statement can be made at present. A comparison is also made between different expressions of the regression line slope and intercept variance estimators, where fractional discrepancies are found not to exceed a few percent, rising to about 20% for data with large dispersion. An extension of the formalism to structural models is left to a forthcoming paper.
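
    One of the subcases listed above, (O) genuine orthogonal regression for unweighted data, has a standard closed-form slope; the sketch below implements only that textbook special case (not the paper's deviation-trace formalism or its variance estimators), on synthetic data.

    ```python
    # Orthogonal (total least squares) regression slope and intercept for
    # unweighted data with errors in both X and Y; a minimal illustration only.
    import numpy as np

    def orthogonal_regression(x, y):
        xm, ym = x.mean(), y.mean()
        sxx = np.sum((x - xm) ** 2)
        syy = np.sum((y - ym) ** 2)
        sxy = np.sum((x - xm) * (y - ym))
        # Slope minimising the sum of squared perpendicular distances.
        slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4.0 * sxy ** 2)) / (2.0 * sxy)
        return slope, ym - slope * xm

    rng = np.random.default_rng(1)
    x_true = np.linspace(0.0, 10.0, 50)
    x = x_true + rng.normal(0.0, 0.3, 50)                # errors in X
    y = 0.7 * x_true + 1.0 + rng.normal(0.0, 0.3, 50)    # errors in Y
    print(orthogonal_regression(x, y))   # close to slope 0.7, intercept 1.0
    ```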

  18. Autonomous optical navigation using nanosatellite-class instruments: a Mars approach case study

    NASA Astrophysics Data System (ADS)

    Enright, John; Jovanovic, Ilija; Kazemi, Laila; Zhang, Harry; Dzamba, Tom

    2018-02-01

    This paper examines the effectiveness of small star trackers for orbital estimation. Autonomous optical navigation has been used for some time to provide local estimates of orbital parameters during close approach to celestial bodies. These techniques have been used extensively on spacecraft dating back to the Voyager missions, but often rely on long exposures and large instrument apertures. Using a hyperbolic Mars approach as a reference mission, we present an EKF-based navigation filter suitable for nanosatellite missions. Observations of Mars and its moons allow the estimator to correct initial errors in both position and velocity. Our results show that nanosatellite-class star trackers can produce good quality navigation solutions with low position (<300 m) and velocity (<0.15 m/s) errors as the spacecraft approaches periapse.

  19. Age estimation of the Deccan Traps from the North American apparent polar wander path

    NASA Technical Reports Server (NTRS)

    Stoddard, Paul R.; Jurdy, Donna M.

    1988-01-01

    It has recently been proposed that flood basalt events, such as the eruption of the Deccan Traps, have been responsible for mass extinctions. To test this hypothesis, accurate estimations of the ages and durations of these events are needed. In the case of the Deccan Traps, however, neither the age nor the duration of emplacement is well constrained; measured ages range from 40 to more than 80 Myr, and estimates of duration range from less than 1 to 67 Myr. To make an independent age determination, paleomagnetic and sea-floor-spreading data are used, and the associated errors are estimated. The Deccan paleomagnetic pole is compared with the reference apparent polar wander path of North America by rotating the positions of the paleomagnetic pole for the Deccan Traps to the reference path for a range of assumed ages. Uncertainties in the apparent polar wander path, the Deccan paleopole position, and errors resulting from the plate reconstruction are estimated. It is suggested that 83-70 Myr is the most likely time of extrusion of these volcanic rocks.

  20. About an adaptively weighted Kaplan-Meier estimate.

    PubMed

    Plante, Jean-François

    2009-09-01

    The minimum averaged mean squared error nonparametric adaptive weights use data from m possibly different populations to infer about one population of interest. The definition of these weights is based on the properties of the empirical distribution function. We use the Kaplan-Meier estimate to let the weights accommodate right-censored data and use them to define the weighted Kaplan-Meier estimate. The proposed estimate is smoother than the usual Kaplan-Meier estimate and converges uniformly in probability to the target distribution. Simulations show that the performance of the weighted Kaplan-Meier estimate on finite samples exceeds that of the usual Kaplan-Meier estimate. A case study is also presented.
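
    The building block of the estimator above is the ordinary product-limit Kaplan-Meier estimate; the sketch below implements only that unweighted version (the adaptive cross-population weights are not reproduced), with toy right-censored data.

    ```python
    # Plain (unweighted) Kaplan-Meier product-limit estimate; illustration only.
    import numpy as np

    def kaplan_meier(times, events):
        """times: observed times; events: 1 = event, 0 = right-censored."""
        times = np.asarray(times, dtype=float)
        events = np.asarray(events, dtype=int)
        order = np.lexsort((1 - events, times))   # events before censorings at ties
        times, events = times[order], events[order]
        n = len(times)
        s, curve = 1.0, []
        for i, d in enumerate(events):
            if d == 1:
                s *= 1.0 - 1.0 / (n - i)          # n - i subjects still at risk
            curve.append((times[i], s))
        return curve

    print(kaplan_meier([3, 5, 5, 8, 12, 16], [1, 0, 1, 1, 0, 1]))
    ```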

  1. Simple Form of MMSE Estimator for Super-Gaussian Prior Densities

    NASA Astrophysics Data System (ADS)

    Kittisuwan, Pichid

    2015-04-01

    The denoising methods that have become popular in recent years for additive white Gaussian noise (AWGN) are Bayesian estimation techniques, e.g., maximum a posteriori (MAP) and minimum mean square error (MMSE). For super-Gaussian prior densities, it is well known that the MMSE estimator has a complicated form. In this work, we derive the MMSE estimator using a Taylor series expansion. We show that the proposed estimator leads to a simple formula. An extension of this estimator to the Pearson type VII prior density is also offered. The experimental results show that the proposed approximation to the original MMSE nonlinearity is reasonably good.
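
    For reference, the general MMSE (posterior-mean) estimator that such Taylor-series results simplify can be written as below; the notation is generic and not taken from the paper.

    ```latex
    % MMSE estimator for the AWGN observation model y = x + n, n ~ N(0, sigma^2),
    % with prior density p_x; generic notation, assumed for illustration.
    \hat{x}_{\mathrm{MMSE}}(y) = \mathbb{E}[x \mid y]
      = \frac{\int x \, p_n(y - x)\, p_x(x)\, \mathrm{d}x}
             {\int p_n(y - x)\, p_x(x)\, \mathrm{d}x},
    \qquad
    p_n(u) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\!\left(-\frac{u^{2}}{2\sigma^{2}}\right).
    ```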

  2. Estimation of Fetal Weight during Labor: Still a Challenge.

    PubMed

    Barros, Joana Goulão; Reis, Inês; Pereira, Isabel; Clode, Nuno; Graça, Luís M

    2016-01-01

    To evaluate the accuracy of fetal weight prediction by ultrasonography during labor, employing a formula that includes the linear measurements of femur length (FL) and mid-thigh soft-tissue thickness (STT). We conducted a prospective study involving singleton uncomplicated term pregnancies within 48 hours of delivery. Only pregnancies with a cephalic fetus admitted to the labor ward for elective cesarean section, induction of labor or spontaneous labor were included. We excluded all non-Caucasian women, those previously diagnosed with gestational diabetes and those with evidence of ruptured membranes. Fetal weight estimates were calculated using a previously proposed formula [estimated fetal weight = 1687.47 + (54.1 x FL) + (76.68 x STT)]. The relationship between actual birth weight and estimated fetal weight was analyzed using Pearson's correlation. The formula's performance was assessed by calculating the signed and absolute errors. Mean weight difference and signed percentage error were calculated for birth weight divided into three subgroups: < 3000 g; 3000-4000 g; and > 4000 g. We included 145 cases for analysis and found a significant, yet weak, linear relationship between birth weight and estimated fetal weight (p < 0.001; R² = 0.197), with an absolute mean error of 10.6%. The lowest mean percentage error (0.3%) corresponded to the subgroup with birth weight between 3000 g and 4000 g. This study demonstrates a poor correlation between actual birth weight and the fetal weight estimated using a formula based on femur length and mid-thigh soft-tissue thickness, both linear parameters. Although avoiding circumferential ultrasound measurements might prove beneficial, a fetal weight estimation formula that is both accurate and simple to perform has yet to be found.
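
    The quoted formula and the percentage errors used to assess it translate directly into a few lines of code; the FL and STT inputs below are invented and their units simply follow the cited formula's convention.

    ```python
    # Estimated fetal weight from the formula quoted in the abstract, plus the
    # signed and absolute percentage errors against actual birth weight.
    def estimated_fetal_weight(fl, stt):
        """EFW (g) = 1687.47 + 54.1*FL + 76.68*STT (formula from the abstract)."""
        return 1687.47 + 54.1 * fl + 76.68 * stt

    def percentage_errors(efw, birth_weight):
        signed = 100.0 * (efw - birth_weight) / birth_weight
        return signed, abs(signed)

    efw = estimated_fetal_weight(fl=7.2, stt=11.5)           # illustrative inputs
    print(efw, percentage_errors(efw, birth_weight=3350.0))  # vs. actual weight
    ```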

  3. Towards the operational estimation of a radiological plume using data assimilation after a radiological accidental atmospheric release

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Vira, Julius; Bocquet, Marc; Sofiev, Mikhail; Saunier, Olivier

    2011-06-01

    In the event of an accidental atmospheric release of radionuclides from a nuclear power plant, accurate real-time forecasting of the activity concentrations of radionuclides is required by the decision makers for the preparation of adequate countermeasures. The accuracy of the forecast plume is highly dependent on the source term estimation. On several academic test cases, including real data, inverse modelling and data assimilation techniques were proven to help in the assessment of the source term. In this paper, a semi-automatic method is proposed for the sequential reconstruction of the plume, by implementing a sequential data assimilation algorithm based on inverse modelling, with care taken to develop methods that are realistic for operational risk agencies. The performance of the assimilation scheme has been assessed through an intercomparison between the French and Finnish frameworks. Two dispersion models have been used: Polair3D and Silam, developed in two different research centres. Different release locations, as well as different meteorological situations, are tested. The existing and newly planned surveillance networks are used, and realistically large multiplicative observational errors are assumed. The inverse modelling scheme accounts for the strong error bias encountered with such errors. The efficiency of the data assimilation system is tested via statistical indicators. For France and Finland, the average performance of the data assimilation system is strong. However, there are outlying situations where the inversion fails because of too poor observability. In addition, in the case where the power plant responsible for the accidental release is not known, robust statistical tools are developed and tested to discriminate candidate release sites.

  4. MacWilliams Identity for M-Spotty Weight Enumerator

    NASA Astrophysics Data System (ADS)

    Suzuki, Kazuyoshi; Fujiwara, Eiji

    M-spotty byte error control codes are very effective for correcting/detecting errors in semiconductor memory systems that employ recent high-density RAM chips with wide I/O data (e.g., 8, 16, or 32 bits). In this case, the width of the I/O data is one byte. A spotty byte error is defined as random t-bit errors within a byte of length b bits, where 1 ≤ t ≤ b. Then, an error is called an m-spotty byte error if at least one spotty byte error is present in a byte. M-spotty byte error control codes are characterized by the m-spotty distance, which includes the Hamming distance as a special case for t = 1 or t = b. The MacWilliams identity provides the relationship between the weight distribution of a code and that of its dual code. The present paper presents the MacWilliams identity for the m-spotty weight enumerator of m-spotty byte error control codes. In addition, the present paper clarifies that the indicated identity includes the MacWilliams identity for the Hamming weight enumerator as a special case.
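
    The special case mentioned at the end, the classical MacWilliams identity for the Hamming weight enumerator, reads as follows for a linear [n, k] code C over GF(q) with dual code C⊥ (standard textbook form, included here for orientation).

    ```latex
    % Classical MacWilliams identity (Hamming weight enumerator), the special
    % case contained in the m-spotty identity discussed above.
    W_{C^{\perp}}(x, y) = \frac{1}{|C|}\, W_{C}\bigl(x + (q-1)\,y,\; x - y\bigr),
    \qquad
    W_{C}(x, y) = \sum_{c \in C} x^{\,n-\mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)} .
    ```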

  5. Atmospheric Compensation and Surface Temperature and Emissivity Retrieval with LWIR Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Pieper, Michael

    Accurate estimation or retrieval of surface emissivity spectra from long-wave infrared (LWIR) or Thermal Infrared (TIR) hyperspectral imaging data acquired by airborne or space-borne sensors is necessary for many scientific and defense applications. The at-aperture radiance measured by the sensor is a function of the ground emissivity and temperature, modified by the atmosphere. Thus the emissivity retrieval process consists of two interwoven steps: atmospheric compensation (AC) to retrieve the ground radiance from the measured at-aperture radiance and temperature-emissivity separation (TES) to separate the temperature and emissivity from the ground radiance. In-scene AC (ISAC) algorithms use blackbody-like materials in the scene, which have a linear relationship between their ground radiances and at-aperture radiances determined by the atmospheric transmission and upwelling radiance. Using a clear reference channel to estimate the ground radiance, a linear fitting of the at-aperture radiance and estimated ground radiance is done to estimate the atmospheric parameters. TES algorithms for hyperspectral imaging data assume that the emissivity spectra for solids are smooth compared to the sharp features added by the atmosphere. The ground temperature and emissivity are found by finding the temperature that provides the smoothest emissivity estimate. In this thesis we develop models to investigate the sensitivity of AC and TES to the basic assumptions enabling their performance. ISAC assumes that there are perfect blackbody pixels in a scene and that there is a clear channel, which is never the case. The developed ISAC model explains how the quality of blackbody-like pixels affect the shape of atmospheric estimates and the clear channel assumption affects their magnitude. Emissivity spectra for solids usually have some roughness. The TES model identifies four sources of error: the smoothing error of the emissivity spectrum, the emissivity error from using the incorrect temperature, and the errors caused by sensor noise and wavelength calibration. The ways these errors interact determines the overall TES performance. Since the AC and TES processes are interwoven, any errors in AC are transferred to TES and the final temperature and emissivity estimates. Combining the two models, shape errors caused by the blackbody assumption are transferred to the emissivity estimates, where magnitude errors from the clear channel assumption are compensated by TES temperature induced emissivity errors. The ability for the temperature induced error to compensate for such atmospheric errors makes it difficult to determine the correct atmospheric parameters for a scene. With these models we are able to determine the expected quality of estimated emissivity spectra based on the quality of blackbody-like materials on the ground, the emissivity of the materials being searched for, and the properties of the sensor. The quality of material emissivity spectra is a key factor in determining detection performance for a material in a scene.

  6. Cognitive error as the most frequent contributory factor in cases of medical injury: a study on verdict's judgment among closed claims in Japan.

    PubMed

    Tokuda, Yasuharu; Kishida, Naoki; Konishi, Ryota; Koizumi, Shunzo

    2011-03-01

    Cognitive errors in the course of clinical decision-making are prevalent in many cases of medical injury. We used information on verdict's judgment from closed claims files to determine the important cognitive factors associated with cases of medical injury. Data were collected from claims closed between 2001 and 2005 at district courts in Tokyo and Osaka, Japan. In each case, we recorded all the contributory cognitive, systemic, and patient-related factors judged in the verdicts to be causally related to the medical injury. We also analyzed the association between cognitive factors and cases involving paid compensation using a multivariable logistic regression model. Among 274 cases (mean age 49 years old; 45% women), there were 122 (45%) deaths and 67 (24%) major injuries (incomplete recovery within a year). In 103 cases (38%), the verdicts ordered hospitals to pay compensation (median: 8,000,000 Japanese Yen). An error in judgment (199/274, 73%) and failure of vigilance (177/274, 65%) were the most prevalent causative cognitive factors, and error in judgment was also significantly associated with paid compensation (odds ratio, 1.9; 95% confidence interval [CI], 1.0-3.4). Systemic causative factors, including poor teamwork (11/274, 4%) and technology failure (5/274, 2%), were less common. The closed claims analysis based on verdict's judgment showed that cognitive errors were common in cases of medical injury, with an error in judgment being most prevalent and closely associated with compensation payment. Reduction of this type of error is required to produce safer healthcare. 2010 Society of Hospital Medicine.

  7. One-dimensional wave bottom boundary layer model comparison: specific eddy viscosity and turbulence closure models

    USGS Publications Warehouse

    Puleo, J.A.; Mouraenko, O.; Hanes, D.M.

    2004-01-01

    Six one-dimensional-vertical wave bottom boundary layer models are analyzed, based on different methods for estimating the turbulent eddy viscosity: laminar, linear, parabolic, k (one-equation turbulence closure), k-ε (two-equation turbulence closure), and k-ω (two-equation turbulence closure). Resultant velocity profiles, bed shear stresses, and turbulent kinetic energy are compared to laboratory data of oscillatory flow over smooth and rough beds. Bed shear stress estimates for the smooth bed case were most closely predicted by the k-ω model. Normalized errors between model predictions and measurements of velocity profiles over the entire computational domain, collected at 15° intervals for one-half a wave cycle, show that overall the linear model was most accurate. The least accurate were the laminar and k-ε models. Normalized errors between model predictions and turbulence kinetic energy profiles showed that the k-ω model was most accurate. Based on these findings, when the smallest overall velocity profile prediction error is required, the processing requirements and error analysis suggest that the linear eddy viscosity model is adequate. However, if accurate estimates of bed shear stress and TKE are required, then, of the models tested, the k-ω model should be used.

  8. Localized-atlas-based segmentation of breast MRI in a decision-making framework.

    PubMed

    Fooladivanda, Aida; Shokouhi, Shahriar B; Ahmadinejad, Nasrin

    2017-03-01

    Breast-region segmentation is an important step for density estimation and Computer-Aided Diagnosis (CAD) systems in Magnetic Resonance Imaging (MRI). Detection of breast-chest wall boundary is often a difficult task due to similarity between gray-level values of fibroglandular tissue and pectoral muscle. This paper proposes a robust breast-region segmentation method which is applicable for both complex cases with fibroglandular tissue connected to the pectoral muscle, and simple cases with high contrast boundaries. We present a decision-making framework based on geometric features and support vector machine (SVM) to classify breasts in two main groups, complex and simple. For complex cases, breast segmentation is done using a combination of intensity-based and atlas-based techniques; however, only intensity-based operation is employed for simple cases. A novel atlas-based method, that is called localized-atlas, accomplishes the processes of atlas construction and registration based on the region of interest (ROI). Atlas-based segmentation is performed by relying on the chest wall template. Our approach is validated using a dataset of 210 cases. Based on similarity between automatic and manual segmentation results, the proposed method achieves Dice similarity coefficient, Jaccard coefficient, total overlap, false negative, and false positive values of 96.3, 92.9, 97.4, 2.61 and 4.77%, respectively. The localization error of the breast-chest wall boundary is 1.97 mm, in terms of averaged deviation distance. The achieved results prove that the suggested framework performs the breast segmentation with negligible errors and efficient computational time for different breasts from the viewpoints of size, shape, and density pattern.
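
    The overlap metrics reported above are computed from binary masks as in this small sketch; the two toy masks are invented for illustration.

    ```python
    # Dice and Jaccard coefficients between automatic and manual binary masks.
    import numpy as np

    def dice_jaccard(auto_mask, manual_mask):
        a = auto_mask.astype(bool)
        m = manual_mask.astype(bool)
        inter = np.logical_and(a, m).sum()
        union = np.logical_or(a, m).sum()
        return 2.0 * inter / (a.sum() + m.sum()), inter / union

    auto   = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 1]])
    manual = np.array([[0, 1, 1], [1, 1, 1], [0, 0, 0]])
    print(dice_jaccard(auto, manual))   # about (0.73, 0.57) for these toy masks
    ```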

  9. Unifying distance-based goodness-of-fit indicators for hydrologic model assessment

    NASA Astrophysics Data System (ADS)

    Cheng, Qinbo; Reinhardt-Imjela, Christian; Chen, Xi; Schulte, Achim

    2014-05-01

    The goodness-of-fit indicator, i.e. the efficiency criterion, is very important for model calibration. However, current knowledge about goodness-of-fit indicators is largely empirical and lacks theoretical support. Based on likelihood theory, a unified distance-based goodness-of-fit indicator termed the BC-GED model is proposed, which uses the Box-Cox (BC) transformation to remove the heteroscedasticity of model errors and a zero-mean generalized error distribution (GED) to fit the distribution of model errors after the BC transformation. The BC-GED model can unify all recent distance-based goodness-of-fit indicators, and it reveals that the widely used mean square error (MSE) and mean absolute error (MAE) imply the statistical assumptions that model errors follow a zero-mean Gaussian distribution and a zero-mean Laplace distribution, respectively. Empirical knowledge about goodness-of-fit indicators can also be easily interpreted with the BC-GED model; e.g. the sensitivity to high flow of indicators with a large power of model errors results from the low probability of large model errors in the distribution assumed by these indicators. In order to assess the effect of the parameters of the BC-GED model (the BC transformation parameter λ and the GED kurtosis coefficient β, also termed the power of model errors) on hydrologic model calibration, six cases of the BC-GED model were applied in the Baocun watershed (East China) with the SWAT-WB-VSA model. Comparison of the inferred model parameters and model simulation results among the six indicators demonstrates that these indicators can be clearly separated into two classes by the GED kurtosis β: β > 1 and β ≤ 1. SWAT-WB-VSA calibrated with the class β > 1 of distance-based goodness-of-fit indicators captures high flow very well but mimics the baseflow very badly, whereas calibration with the class β ≤ 1 mimics the baseflow very well. This is because, first, the larger the value of β, the greater the emphasis put on high flow, and second, the derivative of the GED probability density function at zero is zero for β > 1 but discontinuous for β ≤ 1, and even infinite for β < 1, for which the maximum likelihood estimation can force the model errors to approach zero as closely as possible. The BC-GED approach, which estimates the parameters of the BC-GED model (i.e. λ and β) as well as the hydrologic model parameters, is the best distance-based goodness-of-fit indicator, because not only is the model validation using groundwater levels very good, but the model errors also fulfill the statistical assumption best. However, in some cases of model calibration with few observations, e.g. calibration of a single-event model, the MAE, i.e. the boundary indicator (β = 1) between the two classes, can replace the BC-GED in order to avoid estimating the parameters of the BC-GED model, because the model validation of the MAE is best in that setting.
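
    A rough sketch of the BC-GED objective as described above is given below: Box-Cox-transform the observed and simulated series, then score the residuals with a zero-mean GED negative log-likelihood. The parameterisation and the toy flows are assumptions rather than the authors' code, and the Box-Cox Jacobian term needed when calibrating λ is omitted for brevity.

    ```python
    # Box-Cox transform plus zero-mean GED negative log-likelihood of the
    # residuals; beta = 2 corresponds to a Gaussian (MSE-like) assumption and
    # beta = 1 to a Laplace (MAE-like) one.
    import numpy as np
    from scipy.special import gammaln

    def box_cox(y, lam):
        y = np.asarray(y, dtype=float)
        return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

    def ged_neg_log_lik(errors, alpha, beta):
        """f(e) = beta / (2*alpha*Gamma(1/beta)) * exp(-(|e|/alpha)**beta)."""
        e = np.abs(np.asarray(errors)) / alpha
        log_norm = np.log(beta) - np.log(2.0 * alpha) - gammaln(1.0 / beta)
        return float(np.sum(e ** beta) - e.size * log_norm)

    obs = np.array([1.2, 3.5, 10.0, 0.8, 2.2])   # toy observed flows
    sim = np.array([1.0, 4.0,  8.5, 1.1, 2.0])   # toy simulated flows
    errors = box_cox(obs, 0.3) - box_cox(sim, 0.3)
    print(ged_neg_log_lik(errors, alpha=0.5, beta=1.0))
    ```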

  10. Accounting for spatial correlation errors in the assimilation of GRACE into hydrological models through localization

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.

    2017-10-01

    Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data is applied with different spatial resolutions including 1° to 5° grids, as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ ground water and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to smaller errors for all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) for all the cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS estimates for improved data assimilation results.

  11. Uncertainty information in climate data records from Earth observation

    NASA Astrophysics Data System (ADS)

    Merchant, Christopher J.; Paul, Frank; Popp, Thomas; Ablain, Michael; Bontemps, Sophie; Defourny, Pierre; Hollmann, Rainer; Lavergne, Thomas; Laeng, Alexandra; de Leeuw, Gerrit; Mittaz, Jonathan; Poulsen, Caroline; Povey, Adam C.; Reuter, Max; Sathyendranath, Shubha; Sandven, Stein; Sofieva, Viktoria F.; Wagner, Wolfgang

    2017-07-01

    The question of how to derive and present uncertainty information in climate data records (CDRs) has received sustained attention within the European Space Agency Climate Change Initiative (CCI), a programme to generate CDRs addressing a range of essential climate variables (ECVs) from satellite data. Here, we review the nature, mathematics, practicalities, and communication of uncertainty information in CDRs from Earth observations. This review paper argues that CDRs derived from satellite-based Earth observation (EO) should include rigorous uncertainty information to support the application of the data in contexts such as policy, climate modelling, and numerical weather prediction reanalysis. Uncertainty, error, and quality are distinct concepts, and the case is made that CDR products should follow international metrological norms for presenting quantified uncertainty. As a baseline for good practice, total standard uncertainty should be quantified per datum in a CDR, meaning that uncertainty estimates should clearly discriminate more and less certain data. In this case, flags for data quality should not duplicate uncertainty information, but instead describe complementary information (such as the confidence in the uncertainty estimate provided or indicators of conditions violating the retrieval assumptions). The paper discusses the many sources of error in CDRs, noting that different errors may be correlated across a wide range of timescales and space scales. Error effects that contribute negligibly to the total uncertainty in a single-satellite measurement can be the dominant sources of uncertainty in a CDR on the large space scales and long timescales that are highly relevant for some climate applications. For this reason, identifying and characterizing the relevant sources of uncertainty for CDRs is particularly challenging. The characterization of uncertainty caused by a given error effect involves assessing the magnitude of the effect, the shape of the error distribution, and the propagation of the uncertainty to the geophysical variable in the CDR accounting for its error correlation properties. Uncertainty estimates can and should be validated as part of CDR validation when possible. These principles are quite general, but the approach to providing uncertainty information appropriate to different ECVs is varied, as confirmed by a brief review across different ECVs in the CCI. User requirements for uncertainty information can conflict with each other, and a variety of solutions and compromises are possible. The concept of an ensemble CDR as a simple means of communicating rigorous uncertainty information to users is discussed. Our review concludes by providing eight concrete recommendations for good practice in providing and communicating uncertainty in EO-based climate data records.

  12. Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne

    2012-03-01

    A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10¹⁶ Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10¹⁷ Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.

  13. Efficient Reduction and Analysis of Model Predictive Error

    NASA Astrophysics Data System (ADS)

    Doherty, J.

    2006-12-01

    Most groundwater models are calibrated against historical measurements of head and other system states before being used to make predictions in a real-world context. Through the calibration process, parameter values are estimated or refined such that the model is able to reproduce historical behaviour of the system at pertinent observation points reasonably well. Predictions made by the model are deemed to have greater integrity because of this. Unfortunately, predictive integrity is not as easy to achieve as many groundwater practitioners would like to think. The level of parameterisation detail estimable through the calibration process (especially where estimation takes place on the basis of heads alone) is strictly limited, even where full use is made of modern mathematical regularisation techniques such as those encapsulated in the PEST calibration package. (Use of these mechanisms allows more information to be extracted from a calibration dataset than is possible using simpler regularisation devices such as zones of piecewise constancy.) Where a prediction depends on aspects of parameterisation detail that are simply not inferable through the calibration process (which is often the case for predictions related to contaminant movement, and/or many aspects of groundwater/surface water interaction), then that prediction may be just as much in error as it would have been if the model had not been calibrated at all. Model predictive error arises from two sources. These are (a) the presence of measurement noise within the calibration dataset through which linear combinations of parameters spanning the "calibration solution space" are inferred, and (b) the sensitivity of the prediction to members of the "calibration null space" spanned by linear combinations of parameters which are not inferable through the calibration process. The magnitude of the former contribution depends on the level of measurement noise. The magnitude of the latter contribution (which often dominates the former) depends on the "innate variability" of hydraulic properties within the model domain. Knowledge of both of these is a prerequisite for characterisation of the magnitude of possible model predictive error. Unfortunately, in most cases, such knowledge is incomplete and subjective. Nevertheless, useful analysis of model predictive error can still take place. The present paper briefly discusses the means by which mathematical regularisation can be employed in the model calibration process in order to extract as much information as possible on hydraulic property heterogeneity prevailing within the model domain, thereby reducing predictive error to the lowest that can be achieved on the basis of that dataset. It then demonstrates the means by which predictive error variance can be quantified based on information supplied by the regularised inversion process. Both linear and nonlinear predictive error variance analysis is demonstrated using a number of real-world and synthetic examples.

  14. On the validity of cosmological Fisher matrix forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolz, Laura; Kilbinger, Martin; Weller, Jochen

    2012-09-01

    We present a comparison of Fisher matrix forecasts for cosmological probes with Monte Carlo Markov Chain (MCMC) posterior likelihood estimation methods. We analyse the performance of future Dark Energy Task Force (DETF) stage-III and stage-IV dark-energy surveys using supernovae, baryon acoustic oscillations and weak lensing as probes. We concentrate in particular on the dark-energy equation of state parameters w_0 and w_a. For purely geometrical probes, and especially when marginalising over w_a, we find considerable disagreement between the two methods, since in this case the Fisher matrix cannot reproduce the highly non-elliptical shape of the likelihood function. More quantitatively, the Fisher method underestimates the marginalized errors for purely geometrical probes by 30%-70%. For cases including structure formation, such as weak lensing, we find that the posterior probability contours from the Fisher matrix estimation are in good agreement with the MCMC contours, with the forecasted errors only changing at the 5% level. We then explore non-linear transformations resulting in physically-motivated parameters and investigate whether these parameterisations exhibit a Gaussian behaviour. We conclude that for the purely geometrical probes and, more generally, in cases where it is not known whether the likelihood is close to Gaussian, the Fisher matrix is not the appropriate tool to produce reliable forecasts.
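
    For orientation, the Fisher-matrix forecast itself amounts to inverting F and reading marginalised 1-sigma errors off the diagonal of the resulting covariance, as in the sketch below; the 2x2 matrix for (w0, wa) is invented, and the paper's point is precisely that this Gaussian approximation can fail for non-elliptical likelihoods.

    ```python
    # Marginalised 1-sigma Fisher forecasts: sqrt of the diagonal of F^{-1}.
    import numpy as np

    fisher = np.array([[120.0, -45.0],     # hypothetical Fisher matrix for (w0, wa)
                       [-45.0,  25.0]])
    cov = np.linalg.inv(fisher)            # Gaussian approximation to the posterior
    print(dict(zip(["w0", "wa"], np.sqrt(np.diag(cov)))))
    ```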

  15. Cirrus Cloud Optical Thickness and Effective Diameter Retrieved by MODIS: Impacts of Single Habit Assumption, 3-D Radiative Effects, and Cloud Inhomogeneity

    NASA Astrophysics Data System (ADS)

    Zhou, Yongbo; Sun, Xuejin; Mielonen, Tero; Li, Haoran; Zhang, Riwei; Li, Yan; Zhang, Chuanliang

    2018-01-01

    For inhomogeneous cirrus clouds, cloud optical thickness (COT) and effective diameter (De) provided by the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 6 cloud products are associated with errors due to the single habit assumption (SHA), independent pixel assumption (IPA), photon absorption effect (PAE), and plane-parallel assumption (PPA). SHA means that every cirrus cloud is assumed to have the same ice crystal habit. IPA errors are caused by three-dimensional (3-D) radiative effects. PPA and PAE errors are caused by cloud inhomogeneity. We proposed a method to single out these different errors. These errors were examined using the Spherical Harmonics Discrete Ordinate Method simulations done for the MODIS 0.86 μm and 2.13 μm bands. Four midlatitude and tropical cirrus cases were studied. For the COT retrieval, the impacts of SHA and IPA were especially large for optically thick cirrus cases. SHA errors in COT varied distinctly with scattering angle. For the De retrieval, SHA decreased De under most circumstances. PAE decreased De for optically thick cirrus cases. For the COT and De retrievals, the dominant error source was SHA for overhead sun, whereas for oblique sun it could be any of SHA, IPA, and PAE, varying with cirrus cases and sun-satellite viewing geometries. On the domain average, the SHA errors in COT (De) were within -16.1% to 42.6% (-38.7% to 2.0%), whereas the errors in COT (De) induced by 3-D radiative effects and by cloud inhomogeneity were within -5.6% to 19.6% (-2.9% to 8.0%) and -2.6% to 0% (-3.7% to 9.8%), respectively.

  16. Matrix interference evaluation employing GC and LC coupled to triple quadrupole tandem mass spectrometry.

    PubMed

    Uclés, S; Lozano, A; Sosa, A; Parrilla Vázquez, P; Valverde, A; Fernández-Alba, A R

    2017-11-01

    Gas and liquid chromatography coupled to triple quadrupole tandem mass spectrometry are currently the most powerful tools employed for the routine analysis of pesticide residues in food control laboratories. However, whatever the multiresidue extraction method, there will be a residual matrix effect making it difficult to identify/quantify some specific compounds in certain cases. Two main effects stand out: (i) co-elution with isobaric matrix interferents, which can be a major drawback for unequivocal identification and can therefore lead to false negative detections, and (ii) signal suppression/enhancement, commonly called the "matrix effect", which may cause serious problems including inaccurate quantitation, low analyte detectability and increased method uncertainty. The aim of this analytical study is to provide a framework for evaluating the maximum expected errors associated with matrix effects. The worst-case study was designed to estimate the extreme errors caused by matrix effects when extraction/determination protocols are applied in routine multiresidue analysis. Twenty-five different blank matrices extracted with the four most common extraction methods used in routine analysis (citrate QuEChERS with/without PSA clean-up, ethyl acetate and the Dutch mini-Luke "NL" methods) were evaluated by both GC-QqQ-MS/MS and LC-QqQ-MS/MS. The results showed that the presence of matrix compounds with isobaric transitions to target pesticides was higher in GC than in LC under the experimental conditions tested. In a second study, the number of "potential" false negatives was evaluated. For that, ten matrices with higher percentages of natural interfering components were checked. Additionally, the results showed that for more than 90% of the cases, pesticide quantification was not affected by matrix-matched standard calibration when an interferent was kept constant along the calibration curve. The error in quantification depended on the concentration level. In a third study, the "matrix effect" was evaluated for each commodity/extraction method. Results showed 44% of cases with suppression/enhancement for LC and 93% of cases with enhancement for GC. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Fission matrix-based Monte Carlo criticality analysis of fuel storage pools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farlotti, M.; Ecole Polytechnique, Palaiseau, F 91128; Larsen, E. W.

    2013-07-01

    Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
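
    Once a fission matrix has been tallied, the eigenvalue and fission source follow from its dominant eigenpair; the sketch below shows only that final step with a simple power iteration on an invented 3x3 matrix (it is not the paper's Monte Carlo tallying procedure).

    ```python
    # Dominant eigenpair (k-eff and fission source shape) of a toy fission
    # matrix F[i, j] ~ fission neutrons produced in region i per neutron born
    # in region j, via power iteration.
    import numpy as np

    F = np.array([[0.50, 0.10, 0.01],
                  [0.10, 0.45, 0.10],
                  [0.01, 0.10, 0.50]])

    def dominant_eigenpair(mat, iters=500, tol=1e-12):
        source = np.ones(mat.shape[0])
        source /= np.linalg.norm(source)     # unit-norm starting fission source
        k = 0.0
        for _ in range(iters):
            new = mat @ source
            k_new = np.linalg.norm(new)      # eigenvalue estimate (source is unit-norm)
            new /= k_new
            if abs(k_new - k) < tol:
                return k_new, new
            k, source = k_new, new
        return k, source

    k_eff, source = dominant_eigenpair(F)
    print(k_eff, source)
    ```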

  18. Can statistic adjustment of OR minimize the potential confounding bias for meta-analysis of case-control study? A secondary data analysis.

    PubMed

    Liu, Tianyi; Nie, Xiaolu; Wu, Zehao; Zhang, Ying; Feng, Guoshuang; Cai, Siyu; Lv, Yaqi; Peng, Xiaoxia

    2017-12-29

    Different confounder adjustment strategies are used to estimate odds ratios (ORs) in case-control studies, i.e. the original studies differ in how many confounders they adjusted for and in which variables were adjusted. This secondary data analysis aims to detect whether there are potential biases caused by differences in confounding factor adjustment strategies in case-control studies, and whether such bias would impact the summary effect size of a meta-analysis. We included all meta-analyses that focused on the association between breast cancer and passive smoking among non-smoking women, as well as each original case-control study included in these meta-analyses. The relative deviations (RDs) of each original study were calculated to detect how strongly the adjustment would impact the estimation of ORs, compared with crude ORs. At the same time, a scatter diagram was sketched to describe the distribution of adjusted ORs with different numbers of adjusted confounders. Substantial inconsistency existed in the meta-analyses of case-control studies, which would influence the precision of the summary effect size. First, mixed unadjusted and adjusted ORs were used to combine individual ORs in the majority of meta-analyses. Second, original studies with different confounder adjustment strategies were combined, i.e. studies differing in the number of adjusted confounders and in the factors being adjusted. Third, adjustment did not make the effect sizes of the original studies converge, which suggests that model fitting might have failed to correct the systematic error caused by confounding. The heterogeneity of confounder adjustment strategies in case-control studies may lead to further bias in the summary effect size of meta-analyses, especially for weak or medium associations, so that the direction of causal inference might even be reversed. Therefore, further methodological research is needed regarding the assessment of confounder adjustment strategies, as well as how to take this kind of bias into consideration when drawing conclusions based on the summary estimation of meta-analyses.
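
    The relative deviation used above to compare adjusted and crude ORs can be illustrated as follows; the exact definition (percentage change of the adjusted OR relative to the crude OR) and the numbers are assumptions for illustration only.

    ```python
    # Relative deviation (RD) of adjusted vs. crude odds ratios, per study.
    def relative_deviation(adjusted_or, crude_or):
        return 100.0 * (adjusted_or - crude_or) / crude_or

    studies = [  # (crude OR, adjusted OR) for hypothetical case-control studies
        (1.45, 1.30),
        (1.10, 1.22),
        (2.05, 1.60),
    ]
    for crude, adjusted in studies:
        print(f"crude={crude:.2f}  adjusted={adjusted:.2f}  "
              f"RD={relative_deviation(adjusted, crude):+.1f}%")
    ```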

  19. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Godtliebsen, F.; Rue, H.

    2012-01-01

    The paper proposes an approach to assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximation of the physical process(es) forming a proxy archive by a random Gamma process. Parameters of the process are partly data-driven and partly determined from prior assumptions. For a particular case of a linear accumulation model and absolutely dated tie points an analytical solution is found suggesting the Beta-distributed probability density on age estimates along the length of a proxy archive. In a general situation of uncertainties in the ages of the tie points the proposed method employs MCMC simulations of age-depth profiles yielding empirical confidence intervals on the constructed piecewise linear best guess timescale. It is suggested that the approach can be further extended to a more general case of a time-varying expected accumulation between the tie points. The approach is illustrated by using two ice and two lake/marine sediment cores representing the typical examples of paleoproxy archives with age models based on tie points of mixed origin.
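    A minimal sketch of the Monte Carlo part of such an approach, assuming a linear expected accumulation between two absolutely dated tie points; the Gamma shape and the number of depth increments are illustrative assumptions, not values from the paper. Rescaling each simulated cumulative accumulation so that it hits both tie points reproduces the Beta-like spread of ages at intermediate depths.

    ```python
    import numpy as np

    # Minimal sketch (not the authors' implementation): Monte Carlo simulation of
    # age-depth profiles under a random Gamma accumulation model between two
    # absolutely dated tie points, yielding empirical confidence intervals on the
    # age at intermediate depths. Shape and layer counts are assumed values.
    rng = np.random.default_rng(0)

    age_top, age_bottom = 0.0, 1000.0   # ages of the tie points (years)
    n_layers = 100                      # depth increments between the tie points
    n_sim = 5000                        # number of simulated chronologies
    shape = 2.0                         # assumed Gamma shape parameter

    ages = np.empty((n_sim, n_layers + 1))
    for i in range(n_sim):
        # Random accumulation increments, rescaled so the profile hits both tie points.
        increments = rng.gamma(shape, 1.0, size=n_layers)
        cum = np.concatenate(([0.0], np.cumsum(increments)))
        ages[i] = age_top + (age_bottom - age_top) * cum / cum[-1]

    # Empirical 95% confidence band of age along the depth axis.
    lower, median, upper = np.percentile(ages, [2.5, 50, 97.5], axis=0)
    mid = n_layers // 2
    print(f"Age at mid-depth: {median[mid]:.0f} yr (95% CI {lower[mid]:.0f}-{upper[mid]:.0f} yr)")
    ```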

  20. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurements errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve (parametric errors and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfying. Moreover, the quantification of uncertainty is also satisfying since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results are much contrasted depending on the site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are eventually discussed.
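    The sketch below is not the authors' Bayesian implementation; it only illustrates how systematic and non-systematic stage errors can be propagated by Monte Carlo sampling through an assumed power-law rating curve Q = a(h - b)^c. All parameter values are illustrative.

    ```python
    import numpy as np

    # Minimal sketch: Monte Carlo propagation of stage errors through an assumed
    # power-law rating curve. Systematic errors (e.g. gauge calibration) are drawn
    # once per realisation; non-systematic errors (resolution, waves) vary per reading.
    rng = np.random.default_rng(1)

    a, b, c = 30.0, 0.20, 1.6     # assumed rating-curve parameters
    h_obs = 1.35                  # observed stage (m)
    sigma_nonsys = 0.01           # non-systematic stage error (m), assumed
    sigma_sys = 0.02              # systematic stage error (m), assumed
    n = 20000

    sys_err = rng.normal(0.0, sigma_sys, size=n)
    nonsys_err = rng.normal(0.0, sigma_nonsys, size=n)
    h = h_obs + sys_err + nonsys_err

    q = a * np.clip(h - b, 0.0, None) ** c
    print(f"Q = {np.median(q):.1f} m^3/s, 95% interval "
          f"[{np.percentile(q, 2.5):.1f}, {np.percentile(q, 97.5):.1f}]")
    ```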

  1. A ridge tracking algorithm and error estimate for efficient computation of Lagrangian coherent structures.

    PubMed

    Lipinski, Doug; Mohseni, Kamran

    2010-03-01

    A ridge tracking algorithm for the computation and extraction of Lagrangian coherent structures (LCS) is developed. This algorithm takes advantage of the spatial coherence of LCS by tracking the ridges which form LCS to avoid unnecessary computations away from the ridges. We also make use of the temporal coherence of LCS by approximating the time dependent motion of the LCS with passive tracer particles. To justify this approximation, we provide an estimate of the difference between the motion of the LCS and that of tracer particles which begin on the LCS. In addition to the speedup in computational time, the ridge tracking algorithm uses less memory and results in smaller output files than the standard LCS algorithm. Finally, we apply our ridge tracking algorithm to two test cases, an analytically defined double gyre as well as the more complicated example of the numerical simulation of a swimming jellyfish. In our test cases, we find up to a 35 times speedup when compared with the standard LCS algorithm.
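    To make the passive-tracer approximation concrete, the sketch below advects particles seeded on a candidate ridge through the standard analytically defined double-gyre flow with an RK4 integrator. The seeding line and integration settings are assumptions for illustration, not the authors' configuration.

    ```python
    import numpy as np

    # Standard time-dependent double-gyre test flow; particles seeded on a candidate
    # ridge are advected as passive tracers (the approximation discussed above).
    A, eps, omega = 0.1, 0.25, 2 * np.pi / 10.0

    def velocity(t, p):
        x, y = p[..., 0], p[..., 1]
        a = eps * np.sin(omega * t)
        b = 1.0 - 2.0 * a
        f = a * x**2 + b * x
        dfdx = 2.0 * a * x + b
        u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
        v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
        return np.stack([u, v], axis=-1)

    def rk4_step(t, p, dt):
        k1 = velocity(t, p)
        k2 = velocity(t + dt / 2, p + dt / 2 * k1)
        k3 = velocity(t + dt / 2, p + dt / 2 * k2)
        k4 = velocity(t + dt, p + dt * k3)
        return p + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    # Particles seeded along an assumed ridge segment at x = 1 (illustrative only).
    pts = np.stack([np.full(50, 1.0), np.linspace(0.05, 0.95, 50)], axis=-1)
    t, dt = 0.0, 0.05
    for _ in range(200):          # advect for 10 time units
        pts = rk4_step(t, pts, dt)
        t += dt
    print("advected ridge sample:", pts[:3])
    ```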

  2. Efficient logistic regression designs under an imperfect population identifier.

    PubMed

    Albert, Paul S; Liu, Aiyi; Nansel, Tonja

    2014-03-01

    Motivated by actual study designs, this article considers efficient logistic regression designs where the population is identified with a binary test that is subject to diagnostic error. We consider the case where the imperfect test is obtained on all participants, while the gold standard test is measured on a small chosen subsample. Under maximum-likelihood estimation, we evaluate the optimal design in terms of sample selection as well as verification. We show that there may be substantial efficiency gains by choosing a small percentage of individuals who test negative on the imperfect test for inclusion in the sample (e.g., verifying 90% test-positive cases). We also show that a two-stage design may be a good practical alternative to a fixed design in some situations. Under optimal and nearly optimal designs, we compare maximum-likelihood and semi-parametric efficient estimators under correct and misspecified models with simulations. The methodology is illustrated with an analysis from a diabetes behavioral intervention trial. © 2013, The International Biometric Society.

  3. Optimal estimation of the optomechanical coupling strength

    NASA Astrophysics Data System (ADS)

    Bernád, József Zsolt; Sanavio, Claudio; Xuereb, André

    2018-06-01

    We apply the formalism of quantum estimation theory to obtain information about the value of the nonlinear optomechanical coupling strength. In particular, we discuss the minimum mean-square error estimator and a quantum Cramér-Rao-type inequality for the estimation of the coupling strength. Our estimation strategy reveals some cases where quantum statistical inference is inconclusive and merely results in the reinforcement of prior expectations. We show that these situations also involve the highest expected information losses. We demonstrate that interaction times on the order of one time period of mechanical oscillations are the most suitable for our estimation scenario, and compare situations involving different photon and phonon excitations.

  4. Symmetry boost of the fidelity of Shor factoring

    NASA Astrophysics Data System (ADS)

    Nam, Y. S.; Blümel, R.

    2018-05-01

    In Shor's algorithm quantum subroutines occur with the structure F U F-1 , where F is a unitary transform and U is performing a quantum computation. Examples are quantum adders and subunits of quantum modulo adders. In this paper we show, both analytically and numerically, that if, in analogy to spin echoes, F and F-1 can be implemented symmetrically when executing Shor's algorithm on actual, imperfect quantum hardware, such that F and F-1 have the same hardware errors, a symmetry boost in the fidelity of the combined F U F-1 quantum operation results when compared to the case in which the errors in F and F-1 are independently random. Running the complete gate-by-gate implemented Shor algorithm, we show that the symmetry-induced fidelity boost can be as large as a factor 4. While most of our analytical and numerical results concern the case of over- and under-rotation of controlled rotation gates, in the numerically accessible case of Shor's algorithm with a small number of qubits, we show explicitly that the symmetry boost is robust with respect to more general types of errors. While, expectedly, additional error types reduce the symmetry boost, we show explicitly, by implementing general off-diagonal SU (N ) errors (N =2 ,4 ,8 ), that the boost factor scales like a Lorentzian in δ /σ , where σ and δ are the error strengths of the diagonal over- and underrotation errors and the off-diagonal SU (N ) errors, respectively. The Lorentzian shape also shows that, while the boost factor may become small with increasing δ , it declines slowly (essentially like a power law) and is never completely erased. We also investigate the effect of diagonal nonunitary errors, which, in analogy to unitary errors, reduce but never erase the symmetry boost. Going beyond the case of small quantum processors, we present analytical scaling results that show that the symmetry boost persists in the practically interesting case of a large number of qubits. We illustrate this result explicitly for the case of Shor factoring of the semiprime RSA-1024, where, analytically, focusing on over- and underrotation errors, we obtain a boost factor of about 10. In addition, we provide a proof of the fidelity product formula, including its range of applicability.

  5. Generalized multiplicative error models: Asymptotic inference and empirical analysis

    NASA Astrophysics Data System (ADS)

    Li, Qian

    This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes feature in a series and combines a so-called Zero-Augmented general F distribution with linear MEM(p,q). Under certain strict stationary and moment conditions, we establish a consistency and asymptotic normality of the semiparametric estimation for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed in the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, interaction between trading variables, and the time needed for price equilibrium after a perturbation for each market. The clustering effect is studied through the use of univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the Impulse Response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the difference between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
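    As a concrete illustration of the multiplicative structure underlying these models, the sketch below simulates a plain MEM(1,1) process with unit-mean exponential innovations; the parameter values are arbitrary, and the location-shifted and zero-augmented extensions studied in the dissertation are not implemented.

    ```python
    import numpy as np

    # Minimal sketch of a plain MEM(1,1) for a nonnegative series (e.g. durations):
    #   x_t = mu_t * eps_t,  mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1},
    # with unit-mean exponential innovations. Parameters are illustrative; the
    # Location MEM and Zero-Augmented variants are not implemented here.
    rng = np.random.default_rng(42)
    omega, alpha, beta = 0.1, 0.15, 0.80   # assumed parameters (alpha + beta < 1)
    T = 5000

    x = np.empty(T)
    mu = np.empty(T)
    mu[0] = omega / (1.0 - alpha - beta)   # unconditional mean as starting value
    x[0] = mu[0] * rng.exponential(1.0)
    for t in range(1, T):
        mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
        x[t] = mu[t] * rng.exponential(1.0)

    print(f"sample mean {x.mean():.3f} vs unconditional mean {omega/(1-alpha-beta):.3f}")
    ```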

  6. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  7. The Use of EPI-Splines to Model Empirical Semivariograms for Optimal Spatial Estimation

    DTIC Science & Technology

    2016-09-01

    proliferation of unmanned systems in military and civilian sectors has occurred at lightning speed. In the case of Autonomous Underwater Vehicles or...SLAM is a method of position estimation that relies on map data [3]. In this process, the creation of the map occurs as the vehicle is navigating the...that ensures minimal errors. This technique is accomplished in two steps. The first step is creation of the semivariogram. The semivariogram is a
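    Because the extracted snippet above breaks off mid-sentence, the following is only a generic sketch of the first step it names: computing an empirical isotropic semivariogram from scattered samples. The sample data and distance bins are assumptions.

    ```python
    import numpy as np

    # Empirical isotropic semivariogram: for point pairs binned by separation
    # distance h, gamma(h) = mean of 0.5 * (z_i - z_j)^2. Data are synthetic.
    rng = np.random.default_rng(0)
    pts = rng.uniform(0, 100, size=(300, 2))                    # sample locations
    z = np.sin(pts[:, 0] / 15.0) + 0.1 * rng.normal(size=300)   # sampled values

    # Pairwise distances and half squared differences (upper triangle only).
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    sq = 0.5 * (z[:, None] - z[None, :]) ** 2
    iu = np.triu_indices(len(z), k=1)
    d, sq = d[iu], sq[iu]

    bins = np.arange(0, 60, 5.0)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (d >= lo) & (d < hi)
        if mask.any():
            print(f"h in [{lo:4.0f}, {hi:4.0f}): gamma = {sq[mask].mean():.3f}  "
                  f"(n pairs = {mask.sum()})")
    ```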

  8. BETASEQ: a powerful novel method to control type-I error inflation in partially sequenced data for rare variant association testing.

    PubMed

    Yan, Song; Li, Yun

    2014-02-15

    Despite its great capability to detect rare variant associations, next-generation sequencing is still prohibitively expensive when applied to large samples. In case-control studies, it is thus appealing to sequence only a subset of cases to discover variants and genotype the identified variants in controls and the remaining cases, under the reasonable assumption that causal variants are usually enriched among cases. However, this approach leads to inflated type-I error if analyzed naively for rare variant association. Several methods have been proposed in the recent literature to control type-I error at the cost of either excluding some sequenced cases or correcting the genotypes of discovered rare variants. All of these approaches therefore suffer from some degree of information loss and are thus underpowered. We propose a novel method (BETASEQ), which corrects inflation of type-I error by supplementing pseudo-variants while keeping the original sequence and genotype data intact. Extensive simulations and real data analysis demonstrate that, in most practical situations, BETASEQ leads to higher testing powers than existing approaches with guaranteed (controlled or conservative) type-I error. BETASEQ and associated R files, including documentation and examples, are available at http://www.unc.edu/~yunmli/betaseq

  9. The accuracy of seismic estimates of dynamic strains: an evaluation using strainmeter and seismometer data from Piñon Flat Observatory, California

    USGS Publications Warehouse

    Gomberg, Joan S.; Agnew, Duncan Carr

    1996-01-01

    The dynamic strains associated with seismic waves may play a significant role in earthquake triggering, hydrological and magmatic changes, earthquake damage, and ground failure. We determine how accurately dynamic strains may be estimated from seismometer data and elastic-wave theory by comparing such estimated strains with strains measured on a three-component long-base strainmeter system at Piñon Flat, California. We quantify the uncertainties and errors through cross-spectral analysis of data from three regional earthquakes (the M0 = 4 × 10^17 N-m St. George, Utah; M0 = 4 × 10^17 N-m Little Skull Mountain, Nevada; and M0 = 1 × 10^19 N-m Northridge, California, events at distances of 470, 345, and 206 km, respectively). Our analysis indicates that in most cases the phase of the estimated strain matches that of the observed strain quite well (to within the uncertainties, which are about ±0.1 to ±0.2 cycles). However, the amplitudes are often systematically off, at levels exceeding the uncertainties (about 20%); in one case, the predicted strain amplitudes are nearly twice those observed. We also observe significant ε_tt strains (t = tangential direction), which should be zero theoretically; in the worst case, the rms ε_tt strain exceeds the other nonzero components. These nonzero ε_tt strains cannot be caused by deviations of the surface-wave propagation paths from the expected azimuth or by departures from the plane-wave approximation. We believe that distortion of the strain field by topography or material heterogeneities gives rise to these complexities.

  10. Characteristics associated with requests by pathologists for second opinions on breast biopsies.

    PubMed

    Geller, Berta M; Nelson, Heidi D; Weaver, Donald L; Frederick, Paul D; Allison, Kimberly H; Onega, Tracy; Carney, Patricia A; Tosteson, Anna N A; Elmore, Joann G

    2017-11-01

    Second opinions in pathology improve patient safety by reducing diagnostic errors, leading to more appropriate clinical treatment decisions. Little objective data are available regarding the factors triggering a request for second opinion, despite second opinion consultations being part of the diagnostic system of pathology. We therefore sought to assess the characteristics of breast biopsy cases and of interpreting pathologists associated with second opinion requests. Pathologist surveys and their interpretations of 60 test set cases were collected and used to explore the relationships between case characteristics, pathologist characteristics and case perceptions, and requests for second opinions. Data were evaluated by logistic regression and generalised estimating equations. 115 pathologists provided 6900 assessments; pathologists requested second opinions on 70% (4827/6900) of their assessments, and 36% (1731/4827) of these would not have been required by policy. All associations between case characteristics and requesting second opinions were statistically significant, including diagnostic category, breast density, biopsy type, and number of diagnoses noted per case. Exclusive of institutional policies, pathologists wanted second opinions most frequently for atypia (66%) and least frequently for invasive cancer (20%). Second opinion rates were higher when the pathologist had lower assessment confidence, in cases with higher perceived difficulty, and in cases with borderline diagnoses. Pathologists request second opinions for challenging cases, particularly those with atypia, high breast density, core needle biopsies, or many co-existing diagnoses. Further studies should evaluate whether the case characteristics identified in this study could be used as clinical criteria to prompt system-level strategies for mandating second opinions. Published by the BMJ Publishing Group Limited.

  11. Variation of ultrasound image lateral spectrum with assumed speed of sound and true scatterer density.

    PubMed

    Gyöngy, Miklós; Kollár, Sára

    2015-02-01

    One method of estimating sound speed in diagnostic ultrasound imaging consists of choosing the speed of sound that generates the sharpest image, as evaluated by the lateral frequency spectrum of the squared B-mode image. In the current work, simulated and experimental data on a typical (47 mm aperture, 3.3-10.0 MHz response) linear array transducer are used to investigate the accuracy of this method. A range of candidate speeds of sound (1240-1740 m/s) was used, with a true speed of sound of 1490 m/s in simulations and 1488 m/s in experiments. Simulations of single point scatterers and two interfering point scatterers at various locations with respect to each other gave estimate errors of 0.0-2.0%. Simulations and experiments of scatterer distributions with a mean scatterer spacing of at least 0.5 mm gave estimate errors of 0.1-4.0%. In the case of lower scatterer spacing, the speed of sound estimates become unreliable due to a decrease in contrast of the sharpness measure between different candidate speeds of sound. This suggests that in estimating speed of sound in tissue, the region of interest should be dominated by a few, sparsely spaced scatterers. Conversely, the decreasing sensitivity of the sharpness measure to speed of sound errors for higher scatterer concentrations suggests a potential method for estimating mean scatterer spacing. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Representation of the Characteristics of Piezoelectric Fiber Composites with Neural Networks

    NASA Astrophysics Data System (ADS)

    Yapici, A.; Bickraj, K.; Yenilmez, A.; Li, M.; Tansel, I. N.; Martin, S. A.; Pereira, C. M.; Roth, L. E.

    2007-03-01

    Ideal sensors for the future should be economical, efficient, highly intelligent, and capable of obtaining their operation power from the environment. The use of piezoelectric fiber composites coupled with a low power microprocessor and backpropagation type neural networks is proposed for the development of a simple sensor to estimate the characteristics of harmonic forces. Three neural networks were used for the estimation of amplitude, gain and variation of the load in the time domain. The average estimation errors of the neural networks were less than 8% in all of the studied cases.

  13. Merging gauge and satellite rainfall with specification of associated uncertainty across Australia

    NASA Astrophysics Data System (ADS)

    Woldemeskel, Fitsum M.; Sivakumar, Bellie; Sharma, Ashish

    2013-08-01

    Accurate estimation of spatial rainfall is crucial for modelling hydrological systems and planning and management of water resources. While spatial rainfall can be estimated either using rain gauge-based measurements or using satellite-based measurements, such estimates are subject to uncertainties due to various sources of errors in either case, including interpolation and retrieval errors. The purpose of the present study is twofold: (1) to investigate the benefit of merging rain gauge measurements and satellite rainfall data for Australian conditions and (2) to produce a database of retrospective rainfall along with a new uncertainty metric for each grid location at any timestep. The analysis involves four steps: First, a comparison of rain gauge measurements and the Tropical Rainfall Measuring Mission (TRMM) 3B42 data at such rain gauge locations is carried out. Second, gridded monthly rain gauge rainfall is determined using thin plate smoothing splines (TPSS) and modified inverse distance weight (MIDW) method. Third, the gridded rain gauge rainfall is merged with the monthly accumulated TRMM 3B42 using a linearised weighting procedure, the weights at each grid being calculated based on the error variances of each dataset. Finally, cross validation (CV) errors at rain gauge locations and standard errors at gridded locations for each timestep are estimated. The CV error statistics indicate that merging of the two datasets improves the estimation of spatial rainfall, and more so where the rain gauge network is sparse. The provision of spatio-temporal standard errors with the retrospective dataset is particularly useful for subsequent modelling applications where input error knowledge can help reduce the uncertainty associated with modelling outcomes.
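    The error-variance-based weighting can be illustrated at a single grid cell. The sketch below is a generic inverse-variance combination rather than the study's exact linearised procedure, and the rainfall and variance values are invented.

    ```python
    import numpy as np

    # Inverse-variance merging of two rainfall estimates at one grid cell: weights
    # are proportional to 1/variance, and the merged variance follows from the same
    # weights. Values are illustrative, not taken from the study.
    r_gauge, var_gauge = 85.0, 12.0**2   # gauge-interpolated rainfall (mm), error variance
    r_trmm, var_trmm = 102.0, 25.0**2    # TRMM 3B42 rainfall (mm), error variance

    w_gauge = (1.0 / var_gauge) / (1.0 / var_gauge + 1.0 / var_trmm)
    w_trmm = 1.0 - w_gauge

    r_merged = w_gauge * r_gauge + w_trmm * r_trmm
    var_merged = 1.0 / (1.0 / var_gauge + 1.0 / var_trmm)

    print(f"merged rainfall = {r_merged:.1f} mm, "
          f"standard error = {np.sqrt(var_merged):.1f} mm (weights {w_gauge:.2f}/{w_trmm:.2f})")
    ```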

  14. A comparative analysis of errors in long-term econometric forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tepel, R.

    1986-04-01

    The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post, that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables and compare forecasts from different sources. Most descriptive studies use an ex ante approach, that is, they evaluate model outputs based on estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine if systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.

  15. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation

    PubMed Central

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-01-01

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation, were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms. PMID:27999361
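    The sketch below is not the paper's adaptive H-infinity algorithm; it is a generic one-dimensional Kalman filter with a simple innovation-based adaptive factor, included only to illustrate how adaptive filtering limits the influence of dynamic model errors. The noise levels and the threshold are assumptions.

    ```python
    import numpy as np

    # Generic sketch: constant-velocity Kalman filter with an innovation-based
    # adaptive inflation of the predicted covariance when the normalized innovation
    # is unexpectedly large (an unmodelled manoeuvre). Not the paper's method.
    rng = np.random.default_rng(3)
    dt, n = 1.0, 100
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])              # position-only measurement
    Q = 0.01 * np.eye(2)                    # nominal process noise
    R = np.array([[4.0]])                   # measurement noise variance

    # Simulated truth with an unmodelled manoeuvre halfway through.
    x_true = np.zeros((n, 2))
    for k in range(1, n):
        accel = 0.5 if k == n // 2 else 0.0
        x_true[k] = F @ x_true[k - 1] + np.array([0.0, accel])
    z = x_true[:, 0] + rng.normal(0.0, 2.0, size=n)

    x, P = np.zeros(2), np.eye(2)
    for k in range(n):
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        y = z[k] - (H @ x)[0]                          # innovation
        S = (H @ P @ H.T + R)[0, 0]
        ratio = y**2 / S
        if ratio > 3.0:                                # assumed threshold
            P *= ratio / 3.0                           # inflate predicted covariance
            S = (H @ P @ H.T + R)[0, 0]
        K = (P @ H.T / S).ravel()                      # update
        x = x + K * y
        P = (np.eye(2) - np.outer(K, H)) @ P

    print(f"final position error: {abs(x[0] - x_true[-1, 0]):.2f}")
    ```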

  16. A New Adaptive H-Infinity Filtering Algorithm for the GPS/INS Integrated Navigation.

    PubMed

    Jiang, Chen; Zhang, Shu-Bi; Zhang, Qiu-Zhao

    2016-12-19

    The Kalman filter is an optimal estimator with numerous applications in technology, especially in systems with Gaussian distributed noise. Moreover, the adaptive Kalman filtering algorithms, based on the Kalman filter, can control the influence of dynamic model errors. In contrast to the adaptive Kalman filtering algorithms, the H-infinity filter is able to address the interference of the stochastic model by minimization of the worst-case estimation error. In this paper, a novel adaptive H-infinity filtering algorithm, which integrates the adaptive Kalman filter and the H-infinity filter in order to perform a comprehensive filtering algorithm, is presented. In the proposed algorithm, a robust estimation method is employed to control the influence of outliers. In order to verify the proposed algorithm, experiments with real data of the Global Positioning System (GPS) and Inertial Navigation System (INS) integrated navigation, were conducted. The experimental results have shown that the proposed algorithm has multiple advantages compared to the other filtering algorithms.

  17. A two-factor error model for quantitative steganalysis

    NASA Astrophysics Data System (ADS)

    Böhme, Rainer; Ker, Andrew D.

    2006-02-01

    Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.

  18. Conservative classical and quantum resolution limits for incoherent imaging

    NASA Astrophysics Data System (ADS)

    Tsang, Mankei

    2018-06-01

    I propose classical and quantum limits to the statistical resolution of two incoherent optical point sources from the perspective of minimax parameter estimation. Unlike earlier results based on the Cramér-Rao bound (CRB), the limits proposed here, based on the worst-case error criterion and a Bayesian version of the CRB, are valid for any biased or unbiased estimator and obey photon-number scalings that are consistent with the behaviours of actual estimators. These results prove that, from the minimax perspective, the spatial-mode demultiplexing measurement scheme recently proposed by Tsang, Nair, and Lu [Phys. Rev. X 2016, 6 031033.] remains superior to direct imaging for sufficiently high photon numbers.

  19. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
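    The authors use DREAM-ZS with nonparametric kernel error models; the sketch below is a much simpler random-walk Metropolis calibration of a toy model with an explicit additive structural-error term, meant only to illustrate treating model inadequacy as an inferred quantity. All settings are assumptions.

    ```python
    import numpy as np

    # Toy Bayesian calibration: "model" y = a * x plus an explicit structural bias b,
    # sampled with random-walk Metropolis. Illustrative stand-in for the paper's
    # DREAM-ZS / kernel-based error-model approach, not a reimplementation of it.
    rng = np.random.default_rng(7)
    x = np.linspace(0.0, 10.0, 40)
    y_obs = 2.0 * x + 1.5 + rng.normal(0.0, 0.5, size=x.size)   # truth has bias 1.5

    def log_post(theta):
        a, b = theta
        resid = y_obs - (a * x + b)
        # Gaussian likelihood (sigma = 0.5 assumed known) plus weak normal priors.
        return -0.5 * np.sum(resid**2) / 0.5**2 - (a**2 + b**2) / (2 * 10.0**2)

    theta = np.array([1.0, 0.0])
    lp = log_post(theta)
    samples = []
    for i in range(20000):
        prop = theta + rng.normal(0.0, 0.05, size=2)   # random-walk proposal
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        if i > 5000:                                   # discard burn-in
            samples.append(theta.copy())

    samples = np.array(samples)
    print("posterior mean a, b:", samples.mean(axis=0).round(3))
    print("posterior std  a, b:", samples.std(axis=0).round(3))
    ```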

  20. [Spinal manipulative therapy and cervical artery dissections].

    PubMed

    Saxler, G; Schopphoff, E; Quitmann, H; Quint, U

    2005-06-01

    Severe complications after cervical spine manipulation are rare. As experts for medical treatment errors, we received, between July 2002 and February 2004, cases with serious central nervous system complications after manipulation. Five vertebral artery dissections with subsequent brain infarction were registered. In all cases, the patients showed complete persisting remission of symptoms. In addition, a kinematic estimation model was developed to study the possible causes of vertebral artery damage. We were able to demonstrate that material extension depends on cervical rotation and on the "free length" of the vertebral artery in the upper cervical spine.

  1. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    PubMed

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  2. Computer-Aided Surgical Simulation in Head and Neck Reconstruction: A Cost Comparison among Traditional, In-House, and Commercial Options.

    PubMed

    Li, Sean S; Copeland-Halperin, Libby R; Kaminsky, Alexander J; Li, Jihui; Lodhi, Fahad K; Miraliakbari, Reza

    2018-06-01

     Computer-aided surgical simulation (CASS) has redefined surgery, improved precision and reduced the reliance on intraoperative trial-and-error manipulations. CASS is provided by third-party services; however, it may be cost-effective for some hospitals to develop in-house programs. This study provides the first cost analysis comparison among traditional (no CASS), commercial CASS, and in-house CASS for head and neck reconstruction.  The costs of three-dimensional (3D) pre-operative planning for mandibular and maxillary reconstructions were obtained from an in-house CASS program at our large tertiary care hospital in Northern Virginia, as well as a commercial provider (Synthes, Paoli, PA). A cost comparison was performed among these modalities and extrapolated in-house CASS costs were derived. The calculations were based on estimated CASS use with cost structures similar to our institution and sunk costs were amortized over 10 years.  Average operating room time was estimated at 10 hours, with an average of 2 hours saved with CASS. The hourly cost to the hospital for the operating room (including anesthesia and other ancillary costs) was estimated at $4,614/hour. Per case, traditional cases were $46,140, commercial CASS cases were $40,951, and in-house CASS cases were $38,212. Annual in-house CASS costs were $39,590.  CASS reduced operating room time, likely due to improved efficiency and accuracy. Our data demonstrate that hospitals with similar cost structure as ours, performing greater than 27 cases of 3D head and neck reconstructions per year can see a financial benefit from developing an in-house CASS program. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  3. On the error in crop acreage estimation using satellite (LANDSAT) data

    NASA Technical Reports Server (NTRS)

    Chhikara, R. (Principal Investigator)

    1983-01-01

    The problem of crop acreage estimation using satellite data is discussed. Bias and variance of a crop proportion estimate in an area segment obtained from the classification of its multispectral sensor data are derived as functions of the means, variances, and covariance of error rates. The linear discriminant analysis and the class proportion estimation for the two class case are extended to include a third class of measurement units, where these units are mixed on ground. Special attention is given to the investigation of mislabeling in training samples and its effect on crop proportion estimation. It is shown that the bias and variance of the estimate of a specific crop acreage proportion increase as the disparity in mislabeling rates between two classes increases. Some interaction is shown to take place, causing the bias and the variance to decrease at first and then to increase, as the mixed unit class varies in size from 0 to 50 percent of the total area segment.
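    A small numeric illustration of the two-class setting discussed above, using an assumed simple relation rather than the report's full derivation: with omission and commission error rates e_om and e_com, the expected classified proportion is p(1 - e_om) + (1 - p)e_com, so unequal error rates leave a bias.

    ```python
    # Assumed two-class illustration: bias of a classified crop-proportion estimate
    # under omission error e_om (crop labelled as other) and commission error e_com
    # (other labelled as crop). Not the report's exact derivation.
    def classification_bias(p, e_om, e_com):
        expected = p * (1.0 - e_om) + (1.0 - p) * e_com
        return expected - p

    p = 0.30   # true crop proportion (illustrative)
    for e_om, e_com in [(0.10, 0.10), (0.20, 0.05), (0.05, 0.20)]:
        b = classification_bias(p, e_om, e_com)
        print(f"e_om={e_om:.2f}, e_com={e_com:.2f}: bias = {b:+.3f}")
    ```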

  4. The Effects of Spatial Diversity and Imperfect Channel Estimation on Wideband MC-DS-CDMA and MC-CDMA

    DTIC Science & Technology

    2009-10-01

    In our previous work, we compared the theoretical bit error rates of multi-carrier direct sequence code division multiple access (MC-DS-CDMA) and...consider only those cases where MC-CDMA has higher frequency diversity than MC-DS-CDMA. Since increases in diversity yield diminishing gains, we conclude

  5. Ideal, nonideal, and no-marker variables: The confirmatory factor analysis (CFA) marker technique works when it matters.

    PubMed

    Williams, Larry J; O'Boyle, Ernest H

    2015-09-01

    A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.

  6. Improving Accuracy and Temporal Resolution of Learning Curve Estimation for within- and across-Session Analysis

    PubMed Central

    Tabelow, Karsten; König, Reinhard; Polzehl, Jörg

    2016-01-01

    Estimation of learning curves is ubiquitously based on proportions of correct responses within moving trial windows. It is thereby tacitly assumed that learning performance is constant within the moving windows, which, however, is often not the case. In the present study we demonstrate that violations of this assumption lead to systematic errors in the analysis of learning curves, and we explored the dependency of these errors on window size, different statistical models, and learning phase. To reduce these errors in the analysis of single-subject data as well as on the population level, we propose adequate statistical methods for the estimation of learning curves and the construction of confidence intervals, trial by trial. Applied to data from an avoidance learning experiment with rodents, these methods revealed performance changes occurring at multiple time scales within and across training sessions which were otherwise obscured in the conventional analysis. Our work shows that the proper assessment of the behavioral dynamics of learning at high temporal resolution can shed new light on specific learning processes, and, thus, makes it possible to refine existing learning concepts. It further disambiguates the interpretation of neurophysiological signal changes recorded during training in relation to learning. PMID:27303809
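    A toy demonstration (not the authors' method) of why the moving-window assumption matters: when performance jumps abruptly, a moving-window proportion smears the change across the window, whereas short-block estimates with approximate binomial confidence intervals localise it better. The simulated data and window settings are assumptions.

    ```python
    import numpy as np

    # Simulated Bernoulli responses with an abrupt performance jump at trial 100.
    rng = np.random.default_rng(11)
    p_true = np.concatenate([np.full(100, 0.3), np.full(100, 0.8)])
    responses = rng.uniform(size=p_true.size) < p_true

    # Moving-window proportion of correct responses (window straddles the jump).
    window = 40
    moving = np.convolve(responses, np.ones(window) / window, mode="valid")
    print("moving-window estimate around the jump:",
          np.round(moving[80 - window // 2: 80 - window // 2 + 5], 2))

    # Block-wise estimates with approximate 95% binomial confidence intervals.
    for start in range(80, 140, 20):
        block = responses[start:start + 20]
        p_hat = block.mean()
        se = np.sqrt(p_hat * (1 - p_hat) / block.size)
        print(f"trials {start}-{start + 19}: p = {p_hat:.2f} +/- {1.96 * se:.2f}")
    ```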

  7. Multi-Unmanned Aerial Vehicle (UAV) Cooperative Fault Detection Employing Differential Global Positioning (DGPS), Inertial and Vision Sensors.

    PubMed

    Heredia, Guillermo; Caballero, Fernando; Maza, Iván; Merino, Luis; Viguria, Antidio; Ollero, Aníbal

    2009-01-01

    This paper presents a method to increase the reliability of Unmanned Aerial Vehicle (UAV) sensor Fault Detection and Identification (FDI) in a multi-UAV context. Differential Global Positioning System (DGPS) and inertial sensors are used for sensor FDI in each UAV. The method uses additional position estimations that augment individual UAV FDI system. These additional estimations are obtained using images from the same planar scene taken from two different UAVs. Since accuracy and noise level of the estimation depends on several factors, dynamic replanning of the multi-UAV team can be used to obtain a better estimation in case of faults caused by slow growing errors of absolute position estimation that cannot be detected by using local FDI in the UAVs. Experimental results with data from two real UAVs are also presented.

  8. An improved silhouette for human pose estimation

    NASA Astrophysics Data System (ADS)

    Hawes, Anthony H.; Iftekharuddin, Khan M.

    2017-08-01

    We propose a novel method for analyzing images that exploits the natural lines of a human pose to find areas where self-occlusion could be present. Self-occlusion causes several modern human pose estimation methods to misidentify body parts, which reduces the performance of most action recognition algorithms. Our method is motivated by the observation that, in several cases, occlusion can be reasoned about using only the boundary lines of limbs. An intelligent edge detection algorithm based on the above principle could be used to augment the silhouette with information useful for pose estimation algorithms and push forward progress on occlusion handling for human action recognition. The algorithm described is applicable to computer vision scenarios involving 2D images and (appropriately flattened) 3D images.

  9. Smooth conditional distribution function and quantiles under random censorship.

    PubMed

    Leconte, Eve; Poiraud-Casanova, Sandrine; Thomas-Agnan, Christine

    2002-09-01

    We consider a nonparametric random design regression model in which the response variable is possibly right censored. The aim of this paper is to estimate the conditional distribution function and the conditional alpha-quantile of the response variable. We restrict attention to the case where the response variable as well as the explanatory variable are unidimensional and continuous. We propose and discuss two classes of estimators which are smooth with respect to the response variable as well as to the covariate. Some simulations demonstrate that the new methods have better mean square error performances than the generalized Kaplan-Meier estimator introduced by Beran (1981) and considered in the literature by Dabrowska (1989, 1992) and Gonzalez-Manteiga and Cadarso-Suarez (1994).

  10. A Case of Error Disclosure: A Communication Privacy Management Analysis

    PubMed Central

    Petronio, Sandra; Helft, Paul R.; Child, Jeffrey T.

    2013-01-01

    To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio's theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insights into the way choices are made by the clinicians in telling patients about the mistake have the potential to address reasons for resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomena, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, analysis based on CPM theory is offered to gain insights into conversational routines and disclosure management choices of revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient's family members. These patterns unfold privacy management decisions on the part of the clinician that impact how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relationship to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: Much of the mission central to public health sits squarely on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assuming ownership and control over information, unwittingly following conversational scripts that convey misleading messages, and the difficulty in regulating privacy boundaries in the stressful circumstances that occur with error disclosures. As a consequence, the potential contribution to public health is the ability to more clearly see the significance of the disclosure process that has implications for many public health issues. PMID:25170501

  11. Learning without Borders: A Review of the Implementation of Medical Error Reporting in Médecins Sans Frontières

    PubMed Central

    Shanks, Leslie; Bil, Karla; Fernhout, Jena

    2015-01-01

    Objective To analyse the results from the first 3 years of implementation of a medical error reporting system in Médecins Sans Frontières-Operational Centre Amsterdam (MSF) programs. Methodology A medical error reporting policy was developed with input from frontline workers and introduced to the organisation in June 2010. The definition of medical error used was “the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim.” All confirmed error reports were entered into a database without the use of personal identifiers. Results 179 errors were reported from 38 projects in 18 countries over the period of June 2010 to May 2013. The rate of reporting was 31, 42, and 106 incidents/year for reporting year 1, 2 and 3 respectively. The majority of errors were categorized as dispensing errors (62 cases or 34.6%), errors or delays in diagnosis (24 cases or 13.4%) and inappropriate treatment (19 cases or 10.6%). The impact of the error was categorized as no harm (58, 32.4%), harm (70, 39.1%), death (42, 23.5%) and unknown in 9 (5.0%) reports. Disclosure to the patient took place in 34 cases (19.0%), did not take place in 46 (25.7%), was not applicable for 5 (2.8%) cases and not reported for 94 (52.5%). Remedial actions introduced at headquarters level included guideline revisions and changes to medical supply procedures. At field level improvements included increased training and supervision, adjustments in staffing levels, and adaptations to the organization of the pharmacy. Conclusion It was feasible to implement a voluntary reporting system for medical errors despite the complex contexts in which MSF intervenes. The reporting policy led to system changes that improved patient safety and accountability to patients. Challenges remain in achieving widespread acceptance of the policy as evidenced by the low reporting and disclosure rates. PMID:26381622

  12. Spatial uncertainty of a geoid undulation model in Guayaquil, Ecuador

    NASA Astrophysics Data System (ADS)

    Chicaiza, E. G.; Leiva, C. A.; Arranz, J. J.; Buenaño, X. E.

    2017-06-01

    Geostatistics is a discipline that deals with the statistical analysis of regionalized variables. In this case study, geostatistics is used to estimate geoid undulation in the rural area of Guayaquil, Ecuador. The geostatistical approach was chosen because it provides an estimate of the prediction error of the resulting map. The open source statistical software R was used, mainly its geoR, gstat and RGeostats libraries. Exploratory data analysis (EDA), trend analysis and structural analysis were carried out. Automatic model fitting by iterative least squares and other fitting procedures were employed to fit the variogram. Finally, kriging with the Bouguer gravity anomaly as external drift and universal kriging were used to obtain a detailed map of geoid undulation. The estimation errors fell within the interval [-0.5, +0.5] m, with a maximum estimation standard deviation of 2 mm for the interpolation method applied. The error distribution of the geoid undulation map obtained in this study provides a better result than the publicly available Earth gravitational models for the study area, according to a comparison with independent validation points. The main goal of this paper is to confirm the feasibility of using geoid undulations from Global Navigation Satellite Systems and leveling field measurements together with geostatistical techniques for use in high-accuracy engineering projects.

  13. Uncertainty analysis for fluorescence tomography with Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Reinbacher-Köstinger, Alice; Freiberger, Manuel; Scharfetter, Hermann

    2011-07-01

    Fluorescence tomography seeks to image an inaccessible fluorophore distribution inside an object like a small animal by injecting light at the boundary and measuring the light emitted by the fluorophore. Optical parameters (e.g. the conversion efficiency or the fluorescence life-time) of certain fluorophores depend on physiologically interesting quantities like the pH value or the oxygen concentration in the tissue, which allows functional rather than just anatomical imaging. To reconstruct the concentration and the life-time from the boundary measurements, a nonlinear inverse problem has to be solved. It is, however, difficult to estimate the uncertainty of the reconstructed parameters in case of iterative algorithms and a large number of degrees of freedom. Uncertainties in fluorescence tomography applications arise from model inaccuracies, discretization errors, data noise and a priori errors. Thus, a Markov chain Monte Carlo method (MCMC) was used to consider all these uncertainty factors exploiting Bayesian formulation of conditional probabilities. A 2-D simulation experiment was carried out for a circular object with two inclusions. Both inclusions had a 2-D Gaussian distribution of the concentration and constant life-time inside of a representative area of the inclusion. Forward calculations were done with the diffusion approximation of Boltzmann's transport equation. The reconstruction results show that the percent estimation error of the lifetime parameter is by a factor of approximately 10 lower than that of the concentration. This finding suggests that lifetime imaging may provide more accurate information than concentration imaging only. The results must be interpreted with caution, however, because the chosen simulation setup represents a special case and a more detailed analysis remains to be done in future to clarify if the findings can be generalized.

  14. Linear algebra of the permutation invariant Crow-Kimura model of prebiotic evolution.

    PubMed

    Bratus, Alexander S; Novozhilov, Artem S; Semenov, Yuri S

    2014-10-01

    A particular case of the famous quasispecies model - the Crow-Kimura model with a permutation invariant fitness landscape - is investigated. Using the fact that the mutation matrix in the case of a permutation invariant fitness landscape has a special tridiagonal form, a change of the basis is suggested such that in the new coordinates a number of analytical results can be obtained. In particular, using the eigenvectors of the mutation matrix as the new basis, we show that the quasispecies distribution approaches a binomial one and give simple estimates for the speed of convergence. Another consequence of the suggested approach is a parametric solution to the system of equations determining the quasispecies. Using this parametric solution we show that our approach leads to exact asymptotic results in some cases, which are not covered by the existing methods. In particular, we are able to present not only the limit behavior of the leading eigenvalue (mean population fitness), but also the exact formulas for the limit quasispecies eigenvector for special cases. For instance, this eigenvector has a geometric distribution in the case of the classical single peaked fitness landscape. On the biological side, we propose a mathematical definition, based on the closeness of the quasispecies to the binomial distribution, which can be used as an operational definition of the notorious error threshold. Using this definition, we suggest two approximate formulas to estimate the critical mutation rate after which the quasispecies delocalization occurs. Copyright © 2014 Elsevier Inc. All rights reserved.
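    A minimal numerical sketch of the permutation-invariant setting described above: sequences are lumped into classes by the number of mutant sites, the mutation matrix is tridiagonal, and the mean fitness and quasispecies follow from the leading eigenpair. The single-peaked landscape and parameter values are illustrative assumptions.

    ```python
    import numpy as np

    # Permutation-invariant Crow-Kimura model on mutant classes k = 0..N with a
    # tridiagonal mutation matrix; the mean population fitness is the leading
    # eigenvalue of diag(fitness) + mu * M and the quasispecies is the leading
    # eigenvector. Single-peaked landscape; N, mu and the peak height are assumed.
    N, mu = 50, 0.10
    fitness = np.zeros(N + 1)
    fitness[0] = 10.0                     # single peak at the master class k = 0

    M = np.zeros((N + 1, N + 1))
    for k in range(N + 1):
        M[k, k] = -N                      # total mutation outflow from class k
        if k > 0:
            M[k - 1, k] = k               # back-mutation: class k -> k - 1
        if k < N:
            M[k + 1, k] = N - k           # forward mutation: class k -> k + 1

    A = np.diag(fitness) + mu * M
    eigvals, eigvecs = np.linalg.eig(A)
    lead = np.argmax(eigvals.real)
    mean_fitness = eigvals.real[lead]
    quasispecies = np.abs(eigvecs[:, lead].real)
    quasispecies /= quasispecies.sum()

    print(f"mean population fitness (leading eigenvalue): {mean_fitness:.3f}")
    print("quasispecies probabilities for classes 0..4:", np.round(quasispecies[:5], 3))
    ```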

  15. MALPRACTICE AND THE QUALITY OF CARE IN RETINOPATHY OF PREMATURITY (AN AMERICAN OPHTHALMOLOGICAL SOCIETY THESIS)

    PubMed Central

    Reynolds, James D.

    2007-01-01

    Purpose A review of retinopathy of prematurity (ROP) malpractice cases will identify specific, repetitive problems in the provision of care and the reasons underlying these problems. Opportunities to improve the quality of care provided to premature infants with ROP will result. Methods A retrospective review of a series of 13 ROP malpractice cases in which the author served as a paid consultant, as well as a review of the literature for additional cases, was conducted. The series of 13 involved a review of the entire medical record as well as testimony and depositions. The characteristics of each case are tabulated, including state, date, allegations, defendants, disposition, award, the medical facts and care issues involved, and the judgment of medical error. In addition, a merit review was performed on the care in each case, and an error assessment was performed. Results The quality of care issues included neonatology failure to refer or follow up in 8 of 13, failure to adequately supervise resident care in 2 of 13, ophthalmologic failure to follow up in 6 of 13, and failure to properly diagnose and manage in 9 of 13. The latter included 4 of 13 that hinged on zone III issues and the presence or absence of full nasal vascularization with or without previous zone II disease. Merit review found negligent error by at least one party in 12 of 13. Ophthalmology error was found in 6 of 13. Malpractice, ie, negligent error causing negligent harm, was judged to be present in 9 of 13. Conclusions Negligent errors are common in malpractice cases that proceed to disposition. There are a limited number of repetitive errors that produce malpractice. An explanation of how these errors occur, coupled with the pertinent pathophysiology, affords an excellent opportunity to improve patient care. PMID:18427626

  16. Clinical validation of an algorithm to correct the error in the keratometric estimation of corneal power in normal eyes.

    PubMed

    Piñero, David P; Camps, Vicente J; Mateo, Verónica; Ruiz-Fortes, Pedro

    2012-08-01

    To validate clinically in a normal healthy population an algorithm to correct the error in the keratometric estimation of corneal power based on the use of a variable keratometric index of refraction (n_k). Medimar International Hospital (Oftalmar) and University of Alicante, Alicante, Spain. Case series. Corneal power was measured with a Scheimpflug photography-based system (Pentacam software version 1.14r01) in healthy eyes with no previous ocular surgery. In all cases, keratometric corneal power was also estimated using an adjusted value of n_k that is dependent on the anterior corneal radius (r_1c) as follows: n_kadj = -0.0064286 r_1c + 1.37688. Agreement between the Gaussian (P_c(Gauss)) and adjusted keratometric (P_kadj) corneal power values was evaluated. The study evaluated 92 eyes (92 patients; age range 15 to 64 years). The mean difference between P_c(Gauss) and P_kadj was -0.02 diopter (D) ± 0.22 (SD) (P=.43). A very strong, statistically significant correlation was found between both corneal powers (r = .994, P<.01). The range of agreement between P_c(Gauss) and P_kadj was 0.44 D, with limits of agreement of -0.46 and +0.42 D. In addition, a very strong, statistically significant correlation of the difference between P_c(Gauss) and P_kadj and the posterior corneal radius was found (r = 0.96, P<.01). The imprecision in the calculation of corneal power using keratometric estimation can be minimized in clinical practice by using a variable keratometric index that depends on the radius of the anterior corneal surface. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
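    A small worked illustration of the adjusted index quoted above, assuming the standard keratometric relation P = (n_k - 1)/r_1c with the radius expressed in metres; the example radius is an arbitrary choice, not a study value.

    ```python
    # Illustration only: corneal power from the standard keratometric formula
    # P = (n_k - 1) / r_1c, once with the adjusted index quoted in the abstract,
    # n_kadj = -0.0064286 * r_1c + 1.37688 (r_1c in mm), and once with the
    # classical value n_k = 1.3375. The example radius is an assumption.
    def adjusted_keratometric_power(r1c_mm):
        n_kadj = -0.0064286 * r1c_mm + 1.37688   # radius-dependent adjusted index
        return (n_kadj - 1.0) / (r1c_mm / 1000.0)

    def classical_keratometric_power(r1c_mm, n_k=1.3375):
        return (n_k - 1.0) / (r1c_mm / 1000.0)

    r1c = 7.8                                    # example anterior corneal radius (mm)
    print(f"adjusted keratometric power:  {adjusted_keratometric_power(r1c):.2f} D")
    print(f"classical keratometric power: {classical_keratometric_power(r1c):.2f} D")
    ```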

  17. Group elicitations yield more consistent, yet more uncertain experts in understanding risks to ecosystem services in New Zealand bays

    PubMed Central

    Sinner, Jim; Ellis, Joanne; Kandlikar, Milind; Halpern, Benjamin S.; Satterfield, Terre; Chan, Kai

    2017-01-01

    The elicitation of expert judgment is an important tool for assessment of risks and impacts in environmental management contexts, and especially important as decision-makers face novel challenges where prior empirical research is lacking or insufficient. Evidence-driven elicitation approaches typically involve techniques to derive more accurate probability distributions under fairly specific contexts. Experts are, however, prone to overconfidence in their judgements. Group elicitations with diverse experts can reduce expert overconfidence by allowing cross-examination and reassessment of prior judgements, but groups are also prone to uncritical “groupthink” errors. When the problem context is underspecified the probability that experts commit groupthink errors may increase. This study addresses how structured workshops affect expert variability among and certainty within responses in a New Zealand case study. We find that experts’ risk estimates before and after a workshop differ, and that group elicitations provided greater consistency of estimates, yet also greater uncertainty among experts, when addressing prominent impacts to four different ecosystem services in coastal New Zealand. After group workshops, experts provided more consistent ranking of risks and more consistent best estimates of impact through increased clarity in terminology and dampening of extreme positions, yet probability distributions for impacts widened. The results from this case study suggest that group elicitations have favorable consequences for the quality and uncertainty of risk judgments within and across experts, making group elicitation techniques invaluable tools in contexts of limited data. PMID:28767694

  18. Geographically correlated orbit error

    NASA Technical Reports Server (NTRS)

    Rosborough, G. W.

    1989-01-01

    The dominant error source in estimating the orbital position of a satellite from ground based tracking data is the modeling of the Earth's gravity field. The resulting orbit error due to gravity field model errors is predominantly long wavelength in nature. This results in an orbit error signature that is strongly correlated over distances comparable to the size of ocean basins. Anderle and Hoskin (1977) have shown that the orbit error along a given ground track is also correlated to some degree with the orbit error along adjacent ground tracks. This cross track correlation is verified here and is found to be significant out to nearly 1000 kilometers in the case of TOPEX/POSEIDON when using the GEM-T1 gravity model. Finally, it was determined that even the orbit error at points where ascending and descending ground traces cross is somewhat correlated. The implication of these various correlations is that the orbit error due to gravity error is geographically correlated. Such correlations have direct implications when using altimetry to recover oceanographic signals.

  19. Large-area Mapping of Forest Cover and Biomass using ALOS PALSAR

    NASA Astrophysics Data System (ADS)

    Cartus, O.; Kellndorfer, J. M.; Walker, W. S.; Goetz, S. J.; Laporte, N.; Bishop, J.; Cormier, T.; Baccini, A.

    2011-12-01

    In the frame of a pan-tropical mapping project, we aim at producing high-resolution forest cover maps from ALOS PALSAR. The ALOS data were obtained through the Americas ALOS Data Node (AADN) at ASF. For the forest cover classification, a pan-tropical network of calibrated reference data was generated from ancillary satellite data (ICESAT GLAS). These data are used to classify PALSAR swath data, which are then combined into continental forest probability maps. The maps are validated with withheld training data as well as through independent operator verification against very high-resolution imagery. In addition, we aim at developing robust algorithms for mapping forest biophysical parameters such as stem volume or biomass using the synergy of PALSAR, optical, and Lidar data. Currently we are testing different approaches for the mapping of forest biophysical parameters. 1) For the showcase scenario of Mexico, where we have access to ~1400 PALSAR FBD images as well as the 30 m Landsat Vegetation Continuous Fields (VCF) product, we test a traditional ground-data-based approach. The PALSAR HH/HV intensity data and VCF are used as predictor layers in RandomForest to predict aboveground forest biomass (see the sketch below). A network of 40,000 in situ biomass plots is used for model development (for each PALSAR swath) as well as for validation. With this approach a first 30 m biomass map for all of Mexico was produced. An initial validation of the map resulted in an RMSE of 41 t/ha and an R2 of 0.42. Pronounced differences between ecozones were observed. In some areas the retrieval reached an R2 of 0.6 (e.g., pine-oak forests), whereas in dry woodlands, for instance, the retrieval accuracy was much lower (R2 of 0.1). A further limitation of the approach was that, in some cases, too few sample plots were available to develop models for each ALOS swath. 2) Chile: At a forest site in central Chile dominated by Pinus radiata plantations, the synergy of ALOS PALSAR, Landsat, and small-footprint Lidar is investigated for mapping forest growing stock volume and canopy height. Canopy height models with 1 m pixel size, generated from the first/last-return Lidar data, were used to produce surrogate sampling plots to upscale stand-level inventory measurements to wall-to-wall maps with the aid of multi-temporal ALOS and Landsat data. The Lidar data allowed the estimation of volume and canopy height with high accuracy: 23% error in the case of volume and 7% error in the case of height. Using the Lidar estimates as surrogate training data for models relating the ALOS backscatter to volume and height, we obtained retrieval errors of ~60% for volume and 31% for height when using only one ALOS FBD image. Significant improvements were achieved 1) when using three ALOS images for the retrieval (50% error for volume and 26% for height) and 2) when also including Landsat data (42% error for volume and 20% for height).
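
    The ground-data-based retrieval described in point 1) can be illustrated with a short, hedged sketch. It is not the project's actual processing chain: the predictor arrays, plot data, and random-forest settings below are synthetic placeholders standing in for the PALSAR HH/HV backscatter, VCF tree cover, and in situ biomass plots.

```python
# Hedged sketch of a RandomForest biomass retrieval with PALSAR HH/HV backscatter
# and Landsat VCF tree cover as predictors. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(-7, 2, n),     # HH backscatter (dB), synthetic
    rng.normal(-13, 2, n),    # HV backscatter (dB), synthetic
    rng.uniform(0, 100, n),   # VCF tree cover (%), synthetic
])
y = 1.5 * X[:, 2] + 3.0 * (X[:, 1] + 13) + rng.normal(0, 20, n)  # biomass (t/ha), synthetic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"RMSE = {rmse:.1f} t/ha, R2 = {r2_score(y_te, pred):.2f}")
```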

  20. Extreme interplanetary rotational discontinuities at 1 AU

    NASA Astrophysics Data System (ADS)

    Lepping, R. P.; Wu, C.-C.

    2005-11-01

    This study is concerned with the identification and description of a special subset of four Wind interplanetary rotational discontinuities (from an earlier study of 134 directional discontinuities by Lepping et al. (2003)) with some "extreme" characteristics, in the sense that every case has (1) an almost planar current sheet surface, (2) a very large discontinuity angle (ω), (3) at least moderately strong normal field components (>0.8 nT), and (4) the overall set has a very broad range of transition layer thicknesses, with one being as thick as 50 RE and another at the other extreme being 1.6 RE, most being much thicker than are usually studied. Each example has a well-determined surface normal (n) according to minimum variance analysis and corroborated via time delay checking of the discontinuity with observations at IMP 8 by employing the local surface planarity. From the variance analyses, most of these cases had unusually large ratios of intermediate-to-minimum eigenvalues (λI/λmin), being on average 32 for three cases (with a fourth being much larger), indicating compact current sheet transition zones, another (the fifth) extreme property. For many years there has been a controversy as to the relative distribution of rotational (RDs) to tangential discontinuities (TDs) in the solar wind at 1 AU (and elsewhere, such as between the Sun and Earth), even to the point where some authors have suggested that RDs with large ∣Bn∣s are probably not generated or, if generated, are unstable and therefore very rare. Some of this disagreement apparently has been due to the different selection criteria used, e.g., some allowed eigenvalue ratios (λI/λmin) to be almost an order of magnitude lower than 32 in estimating n, usually introducing unacceptable error in n and therefore also in ∣Bn∣. However, we suggest that RDs may not be so rare at 1 AU, but good quality cases (where ∣Bn∣ confidently exceeds the error in ∣Bn∣) appear to be uncommon, and further, cases of large ∣Bn∣ may indeed be rare. Finally, the issue of estimating the number of RDs-to-TDs was revisited using the full 134 events of the original Lepping et al. (2003) study (which utilized the RDs' propagation speeds for this estimation, an unconventional approach) but now by considering only normal field components, the more conventional approach. This resulted in slightly different conclusions, depending on specific assumptions used, making the unconventional approach suspect.
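
    The surface normals discussed above come from minimum variance analysis of the magnetic field. The following is a minimal sketch of that step under its usual formulation (the normal is the eigenvector of the field covariance matrix with the smallest eigenvalue); array and function names are illustrative, not the authors' code.

```python
# Minimal sketch of minimum variance analysis (MVA) for a discontinuity normal.
# B is an (N, 3) array of magnetic field samples across the transition layer.
import numpy as np

def minimum_variance_normal(B):
    B = np.asarray(B, dtype=float)
    M = np.cov(B, rowvar=False)           # 3x3 magnetic variance matrix
    eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    n_hat = eigvecs[:, 0]                 # minimum-variance direction = surface normal
    ratio = eigvals[1] / eigvals[0]       # lambda_I / lambda_min quality indicator
    return n_hat, ratio
```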

  1. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.

    2018-06-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
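
    A hedged sketch of the local-basis step described above: at a given MCMC iteration, the K nearest entries of the model-error dictionary (nearest in model-parameter space) define an orthonormal basis onto which the residual's model-error component is projected and removed. Function and variable names are illustrative, not the authors' implementation.

```python
# Hedged sketch: remove the model-error component of a residual using an orthogonal
# basis built from the K nearest entries of a model-error dictionary.
import numpy as np

def remove_model_error(residual, dictionary, keys, query_key, K=10):
    """residual: (n_data,); dictionary: (n_entries, n_data) model-error realizations;
    keys: (n_entries, n_par) model parameters associated with each dictionary entry."""
    d = np.linalg.norm(keys - query_key, axis=1)
    idx = np.argsort(d)[:K]                                           # K nearest neighbours
    U, _, _ = np.linalg.svd(dictionary[idx].T, full_matrices=False)   # orthonormal basis
    return residual - U @ (U.T @ residual)                            # projected-out residual
```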

  2. Evaluation of a moderate resolution, satellite-based impervious surface map using an independent, high-resolution validation data set

    USGS Publications Warehouse

    Jones, J.W.; Jarnagin, T.

    2009-01-01

    Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products, high quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite predicted area - reference area) and relative error [(satellite predicted area - reference area) / reference area] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment provided for evaluation of both validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.
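
    A minimal sketch of the two per-region error metrics as defined above; names and the example values are illustrative.

```python
# Minimal sketch of the per-region error metrics defined in the abstract.
def isa_errors(predicted_area, reference_area):
    absolute = predicted_area - reference_area  # same units as the input areas
    relative = absolute / reference_area        # dimensionless fraction
    return absolute, relative

print(isa_errors(95.0, 100.0))  # (-5.0, -0.05): a 5% underestimate
```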

  3. Optimal Policy of Cross-Layer Design for Channel Access and Transmission Rate Adaptation in Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    He, Hao; Wang, Jun; Zhu, Jiang; Li, Shaoqian

    2010-12-01

    In this paper, we investigate the cross-layer design of joint channel access and transmission rate adaptation in CR networks with multiple channels for both centralized and decentralized cases. Our target is to maximize the throughput of the CR network under a transmission power constraint while taking spectrum sensing errors into account. In the centralized case, this problem is formulated as a special constrained Markov decision process (CMDP), which can be solved by the standard linear programming (LP) method. As the complexity of finding the optimal policy by LP increases exponentially with the size of the action space and state space, we further apply action set reduction and state aggregation to reduce the complexity without loss of optimality. Meanwhile, for the convenience of implementation, we also consider the pure policy design and analyze the corresponding characteristics. In the decentralized case, where only local information is available and there is no coordination among the CR users, we prove the existence of the constrained Nash equilibrium and obtain the optimal decentralized policy. Finally, in the case where the traffic load parameters of the licensed users are unknown to the CR users, we propose two methods to estimate these parameters for two different cases. Numerical results validate the theoretical analysis.

  4. Lognormal kriging for the assessment of reliability in groundwater quality control observation networks

    USGS Publications Warehouse

    Candela, L.; Olea, R.A.; Custodio, E.

    1988-01-01

    Groundwater quality observation networks are examples of discontinuous sampling on variables presenting spatial continuity and highly skewed frequency distributions. Anywhere in the aquifer, lognormal kriging provides estimates of the variable being sampled and a standard error of the estimate. The average and the maximum standard error within the network can be used to dynamically improve the network sampling efficiency or to find a design able to assure a given reliability level. The approach does not require the formulation of any physical model for the aquifer or any actual sampling of hypothetical configurations. A case study is presented using the network monitoring saltwater intrusion into the Llobregat delta confined aquifer, Barcelona, Spain. The variable chloride concentration, used to trace the intrusion, exhibits sudden changes within short distances which make the standard error fairly invariable to changes in sampling pattern and to substantial fluctuations in the number of wells. © 1988.
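
    For illustration, here is a minimal sketch of the lognormal kriging step described above: ordinary kriging applied to log-transformed concentrations returns both an estimate and its standard error at an unsampled location. The exponential covariance model and its parameters are assumptions made for the example, and the back-transformation to concentration units (which requires the usual lognormal correction) is omitted.

```python
# Hedged sketch: ordinary kriging of log-transformed concentrations with an assumed
# exponential covariance model. Returns the log-scale estimate and standard error.
import numpy as np

def lognormal_krige(xy, z, x0, sill=1.0, corr_range=2000.0):
    y = np.log(z)                                        # work in log space
    cov = lambda h: sill * np.exp(-h / corr_range)       # exponential covariance model
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    n = len(y)
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(d)
    K[n, n] = 0.0                                        # ordinary-kriging constraint row
    rhs = np.ones(n + 1)
    rhs[:n] = cov(np.linalg.norm(xy - x0, axis=1))
    sol = np.linalg.solve(K, rhs)
    w, mu = sol[:n], sol[n]
    est = w @ y                                          # estimate of log-concentration
    var = sill - w @ rhs[:n] - mu                        # kriging variance (log scale)
    return est, np.sqrt(max(var, 0.0))
```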

  5. Error reporting from the da Vinci surgical system in robotic surgery: A Canadian multispecialty experience at a single academic centre

    PubMed Central

    Rajih, Emad; Tholomier, Côme; Cormier, Beatrice; Samouëlian, Vanessa; Warkus, Thomas; Liberman, Moishe; Widmer, Hugues; Lattouf, Jean-Baptiste; Alenizi, Abdullah M.; Meskawi, Malek; Valdivieso, Roger; Hueber, Pierre-Alain; Karakewicz, Pierre I.; El-Hakim, Assaad; Zorn, Kevin C.

    2017-01-01

    Introduction The goal of the study is to evaluate and report on the third-generation da Vinci surgical (Si) system malfunctions. Methods A total of 1228 robotic surgeries were performed between January 2012 and December 2015 at our academic centre. All cases were performed by using a single, dual console, four-arm, da Vinci Si robot system. The three specialties included urology, gynecology, and thoracic surgery. Studied outcomes included the robotic surgical error types, immediate consequences, and operative side effects. Error rate trend with time was also examined. Results Overall robotic malfunctions were documented on the da Vinci Si systems event log in 4.97% (61/1228) of the cases. The most common error was related to pressure sensors in the robotic arms indicating out of limit output. This recoverable fault was noted in 2.04% (25/1228) of cases. Other errors included unrecoverable electronic communication-related errors in 1.06% (13/1228) of cases, failed encoder errors in 0.57% (7/1228), illuminator-related errors in 0.33% (4/1228), faulty switches in 0.24% (3/1228), battery-related failures in 0.24% (3/1228), and software/hardware errors in 0.08% (1/1228) of cases. Surgical delay was reported only in one patient. No conversion to either open or laparoscopic surgery occurred secondary to robotic malfunctions. In 2015, the incidence of robotic error rose to 1.71% (21/1228) from 0.81% (10/1228) in 2014. Conclusions Robotic malfunction is not infrequent in the current era of robotic surgery in various surgical subspecialties, but it is rarely consequential. When malfunctions do occur, they seldom seem to affect patient safety or surgical outcome. PMID:28503234

  6. Error reporting from the da Vinci surgical system in robotic surgery: A Canadian multispecialty experience at a single academic centre.

    PubMed

    Rajih, Emad; Tholomier, Côme; Cormier, Beatrice; Samouëlian, Vanessa; Warkus, Thomas; Liberman, Moishe; Widmer, Hugues; Lattouf, Jean-Baptiste; Alenizi, Abdullah M; Meskawi, Malek; Valdivieso, Roger; Hueber, Pierre-Alain; Karakewicz, Pierre I; El-Hakim, Assaad; Zorn, Kevin C

    2017-05-01

    The goal of the study is to evaluate and report on the third-generation da Vinci surgical (Si) system malfunctions. A total of 1228 robotic surgeries were performed between January 2012 and December 2015 at our academic centre. All cases were performed by using a single, dual console, four-arm, da Vinci Si robot system. The three specialties included urology, gynecology, and thoracic surgery. Studied outcomes included the robotic surgical error types, immediate consequences, and operative side effects. Error rate trend with time was also examined. Overall robotic malfunctions were documented on the da Vinci Si systems event log in 4.97% (61/1228) of the cases. The most common error was related to pressure sensors in the robotic arms indicating out of limit output. This recoverable fault was noted in 2.04% (25/1228) of cases. Other errors included unrecoverable electronic communication-related errors in 1.06% (13/1228) of cases, failed encoder errors in 0.57% (7/1228), illuminator-related errors in 0.33% (4/1228), faulty switches in 0.24% (3/1228), battery-related failures in 0.24% (3/1228), and software/hardware errors in 0.08% (1/1228) of cases. Surgical delay was reported only in one patient. No conversion to either open or laparoscopic surgery occurred secondary to robotic malfunctions. In 2015, the incidence of robotic error rose to 1.71% (21/1228) from 0.81% (10/1228) in 2014. Robotic malfunction is not infrequent in the current era of robotic surgery in various surgical subspecialties, but it is rarely consequential. When malfunctions do occur, they seldom seem to affect patient safety or surgical outcome.

  7. Bounding the errors for convex dynamics on one or more polytopes.

    PubMed

    Tresser, Charles

    2007-09-01

    We discuss the greedy algorithm for approximating a sequence of inputs in a family of polytopes lying in affine spaces by an output sequence made of vertices of the respective polytopes. More precisely, we consider here the case when the greed of the algorithm is dictated by the Euclidean norms of the successive cumulative errors. This algorithm can be interpreted as a time-dependent dynamical system in the vector space, where the errors live, or as a time-dependent dynamical system in an affine space containing copies of all the original polytopes. This affine space contains the inputs, as well as the inputs modified by adding the respective former errors; it is the evolution of these modified inputs that the dynamical system in affine space describes. Scheduling problems with many polytopes arise naturally, for instance, when the inputs are from a single polytope P, but one imposes the constraint that whenever the input belongs to a codimension n face, the output has to be in the same codimension n face (as when scheduling drivers among participants of a carpool). It has been previously shown that the error is bounded in the case of a single polytope by proving the existence of an arbitrary large convex invariant region for the dynamics in affine space: A region that is simultaneously invariant for several polytopes, each considered separately, was also constructed. It was then shown that there cannot be an invariant region in affine space in the general case of a family of polytopes. Here we prove the existence of an arbitrary large convex invariant set for the dynamics in the vector space in the case when the sizes of the polytopes in the family are bounded and the set of all the outgoing normals to all the faces of all the polytopes is finite. It was also previously known that starting from zero as the initial error set, the error set could not be saturated in finitely many steps in some cases with several polytopes: Contradicting a former conjecture, we show that the same happens for some single quadrilaterals and for a single pentagon with an axial symmetry. The disproof of that conjecture is the new piece of information that leads us to expect, and then to verify, as we recount here, that the proof that the errors are bounded in the general case could be a small step beyond the proof of the same statement for the single polytope case.

  8. Bounding the errors for convex dynamics on one or more polytopes

    NASA Astrophysics Data System (ADS)

    Tresser, Charles

    2007-09-01

    We discuss the greedy algorithm for approximating a sequence of inputs in a family of polytopes lying in affine spaces by an output sequence made of vertices of the respective polytopes. More precisely, we consider here the case when the greed of the algorithm is dictated by the Euclidean norms of the successive cumulative errors. This algorithm can be interpreted as a time-dependent dynamical system in the vector space, where the errors live, or as a time-dependent dynamical system in an affine space containing copies of all the original polytopes. This affine space contains the inputs, as well as the inputs modified by adding the respective former errors; it is the evolution of these modified inputs that the dynamical system in affine space describes. Scheduling problems with many polytopes arise naturally, for instance, when the inputs are from a single polytope P, but one imposes the constraint that whenever the input belongs to a codimension n face, the output has to be in the same codimension n face (as when scheduling drivers among participants of a carpool). It has been previously shown that the error is bounded in the case of a single polytope by proving the existence of an arbitrary large convex invariant region for the dynamics in affine space: A region that is simultaneously invariant for several polytopes, each considered separately, was also constructed. It was then shown that there cannot be an invariant region in affine space in the general case of a family of polytopes. Here we prove the existence of an arbitrary large convex invariant set for the dynamics in the vector space in the case when the sizes of the polytopes in the family are bounded and the set of all the outgoing normals to all the faces of all the polytopes is finite. It was also previously known that starting from zero as the initial error set, the error set could not be saturated in finitely many steps in some cases with several polytopes: Contradicting a former conjecture, we show that the same happens for some single quadrilaterals and for a single pentagon with an axial symmetry. The disproof of that conjecture is the new piece of information that leads us to expect, and then to verify, as we recount here, that the proof that the errors are bounded in the general case could be a small step beyond the proof of the same statement for the single polytope case.
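
    A minimal sketch of the greedy rule analyzed above, under the convention (assumed here) that at each step the chosen vertex is the one minimizing the Euclidean norm of the new cumulative error; polytopes are represented simply by their vertex lists, and the names are illustrative.

```python
# Hedged sketch of the greedy vertex-rounding scheme: choose, for each input, the
# vertex of the current polytope that minimizes the norm of the cumulative error.
import numpy as np

def greedy_schedule(inputs, vertex_lists):
    err = np.zeros_like(np.asarray(inputs[0], dtype=float))
    outputs = []
    for x, V in zip(inputs, vertex_lists):
        V = np.asarray(V, dtype=float)
        costs = np.linalg.norm(err + x - V, axis=1)  # |new cumulative error| per vertex
        v = V[np.argmin(costs)]
        outputs.append(v)
        err = err + x - v                            # update cumulative error
    return outputs, err
```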

  9. Performance of ultrasound fetal weight estimation in twins.

    PubMed

    Dimassi, Kaouther; Karoui, Abir; Triki, Amel; Gara, Mohamed Faouzi

    2016-03-01

    Ultrasonography is an essential tool in the management of twin pregnancies. Fetal weight estimation is useful to anticipate neonatal care in case of weight restriction or growth discordance. To assess the accuracy of estimated fetal weight (EFW) in twins and to assess the accuracy of sonographic examination to predict birth weight discordance (BWD) and small birth weight (SBW). Methods: This was a longitudinal prospective study over a period of one year. We included 50 twin pregnancies with a first-trimester ultrasound-calculated term and specified chorionicity. An ultrasound EFW was scheduled for all patients within an interval of 4 days before delivery. We calculated the differences between EFW and birth weight (BW) in terms of absolute difference and percentage error. We studied the correlation and the agreement between EFW and BW. Finally, we calculated the sensitivity, the specificity, the PPV, and the NPV of ultrasound in the diagnosis of BWD and SBW. Absolute differences between EFW and BW were similar for the two twins. The relative difference was 7.7% [0-32] for T1 and 8.2% [0-27] for T2. The margin of error was greater than 10% in 38% of the cases for T1 and in 34% of cases for T2. Furthermore, the correlation coefficients R1 and R2 for T1 and T2 were close to 1: R1 = 0.87 and R2 = 0.89. Linear regression analysis allowed us to calculate the birth weight based on the estimated weight according to the following equations: for the first twin, BW_T1 = 0.846 * EFW_T1 + 415.57; for the second twin, BW_T2 = 65.68 + 0.963 * EFW_T2 (a sketch applying these equations follows below). Chorionicity, presentation, and gestational age did not affect the estimations. Ultrasonography in the diagnosis of SBW had a sensitivity of 90.32%, a specificity of 76.82%, a PPV of 80%, and an NPV of 87%. The performance of ultrasound in the diagnosis of BWD varied according to the adopted threshold. Ultrasound is an effective examination for estimating twin weight. Regarding prenatal diagnosis of birth weight discordance, the relevance of this examination increases with the adopted threshold.
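
    A minimal sketch applying the regression equations reported above; it assumes both the estimated and birth weights are expressed in grams, which the record does not state explicitly.

```python
# Hedged sketch of the reported regression equations (coefficients as in the abstract;
# the gram units are an assumption).
def predicted_birth_weight(efw_t1, efw_t2):
    bw_t1 = 0.846 * efw_t1 + 415.57   # first twin
    bw_t2 = 65.68 + 0.963 * efw_t2    # second twin
    return bw_t1, bw_t2

print(predicted_birth_weight(2500, 2500))  # (2530.57, 2473.18)
```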

  10. Test-Enhanced Learning in Competence-Based Predoctoral Orthodontics: A Four-Year Study.

    PubMed

    Freda, Nicolas M; Lipp, Mitchell J

    2016-03-01

    Dental educators intend to promote integration of knowledge, skills, and values toward professional competence. Studies report that retrieval, in the form of testing, results in better learning with retention than traditional studying. The aim of this study was to evaluate test-enhanced experiences on demonstrations of competence in diagnosis and management of malocclusion and skeletal problems. The study participants were all third-year dental students (2011 N=88, 2012 N=74, 2013 N=91, 2014 N=85) at New York University College of Dentistry. The 2013 and 2014 groups received the test-enhanced method emphasizing formative assessments with written and dialogic delayed feedback, while the 2011 and 2012 groups received the traditional approach emphasizing lectures and classroom exercises. The students received six two-hour sessions, spaced one week apart. At the final session, a summative assessment consisting of the same four cases was administered. Students constructed a problem list, treatment objectives, and a treatment plan for each case, scored according to the same criteria. Grades were based on the number of cases without critical errors: A=0 critical errors on four cases, A-=0 critical errors on three cases, B+=0 critical errors on two cases, B=0 critical errors on one case, F=critical errors on four cases. Performance grades were categorized as high quality (B+, A-, A) and low quality (F, B). The results showed that the test-enhanced groups demonstrated statistically significant benefits at 95% confidence intervals compared to the traditional groups when comparing low- and high-quality grades. These performance trends support the continued use of the test-enhanced approach.
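
    As a small illustration of the grading rubric described above, the following hypothetical helper maps the number of the four cases completed without a critical error to the reported letter grades.

```python
# Hypothetical encoding of the summative grading rubric described in the abstract.
def summative_grade(cases_without_critical_error):
    grades = {4: "A", 3: "A-", 2: "B+", 1: "B", 0: "F"}
    return grades[cases_without_critical_error]

print([summative_grade(k) for k in range(5)])  # ['F', 'B', 'B+', 'A-', 'A']
```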

  11. Performance of Disease Risk Score Matching in Nested Case-Control Studies: A Simulation Study.

    PubMed

    Desai, Rishi J; Glynn, Robert J; Wang, Shirley; Gagne, Joshua J

    2016-05-15

    In a case-control study, matching on a disease risk score (DRS), which includes many confounders, should theoretically result in greater precision than matching on only a few confounders; however, this has not been investigated. We simulated 1,000 hypothetical cohorts with a binary exposure, a time-to-event outcome, and 13 covariates. Each cohort comprised 2 subcohorts of 10,000 patients each: a historical subcohort and a concurrent subcohort. DRS were estimated in the historical subcohorts and applied to the concurrent subcohorts. Nested case-control studies were conducted in the concurrent subcohorts using incidence density sampling with 2 strategies (matching on age and sex, with adjustment for additional confounders, and matching on DRS), followed by conditional logistic regression for 9 outcome-exposure incidence scenarios. In all scenarios, DRS matching yielded lower average standard errors and mean squared errors than did matching on age and sex. In 6 scenarios, DRS matching also resulted in greater empirical power. DRS matching resulted in less relative bias than did matching on age and sex at lower outcome incidences but more relative bias at higher incidences. Post-hoc analysis revealed that the effect of DRS model misspecification might be more pronounced at higher outcome incidences, resulting in higher relative bias. These results suggest that DRS matching might increase the statistical efficiency of case-control studies, particularly when the outcome is rare. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  12. Analysis of Sources of Large Positioning Errors in Deterministic Fingerprinting

    PubMed Central

    2017-01-01

    Wi-Fi fingerprinting is widely used for indoor positioning and indoor navigation due to the ubiquity of wireless networks, the high proliferation of Wi-Fi-enabled mobile devices, and its reasonable positioning accuracy. The assumption is that the position can be estimated based on the received signal strength intensity from multiple wireless access points at a given point. The positioning accuracy, within a few meters, enables the use of Wi-Fi fingerprinting in many different applications. However, it has been observed that the positioning error might be very large in a few cases, which might prevent its use in applications with high accuracy positioning requirements. Hybrid methods are the new trend in indoor positioning since they benefit from multiple diverse technologies (Wi-Fi, Bluetooth, and inertial sensors, among many others) and, therefore, they can provide a more robust positioning accuracy. In order to have an optimal combination of technologies, it is crucial to identify when large errors occur and prevent the use of extremely bad positioning estimations in hybrid algorithms. This paper investigates why large positioning errors occur in Wi-Fi fingerprinting and how to detect them by using the received signal strength intensities. PMID:29186921
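
    For context, here is a minimal sketch of a basic deterministic fingerprinting estimator (not the paper's specific method): the position is taken as the inverse-distance-weighted average of the k reference fingerprints closest to the query in RSSI space. Names and shapes are illustrative.

```python
# Hedged sketch: k-nearest-neighbour Wi-Fi fingerprinting position estimate.
import numpy as np

def knn_fingerprint_position(rss_query, rss_db, positions, k=3):
    """rss_query: (n_ap,) RSSI vector; rss_db: (n_ref, n_ap) reference fingerprints;
    positions: (n_ref, 2) coordinates of the reference points."""
    d = np.linalg.norm(rss_db - rss_query, axis=1)   # distances in RSSI space
    idx = np.argsort(d)[:k]                          # k closest fingerprints
    w = 1.0 / (d[idx] + 1e-9)                        # inverse-distance weights
    return (positions[idx] * w[:, None]).sum(axis=0) / w.sum()
```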

  13. Level 4 Global and European Chl-a Daily Analyses for End Users and Data Assimilation in the Frame of the Copernicus-Marine Environment Monitoring Service

    NASA Astrophysics Data System (ADS)

    Saulquin, Bertrand; Gohin, Francis; Garnesson, Philippe; Demaria, Julien; Mangin, Antoine; Fanton d'Andon, Odile

    2016-08-01

    The level-4 daily chl-a products are a combination of a water-type-based merging of chl-a estimates and an optimal interpolation based on the kriging method with regional anisotropic models [1, 2]. The Level 4 products basically provide a global, continuous (cloud-free) estimation of the surface chl-a concentration at 4 km resolution over the world and 1 km resolution over Europe. The level-4 products gather MODIS, MERIS, SeaWiFS, VIIRS, and OLCI daily observations from 1998 to the present. The Level 4 product spares end users from having to deal with the typical lack of data observed during cloudy conditions and with the historical multiplicity of available algorithms, such as those arising from case 1 (oligotrophic) and case 2 (turbid) water issues in ocean colour [3, 4]. A total product uncertainty, i.e. a combination of the interpolation and the estimation errors, is provided for each daily product. The L4 products are freely distributed in the frame of the Copernicus Marine Environment Monitoring Service.

  14. Accounting for response misclassification and covariate measurement error improves power and reduces bias in epidemiologic studies.

    PubMed

    Cheng, Dunlei; Branscum, Adam J; Stamey, James D

    2010-07-01

    To quantify the impact of ignoring misclassification of a response variable and measurement error in a covariate on statistical power, and to develop software for sample size and power analysis that accounts for these flaws in epidemiologic data. A Monte Carlo simulation-based procedure is developed to illustrate the differences in design requirements and inferences between analytic methods that properly account for misclassification and measurement error to those that do not in regression models for cross-sectional and cohort data. We found that failure to account for these flaws in epidemiologic data can lead to a substantial reduction in statistical power, over 25% in some cases. The proposed method substantially reduced bias by up to a ten-fold margin compared to naive estimates obtained by ignoring misclassification and mismeasurement. We recommend as routine practice that researchers account for errors in measurement of both response and covariate data when determining sample size, performing power calculations, or analyzing data from epidemiological studies. 2010 Elsevier Inc. All rights reserved.

  15. Effects of energy chirp on bunch length measurement in linear accelerator beams

    NASA Astrophysics Data System (ADS)

    Sabato, L.; Arpaia, P.; Giribono, A.; Liccardo, A.; Mostacci, A.; Palumbo, L.; Vaccarezza, C.; Variola, A.

    2017-08-01

    The effects of assumptions about bunch properties on the accuracy of the measurement method of the bunch length based on radio frequency deflectors (RFDs) in electron linear accelerators (LINACs) are investigated. In particular, when the electron bunch at the RFD has a non-negligible energy chirp (i.e. a correlation between the longitudinal positions and energies of the particles), the measurement is affected by a deterministic intrinsic error, which is directly related to the RFD phase offset. A case study on this effect in the electron LINAC of a gamma beam source at the Extreme Light Infrastructure-Nuclear Physics (ELI-NP) is reported. The relative error is estimated by using the electron generation and tracking (ELEGANT) code to define the reference measurements of the bunch length. The relative error is shown to increase linearly with the RFD phase offset. In particular, for an offset of 7°, corresponding to a vertical centroid offset at the screen of about 1 mm, the relative error is 4.5%.

  16. Dynamic State Estimation for Multi-Machine Power System by Unscented Kalman Filter With Enhanced Numerical Stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Sun, Kai; Wang, Jianhui

    In this paper, in order to enhance the numerical stability of the unscented Kalman filter (UKF) used for power system dynamic state estimation, a new UKF with guaranteed positive semidefinite estimation error covariance (UKFGPS) is proposed and compared with five existing approaches, including UKFschol, UKF-kappa, UKFmodified, UKF-Delta Q, and the square-root UKF (SRUKF). These methods and the extended Kalman filter (EKF) are tested by performing dynamic state estimation on the WSCC 3-machine 9-bus system and the NPCC 48-machine 140-bus system. For the WSCC system, all methods obtain good estimates. However, for the NPCC system, both EKF and the classic UKF fail. It is found that UKFschol, UKF-kappa, and UKF-Delta Q do not work well in some estimations, while UKFGPS works well in most cases. UKFmodified and SRUKF always work well, indicating their better scalability, mainly due to the enhanced numerical stability.
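
    One common way to guarantee a positive semidefinite error covariance, offered here only as an illustrative sketch in the spirit of the UKFGPS idea (not its published algorithm), is to project the updated covariance onto the nearest symmetric positive semidefinite matrix by clipping negative eigenvalues.

```python
# Hedged sketch: project a covariance matrix onto the nearest symmetric PSD matrix.
import numpy as np

def nearest_psd(P, eps=0.0):
    P = 0.5 * (P + P.T)            # enforce symmetry
    w, V = np.linalg.eigh(P)
    w = np.clip(w, eps, None)      # remove negative eigenvalues
    return V @ np.diag(w) @ V.T
```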

  17. POWERLIB: SAS/IML Software for Computing Power in Multivariate Linear Models

    PubMed Central

    Johnson, Jacqueline L.; Muller, Keith E.; Slaughter, James C.; Gurka, Matthew J.; Gribbin, Matthew J.; Simpson, Sean L.

    2014-01-01

    The POWERLIB SAS/IML software provides convenient power calculations for a wide range of multivariate linear models with Gaussian errors. The software includes the Box, Geisser-Greenhouse, Huynh-Feldt, and uncorrected tests in the “univariate” approach to repeated measures (UNIREP), the Hotelling Lawley Trace, Pillai-Bartlett Trace, and Wilks Lambda tests in “multivariate” approach (MULTIREP), as well as a limited but useful range of mixed models. The familiar univariate linear model with Gaussian errors is an important special case. For estimated covariance, the software provides confidence limits for the resulting estimated power. All power and confidence limits values can be output to a SAS dataset, which can be used to easily produce plots and tables for manuscripts. PMID:25400516

  18. Peak flood estimation using gene expression programming

    NASA Astrophysics Data System (ADS)

    Zorn, Conrad R.; Shamseldin, Asaad Y.

    2015-12-01

    As a case study for the Auckland Region of New Zealand, this paper investigates the potential use of gene-expression programming (GEP) in predicting specific return period events in comparison to the established and widely used Regional Flood Estimation (RFE) method. Initially calibrated to 14 gauged sites, the GEP-derived model was further validated against 10- and 100-year flood events with relative errors of 29% and 18%, respectively. This compares with the RFE method, which gave errors of 48% and 44% for the same flood events. While the effectiveness of GEP in predicting specific return period events is made apparent, it is argued that the derived equations should be used in conjunction with existing methodologies rather than as a replacement.

  19. Black-box modeling to estimate tissue temperature during radiofrequency catheter cardiac ablation: Feasibility study on an agar phantom model.

    PubMed

    Blasco-Gimenez, Ramón; Lequerica, Juan L; Herrero, Maria; Hornero, Fernando; Berjano, Enrique J

    2010-04-01

    The aim of this work was to study linear deterministic models to predict tissue temperature during radiofrequency cardiac ablation (RFCA) by measuring magnitudes such as electrode temperature, power and impedance between active and dispersive electrodes. The concept involves autoregressive models with exogenous input (ARX), which is a particular case of the autoregressive moving average model with exogenous input (ARMAX). The values of the model parameters were determined from a least-squares fit of experimental data. The data were obtained from radiofrequency ablations conducted on agar models with different contact pressure conditions between electrode and agar (0 and 20 g) and different flow rates around the electrode (1, 1.5 and 2 L min(-1)). Half of all the ablations were chosen randomly to be used for identification (i.e. determination of model parameters) and the other half were used for model validation. The results suggest that (1) a linear model can be developed to predict tissue temperature at a depth of 4.5 mm during RF cardiac ablation by using the variables applied power, impedance and electrode temperature; (2) the best model provides a reasonably accurate estimate of tissue temperature with a 60% probability of achieving average errors better than 5 degrees C; (3) substantial errors (larger than 15 degrees C) were found only in 6.6% of cases and were associated with abnormal experiments (e.g. those involving the displacement of the ablation electrode); and (4) the impact of measuring impedance on the overall estimate is negligible (around 1 degree C).
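
    A minimal sketch of fitting an ARX model by least squares, in the spirit of the approach above: tissue temperature is regressed on its own past values and on past exogenous inputs (electrode temperature, applied power, impedance). The model orders and array names are illustrative assumptions, not the authors' exact identification procedure.

```python
# Hedged sketch: least-squares identification of an ARX model y(t) from past outputs
# and past exogenous inputs U (e.g. electrode temperature, power, impedance).
import numpy as np

def fit_arx(y, U, na=2, nb=2):
    """y: (N,) output series; U: (N, m) exogenous inputs; returns parameter vector."""
    N, m = U.shape
    p = max(na, nb)
    rows, targets = [], []
    for t in range(p, N):
        row = np.concatenate([y[t - na:t][::-1],          # past outputs y(t-1)..y(t-na)
                              U[t - nb:t][::-1].ravel()])  # past inputs u(t-1)..u(t-nb)
        rows.append(row)
        targets.append(y[t])
    Phi, Y = np.asarray(rows), np.asarray(targets)
    theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)        # least-squares fit
    return theta
```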

  20. Frozen section analysis of margins for head and neck tumor resections: reduction of sampling errors with a third histologic level.

    PubMed

    Olson, Stephen M; Hussaini, Mohammad; Lewis, James S

    2011-05-01

    Frozen section analysis is an essential tool for assessing margins intra-operatively to assure complete resection. Many institutions evaluate surgical defect edge tissue provided by the surgeon after the main lesion has been removed. With the increasing use of transoral laser microsurgery, this method is becoming even more prevalent. We sought to evaluate error rates at our large academic institution and to see if sampling errors could be reduced by the simple method change of taking an additional third section on these specimens. All head and neck tumor resection cases from January 2005 through August 2008 with margins evaluated by frozen section were identified by database search. These cases were analyzed by cutting two levels during frozen section and a third permanent section later. All resection cases from August 2008 through July 2009 were identified as well. These were analyzed by cutting three levels during frozen section (the third a 'much deeper' level) and a fourth permanent section later. Error rates for both of these periods were determined. Errors were separated into sampling and interpretation types. There were 4976 total frozen section specimens from 848 patients. The overall error rate was 2.4% for all frozen sections where just two levels were evaluated and was 2.5% when three levels were evaluated (P=0.67). The sampling error rate was 1.6% for two-level sectioning and 1.2% for three-level sectioning (P=0.42). However, when considering only the frozen section cases where tumor was ultimately identified (either at the time of frozen section or on permanent sections) the sampling error rate for two-level sectioning was 15.3% versus 7.4% for three-level sectioning. This difference was statistically significant (P=0.006). Cutting a single additional 'deeper' level at the time of frozen section identifies more tumor-bearing specimens and may reduce the number of sampling errors.

  1. Intrinsic errors in transporting a single-spin qubit through a double quantum dot

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Barnes, Edwin; Kestner, J. P.; Das Sarma, S.

    2017-07-01

    Coherent spatial transport or shuttling of a single electron spin through semiconductor nanostructures is an important ingredient in many spintronic and quantum computing applications. In this work we analyze the possible errors in solid-state quantum computation due to leakage in transporting a single-spin qubit through a semiconductor double quantum dot. In particular, we consider three possible sources of leakage errors associated with such transport: finite ramping times, spin-dependent tunneling rates between quantum dots induced by finite spin-orbit couplings, and the presence of multiple valley states. In each case we present quantitative estimates of the leakage errors, and discuss how they can be minimized. The emphasis of this work is on how to deal with the errors intrinsic to the ideal semiconductor structure, such as leakage due to spin-orbit couplings, rather than on errors due to defects or noise sources. In particular, we show that in order to minimize leakage errors induced by spin-dependent tunnelings, it is necessary to apply pulses to perform certain carefully designed spin rotations. We further develop a formalism that allows one to systematically derive constraints on the pulse shapes and present a few examples to highlight the advantage of such an approach.

  2. Malpractice suits in chest radiology: an evaluation of the histories of 8265 radiologists.

    PubMed

    Baker, Stephen R; Patel, Ronak H; Yang, Lily; Lelkes, Valdis M; Castro, Alejandro

    2013-11-01

    The aim of this study was to present rates of claims, causes of error, percentage of cases resulting in a judgment, and average payments made by radiologists in chest-related malpractice cases in a survey of 8265 radiologists. The malpractice histories of 8265 radiologists were evaluated from the credentialing files of One-Call Medical Inc., a preferred provider organization for computed tomography/magnetic resonance imaging in workers' compensation cases. Of the 8265 radiologists, 2680 (32.4%) had at least 1 malpractice suit. Of those who were sued, the rate of claims was 55.1 per 1000 person years. The rate of thorax-related suits was 6.6 claims per 1000 radiology practice years (95% confidence interval, 6.0-7.2). There were 496 suits encompassing 48 different causes. Errors in diagnosis comprised 78.0% of the causes. Failure to diagnose lung cancer was by far the most frequent diagnostic error, representing 211 cases or 42.5%. Of the 496 cases, an outcome was known in 417. Sixty-one percent of these were settled in favor of the plaintiff, with a mean payment of $277,230 (95% confidence interval, 226,967-338,614). Errors in diagnosis, and among them failure to diagnose lung cancer, were by far the most common reasons for initiating a malpractice suit against radiologists related to the thorax and its contents.

  3. Using beta binomials to estimate classification uncertainty for ensemble models.

    PubMed

    Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin

    2014-01-01

    Quantitative structure-activity relationship (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. Submodels in an ensemble model that have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification (one using vote tallies and the other averaging individual network outputs), we have found that the distribution of predictions across positive vote tallies can be reasonably well-modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets comprising logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distributions of predictions and errors for large external validation sets, even when the numbers of positive and negative examples in the training pool were not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool. Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent submodels. Further, ensemble uncertainty estimation can often be improved by adjusting the voting or classification threshold based on the parameters of the error distribution. Finally, the profiles for models whose predictive uncertainty estimates are not reliable provide clues to that effect without the need for comparison to an external test set.
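
    A hedged sketch of the core idea: fit a beta-binomial distribution to the positive vote tallies of an ensemble (here via a simple maximum-likelihood search with scipy), and combine the fitted distributions for all predictions and for errors to estimate the probability that a prediction at a given tally is misclassified. The fitting details below are illustrative, not the authors' exact procedure.

```python
# Hedged sketch: beta-binomial fit to ensemble vote tallies (illustrative only).
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

def fit_betabinom(tallies, n_models):
    """Maximum-likelihood fit of a beta-binomial to positive vote tallies."""
    def nll(log_params):
        a, b = np.exp(log_params)                     # keep parameters positive
        return -betabinom.logpmf(tallies, n_models, a, b).sum()
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return np.exp(res.x)                              # (alpha, beta)

# With fits for all predictions (pmf_all) and for errors (pmf_err),
# P(error | k positive votes) can be approximated as P(error) * pmf_err(k) / pmf_all(k).
```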

  4. Surveillance of systemic autoimmune rheumatic diseases using administrative data.

    PubMed

    Bernatsky, S; Lix, L; Hanly, J G; Hudson, M; Badley, E; Peschken, C; Pineau, C A; Clarke, A E; Fortin, P R; Smith, M; Bélisle, P; Lagace, C; Bergeron, L; Joseph, L

    2011-04-01

    There is growing interest in developing tools and methods for the surveillance of chronic rheumatic diseases, using existing resources such as administrative health databases. To illustrate how this might work, we used population-based administrative data to estimate and compare the prevalence of systemic autoimmune rheumatic diseases (SARDs) across three Canadian provinces, assessing for regional differences and the effects of demographic factors. Cases of SARDs (systemic lupus erythematosus, scleroderma, primary Sjogren's, polymyositis/dermatomyositis) were ascertained from provincial physician billing and hospitalization data. We combined information from three case definitions, using hierarchical Bayesian latent class regression models that account for the imperfect nature of each case definition. Using methods that account for the imperfect nature of both billing and hospitalization databases, we estimated the overall prevalence of SARDs to be approximately 2-3 cases per 1,000 residents. Stratified prevalence estimates suggested similar demographic trends across provinces (i.e. greater prevalence in females versus males, and in persons of older age). The prevalence in older females approached or exceeded 1 in 100, which may reflect the high burden of primary Sjogren's syndrome in this group. Adjusting for demographics, there was a greater prevalence in urban versus rural settings. In our work, prevalence estimates had good face validity and provided useful information about potential regional and demographic variations. Our results suggest that surveillance of some rheumatic diseases using administrative data may indeed be feasible. Our work highlights the usefulness of using multiple data sources, adjusting for the error in each.

  5. Serial fusion of Eulerian and Lagrangian approaches for accurate heart-rate estimation using face videos.

    PubMed

    Gupta, Puneet; Bhowmick, Brojeshwar; Pal, Arpan

    2017-07-01

    Camera-equipped devices are ubiquitous and proliferating in day-to-day life. Accurate heart rate (HR) estimation from face videos acquired with low-cost cameras in a non-contact manner can be used in many real-world scenarios and hence requires rigorous exploration. This paper presents an accurate and near real-time HR estimation system using these face videos. It is based on the phenomenon that the color and motion variations in the face video are closely related to the heart beat. The variations also contain noise due to facial expressions, respiration, eye blinking, and environmental factors, which are handled by the proposed system. Neither Eulerian nor Lagrangian temporal signals can provide accurate HR in all cases. The cases where Eulerian temporal signals perform spuriously are determined using a novel poorness measure, and then both the Eulerian and Lagrangian temporal signals are employed for better HR estimation. Such a fusion is referred to as serial fusion. Experimental results reveal that the error of the proposed algorithm is 1.8±3.6, which is significantly lower than that of existing well-known systems.

  6. Cart3D Simulations for the First AIAA Sonic Boom Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Nemec, Marian

    2014-01-01

    Simulation results for the First AIAA Sonic Boom Prediction Workshop (LBW1) are presented using an inviscid, embedded-boundary Cartesian mesh method. The method employs adjoint-based error estimation and adaptive meshing to automatically determine resolution requirements of the computational domain. Results are presented for both mandatory and optional test cases. These include an axisymmetric body of revolution, a 69° delta wing model, and a complete model of the Lockheed N+2 supersonic tri-jet with V-tail and flow-through nacelles. In addition to formal mesh refinement studies and examination of the adjoint-based error estimates, mesh convergence is assessed by presenting simulation results for meshes at several resolutions which are comparable in size to the unstructured grids distributed by the workshop organizers. Data provided include both the pressure signals required by the workshop and information on code performance in both memory and processing time. Various enhanced techniques offering improved simulation efficiency will be demonstrated and discussed.

  7. Developing appropriate methods for cost-effectiveness analysis of cluster randomized trials.

    PubMed

    Gomes, Manuel; Ng, Edmond S-W; Grieve, Richard; Nixon, Richard; Carpenter, James; Thompson, Simon G

    2012-01-01

    Cost-effectiveness analyses (CEAs) may use data from cluster randomized trials (CRTs), where the unit of randomization is the cluster, not the individual. However, most studies use analytical methods that ignore clustering. This article compares alternative statistical methods for accommodating clustering in CEAs of CRTs. Our simulation study compared the performance of statistical methods for CEAs of CRTs with 2 treatment arms. The study considered a method that ignored clustering--seemingly unrelated regression (SUR) without a robust standard error (SE)--and 4 methods that recognized clustering--SUR and generalized estimating equations (GEEs), both with robust SE, a "2-stage" nonparametric bootstrap (TSB) with shrinkage correction, and a multilevel model (MLM). The base case assumed CRTs with moderate numbers of balanced clusters (20 per arm) and normally distributed costs. Other scenarios included CRTs with few clusters, imbalanced cluster sizes, and skewed costs. Performance was reported as bias, root mean squared error (rMSE), and confidence interval (CI) coverage for estimating incremental net benefits (INBs). We also compared the methods in a case study. Each method reported low levels of bias. Without the robust SE, SUR gave poor CI coverage (base case: 0.89 v. nominal level: 0.95). The MLM and TSB performed well in each scenario (CI coverage, 0.92-0.95). With few clusters, the GEE and SUR (with robust SE) had coverage below 0.90. In the case study, the mean INBs were similar across all methods, but ignoring clustering underestimated statistical uncertainty and the value of further research. MLMs and the TSB are appropriate analytical methods for CEAs of CRTs with the characteristics described. SUR and GEE are not recommended for studies with few clusters.

  8. Developing Appropriate Methods for Cost-Effectiveness Analysis of Cluster Randomized Trials

    PubMed Central

    Gomes, Manuel; Ng, Edmond S.-W.; Nixon, Richard; Carpenter, James; Thompson, Simon G.

    2012-01-01

    Aim. Cost-effectiveness analyses (CEAs) may use data from cluster randomized trials (CRTs), where the unit of randomization is the cluster, not the individual. However, most studies use analytical methods that ignore clustering. This article compares alternative statistical methods for accommodating clustering in CEAs of CRTs. Methods. Our simulation study compared the performance of statistical methods for CEAs of CRTs with 2 treatment arms. The study considered a method that ignored clustering—seemingly unrelated regression (SUR) without a robust standard error (SE)—and 4 methods that recognized clustering—SUR and generalized estimating equations (GEEs), both with robust SE, a “2-stage” nonparametric bootstrap (TSB) with shrinkage correction, and a multilevel model (MLM). The base case assumed CRTs with moderate numbers of balanced clusters (20 per arm) and normally distributed costs. Other scenarios included CRTs with few clusters, imbalanced cluster sizes, and skewed costs. Performance was reported as bias, root mean squared error (rMSE), and confidence interval (CI) coverage for estimating incremental net benefits (INBs). We also compared the methods in a case study. Results. Each method reported low levels of bias. Without the robust SE, SUR gave poor CI coverage (base case: 0.89 v. nominal level: 0.95). The MLM and TSB performed well in each scenario (CI coverage, 0.92–0.95). With few clusters, the GEE and SUR (with robust SE) had coverage below 0.90. In the case study, the mean INBs were similar across all methods, but ignoring clustering underestimated statistical uncertainty and the value of further research. Conclusions. MLMs and the TSB are appropriate analytical methods for CEAs of CRTs with the characteristics described. SUR and GEE are not recommended for studies with few clusters. PMID:22016450

  9. Bayesian Estimation of Pneumonia Etiology: Epidemiologic Considerations and Applications to the Pneumonia Etiology Research for Child Health Study.

    PubMed

    Deloria Knoll, Maria; Fu, Wei; Shi, Qiyuan; Prosperi, Christine; Wu, Zhenke; Hammitt, Laura L; Feikin, Daniel R; Baggett, Henry C; Howie, Stephen R C; Scott, J Anthony G; Murdoch, David R; Madhi, Shabir A; Thea, Donald M; Brooks, W Abdullah; Kotloff, Karen L; Li, Mengying; Park, Daniel E; Lin, Wenyi; Levine, Orin S; O'Brien, Katherine L; Zeger, Scott L

    2017-06-15

In pneumonia, specimens are rarely obtained directly from the infection site, the lung, so the pathogen causing infection is determined indirectly from multiple tests on peripheral clinical specimens, which may have imperfect and uncertain sensitivity and specificity. Inference about the cause is therefore complex. Analytic approaches have included expert review of case-only results, case-control logistic regression, latent class analysis, and attributable fraction, but each has serious limitations and none naturally integrates multiple test results. The Pneumonia Etiology Research for Child Health (PERCH) study required an analytic solution appropriate for a case-control design that could incorporate evidence from multiple specimens from cases and controls and that accounted for measurement error. We describe a Bayesian integrated approach that we developed, combining and extending elements of attributable fraction and latent class analyses to meet some of these challenges, and illustrate the advantages it confers with respect to the challenges identified for the other methods. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.
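
    A toy illustration of the core idea (not the PERCH integrated model): Bayesian estimation of the etiologic fraction for a single pathogen from case and control test positivity, treating test sensitivity as imperfect and background carriage detected in controls as the source of false-positive case results. All counts and the sensitivity prior below are invented assumptions.

        # Grid/Monte Carlo posterior for the etiologic fraction pi of one pathogen.
        # P(test+ | case) = pi*sensitivity + (1 - pi)*carriage, with carriage learned from controls.
        import numpy as np
        from scipy.special import logsumexp

        rng = np.random.default_rng(1)
        cases_pos, cases_n = 120, 300        # hypothetical case data
        ctrls_pos, ctrls_n = 45, 300         # hypothetical control data

        n_draws = 20000
        pi_grid = np.linspace(0.001, 0.999, 999)
        carriage = rng.beta(1 + ctrls_pos, 1 + ctrls_n - ctrls_pos, n_draws)  # from controls
        sens = rng.beta(80, 20, n_draws)                                      # prior near 0.8

        logpost = np.empty_like(pi_grid)
        for i, pi in enumerate(pi_grid):
            p_pos = pi * sens + (1 - pi) * carriage
            loglik = cases_pos * np.log(p_pos) + (cases_n - cases_pos) * np.log1p(-p_pos)
            logpost[i] = logsumexp(loglik) - np.log(n_draws)   # flat prior on pi

        post = np.exp(logpost - logpost.max())
        post /= post.sum()
        mean_pi = np.sum(pi_grid * post)
        ci = pi_grid[np.searchsorted(np.cumsum(post), [0.025, 0.975])]
        print(f"posterior mean etiologic fraction: {mean_pi:.2f}, 95% CrI approx {ci}")

    The PERCH approach extends this kind of reasoning to many pathogens and specimen types simultaneously; the sketch only shows how imperfect sensitivity and control carriage enter the likelihood.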

  10. Pathology-related cases in the Norwegian System of Patient Injury Compensation in the period 2010-2015.

    PubMed

    Alfsen, G Cecilie; Chen, Ying; Kähler, Hanne; Bukholm, Ida Rashida Khan

    2016-12-01

The Norwegian System of Patient Injury Compensation (NPE) processes compensation claims from patients who complain about malpractice in the health services. A wrong diagnosis in pathology may cause serious injury to the patient, but the incidence of compensation claims is unknown, because pathology is not specified as a separate category in NPE's statistics. Knowledge about errors is required to assess quality-enhancing measures. We therefore searched the NPE records for the years 2010-2015 to identify cases stemming from errors committed in pathology departments and laboratories. During this period the NPE processed a total of 26 600 cases, of which 93 were related to pathology. The compensation claim was upheld in 66 cases, resulting in total compensation payments amounting to NOK 63 million. False-negative results in the form of undetected diagnoses were the most frequent grounds for compensation claims (63 cases), with undetected malignant melanoma (n = 23) and atypia in cell samples from the cervix uteri (n = 16) as the largest groups. Sixteen cases involved non-diagnostic issues such as mix-up of samples (n = 8), contamination of samples (n = 4) or delayed responses (n = 4). The number of compensation claims caused by errors in pathology diagnostics is relatively low. The errors may, however, be serious, especially if malignant conditions are overlooked or samples are mixed up.

  11. Interannual variability of surface heat fluxes in the Adriatic Sea in the period 1998-2001 and comparison with observations.

    PubMed

    Chiggiato, Jacopo; Zavatarelli, Marco; Castellari, Sergio; Deserti, Marco

    2005-12-15

Surface heat fluxes of the Adriatic Sea are estimated for the period 1998-2001 through bulk formulae, with the goals of assessing the uncertainties in these estimates and describing their interannual variability; a comparison with observations is also conducted. We computed the components of the sea surface heat budget using two different operational meteorological data sets as inputs: the ECMWF operational analysis and the regional limited-area model LAMBO operational forecast. Both results are consistent with previous long-term climatologies and short-term analyses in the literature. In both cases the Adriatic Sea loses 26 W/m2 on average, consistent with assessments found in the literature. We then compared the radiative components of the heat budget with observations collected on offshore platforms and at one coastal station. For shortwave radiation, the results show a slight overestimation on an annual basis: 172 W/m2 when using ECMWF data and 169 W/m2 when using LAMBO data. The use of either Schiano's or Gilman and Garrett's corrections brings the values even closer. The comparison is more difficult to assess for longwave radiation, where relative errors are on the order of 10-20%.
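
    A minimal sketch of the bulk aerodynamic formulas for the turbulent components of such a heat budget (latent and sensible heat fluxes), with constant exchange coefficients as a simplifying assumption; the radiative components and the corrections discussed in the study (Schiano, Gilman and Garrett) are not reproduced here, and the example numbers are invented.

        # Bulk formulas: Q_lat = rho * Lv * CE * U * (q_s - q_a), Q_sens = rho * cp * CH * U * (T_s - T_a)
        import numpy as np

        RHO_AIR = 1.22      # air density, kg/m^3
        CP_AIR = 1005.0     # specific heat of air, J/(kg K)
        LV = 2.5e6          # latent heat of vaporization, J/kg
        CE = CH = 1.1e-3    # assumed constant bulk transfer coefficients

        def saturation_specific_humidity(t_celsius, pressure_hpa=1013.25):
            """Approximate saturation specific humidity over water (Tetens formula)."""
            e_s = 6.112 * np.exp(17.67 * t_celsius / (t_celsius + 243.5))  # hPa
            return 0.622 * e_s / (pressure_hpa - 0.378 * e_s)

        def latent_heat_flux(wind, sst, t_air, rh):
            """Upward latent heat flux (W/m^2); wind in m/s, temperatures in degC, rh in 0-1."""
            q_s = 0.98 * saturation_specific_humidity(sst)   # ~2% salinity reduction
            q_a = rh * saturation_specific_humidity(t_air)
            return RHO_AIR * LV * CE * wind * (q_s - q_a)

        def sensible_heat_flux(wind, sst, t_air):
            """Upward sensible heat flux (W/m^2)."""
            return RHO_AIR * CP_AIR * CH * wind * (sst - t_air)

        # Illustrative winter bora-like conditions over the Adriatic.
        print(latent_heat_flux(wind=12.0, sst=14.0, t_air=6.0, rh=0.6))
        print(sensible_heat_flux(wind=12.0, sst=14.0, t_air=6.0))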

  12. Meta-regression approximations to reduce publication selection bias.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2014-03-01

    Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy. Copyright © 2013 John Wiley & Sons, Ltd.
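
    A sketch of the PEESE estimator as described above: a weighted least-squares meta-regression of effect estimates on their variances (SE squared), with the intercept taken as the selection-corrected effect. The simulated "literature" and the crude selection rule below are purely illustrative.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        true_effect, n_studies = 0.2, 60
        se = rng.uniform(0.05, 0.5, n_studies)
        effect = true_effect + rng.normal(0, se)

        # Crude publication selection: keep "significant" results plus a random 30% of the rest.
        selected = (np.abs(effect / se) > 1.96) | (rng.random(n_studies) < 0.3)
        effect, se = effect[selected], se[selected]

        X = sm.add_constant(se ** 2)                     # intercept + variance term
        peese = sm.WLS(effect, X, weights=1.0 / se ** 2).fit()
        print("naive mean effect:       ", effect.mean())
        print("PEESE corrected estimate:", peese.params[0], "(SE", peese.bse[0], ")")

    The naive mean is pulled upward by the selected, imprecise studies, while the PEESE intercept (the predicted effect at zero variance) lands much closer to the true value.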

  13. Effects of errors and gaps in spatial data sets on assessment of conservation progress.

    PubMed

    Visconti, P; Di Marco, M; Álvarez-Romero, J G; Januchowski-Hartley, S R; Pressey, R L; Weeks, R; Rondinini, C

    2013-10-01

    Data on the location and extent of protected areas, ecosystems, and species' distributions are essential for determining gaps in biodiversity protection and identifying future conservation priorities. However, these data sets always come with errors in the maps and associated metadata. Errors are often overlooked in conservation studies, despite their potential negative effects on the reported extent of protection of species and ecosystems. We used 3 case studies to illustrate the implications of 3 sources of errors in reporting progress toward conservation objectives: protected areas with unknown boundaries that are replaced by buffered centroids, propagation of multiple errors in spatial data, and incomplete protected-area data sets. As of 2010, the frequency of protected areas with unknown boundaries in the World Database on Protected Areas (WDPA) caused the estimated extent of protection of 37.1% of the terrestrial Neotropical mammals to be overestimated by an average 402.8% and of 62.6% of species to be underestimated by an average 10.9%. Estimated level of protection of the world's coral reefs was 25% higher when using recent finer-resolution data on coral reefs as opposed to globally available coarse-resolution data. Accounting for additional data sets not yet incorporated into WDPA contributed up to 6.7% of additional protection to marine ecosystems in the Philippines. We suggest ways for data providers to reduce the errors in spatial and ancillary data and ways for data users to mitigate the effects of these errors on biodiversity assessments. © 2013 Society for Conservation Biology.
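
    An illustrative sketch of one error source described above: replacing a protected area whose boundary is unknown by a circular buffer around its centroid with the same reported area, and comparing the resulting overlap with a species range polygon. The geometries below are invented, not WDPA data.

        import math
        from shapely.geometry import Polygon

        # "True" protected area (an elongated rectangle) and a species range polygon.
        protected_true = Polygon([(0, 0), (10, 0), (10, 1), (0, 1)])
        species_range = Polygon([(6, -3), (14, -3), (14, 5), (6, 5)])

        # Buffered-centroid surrogate with the same reported area.
        radius = math.sqrt(protected_true.area / math.pi)
        protected_buffered = protected_true.centroid.buffer(radius)

        true_overlap = protected_true.intersection(species_range).area
        buffered_overlap = protected_buffered.intersection(species_range).area
        print(f"overlap using true boundary:     {true_overlap:.2f}")
        print(f"overlap using buffered centroid: {buffered_overlap:.2f}")
        print(f"relative error: {100 * (buffered_overlap - true_overlap) / true_overlap:.1f}%")

    Depending on the shape and position of the true boundary, the buffered centroid can either grossly overestimate or underestimate the protected fraction of a range, which is the behaviour reported for the Neotropical mammals above.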

  14. A new approach on seismic mortality estimations based on average population density

    NASA Astrophysics Data System (ADS)

    Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong

    2016-12-01

This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies have established an association between mortality estimation and seismic intensity without considering population density; in China, however, such data are not always available, especially in the very urgent relief situation following a disaster, and population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to analyze death tolls for earthquakes. The present paper employs the average population density to predict final death tolls using a case-based reasoning model. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case, and a typical earthquake that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and it opens the door to conducting final death-toll forecasts with a combined qualitative and quantitative approach. Limitations and future research are also discussed in the conclusion.
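
    A minimal case-based reasoning sketch along the lines described: retrieve the historical earthquakes most similar in magnitude and average population density and reuse their death tolls as the estimate. The historical "cases", the feature choice, and the distance weighting are invented for illustration and are not the authors' 18-case database or model.

        import numpy as np

        # columns: magnitude, average population density (people/km^2), deaths (all invented)
        historical = np.array([
            [7.9, 300.0, 70000],
            [7.3, 150.0,  5000],
            [7.0,  90.0,  1200],
            [6.6,  60.0,   600],
            [6.2, 200.0,   250],
            [6.0,  25.0,    40],
        ])

        def estimate_deaths(magnitude, pop_density, k=3):
            features = historical[:, :2]
            # normalize each feature to [0, 1] so magnitude and density are comparable
            lo, hi = features.min(axis=0), features.max(axis=0)
            scaled = (features - lo) / (hi - lo)
            query = (np.array([magnitude, pop_density]) - lo) / (hi - lo)
            dist = np.linalg.norm(scaled - query, axis=1)
            nearest = np.argsort(dist)[:k]                    # retrieve the k closest cases
            weights = 1.0 / (dist[nearest] + 1e-6)            # closer cases count more
            return float(np.average(historical[nearest, 2], weights=weights))

        print(estimate_deaths(magnitude=7.0, pop_density=80.0))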

  15. Estimations of ABL fluxes and other turbulence parameters from Doppler lidar data

    NASA Technical Reports Server (NTRS)

    Gal-Chen, Tzvi; Xu, Mei; Eberhard, Wynn

    1989-01-01

Techniques for extracting boundary layer parameters from measurements of a short-pulse CO2 Doppler lidar are described. The measurements are those collected during the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE). By continuously operating the lidar for about an hour, stable statistics of the radial velocities can be extracted. Assuming that the turbulence is horizontally homogeneous, the mean wind, its standard deviations, and the momentum fluxes were estimated. Spectral analysis of the radial velocities is also performed, from which the kinetic energy dissipation is deduced by examining the amplitude of the power spectrum in the inertial range. Finally, using the statistical form of the Navier-Stokes equations, the surface heat flux is derived as the residual balance between the vertical gradient of the third moment of the vertical velocity and the kinetic energy dissipation. Combining many measurements would normally reduce the error, provided the errors are unbiased and uncorrelated. The nature of some of the algorithms, however, is such that biased and correlated errors may be generated even though the raw measurements are unbiased and uncorrelated. Data processing procedures were developed that eliminate bias and minimize error correlation. Once bias and error correlations are accounted for, the large sample size is shown to reduce the errors substantially. The principal features of the derived turbulence statistics for two case studies are presented.
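
    A sketch of one step described above: estimating the kinetic energy dissipation rate from the inertial-range amplitude of the velocity power spectrum, assuming Taylor's frozen-turbulence hypothesis and a one-dimensional Kolmogorov constant of about 0.5. The synthetic velocity record and the chosen inertial-range band are assumptions, not the FIFE lidar processing chain.

        # E11(k1) = C1 * eps^(2/3) * k1^(-5/3) with k1 = 2*pi*f/U implies
        # S(f) = C1 * eps^(2/3) * (U / (2*pi))^(2/3) * f^(-5/3), so eps follows from S(f)*f^(5/3).
        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(3)
        fs, n, mean_wind, c1 = 10.0, 8192, 5.0, 0.5   # Hz, samples, m/s, Kolmogorov constant

        # Synthetic "radial velocity" record with a roughly -5/3 spectral slope.
        white = rng.normal(size=n)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        shaping = np.zeros_like(freqs)
        shaping[1:] = freqs[1:] ** (-5.0 / 6.0)       # amplitude ~ f^(-5/6) gives power ~ f^(-5/3)
        velocity = np.fft.irfft(np.fft.rfft(white) * shaping, n)

        f, pxx = welch(velocity, fs=fs, nperseg=1024)
        band = (f > 0.1) & (f < 2.0)                  # assumed inertial range

        eps = np.median((pxx[band] * f[band] ** (5.0 / 3.0)
                         / (c1 * (mean_wind / (2.0 * np.pi)) ** (2.0 / 3.0))) ** 1.5)
        print(f"estimated dissipation rate: {eps:.3e} m^2/s^3")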

  16. Error induced by the estimation of the corneal power and the effective lens position with a rotationally asymmetric refractive multifocal intraocular lens

    PubMed Central

    Piñero, David P.; Camps, Vicente J.; Ramón, María L.; Mateo, Verónica; Pérez-Cambrodí, Rafael J.

    2015-01-01

AIM To evaluate the prediction error in intraocular lens (IOL) power calculation for a rotationally asymmetric refractive multifocal IOL and the impact on this error of the optimization of the keratometric estimation of the corneal power and the prediction of the effective lens position (ELP). METHODS Retrospective study including a total of 25 eyes of 13 patients (age, 50 to 83y) with previous cataract surgery with implantation of the Lentis Mplus LS-312 IOL (Oculentis GmbH, Germany). In all cases, an adjusted IOL power (PIOLadj) was calculated based on Gaussian optics using a variable keratometric index value (nkadj) for the estimation of the corneal power (Pkadj) and on a new value for ELP (ELPadj) obtained by multiple regression analysis. This PIOLadj was compared with the implanted IOL power (PIOLReal) and the value proposed by three conventional formulas (Haigis, Hoffer Q and Holladay I). RESULTS PIOLReal was not significantly different from PIOLadj and the Holladay IOL power (P>0.05). In the Bland and Altman analysis, PIOLadj showed a lower mean difference (-0.07 D) and limits of agreement of 1.47 and -1.61 D when compared with PIOLReal than did the IOL power value obtained with the Holladay formula. Furthermore, ELPadj was significantly lower than the ELP calculated with other conventional formulas (P<0.01) and was found to be dependent on axial length, anterior chamber depth and Pkadj. CONCLUSION Refractive outcomes after cataract surgery with implantation of the multifocal IOL Lentis Mplus LS-312 can be optimized by minimizing the keratometric error and by estimating ELP using a mathematical expression dependent on anatomical factors. PMID:26085998

  17. Error induced by the estimation of the corneal power and the effective lens position with a rotationally asymmetric refractive multifocal intraocular lens.

    PubMed

    Piñero, David P; Camps, Vicente J; Ramón, María L; Mateo, Verónica; Pérez-Cambrodí, Rafael J

    2015-01-01

To evaluate the prediction error in intraocular lens (IOL) power calculation for a rotationally asymmetric refractive multifocal IOL and the impact on this error of the optimization of the keratometric estimation of the corneal power and the prediction of the effective lens position (ELP). Retrospective study including a total of 25 eyes of 13 patients (age, 50 to 83y) with previous cataract surgery with implantation of the Lentis Mplus LS-312 IOL (Oculentis GmbH, Germany). In all cases, an adjusted IOL power (PIOLadj) was calculated based on Gaussian optics using a variable keratometric index value (nkadj) for the estimation of the corneal power (Pkadj) and on a new value for ELP (ELPadj) obtained by multiple regression analysis. This PIOLadj was compared with the implanted IOL power (PIOLReal) and the value proposed by three conventional formulas (Haigis, Hoffer Q and Holladay I). PIOLReal was not significantly different from PIOLadj and the Holladay IOL power (P>0.05). In the Bland and Altman analysis, PIOLadj showed a lower mean difference (-0.07 D) and limits of agreement of 1.47 and -1.61 D when compared with PIOLReal than did the IOL power value obtained with the Holladay formula. Furthermore, ELPadj was significantly lower than the ELP calculated with other conventional formulas (P<0.01) and was found to be dependent on axial length, anterior chamber depth and Pkadj. Refractive outcomes after cataract surgery with implantation of the multifocal IOL Lentis Mplus LS-312 can be optimized by minimizing the keratometric error and by estimating ELP using a mathematical expression dependent on anatomical factors.
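
    A sketch of the thin-lens vergence relation that underlies theoretical IOL power formulas of the kind adjusted in this study: corneal power is derived from the anterior corneal radius with a (possibly adjusted) keratometric index, and the IOL power follows from the axial length and the effective lens position. The paper's specific regressions for nkadj and ELPadj are not reproduced; the numbers below are illustrative assumptions.

        N_AQUEOUS = 1.336   # refractive index used for aqueous/vitreous in the vergence formula

        def corneal_power(radius_mm, keratometric_index=1.3375):
            """Corneal power (D) from the anterior corneal radius via a keratometric index."""
            return (keratometric_index - 1.0) * 1000.0 / radius_mm

        def iol_power_emmetropia(axial_length_mm, corneal_power_d, elp_mm):
            """Thin-lens IOL power (D) for emmetropia given axial length and ELP."""
            al = axial_length_mm / 1000.0    # to metres
            elp = elp_mm / 1000.0
            return N_AQUEOUS / (al - elp) - N_AQUEOUS / (N_AQUEOUS / corneal_power_d - elp)

        # The same illustrative eye evaluated with the conventional keratometric index and a lower one.
        for nk in (1.3375, 1.3290):
            k = corneal_power(radius_mm=7.8, keratometric_index=nk)
            p = iol_power_emmetropia(axial_length_mm=23.5, corneal_power_d=k, elp_mm=5.0)
            print(f"nk = {nk:.4f}  K = {k:.2f} D  P_IOL = {p:.2f} D")

    Even a small change in the keratometric index shifts the estimated corneal power by about 1 D and the resulting IOL power by roughly 1.5 D in this example, which is why optimizing nk and ELP matters for the prediction error.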

  18. Spectral imaging using consumer-level devices and kernel-based regression.

    PubMed

    Heikkinen, Ville; Cámara, Clara; Hirvonen, Tapani; Penttinen, Niko

    2016-06-01

Hyperspectral reflectance factor image estimations were performed in the 400-700 nm wavelength range using a portable consumer-level laptop display as an adjustable light source for a trichromatic camera. Targets of interest were ColorChecker Classic samples, Munsell Matte samples, geometrically challenging tempera icon paintings from the turn of the 20th century, and human hands. Measurements and simulations were performed using a Nikon D80 RGB camera and a Dell Vostro 2520 laptop screen as the light source. Estimations were performed without the spectral characteristics of the devices, emphasizing simplicity in the training sets and estimation model optimization. Spectral and color error images are shown for the estimations, using line-scanned hyperspectral images as the ground truth. Estimations were performed using kernel-based regression models via a first-degree inhomogeneous polynomial kernel and a Matérn kernel, where in the latter case the median heuristic approach for model optimization and a link function for bounded estimation were evaluated. Results suggest modest training-set requirements and show that, relative to a weak training set (Digital ColorChecker SG), all estimation models achieve markedly improved accuracy with respect to the DE00 color distance (up to 99% for paintings and hands) and the Pearson distance (up to 98% for paintings and 99% for hands) when small, representative training data are used in the estimation.
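
    A sketch of kernel-based regression of the kind described: kernel ridge regression with a first-degree inhomogeneous polynomial kernel, mapping camera responses (for example, RGB recorded under several display illuminants) to sampled reflectance spectra. The training data, dimensions, and regularization parameter are invented; the Matérn-kernel and link-function variants evaluated in the paper are not reproduced.

        import numpy as np

        def poly1_kernel(a, b, c=1.0):
            """Inhomogeneous first-degree polynomial kernel: k(x, y) = x.y + c."""
            return a @ b.T + c

        def fit_kernel_ridge(x_train, y_train, lam=1e-3):
            k = poly1_kernel(x_train, x_train)
            return np.linalg.solve(k + lam * np.eye(len(x_train)), y_train)

        def predict(x_train, alpha, x_new):
            return poly1_kernel(x_new, x_train) @ alpha

        rng = np.random.default_rng(4)
        n_train, n_channels, n_wavelengths = 24, 9, 61    # e.g., RGB x 3 display lights; 400-700 nm at 5 nm
        x_train = rng.random((n_train, n_channels))       # stand-in camera responses
        y_train = rng.random((n_train, n_wavelengths))    # stand-in measured reflectances

        alpha = fit_kernel_ridge(x_train, y_train)
        spectra = predict(x_train, alpha, rng.random((5, n_channels)))
        print(spectra.shape)    # (5, 61): one estimated reflectance spectrum per query pixel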

  19. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis.

    PubMed

    Jones, Reese E; Mandadapu, Kranthi K

    2012-04-21

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.

  20. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    NASA Astrophysics Data System (ADS)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
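
    A minimal sketch of the Green-Kubo procedure described above: the transport coefficient is a prefactor times the time integral of the equilibrium flux autocorrelation function, here with a simple replica-based error bar. The adaptive stationarity checks and the Zwanzig-Ailawadi-style error bounds developed in the paper are not reproduced, and the flux series are synthetic.

        import numpy as np

        def autocorrelation(x, max_lag):
            x = x - x.mean()
            n = len(x)
            return np.array([np.dot(x[:n - lag], x[lag:]) / (n - lag) for lag in range(max_lag)])

        def green_kubo(flux, dt, prefactor, max_lag):
            acf = autocorrelation(flux, max_lag)
            running = prefactor * np.cumsum(acf) * dt     # running integral of the ACF
            return running[-1]

        rng = np.random.default_rng(5)
        dt, tau, n_steps, n_replicas = 0.002, 0.5, 50000, 8
        prefactor = 1.0                                   # e.g., V / (kB * T) for shear viscosity
        max_lag = int(5 * tau / dt)

        estimates = []
        for _ in range(n_replicas):
            # Synthetic flux: an Ornstein-Uhlenbeck-like series with correlation time tau,
            # standing in for an equilibrium heat flux or off-diagonal pressure component.
            noise = rng.normal(size=n_steps)
            flux = np.zeros(n_steps)
            a = np.exp(-dt / tau)
            for i in range(1, n_steps):
                flux[i] = a * flux[i - 1] + np.sqrt(1 - a * a) * noise[i]
            estimates.append(green_kubo(flux, dt, prefactor, max_lag))

        estimates = np.array(estimates)
        print(f"transport coefficient ~ {estimates.mean():.3f} "
              f"+/- {estimates.std(ddof=1) / np.sqrt(n_replicas):.3f}")

    For this synthetic unit-variance flux the exact value of the integral is the correlation time (0.5), so the replica mean should land near it; the spread across replicas plays the role of the statistical error that the paper's method tracks on the fly.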
