Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada
NASA Astrophysics Data System (ADS)
Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.
2015-08-01
Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study evaluates the errors in a regional flux inversion model for three Canadian provinces: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, synthetic-data experiments examined the impacts of errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions when the CFM method is used. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4, respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %), respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This indicates that the transport model error dominates the estimation error, and that individual errors can partially cancel one another and propagate non-linearly into the flux estimates. In addition, the posterior flux estimates can differ more from the target fluxes than the prior does, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. Systematic evaluation of the different components of the inversion model helps in understanding the posterior estimates and percentage errors. Stable and realistic sub-regional and monthly flux estimates can be obtained for the western region of AB/SK, but not for the eastern region of ON. This indicates that a real observation-based inversion of annual provincial emissions is likely to work for the western region, whereas improvements to the current inversion setup are needed before a real inversion is performed for the eastern region.
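As a minimal illustration of the scaling-factor estimation step described above, the sketch below runs a random-walk Metropolis (MCMC) sampler for sub-regional scaling factors given a linear transport operator, a prior flux vector, and synthetic observations. It is not the authors' CarbonTracker-based setup; the footprint matrix H, the error standard deviations, and all dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 sub-regions, 50 concentration observations.
n_reg, n_obs = 3, 50
H = rng.uniform(0.1, 1.0, size=(n_obs, n_reg))   # footprint / Jacobian (ppm per unit flux)
true_scale = np.array([0.8, 1.3, 1.0])            # "target" scaling factors
prior_flux = np.ones(n_reg)                       # prior fluxes (arbitrary units)
obs_err, prior_err = 0.2, 0.5                     # assumed error standard deviations

y = H @ (true_scale * prior_flux) + rng.normal(0, obs_err, n_obs)  # synthetic data

def log_post(s):
    """Gaussian log-posterior: model-data mismatch plus prior on the scaling factors."""
    resid = y - H @ (s * prior_flux)
    return -0.5 * np.sum(resid**2) / obs_err**2 - 0.5 * np.sum((s - 1.0)**2) / prior_err**2

# Random-walk Metropolis over the scaling factors.
s, lp = np.ones(n_reg), log_post(np.ones(n_reg))
chain = []
for _ in range(20000):
    s_new = s + rng.normal(0, 0.05, n_reg)
    lp_new = log_post(s_new)
    if np.log(rng.uniform()) < lp_new - lp:
        s, lp = s_new, lp_new
    chain.append(s.copy())

chain = np.array(chain[5000:])                    # discard burn-in
print("posterior mean scaling factors:", chain.mean(axis=0))
print("posterior std devs:            ", chain.std(axis=0))
```

A cost-function-minimization analogue would minimize the same Gaussian misfit directly, which is why its result depends on the assumed ratio of observation to prior error variances, as noted above.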
NASA Technical Reports Server (NTRS)
Carson, John M., III; Bayard, David S.
2006-01-01
G-SAMPLE is an in-flight dynamical method for use by sample collection missions to identify the presence and quantity of collected sample material. The G-SAMPLE method implements a maximum-likelihood estimator to identify the collected sample mass, based on onboard force sensor measurements, thruster firings, and a dynamics model of the spacecraft. With G-SAMPLE, sample mass identification becomes a computation rather than an extra hardware requirement; the added cost of cameras or other sensors for sample mass detection is avoided. Realistic simulation examples are provided for a spacecraft configuration with a sample collection device mounted on the end of an extended boom. In one representative example, a 1000 gram sample mass is estimated to within 110 grams (95% confidence) under realistic assumptions of thruster profile error, spacecraft parameter uncertainty, and sensor noise. For convenience to future mission design, an overall sample-mass estimation error budget is developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
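A one-degree-of-freedom sketch of the underlying idea, not the G-SAMPLE flight algorithm: with a known thruster force profile and measured accelerations, the collected mass enters the dynamics linearly, so a maximum-likelihood estimate under Gaussian sensor noise reduces to least squares. The spacecraft mass, noise level, and sample size below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

m_sc = 500.0            # known spacecraft mass, kg (assumed value)
m_sample_true = 1.0     # a 1000 g collected sample, in kg
sigma_f = 0.05          # force-sensor noise, N (assumed)

# Commanded thruster force profile; acceleration would come from the IMU in practice
# (accelerometer noise is ignored here for simplicity).
F_cmd = rng.uniform(5.0, 20.0, size=2000)                    # N
accel = F_cmd / (m_sc + m_sample_true)                       # m/s^2
F_meas = F_cmd + rng.normal(0.0, sigma_f, size=F_cmd.size)   # noisy force measurements

# ML (least-squares) estimate of total mass from F_meas ~ m_total * accel.
m_total_hat = np.sum(F_meas * accel) / np.sum(accel**2)
m_sample_hat = m_total_hat - m_sc

# 1-sigma error of the linear least-squares estimate, then an approximate 95% interval.
sigma_m = sigma_f / np.sqrt(np.sum(accel**2))
print(f"estimated sample mass: {1e3 * m_sample_hat:.0f} g "
      f"(95% interval: +/- {1.96 * 1e3 * sigma_m:.0f} g)")
```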
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2010-01-01
This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…
NASA Astrophysics Data System (ADS)
Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.
2015-08-01
We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
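The core operation of statistical optimization is an inverse-covariance-weighted combination of an observed bending-angle profile with a background profile. The minimal sketch below shows that standard linear update with assumed exponential-correlation covariance matrices; it is not the OPSv5.6 or the new dynamic algorithm, and the grid, uncertainty profiles, and correlation lengths are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

z = np.linspace(40.0, 80.0, 81)                    # impact heights, km (assumed grid)
alpha_bg = 1e-3 * np.exp(-z / 7.0)                 # background bending angle (placeholder model)
alpha_obs = alpha_bg * (1 + 0.05 * rng.standard_normal(z.size))  # noisy "observation"

def cov(sigma, corr_len):
    """Covariance matrix with exponential correlation, built from per-level sigmas."""
    d = np.abs(z[:, None] - z[None, :])
    return np.outer(sigma, sigma) * np.exp(-d / corr_len)

B = cov(0.2 * alpha_bg, 6.0)    # background error covariance (20% uncertainty, 6 km correlation)
O = cov(0.05 * alpha_bg, 1.0)   # observation error covariance (5% uncertainty, 1 km correlation)

# Statistical optimization: alpha_opt = alpha_bg + B (B + O)^-1 (alpha_obs - alpha_bg)
K = B @ np.linalg.inv(B + O)
alpha_opt = alpha_bg + K @ (alpha_obs - alpha_bg)

# Posterior error covariance of the optimized profile.
P = B - K @ B
print("relative 1-sigma error of the optimized bending angle at 80 km: "
      f"{np.sqrt(P[-1, -1]) / alpha_bg[-1]:.3f}")
```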
NASA Technical Reports Server (NTRS)
Calhoun, Philip C.; Sedlak, Joseph E.; Superfin, Emil
2011-01-01
Precision attitude determination for recent and planned space missions typically includes quaternion star trackers (ST) and a three-axis inertial reference unit (IRU). Sensor selection is based on estimates of knowledge accuracy attainable from a Kalman filter (KF), which provides the optimal solution for the case of linear dynamics with measurement and process errors characterized by random Gaussian noise with a white spectrum. Non-Gaussian systematic errors in quaternion STs are often quite large and have an unpredictable, time-varying nature, particularly when used in non-inertial pointing applications. Two filtering methods are proposed to reduce the attitude estimation error resulting from ST systematic errors: 1) an extended Kalman filter (EKF) augmented with Markov states, and 2) an unscented Kalman filter (UKF) with a periodic measurement model. Realistic assessments of the attitude estimation performance gains are demonstrated with both simulation and flight telemetry data from the Lunar Reconnaissance Orbiter.
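As a schematic of option (1), the sketch below augments a one-axis attitude-error state with a first-order Gauss-Markov state representing a slowly varying star-tracker systematic error, and runs a standard Kalman filter on the augmented model. All dynamics, time constants, and noise levels are hypothetical and far simpler than the flight filter.

```python
import numpy as np

rng = np.random.default_rng(3)

dt, tau = 1.0, 200.0                  # time step (s) and Markov correlation time (s)
phi_b = np.exp(-dt / tau)             # Gauss-Markov state transition
F = np.array([[1.0, 0.0],             # state: [attitude error, ST Markov bias]
              [0.0, phi_b]])
H = np.array([[1.0, 1.0]])            # the ST measures attitude error plus its own bias
Q = np.diag([1e-8, 2e-9])             # process noise (rad^2), assumed
R = np.array([[1e-7]])                # ST measurement noise (rad^2), assumed

x = np.zeros(2)                       # filter state
P = np.diag([1e-6, 1e-6])             # initial covariance
x_true = np.array([0.0, 3e-4])        # true attitude error and systematic bias

for k in range(500):
    # Truth propagation: the bias follows the slowly decaying Gauss-Markov model,
    # which is what lets the filter separate it from the attitude-error state.
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]))

    # Kalman predict / update on the augmented state.
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated ST bias (rad):", x[1], " current true bias (rad):", x_true[1])
```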
Error Analyses of the North Alabama Lightning Mapping Array (LMA)
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; Moradkhani, H.; Marshall, L. A.; Sharma, A.; Geenens, G.
2016-12-01
Effective combination of model simulations and observations through Data Assimilation (DA) depends heavily on uncertainty characterisation. Many traditional methods for quantifying model uncertainty in DA require some level of subjectivity (by way of tuning parameters or by assuming Gaussian statistics). Furthermore, the focus is typically on only estimating the first and second moments. We propose a data-driven methodology to estimate the full distributional form of model uncertainty, i.e. the transition density p(xt|xt-1). All sources of uncertainty associated with the model simulations are considered collectively, without needing to devise stochastic perturbations for individual components (such as model input, parameter and structural uncertainty). A training period is used to derive the distribution of errors in observed variables conditioned on hidden states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The theory behind the framework and case study applications are discussed in detail. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard perturbation approach.
NASA Technical Reports Server (NTRS)
Chelton, Dudley B.; Schlax, Michael G.
1991-01-01
The sampling error of an arbitrary linear estimate of a time-averaged quantity constructed from a time series of irregularly spaced observations at a fixed location is quantified through a general formalism. The method is applied to satellite observations of chlorophyll from the coastal zone color scanner. The two specific linear estimates under consideration are the composite average, formed from the simple average of all observations within the averaging period, and the optimal estimate, formed by minimizing the mean squared error of the temporal average based on all the observations in the time series. In practice the optimal weights must be computed from imperfectly known signal and measurement error statistics; the resulting suboptimal estimates are shown to be more accurate than composite averages. Suboptimal estimates are also found to be nearly as accurate as optimal estimates based on the correct signal and measurement error variances and correlation functions for realistic ranges of these parameters, which makes them a viable practical alternative to the composite average method generally employed at present.
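A small numerical sketch of the minimum-mean-squared-error time average discussed above, written in the standard Gauss-Markov (objective mapping) form rather than the paper's exact notation: the optimal weights follow from an assumed signal autocovariance and measurement noise variance, and their expected error can be compared directly with that of the composite (simple) average. The correlation time, variances, and sampling times below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Irregular observation times within a 30-day averaging period (days).
t_obs = np.sort(rng.uniform(0.0, 30.0, size=12))
t_grid = np.linspace(0.0, 30.0, 301)            # dense grid defining the true time average

sig_var, noise_var, tau = 1.0, 0.3, 5.0         # signal variance, noise variance, correlation time

def c(t1, t2):
    """Assumed signal autocovariance: Gaussian with e-folding time tau."""
    return sig_var * np.exp(-((t1[:, None] - t2[None, :]) / tau) ** 2)

# Data covariance (signal + measurement noise) and data / time-average cross covariance.
C_dd = c(t_obs, t_obs) + noise_var * np.eye(t_obs.size)
C_da = c(t_obs, t_grid).mean(axis=1)

w_opt = np.linalg.solve(C_dd, C_da)             # optimal (minimum-MSE) weights
w_comp = np.full(t_obs.size, 1.0 / t_obs.size)  # composite (simple) average weights

def expected_mse(w):
    """Expected squared error of the weighted estimate, for the assumed statistics."""
    var_avg = c(t_grid, t_grid).mean()          # variance of the true time average
    return var_avg - 2 * w @ C_da + w @ C_dd @ w

print("expected MSE, composite average:", expected_mse(w_comp))
print("expected MSE, optimal estimate: ", expected_mse(w_opt))
```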
Estimation of the sea surface's two-scale backscatter parameters
NASA Technical Reports Server (NTRS)
Wentz, F. J.
1978-01-01
The relationship between the sea-surface normalized radar cross section and the friction velocity vector is determined using a parametric two-scale scattering model. The model parameters are found from a nonlinear maximum likelihood estimation. The estimation is based on aircraft scatterometer measurements and the sea-surface anemometer measurements collected during the JONSWAP '75 experiment. The estimates of the ten model parameters converge to realistic values that are in good agreement with the available oceanographic data. The rms discrepancy between the model and the cross section measurements is 0.7 dB, which is the rms sum of a 0.3 dB average measurement error and a 0.6 dB modeling error.
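For reference, the quoted error budget is consistent with combining the two components in quadrature (root-sum-square):

```latex
\sqrt{(0.3\,\mathrm{dB})^2 + (0.6\,\mathrm{dB})^2} = \sqrt{0.45}\,\mathrm{dB} \approx 0.67\,\mathrm{dB} \approx 0.7\,\mathrm{dB}
```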
North Alabama Lightning Mapping Array (LMA): VHF Source Retrieval Algorithm and Error Analyses
NASA Technical Reports Server (NTRS)
Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J.; Bailey, J.; Krider, E. P.; Bateman, M. G.; Boccippio, D.
2003-01-01
Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA Marshall Space Flight Center (MSFC) and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix Theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results. However, for many source locations, the Curvature Matrix Theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
Decoy-state quantum key distribution with more than three types of photon intensity pulses
NASA Astrophysics Data System (ADS)
Chau, H. F.
2018-04-01
The decoy-state method closes source security loopholes in quantum key distribution (QKD) using a laser source. In this method, accurate estimates of the detection rates of vacuum and single-photon events plus the error rate of single-photon events are needed to give a good enough lower bound of the secret key rate. Nonetheless, the current estimation method for these detection and error rates, which uses three types of photon intensities, is accurate up to about 1 % relative error. Here I report an experimentally feasible way that greatly improves these estimates and hence increases the one-way key rate of the BB84 QKD protocol with unbiased bases selection by at least 20% on average in realistic settings. The major tricks are the use of more than three types of photon intensities plus the fact that estimating bounds of the above detection and error rates is numerically stable, although these bounds are related to the inversion of a high condition number matrix.
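A rough sketch of the linear-algebra structure behind these estimates (not the paper's bound-optimization procedure): for a phase-randomized laser source, the measured gains are Poissonian mixtures of the photon-number yields, Q_k = sum_n e^(-mu_k) mu_k^n / n! * Y_n, so using several intensities gives a linear system whose condition number grows quickly. The intensities, yields, and truncation below are illustrative only.

```python
import numpy as np
from math import factorial

mus = np.array([0.0, 0.05, 0.1, 0.2, 0.4, 0.6])   # photon intensities (assumed)
n = np.arange(6)                                   # truncate the photon-number expansion at 5
Y_true = np.array([1e-5, 5e-3, 1e-2, 1.5e-2, 2e-2, 2.5e-2])  # illustrative yields Y_0..Y_5

# Poissonian photon-number statistics: Q_k = sum_n e^(-mu_k) mu_k^n / n! * Y_n
fact = np.array([factorial(i) for i in n], dtype=float)
A = np.exp(-mus[:, None]) * mus[:, None] ** n / fact

rng = np.random.default_rng(5)
Q_meas = A @ Y_true * (1 + 1e-3 * rng.standard_normal(mus.size))  # ~0.1% statistical error

# Conditioning of the inversion worsens quickly as more yields are retained.
print("condition number of A:", np.linalg.cond(A))

# Naive least-squares recovery of the vacuum (Y_0) and single-photon (Y_1) yields.
Y_hat, *_ = np.linalg.lstsq(A, Q_meas, rcond=None)
print("estimated Y_0, Y_1:", Y_hat[0], Y_hat[1])
print("true      Y_0, Y_1:", Y_true[0], Y_true[1])
```

The large condition number is why a naive inversion amplifies small statistical errors, and why the numerically stable bound estimation discussed above is preferable.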
Taming the Hurricane of Acquisition Cost Growth - Or at Least Predicting It
2015-01-01
the practice of generating two different cost estimates, dubbed Will Cost and Should Cost. The Should Cost estimate is “based on realistic tech... to predict estimate error in similar future programs. This method is dubbed “macro-stochastic” estimation (Ryan, Schubert Kabban, Jacques...
Shimansky, Y P
2011-05-01
It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
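A small numerical sketch of the central claim: when the cost of a parameter estimation error is strongly asymmetric with respect to error direction, the estimate minimizing expected cost shifts away from the maximum-likelihood value. The Gaussian posterior and piecewise-linear cost below are arbitrary illustrative choices, not the paper's model.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Posterior belief about a task parameter: Gaussian with mean 0 (the ML value) and sd 1.
rng = np.random.default_rng(6)
theta_samples = rng.normal(0.0, 1.0, size=200_000)

def expected_cost(est, c_under=1.0, c_over=4.0):
    """Asymmetric cost: overestimation errors are 4x as costly as underestimation."""
    err = est - theta_samples
    return np.mean(np.where(err > 0, c_over * err, -c_under * err))

opt = minimize_scalar(expected_cost, bounds=(-3, 3), method="bounded")
print("ML / posterior-mean estimate: 0.0")
print("cost-minimizing estimate:    ", round(opt.x, 3))  # shifted toward underestimation
```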
Development of the PCAD Model to Assess Biological Significance of Acoustic Disturbance
2013-09-30
substantial pre-existing knowledge of foraging patterns, life-history schedules, and demographics. Therefore, it is essential to use well-studied species to... transiting areas of the post-molt migration. Using a bootstrapping approach, we simulated thousands of disturbances to achieve realistic error estimates... resident population). Given seasonal differences in calving, causes of mortality, and movement patterns, we will estimate demographic rates on a
Global modeling of land water and energy balances. Part I: The land dynamics (LaD) model
Milly, P.C.D.; Shmakin, A.B.
2002-01-01
A simple model of large-scale land (continental) water and energy balances is presented. The model is an extension of an earlier scheme with a record of successful application in climate modeling. The most important changes from the original model include 1) introduction of non-water-stressed stomatal control of transpiration, in order to correct a tendency toward excessive evaporation; 2) conversion from globally constant parameters (with the exception of vegetation-dependent snow-free surface albedo) to more complete vegetation and soil dependence of all parameters, in order to provide more realistic representation of geographic variations in water and energy balances and to enable model-based investigations of land-cover change; 3) introduction of soil sensible heat storage and transport, in order to move toward realistic diurnal-cycle modeling; 4) a groundwater (saturated-zone) storage reservoir, in order to provide more realistic temporal variability of runoff; and 5) a rudimentary runoff-routing scheme for delivery of runoff to the ocean, in order to provide realistic freshwater forcing of the ocean general circulation model component of a global climate model. The new model is tested with forcing from the International Satellite Land Surface Climatology Project Initiative I global dataset and a recently produced observation-based water-balance dataset for major river basins of the world. Model performance is evaluated by comparing computed and observed runoff ratios from many major river basins of the world. Special attention is given to distinguishing between two components of the apparent runoff ratio error: the part due to intrinsic model error and the part due to errors in the assumed precipitation forcing. The pattern of discrepancies between modeled and observed runoff ratios is consistent with results from a companion study of precipitation estimation errors. The new model is tuned by adjustment of a globally constant scale factor for non-water-stressed stomatal resistance. After tuning, significant overestimation of runoff is found in environments where an overall arid climate includes a brief but intense wet season. It is shown that this error may be explained by the neglect of upward soil water diffusion from below the root zone during the dry season. With the exception of such basins, and in the absence of precipitation errors, it is estimated that annual runoff ratios simulated by the model would have a root-mean-square error of about 0.05. The new model matches observations better than its predecessor, which has a negative runoff bias and greater scatter.
Dynamic Modeling from Flight Data with Unknown Time Skews
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2016-01-01
A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.
Taking error into account when fitting models using Approximate Bayesian Computation.
van der Vaart, Elske; Prangle, Dennis; Sibly, Richard M
2018-03-01
Stochastic computer simulations are often the only practical way of answering questions relating to ecological management. However, due to their complexity, such models are difficult to calibrate and evaluate. Approximate Bayesian Computation (ABC) offers an increasingly popular approach to this problem, widely applied across a variety of fields. However, ensuring the accuracy of ABC's estimates has been difficult. Here, we obtain more accurate estimates by incorporating estimation of error into the ABC protocol. We show how this can be done where the data consist of repeated measures of the same quantity and errors may be assumed to be normally distributed and independent. We then derive the correct acceptance probabilities for a probabilistic ABC algorithm, and update the coverage test with which accuracy is assessed. We apply this method, which we call error-calibrated ABC, to a toy example and a realistic 14-parameter simulation model of earthworms that is used in environmental risk assessment. A comparison with exact methods and the diagnostic coverage test show that our approach improves estimation of parameter values and their credible intervals for both models. © 2017 by the Ecological Society of America.
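A toy sketch of the probabilistic-acceptance idea for repeated, normally distributed measurements. It is not the paper's error-calibrated ABC algorithm: the simulator, the Gaussian acceptance kernel, and the way the replicate-based error estimate enters are all simplifications chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)

# "Observed" data: repeated measures of the same quantity with normal, independent error.
theta_true, sigma_true = 2.0, 0.5
y_obs = rng.normal(theta_true, sigma_true, size=20)
sigma_hat = y_obs.std(ddof=1)                  # error sd estimated from the replicates

def simulate(theta):
    """Toy simulator: the model predicts a single underlying value."""
    return theta

def log_accept_prob(theta):
    """Gaussian kernel: log-likelihood of the replicates given the simulated value."""
    mu_sim = simulate(theta)
    return -0.5 * np.sum((y_obs - mu_sim) ** 2) / sigma_hat**2

# Rejection ABC with probabilistic acceptance (normalized by the best possible fit).
log_c = log_accept_prob(y_obs.mean())
accepted = []
for _ in range(200_000):
    theta = rng.uniform(-5.0, 5.0)             # draw from a flat prior
    if np.log(rng.uniform()) < log_accept_prob(theta) - log_c:
        accepted.append(theta)

accepted = np.array(accepted)
print("posterior mean and sd:", accepted.mean(), accepted.std())
print("acceptance rate:", accepted.size / 200_000)
```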
Dettmer, Jan; Dosso, Stan E
2012-10-01
This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allows inversion without assuming any particular parametrization by relaxing model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.
Application and Evaluation of MODIS LAI, FPAR, and Albedo ...
MODIS vegetation and albedo products provide a more realistic representation of surface conditions for input to the WRF/CMAQ modeling system. However, the initial evaluation of ingesting MODIS data into the system showed mixed results, with increased bias and error for 2-m temperature and reduced bias and error for 2-m mixing ratio. Recently, the WRF/CMAQ land surface and boundary layer processes have been updated. In this study, MODIS vegetation and albedo data are input to the updated WRF/CMAQ meteorology and air quality simulations for 2006 over a North American (NA) 12-km domain. The evaluation of the simulation results shows that the updated WRF/CMAQ system improves 2-m temperature estimates over the pre-update base modeling system estimates. The MODIS vegetation input produces a realistic spring green-up that progresses through time from south to north. Overall, MODIS input reduces 2-m mixing ratio bias during the growing season. The NA west shows larger positive O3 bias during the growing season because of reduced gas phase deposition resulting from lower O3 deposition velocities driven by reduced vegetation cover. The O3 bias increase associated with the realistic vegetation representation indicates that further improvement may be needed in the WRF/CMAQ system. The National Exposure Research Laboratory’s Atmospheric Modeling Division (AMAD) conducts research in support of EPA’s mission to protect human health and the environment. AMAD’s rese
An investigation into exoplanet transits and uncertainties
NASA Astrophysics Data System (ADS)
Ji, Y.; Banks, T.; Budding, E.; Rhodes, M. D.
2017-06-01
A simple transit model is described along with tests of this model against published results for 4 exoplanet systems (Kepler-1, 2, 8, and 77). Data from the Kepler mission are used. The Markov Chain Monte Carlo (MCMC) method is applied to obtain realistic error estimates. Optimisation of limb darkening coefficients is subject to data quality. It is more likely for MCMC to derive an empirical limb darkening coefficient for light curves with S/N (signal to noise) above 15. Finally, the model is applied to Kepler data for 4 Kepler candidate systems (KOI 760.01, 767.01, 802.01, and 824.01) with previously unpublished results. Error estimates for these systems are obtained via the MCMC method.
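A compact sketch of MCMC-based error estimation for a transit fit, using a crude box-shaped transit model instead of a limb-darkened light curve; the depth, duration, noise level, and proposal widths are placeholders rather than values from the Kepler analyses above.

```python
import numpy as np

rng = np.random.default_rng(8)

t = np.linspace(-0.2, 0.2, 400)                    # time from mid-transit (days)

def box_transit(t, depth, half_dur):
    """Toy box transit: flux dips by `depth` for |t| < half_dur."""
    flux = np.ones_like(t)
    flux[np.abs(t) < half_dur] -= depth
    return flux

depth_true, half_dur_true, sigma = 0.01, 0.08, 0.002
y = box_transit(t, depth_true, half_dur_true) + rng.normal(0, sigma, t.size)

def log_like(p):
    depth, half_dur = p
    if not (0 < depth < 0.1 and 0 < half_dur < 0.2):
        return -np.inf                             # flat priors via hard bounds
    return -0.5 * np.sum((y - box_transit(t, depth, half_dur)) ** 2) / sigma**2

# Metropolis sampler over (depth, half-duration); posterior spreads give the error estimates.
p, lp = np.array([0.005, 0.05]), log_like([0.005, 0.05])
chain = []
for _ in range(30_000):
    p_new = p + rng.normal(0, [2e-4, 2e-3])
    lp_new = log_like(p_new)
    if np.log(rng.uniform()) < lp_new - lp:
        p, lp = p_new, lp_new
    chain.append(p.copy())

chain = np.array(chain[10_000:])                   # discard burn-in
print("depth             = %.4f +/- %.4f" % (chain[:, 0].mean(), chain[:, 0].std()))
print("half-duration (d) = %.4f +/- %.4f" % (chain[:, 1].mean(), chain[:, 1].std()))
```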
GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelbaum, R.; Rowe, B.; Armstrong, R.
2015-05-01
We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology
Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; ...
2015-05-11
This study presents first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
NASA Astrophysics Data System (ADS)
Khaki, M.; Schumacher, M.; Forootan, E.; Kuhn, M.; Awange, J. L.; van Dijk, A. I. J. M.
2017-10-01
Assimilation of terrestrial water storage (TWS) information from the Gravity Recovery And Climate Experiment (GRACE) satellite mission can provide significant improvements in hydrological modelling. However, the rather coarse spatial resolution of GRACE TWS and its spatially correlated errors pose considerable challenges for achieving realistic assimilation results. Consequently, successful data assimilation depends on rigorous modelling of the full error covariance matrix of the GRACE TWS estimates, as well as realistic error behavior for hydrological model simulations. In this study, we assess the application of local analysis (LA) to maximize the contribution of GRACE TWS in hydrological data assimilation. For this, we assimilate GRACE TWS into the World-Wide Water Resources Assessment system (W3RA) over the Australian continent while applying LA and accounting for existing spatial correlations using the full error covariance matrix. GRACE TWS data is applied with different spatial resolutions including 1° to 5° grids, as well as basin averages. The ensemble-based sequential filtering technique of the Square Root Analysis (SQRA) is applied to assimilate TWS data into W3RA. For each spatial scale, the performance of the data assimilation is assessed through comparison with independent in-situ ground water and soil moisture observations. Overall, the results demonstrate that LA is able to stabilize the inversion process (within the implementation of the SQRA filter), leading to smaller errors for all spatial scales considered, with an average RMSE improvement of 54% (e.g., 52.23 mm down to 26.80 mm) for all the cases with respect to groundwater in-situ measurements. Validating the assimilated results with groundwater observations indicates that LA leads to 13% better (in terms of RMSE) assimilation results compared to the cases with Gaussian error assumptions. This highlights the great potential of LA and the use of the full error covariance matrix of GRACE TWS estimates for improved data assimilation results.
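A schematic of the local-analysis idea in an ensemble framework, not the W3RA/SQRA implementation: each grid cell is updated using ensemble covariances tapered by distance to the observations, with a spatially correlated observation error covariance standing in for the full GRACE TWS error covariance. A perturbed-observation EnKF is used instead of a square-root filter for brevity, and the 1-D grid, localization scale, and error levels are invented.

```python
import numpy as np

rng = np.random.default_rng(9)

n_grid, n_ens, n_obs = 60, 30, 12
x_coord = np.arange(n_grid, dtype=float)                 # 1-D "grid" of storage states
obs_idx = np.linspace(2, n_grid - 3, n_obs).astype(int)  # coarse observation locations

x_true = 50.0 * np.sin(2 * np.pi * x_coord / n_grid)     # "true" TWS anomaly (mm)
ens = (x_true - 30.0)[:, None] + rng.normal(0, 20.0, (n_grid, n_ens))  # biased prior ensemble

# Correlated observation error covariance R (coarse, GRACE-like errors, assumed).
d_obs = np.abs(obs_idx[:, None] - obs_idx[None, :])
R = 15.0**2 * np.exp(-d_obs / 10.0)
y = x_true[obs_idx] + rng.multivariate_normal(np.zeros(n_obs), R)

def taper(dist, c=10.0):
    """Simple distance-based localization taper (a Gaspari-Cohn function would also work)."""
    return np.exp(-(dist / c) ** 2)

# Ensemble perturbations and sample covariances.
X = ens - ens.mean(axis=1, keepdims=True)
HX = X[obs_idx, :]
Pxy = (X @ HX.T) / (n_ens - 1)                           # grid-to-obs cross covariance
Pyy = (HX @ HX.T) / (n_ens - 1)                          # obs-space covariance

# Local analysis: update each grid cell with its own tapered gain row.
y_pert = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
ens_a = ens.copy()
for i in range(n_grid):
    rho = taper(np.abs(obs_idx - x_coord[i]))            # localization weights for cell i
    K_i = (rho * Pxy[i, :]) @ np.linalg.inv(Pyy + R)
    ens_a[i, :] = ens[i, :] + K_i @ (y_pert - ens[obs_idx, :])

print("prior RMSE:    ", np.sqrt(np.mean((ens.mean(axis=1) - x_true) ** 2)))
print("posterior RMSE:", np.sqrt(np.mean((ens_a.mean(axis=1) - x_true) ** 2)))
```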
Lower limb estimation from sparse landmarks using an articulated shape model.
Zhang, Ju; Fernandez, Justin; Hislop-Jambrich, Jacqui; Besier, Thor F
2016-12-08
Rapid generation of lower limb musculoskeletal models is essential for clinically applicable patient-specific gait modeling. Estimation of muscle and joint contact forces requires accurate representation of bone geometry and pose, as well as their muscle attachment sites, which define muscle moment arms. Motion-capture is a routine part of gait assessment but contains relatively sparse geometric information. Standard methods for creating customized models from motion-capture data scale a reference model without considering natural shape variations. We present an articulated statistical shape model of the left lower limb with embedded anatomical landmarks and muscle attachment regions. This model is used in an automatic workflow, implemented in an easy-to-use software application, that robustly and accurately estimates realistic lower limb bone geometry, pose, and muscle attachment regions from seven commonly used motion-capture landmarks. Estimated bone models were validated on noise-free marker positions to have a lower (p=0.001) surface-to-surface root-mean-squared error of 4.28mm, compared to 5.22mm using standard isotropic scaling. Errors at a variety of anatomical landmarks were also lower (8.6mm versus 10.8mm, p=0.001). We improve upon standard lower limb model scaling methods with shape model-constrained realistic bone geometries, regional muscle attachment sites, and higher accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt
2016-08-01
A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g. that volume flow is underestimated by 15%, when the scan plane is off-axis with the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipsis to cross-sectional scans of the fistulas, the major axis was on average 10.2mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis, gave a significant (p=0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
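A back-of-the-envelope sketch of the two corrections discussed above, using the average dimensions quoted in the text. The parabolic-velocity-profile assumption behind the off-axis factor is mine, not the paper's, so the numbers are indicative only and do not reproduce the paper's 15% figure.

```python
import numpy as np

# Quantities quoted in the text (averages over the 20 fistulas).
major_axis = 10.2e-3                 # m
minor_axis = major_axis / 1.086      # the minor axis is ~8.6% smaller
beam_offset = 1.5e-3                 # m, beam distance from the vessel centre

# (1) Area correction: ellipse instead of a circle built on the measured diameter.
area_circle = np.pi * (major_axis / 2) ** 2
area_ellipse = np.pi * (major_axis / 2) * (minor_axis / 2)
print("area overestimation with a circular assumption: "
      f"{100 * (area_circle / area_ellipse - 1):.1f}%")

# (2) Off-axis correction under an assumed parabolic velocity profile: the mean velocity
# sampled in a plane offset by d from the centre is lower than in the central plane
# by a factor (1 - d^2 / a^2), with a the semi-major axis.
a = major_axis / 2
offaxis_factor = 1.0 - (beam_offset / a) ** 2
print("off-axis velocity underestimation (parabolic profile): "
      f"{100 * (1 - offaxis_factor):.1f}%")

# Example: correcting a raw volume-flow estimate (hypothetical value).
q_raw = 900e-6 / 60.0                # 900 mL/min expressed in m^3/s, placeholder
q_corr = q_raw * (area_ellipse / area_circle) / offaxis_factor
print(f"corrected flow: {q_corr * 60e6:.0f} mL/min")
```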
Han, J W; Van Leeuwen, G M; Mizushina, S; Van de Kamer, J B; Maruyama, K; Sugiura, T; Azzopardi, D V; Edwards, A D
2001-07-01
In this study we present a design for a multi-frequency microwave radiometer aimed at prolonged monitoring of deep brain temperature in newborn infants and suitable for use during hypothermic neural rescue therapy. We identify appropriate hardware to measure brightness temperature and evaluate the accuracy of the measurements. We describe a method to estimate the tissue temperature distribution from measured brightness temperatures which uses the results of numerical simulations of the tissue temperature as well as the propagation of the microwaves in a realistic detailed three-dimensional infant head model. The temperature retrieval method is then used to evaluate how the statistical fluctuations in the measured brightness temperatures limit the confidence interval for the estimated temperature: for an 18 degrees C temperature differential between cooled surface and deep brain we found a standard error in the estimated central brain temperature of 0.75 degrees C. Evaluation of the systematic errors arising from inaccuracies in model parameters showed that realistic deviations in tissue parameters have little impact compared to uncertainty in the thickness of the bolus between the receiving antenna and the infant's head or in the skull thickness. This highlights the need to pay particular attention to these latter parameters in future practical implementation of the technique.
NASA Technical Reports Server (NTRS)
Sun, Jielun
1993-01-01
Results are presented of a test of the physically based total column water vapor retrieval algorithm of Wentz (1992) for sensitivity to realistic vertical distributions of temperature and water vapor. The ECMWF monthly averaged temperature and humidity fields are used to simulate the spatial pattern of systematic retrieval error of total column water vapor due to this sensitivity. The estimated systematic error is within 0.1 g/sq cm over about 70 percent of the global ocean area; systematic errors greater than 0.3 g/sq cm are expected to exist only over a few well-defined regions, about 3 percent of the global oceans, assuming that the global mean value is unbiased.
Revision of laser-induced damage threshold evaluation from damage probability data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bataviciute, Gintare; Grigas, Povilas; Smalakys, Linas
2013-04-15
In this study, the applicability of the commonly used Damage Frequency Method (DFM) is addressed in the context of Laser-Induced Damage Threshold (LIDT) testing with pulsed lasers. A simplified computer model representing the statistical interaction between laser irradiation and randomly distributed damage precursors is applied for Monte Carlo experiments. The reproducibility of LIDT predicted from DFM is examined under both idealized and realistic laser irradiation conditions by performing numerical 1-on-1 tests. The widely accepted linear fitting resulted in systematic errors when estimating LIDT and its error bars. For the same purpose, a Bayesian approach was proposed. A novel concept of parametric regression based on a varying kernel and maximum likelihood fitting technique is introduced and studied. Such an approach exhibited clear advantages over conventional linear fitting and led to more reproducible LIDT evaluation. Furthermore, LIDT error bars are obtained as a natural outcome of the parametric fitting and exhibit realistic values. The proposed technique has been validated on two conventionally polished fused silica samples (355 nm, 5.7 ns).
Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers
NASA Technical Reports Server (NTRS)
Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.
2012-01-01
Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.
Single point estimation of phenytoin dosing: a reappraisal.
Koup, J R; Gibaldi, M; Godolphin, W
1981-11-01
A previously proposed method for estimation of phenytoin dosing requirement using a single serum sample obtained 24 hours after intravenous loading dose (18 mg/Kg) has been re-evaluated. Using more realistic values for the volume of distribution of phenytoin (0.4 to 1.2 L/Kg), simulations indicate that the proposed method will fail to consistently predict dosage requirements. Additional simulations indicate that two samples obtained during the 24 hour interval following the iv loading dose could be used to more reliably predict phenytoin dose requirement. Because of the nonlinear relationship which exists between phenytoin dose administration rate (RO) and the mean steady state serum concentration (CSS), small errors in prediction of the required RO result in much larger errors in CSS.
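The nonlinear relationship referred to above is commonly described by Michaelis-Menten elimination, for which the mean steady-state concentration is CSS = Km·RO / (Vmax − RO). The sketch below uses typical-looking parameter values (my assumptions, not the paper's) to show how a modest error in the predicted dosing rate is amplified into a much larger error in CSS.

```python
# Michaelis-Menten steady state: Css = Km * R0 / (Vmax - R0)
Vmax = 500.0        # mg/day, assumed maximum elimination rate
Km = 4.0            # mg/L, assumed Michaelis constant

def css(r0):
    """Mean steady-state phenytoin concentration (mg/L) for dosing rate r0 (mg/day)."""
    return Km * r0 / (Vmax - r0)

r0_target = 400.0                       # dosing rate that achieves the desired Css
print(f"target Css: {css(r0_target):.1f} mg/L")
for err in (0.05, 0.10):                # 5% and 10% errors in the predicted dose rate
    r0 = r0_target * (1 + err)
    rel_change = css(r0) / css(r0_target) - 1
    print(f"{100 * err:.0f}% dose-rate error -> {100 * rel_change:.0f}% change in Css")
```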
Development of an OSSE Framework for a Global Atmospheric Data Assimilation System
NASA Technical Reports Server (NTRS)
Gelaro, Ronald; Errico, Ronald M.; Prive, N.
2012-01-01
Observing system simulation experiments (OSSEs) are powerful tools for estimating the usefulness of various configurations of envisioned observing systems and data assimilation techniques. Their utility stems from their being conducted in an entirely simulated context, utilizing simulated observations having simulated errors and drawn from a simulation of the earth's environment. Observations are generated by applying physically based algorithms to the simulated state, such as performed during data assimilation or using other appropriate algorithms. Adding realistic instrument plus representativeness errors, including their biases and correlations, can be critical for obtaining realistic assessments of the impact of a proposed observing system or analysis technique. If estimates of the expected accuracy of proposed observations are realistic, then the OSSE can be also used to learn how best to utilize the new information, accelerating its transition to operations once the real data are available. As with any inferences from simulations, however, it is first imperative that some baseline OSSEs are performed and well validated against corresponding results obtained with a real observing system. This talk provides an overview of, and highlights critical issues related to, the development of an OSSE framework for the tropospheric weather prediction component of the NASA GEOS-5 global atmospheric data assimilation system. The framework includes all existing observations having significant impact on short-term forecast skill. Its validity has been carefully assessed using a range of metrics that can be evaluated in both the OSSE and real contexts, including adjoint-based estimates of observation impact. A preliminary application to the Aeolus Doppler wind lidar mission, scheduled for launch by the European Space Agency in 2014, has also been investigated.
Estimating error rates for firearm evidence identifications in forensic science
Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan
2018-01-01
Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
Estimating error rates for firearm evidence identifications in forensic science.
Song, John; Vorburger, Theodore V; Chu, Wei; Yen, James; Soons, Johannes A; Ott, Daniel B; Zhang, Nien Fan
2018-03-01
Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. Published by Elsevier B.V.
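A simplified sketch of how score distributions can be turned into error-rate estimates. The real CMC work fits specific probability-mass-function models to measured CMC scores; here plain binomial models with invented per-cell probabilities, cell counts, and declared-match threshold stand in for those.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(10)

n_cells = 30                       # number of correlation cells per comparison (assumed)
p_match, p_nonmatch = 0.7, 0.03    # per-cell "success" probabilities (illustrative)

# Simulated CMC counts for known matches and known non-matches.
cmc_match = rng.binomial(n_cells, p_match, size=40)
cmc_nonmatch = rng.binomial(n_cells, p_nonmatch, size=95)

# Fit a binomial probability-mass-function model to each population.
p_m_hat = cmc_match.mean() / n_cells
p_n_hat = cmc_nonmatch.mean() / n_cells

threshold = 6                      # a declared match requires at least this many CMCs
# Cumulative error rates from the fitted models:
false_positive = binom.sf(threshold - 1, n_cells, p_n_hat)   # P(CMC >= threshold | non-match)
false_negative = binom.cdf(threshold - 1, n_cells, p_m_hat)  # P(CMC < threshold  | match)

print(f"estimated false-positive rate: {false_positive:.2e}")
print(f"estimated false-negative rate: {false_negative:.2e}")
```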
NASA Technical Reports Server (NTRS)
Fukumori, Ichiro; Malanotte-Rizzoli, Paola
1995-01-01
A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximation of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements is examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.
NASA Astrophysics Data System (ADS)
Fukumori, Ichiro; Malanotte-Rizzoli, Paola
1995-04-01
A practical method of data assimilation for use with large, nonlinear, ocean general circulation models is explored. A Kalman filter based on approximations of the state error covariance matrix is presented, employing a reduction of the effective model dimension, the error's asymptotic steady state limit, and a time-invariant linearization of the dynamic model for the error integration. The approximations lead to dramatic computational savings in applying estimation theory to large complex systems. We examine the utility of the approximate filter in assimilating different measurement types using a twin experiment of an idealized Gulf Stream. A nonlinear primitive equation model of an unstable east-west jet is studied with a state dimension exceeding 170,000 elements. Assimilation of various pseudomeasurements is examined, including velocity, density, and volume transport at localized arrays and realistic distributions of satellite altimetry and acoustic tomography observations. Results are compared in terms of their effects on the accuracies of the estimation. The approximate filter is shown to outperform an empirical nudging scheme used in a previous study. The examples demonstrate that useful approximate estimation errors can be computed in a practical manner for general circulation models.
Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten
2016-10-02
Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method.
Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten
2016-01-01
Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method. PMID:27706099
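A small sketch of the proposed measure: for a discrete posterior over candidate locations produced by probabilistic Wi-Fi fingerprinting, the conditional entropy for the current measurement is the Shannon entropy of the normalized posterior, and it can be reported alongside the location error. The fingerprint database and Gaussian sensor model below are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical fingerprint database: mean RSSI of 4 access points at 25 grid locations.
n_loc, n_ap, sigma = 25, 4, 4.0          # sigma: assumed RSSI noise (dB)
fingerprints = rng.uniform(-90, -40, size=(n_loc, n_ap))

def posterior(rssi_meas):
    """Gaussian sensor model with a uniform prior over the discrete locations."""
    log_p = -0.5 * np.sum((fingerprints - rssi_meas) ** 2, axis=1) / sigma**2
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

def conditional_entropy(p):
    """Shannon entropy (bits) of the posterior for this measurement."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

true_loc = 7
rssi = fingerprints[true_loc] + rng.normal(0, sigma, n_ap)
p = posterior(rssi)

est_loc = int(np.argmax(p))
print("estimated location:", est_loc, " location error (cells):", abs(est_loc - true_loc))
print("conditional entropy: %.2f bits (maximum possible: %.2f bits)"
      % (conditional_entropy(p), np.log2(n_loc)))
```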
Waveform inversion of acoustic waves for explosion yield estimation
Kim, K.; Rodgers, A. J.
2016-07-08
We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.
Waveform inversion of acoustic waves for explosion yield estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, K.; Rodgers, A. J.
We present a new waveform inversion technique to estimate the energy of near-surface explosions using atmospheric acoustic waves. Conventional methods often employ air blast models based on a homogeneous atmosphere, where the acoustic wave propagation effects (e.g., refraction and diffraction) are not taken into account, and therefore, their accuracy decreases with increasing source-receiver distance. In this study, three-dimensional acoustic simulations are performed with a finite difference method in realistic atmospheres and topography, and the modeled acoustic Green's functions are incorporated into the waveform inversion for the acoustic source time functions. The strength of the acoustic source is related to explosion yield based on a standard air blast model. The technique was applied to local explosions (<10 km) and provided reasonable yield estimates (<~30% error) in the presence of realistic topography and atmospheric structure. In conclusion, the presented method can be extended to explosions recorded at far distance provided proper meteorological specifications.
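A minimal illustration of the inversion step, with a synthetic 1-D Green's function standing in for the 3-D finite-difference Green's functions used in the paper: the recorded pressure is the convolution of the Green's function with the source time function, so the source time function can be recovered by damped least squares, and its strength would then be mapped to yield through a standard air-blast model. All waveforms and the damping parameter below are placeholders.

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(12)

dt, n = 0.01, 400                              # 100 Hz sampling, 4 s records
t = np.arange(n) * dt

# Placeholder Green's function (a delayed, decaying oscillation) and true source pulse.
g = np.exp(-3 * (t - 0.5)) * np.sin(2 * np.pi * 8 * (t - 0.5)) * (t >= 0.5)
stf_true = np.exp(-((t - 0.2) / 0.03) ** 2)    # Gaussian source time function

# Convolution matrix so that data = G @ stf, plus measurement noise.
G = toeplitz(g, np.zeros(n)) * dt
data = G @ stf_true + rng.normal(0, 0.02 * np.abs(G @ stf_true).max(), n)

# Damped least-squares (Tikhonov) inversion for the source time function.
lam = 1e-2 * np.linalg.norm(G, 2)
stf_hat = np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T @ data)

# The recovered source strength (e.g., its peak or time integral) would then be
# related to explosion yield through an air-blast scaling relation.
print("relative error in the recovered source peak: "
      f"{abs(stf_hat.max() - stf_true.max()) / stf_true.max():.2%}")
```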
NASA Astrophysics Data System (ADS)
Hu, Xiao-Ming; Zhang, Fuqing; Nielsen-Gammon, John W.
2010-04-01
This study explores the treatment of model error and uncertainties through simultaneous state and parameter estimation (SSPE) with an ensemble Kalman filter (EnKF) in the simulation of a 2006 air pollution event over the greater Houston area during the Second Texas Air Quality Study (TexAQS-II). Two parameters in the atmospheric boundary layer parameterization associated with large model sensitivities are combined with standard prognostic variables in an augmented state vector to be continuously updated through assimilation of wind profiler observations. It is found that forecasts of the atmosphere with EnKF/SSPE are markedly improved over experiments with no state and/or parameter estimation. More specifically, the EnKF/SSPE is shown to help alleviate a near-surface cold bias and to alter the momentum mixing in the boundary layer to produce more realistic wind profiles.
Mismeasurement and the resonance of strong confounders: uncorrelated errors.
Marshall, J R; Hastrup, J L
1996-05-15
Greenland first documented (Am J Epidemiol 1980; 112:564-9) that error in the measurement of a confounder could resonate--that it could bias estimates of other study variables, and that the bias could persist even with statistical adjustment for the confounder as measured. An important question is raised by this finding: can such bias be more than trivial within the bounds of realistic data configurations? The authors examine several situations involving dichotomous and continuous data in which a confounder and a null variable are measured with error, and they assess the extent of resultant bias in estimates of the effect of the null variable. They show that, with continuous variables, measurement error amounting to 40% of observed variance in the confounder could cause the observed impact of the null study variable to appear to alter risk by as much as 30%. Similarly, they show, with dichotomous independent variables, that 15% measurement error in the form of misclassification could lead the null study variable to appear to alter risk by as much as 50%. Such bias would result only from strong confounding. Measurement error would obscure the evidence that strong confounding is a likely problem. These results support the need for every epidemiologic inquiry to include evaluations of measurement error in each variable considered.
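The mechanism described above can be reproduced with a short simulation. The sketch below is illustrative only and is not the authors' analysis: the effect sizes, the roughly 40%-of-observed-variance error level, and the sample size are assumptions chosen to mirror the continuous-variable scenario in the abstract.

```python
# A minimal simulation sketch (not from the paper) showing how measurement
# error in a strong confounder can bias the estimated effect of a null variable.
# All parameter values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

confounder = rng.normal(size=n)                      # true confounder C
null_var = 0.7 * confounder + rng.normal(size=n)     # X correlated with C but with no effect on Y
outcome = 2.0 * confounder + rng.normal(size=n)      # Y depends only on C (strong confounding)

# Confounder measured with error (~40% of its observed variance is error)
c_measured = confounder + rng.normal(scale=0.8, size=n)

def ols_coef(y, X):
    """Ordinary least squares; returns coefficients for the columns of X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

b_true_adj = ols_coef(outcome, np.column_stack([null_var, confounder]))[0]
b_err_adj = ols_coef(outcome, np.column_stack([null_var, c_measured]))[0]

print(f"null-variable effect, adjusting for the true confounder:        {b_true_adj:+.3f}")
print(f"null-variable effect, adjusting for the mismeasured confounder: {b_err_adj:+.3f}")
```

With these assumed values the null variable appears to carry a substantial effect once the confounder is measured with error, even though adjustment for the true confounder recovers a coefficient near zero.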
Five-equation and robust three-equation methods for solution verification of large eddy simulation
NASA Astrophysics Data System (ADS)
Dutta, Rabijit; Xing, Tao
2018-02-01
This study evaluates the recently developed general framework of solution verification methods for large eddy simulation (LES) using implicitly filtered LES of periodic channel flows at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows a monotonic convergence behavior of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. It was found that the new three-equation method is robust, as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When the Reynolds-averaged Navier-Stokes (RANS) based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes. However, it predicts reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.
NASA Astrophysics Data System (ADS)
Gaté, F.; Revenu, B.; García-Fernández, D.; Marin, V.; Dallier, R.; Escudié, A.; Martin, L.
2018-03-01
The composition of ultra-high energy cosmic rays is still poorly known and constitutes a very important topic in the field of high-energy astrophysics. Detection of ultra-high energy cosmic rays is carried out via the extensive air showers they create after interacting with the atmospheric constituents. The secondary electrons and positrons within the showers emit a detectable electric field in the kHz-GHz range. It is possible to use this radio signal to estimate the atmospheric depth of maximal development of the showers, Xmax, with good accuracy and a duty cycle close to 100%. This value of Xmax is strongly correlated with the nature of the primary cosmic ray that initiated the shower. We show in this paper the importance of using a realistic atmospheric model in order to correct for systematic errors that can prevent a correct and unbiased estimation of Xmax.
2013-01-01
Demographic estimates of the population at risk often underpin epidemiologic research and public health surveillance efforts. In spite of their central importance to epidemiology and public-health practice, little previous attention has been paid to evaluating the magnitude of errors associated with such estimates or the sensitivity of epidemiologic statistics to these effects. Despite the well-known observation that accuracy in demographic estimates declines as the size of the population to be estimated decreases, demographers continue to face pressure to produce estimates for increasingly fine-grained population characteristics at ever-smaller geographic scales. Unfortunately, little guidance on the magnitude of errors that can be expected in such estimates is currently available in the literature for consideration in small-area epidemiology. This paper attempts to fill this gap by producing a Vintage 2010 set of single-year-of-age estimates for census tracts and then evaluating their accuracy and precision in light of the results of the 2010 Census. These estimates are produced and evaluated for 499 census tracts in New Mexico for single years of age from 0 to 21 and for each sex individually. The error distributions associated with these estimates are characterized statistically using non-parametric statistics including the median and the 2.5th and 97.5th percentiles. The impact of these errors is considered through simulations in which observed and estimated 2010 population counts are used as alternative denominators and simulated event counts are used to compute a realistic range of prevalence values. The implications of the results of this study for small-area epidemiologic research in cancer and environmental health are considered. PMID:24359344
Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel
NASA Astrophysics Data System (ADS)
Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele
2009-12-01
An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results with low computational cost compared to other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays and chaos synchronization is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.
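The following sketch illustrates only the general idea of averaging a Gaussian Q-function over an empirical bit-energy distribution; it is not the paper's derivation. The chaotic map, spreading factor, and the lumping of multipath and multiuser interference into a single noise variance are all assumptions.

```python
# A minimal sketch (not the paper's exact expression) of averaging a Gaussian
# Q-function over an empirical bit-energy distribution to approximate the BER.
# The map, spreading factor, and lumped noise variance are assumptions.
import numpy as np
from math import erfc, sqrt

def q_func(x):
    return 0.5 * erfc(x / sqrt(2.0))

rng = np.random.default_rng(1)
beta = 64                                   # spreading factor (assumed)
n_bits = 20_000

# Chip values from a simple chaotic map (illustrative choice)
x = rng.uniform(-1, 1, size=(n_bits, beta))
for _ in range(50):                         # iterate toward the map's invariant density
    x = 1.0 - 2.0 * x**2
bit_energy = np.sum(x**2, axis=1)           # Eb varies bit-to-bit for chaotic spreading

noise_var = 20.0                            # lumped noise + interference variance (assumed)
ber = np.mean([q_func(sqrt(e / noise_var)) for e in bit_energy])
print(f"approximate BER = {ber:.3e}")
```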
NASA Astrophysics Data System (ADS)
Oda, Hirokuni; Xuan, Chuang
2014-10-01
The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of pass-through measurement is smoothed and distorted due to convolution of magnetization with the magnetometer sensor response. Although several studies could restore high-resolution paleomagnetic signal through deconvolution of pass-through measurement, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembled the "true" magnetization and successfully restored fine-scale magnetization variations, including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimation and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
Preparatory studies for the WFIRST supernova cosmology measurements
NASA Astrophysics Data System (ADS)
Perlmutter, Saul
In the context of the WFIRST-AFTA Science Definition Team we developed a first version of a supernova program, described in the WFIRST-AFTA SDT report. This program uses the imager to discover supernova candidates and an Integral Field Spectrograph (IFS) to obtain spectrophotometric light curves and higher signal-to-noise spectra of the supernovae near peak to better characterize the supernovae and thus minimize systematic errors. While this program was judged a robust one, and the estimates of the sensitivity to the cosmological parameters were felt to be reliable, the analysis was, due to time limitations, limited in depth on a number of issues. The goal of this proposal is to further develop this program and refine the estimates of the sensitivities to the cosmological parameters using more sophisticated systematic uncertainty models and covariance error matrices that fold in more realistic data concerning observed populations of SNe Ia as well as more realistic instrument models. We propose to develop analysis algorithms and approaches that are needed to build, optimize, and refine the WFIRST instrument and program requirements to accomplish the best supernova cosmology measurements possible. We plan to address the following: a) Use realistic supernova populations, subclasses and population drift. One bothersome uncertainty with the supernova technique is the possibility of population drift with redshift. We are in a unique position to characterize and mitigate such effects using the spectrophotometric time series of real Type Ia supernovae from the Nearby Supernova Factory (SNfactory). Each supernova in this sample has global galaxy measurements as well as additional local environment information derived from the IFS spectroscopy. We plan to develop methods of coping with this issue, e.g., by selecting similar subsamples of supernovae and allowing additional model flexibility, in order to reduce systematic uncertainties. These studies will allow us to tune details, like the wavelength coverage and S/N requirements, of the WFIRST IFS to capitalize on these systematic error reduction methods. b) Supernova extraction and host galaxy subtractions. The underlying light of the host galaxy must be subtracted from the supernova images making up the lightcurves. Using the IFS to provide the lightcurve points via spectrophotometry requires the subtraction of a reference spectrum of the galaxy taken after the supernova light has faded to a negligible level. We plan to apply the expertise obtained from the SNfactory to develop galaxy background procedures that minimize the systematic errors introduced by this step in the analysis. c) Instrument calibration and ground-to-space cross calibration. Calibrating the entire supernova sample will be a challenge as no standard stars exist that span the range of magnitudes and wavelengths relevant to the WFIRST survey. Linking the supernova measurements to the relatively brighter standards will require several links. WFIRST will produce the high-redshift sample, but the nearby supernovae to anchor the Hubble diagram will have to come from ground-based observations. Developing algorithms to carry out the cross calibration of these two samples to the required one percent level will be an important goal of our proposal. An integral part of this calibration will be to remove all instrumental signatures and to develop unbiased measurement techniques starting at the pixel level.
We then plan to pull the above studies together in a synthesis to produce a correlated error matrix. We plan to develop a Fisher Matrix based model to evaluate the correlated error matrix due to the various systematic errors discussed above. A realistic error model will allow us to carry out more reliable estimates of the eventual errors on the measurement of the cosmological parameters, as well as serve as a means of optimizing and fine-tuning the requirements for the instruments and survey strategies.
Quantifying and correcting motion artifacts in MRI
NASA Astrophysics Data System (ADS)
Bones, Philip J.; Maclaren, Julian R.; Millane, Rick P.; Watts, Richard
2006-08-01
Patient motion during magnetic resonance imaging (MRI) can produce significant artifacts in a reconstructed image. Since measurements are made in the spatial frequency domain ('k-space'), rigid-body translational motion results in phase errors in the data samples while rotation causes location errors. A method is presented to detect and correct these errors via a modified sampling strategy, thereby achieving more accurate image reconstruction. The strategy involves sampling vertical and horizontal strips alternately in k-space and employs phase correlation within the overlapping segments to estimate translational motion. An extension, also based on correlation, is employed to estimate rotational motion. Results from simulations with computer-generated phantoms suggest that the algorithm is robust up to realistic noise levels. The work is being extended to physical phantoms. Provided that a reference image is available and the object is of limited extent, it is shown that a measure related to the amount of energy outside the support can be used to objectively compare the severity of motion-induced artifacts.
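Phase correlation itself is a standard technique; the sketch below shows it for two translated images, whereas the paper applies the same principle to overlapping k-space strips. The test image, shift values, and noise-free setting are assumptions for illustration.

```python
# A minimal sketch of phase correlation for estimating a rigid translation
# between two 2-D images; the paper applies the same principle to overlapping
# k-space strips, which is not reproduced here.
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer-pixel translation that maps img_b onto img_a."""
    F_a = np.fft.fft2(img_a)
    F_b = np.fft.fft2(img_b)
    cross_power = F_a * np.conj(F_b)
    cross_power /= np.abs(cross_power) + 1e-12       # normalize -> phase-only correlation
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak positions beyond the half-size back to negative shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

# Usage: shift a test image and recover the translation
rng = np.random.default_rng(0)
img = rng.normal(size=(128, 128))
shifted = np.roll(img, shift=(5, -7), axis=(0, 1))
print(phase_correlation_shift(shifted, img))          # expected output: (5, -7)
```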
Iterative random vs. Kennard-Stone sampling for IR spectrum-based classification task using PLS2-DA
NASA Astrophysics Data System (ADS)
Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz
2018-04-01
External testing (ET) is preferred over auto-prediction (AP) or k-fold cross-validation for estimating the more realistic predictive ability of a statistical model. With IR spectra, the Kennard-Stone (KS) sampling algorithm is often used to split the data into training and test sets, i.e. for model construction and for model testing respectively. On the other hand, iterative random sampling (IRS) has not been the favored choice, though it is theoretically more likely to produce reliable estimation. The aim of this preliminary work is to compare the performances of KS and IRS in sampling a representative training set from an attenuated total reflectance - Fourier transform infrared spectral dataset (of four varieties of blue gel pen inks) for PLS2-DA modeling. The 'best' performance achievable from the dataset is estimated with AP on the full dataset (APF,error). Both IRS (n = 200) and KS were used to split the dataset in the ratio of 7:3. The classic decision rule (i.e. maximum value-based) is employed for new sample prediction via partial least squares - discriminant analysis (PLS2-DA). The error rate of each model was estimated repeatedly via: (a) AP on the full data (APF,error); (b) AP on the training set (APS,error); and (c) ET on the respective test set (ETS,error). A good PLS2-DA model is expected to produce APS,error and ETS,error values that are similar to APF,error. Bearing that in mind, the similarities between (a) APS,error vs. APF,error; (b) ETS,error vs. APF,error; and (c) APS,error vs. ETS,error were evaluated using correlation tests (i.e. Pearson and Spearman's rank tests) on series of PLS2-DA models computed from the KS-set and the IRS-set, respectively. Overall, models constructed from the IRS-set exhibit more similarity between the internal and external error rates than the respective KS-set, i.e. less risk of overfitting. In conclusion, IRS is more reliable than KS in sampling a representative training set.
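A minimal sketch of the two sampling strategies being compared is given below; it is an assumed implementation, not the authors' code. The 7:3 split and the 200 random repetitions follow the abstract, while the simulated spectra, dimensions, and random seed are illustrative.

```python
# A minimal sketch (assumed implementation) of the two training-set selection
# strategies compared above: Kennard-Stone (KS) and iterative random sampling
# (IRS). Spectra are simulated here purely for illustration.
import numpy as np

def kennard_stone(X, n_train):
    """Greedy maximin selection of n_train representative rows of X."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Start from the two mutually most distant samples
    selected = list(np.unravel_index(np.argmax(dist), dist.shape))
    remaining = [i for i in range(len(X)) if i not in selected]
    while len(selected) < n_train:
        # Pick the remaining sample farthest from the already-selected set
        d_to_sel = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining.pop(int(np.argmax(d_to_sel))))
    return np.array(selected)

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))                       # 120 simulated IR spectra, 50 wavenumbers
n_train = int(0.7 * len(X))                          # 7:3 split as in the abstract

ks_train = kennard_stone(X, n_train)                 # deterministic, single split
irs_trains = [rng.permutation(len(X))[:n_train] for _ in range(200)]   # 200 random splits
print(len(ks_train), len(irs_trains))
```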
Dynamic Method for Identifying Collected Sample Mass
NASA Technical Reports Server (NTRS)
Carson, John
2008-01-01
G-Sample is designed for sample collection missions to identify the presence and quantity of sample material gathered by spacecraft equipped with end effectors. The software method uses a maximum-likelihood estimator to identify the collected sample's mass based on onboard force-sensor measurements, thruster firings, and a dynamics model of the spacecraft. This makes sample mass identification a computation rather than a process requiring additional hardware. Simulation examples of G-Sample are provided for spacecraft model configurations with a sample collection device mounted on the end of an extended boom. In the absence of thrust knowledge errors, the results indicate that G-Sample can identify the amount of collected sample mass to within 10 grams (with 95-percent confidence) by using a force sensor with a noise and quantization floor of 50 micrometers. These results hold even in the presence of realistic parametric uncertainty in actual spacecraft inertia, center-of-mass offset, and first flexibility modes. Thrust profile knowledge is shown to be a dominant sensitivity for G-Sample, entering in a nearly one-to-one relationship with the final mass estimation error. This means thrust profiles should be well characterized with onboard accelerometers prior to sample collection. An overall sample-mass estimation error budget has been developed to approximate the effect of model uncertainty, sensor noise, data rate, and thrust profile error on the expected estimate of collected sample mass.
NASA Technical Reports Server (NTRS)
Prive, N. C.; Errico, R. M.; Tai, K.-S.
2013-01-01
The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120-hour forecast, increased observation error yields only a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.
Estimation of chromatic errors from broadband images for high contrast imaging: sensitivity analysis
NASA Astrophysics Data System (ADS)
Sirbu, Dan; Belikov, Ruslan
2016-01-01
Many concepts have been proposed to enable direct imaging of planets around nearby stars, which would enable spectroscopic observations of their atmospheres and the potential discovery of biomarkers. The main technical challenge associated with direct imaging of exoplanets is to effectively control both the diffraction and the scattered light from the star so that the dim planetary companion can be seen. Use of an internal coronagraph with an adaptive optical system for wavefront correction is one of the most mature methods and is being developed as an instrument addition to the WFIRST-AFTA space mission. In addition, instruments such as GPI and SPHERE are already being used on the ground and are yielding spectra of giant planets. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, mid-spatial frequency wavefront errors must be estimated. To date, most broadband lab demonstrations use narrowband filters to obtain an estimate of the chromaticity of the wavefront error, and this can consume a large percentage of the total integration time. Previously, we proposed a method to estimate the chromaticity of wavefront errors using only broadband images; we demonstrated that under idealized conditions wavefront errors can be estimated from images composed of discrete wavelengths. This is achieved by using DM probes with sufficient spatially localized chromatic diversity. Here we report on the results of a study of the performance of this method with respect to realistic broadband images including noise. Additionally, we study optimal probe patterns that enable a reduction of the number of probes used, and we compare the integration time with narrowband and IFS estimation methods.
NASA Astrophysics Data System (ADS)
Negri, Rodolfo Batista; Prado, Antonio Fernando Bertachini de Almeida; Sukhanov, Alexander
2017-11-01
The swing-by maneuver is a technique used to change the energy of a spacecraft by using a close approach to a celestial body. This procedure has been used many times in real missions. Usually, the first approach to design this type of mission is based on the "patched-conics" model, which splits the maneuver into three "two-body dynamics." This approach causes an error in the estimation of the energy variations, which depends on the geometry of the maneuver and the system of primaries considered. Therefore, the goal of the present paper is to study the errors caused by this approximation. The comparison of the results is made with the trajectories obtained using the more realistic restricted three-body problem, assumed here to give the "real values" for the maneuver. The results shown here describe the effects of each parameter involved in the swing-by. Some examples using bodies in the solar system are used in this part of the paper. The study is then generalized to cover different mass parameters, and their influence is analyzed to give an idea of the amount of error expected for a given system of primaries. The results presented here may help in estimating errors in a preliminary mission analysis using the "patched-conics" approach.
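For reference, the basic patched-conics quantities at the heart of the comparison can be written compactly. The sketch below is illustrative, not the paper's implementation: it gives the turning angle and the magnitude of the velocity change for an assumed Jupiter swing-by, and it deliberately omits the approach geometry on which the energy variation also depends.

```python
# A minimal patched-conics sketch (illustrative, not the paper's code): turning
# angle and magnitude of the velocity change imparted by a swing-by. The
# periapsis radius and hyperbolic excess speed below are assumptions.
import math

def swingby_turn_angle(v_inf, r_p, mu):
    """Turning angle (rad) of the hyperbolic excess velocity for periapsis r_p."""
    return 2.0 * math.asin(1.0 / (1.0 + r_p * v_inf**2 / mu))

def swingby_delta_v(v_inf, r_p, mu):
    """Magnitude of the velocity change in the primary-centred frame."""
    return 2.0 * v_inf * math.sin(0.5 * swingby_turn_angle(v_inf, r_p, mu))

mu_jupiter = 1.26687e8      # km^3/s^2, gravitational parameter of Jupiter
v_inf = 6.0                 # km/s, hyperbolic excess speed (assumed)
r_p = 200_000.0             # km, periapsis radius (assumed)

delta = swingby_turn_angle(v_inf, r_p, mu_jupiter)
print(f"turn angle = {math.degrees(delta):6.2f} deg")
print(f"|delta-V|  = {swingby_delta_v(v_inf, r_p, mu_jupiter):6.3f} km/s")
# The resulting energy variation also depends on the angle between v_inf and
# the planet's velocity, which is not modelled in this sketch.
```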
NASA Technical Reports Server (NTRS)
Lee-Rausch, E. M.; Park, M. A.; Jones, W. T.; Hammond, D. P.; Nielsen, E. J.
2005-01-01
This paper demonstrates the extension of error estimation and adaptation methods to parallel computations enabling larger, more realistic aerospace applications and the quantification of discretization errors for complex 3-D solutions. Results were shown for an inviscid sonic-boom prediction about a double-cone configuration and a wing/body segmented leading edge (SLE) configuration where the output function of the adjoint was pressure integrated over a part of the cylinder in the near field. After multiple cycles of error estimation and surface/field adaptation, a significant improvement in the inviscid solution for the sonic boom signature of the double cone was observed. Although the double-cone adaptation was initiated from a very coarse mesh, the near-field pressure signature from the final adapted mesh compared very well with the wind-tunnel data which illustrates that the adjoint-based error estimation and adaptation process requires no a priori refinement of the mesh. Similarly, the near-field pressure signature for the SLE wing/body sonic boom configuration showed a significant improvement from the initial coarse mesh to the final adapted mesh in comparison with the wind tunnel results. Error estimation and field adaptation results were also presented for the viscous transonic drag prediction of the DLR-F6 wing/body configuration, and results were compared to a series of globally refined meshes. Two of these globally refined meshes were used as a starting point for the error estimation and field-adaptation process where the output function for the adjoint was the total drag. The field-adapted results showed an improvement in the prediction of the drag in comparison with the finest globally refined mesh and a reduction in the estimate of the remaining drag error. The adjoint-based adaptation parameter showed a need for increased resolution in the surface of the wing/body as well as a need for wake resolution downstream of the fuselage and wing trailing edge in order to achieve the requested drag tolerance. Although further adaptation was required to meet the requested tolerance, no further cycles were computed in order to avoid large discrepancies between the surface mesh spacing and the refined field spacing.
Accurate Magnetometer/Gyroscope Attitudes Using a Filter with Correlated Sensor Noise
NASA Technical Reports Server (NTRS)
Sedlak, J.; Hashmall, J.
1997-01-01
Magnetometers and gyroscopes have been shown to provide very accurate attitudes for a variety of spacecraft. These results have been obtained, however, using a batch-least-squares algorithm and long periods of data. For use in onboard applications, attitudes are best determined using sequential estimators such as the Kalman filter. When a filter is used to determine attitudes using magnetometer and gyroscope data for input, the resulting accuracy is limited by both the sensor accuracies and errors inherent in the Earth magnetic field model. The Kalman filter accounts for the random component by modeling the magnetometer and gyroscope errors as white noise processes. However, even when these tuning parameters are physically realistic, the rate biases (included in the state vector) have been found to show systematic oscillations. These are attributed to the field model errors. If the gyroscope noise is sufficiently small, the tuned filter 'memory' will be long compared to the orbital period. In this case, the variations in the rate bias induced by field model errors are substantially reduced. Mistuning the filter to have a short memory time leads to strongly oscillating rate biases and increased attitude errors. To reduce the effect of the magnetic field model errors, these errors are estimated within the filter and used to correct the reference model. An exponentially-correlated noise model is used to represent the filter estimate of the systematic error. Results from several test cases using in-flight data from the Compton Gamma Ray Observatory are presented. These tests emphasize magnetometer errors, but the method is generally applicable to any sensor subject to a combination of random and systematic noise.
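The exponentially correlated noise mentioned above is commonly realized as a first-order Gauss-Markov process appended to the filter state. The sketch below shows only that building block under assumed values for the correlation time and steady-state magnitude; it is not the flight software or the filter used for the Compton Gamma Ray Observatory tests.

```python
# A minimal sketch of the first-order Gauss-Markov ("exponentially correlated")
# noise model commonly used to augment a Kalman filter state with a slowly
# varying systematic error. Correlation time and driving noise are assumptions.
import numpy as np

def gauss_markov_discrete(tau, sigma_ss, dt):
    """Discrete-time transition factor and process-noise variance for a
    first-order Gauss-Markov state with correlation time tau and
    steady-state standard deviation sigma_ss."""
    phi = np.exp(-dt / tau)
    q = sigma_ss**2 * (1.0 - phi**2)
    return phi, q

tau = 3000.0          # s, correlation time (assumed)
sigma_ss = 50.0       # nT, steady-state magnitude of the field-model error (assumed)
dt = 10.0             # s, filter step

phi, q = gauss_markov_discrete(tau, sigma_ss, dt)
rng = np.random.default_rng(0)
x = 0.0
history = []
for _ in range(5000):
    x = phi * x + rng.normal(scale=np.sqrt(q))   # state propagation with driving noise
    history.append(x)
print(f"phi={phi:.4f}, q={q:.2f}, empirical std ~ {np.std(history):.1f} nT")
```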
Estimating uncertainties in complex joint inverse problems
NASA Astrophysics Data System (ADS)
Afonso, Juan Carlos
2016-04-01
Sources of uncertainty affecting geophysical inversions can be classified either as reflective (i.e. the practitioner is aware of her/his ignorance) or non-reflective (i.e. the practitioner does not know that she/he does not know!). Although we should be always conscious of the latter, the former are the ones that, in principle, can be estimated either empirically (by making measurements or collecting data) or subjectively (based on the experience of the researchers). For complex parameter estimation problems in geophysics, subjective estimation of uncertainty is the most common type. In this context, probabilistic (aka Bayesian) methods are commonly claimed to offer a natural and realistic platform from which to estimate model uncertainties. This is because in the Bayesian approach, errors (whatever their nature) can be naturally included as part of the global statistical model, the solution of which represents the actual solution to the inverse problem. However, although we agree that probabilistic inversion methods are the most powerful tool for uncertainty estimation, the common claim that they produce "realistic" or "representative" uncertainties is not always justified. Typically, ALL UNCERTAINTY ESTIMATES ARE MODEL DEPENDENT, and therefore, besides a thorough characterization of experimental uncertainties, particular care must be paid to the uncertainty arising from model errors and input uncertainties. We recall here two quotes by G. Box and M. Gunzburger, respectively, of special significance for inversion practitioners and for this session: "…all models are wrong, but some are useful" and "computational results are believed by no one, except the person who wrote the code". In this presentation I will discuss and present examples of some problems associated with the estimation and quantification of uncertainties in complex multi-observable probabilistic inversions, and how to address them. Although the emphasis will be on sources of uncertainty related to the forward and statistical models, I will also address other uncertainties associated with data and uncertainty propagation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berkel, M. van; Fellow of the Japan Society for the Promotion of Science; FOM Institute DIFFER-Dutch Institute for Fundamental Energy Research, Association EURATOM- FOM, Trilateral Euregio Cluster, PO Box 1207, 3430 BE Nieuwegein
2014-11-15
In this paper, a number of new approximations are introduced to estimate the perturbative diffusivity (χ), convectivity (V), and damping (τ) in cylindrical geometry. For this purpose, the harmonic components of heat waves induced by localized deposition of modulated power are used. The approximations are based on semi-infinite slab approximations of the heat equation. The main result is the approximation of χ under the influence of V and τ based on the phase of two harmonics, making the estimate less sensitive to calibration errors. To understand why the slab approximations can estimate χ well in cylindrical geometry, the relationships between heat transport models in slab and cylindrical geometry are studied. In addition, the relationship between amplitude and phase with respect to their derivatives, used to estimate χ, is discussed. The results are presented in terms of the relative error for the different derived approximations for different values of frequency, transport coefficients, and dimensionless radius. The approximations show a significant region in which χ, V, and τ can be estimated well, but also regions in which the error is large. Also, it is shown that some compensation is necessary to estimate V and τ in a cylindrical geometry. On the other hand, errors resulting from the simplified assumptions are also discussed, showing that estimating realistic values for V and τ based on infinite domains will be difficult in practice. This paper is the first part (Part I) of a series of three papers. In Part II and Part III, cylindrical approximations based directly on a semi-infinite cylindrical domain (outward propagating heat pulses) and on inward propagating heat pulses in a cylindrical domain, respectively, will be treated.
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generates the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in a daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, the LWP input showed the least streamflow error in the Alapaha basin and the CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
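A minimal sketch of the two-step idea is shown below using simulated station data; it is an assumed, simplified implementation rather than the authors' scheme, and the predictors (coordinates and elevation), thresholds, and simulated precipitation process are illustrative.

```python
# A minimal sketch (assumed, simplified) of the two-step scheme described above:
# logistic regression for precipitation occurrence, then a regression for the
# amount fitted on wet days only. Data and predictors are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([rng.uniform(0, 1, n),      # normalized easting
                     rng.uniform(0, 1, n),      # normalized northing
                     rng.uniform(0, 3, n)])     # elevation (km)

wet = (rng.uniform(size=n) < 0.3 + 0.15 * X[:, 2] / 3).astype(int)        # wetter at altitude
amount = np.where(wet, np.exp(0.5 + 0.4 * X[:, 2] + rng.normal(0, 0.5, n)), 0.0)

occ_model = LogisticRegression().fit(X, wet)                               # step 1: occurrence
amt_model = LinearRegression().fit(X[wet == 1], np.log(amount[wet == 1]))  # step 2: amount (wet days)

def predict_precip(x_new, threshold=0.5):
    """Predict daily precipitation at new locations with the two-step scheme."""
    p_wet = occ_model.predict_proba(x_new)[:, 1]
    est = np.exp(amt_model.predict(x_new))
    return np.where(p_wet > threshold, est, 0.0)

print(predict_precip(X[:5]))
```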
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Vira, Julius; Bocquet, Marc; Sofiev, Mikhail; Saunier, Olivier
2011-06-01
In the event of an accidental atmospheric release of radionuclides from a nuclear power plant, accurate real-time forecasting of the activity concentrations of radionuclides is required by decision makers for the preparation of adequate countermeasures. The accuracy of the forecast plume is highly dependent on the source term estimation. On several academic test cases, including real data, inverse modelling and data assimilation techniques have been proven to help in the assessment of the source term. In this paper, a semi-automatic method is proposed for the sequential reconstruction of the plume, by implementing a sequential data assimilation algorithm based on inverse modelling, with care taken to develop realistic methods for operational risk agencies. The performance of the assimilation scheme has been assessed through an intercomparison between French and Finnish frameworks. Two dispersion models have been used: Polair3D and Silam, developed in two different research centres. Different release locations, as well as different meteorological situations, are tested. The existing and newly planned surveillance networks are used, and realistically large multiplicative observational errors are assumed. The inverse modelling scheme accounts for the strong error bias encountered with such errors. The efficiency of the data assimilation system is tested via statistical indicators. For France and Finland, the average performance of the data assimilation system is strong. However, there are outlying situations where the inversion fails because the observability is too poor. In addition, in the case where the power plant responsible for the accidental release is not known, robust statistical tools are developed and tested to discriminate candidate release sites.
Simultaneous orbit determination
NASA Technical Reports Server (NTRS)
Wright, J. R.
1988-01-01
Simultaneous orbit determination is demonstrated using live range and Doppler data for the NASA/Goddard tracking configuration defined by the White Sands Ground Terminal (WSGT), the Tracking and Data Relay Satellite (TDRS), and the Earth Radiation Budget Satellite (ERBS). A physically connected sequential filter-smoother was developed for this demonstration. Rigorous necessary conditions are used to show that the state error covariance functions are realistic; and this enables the assessment of orbit estimation accuracies for both TDRS and ERBS.
Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.
Monica, Stefania; Ferrari, Gianluigi
2018-05-17
Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
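A minimal sketch of the general idea, assuming a simple linear bias-plus-noise form for the range error (not necessarily the authors' exact model), is shown below with a synthetic calibration campaign.

```python
# A minimal sketch (assumed form, not the authors' model): fit a linear model
# of the UWB range-estimation error with least squares and use it to correct
# new range estimates. The synthetic measurement campaign is illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_dist = rng.uniform(1.0, 20.0, size=400)                 # m, calibration distances
# Simulated UWB ranges with a distance-dependent bias plus noise (assumption)
measured = true_dist + (0.05 + 0.02 * true_dist) + rng.normal(0, 0.1, size=true_dist.size)

# Least-squares fit of the error model: error ~ a * distance + b
A = np.column_stack([measured, np.ones_like(measured)])
(a, b), *_ = np.linalg.lstsq(A, measured - true_dist, rcond=None)

def correct_range(d_meas):
    """Remove the fitted linear bias from a raw UWB range estimate."""
    return d_meas - (a * d_meas + b)

raw = np.array([5.0, 10.0, 15.0])
print("raw:", raw, " corrected:", np.round(correct_range(raw), 3))
```

The corrected ranges would then be passed to whichever localization algorithm (e.g. multilateration) is in use, which is where the reported accuracy gains arise.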
Quantum computation with realistic magic-state factories
NASA Astrophysics Data System (ADS)
O'Gorman, Joe; Campbell, Earl T.
2017-03-01
Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10^-4 error gates and the availability of long-range interactions.
A strategy for reducing gross errors in the generalized Born models of implicit solvation
Onufriev, Alexey V.; Sigalov, Grigori
2011-01-01
The “canonical” generalized Born (GB) formula [C. Still, A. Tempczyk, R. C. Hawley, and T. Hendrickson, J. Am. Chem. Soc. 112, 6127 (1990)] is known to provide accurate estimates for total electrostatic solvation energies ΔG_el of biomolecules if the corresponding effective Born radii are accurate. Here we show that even if the effective Born radii are perfectly accurate, the canonical formula still exhibits a significant number of gross errors (errors larger than 2 k_BT relative to the numerical Poisson equation reference) in pairwise interactions between individual atomic charges. Analysis of exact analytical solutions of the Poisson equation (PE) for several idealized nonspherical geometries reveals two distinct spatial modes of the PE solution; these modes are also found in realistic biomolecular shapes. The canonical GB Green function misses one of the two modes seen in the exact PE solution, which explains the observed gross errors. To address the problem and reduce gross errors of the GB formalism, we have used exact PE solutions for idealized nonspherical geometries to suggest an alternative analytical Green function to replace the canonical GB formula. The proposed functional form is mathematically nearly as simple as the original, but depends not only on the effective Born radii but also on their gradients, which allows for better representation of details of nonspherical molecular shapes. In particular, the proposed functional form captures both modes of the PE solution seen in nonspherical geometries. Tests on realistic biomolecular structures ranging from small peptides to medium-size proteins show that the proposed functional form reduces gross pairwise errors in all cases, with the amount of reduction varying from more than an order of magnitude for small structures to a factor of 2 for the largest ones. PMID:21528947
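For context, the canonical pairwise GB term analysed in the paper can be written in a few lines. The sketch below uses the Still et al. functional form with loosely handled units (charges in e, distances in Angstrom, energies in kcal/mol); the example radii, charges, and dielectric constants are assumptions.

```python
# A minimal sketch of the canonical GB pairwise term of Still et al. analysed
# above; units are handled loosely (charges in e, distances in Angstrom,
# energies in kcal/mol via the MD-style Coulomb constant).
import numpy as np

KE = 332.06  # kcal*Angstrom/(mol*e^2)

def f_gb(r, Ri, Rj):
    """Canonical generalized Born function of Still et al."""
    return np.sqrt(r**2 + Ri * Rj * np.exp(-r**2 / (4.0 * Ri * Rj)))

def gb_pair_energy(qi, qj, r, Ri, Rj, eps_in=1.0, eps_out=78.5):
    """Pairwise GB contribution to the electrostatic solvation energy."""
    return -0.5 * KE * (1.0 / eps_in - 1.0 / eps_out) * qi * qj / f_gb(r, Ri, Rj)

# Example: two opposite unit charges 5 Angstrom apart, effective Born radii of 2 Angstrom
print(f"{gb_pair_energy(+1.0, -1.0, 5.0, 2.0, 2.0):.2f} kcal/mol")
```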
Foreground mitigation strategy for measuring the 21 cm-LAE cross-correlation
NASA Astrophysics Data System (ADS)
Yoshiura, Shintaro; Line, Jack L. B.; Kubota, Kenji; Hasegawa, Kenji; Takahashi, Keitaro
2018-05-01
The cross power spectrum of the 21 cm signal and Lyman-α emitters (LAEs) is a probe of the Epoch of Reionization. Astrophysical foregrounds do not correlate with the LAE distribution, though the foregrounds contribute to the error. To study the impact of foregrounds on the measurement, we assume a realistic observation by the Murchison Widefield Array using a catalogue of radio galaxies, a LAE survey by the Subaru Hyper Suprime-Cam, and redshift determination of the LAEs by the Prime Focus Spectrograph. The HI distribution is estimated from a radiative transfer simulation with models based on the results of a radiation hydrodynamics simulation. Using these models, we find that the error of the cross power spectrum is dominated by foreground terms. Furthermore, we estimate the effects of foreground removal and find that 99% foreground removal is required to detect the 21 cm-LAE signal at k ~ 0.4 h Mpc^-1.
Optimizing Hybrid Metrology: Rigorous Implementation of Bayesian and Combined Regression.
Henn, Mark-Alexander; Silver, Richard M; Villarrubia, John S; Zhang, Nien Fan; Zhou, Hui; Barnes, Bryan M; Ming, Bin; Vladár, András E
2015-01-01
Hybrid metrology, e.g., the combination of several measurement techniques to determine critical dimensions, is an increasingly important approach to meet the needs of the semiconductor industry. A proper use of hybrid metrology may yield not only more reliable estimates for the quantitative characterization of 3-D structures but also a more realistic estimation of the corresponding uncertainties. Recent developments at the National Institute of Standards and Technology (NIST) feature the combination of optical critical dimension (OCD) measurements and scanning electron microscope (SEM) results. The hybrid methodology offers the potential to make measurements of essential 3-D attributes that may not be otherwise feasible. However, combining techniques gives rise to essential challenges in error analysis and comparing results from different instrument models, especially the effect of systematic and highly correlated errors in the measurement on the χ2 function that is minimized. Both hypothetical examples and measurement data are used to illustrate solutions to these challenges.
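The correlated-error issue mentioned above enters through the χ2 function itself. The sketch below is a generic illustration (not NIST's implementation) of weighting residuals by a full error covariance instead of independent per-point variances; the residuals, standard deviations, and correlation level are assumed.

```python
# A minimal sketch of a chi-squared function with correlated measurement
# errors: residuals weighted by the inverse of a full covariance matrix
# rather than by independent variances. All numbers are illustrative.
import numpy as np

def chi_squared(residuals, covariance):
    """chi^2 = r^T C^{-1} r for residual vector r and error covariance C."""
    return float(residuals @ np.linalg.solve(covariance, residuals))

r = np.array([0.4, -0.3, 0.5])                # measured-minus-model residuals (assumed)
sigma = np.array([0.5, 0.5, 0.5])             # per-point standard deviations
rho = 0.8                                     # strong correlation between measurements

C_uncorr = np.diag(sigma**2)
C_corr = np.outer(sigma, sigma) * (rho * np.ones((3, 3)) + (1 - rho) * np.eye(3))

print("chi^2 ignoring correlations:", round(chi_squared(r, C_uncorr), 2))
print("chi^2 with correlations:    ", round(chi_squared(r, C_corr), 2))
```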
NASA Astrophysics Data System (ADS)
Dettmer, Jan; Molnar, Sheri; Steininger, Gavin; Dosso, Stan E.; Cassidy, John F.
2012-02-01
This paper applies a general trans-dimensional Bayesian inference methodology and hierarchical autoregressive data-error models to the inversion of microtremor array dispersion data for shear wave velocity (vs) structure. This approach accounts for the limited knowledge of the optimal earth model parametrization (e.g. the number of layers in the vs profile) and of the data-error statistics in the resulting vs parameter uncertainty estimates. The assumed earth model parametrization influences estimates of parameter values and uncertainties due to different parametrizations leading to different ranges of data predictions. The support of the data for a particular model is often non-unique and several parametrizations may be supported. A trans-dimensional formulation accounts for this non-uniqueness by including a model-indexing parameter as an unknown so that groups of models (identified by the indexing parameter) are considered in the results. The earth model is parametrized in terms of a partition model with interfaces given over a depth-range of interest. In this work, the number of interfaces (layers) in the partition model represents the trans-dimensional model indexing. In addition, serial data-error correlations are addressed by augmenting the geophysical forward model with a hierarchical autoregressive error model that can account for a wide range of error processes with a small number of parameters. Hence, the limited knowledge about the true statistical distribution of data errors is also accounted for in the earth model parameter estimates, resulting in more realistic uncertainties and parameter values. Hierarchical autoregressive error models do not rely on point estimates of the model vector to estimate data-error statistics, and have no requirement for computing the inverse or determinant of a data-error covariance matrix. This approach is particularly useful for trans-dimensional inverse problems, as point estimates may not be representative of the state space that spans multiple subspaces of different dimensionalities. The order of the autoregressive process required to fit the data is determined here by posterior residual-sample examination and statistical tests. Inference for earth model parameters is carried out on the trans-dimensional posterior probability distribution by considering ensembles of parameter vectors. In particular, vs uncertainty estimates are obtained by marginalizing the trans-dimensional posterior distribution in terms of vs-profile marginal distributions. The methodology is applied to microtremor array dispersion data collected at two sites with significantly different geology in British Columbia, Canada. At both sites, results show excellent agreement with estimates from invasive measurements.
Losses from effluent taxes and quotas under uncertainty
Watson, W.D.; Ridker, R.G.
1984-01-01
Recent theoretical papers by Adar and Griffin (J. Environ. Econ. Manag. 3, 178-188 (1976)), Fishelson (J. Environ. Econ. Manag. 3, 189-197 (1976)), and Weitzman (Rev. Econ. Studies 41, 477-491 (1974)) show that different expected social losses arise from using effluent taxes and quotas as alternative control instruments when marginal control costs are uncertain. Key assumptions in these analyses are linear marginal cost and benefit functions and an additive error for the marginal cost function (to reflect uncertainty). In this paper, empirically derived nonlinear functions and more realistic multiplicative error terms are used to estimate expected control and damage costs and to identify (empirically) the mix of control instruments that minimizes expected losses. © 1984.
Developing a generalized allometric equation for aboveground biomass estimation
NASA Astrophysics Data System (ADS)
Xu, Q.; Balamuta, J. J.; Greenberg, J. A.; Li, B.; Man, A.; Xu, Z.
2015-12-01
A key potential uncertainty in estimating carbon stocks across multiple scales stems from the use of empirically calibrated allometric equations, which estimate aboveground biomass (AGB) from plant characteristics such as diameter at breast height (DBH) and/or height (H). The equations themselves contain significant and, at times, poorly characterized errors. Species-specific equations may be missing. Plant responses to their local biophysical environment may lead to spatially varying allometric relationships. The structural predictor may be difficult or impossible to measure accurately, particularly when derived from remote sensing data. All of these issues may lead to significant and spatially varying uncertainties in the estimation of AGB that are unexplored in the literature. We sought to quantify the errors in predicting AGB at the tree and plot level for vegetation plots in California. To accomplish this, we derived a generalized allometric equation (GAE), which we used to model AGB on a full set of tree information such as DBH, H, taxonomy, and biophysical environment. The GAE was derived using published allometric equations in the GlobAllomeTree database. These equations are sparse in details about the error, since authors typically provide only the coefficient of determination (R^2) and the sample size. A more realistic simulation of tree AGB should also contain the noise that was not captured by the allometric equation. We derived an empirically corrected variance estimate for the amount of noise to represent the errors in the real biomass. Also, we accounted for the hierarchical relationship between different species by treating each taxonomic level as a covariate nested within a higher taxonomic level (e.g. species < genus). This approach provides estimation under incomplete tree information (e.g. missing species) or blurred information (e.g. conjecture of species), plus the biophysical environment. The GAE allowed us to quantify the contribution of each covariate in estimating the AGB of trees. Lastly, we applied the GAE to an existing vegetation plot database - the Forest Inventory and Analysis database - to derive per-tree and per-plot AGB estimations, their errors, and how much of the error could be attributed to the original equations, the plant's taxonomy, and the biophysical environment.
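A minimal sketch of fitting a log-log allometric relation and re-injecting the unexplained variance, in the spirit of the GAE described above, is shown below. It is an assumed, simplified form: the simulated DBH-height-AGB data, coefficients, and noise level are illustrative, and the taxonomic and biophysical covariates of the actual GAE are omitted.

```python
# A minimal sketch (assumed form) of fitting AGB = exp(b0) * DBH^b1 * H^b2 by
# ordinary least squares on the log scale, with an option to re-inject the
# residual variance to simulate realistic noise. Data are simulated.
import numpy as np

rng = np.random.default_rng(0)
n = 300
dbh = rng.uniform(5, 80, n)                    # cm
height = 1.3 + 25 * (1 - np.exp(-0.04 * dbh))  # m, plausible height-diameter curve
log_agb = -2.0 + 2.4 * np.log(dbh) + 0.7 * np.log(height) + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), np.log(dbh), np.log(height)])
beta, res, *_ = np.linalg.lstsq(X, log_agb, rcond=None)
resid_var = res[0] / (n - X.shape[1])          # residual variance on the log scale

def predict_agb(dbh_new, h_new, add_noise=False):
    """Predict AGB; optionally add back the unexplained (residual) variability."""
    mu = beta[0] + beta[1] * np.log(dbh_new) + beta[2] * np.log(h_new)
    if add_noise:
        mu = mu + rng.normal(0, np.sqrt(resid_var), size=np.shape(mu))
    return np.exp(mu)

print(predict_agb(np.array([20.0, 50.0]), np.array([15.0, 24.0])))
```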
Stenroos, Matti; Hauk, Olaf
2013-01-01
The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
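For reference, the regularized minimum-norm estimator used in such studies can be written compactly. The sketch below is generic and not the authors' pipeline: the lead field is random rather than computed with a boundary element method, and the dimensions, noise covariance, and regularization parameter are assumptions.

```python
# A minimal sketch of the regularized minimum-norm (MN) estimator:
# s_hat = L^T (L L^T + lambda^2 * C)^{-1} y for lead field L, noise
# covariance C, and sensor data y. Dimensions and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 500
L = rng.normal(size=(n_sensors, n_sources))        # stand-in lead-field matrix
C = np.eye(n_sensors)                              # assumed noise covariance
lam = 0.1                                          # regularization parameter

# MN spatial filter: one row per source location
W = L.T @ np.linalg.inv(L @ L.T + lam**2 * C)

# Simulate a single active source and estimate the distributed amplitudes
s_true = np.zeros(n_sources); s_true[123] = 1.0
y = L @ s_true + 0.01 * rng.normal(size=n_sensors)
s_hat = W @ y
# With 64 sensors and 500 sources the estimate is diffuse; the peak is usually
# near, but not guaranteed to coincide with, the simulated source index.
print("index of peak estimate:", int(np.argmax(np.abs(s_hat))))
```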
Assessing the performance of dynamical trajectory estimates
NASA Astrophysics Data System (ADS)
Bröcker, Jochen
2014-06-01
Estimating trajectories and parameters of dynamical systems from observations is a problem frequently encountered in various branches of science; geophysicists for example refer to this problem as data assimilation. Unlike as in estimation problems with exchangeable observations, in data assimilation the observations cannot easily be divided into separate sets for estimation and validation; this creates serious problems, since simply using the same observations for estimation and validation might result in overly optimistic performance assessments. To circumvent this problem, a result is presented which allows us to estimate this optimism, thus allowing for a more realistic performance assessment in data assimilation. The presented approach becomes particularly simple for data assimilation methods employing a linear error feedback (such as synchronization schemes, nudging, incremental 3DVAR and 4DVar, and various Kalman filter approaches). Numerical examples considering a high gain observer confirm the theory.
NASA Astrophysics Data System (ADS)
Hallez, Hans; Staelens, Steven; Lemahieu, Ignace
2009-10-01
EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.
Error analysis of satellite attitude determination using a vision-based approach
NASA Astrophysics Data System (ADS)
Carozza, Ludovico; Bevilacqua, Alessandro
2013-09-01
Improvements in communication and processing technologies have opened the door to exploiting on-board cameras to compute an object's spatial attitude using only the visual information from sequences of remotely sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, referring to vision-based approaches. The method in particular focuses on the analysis of the image registration algorithm, carried out through on-purpose simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at proof-of-concept design and planning levels, since they emphasize the main sources of error for vision-based approaches employed for satellite attitude estimation. Nevertheless, the approach we present is also of general interest for related application domains that require an accurate estimation of three-dimensional orientation parameters (i.e., robotics, airborne stabilization).
Neutron-Star Radius from a Population of Binary Neutron Star Mergers.
Bose, Sukanta; Chakravarti, Kabir; Rezzolla, Luciano; Sathyaprakash, B S; Takami, Kentaro
2018-01-19
We show how gravitational-wave observations with advanced detectors of tens to several tens of neutron-star binaries can measure the neutron-star radius with an accuracy of several to a few percent, for mass and spatial distributions that are realistic, and with none of the sources located within 100 Mpc. We achieve such an accuracy by combining measurements of the total mass from the inspiral phase with those of the compactness from the postmerger oscillation frequencies. For estimating the measurement errors of these frequencies, we utilize analytical fits to postmerger numerical relativity waveforms in the time domain, obtained here for the first time, for four nuclear-physics equations of state and a couple of values for the mass. We further exploit quasiuniversal relations to derive errors in compactness from those frequencies. Measuring the average radius to well within 10% is possible for a sample of 100 binaries distributed uniformly in volume between 100 and 300 Mpc, so long as the equation of state is not too soft or the binaries are not too heavy. We also give error estimates for the Einstein Telescope.
Gao, Nuo; Zhu, S A; He, Bin
2005-06-07
We have developed a new algorithm for magnetic resonance electrical impedance tomography (MREIT), which uses only one component of the magnetic flux density to reconstruct the electrical conductivity distribution within the body. The radial basis function (RBF) network and simplex method are used in the present approach to estimate the conductivity distribution by minimizing the errors between the 'measured' and model-predicted magnetic flux densities. Computer simulations were conducted in a realistic-geometry head model to test the feasibility of the proposed approach. Single-variable and three-variable simulations were performed to estimate the brain-skull conductivity ratio and the conductivity values of the brain, skull and scalp layers. When SNR = 15 for magnetic flux density measurements with the target skull-to-brain conductivity ratio being 1/15, the relative error (RE) between the target and estimated conductivity was 0.0737 +/- 0.0746 in the single-variable simulations. In the three-variable simulations, the RE was 0.1676 +/- 0.0317. Effects of electrode position uncertainty were also assessed by computer simulations. The present promising results suggest the feasibility of estimating important conductivity values within the head from noninvasive magnetic flux density measurements.
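A minimal sketch of the misfit-minimization step using the simplex (Nelder-Mead) method, with a hypothetical one-parameter toy forward model standing in for the authors' RBF-network and numerical field solver. The target skull-to-brain conductivity ratio of 1/15 follows the abstract; everything else is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def forward_bz(cond_ratio, r):
    """Toy forward model: one flux-density component as a function of the
    skull-to-brain conductivity ratio (placeholder for the real solver)."""
    return np.exp(-r / cond_ratio) / (1.0 + r)

r = np.linspace(0.1, 1.0, 50)            # measurement locations (arbitrary units)
target_ratio = 1.0 / 15.0                # "true" skull-to-brain conductivity ratio
bz_meas = forward_bz(target_ratio, r)
bz_meas += 0.05 * bz_meas.std() * rng.normal(size=r.size)   # SNR-like noise

def misfit(params):
    if params[0] <= 0:                   # keep the simplex in the physical region
        return 1e12
    return np.sum((bz_meas - forward_bz(params[0], r)) ** 2)

res = minimize(misfit, x0=[0.1], method="Nelder-Mead")
est = res.x[0]
print("estimated ratio:", est, "relative error:", abs(est - target_ratio) / target_ratio)
```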
An operational GLS model for hydrologic regression
Tasker, Gary D.; Stedinger, J.R.
1989-01-01
Recent Monte Carlo studies have documented the value of generalized least squares (GLS) procedures to estimate empirical relationships between streamflow statistics and physiographic basin characteristics. This paper presents a number of extensions of the GLS method that deal with realities and complexities of regional hydrologic data sets that were not addressed in the simulation studies. These extensions include: (1) a more realistic model of the underlying model errors; (2) smoothed estimates of cross correlation of flows; (3) procedures for including historical flow data; (4) diagnostic statistics describing leverage and influence for GLS regression; and (5) the formulation of a mathematical program for evaluating future gaging activities. © 1989.
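A minimal sketch of the GLS estimator with a full error covariance that combines a model-error term and cross-correlated sampling error from concurrent flow records. The covariates, variances and correlation scale are assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 25                                                    # gauged basins
X = np.column_stack([np.ones(n), rng.normal(size=n)])     # intercept + one basin characteristic
beta_true = np.array([1.0, 0.6])

# GLS error covariance: model error on the diagonal plus cross-correlated
# sampling error from concurrent records (illustrative values).
model_var = 0.05
sampling_cov = 0.02 * np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 5.0)
Lambda = model_var * np.eye(n) + sampling_cov

y = X @ beta_true + rng.multivariate_normal(np.zeros(n), Lambda)

# GLS estimator: beta = (X' Lambda^{-1} X)^{-1} X' Lambda^{-1} y
Li = np.linalg.inv(Lambda)
beta_gls = np.linalg.solve(X.T @ Li @ X, X.T @ Li @ y)
cov_beta = np.linalg.inv(X.T @ Li @ X)
print("GLS estimates  :", beta_gls)
print("standard errors:", np.sqrt(np.diag(cov_beta)))
```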
In vivo estimation of target registration errors during augmented reality laparoscopic surgery.
Thompson, Stephen; Schneider, Crispin; Bosi, Michele; Gurusamy, Kurinchi; Ourselin, Sébastien; Davidson, Brian; Hawkes, David; Clarkson, Matthew J
2018-06-01
Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
Invariant Tori in the Secular Motions of the Three-body Planetary Systems
NASA Astrophysics Data System (ADS)
Locatelli, Ugo; Giorgilli, Antonio
We consider the problem of the applicability of KAM theorem to a realistic problem of three bodies. In the framework of the averaged dynamics over the fast angles for the Sun-Jupiter-Saturn system we can prove the perpetual stability of the orbit. The proof is based on semi-numerical algorithms requiring both explicit algebraic manipulations of series and analytical estimates. The proof is made rigorous by using interval arithmetics in order to control the numerical errors.
Milky Way mass and potential recovery using tidal streams in a realistic halo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonaca, Ana; Geha, Marla; Küpper, Andreas H. W.
2014-11-01
We present a new method for determining the Galactic gravitational potential based on forward modeling of tidal stellar streams. We use this method to test the performance of smooth and static analytic potentials in representing realistic dark matter halos, which have substructure and are continually evolving by accretion. Our FAST-FORWARD method uses a Markov Chain Monte Carlo algorithm to compare, in six-dimensional phase space, an 'observed' stream to models created in trial analytic potentials. We analyze a large sample of streams that evolved in the Via Lactea II (VL2) simulation, which represents a realistic Galactic halo potential. The recovered potential parameters are in agreement with the best fit to the global, present-day VL2 potential. However, merely assuming an analytic potential limits the dark matter halo mass measurement to an accuracy of 5%-20%, depending on the choice of analytic parameterization. Collectively, the mass estimates using streams from our sample reach this fundamental limit, but individually they can be highly biased. Individual streams can both under- and overestimate the mass, and the bias is progressively worse for those with smaller perigalacticons, motivating the search for tidal streams at galactocentric distances larger than 70 kpc. We estimate that the assumption of a static and smooth dark matter potential in modeling of the GD-1- and Pal5-like streams introduces an error of up to 50% in the Milky Way mass estimates.
Fast state estimation subject to random data loss in discrete-time nonlinear stochastic systems
NASA Astrophysics Data System (ADS)
Mahdi Alavi, S. M.; Saif, Mehrdad
2013-12-01
This paper focuses on the design of the standard observer in discrete-time nonlinear stochastic systems subject to random data loss. By the assumption that the system response is incrementally bounded, two sufficient conditions are subsequently derived that guarantee exponential mean-square stability and fast convergence of the estimation error for the problem at hand. An efficient algorithm is also presented to obtain the observer gain. Finally, the proposed methodology is employed for monitoring the Continuous Stirred Tank Reactor (CSTR) via a wireless communication network. The effectiveness of the designed observer is extensively assessed by using an experimental test-bed that was fabricated for performance evaluation of estimation techniques over wireless networks under realistic radio channel conditions.
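A minimal sketch of a state observer running under Bernoulli measurement (packet) loss, with a toy scalar nonlinear map standing in for the CSTR dynamics and a hand-picked gain. The paper derives its gain from the stated stability conditions, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    """Toy incrementally bounded nonlinear state map (stands in for the CSTR model)."""
    return 0.9 * x + 0.1 * np.sin(x)

T = 200
p_arrive = 0.8      # probability that a measurement packet arrives
L = 0.5             # observer gain (assumed; in the paper it comes from the design conditions)

x, xhat = 1.5, 0.0
err = []
for _ in range(T):
    y = x + 0.01 * rng.normal()              # noisy measurement of the current state
    received = rng.random() < p_arrive       # Bernoulli data loss
    innovation = (y - xhat) if received else 0.0
    xhat = f(xhat) + L * innovation          # copy of the dynamics + correction when data arrive
    x = f(x) + 0.01 * rng.normal()           # true state evolution with process noise
    err.append(x - xhat)

print("final estimation error        :", err[-1])
print("rms error over last 50 steps  :", np.sqrt(np.mean(np.square(err[-50:]))))
```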
Population viability analysis with species occurrence data from museum collections.
Skarpaas, Olav; Stabbetorp, Odd E
2011-06-01
The most comprehensive data on many species come from scientific collections. Thus, we developed a method of population viability analysis (PVA) in which this type of occurrence data can be used. In contrast to classical PVA, our approach accounts for the inherent observation error in occurrence data and allows the estimation of the population parameters needed for viability analysis. We tested the sensitivity of the approach to spatial resolution of the data, length of the time series, sampling effort, and detection probability with simulated data and conducted PVAs for common, rare, and threatened species. We compared the results of these PVAs with results of standard method PVAs in which observation error is ignored. Our method provided realistic estimates of population growth terms and quasi-extinction risk in cases in which the standard method without observation error could not. For low values of any of the sampling variables we tested, precision decreased, and in some cases biased estimates resulted. The results of our PVAs with the example species were consistent with information in the literature on these species. Our approach may facilitate PVA for a wide range of species of conservation concern for which demographic data are lacking but occurrence data are readily available. ©2011 Society for Conservation Biology.
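A minimal sketch of how imperfect detection in occurrence data distorts naive occupancy-based assessments, which is the kind of observation error the method above is built to absorb. The occupancy decline, detection probability and site numbers are assumed; the paper's full state-space PVA is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

years, sites = 20, 200
psi0, growth, p_detect = 0.6, 0.97, 0.4     # initial occupancy, yearly decline, detection prob.

true_occ, naive_occ = [], []
psi = psi0
for t in range(years):
    occupied = rng.random(sites) < psi
    detected = occupied & (rng.random(sites) < p_detect)   # observation error: misses only
    true_occ.append(occupied.mean())
    naive_occ.append(detected.mean())
    psi *= growth

# Naive occupancy is biased low by the detection probability, so absolute
# quasi-extinction thresholds would be crossed far too early if the
# observation process is ignored.
print("final true occupancy :", round(true_occ[-1], 3))
print("final naive occupancy:", round(naive_occ[-1], 3))
print("naive / true ratio   :", round(naive_occ[-1] / true_occ[-1], 3))
```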
Experimental investigation of observation error in anuran call surveys
McClintock, B.T.; Bailey, L.L.; Pollock, K.H.; Simons, T.R.
2010-01-01
Occupancy models that account for imperfect detection are often used to monitor anuran and songbird species occurrence. However, presence-absence data arising from auditory detections may be more prone to observation error (e.g., false-positive detections) than are sampling approaches utilizing physical captures or sightings of individuals. We conducted realistic, replicated field experiments using a remote broadcasting system to simulate simple anuran call surveys and to investigate potential factors affecting observation error in these studies. Distance, time, ambient noise, and observer abilities were the most important factors explaining false-negative detections. Distance and observer ability were the best overall predictors of false-positive errors, but ambient noise and competing species also affected error rates for some species. False-positive errors made up 5% of all positive detections, with individual observers exhibiting false-positive rates between 0.5% and 14%. Previous research suggests false-positive errors of these magnitudes would induce substantial positive biases in standard estimators of species occurrence, and we recommend practices to mitigate for false positives when developing occupancy monitoring protocols that rely on auditory detections. These recommendations include additional observer training, limiting the number of target species, and establishing distance and ambient noise thresholds during surveys. © 2010 The Wildlife Society.
Updated Magmatic Flux Rate Estimates for the Hawaii Plume
NASA Astrophysics Data System (ADS)
Wessel, P.
2013-12-01
Several studies have estimated the magmatic flux rate along the Hawaiian-Emperor Chain using a variety of methods and arriving at different results. These flux rate estimates have weaknesses because of incomplete data sets and different modeling assumptions, especially for the youngest portion of the chain (<3 Ma). While they generally agree on the 1st order features, there is less agreement on the magnitude and relative size of secondary flux variations. Some of these differences arise from the use of different methodologies, but the significance of this variability is difficult to assess due to a lack of confidence bounds on the estimates obtained with these disparate methods. All methods introduce some error, but to date there has been little or no quantification of error estimates for the inferred melt flux, making an assessment problematic. Here we re-evaluate the melt flux for the Hawaii plume with the latest gridded data sets (SRTM30+ and FAA 21.1) using several methods, including the optimal robust separator (ORS) and directional median filtering techniques (DiM). We also compute realistic confidence limits on the results. In particular, the DiM technique was specifically developed to aid in the estimation of surface loads that are superimposed on wider bathymetric swells and it provides error estimates on the optimal residuals. Confidence bounds are assigned separately for the estimated surface load (obtained from the ORS regional/residual separation techniques) and the inferred subsurface volume (from gravity-constrained isostasy and plate flexure optimizations). These new and robust estimates will allow us to assess which secondary features in the resulting melt flux curve are significant and should be incorporated when correlating melt flux variations with other geophysical and geochemical observations.
Pulsed photoacoustic flow imaging with a handheld system
NASA Astrophysics Data System (ADS)
van den Berg, Pim J.; Daoudi, Khalid; Steenbergen, Wiendelt
2016-02-01
Flow imaging is an important technique in a range of disease areas, but estimating low flow speeds, especially near the walls of blood vessels, remains challenging. Pulsed photoacoustic flow imaging can be an alternative since there is little signal contamination from background tissue with photoacoustic imaging. We propose flow imaging using a clinical photoacoustic system that is both handheld and portable. The system integrates a linear array with 7.5 MHz central frequency in combination with a high-repetition-rate diode laser to allow high-speed photoacoustic imaging, which is ideal for this application. This work shows the flow imaging performance of the system in vitro using microparticles. Both two-dimensional (2-D) flow images and quantitative flow velocities from 12 to 75 mm/s were obtained. In a transparent bulk medium, flow estimation showed standard errors of ~7% of the estimated speed; in the presence of tissue-realistic optical scattering, the error increased to 40% due to limited signal-to-noise ratio. In the future, photoacoustic flow imaging can potentially be performed in vivo using fluorophore-filled vesicles or with an improved setup on whole blood.
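A minimal sketch of one common route to such flow estimates: cross-correlating successive frames of particle signal to recover the displacement per frame and hence the speed. The pixel pitch, frame interval and speckle-like signal model are assumptions, not the authors' system parameters; sub-pixel peak interpolation would refine the estimate.

```python
import numpy as np

rng = np.random.default_rng(5)

pixel_um = 25.0          # pixel size along the flow direction [micrometres] (assumed)
frame_dt = 1e-3          # time between frames [s] (assumed high-repetition-rate laser)
true_speed = 50.0        # mm/s

# Simulate a 1-D speckle-like profile of absorbing microparticles and a shifted copy
x = np.arange(512)
profile = rng.normal(size=x.size)
shift_px = true_speed * 1e3 * frame_dt / pixel_um        # mm/s -> um per frame -> pixels
frame1 = profile
frame2 = np.interp(x - shift_px, x, profile)             # shifted frame
frame2 += 0.1 * rng.normal(size=x.size)                  # noise (optical scattering lowers SNR)

# Cross-correlate and locate the peak to recover the per-frame shift
corr = np.correlate(frame2 - frame2.mean(), frame1 - frame1.mean(), mode="full")
lag = np.argmax(corr) - (x.size - 1)                     # integer-pixel lag
est_speed = lag * pixel_um * 1e-3 / frame_dt             # pixels -> mm/s
print("estimated flow speed [mm/s]:", est_speed)
```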
Accuracy assessment of high-rate GPS measurements for seismology
NASA Astrophysics Data System (ADS)
Elosegui, P.; Davis, J. L.; Ekström, G.
2007-12-01
Analysis of GPS measurements with a controlled laboratory system, built to simulate the ground motions caused by tectonic earthquakes and other transient geophysical signals such as glacial earthquakes, enables us to assess the technique of high-rate GPS. The root-mean-square (rms) position error of this system when undergoing realistic simulated seismic motions is 0.05 mm, with maximum position errors of 0.1 mm, thus providing "ground truth" GPS displacements. We have acquired an extensive set of high-rate GPS measurements while inducing seismic motions on a GPS antenna mounted on this system with a temporal spectrum similar to real seismic events. We found that, for a particular 15-min-long test event, the rms error of the 1-Hz GPS position estimates was 2.5 mm, with maximum position errors of 10 mm, and the error spectrum of the GPS estimates was approximately flicker noise. These results may however represent a best-case scenario since they were obtained over a short (~10 m) baseline, thereby greatly mitigating baseline-dependent errors, and when the number and distribution of satellites on the sky was good. For example, we have determined that the rms error can increase by a factor of 2-3 as the GPS constellation changes throughout the day, with an average value of 3.5 mm for eight identical, hourly-spaced, consecutive test events. The rms error also increases with increasing baseline, as one would expect, with an average rms error for a ~1400 km baseline of 9 mm. We will present an assessment of the accuracy of high-rate GPS based on these measurements, discuss the implications of this study for seismology, and describe new applications in glaciology.
Resolving Tropical Cyclone Intensity in Models
NASA Astrophysics Data System (ADS)
Davis, C. A.
2018-02-01
In recent years, global weather forecast models and global climate models have begun to depict intense tropical cyclones, even up to category 5 on the Saffir-Simpson scale. In light of the limitation of horizontal resolution in such models, the author performs calculations, using the extended Best Track data for Atlantic tropical cyclones, to estimate the ability of models with differing grid spacing to represent Atlantic tropical cyclone intensity statistically. Results indicate that, under optimistic assumptions, models with horizontal grid spacing of one fourth degree or coarser should not produce a realistic number of category 4 and 5 storms unless there are errors in spatial attributes of the wind field. Furthermore, the case of Irma (2017) is used to demonstrate the importance of a realistic depiction of angular momentum and to motivate the use of angular momentum in model evaluation.
Optimizing Hybrid Metrology: Rigorous Implementation of Bayesian and Combined Regression
Henn, Mark-Alexander; Silver, Richard M.; Villarrubia, John S.; Zhang, Nien Fan; Zhou, Hui; Barnes, Bryan M.; Ming, Bin; Vladár, András E.
2015-01-01
Hybrid metrology, e.g., the combination of several measurement techniques to determine critical dimensions, is an increasingly important approach to meet the needs of the semiconductor industry. A proper use of hybrid metrology may yield not only more reliable estimates for the quantitative characterization of 3-D structures but also a more realistic estimation of the corresponding uncertainties. Recent developments at the National Institute of Standards and Technology (NIST) feature the combination of optical critical dimension (OCD) measurements and scanning electron microscope (SEM) results. The hybrid methodology offers the potential to make measurements of essential 3-D attributes that may not be otherwise feasible. However, combining techniques gives rise to essential challenges in error analysis and comparing results from different instrument models, especially the effect of systematic and highly correlated errors in the measurement on the χ2 function that is minimized. Both hypothetical examples and measurement data are used to illustrate solutions to these challenges. PMID:26681991
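A minimal sketch contrasting a diagonal-covariance chi-square with the generalized chi-square r'C⁻¹r when a fully correlated systematic term is present, which is the kind of correlated-error effect discussed above. The residuals and variances are illustrative, not NIST measurement data.

```python
import numpy as np

# Residuals between model and N hybrid measurements (illustrative numbers)
r = np.array([0.8, 1.1, 0.9, 1.2, 1.0])

# Independent (random) measurement variances
var_rand = 0.25 * np.ones(r.size)

# A fully correlated systematic error (e.g., a shared tool bias) adds a constant
# block to the covariance; ignoring it changes the chi-square that is minimized.
sigma_sys = 0.4
C = np.diag(var_rand) + sigma_sys**2 * np.ones((r.size, r.size))

chi2_diag = np.sum(r**2 / var_rand)            # ignores correlations
chi2_full = r @ np.linalg.solve(C, r)          # generalized chi-square, r' C^{-1} r
print("chi2 with diagonal covariance:", round(chi2_diag, 2))
print("chi2 with full covariance    :", round(chi2_full, 2))
```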
Measurement error in epidemiologic studies of air pollution based on land-use regression models.
Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino
2013-10-15
Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
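A minimal sketch of the mechanism described above: a LUR-type exposure model fit on a small number of sites with data-driven variable selection, whose predictions then bias the health-effect estimate. All sample sizes, covariates and the true effect are assumed for illustration and the bias magnitude here is purely that of the toy setup.

```python
import numpy as np

rng = np.random.default_rng(6)
n, n_sites, p = 5000, 40, 20              # subjects, monitoring sites, candidate predictors

Z = rng.normal(size=(n, p))               # geographic covariates at all locations
x_true = Z[:, 0] + 0.8 * Z[:, 1] + rng.normal(size=n)   # only two covariates matter

# Fit the LUR model on the few monitoring sites, with data-driven variable selection
site = rng.choice(n, n_sites, replace=False)
corr = np.abs([np.corrcoef(Z[site, j], x_true[site])[0, 1] for j in range(p)])
sel = np.argsort(corr)[-5:]               # keep the 5 "best" predictors (overfits)
A = np.column_stack([np.ones(n_sites), Z[site][:, sel]])
coef, *_ = np.linalg.lstsq(A, x_true[site], rcond=None)
x_pred = np.column_stack([np.ones(n), Z[:, sel]]) @ coef

# Health model with a true effect of 0.1 per unit exposure
beta = 0.1
y = beta * x_true + rng.normal(size=n)
slope = lambda x: np.polyfit(x, y, 1)[0]
print("effect with true exposure:", round(slope(x_true), 3))
print("effect with LUR exposure :", round(slope(x_pred), 3))
```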
Incorporating harvest rates into the sex-age-kill model for white-tailed deer
Norton, Andrew S.; Diefenbach, Duane R.; Rosenberry, Christopher S.; Wallingford, Bret D.
2013-01-01
Although monitoring population trends is an essential component of game species management, wildlife managers rarely have complete counts of abundance. Often, they rely on population models to monitor population trends. As imperfect representations of real-world populations, models must be rigorously evaluated to be applied appropriately. Previous research has evaluated population models for white-tailed deer (Odocoileus virginianus); however, the precision and reliability of these models when tested against empirical measures of variability and bias largely is untested. We were able to statistically evaluate the Pennsylvania sex-age-kill (PASAK) population model using realistic error measured using data from 1,131 radiocollared white-tailed deer in Pennsylvania from 2002 to 2008. We used these data and harvest data (number killed, age-sex structure, etc.) to estimate precision of abundance estimates, identify the most efficient harvest data collection with respect to precision of parameter estimates, and evaluate PASAK model robustness to violation of assumptions. Median coefficient of variation (CV) estimates by Wildlife Management Unit, 13.2% in the most recent year, were slightly above benchmarks recommended for managing game species populations. Doubling reporting rates by hunters or doubling the number of deer checked by personnel in the field reduced median CVs to recommended levels. The PASAK model was robust to errors in estimates for adult male harvest rates but was sensitive to errors in subadult male harvest rates, especially in populations with lower harvest rates. In particular, an error in subadult (1.5-yr-old) male harvest rates resulted in the opposite error in subadult male, adult female, and juvenile population estimates. Also, evidence of a greater harvest probability for subadult female deer when compared with adult (≥2.5-yr-old) female deer resulted in a 9.5% underestimate of the population using the PASAK model. Because obtaining appropriate sample sizes, by management unit, to estimate harvest rate parameters each year may be too expensive, assumptions of constant annual harvest rates may be necessary. However, if changes in harvest regulations or hunter behavior influence subadult male harvest rates, the PASAK model could provide an unreliable index to population changes.
Time-scale invariance as an emergent property in a perceptron with realistic, noisy neurons
Buhusi, Catalin V.; Oprisan, Sorinel A.
2013-01-01
In most species, interval timing is time-scale invariant: errors in time estimation scale up linearly with the estimated duration. In mammals, time-scale invariance is ubiquitous over behavioral, lesion, and pharmacological manipulations. For example, dopaminergic drugs induce an immediate, whereas cholinergic drugs induce a gradual, scalar change in timing. Behavioral theories posit that time-scale invariance derives from particular computations, rules, or coding schemes. In contrast, we discuss a simple neural circuit, the perceptron, whose output neurons fire in a clockwise fashion (interval timing) based on the pattern of coincidental activation of its input neurons. We show numerically that time-scale invariance emerges spontaneously in a perceptron with realistic neurons, in the presence of noise. Under the assumption that dopaminergic drugs modulate the firing of input neurons, and that cholinergic drugs modulate the memory representation of the criterion time, we show that a perceptron with realistic neurons reproduces the pharmacological clock and memory patterns, and their time-scale invariance, in the presence of noise. These results suggest that rather than being a signature of higher-order cognitive processes or specific computations related to timing, time-scale invariance may spontaneously emerge in a massively-connected brain from the intrinsic noise of neurons and circuits, thus providing the simplest explanation for the ubiquity of scale invariance of interval timing. PMID:23518297
Nickerson, Naomi H; Li, Ying; Benjamin, Simon C
2013-01-01
A scalable quantum computer could be built by networking together many simple processor cells, thus avoiding the need to create a single complex structure. The difficulty is that realistic quantum links are very error prone. A solution is for cells to repeatedly communicate with each other and so purify any imperfections; however prior studies suggest that the cells themselves must then have prohibitively low internal error rates. Here we describe a method by which even error-prone cells can perform purification: groups of cells generate shared resource states, which then enable stabilization of topologically encoded data. Given a realistically noisy network (≥10% error rate) we find that our protocol can succeed provided that intra-cell error rates for initialisation, state manipulation and measurement are below 0.82%. This level of fidelity is already achievable in several laboratory systems.
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2014-09-01
Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.
Stochastic estimation of plant-available soil water under fluctuating water table depths
NASA Astrophysics Data System (ADS)
Or, Dani; Groeneveld, David P.
1994-12-01
Preservation of native valley-floor phreatophytes while pumping groundwater for export from Owens Valley, California, requires reliable predictions of plant water use. These predictions are compared with stored soil water within well field regions and serve as a basis for managing groundwater resources. Soil water measurement errors, variable recharge, unpredictable climatic conditions affecting plant water use, and modeling errors make soil water predictions uncertain and error-prone. We developed and tested a scheme based on soil water balance coupled with implementation of Kalman filtering (KF) for (1) providing physically based soil water storage predictions with prediction errors projected from the statistics of the various inputs, and (2) reducing the overall uncertainty in both estimates and predictions. The proposed KF-based scheme was tested using experimental data collected at a location on the Owens Valley floor where the water table was artificially lowered by groundwater pumping and later allowed to recover. Vegetation composition and per cent cover, climatic data, and soil water information were collected and used for developing a soil water balance. Predictions and updates of soil water storage under different types of vegetation were obtained for a period of 5 years. The main results show that: (1) the proposed predictive model provides reliable and resilient soil water estimates under a wide range of external conditions; (2) the predicted soil water storage and the error bounds provided by the model offer a realistic and rational basis for decisions such as when to curtail well field operation to ensure plant survival. The predictive model offers a practical means for accommodating simple aspects of spatial variability by considering the additional source of uncertainty as part of modeling or measurement uncertainty.
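A minimal sketch of the scalar Kalman-filter predict/update cycle for a single soil-water store, showing how prediction error variance grows with the water-balance model error and shrinks when a measurement is assimilated. The forcing, process-noise and measurement-noise variances are assumptions, not the Owens Valley values.

```python
import numpy as np

rng = np.random.default_rng(7)

T = 60                    # update steps (assumed)
S_true = 300.0            # true stored soil water [mm]
S_hat, P = 250.0, 400.0   # initial estimate and its error variance [mm^2]

Q = 25.0                  # process-noise variance (recharge/plant-water-use model error) [mm^2]
R = 100.0                 # soil-water measurement error variance [mm^2]

for t in range(T):
    # Water-balance forcing: recharge minus plant water use (toy seasonal values)
    dS = 5.0 * np.sin(2 * np.pi * t / 26) - 1.0
    S_true = S_true + dS + rng.normal(scale=np.sqrt(Q))

    # Predict
    S_hat = S_hat + dS
    P = P + Q

    # Update with a noisy soil-water measurement
    z = S_true + rng.normal(scale=np.sqrt(R))
    K = P / (P + R)                     # Kalman gain
    S_hat = S_hat + K * (z - S_hat)
    P = (1 - K) * P

print("final estimate: %.1f mm, true: %.1f mm, +/- %.1f mm (1 sigma)"
      % (S_hat, S_true, np.sqrt(P)))
```

The error bound sqrt(P) is the quantity that, in the study's terms, supports decisions such as when to curtail well-field operation.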
Towards a novel look on low-frequency climate reconstructions
NASA Astrophysics Data System (ADS)
Kamenik, Christian; Goslar, Tomasz; Hicks, Sheila; Barnekow, Lena; Huusko, Antti
2010-05-01
Information on low-frequency (millennial to sub-centennial) climate change is often derived from sedimentary archives, such as peat profiles or lake sediments. Usually, these archives have non-annual and varying time resolution. Their dating is mainly based on radionuclides, which provide probabilistic age-depth relationships with complex error structures. Dating uncertainties impede the interpretation of sediment-based climate reconstructions. They complicate the calculation of time-dependent rates. In most cases, they make any calibration in time impossible. Sediment-based climate proxies are therefore often presented as a single, best-guess time series without proper calibration and error estimation. Errors along time and dating errors that propagate into the calculation of time-dependent rates are neglected. Our objective is to overcome the aforementioned limitations by using a 'swarm' or 'ensemble' of reconstructions instead of a single best-guess. The novelty of our approach is to take into account age-depth uncertainties by permuting through a large number of potential age-depth relationships of the archive of interest. For each individual permutation we can then calculate rates, calibrate proxies in time, and reconstruct the climate-state variable of interest. From the resulting swarm of reconstructions, we can derive realistic estimates of even complex error structures. The likelihood of reconstructions is visualized by a grid of two-dimensional kernels that take into account probabilities along time and the climate-state variable of interest simultaneously. For comparison and regional synthesis, likelihoods can be scored against other independent climate time series.
Minimizing treatment planning errors in proton therapy using failure mode and effects analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Yuanshui, E-mail: yuanshui.zheng@okc.procure.com; Johnson, Randall; Larson, Gary
Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their clinic have proven to be useful in error reduction in proton treatment planning, thus improving the effectiveness and safety of proton therapy.
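A minimal sketch of the risk priority number bookkeeping behind an FMEA: RPN = occurrence x detectability x severity, used to rank failure modes. The listed modes and 1-10 scores are illustrative, not the study's ratings.

```python
# Illustrative failure modes with 1-10 scores for occurrence (O),
# undetectability (D) and severity (S); RPN = O * D * S.
failure_modes = [
    ("wrong CT/MR image fusion",      3, 4, 9),
    ("contour on wrong structure",    2, 3, 8),
    ("beam arrangement not exported", 4, 2, 7),
    ("billing code mismatch",         5, 2, 2),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, o, d, s in ranked:
    print(f"RPN={o * d * s:3d}  {name}")
```

Re-scoring the occurrence column from an ongoing error-tracking database and re-sorting is exactly the periodic reevaluation loop the abstract describes.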
Freitag, L E; Tyack, P L
1993-04-01
A method for localization and tracking of calling marine mammals was tested under realistic field conditions that include noise, multipath, and arbitrarily located sensors. Experiments were performed in two locations using four and six hydrophones with captive Atlantic bottlenose dolphins (Tursiops truncatus). Acoustic signals from the animals were collected in the field using a digital acoustic data acquisition system. The data were then processed off-line to determine relative hydrophone positions and the animal locations. Accurate hydrophone position estimates are achieved by pinging sequentially from each hydrophone to all the others. A two-step least-squares algorithm is then used to determine sensor locations from the calibration data. Animal locations are determined by estimating the time differences of arrival of the dolphin signals at the different sensors. The peak of a matched filter output or the first cycle of the observed waveform is used to determine arrival time of an echolocation click. Cross correlation between hydrophones is used to determine inter-sensor time delays of whistles. Calculation of source location using the time difference of arrival measurements is done using a least-squares solution to minimize error. These preliminary experimental results based on a small set of data show that realistic trajectories for moving animals may be generated from consecutive location estimates.
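A minimal sketch of the final localization step: least-squares fitting of a source position to time-difference-of-arrival (TDOA) measurements. The hydrophone geometry, sound speed and timing noise are assumed, and the pinging-based hydrophone self-calibration step is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

c = 1500.0                                   # sound speed in water [m/s] (assumed)
hydrophones = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 25.0], [28.0, 22.0]])
source_true = np.array([12.0, 9.0])

# Time differences of arrival relative to hydrophone 0, with timing noise
rng = np.random.default_rng(8)
ranges = np.linalg.norm(hydrophones - source_true, axis=1)
tdoa_meas = (ranges[1:] - ranges[0]) / c + rng.normal(scale=2e-5, size=3)

def residuals(p):
    r = np.linalg.norm(hydrophones - p, axis=1)
    return (r[1:] - r[0]) / c - tdoa_meas

fit = least_squares(residuals, x0=np.array([15.0, 15.0]))
print("estimated source position [m]:", fit.x)
print("position error [m]:", np.linalg.norm(fit.x - source_true))
```

Running this on consecutive sets of arrival-time differences yields the sequence of location estimates from which a trajectory can be drawn.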
An optimal pole-matching observer design for estimating tyre-road friction force
NASA Astrophysics Data System (ADS)
Faraji, Mohammad; Johari Majd, Vahid; Saghafi, Behrooz; Sojoodi, Mahdi
2010-10-01
In this paper, considering the dynamical model of tyre-road contacts, we design a nonlinear observer for the on-line estimation of tyre-road friction force using the average lumped LuGre model without any simplification. The design is the extension of a previously proposed observer to allow a much more realistic estimation by considering the effect of the rolling resistance and a term related to the relative velocity in the observer. Our aim is not to introduce a new friction model, but to present a more accurate nonlinear observer for the assumed model. We derive linear matrix equality conditions to obtain an observer gain with minimum pole mismatch for the desired observer error dynamic system. We prove the convergence of the observer for the non-simplified model. Finally, we compare the performance of the proposed observer with that of the previously mentioned nonlinear observer, which shows significant improvement in the accuracy of estimation.
Navigation system for minimally invasive esophagectomy: experimental study in a porcine model.
Nickel, Felix; Kenngott, Hannes G; Neuhaus, Jochen; Sommer, Christof M; Gehrig, Tobias; Kolb, Armin; Gondan, Matthias; Radeleff, Boris A; Schaible, Anja; Meinzer, Hans-Peter; Gutt, Carsten N; Müller-Stich, Beat-Peter
2013-10-01
Navigation systems potentially facilitate minimally invasive esophagectomy and improve patient outcome by improving intraoperative orientation, position estimation of instruments, and identification of lymph nodes and resection margins. The authors' self-developed navigation system is highly accurate in static environments. This study aimed to test the overall accuracy of the navigation system in a realistic operating room scenario and to identify the different sources of error altering accuracy. To simulate a realistic environment, a porcine model (n = 5) was used with endoscopic clips in the esophagus as navigation targets. Computed tomography imaging was followed by image segmentation and target definition with the medical imaging interaction toolkit software. Optical tracking was used for registration and localization of animals and navigation instruments. Intraoperatively, the instrument was displayed relative to segmented organs in real time. The target registration error (TRE) of the navigation system was defined as the distance between the target and the navigation instrument tip. The TRE was measured on skin targets with the animal in the 0° supine and 25° anti-Trendelenburg position and on the esophagus during laparoscopic transhiatal preparation. On skin targets, the TRE was significantly higher in the 25° position, at 14.6 ± 2.7 mm, compared with the 0° position, at 3.2 ± 1.3 mm. The TRE on the esophagus was 11.2 ± 2.4 mm. The main source of error was soft tissue deformation caused by intraoperative positioning, pneumoperitoneum, surgical manipulation, and tissue dissection. The navigation system obtained acceptable accuracy with a minimally invasive transhiatal approach to the esophagus in a realistic experimental model. Thus the system has the potential to improve intraoperative orientation, identification of lymph nodes and adequate resection margins, and visualization of risk structures. Compensation methods for soft tissue deformation may lead to an even more accurate navigation system in the future.
NASA Astrophysics Data System (ADS)
Hakim, Layal; Lacaze, Guilhem; Khalil, Mohammad; Sargsyan, Khachik; Najm, Habib; Oefelein, Joseph
2018-05-01
This paper demonstrates the development of a simple chemical kinetics model designed for autoignition of n-dodecane in air using Bayesian inference with a model-error representation. The model error, i.e. intrinsic discrepancy from a high-fidelity benchmark model, is represented by allowing additional variability in selected parameters. Subsequently, we quantify predictive uncertainties in the results of autoignition simulations of homogeneous reactors at realistic diesel engine conditions. We demonstrate that these predictive error bars capture model error as well. The uncertainty propagation is performed using non-intrusive spectral projection that can also be used in principle with larger scale computations, such as large eddy simulation. While the present calibration is performed to match a skeletal mechanism, it can be done with equal success using experimental data only (e.g. shock-tube measurements). Since our method captures the error associated with structural model simplifications, we believe that the optimised model could then lead to better qualified predictions of autoignition delay time in high-fidelity large eddy simulations than the existing detailed mechanisms. This methodology provides a way to reduce the cost of reaction kinetics in simulations systematically, while quantifying the accuracy of predictions of important target quantities.
Simultaneous Helmert transformations among multiple frames considering all relevant measurements
NASA Astrophysics Data System (ADS)
Chang, Guobin; Lin, Peng; Bian, Hefang; Gao, Jingxiang
2018-03-01
Helmert or similarity models are widely employed to relate different coordinate frames. In practice, one often needs to transform coordinates from more than one old frame into a new one. One may perform separate Helmert transformations for each old frame. However, although each transformation is locally optimal, this is not globally optimal. Transformations among three frames, namely one new and two old, are studied as an example. Simultaneous Helmert transformations among all frames are also studied. Least-squares estimation of the transformation parameters and the coordinates in the new frame of all stations involved is performed. A functional model for the transformations among multiple frames is developed. A realistic stochastic model is followed, in which not only non-common stations are taken into consideration, but also errors in all measurements are addressed. An algorithm of iterative linearizations and estimations is derived in detail. The proposed method is globally optimal, and, perhaps more importantly, it produces a unified network of the new frame providing coordinate estimates for all involved stations and the associated covariance matrix, with the latter being consistent with the true errors of the former. Simulations are conducted, and the results validate the superiority of the proposed combined method over separate approaches.
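A minimal sketch of a linearized seven-parameter (small-angle) Helmert fit for a single old-to-new frame pair, using a rotation-vector convention that is assumed here. The paper's simultaneous multi-frame adjustment with a full stochastic model is not reproduced; this only shows the per-pair building block.

```python
import numpy as np

rng = np.random.default_rng(9)

def helmert_apply(X, t, s, r):
    """Small-angle 7-parameter Helmert: X_new = X + t + s*X + r x X
    (rotation-vector convention assumed)."""
    return X + t + s * X + np.cross(np.broadcast_to(r, X.shape), X)

def skew(v):
    return np.array([[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]])

# Simulate common stations known in both frames (geocentric coordinates in metres)
n = 10
X_old = rng.uniform(4.0e6, 5.0e6, size=(n, 3))
t_true = np.array([0.12, -0.08, 0.05])                   # translation [m]
s_true = 3e-8                                            # unitless scale
r_true = np.radians(np.array([0.3, -0.2, 0.4]) / 3600)   # arcsec -> rad
X_new = helmert_apply(X_old, t_true, s_true, r_true) + 0.002 * rng.normal(size=(n, 3))

# Linearized least squares: unknowns p = [tx, ty, tz, s, rx, ry, rz],
# since r x X = -skew(X) r for small rotations.
A = np.vstack([np.hstack([np.eye(3), x.reshape(3, 1), -skew(x)]) for x in X_old])
l = (X_new - X_old).reshape(-1)
p, *_ = np.linalg.lstsq(A, l, rcond=None)
print("translation [m]   :", p[:3])
print("scale             :", p[3])
print("rotations [arcsec]:", np.degrees(p[4:]) * 3600)
```

The combined method of the paper stacks such observation equations for all frame pairs and non-common stations into one adjustment, so that the new-frame coordinates and their covariance come out of a single solution.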
Li, Yihe; Li, Bofeng; Gao, Yang
2015-01-01
With the increased availability of regional reference networks, Precise Point Positioning (PPP) can achieve fast ambiguity resolution (AR) and precise positioning by assimilating the satellite fractional cycle biases (FCBs) and atmospheric corrections derived from these networks. In such processing, the atmospheric corrections are usually treated as deterministic quantities. This is however unrealistic since the estimated atmospheric corrections obtained from the network data are random and furthermore the interpolated corrections diverge from the realistic corrections. This paper is dedicated to the stochastic modelling of atmospheric corrections and analyzing their effects on the PPP AR efficiency. The random errors of the interpolated corrections are processed as two components: one is from the random errors of estimated corrections at reference stations, while the other arises from the atmospheric delay discrepancies between reference stations and users. The interpolated atmospheric corrections are then applied by users as pseudo-observations with the estimated stochastic model. Two data sets are processed to assess the performance of interpolated corrections with the estimated stochastic models. The results show that when the stochastic characteristics of interpolated corrections are properly taken into account, the successful fix rate reaches 93.3% within 5 min for a medium inter-station distance network and 80.6% within 10 min for a long inter-station distance network. PMID:26633400
NASA Astrophysics Data System (ADS)
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; Bianchini, Federico; Bleem, Lindsey E.; Crawford, Thomas M.; Holder, Gilbert P.; Manzotti, Alessandro; Reichardt, Christian L.
2017-08-01
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Simulating a transmon implementation of the surface code, Part II
NASA Astrophysics Data System (ADS)
O'Brien, Thomas; Tarasinski, Brian; Rol, Adriaan; Bultink, Niels; Fu, Xiang; Criger, Ben; Dicarlo, Leonardo
The majority of quantum error correcting circuit simulations use Pauli error channels, as they can be efficiently calculated. This raises two questions: what is the effect of more complicated physical errors on the logical qubit error rate, and how much more efficient can decoders become when accounting for realistic noise? To answer these questions, we design a minimum-weight perfect matching decoder parametrized by a physically motivated noise model and test it on the full density matrix simulation of Surface-17, a distance-3 surface code. We compare performance against other decoders, for a range of physical parameters. Particular attention is paid to realistic sources of error for transmon qubits in a circuit QED architecture, and the requirements for real-time decoding via an FPGA. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
Rigorous covariance propagation of geoid errors to geodetic MDT estimates
NASA Astrophysics Data System (ADS)
Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.
2012-04-01
The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is, if at all, addressed empirically. In this study we make the attempt to perform, based on the full gravity VCM, rigorous error propagation to derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component in dependence of the harmonic degree, and the impact of using or not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering is acting also on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
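A minimal sketch of rigorous covariance propagation, C_y = J C_x J', from correlated coefficient errors to derived quantities, compared with diagonal-only propagation. The toy covariance and Jacobian merely stand in for the GOCE variance-covariance matrix and the spherical-harmonic kernels mapping coefficients to geoid heights or geostrophic velocities.

```python
import numpy as np

rng = np.random.default_rng(10)

# Toy coefficient error covariance (stands in for the full spherical-harmonic VCM):
# correlated errors, as delivered with the gravity field solution.
n_coeff = 50
L_fac = rng.normal(scale=1e-3, size=(n_coeff, n_coeff))
C_x = L_fac @ L_fac.T + 1e-6 * np.eye(n_coeff)

# Linear functionals mapping coefficients to derived quantities at a few points
# (entries are placeholders for the real kernels, including any spectral filter).
n_points = 5
J = rng.normal(size=(n_points, n_coeff))

# Rigorous propagation, keeping all correlations:
C_y = J @ C_x @ J.T
sigma_full = np.sqrt(np.diag(C_y))

# Diagonal-only propagation (what one gets when covariances are ignored):
sigma_diag = np.sqrt((J**2) @ np.diag(C_x))
print("sigmas with full covariance:", np.round(sigma_full, 4))
print("sigmas ignoring covariances:", np.round(sigma_diag, 4))
```

Folding the spectral filter into J, rather than applying it afterwards, is what the abstract means by consistently integrating the filter process into the covariance propagation.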
Identification of Low Order Equivalent System Models From Flight Test Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2000-01-01
Identification of low order equivalent system dynamic models from flight test data was studied. Inputs were pilot control deflections, and outputs were aircraft responses, so the models characterized the total aircraft response including bare airframe and flight control system. Theoretical investigations were conducted and related to results found in the literature. Low order equivalent system modeling techniques using output error and equation error parameter estimation in the frequency domain were developed and validated on simulation data. It was found that some common difficulties encountered in identifying closed loop low order equivalent system models from flight test data could be overcome using the developed techniques. Implications for data requirements and experiment design were discussed. The developed methods were demonstrated using realistic simulation cases, then applied to closed loop flight test data from the NASA F-18 High Alpha Research Vehicle.
Alexeeff, Stacey E.; Schwartz, Joel; Kloog, Itai; Chudnovsky, Alexandra; Koutrakis, Petros; Coull, Brent A.
2016-01-01
Many epidemiological studies use predicted air pollution exposures as surrogates for true air pollution levels. These predicted exposures contain exposure measurement error, yet simulation studies have typically found negligible bias in resulting health effect estimates. However, previous studies typically assumed a statistical spatial model for air pollution exposure, which may be oversimplified. We address this shortcoming by assuming a realistic, complex exposure surface derived from fine-scale (1km x 1km) remote-sensing satellite data. Using simulation, we evaluate the accuracy of epidemiological health effect estimates in linear and logistic regression when using spatial air pollution predictions from kriging and land use regression models. We examined chronic (long-term) and acute (short-term) exposure to air pollution. Results varied substantially across different scenarios. Exposure models with low out-of-sample R2 yielded severe biases in the health effect estimates of some models, ranging from 60% upward bias to 70% downward bias. One land use regression exposure model with greater than 0.9 out-of-sample R2 yielded upward biases up to 13% for acute health effect estimates. Almost all models drastically underestimated the standard errors. Land use regression models performed better in chronic effects simulations. These results can help researchers when interpreting health effect estimates in these types of studies. PMID:24896768
On the predictivity of pore-scale simulations: Estimating uncertainties with multilevel Monte Carlo
NASA Astrophysics Data System (ADS)
Icardi, Matteo; Boccardo, Gianluca; Tempone, Raúl
2016-09-01
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for a specific pore-scale sample and setup are not totally reproducible by another "equivalent" sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can reduce drastically the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [1], that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques. The proposed method can be efficiently used in many porous media applications for problems such as stochastic homogenization/upscaling, propagation of uncertainty from microscopic fluid and rock properties to macro-scale parameters, robust estimation of Representative Elementary Volume size for arbitrary physics.
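A minimal sketch of a two-level multilevel Monte Carlo estimator for the mean of an effective parameter, with a toy random "solver" whose discretization error shrinks as the mesh is refined. It illustrates why the expensive fine-level correction needs far fewer samples than the cheap coarse level; it is not the authors' pore-scale workflow, and the solver, sample distribution and level sizes are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)

def solver(sample, n_cells):
    """Toy 'pore-scale solver': effective parameter for a random sample, with a
    discretization error that shrinks as the mesh (n_cells) is refined."""
    return np.tanh(sample) + rng.normal(scale=1.0 / n_cells)

def mlmc_two_level(n0, n1, coarse=16, fine=128):
    # Level 0: cheap coarse estimate, many samples
    s0 = rng.normal(size=n0)
    y0 = np.array([solver(s, coarse) for s in s0])
    # Level 1: correction E[fine - coarse], evaluated on the same random samples
    # so that most of the variability cancels and few samples are needed
    s1 = rng.normal(size=n1)
    y1 = np.array([solver(s, fine) - solver(s, coarse) for s in s1])
    est = y0.mean() + y1.mean()                      # telescoping sum
    var = y0.var(ddof=1) / n0 + y1.var(ddof=1) / n1  # estimator variance
    return est, np.sqrt(var)

est, err = mlmc_two_level(n0=2000, n1=100)
print("MLMC estimate of the effective parameter: %.4f +/- %.4f" % (est, err))
```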
Characterization of electrophysiological propagation by multichannel sensors
Bradshaw, L. Alan; Kim, Juliana H.; Somarajan, Suseela; Richards, William O.; Cheng, Leo K.
2016-01-01
Objective The propagation of electrophysiological activity measured by multichannel devices could have significant clinical implications. Gastric slow waves normally propagate along longitudinal paths that are evident in recordings of serosal potentials and transcutaneous magnetic fields. We employed a realistic model of gastric slow wave activity to simulate the transabdominal magnetogastrogram (MGG) recorded in a multichannel biomagnetometer and to determine characteristics of electrophysiological propagation from MGG measurements. Methods Using MGG simulations of slow wave sources in a realistic abdomen (both superficial and deep sources) and in a horizontally-layered volume conductor, we compared two analytic methods (Second Order Blind Identification, SOBI and Surface Current Density, SCD) that allow quantitative characterization of slow wave propagation. We also evaluated the performance of the methods with simulated experimental noise. The methods were also validated in an experimental animal model. Results Mean square errors in position estimates were within 2 cm of the correct position, and average propagation velocities within 2 mm/s of the actual velocities. SOBI propagation analysis outperformed the SCD method for dipoles in the superficial and horizontal layer models with and without additive noise. The SCD method gave better estimates for deep sources, but did not handle additive noise as well as SOBI. Conclusion SOBI-MGG and SCD-MGG were used to quantify slow wave propagation in a realistic abdomen model of gastric electrical activity. Significance These methods could be generalized to any propagating electrophysiological activity detected by multichannel sensor arrays. PMID:26595907
Hyvärinen, A
1985-01-01
The main purpose of the present study was to describe the statistical behaviour of daily analytical errors in the dimensions of place and time, providing a statistical basis for realistic estimates of the analytical error, and hence allowing the importance of the error and the relative contributions of its different sources to be re-evaluated. The observation material consists of creatinine and glucose results for control sera measured in daily routine quality control in five laboratories for a period of one year. The observation data were processed and computed by means of an automated data processing system. Graphic representations of time series of daily observations, as well as their means and dispersion limits when grouped over various time intervals, were investigated. For partition of the total variation several two-way analyses of variance were done with laboratory and various time classifications as factors. Pooled sets of observations were tested for normality of distribution and for consistency of variances, and the distribution characteristics of error variation in different categories of place and time were compared. Errors were found from the time series to vary typically between days. Due to irregular fluctuations in general and particular seasonal effects in creatinine, stable estimates of means or of dispersions for errors in individual laboratories could not be easily obtained over short periods of time but only from data sets pooled over long intervals (preferably at least one year). Pooled estimates of proportions of intralaboratory variation were relatively low (less than 33%) when the variation was pooled within days. However, when the variation was pooled over longer intervals this proportion increased considerably, even to a maximum of 89-98% (95-98% in each method category) when an outlying laboratory in glucose was omitted, with a concomitant decrease in the interaction component (representing laboratory-dependent variation with time). This indicates that a substantial part of the variation comes from intralaboratory variation with time rather than from constant interlaboratory differences. Normality and consistency of statistical distributions were best achieved in the long-term intralaboratory sets of the data, under which conditions the statistical estimates of error variability were also most characteristic of the individual laboratories rather than necessarily being similar to one another. Mixing of data from different laboratories may give heterogeneous and nonparametric distributions and hence is not advisable.(ABSTRACT TRUNCATED AT 400 WORDS)
NASA Astrophysics Data System (ADS)
Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.
2017-09-01
The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing the observations before assimilation (as is done in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated by independent in-situ measurements. Our results indicate that all implemented filters improve the water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which reduce the model groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
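For orientation, here is a minimal sketch of the stochastic EnKF analysis step mentioned above (the variant that perturbs the observations, which the deterministic square-root filters avoid). This is textbook EnKF algebra, not the W3RA/GRACE assimilation code; all array names are placeholders.

```python
import numpy as np

def enkf_update(X, y, H, R, rng=None):
    """Stochastic EnKF analysis step (illustrative sketch only).

    X : (n_state, n_ens) forecast ensemble (e.g. W3RA storage states)
    y : (n_obs,) observation vector (e.g. gridded GRACE TWS for one month)
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation error covariance
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
    P_HT = A @ (H @ A).T / (n_ens - 1)            # P H^T without forming P explicitly
    K = P_HT @ np.linalg.inv(H @ P_HT + R)        # Kalman gain
    # Perturbed observations: the step that deterministic/square-root variants avoid.
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)                    # analysis ensemble
```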
NASA Astrophysics Data System (ADS)
Ito, Shin-Ichi; Mitsukura, Yasue; Nakamura Miyamura, Hiroko; Saito, Takafumi; Fukumi, Minoru
EEG signals show unique, individual characteristics, yet little research has taken these individual characteristics into account when analyzing EEG. The EEG often contains frequency components that describe most of its significant characteristics, and the relative importance of these analyzed frequency components differs between individuals; we consider this difference in importance to express the individual characteristics. In this paper, we propose a new method for extracting an EEG characteristic vector using a latency structure model of individual characteristics (LSMIC). The LSMIC is a latency structure model, based on the normal distribution, in which a personal error term represents the individual characteristics. Real-coded genetic algorithms (RGA) are used to identify the personal error, which is an unknown parameter. We also propose an objective estimation method that plots the EEG characteristic vector in a visualization space. Finally, the performance of the proposed method is evaluated on a realistic simulation and applied to real EEG data. The experimental results show the effectiveness of the proposed method.
Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO With Arbitrary Statistics
NASA Astrophysics Data System (ADS)
Shariati, Nafiseh; Bjornson, Emil; Bengtsson, Mats; Debbah, Merouane
2014-10-01
This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as massive MIMO, where there are hundreds of antennas at one side of the link. Motivated by the fact that computational complexity is one of the main challenges in such systems, a set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are introduced for arbitrary channel and interference statistics. While the conventional minimum mean square error (MMSE) estimator has cubic complexity in the dimension of the covariance matrices, due to an inversion operation, our proposed estimators significantly reduce this to square complexity by approximating the inverse by an L-degree matrix polynomial. The coefficients of the polynomial are optimized to minimize the mean square error (MSE) of the estimate. We show numerically that near-optimal MSEs are achieved with low polynomial degrees. We also derive the exact computational complexity of the proposed estimators, in terms of floating-point operations (FLOPs), by which we prove that the proposed estimators outperform the conventional estimators in large-scale MIMO systems of practical dimensions while providing reasonable MSEs. Moreover, we show that L need not scale with the system dimensions to maintain a certain normalized MSE. By analyzing different interference scenarios, we observe that the relative MSE loss of using the low-complexity PEACH estimators is smaller in realistic scenarios with pilot contamination. On the other hand, PEACH estimators are not well suited for noise-limited scenarios with high pilot power; therefore, we also introduce the low-complexity diagonalized estimator that performs well in this regime. Finally, we ...
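To make the complexity argument concrete, here is a simplified sketch in which the matrix inverse is replaced by a truncated Neumann series applied through matrix-vector products only. The actual PEACH estimators optimize the polynomial coefficients for minimum MSE rather than using plain Neumann weights, so treat this as an illustration of the cost structure, not the paper's estimator.

```python
import numpy as np

def poly_inverse_apply(M, y, L, alpha=None):
    """Apply an L-degree polynomial approximation of inv(M) to the vector y.

    Truncated Neumann series: inv(M) ≈ alpha * sum_{l=0}^{L} (I - alpha*M)^l,
    which converges for Hermitian positive-definite M when alpha is chosen from
    the extreme eigenvalues. Only matrix-vector products are used, so the cost
    per estimate is O(L * n^2) instead of the O(n^3) of an explicit inversion.
    """
    if alpha is None:
        ev = np.linalg.eigvalsh(M)
        alpha = 2.0 / (ev.min() + ev.max())       # guarantees convergence for M > 0
    z = alpha * y
    out = z.copy()
    for _ in range(L):
        z = z - alpha * (M @ z)                   # z <- (I - alpha*M) z
        out += z
    return out

# LMMSE-like channel estimate with the polynomial inverse (R_h, R_noise assumed known):
# h_hat = R_h @ poly_inverse_apply(R_h + R_noise, y, L)
```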
NASA Astrophysics Data System (ADS)
de Montera, L.; Mallet, C.; Barthès, L.; Golé, P.
2008-08-01
This paper shows how nonlinear models originally developed in the finance field can be used to predict rain attenuation level and volatility in Earth-to-Satellite links operating at the Extremely High Frequencies band (EHF, 20-50 GHz). A common approach to solving this problem is to consider that the prediction error corresponds only to scintillations, whose variance is assumed to be constant. Nevertheless, this assumption does not seem to be realistic because of the heteroscedasticity of error time series: the variance of the prediction error is found to be time-varying and has to be modeled. Since rain attenuation time series behave similarly to certain stocks or foreign exchange rates, a switching ARIMA/GARCH model was implemented. The originality of this model is that not only the attenuation level, but also the error conditional distribution are predicted. It allows an accurate upper-bound of the future attenuation to be estimated in real time that minimizes the cost of Fade Mitigation Techniques (FMT) and therefore enables the communication system to reach a high percentage of availability. The performance of the switching ARIMA/GARCH model was estimated using a measurement database of the Olympus satellite 20/30 GHz beacons and this model is shown to outperform significantly other existing models. The model also includes frequency scaling from the downlink frequency to the uplink frequency. The attenuation effects (gases, clouds and rain) are first separated with a neural network and then scaled using specific scaling factors. As to the resulting uplink prediction error, the error contribution of the frequency scaling step is shown to be larger than that of the downlink prediction, indicating that further study should focus on improving the accuracy of the scaling factor.
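A bare-bones illustration of the GARCH part of the model: the conditional-variance recursion that lets the prediction-error volatility vary in time. The parameter values and the bound construction in the comments are placeholders, not those fitted to the Olympus beacon data.

```python
import numpy as np

def garch11_volatility(residuals, omega, alpha, beta):
    """Conditional standard deviation from a GARCH(1,1) recursion (illustrative only).

    sigma2[t] = omega + alpha * e[t-1]**2 + beta * sigma2[t-1]
    In the context above, e[t] would be the attenuation prediction error; a real-time
    upper bound on the future attenuation could then be set as, e.g.,
    predicted_level + k * sigma[t] for a chosen exceedance factor k (assumption).
    """
    e = np.asarray(residuals, dtype=float)
    sigma2 = np.empty_like(e)
    sigma2[0] = e.var()                           # simple initialisation
    for t in range(1, len(e)):
        sigma2[t] = omega + alpha * e[t - 1] ** 2 + beta * sigma2[t - 1]
    return np.sqrt(sigma2)
```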
Grid convergence errors in hemodynamic solution of patient-specific cerebral aneurysms.
Hodis, Simona; Uthamaraj, Susheil; Smith, Andrea L; Dennis, Kendall D; Kallmes, David F; Dragomir-Daescu, Dan
2012-11-15
Computational fluid dynamics (CFD) has become a cutting-edge tool for investigating hemodynamic dysfunctions in the body. It has the potential to help physicians quantify in more detail the phenomena difficult to capture with in vivo imaging techniques. CFD simulations in anatomically realistic geometries pose challenges in generating accurate solutions due to the grid distortion that may occur when the grid is aligned with complex geometries. In addition, results obtained with computational methods should be trusted only after the solution has been verified on multiple high-quality grids. The objective of this study was to present a comprehensive solution verification of the intra-aneurysmal flow results obtained on different morphologies of patient-specific cerebral aneurysms. We chose five patient-specific brain aneurysm models with different dome morphologies and estimated the grid convergence errors for each model. The grid convergence errors were estimated with respect to an extrapolated solution based on the Richardson extrapolation method, which accounts for the degree of grid refinement. For four of the five models, calculated velocity, pressure, and wall shear stress values at six different spatial locations converged monotonically, with maximum uncertainty magnitudes ranging from 12% to 16% on the finest grids. Due to the geometric complexity of the fifth model, the grid convergence errors showed oscillatory behavior; therefore, each patient-specific model required its own grid convergence study to establish the accuracy of the analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.
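For reference, the grid convergence estimate described above boils down to the standard three-grid Richardson extrapolation sketched below. This is a generic sketch under the usual assumptions (constant refinement ratio, monotonic convergence), not the authors' post-processing scripts; the example values in the usage comment are made up.

```python
import math

def richardson_error(f_fine, f_medium, f_coarse, r):
    """Observed-order Richardson extrapolation for three systematically refined grids.

    r is the constant grid refinement ratio. Returns the extrapolated value, the
    observed order of accuracy p, and the relative error of the fine-grid solution
    with respect to the extrapolated one.
    """
    p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
    f_extrap = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    rel_err = abs(f_fine - f_extrap) / abs(f_extrap)
    return f_extrap, p, rel_err

# Example with hypothetical peak velocities [m/s] on coarse/medium/fine grids, r = 2:
# richardson_error(0.512, 0.520, 0.545, 2.0)
```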
Fottrell, Edward; Byass, Peter; Berhane, Yemane
2008-03-25
As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of randomly introduced errors indicates a high level of robustness of the dataset. This apparent inertia of population parameter estimates to simulated errors is largely due to the size of the dataset. Tolerable margins of random error in DSS data may exceed 20%. While this is not an argument in favour of poor quality data, reducing the time and valuable resources spent on detecting and correcting random errors in routine DSS operations may be justifiable as the returns from such procedures diminish with increasing overall accuracy. The money and effort currently spent on endlessly correcting DSS datasets would perhaps be better spent on increasing the surveillance population size and geographic spread of DSSs and analysing and disseminating research findings.
Oprisan, Sorinel A.; Buhusi, Catalin V.
2011-01-01
In most species, the capability of perceiving and using the passage of time in the seconds-to-minutes range (interval timing) is not only accurate but also scalar: errors in time estimation are linearly related to the estimated duration. The ubiquity of scalar timing extends over behavioral, lesion, and pharmacological manipulations. For example, in mammals, dopaminergic drugs induce an immediate, scalar change in the perceived time (clock pattern), whereas cholinergic drugs induce a gradual, scalar change in perceived time (memory pattern). How do these properties emerge from unreliable, noisy neurons firing in the milliseconds range? Neurobiological information relative to the brain circuits involved in interval timing provides support for a striatal beat frequency (SBF) model, in which time is coded by the coincidental activation of striatal spiny neurons by cortical neural oscillators. While biologically plausible, the impracticality of perfect oscillators, or their outright absence, calls this mechanism into question in a brain with noisy neurons. We explored the computational mechanisms required for the clock and memory patterns in an SBF model with biophysically realistic and noisy Morris–Lecar neurons (SBF–ML). Under the assumption that dopaminergic drugs modulate the firing frequency of cortical oscillators, and that cholinergic drugs modulate the memory representation of the criterion time, we show that our SBF–ML model can reproduce the pharmacological clock and memory patterns observed in the literature. Numerical results also indicate that parameter variability (noise) – which is ubiquitous in the form of small fluctuations in the intrinsic frequencies of neural oscillators within and between trials, and in the errors in recording/retrieving stored information related to criterion time – seems to be critical for the time-scale invariance of the clock and memory patterns. PMID:21977014
Integrity modelling of tropospheric delay models
NASA Astrophysics Data System (ADS)
Rózsa, Szabolcs; Bastiaan Ober, Pieter; Mile, Máté; Ambrus, Bence; Juni, Ildikó
2017-04-01
The effect of the neutral atmosphere on signal propagation is routinely estimated by various tropospheric delay models in satellite navigation. Although numerous studies can be found in the literature investigating the accuracy of these models, for safety-of-life applications it is crucial to study and model the worst case performance of these models using very low recurrence frequencies. The main objective of the INTegrity of TROpospheric models (INTRO) project funded by the ESA PECS programme is to establish a model (or models) of the residual error of existing tropospheric delay models for safety-of-life applications. Such models are required to overbound rare tropospheric delays and should thus include the tails of the error distributions. Their use should lead to safe error bounds on the user position and should allow computation of protection levels for the horizontal and vertical position errors. The current tropospheric model from the RTCA SBAS Minimal Operational Standards has an associated residual error of 0.12 meters in the vertical direction. This value is derived by simply extrapolating the observed distribution of the residuals into the tail (where no data is present) and then taking the point where the cumulative distribution reaches an exceedance level of 10^-7. While the resulting standard deviation is much higher than the standard deviation that best fits the data (0.05 meters), it is surely conservative for most applications. In the context of the INTRO project some widely used and newly developed tropospheric delay models (e.g. RTCA MOPS, ESA GALTROPO and GPT2W) were tested using 16 years of daily ERA-INTERIM Reanalysis numerical weather model data and the raytracing technique. The results showed that the performance of some of the widely applied models has a clear seasonal dependency and is also affected by geographical position. In order to provide a more realistic, but still conservative, estimation of the residual error of tropospheric delays, the mathematical formulation of the overbounding models is currently under development. This study introduces the main findings of the residual error analysis of the studied tropospheric delay models, and discusses the preliminary analysis of the integrity model development for safety-of-life applications.
Helmholtz and parabolic equation solutions to a benchmark problem in ocean acoustics.
Larsson, Elisabeth; Abrahamsson, Leif
2003-05-01
The Helmholtz equation (HE) describes wave propagation in applications such as acoustics and electromagnetics. For realistic problems, solving the HE is often too expensive. Instead, approximations like the parabolic wave equation (PE) are used. For low-frequency shallow-water environments, one persistent problem is to assess the accuracy of the PE model. In this work, a recently developed HE solver that can handle a smoothly varying bathymetry, variable material properties, and layered materials, is used for an investigation of the errors in PE solutions. In the HE solver, a preconditioned Krylov subspace method is applied to the discretized equations. The preconditioner combines domain decomposition and fast transform techniques. A benchmark problem with upslope-downslope propagation over a penetrable lossy seamount is solved. The numerical experiments show that, for the same bathymetry, a soft and slow bottom gives very similar HE and PE solutions, whereas the PE model is far from accurate for a hard and fast bottom. A first attempt to estimate the error is made by computing the relative deviation from the energy balance for the PE solution. This measure gives an indication of the magnitude of the error, but cannot be used as a strict error bound.
Magnetometer-Augmented IMU Simulator: In-Depth Elaboration
Brunner, Thomas; Lauffenburger, Jean-Philippe; Changey, Sébastien; Basset, Michel
2015-01-01
The location of objects is a growing research topic due, for instance, to the expansion of civil drones or intelligent vehicles. This expansion was made possible through the development of microelectromechanical systems (MEMS), inexpensive and miniaturized inertial sensors. In this context, this article describes the development of a new simulator which generates sensor measurements, giving a specific input trajectory. This will allow the comparison of pose estimation algorithms. To develop this simulator, the measurement equations of every type of sensor have to be analytically determined. To achieve this objective, classical kinematic equations are used for the more common sensors, i.e., accelerometers and rate gyroscopes. As nowadays, the MEMS inertial measurement units (IMUs) are generally magnetometer-augmented, an absolute world magnetic model is implemented. After the determination of the perfect measurement (through the error-free sensor models), realistic error models are developed to simulate real IMU behavior. Finally, the developed simulator is subjected to different validation tests. PMID:25746095
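As an illustration of what "realistic error models" typically means for a MEMS rate gyroscope, here is a generic sketch that corrupts an error-free signal with a turn-on bias, angle-random-walk noise and a slowly drifting bias. The specific terms and parameter values are assumptions for illustration; the simulator described above may model additional effects (scale factor, misalignment, quantisation) and uses its own parameterisation.

```python
import numpy as np

def corrupt_gyro(omega_true, dt, bias0=0.01, sigma_arw=0.005, sigma_rw=1e-4, rng=None):
    """Turn an error-free rate-gyro signal into a noisy one (generic sketch).

    omega_true : (N,) ideal angular rate from the kinematic trajectory [rad/s]
    dt         : sampling interval [s]
    The error model is a common generic one: constant turn-on bias, white noise
    (angle random walk) and a random-walk bias (bias instability).
    """
    if rng is None:
        rng = np.random.default_rng(1)
    n = len(omega_true)
    drift = np.cumsum(rng.normal(0.0, sigma_rw * np.sqrt(dt), n))   # slowly drifting bias
    white = rng.normal(0.0, sigma_arw / np.sqrt(dt), n)             # angle-random-walk noise
    return np.asarray(omega_true, dtype=float) + bias0 + drift + white
```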
Asynchronous discrete event schemes for PDEs
NASA Astrophysics Data System (ADS)
Stone, D.; Geiger, S.; Lord, G. J.
2017-08-01
A new class of asynchronous discrete-event simulation schemes for advection-diffusion-reaction equations is introduced, based on the principle of allowing quanta of mass to pass through faces of a (regular, structured) Cartesian finite volume grid. The timescales of these events are linked to the flux on the face. The resulting schemes are self-adaptive, and local in both time and space. Experiments are performed on realistic physical systems related to porous media flow applications, including a large 3D advection diffusion equation and advection diffusion reaction systems. The results are compared to highly accurate reference solutions where the temporal evolution is computed with exponential integrator schemes using the same finite volume discretisation. This allows a reliable estimation of the solution error. Our results indicate a first order convergence of the error as a control parameter is decreased, and we outline a framework for analysis.
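To give a feel for the idea, the following is a toy 1D event-driven diffusion solver in the spirit of the scheme: a fixed quantum of mass crosses a face when its event fires, and the waiting time on each face is quantum/|flux|, so active faces update often while quiescent ones are left alone. It is an illustrative sketch only (explicit, first order, no reactions), not the scheme or code used in the paper.

```python
import heapq
import numpy as np

def async_diffusion(m0, D, dx, t_end, quantum):
    """Asynchronous, event-driven 1D diffusion sketch (quantum should be small
    relative to the cell masses for the result to be meaningful)."""
    m = np.asarray(m0, dtype=float).copy()
    n = len(m)
    stamp = [0] * (n - 1)                       # invalidates stale events after a neighbour changes
    heap = []

    def flux(f):                                # Fickian flux across face f (cells f and f+1)
        return D * (m[f] - m[f + 1]) / dx

    def schedule(f, t_now):
        stamp[f] += 1
        q = flux(f)
        if abs(q) > 1e-14:
            heapq.heappush(heap, (t_now + quantum / abs(q), f, stamp[f]))

    for f in range(n - 1):
        schedule(f, 0.0)

    while heap:
        t, f, s = heapq.heappop(heap)
        if t > t_end:
            break                               # all remaining events lie beyond the horizon
        if s != stamp[f]:
            continue                            # event made stale by a neighbouring update
        sign = 1.0 if flux(f) > 0 else -1.0
        m[f] -= sign * quantum                  # move one quantum of mass across the face
        m[f + 1] += sign * quantum
        for g in (f - 1, f, f + 1):             # only the affected faces are rescheduled
            if 0 <= g < n - 1:
                schedule(g, t)
    return m
```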
Sampling errors in the measurement of rain and hail parameters
NASA Technical Reports Server (NTRS)
Gertzman, H. S.; Atlas, D.
1977-01-01
Attention is given to a general derivation of the fractional standard deviation (FSD) of any integrated property X such that X(D) = cD^n. This work extends that of Joss and Waldvogel (1969). The equation is applicable to measuring integrated properties of cloud, rain or hail populations (such as water content, precipitation rate, kinetic energy, or radar reflectivity) which are subject to statistical sampling errors due to the Poisson distributed fluctuations of particles sampled in each particle size interval and the weighted sum of the associated variances in proportion to their contribution to the integral parameter to be measured. Universal curves are presented which are applicable to the exponential size distribution permitting FSD estimation of any parameters from n = 0 to n = 6. The equations and curves also permit corrections for finite upper limits in the size spectrum and a realistic fall speed law.
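As a rough check on the scaling involved, the snippet below evaluates the Poisson-sampling FSD for an untruncated exponential size distribution, for which the moment ratios reduce to gamma functions. It ignores the finite upper size limit and fall-speed corrections treated in the paper, so it should be read as a back-of-the-envelope companion to the universal curves, not a reproduction of them.

```python
from math import gamma, sqrt

def fsd_exponential(n, n_total):
    """FSD of X = sum_i c*D_i**n for Poisson-sampled particles.

    Assumes an untruncated exponential size distribution, for which
    FSD = sqrt(Gamma(2n+1)) / Gamma(n+1) / sqrt(N_T),
    where N_T is the expected number of particles in the sample volume.
    """
    return sqrt(gamma(2 * n + 1)) / gamma(n + 1) / sqrt(n_total)

# Example: a reflectivity-like moment (n = 6) from 1000 sampled drops gives
# fsd_exponential(6, 1000) ≈ 0.96, i.e. high-order moments are very poorly sampled,
# while the concentration itself (n = 0) has FSD = 1/sqrt(1000) ≈ 0.03.
```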
Raghunathan, Srinivasan; Patil, Sanjaykumar; Baxter, Eric J.; ...
2017-08-25
We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.
Liu, Xuenan; Yang, Xuezhi; Jin, Jing; Li, Jiangshan
2018-06-05
Recent research indicates that facial epidermis color varies with the rhythm of heart beats. This variation can be captured by consumer-level cameras and, astonishingly, be used to estimate heart rate (HR). Although numerous methods have been proposed in the last few years, the estimated HR is still not as precise as required in practical environments where illumination interference, facial expressions, or motion artifacts are involved. A novel algorithm is proposed to make the non-contact HR estimation technique more robust. First, the face of the subject is detected and tracked to follow the head movement. The facial region is then divided into several blocks, and the chrominance feature of each block is extracted to establish a raw HR sub-signal. Self-adaptive signals separation (SASS) is performed to separate the noiseless HR sub-signals from the raw sub-signals. On that basis, the noiseless sub-signals full of HR information are selected using a weight-based scheme to establish the holistic HR signal, from which the average HR is computed using wavelet transform and data filtering. Forty subjects took part in our experiments; their facial videos were recorded by a normal webcam with a frame rate of 30 fps under ambient lighting conditions. The average HR estimated by our method correlates strongly with ground truth measurements, as indicated in the experimental results for the static scenario (Pearson's correlation r=0.980) and the dynamic scenario (Pearson's correlation r=0.897). Compared to the newest existing method, our method decreases the error rate by 38.63% and increases the Pearson's correlation by 15.59%, indicating that it clearly outperforms state-of-the-art non-contact HR estimation methods in realistic environments. © 2018 Institute of Physics and Engineering in Medicine.
NASA Astrophysics Data System (ADS)
Dehaes, Mathieu; Grant, P. Ellen; Sliva, Danielle D.; Roche-Labarbe, Nadège; Pienaar, Rudolph; Boas, David A.; Franceschini, Maria Angela; Selb, Juliette
2011-03-01
NIRS is safe, non-invasive and offers the possibility to record local hemodynamic parameters at the bedside, avoiding the transportation of neonates and critically ill patients. In this work, we evaluate the accuracy of the frequency-domain multi-distance (FD-MD) method to retrieve brain optical properties from neonate to adult. Realistic measurements are simulated using 3D Monte Carlo modeling of light propagation. Eight different ages were investigated: a term newborn of 38 weeks gestational age, two infants of 6 and 12 months of age, a toddler of 2 years (yr.) of age, two children of 5 and 10 years of age, a teenager of 14 yr. of age, and an adult. Measurements are generated at multiple distances on the right parietal area of the head models and fitted to a homogeneous FD-MD model to estimate the brain optical properties. In the newborn, infant, toddler and 5 yr. old child models, the error was dominated by the head curvature, while in the 10 yr. old child, teenager and adult heads it was dominated by the superficial layer. The influence of the CSF is also evaluated; in this case, the absorption coefficients suffer from an additional error. In all cases, measurements at 5 mm provided worse estimates because of the limits of the diffusion approximation at such short source-detector distances.
NASA Astrophysics Data System (ADS)
Zhang, Liangjing; Dahle, Christoph; Neumayer, Karl-Hans; Dobslaw, Henryk; Flechtner, Frank; Thomas, Maik
2016-04-01
Terrestrial water storage (TWS) variations obtained from GRACE play an increasingly important role in various hydrological and hydro-meteorological applications. Since monthly-mean gravity fields are contaminated by errors caused by a number of sources with distinct spatial correlation structures, filtering is needed to remove in particular high frequency noise. Subsequently, bias and leakage caused by the filtering need to be corrected before the final results are interpreted as GRACE-based observations of TWS. Knowledge about the reliability and performance of different post-processing methods is highly important for the GRACE users. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-like gravity field time-series based on realistic orbits and instrument error assumptions as well as background error assumptions out of the updated ESA Earth System Model. Two non-isotropic filter methods from Kusche (2007) and Swenson and Wahr (2006) are tested. Rescaling factors estimated from five different hydrological models and the ensemble median are applied to the post-processed simulated GRACE-like TWS estimates to correct the bias and leakage. Since TWS anomalies out of the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment and will subsequently recommend a processing strategy that shall also be applied to planned GRACE and GRACE-FO Level-3 products for hydrological applications provided by GFZ. Kusche, J. (2007): Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81 (11), 733-749, doi:10.1007/s00190-007-0143-3. Swenson, S. and Wahr, J. (2006): Post-processing removal of correlated errors in GRACE data. Geophysical Research Letters, 33(8):L08402.
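One of the rescaling recipes alluded to above can be summarised in a few lines: the same filter is applied to a hydrological-model field, and the gain that best maps the filtered model back onto the unfiltered one is then used to rescale the filtered GRACE estimate. The sketch below shows that least-squares gain for a single grid cell or basin; it is illustrative only and glosses over how the five models and the ensemble median are combined in the study.

```python
import numpy as np

def leakage_scale_factor(model_tws, model_tws_filtered):
    """Least-squares gain factor used to undo filter-induced bias and leakage.

    Both inputs are time series (n_months,) of TWS anomalies at one location:
    the raw hydrological-model value and the same field after applying the
    GRACE filter. The factor k minimising sum((model - k*filtered)**2) is then
    multiplied onto the filtered GRACE estimate at that location.
    """
    x = np.asarray(model_tws_filtered, dtype=float)
    y = np.asarray(model_tws, dtype=float)
    return float(np.dot(x, y) / np.dot(x, x))
```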
Bayesian models for comparative analysis integrating phylogenetic uncertainty.
de Villemereuil, Pierre; Wells, Jessie A; Edwards, Robert D; Blomberg, Simon P
2012-06-28
Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for phylogenetic comparative analyses, particularly for modelling in the face of phylogenetic uncertainty and accounting for measurement error or individual variation in explanatory variables. Code for all models is provided in the BUGS model description language.
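A stripped-down way to see how the tree-uncertainty part works is sketched below: instead of the full BUGS/JAGS models of the paper (which also treat measurement error), a GLS/PGLS regression is simply repeated over a posterior sample of trees and the estimates are pooled. The covariance matrices, design matrix and response are placeholders supplied by the user.

```python
import numpy as np

def pgls_over_trees(y, X, cov_matrices):
    """Propagate phylogenetic uncertainty by looping over a posterior sample of trees.

    y            : (n,) trait values (response)
    X            : (n, k) design matrix, including an intercept column
    cov_matrices : iterable of (n, n) phylogenetic covariance matrices, one per
                   tree drawn from a Bayesian tree-estimation program.
    Returns the pooled mean coefficients and 95% percentile intervals; their spread
    reflects both sampling error and tree uncertainty.
    """
    betas = []
    for C in cov_matrices:
        Ci = np.linalg.inv(C)
        beta = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)   # GLS estimate for this tree
        betas.append(beta)
    betas = np.array(betas)
    return betas.mean(axis=0), np.percentile(betas, [2.5, 97.5], axis=0)
```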
A demonstration of position angle-only weak lensing shear estimators on the GREAT3 simulations
NASA Astrophysics Data System (ADS)
Whittaker, Lee; Brown, Michael L.; Battye, Richard A.
2015-12-01
We develop and apply the position angle-only shear estimator of Whittaker, Brown & Battye to realistic galaxy images. This is done by demonstrating the method on the simulations of the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, which include contributions from anisotropic point spread functions (PSFs). We measure the position angles of the galaxies using three distinct methods - the integrated light method, quadrupole moments of surface brightness, and using model-based ellipticity measurements provided by IM3SHAPE. A weighting scheme is adopted to address biases in the position angle measurements which arise in the presence of an anisotropic PSF. Biases on the shear estimates, due to measurement errors on the position angles and correlations between the measurement errors and the true position angles, are corrected for using simulated galaxy images and an iterative procedure. The properties of the simulations are estimated using the deep field images provided as part of the challenge. A method is developed to match the distributions of galaxy fluxes and half-light radii from the deep fields to the corresponding distributions in the field of interest. We recover angle-only shear estimates with a performance close to current well-established model and moments-based methods for all three angle measurement techniques. The Q-values for all three methods are found to be Q ˜ 400. The code is freely available online at http://www.jb.man.ac.uk/mbrown/angle_only_shear/.
NASA Technical Reports Server (NTRS)
Wang, James S.; Kawa, S. Randolph; Eluszkiewicz, Janusz; Collatz, G. J.; Mountain, Marikate; Henderson, John; Nehrkorn, Thomas; Aschbrenner, Ryan; Zaccheo, T. Scott
2012-01-01
Knowledge of the spatiotemporal variations in emissions and uptake of CO2 is hampered by sparse measurements. The recent advent of satellite measurements of CO2 concentrations is increasing the density of measurements, and the future mission ASCENDS (Active Sensing of CO2 Emissions over Nights, Days and Seasons) will provide even greater coverage and precision. Lagrangian atmospheric transport models run backward in time can quantify surface influences ("footprints") of diverse measurement platforms and are particularly well suited for inverse estimation of regional surface CO2 fluxes at high resolution based on satellite observations. We utilize the STILT Lagrangian particle dispersion model, driven by WRF meteorological fields at 40-km resolution, in a Bayesian synthesis inversion approach to quantify the ability of ASCENDS column CO2 observations to constrain fluxes at high resolution. This study focuses on land-based biospheric fluxes, whose uncertainties are especially large, in a domain encompassing North America. We present results based on realistic input fields for 2007. Pseudo-observation random errors are estimated from backscatter and optical depth measured by the CALIPSO satellite. We estimate a priori flux uncertainties based on output from the CASA-GFED (v.3) biosphere model and make simple assumptions about spatial and temporal error correlations. WRF-STILT footprints are convolved with candidate vertical weighting functions for ASCENDS. We find that at a horizontal flux resolution of 1 degree x 1 degree, ASCENDS observations are potentially able to reduce average weekly flux uncertainties by 0-8% in July, and 0-0.5% in January (assuming an error of 0.5 ppm at the Railroad Valley reference site). Aggregated to coarser resolutions, e.g. 5 degrees x 5 degrees, the uncertainty reductions are larger and more similar to those estimated in previous satellite data observing system simulation experiments.
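The "uncertainty reduction" figures quoted above come from standard Bayesian synthesis inversion algebra, sketched below for a small dense problem; the real calculation uses WRF-STILT footprints to build the Jacobian and handles far larger, structured covariance matrices. All inputs in the sketch are placeholders.

```python
import numpy as np

def uncertainty_reduction(H, R_diag, B):
    """Posterior flux uncertainty and fractional uncertainty reduction.

    Standard Bayesian synthesis inversion algebra:
        A_post = (H^T R^-1 H + B^-1)^-1
        UR_i   = 1 - sqrt(A_post[i, i] / B[i, i])
    H is the Jacobian mapping fluxes to observations (built from the transport
    footprints), R_diag the observation error variances, B the prior flux error
    covariance.
    """
    Rinv = np.diag(1.0 / np.asarray(R_diag, dtype=float))
    A_post = np.linalg.inv(H.T @ Rinv @ H + np.linalg.inv(B))
    return 1.0 - np.sqrt(np.diag(A_post) / np.diag(B))
```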
Unmanned aircraft system sense and avoid integrity and continuity
NASA Astrophysics Data System (ADS)
Jamoom, Michael B.
This thesis describes new methods to guarantee safety of sense and avoid (SAA) functions for Unmanned Aircraft Systems (UAS) by evaluating integrity and continuity risks. Previous SAA efforts focused on relative safety metrics, such as risk ratios, comparing the risk of using an SAA system versus not using it. The methods in this thesis evaluate integrity and continuity risks as absolute measures of safety, as is the established practice in commercial aircraft terminal area navigation applications. The main contribution of this thesis is a derivation of a new method, based on a standard intruder relative constant velocity assumption, that uses hazard state estimates and estimate error covariances to establish (1) the integrity risk of the SAA system not detecting imminent loss of "well clear," which is the time and distance required to maintain safe separation from intruder aircraft, and (2) the probability of false alert, the continuity risk. Another contribution is applying these integrity and continuity risk evaluation methods to set quantifiable and certifiable safety requirements on sensors. A sensitivity analysis uses this methodology to evaluate the impact of sensor errors on integrity and continuity risks. The penultimate contribution is an integrity and continuity risk evaluation where the estimation model is refined to address realistic intruder relative linear accelerations, which goes beyond the current constant velocity standard. The final contribution is an integrity and continuity risk evaluation addressing multiple intruders. This evaluation is a new innovation-based method to determine the risk of mis-associating intruder measurements. A mis-association occurs when the SAA system incorrectly associates a measurement to the wrong intruder, causing large errors in the estimated intruder trajectories. The new methods described in this thesis can help ensure safe encounters between aircraft and enable SAA sensor certification for UAS integration into the National Airspace System.
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Walker, Gregory K.; Mahanama, Sarith P.; Reichle, Rolf H.
2013-01-01
Offline simulations over the conterminous United States (CONUS) with a land surface model are used to address two issues relevant to the forecasting of large-scale seasonal streamflow: (i) the extent to which errors in soil moisture initialization degrade streamflow forecasts, and (ii) the extent to which a realistic increase in the spatial resolution of forecasted precipitation would improve streamflow forecasts. The addition of error to a soil moisture initialization field is found to lead to a nearly proportional reduction in streamflow forecast skill. The linearity of the response allows the determination of a lower bound for the increase in streamflow forecast skill achievable through improved soil moisture estimation, e.g., through satellite-based soil moisture measurements. An increase in the resolution of precipitation is found to have an impact on large-scale streamflow forecasts only when evaporation variance is significant relative to the precipitation variance. This condition is met only in the western half of the CONUS domain. Taken together, the two studies demonstrate the utility of a continental-scale land surface modeling system as a tool for addressing the science of hydrological prediction.
What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?
ERIC Educational Resources Information Center
Schochet, Peter Z.; Chiang, Hanley S.
2013-01-01
This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…
Hong, Sehee; Kim, Soyoung
2018-01-01
There are basically two modeling approaches applicable to analyzing an actor-partner interdependence model: multilevel modeling (the hierarchical linear model) and structural equation modeling. This article explains how to use these two models in analyzing an actor-partner interdependence model and how the two approaches differ. As an empirical example, marital conflict data were used to analyze an actor-partner interdependence model. Multilevel modeling and structural equation modeling produced virtually identical estimates for a basic model. However, the structural equation modeling approach allowed more realistic assumptions about measurement errors and factor loadings, yielding better model fit indices.
The effects of non-stationary noise on electromagnetic response estimates
NASA Astrophysics Data System (ADS)
Banks, R. J.
1998-11-01
The noise in natural electromagnetic time series is typically non-stationary. Sections of data with high magnetic noise levels bias impedances and generate unreliable error estimates. Sections containing noise that is coherent between electric and magnetic channels also produce inappropriate impedances and errors. The answer is to compute response values for data sections which are as short as is feasible, i.e. which are compatible both with the chosen bandwidth and with the need to over-determine the least-squares estimation of the impedance and coherence. Only those values that are reliable are selected, and the best single measure of the reliability of Earth impedance estimates is their temporal invariance, which is tested by the coherence between the measured and predicted electric fields. Complex demodulation is the method used here to explore the temporal structure of electromagnetic fields in the period range 20-6000 s. For periods above 300 s, noisy sections are readily identified in time series of impedance values. The corresponding estimates deviate strongly from the normal value, are biased towards low impedance values, and are associated with low coherences. Plots of the impedance against coherence are particularly valuable diagnostic aids. For periods below 300 s, impedance bias increases systematically as the coherence falls, identifying input channel noise as the cause. By selecting sections with high coherence (equivalent to the impedance being invariant over the section) unbiased impedances and realistic errors can be determined. The scatter in impedance values among high-coherence sections is due to noise that is coherent between input and output channels, implying the presence of two or more systems for which a consistent response can be defined. Where the Earth and noise responses are significantly different, it may be possible to improve estimates of the former by rejecting sections that do not generate satisfactory values for all the response elements.
An assessment of the realism of digital human manikins used for simulation in ergonomics.
Nérot, Agathe; Skalli, Wafa; Wang, Xuguang
2015-01-01
In this study, the accuracy of the joint centres of the manikins generated by RAMSIS and Human Builder (HB), two digital human modelling (DHM) systems widely used in industry for virtual ergonomics simulation, was investigated. Eighteen variously sized females and males were generated from external anthropometric dimensions, and six joint centres (knee, hip and four spine joints) were compared with their anatomic locations obtained from the three-dimensional reconstructed bones from a low-dose X-ray system. Both RAMSIS and HB could correctly reproduce external anthropometric dimensions, while the estimation of internal joint centre locations showed an average error of 27.6 mm for HB and 38.3 mm for RAMSIS. Differences between the two manikins showed that a more realistic kinematic linkage led to better accuracy in joint location. This study opens the way to further research on the relationship between the external body geometry and the internal skeleton in order to improve the realism of the internal skeleton of DHMs, especially for biomechanical analyses requiring joint load and muscle force estimation. This study assessed two digital human modelling (DHM) systems widely used in industry for virtual ergonomics. Results support the need for more realistic human modelling, especially for biomechanical analysis, and for the standardisation of DHMs.
NASA Astrophysics Data System (ADS)
Gao, Bin; Liu, Wanyu; Wang, Liang; Liu, Zhengjun; Croisille, Pierre; Delachartre, Philippe; Clarysse, Patrick
2016-12-01
Cine-MRI is widely used for the analysis of cardiac function in clinical routine, because of its high soft tissue contrast and relatively short acquisition time in comparison with other cardiac MRI techniques. The gray level distribution in cardiac cine-MRI is relatively homogenous within the myocardium, and can therefore make motion quantification difficult. To ensure that the motion estimation problem is well posed, more image features have to be considered. This work is inspired by a method previously developed for color image processing. The monogenic signal provides a framework to estimate the local phase, orientation, and amplitude, of an image, three features which locally characterize the 2D intensity profile. The independent monogenic features are combined into a 3D matrix for motion estimation. To improve motion estimation accuracy, we chose the zero-mean normalized cross-correlation as a matching measure, and implemented a bilateral filter for denoising and edge-preservation. The monogenic features distance is used in lieu of the color space distance in the bilateral filter. Results obtained from four realistic simulated sequences outperformed two other state of the art methods even in the presence of noise. The motion estimation errors (end point error) using our proposed method were reduced by about 20% in comparison with those obtained by the other tested methods. The new methodology was evaluated on four clinical sequences from patients presenting with cardiac motion dysfunctions and one healthy volunteer. The derived strain fields were analyzed favorably in their ability to identify myocardial regions with impaired motion.
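For readers unfamiliar with the monogenic signal, the three features mentioned above (local amplitude, phase and orientation) can be computed from two Riesz-transform components obtained in the Fourier domain. The sketch below is a generic NumPy illustration under the assumption that the image has already been band-passed; it is not the authors' implementation and omits the bilateral filtering and block-matching steps.

```python
import numpy as np

def monogenic_features(img):
    """Local amplitude, phase and orientation from the monogenic signal (sketch).

    The Riesz transform is applied in the Fourier domain with transfer functions
    -i*u/|w| and -i*v/|w|; in practice the image should first be band-passed
    (e.g. with a log-Gabor filter), which is omitted here for brevity.
    """
    f = np.asarray(img, dtype=float)
    u = np.fft.fftfreq(f.shape[0])[:, None]
    v = np.fft.fftfreq(f.shape[1])[None, :]
    w = np.sqrt(u ** 2 + v ** 2)
    w[0, 0] = 1.0                                  # avoid division by zero at DC
    F = np.fft.fft2(f)
    r1 = np.real(np.fft.ifft2(-1j * u / w * F))    # first Riesz component
    r2 = np.real(np.fft.ifft2(-1j * v / w * F))    # second Riesz component
    amplitude = np.sqrt(f ** 2 + r1 ** 2 + r2 ** 2)
    phase = np.arctan2(np.sqrt(r1 ** 2 + r2 ** 2), f)
    orientation = np.arctan2(r2, r1)
    return amplitude, phase, orientation
```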
Wu, Allison Chia-Yi; Rifkin, Scott A
2015-03-27
Recent techniques for tagging and visualizing single molecules in fixed or living organisms and cell lines have been revolutionizing our understanding of the spatial and temporal dynamics of fundamental biological processes. However, fluorescence microscopy images are often noisy, and it can be difficult to distinguish a fluorescently labeled single molecule from background speckle. We present a computational pipeline to distinguish the true signal of fluorescently labeled molecules from background fluorescence and noise. We test our technique using the challenging case of wide-field, epifluorescence microscope image stacks from single molecule fluorescence in situ experiments on nematode embryos where there can be substantial out-of-focus light and structured noise. The software recognizes and classifies individual mRNA spots by measuring several features of local intensity maxima and classifying them with a supervised random forest classifier. A key innovation of this software is that, by estimating the probability that each local maximum is a true spot in a statistically principled way, it makes it possible to estimate the error introduced by image classification. This can be used to assess the quality of the data and to estimate a confidence interval for the molecule count estimate, all of which are important for quantitative interpretations of the results of single-molecule experiments. The software classifies spots in these images well, with >95% AUROC on realistic artificial data and outperforms other commonly used techniques on challenging real data. Its interval estimates provide a unique measure of the quality of an image and confidence in the classification.
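The count-with-confidence idea can be illustrated with a few lines of scikit-learn: classify each candidate maximum, sum the class-1 probabilities to get an expected spot count, and (treating candidates as independent Bernoulli trials, which is an assumption) derive an interval from the summed binomial variances. The feature definitions and classifier settings are placeholders, not the published pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def count_spots_with_ci(features_train, labels_train, features_test, z=1.96):
    """Estimate the mRNA spot count and a rough confidence interval (illustrative).

    Each row of the feature matrices describes one local intensity maximum
    (e.g. height, width, curvature of the fitted peak); labels_train holds 0/1
    ground-truth classes from curated training images.
    """
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(features_train, labels_train)
    p = clf.predict_proba(features_test)[:, 1]     # probability each candidate is a real spot
    expected = p.sum()                             # expected number of true spots
    half_width = z * np.sqrt(np.sum(p * (1.0 - p)))  # independence assumption
    return expected, (expected - half_width, expected + half_width)
```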
Lopes, Antonio Augusto; dos Anjos Miranda, Rogério; Gonçalves, Rilvani Cavalcante; Thomaz, Ana Maria
2009-01-01
BACKGROUND: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. OBJECTIVE: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. MATERIALS AND METHODS: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption, and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. RESULTS: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. CONCLUSION: The organized matrix allows replicate parameter estimates to be obtained rapidly, without the errors associated with exhaustive manual calculation. PMID:19641642
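A minimal sketch of the underlying calculation, under simplifying assumptions (dissolved oxygen neglected, saturations as fractions, placeholder constants rather than the authors' prediction equations): each predicted oxygen consumption value gives one replicate pair of systemic and pulmonary flows by the indirect Fick principle, and the replicates define a likely range.

def fick_flow_ranges(vo2_predictions_ml_min, hb_g_dl, sat_ao, sat_mv, sat_pv, sat_pa):
    # Oxygen capacity of fully saturated blood, mL O2 per litre (1.36 mL O2 per g Hb).
    o2_capacity = 1.36 * hb_g_dl * 10.0
    qs = [vo2 / (o2_capacity * (sat_ao - sat_mv)) for vo2 in vo2_predictions_ml_min]  # systemic flow, L/min
    qp = [vo2 / (o2_capacity * (sat_pv - sat_pa)) for vo2 in vo2_predictions_ml_min]  # pulmonary flow, L/min
    return (min(qs), max(qs)), (min(qp), max(qp))       # likely ranges across predictions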
AGN neutrino flux estimates for a realistic hybrid model
NASA Astrophysics Data System (ADS)
Richter, S.; Spanier, F.
2018-07-01
Recent reports of possible correlations between high energy neutrinos observed by IceCube and Active Galactic Nuclei (AGN) activity sparked a burst of publications that attempt to predict the neutrino flux of these sources. However, often rather crude estimates are used to derive the neutrino rate from the observed photon spectra. In this work, neutrino fluxes were computed over a wide parameter space. The starting point of the model was a representation of the full spectral energy density (SED) of 3C 279. The time-dependent hybrid model that was used for this study takes into account the full pγ reaction chain as well as proton synchrotron, electron-positron-pair cascades and the full SSC scheme. We compare our results to estimates frequently used in the literature. This allows us to identify regions of the parameter space in which such estimates remain valid and those in which they can produce significant errors. Furthermore, if estimates for the Doppler factor, magnetic field, proton and electron densities of a source exist, the expected IceCube detection rate is readily available.
Ultrahigh Error Threshold for Surface Codes with Biased Noise
NASA Astrophysics Data System (ADS)
Tuckett, David K.; Bartlett, Stephen D.; Flammia, Steven T.
2018-02-01
We show that a simple modification of the surface code can exhibit an enormous gain in the error correction threshold for a noise model in which Pauli Z errors occur more frequently than X or Y errors. Such biased noise, where dephasing dominates, is ubiquitous in many quantum architectures. In the limit of pure dephasing noise we find a threshold of 43.7(1)% using a tensor network decoder proposed by Bravyi, Suchara, and Vargo. The threshold remains surprisingly large in the regime of realistic noise bias ratios, for example 28.2(2)% at a bias of 10. The performance is, in fact, at or near the hashing bound for all values of the bias. The modified surface code still uses only weight-4 stabilizers on a square lattice, but merely requires measuring products of Y instead of Z around the faces, as this doubles the number of useful syndrome bits associated with the dominant Z errors. Our results demonstrate that large efficiency gains can be found by appropriately tailoring codes and decoders to realistic noise models, even under the locality constraints of topological codes.
A Framework for Estimating Stratospheric Wind Speeds from Infrasound Noise
NASA Astrophysics Data System (ADS)
Arrowsmith, S.; Marcillo, O. E.
2012-12-01
We present a methodology for infrasonic remote sensing of winds in the stratosphere that does not require discrete ground-truth events. Our method uses measured time delays between arrays of sensors to provide group velocities and then minimizes the difference between observed and predicted group velocities. Because we focus on inter-array propagation effects, it is not necessary to simulate the full propagation path from source to receiver. This feature allows us to use a relatively simple forward model that is applicable over short-regional distances. By focusing on stratospheric returns, we show that our nonlinear inversion scheme converges much better if the starting model contains a strong stratospheric duct. Using the HWM/MSISE model, we demonstrate that the inversion scheme is robust to large uncertainties in backazimuth, but that uncertainties in the measured trace velocity and group velocity should be controlled through the addition of adjoint constraints. Using realistic estimates of measurement error, our results show that the inversion scheme will nevertheless improve upon a starting model under most scenarios for the 9-array Utah infrasound network. Future research should investigate the effects of model error associated with these measurements.
NASA Astrophysics Data System (ADS)
Taylor, Thomas E.; L'Ecuyer, Tristan; Slusser, James; Stephens, Graeme; Krotkov, Nick; Davis, John; Goering, Christian
2005-08-01
Extensive sensitivity and error characteristics of a recently developed optimal estimation retrieval algorithm which simultaneously determines aerosol optical depth (AOD), aerosol single scatter albedo (SSA) and total ozone column (TOC) from ultra-violet irradiances are described. The algorithm inverts measured diffuse and direct irradiances at 7 channels in the UV spectral range obtained from the United States Department of Agriculture's (USDA) UV-B Monitoring and Research Program's (UVMRP) network of 33 ground-based UV-MFRSR instruments to produce aerosol optical properties and TOC at all seven wavelengths. Sensitivity studies of the Tropospheric Ultra-violet/Visible (TUV) radiative transfer model performed for various operating modes (Delta-Eddington versus n-stream Discrete Ordinate) over domains of AOD, SSA, TOC, asymmetry parameter and surface albedo show that the solutions are well constrained. Realistic input error budgets and diagnostic and error outputs from the retrieval are analyzed to demonstrate the atmospheric conditions under which the retrieval provides useful and significant results. After optimizing the algorithm for the USDA site in Panther Junction, Texas, the retrieval algorithm was run on a cloud-screened set of irradiance measurements for the month of May 2003. Comparisons to independently derived AODs are favorable, with root mean square (RMS) differences of about 3% to 7% at 300 nm and less than 1% at 368 nm, on May 12 and 22, 2003. This retrieval method will be used to build an aerosol climatology and provide ground-truthing of satellite measurements by running it operationally on the USDA UV network database.
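For readers unfamiliar with optimal estimation, the sketch below shows the generic linear update step (posterior state, posterior covariance, and averaging kernel) that such retrievals are built on; the actual state vector, forward model, and covariances of the algorithm described above are not reproduced here.

import numpy as np

def oe_update(xa, Sa, K, Se, residual):
    # xa: prior state; Sa: prior covariance; K: Jacobian of the forward model;
    # Se: measurement-error covariance; residual: y - F(xa).
    Sa_inv = np.linalg.inv(Sa)
    Se_inv = np.linalg.inv(Se)
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)   # posterior covariance
    x_hat = xa + S_hat @ K.T @ Se_inv @ residual       # posterior state
    A = S_hat @ K.T @ Se_inv @ K                       # averaging kernel (sensitivity)
    return x_hat, S_hat, A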
NASA Technical Reports Server (NTRS)
da Silva, Arlindo
2010-01-01
A challenge common to many constituent data assimilation applications is the fact that one observes only a small fraction of the phase space that one wishes to estimate. For example, remotely sensed estimates of the column average concentrations are available, while one is faced with the problem of estimating 3D concentrations for initializing a prognostic model. This problem is exacerbated in the case of aerosols because the observable Aerosol Optical Depth (AOD) is not only a column integrated quantity, but it also sums over a large number of species (dust, sea salt, carbonaceous and sulfate aerosols). An aerosol transport model, when driven by high-resolution, state-of-the-art analysis of meteorological fields and realistic emissions, can produce skillful forecasts even when no aerosol data is assimilated. The main task of aerosol data assimilation is to address the bias arising from inaccurate emissions, and Lagrangian misplacement of plumes induced by errors in the driving meteorological fields. As long as one decouples the meteorological and aerosol assimilation as we do here, the classic baroclinic growth of error is no longer the main order of business. We will describe an aerosol data assimilation scheme in which the analysis update step is conducted in observation space, using an adaptive maximum-likelihood scheme for estimating background errors in AOD space. This scheme includes explicit sequential bias estimation as in Dee and da Silva. Unlike existing aerosol data assimilation schemes, we do not obtain analysis increments of the 3D concentrations by scaling the background profiles. Instead, we exploit the Lagrangian characteristics of the problem to generate local displacement ensembles. These high-resolution, state-dependent ensembles are then used to parameterize the background errors and generate 3D aerosol increments. The algorithm is computationally tractable, running globally at a resolution of 1/4 degree. We will present results of assimilating AOD retrievals from MODIS (on both the Aqua and TERRA satellites), with AERONET data used for validation. The impact on GEOS-5 aerosol forecasting will be fully documented.
Babiloni, F; Babiloni, C; Carducci, F; Fattorini, L; Onorati, P; Urbano, A
1996-04-01
This paper presents a realistic Laplacian (RL) estimator based on a tensorial formulation of the surface Laplacian (SL) that uses the 2-D thin plate spline function to obtain a mathematical description of a realistic scalp surface. Because of this tensorial formulation, the RL does not need an orthogonal reference frame placed on the realistic scalp surface. In simulation experiments the RL was estimated with an increasing number of "electrodes" (up to 256) on a mathematical scalp model, the analytic Laplacian being used as a reference. Second and third order spherical spline Laplacian estimates were examined for comparison. Noise of increasing magnitude and spatial frequency was added to the simulated potential distributions. Movement-related potentials and somatosensory evoked potentials sampled with 128 electrodes were used to estimate the RL on a realistically shaped, MR-constructed model of the subject's scalp surface. The RL was also estimated on a mathematical spherical scalp model computed from the real scalp surface. Simulation experiments showed that the performances of the RL estimator were similar to those of the second and third order spherical spline Laplacians. Furthermore, the information content of scalp-recorded potentials was clearly better when the RL estimator computed the SL of the potential on an MR-constructed scalp surface model.
Visual difference metric for realistic image synthesis
NASA Astrophysics Data System (ADS)
Bolin, Mark R.; Meyer, Gary W.
1999-05-01
An accurate and efficient model of human perception has been developed to control the placement of samples in a realistic image synthesis algorithm. Previous sampling techniques have sought to spread the error equally across the image plane. However, this approach neglects the fact that the renderings are intended to be displayed for a human observer. The human visual system has a varying sensitivity to error that is based upon the viewing context. This means that equivalent optical discrepancies can be very obvious in one situation and imperceptible in another. It is ultimately the perceptibility of this error that governs image quality and should be used as the basis of a sampling algorithm. This paper focuses on a simplified version of the Lubin Visual Discrimination Metric (VDM) that was developed for insertion into an image synthesis algorithm. The sampling VDM makes use of a Haar wavelet basis for the cortical transform and a less severe spatial pooling operation. The model was extended to color, including the effects of chromatic aberration. Comparisons are made between the execution time and visual difference map for the original Lubin and simplified visual difference metrics. Results for the realistic image synthesis algorithm are also presented.
A statistical approach to quasi-extinction forecasting.
Holmes, Elizabeth Eli; Sabo, John L; Viscido, Steven Vincent; Fagan, William Fredric
2007-12-01
Forecasting population decline to a certain critical threshold (the quasi-extinction risk) is one of the central objectives of population viability analysis (PVA), and such predictions figure prominently in the decisions of major conservation organizations. In this paper, we argue that accurate forecasting of a population's quasi-extinction risk does not necessarily require knowledge of the underlying biological mechanisms. Because of the stochastic and multiplicative nature of population growth, the ensemble behaviour of population trajectories converges to common statistical forms across a wide variety of stochastic population processes. This paper provides a theoretical basis for this argument. We show that the quasi-extinction surfaces of a variety of complex stochastic population processes (including age-structured, density-dependent and spatially structured populations) can be modelled by a simple stochastic approximation: the stochastic exponential growth process overlaid with Gaussian errors. Using simulated and real data, we show that this model can be estimated with 20-30 years of data and can provide relatively unbiased quasi-extinction risk estimates with confidence intervals considerably smaller than (0,1). This was found to be true even for simulated data derived from some of the noisiest population processes (density-dependent feedback, species interactions and strong age-structure cycling). A key advantage of statistical models is that their parameters and the uncertainty of those parameters can be estimated from time series data using standard statistical methods. In contrast, for most species of conservation concern, biologically realistic models must often be specified rather than estimated because of the limited data available for the various parameters. Biologically realistic models will always have a prominent place in PVA for evaluating specific management options which affect a single segment of a population, a single demographic rate, or different geographic areas. However, for forecasting quasi-extinction risk, statistical models that are based on the convergent statistical properties of population processes offer many advantages over biologically realistic models.
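A minimal sketch of the diffusion-approximation idea (a generic illustration, not the authors' exact estimator): fit the stochastic exponential growth model to a census time series and estimate the quasi-extinction probability by simulation.

import numpy as np

def quasi_extinction_risk(counts, threshold, horizon_years, n_sims=10000, seed=0):
    rng = np.random.default_rng(seed)
    log_n = np.log(np.asarray(counts, float))
    diffs = np.diff(log_n)
    mu, sigma = diffs.mean(), diffs.std(ddof=1)        # drift and process noise of log abundance
    steps = rng.normal(mu, sigma, size=(n_sims, horizon_years))
    paths = log_n[-1] + np.cumsum(steps, axis=1)       # simulated log-abundance trajectories
    hit = paths.min(axis=1) <= np.log(threshold)       # crossed the quasi-extinction threshold?
    return mu, sigma, hit.mean()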
Temporal rainfall estimation using input data reduction and model inversion
NASA Astrophysics Data System (ADS)
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
Floods are devastating natural hazards. To provide accurate, precise and timely flood forecasts, there is a need to understand the uncertainties associated with temporal rainfall and model parameters. The estimation of temporal rainfall and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows the uncertainty of rainfall input to be considered when estimating model parameters, and provides the ability to estimate rainfall for poorly gauged catchments. Current methods to estimate temporal rainfall distributions from streamflow are unable to adequately explain and invert complex non-linear hydrologic systems. This study uses the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia. The reduction of rainfall to DWT coefficients allows the input rainfall time series to be estimated simultaneously with the model parameters. The estimation process is conducted using multi-chain Markov chain Monte Carlo simulation with the DREAMZS algorithm. The use of a likelihood function that considers both rainfall and streamflow error allows model parameter and temporal rainfall distributions to be estimated. Estimating the wavelet approximation coefficients of lower-order decomposition structures yielded the most realistic temporal rainfall distributions. All of these rainfall estimates produced streamflow simulations superior to those of a traditional calibration approach. It is shown that the choice of wavelet has a considerable impact on the robustness of the inversion. The results demonstrate that streamflow data contain sufficient information to estimate temporal rainfall and model parameter distributions. The range and variance of rainfall time series capable of producing streamflow simulations superior to those of a traditional calibration approach are a demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error, combined with the use of the DWT as a model data reduction technique, allows the joint inference of hydrologic model parameters along with rainfall.
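A minimal sketch of the dimensionality-reduction step, with an assumed wavelet and decomposition level: a rainfall series is represented by the approximation coefficients of a low-order DWT, and any proposed coefficient vector can be mapped back to a non-negative rainfall series for the hydrologic model.

import numpy as np
import pywt

def rainfall_to_coeffs(rain, wavelet="db4", level=3):
    return pywt.wavedec(rain, wavelet, level=level)[0]          # approximation coefficients only

def coeffs_to_rainfall(approx, n, wavelet="db4", level=3):
    template = pywt.wavedec(np.zeros(n), wavelet, level=level)  # detail coefficients set to zero
    template[0] = approx
    return np.clip(pywt.waverec(template, wavelet)[:n], 0.0, None)  # enforce non-negative rainfall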
SU-E-T-503: IMRT Optimization Using Monte Carlo Dose Engine: The Effect of Statistical Uncertainty.
Tian, Z; Jia, X; Graves, Y; Uribe-Sanchez, A; Jiang, S
2012-06-01
With the development of ultra-fast GPU-based Monte Carlo (MC) dose engines, it becomes clinically realistic to compute the dose-deposition coefficients (DDC) for IMRT optimization using MC simulation. However, it is still time-consuming to compute the DDC with small statistical uncertainty. This work studies the effects of the statistical error in the DDC matrix on IMRT optimization. The MC-computed DDC matrices are simulated here by adding statistical uncertainties at a desired level to the ones generated with a finite-size pencil beam algorithm. A statistical uncertainty model for MC dose calculation is employed. We adopt a penalty-based quadratic optimization model and a gradient descent method to optimize the fluence map, and then recalculate the corresponding actual dose distribution using the noise-free DDC matrix. The impacts of DDC noise are assessed in terms of the deviation of the resulting dose distributions. We have also used a stochastic perturbation theory to theoretically estimate the statistical errors of dose distributions on a simplified optimization model. A head-and-neck case is used to investigate the perturbation to the IMRT plan due to MC's statistical uncertainty. The relative errors of the final dose distributions of the optimized IMRT are found to be much smaller than those in the DDC matrix, which is consistent with our theoretical estimation. When the history number is decreased from 10^8 to 10^6, the dose-volume histograms (DVHs) are still very similar to the error-free DVHs, while the error in the DDC is about 3.8%. The results illustrate that the statistical errors in the DDC matrix have a relatively small effect on IMRT optimization in the dose domain. This indicates we can use a relatively small number of histories to obtain the DDC matrix with MC simulation within a reasonable amount of time, without considerably compromising the accuracy of the optimized treatment plan. This work is supported by Varian Medical Systems through a Master Research Agreement. © 2012 American Association of Physicists in Medicine.
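A toy sketch of the experiment's core step, with arbitrary dimensions and noise level: optimize a fluence map against a noisy DDC matrix by projected gradient descent on a quadratic penalty, then evaluate the resulting plan with the noise-free matrix.

import numpy as np

def optimize_fluence(D, d_prescribed, n_iter=2000):
    step = 1.0 / np.linalg.norm(D, 2) ** 2               # safe step size for the quadratic objective
    x = np.zeros(D.shape[1])                             # beamlet fluence weights
    for _ in range(n_iter):
        grad = D.T @ (D @ x - d_prescribed)              # gradient of 0.5 * ||Dx - d||^2
        x = np.maximum(x - step * grad, 0.0)             # project onto non-negative fluence
    return x

rng = np.random.default_rng(1)
D_true = rng.random((200, 50))                           # noise-free DDC matrix (voxels x beamlets)
D_noisy = D_true * (1.0 + 0.038 * rng.standard_normal(D_true.shape))  # ~3.8% relative noise
d_rx = rng.random(200)
x_opt = optimize_fluence(D_noisy, d_rx)
dose_deviation = np.linalg.norm((D_true - D_noisy) @ x_opt) / np.linalg.norm(D_true @ x_opt)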
NASA Astrophysics Data System (ADS)
Tobin, K. J.; Bennett, M. E.
2008-05-01
The Cimarron River Basin (3110 sq km) between Dodge and Guthrie, Oklahoma is located in northern Oklahoma and was used as a test bed to compare the hydrological model performance associated with different methods of precipitation quantification. The Soil and Water Assessment Tool (SWAT) was selected for this project, which is a comprehensive model that, besides quantifying watershed hydrology, can simulate water quality as well as nutrient and sediment loading within stream reaches. An advantage of this location is the extensive monitoring of MET parameters (precipitation, temperature, relative humidity, wind speed, solar radiation) afforded by the Oklahoma Mesonet, which has been documented to improve the performance of SWAT. The utility of TRMM 3B42 and NEXRAD Stage III data in supporting the hydrologic modeling of the Cimarron River Basin is demonstrated. Minor adjustments to selected model parameters were made to make parameter values more realistic based on results from previous studies and to more realistically simulate base flow. Significantly, no ad hoc adjustments to major parameters such as Curve Number or Available Soil Water were made, and robust simulations were obtained. TRMM and NEXRAD data are aggregated into an average daily estimate of precipitation for each TRMM grid cell (0.25 degree X 0.25 degree). Preliminary simulation of stream flow (year 2004 to 2006) in the Cimarron River Basin yields acceptable monthly results with very little adjustment of model parameters using TRMM 3B42 precipitation data (mass balance error = 3 percent; Monthly Nash-Sutcliffe efficiency coefficient (NS) = 0.77). However, both Oklahoma Mesonet rain gauge (mass balance error = 13 percent; Monthly NS = 0.91; Daily NS = 0.64) and NEXRAD Stage III data (mass balance error = -5 percent; Monthly NS = 0.95; Daily NS = 0.69) produce superior simulations even at a sub-monthly time scale; daily results are time averaged over a three day period. Note that all types of precipitation data perform better than a synthetic precipitation dataset generated using a weather simulator (mass balance error = 12 percent; Monthly NS = 0.40). Our study again documents that merged satellite precipitation products, such as TRMM 3B42, can support semi-distributed hydrologic modeling at the watershed scale. However, additional work is apparently required to improve TRMM precipitation retrievals over land to generate a product that yields more robust hydrological simulations, especially at finer time scales. Additionally, ongoing work in this basin will compare TRMM results with stream flow model results generated using CMORPH precipitation estimates. Finally, in the future we plan to use simulated, semi-distributed soil moisture values determined by SWAT for comparison with gridded soil moisture estimates from TRMM-TMI that should provide further validation of our modeling efforts.
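For reference, the skill metrics quoted above can be computed as follows (a generic sketch; conventions for the sign of the mass balance error vary).

import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mass_balance_error_pct(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (np.sum(sim) - np.sum(obs)) / np.sum(obs)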
Elucidating Reaction Mechanisms on Quantum Computers
NASA Astrophysics Data System (ADS)
Wiebe, Nathan; Reiher, Markus; Svore, Krysta; Wecker, Dave; Troyer, Matthias
We show how a quantum computer can be employed to elucidate reaction mechanisms in complex chemical systems, using the open problem of biological nitrogen fixation in nitrogenase as an example. We discuss how quantum computers can augment classical-computer simulations for such problems, to significantly increase their accuracy and enable hitherto intractable simulations. Detailed resource estimates show that, even when taking into account the substantial overhead of quantum error correction, and the need to compile into discrete gate sets, the necessary computations can be performed in reasonable time on small quantum computers. This demonstrates that quantum computers will realistically be able to tackle important problems in chemistry that are both scientifically and economically significant.
Theory and design of interferometric synthetic aperture radars
NASA Technical Reports Server (NTRS)
Rodriguez, E.; Martin, J. M.
1992-01-01
A derivation of the signal statistics, an optimal estimator of the interferometric phase, and the expression necessary to calculate the height-error budget are presented. These expressions are used to derive methods of optimizing the parameters of the interferometric synthetic aperture radar system (InSAR), and are then employed in a specific design example for a system to perform high-resolution global topographic mapping with a one-year mission lifetime, subject to current technological constraints. A Monte Carlo simulation of this InSAR system is performed to evaluate its performance for realistic topography. The results indicate that this system has the potential to satisfy the stringent accuracy and resolution requirements for geophysical use of global topographic data.
Students’ errors in solving combinatorics problems observed from the characteristics of RME modeling
NASA Astrophysics Data System (ADS)
Meika, I.; Suryadi, D.; Darhim
2018-01-01
This article was written based on the learning evaluation results of students’ errors in solving combinatorics problems, observed from the characteristics of Realistic Mathematics Education (RME), namely modeling. A descriptive method was employed, involving 55 students from two international-based pilot state senior high schools in Banten. The findings of the study suggested that the students still committed errors in simplifying the problem (as much as 46%), errors in making the mathematical model (horizontal mathematization, 60%), errors in solving the mathematical model (vertical mathematization, 65%), and errors in interpretation as well as validation (66%).
Malone, Emma; Jehl, Markus; Arridge, Simon; Betcke, Timo; Holder, David
2014-06-01
We investigate the application of multifrequency electrical impedance tomography (MFEIT) to imaging the brain in stroke patients. The use of MFEIT could enable early diagnosis and thrombolysis of ischaemic stroke, and therefore improve the outcome of treatment. Recent advances in the imaging methodology suggest that the use of spectral constraints could allow for the reconstruction of a one-shot image. We performed a simulation study to investigate the feasibility of imaging stroke in a head model with realistic conductivities. We introduced increasing levels of modelling errors to test the robustness of the method to the most common sources of artefact. We considered the case of errors in the electrode placement, spectral constraints, and contact impedance. The results indicate that errors in the position and shape of the electrodes can affect image quality, although our imaging method was successful in identifying tissues with sufficiently distinct spectra.
NASA Astrophysics Data System (ADS)
Fernandez, Alvaro; Müller, Inigo A.; Rodríguez-Sanz, Laura; van Dijk, Joep; Looser, Nathan; Bernasconi, Stefano M.
2017-12-01
Carbonate clumped isotopes offer a potentially transformational tool to interpret Earth's history, but the proxy is still limited by poor interlaboratory reproducibility. Here, we focus on the uncertainties that result from the analysis of only a few replicate measurements to understand the extent to which unconstrained errors affect calibration relationships and paleoclimate reconstructions. We find that highly precise data can be routinely obtained with multiple replicate analyses, but this is not always done in many laboratories. For instance, using published estimates of external reproducibilities we find that typical clumped isotope measurements (three replicate analyses) have margins of error at the 95% confidence level (CL) that are too large for many applications. These errors, however, can be systematically reduced with more replicate measurements. Second, using a Monte Carlo-type simulation we demonstrate that the degree of disagreement on published calibration slopes is about what we should expect considering the precision of Δ47 data, the number of samples and replicate analyses, and the temperature range covered in published calibrations. Finally, we show that the way errors are typically reported in clumped isotope data can be problematic and lead to the impression that data are more precise than warranted. We recommend that uncertainties in Δ47 data should no longer be reported as the standard error of a few replicate measurements. Instead, uncertainties should be reported as margins of error at a specified confidence level (e.g., 68% or 95% CL). These error bars are a more realistic indication of the reliability of a measurement.
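A minimal sketch of the recommended error reporting: the margin of error at a chosen confidence level for the mean of n replicate Δ47 analyses, given an assumed external reproducibility (1 SD); the 0.030 permil value below is an illustrative assumption, not a quoted figure.

import numpy as np
from scipy import stats

def margin_of_error(external_sd, n_replicates, confidence=0.95):
    t = stats.t.ppf(0.5 + confidence / 2.0, df=n_replicates - 1)
    return t * external_sd / np.sqrt(n_replicates)

# e.g. margin_of_error(0.030, 3) vs margin_of_error(0.030, 12) shows how the
# 95% CL margin shrinks as replicate analyses are added.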
Dillon, C R; Borasi, G; Payne, A
2016-01-01
For thermal modeling to play a significant role in treatment planning, monitoring, and control of magnetic resonance-guided focused ultrasound (MRgFUS) thermal therapies, accurate knowledge of ultrasound and thermal properties is essential. This study develops a new analytical solution for the temperature change observed in MRgFUS which can be used with experimental MR temperature data to provide estimates of the ultrasound initial heating rate, Gaussian beam variance, tissue thermal diffusivity, and Pennes perfusion parameter. Simulations demonstrate that this technique provides accurate and robust property estimates that are independent of the beam size, thermal diffusivity, and perfusion levels in the presence of realistic MR noise. The technique is also demonstrated in vivo using MRgFUS heating data in rabbit back muscle. Errors in property estimates are kept less than 5% by applying a third order Taylor series approximation of the perfusion term and ensuring the ratio of the fitting time (the duration of experimental data utilized for optimization) to the perfusion time constant remains less than one. PMID:26741344
Bio-Optics Based Sensation Imaging for Breast Tumor Detection Using Tissue Characterization
Lee, Jong-Ha; Kim, Yoon Nyun; Park, Hee-Jun
2015-01-01
A tissue inclusion parameter estimation method is proposed to measure stiffness as well as geometric parameters. The estimation is performed based on the tactile data obtained at the surface of the tissue using an optical tactile sensation imaging system (TSIS). A forward algorithm is designed to comprehensively predict the tactile data based on the mechanical properties of the tissue inclusion using finite element modeling (FEM). This forward information is used to develop an inversion algorithm that extracts the size, depth, and Young's modulus of a tissue inclusion from the tactile data. We utilize an artificial neural network (ANN) for the inversion algorithm. The proposed estimation method was validated with a realistic tissue phantom with stiff inclusions. The experimental results showed that the proposed estimation method can measure the size, depth, and Young's modulus of a tissue inclusion with 0.58%, 3.82%, and 2.51% relative errors, respectively. The obtained results prove that the proposed method has potential to become a useful screening and diagnostic method for breast cancer. PMID:25785306
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Ackerman, Thomas P.
1996-01-01
A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. We have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. We conclude that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, we conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results.
Computer algorithm for analyzing and processing borehole strainmeter data
Langbein, John O.
2010-01-01
The newly installed Plate Boundary Observatory (PBO) strainmeters record signals from tectonic activity, Earth tides, and atmospheric pressure. Important information about tectonic processes may occur at amplitudes at and below tidal strains and pressure loading. If incorrect assumptions are made regarding the background noise in the strain data, then the estimates of tectonic signal amplitudes may be incorrect. Furthermore, the simplifying assumption that the data are uncorrelated can lead to incorrect results, and pressure loading and tides may not be completely removed from the raw data. Instead, any algorithm used to process strainmeter data must incorporate the strong temporal correlations that are inherent in these data. The technique described here uses least squares but employs a data covariance that describes the temporal correlation of strainmeter data. There are several advantages to this method, since many parameters are estimated simultaneously. These parameters include: (1) functional terms that describe the underlying error model, (2) the tidal terms, (3) the pressure loading term(s), (4) amplitudes of offsets, either those from earthquakes or from the instrument, (5) rate and changes in rate, and (6) the amplitudes and time constants of either logarithmic or exponential curves that can characterize postseismic deformation or diffusion of fluids near the strainmeter. With the proper error model, realistic estimates of the standard errors of the various parameters are obtained; this is especially critical in determining the statistical significance of a suspected tectonic strain signal. The program also provides a method of tracking the various adjustments required to process strainmeter data. In addition, the program provides several plots to assist with identifying either tectonic signals or other signals that may need to be removed before any geophysical signal can be identified.
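The core of the approach is generalized least squares with a full temporal data covariance. A generic sketch, assuming the design matrix X (tides, pressure loading, offsets, rates, transient curves) and covariance C have already been built:

import numpy as np

def gls_fit(X, y, C):
    Ci = np.linalg.inv(C)                        # inverse data covariance (temporal correlation)
    cov_params = np.linalg.inv(X.T @ Ci @ X)     # parameter covariance
    params = cov_params @ (X.T @ Ci @ y)         # generalized least squares estimates
    std_errors = np.sqrt(np.diag(cov_params))    # realistic standard errors
    return params, std_errors

For long strain records one would factor C (for example by Cholesky decomposition) rather than invert it directly; the explicit inverse above is only for clarity.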
Ground truth seismic events and location capability at Degelen mountain, Kazakhstan
NASA Astrophysics Data System (ADS)
Trabant, Chad; Thurber, Clifford; Leith, William
2002-07-01
We utilized nuclear explosions from the Degelen Mountain sub-region of the Semipalatinsk Test Site (STS), Kazakhstan, to assess seismic location capability directly. Excellent ground truth information for these events was either known or was estimated from maps of the Degelen Mountain adit complex. Origin times were refined for events for which absolute origin time information was unknown using catalog arrival times, our ground truth location estimates, and a time baseline provided by fixing known origin times during a joint hypocenter determination (JHD). Precise arrival time picks were determined using a waveform cross-correlation process applied to the available digital data. These data were used in a JHD analysis. We found that very accurate locations were possible when high precision, waveform cross-correlation arrival times were combined with JHD. Relocation with our full digital data set resulted in a mean mislocation of 2 km and a mean 95% confidence ellipse (CE) area of 6.6 km² (90% CE: 5.1 km²), however, only 5 of the 18 computed error ellipses actually covered the associated ground truth location estimate. To test a more realistic nuclear test monitoring scenario, we applied our JHD analysis to a set of seven events (one fixed) using data only from seismic stations within 40° epicentral distance. Relocation with these data resulted in a mean mislocation of 7.4 km, with four of the 95% error ellipses covering less than 570 km² (90% CE: 438 km²), and the other two covering 1730 and 8869 km² (90% CE: 1331 and 6822 km²). Location uncertainties calculated using JHD often underestimated the true error, but a circular region with a radius equal to the mislocation covered less than 1000 km² for all events having more than three observations.
NASA Astrophysics Data System (ADS)
Arhonditsis, George B.; Papantou, Dimitra; Zhang, Weitao; Perhar, Gurbir; Massos, Evangelia; Shi, Molu
2008-09-01
Aquatic biogeochemical models have been an indispensable tool for addressing pressing environmental issues, e.g., understanding oceanic response to climate change, elucidation of the interplay between plankton dynamics and atmospheric CO2 levels, and examination of alternative management schemes for eutrophication control. Their ability to form the scientific basis for environmental management decisions can be undermined by the underlying structural and parametric uncertainty. In this study, we outline how we can attain realistic predictive links between management actions and ecosystem response through a probabilistic framework that accommodates rigorous uncertainty analysis of a variety of error sources, i.e., measurement error, parameter uncertainty, discrepancy between model and natural system. Because model uncertainty analysis essentially aims to quantify the joint probability distribution of model parameters and to make inference about this distribution, we believe that the iterative nature of Bayes' Theorem is a logical means to incorporate existing knowledge and update the joint distribution as new information becomes available. The statistical methodology begins with the characterization of parameter uncertainty in the form of probability distributions, then water quality data are used to update the distributions, and yield posterior parameter estimates along with predictive uncertainty bounds. Our illustration is based on a six state variable (nitrate, ammonium, dissolved organic nitrogen, phytoplankton, zooplankton, and bacteria) ecological model developed for gaining insight into the mechanisms that drive plankton dynamics in a coastal embayment; the Gulf of Gera, Island of Lesvos, Greece. The lack of analytical expressions for the posterior parameter distributions was overcome using Markov chain Monte Carlo simulations; a convenient way to obtain representative samples of parameter values. The Bayesian calibration resulted in realistic reproduction of the key temporal patterns of the system, offered insights into the degree of information the data contain about model inputs, and also allowed the quantification of the dependence structure among the parameter estimates. Finally, our study uses two synthetic datasets to examine the ability of the updated model to provide estimates of predictive uncertainty for water quality variables of environmental management interest.
Generalized Redistribute-to-the-Right Algorithm: Application to the Analysis of Censored Cost Data
CHEN, SHUAI; ZHAO, HONGWEI
2013-01-01
Medical cost estimation is a challenging task when censoring of data is present. Although researchers have proposed methods for estimating mean costs, these are often derived from theory and are not always easy to understand. We provide an alternative method, based on a replace-from-the-right algorithm, for estimating mean costs more efficiently. We show that our estimator is equivalent to an existing one that is based on the inverse probability weighting principle and semiparametric efficiency theory. We also propose an alternative method for estimating the survival function of costs, based on the redistribute-to-the-right algorithm, that was originally used for explaining the Kaplan–Meier estimator. We show that this second proposed estimator is equivalent to a simple weighted survival estimator of costs. Finally, we develop a more efficient survival estimator of costs, using the same redistribute-to-the-right principle. This estimator is naturally monotone, more efficient than some existing survival estimators, and has a quite small bias in many realistic settings. We conduct numerical studies to examine the finite sample property of the survival estimators for costs, and show that our new estimator has small mean squared errors when the sample size is not too large. We apply both existing and new estimators to a data example from a randomized cardiovascular clinical trial. PMID:24403869
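A minimal sketch of the simple weighted (inverse-probability-weighted) mean-cost estimator that the redistribute-to-the-right construction is shown to reproduce; tie-handling conventions are simplified here.

import numpy as np

def censoring_km(time, death, t):
    # Kaplan-Meier survival of the censoring distribution evaluated at t.
    surv = 1.0
    for u in np.sort(np.unique(time[time <= t])):
        at_risk = np.sum(time >= u)
        censored_here = np.sum((time == u) & (death == 0))
        if at_risk > 0:
            surv *= 1.0 - censored_here / at_risk
    return surv

def ipw_mean_cost(cost, time, death):
    # Complete (uncensored) cases weighted by 1 / P(still uncensored at follow-up time).
    cost, time, death = map(np.asarray, (cost, time, death))
    total = 0.0
    for i in range(len(cost)):
        if death[i] == 1:
            total += cost[i] / max(censoring_km(time, death, time[i]), 1e-12)
    return total / len(cost)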
Error assessment of local tie vectors in space geodesy
NASA Astrophysics Data System (ADS)
Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald
2014-05-01
For the computation of the ITRF, the data of the geometric space-geodetic techniques at co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. The linking of the geometric reference points enables the combination of the technique-specific coordinates into a multi-technique site coordinate. The vectors between these reference points are called "local ties". Local ties are usually realized by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly, by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real accuracy of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, an accuracy of the local tie at the sub-mm level will be mandatory, which is currently not achievable. To assess the effects of local ties on ITRF computations, the error sources must be investigated so that they can be realistically quantified and taken into account. Hence, a reasonable estimate of all the errors included in the various local ties is needed. An appropriate estimate could also improve the separation of local tie errors from technique-specific error contributions to uncertainties and thus help assess the accuracy of the space-geodetic techniques. Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.
NASA Astrophysics Data System (ADS)
Siddans, Richard; Knappett, Diane; Kerridge, Brian; Waterfall, Alison; Hurley, Jane; Latter, Barry; Boesch, Hartmut; Parker, Robert
2017-11-01
This paper describes the global height-resolved methane (CH4) retrieval scheme for the Infrared Atmospheric Sounding Interferometer (IASI) on MetOp, developed at the Rutherford Appleton Laboratory (RAL). The scheme precisely fits measured spectra in the 7.9 micron region to allow information to be retrieved on two independent layers centred in the upper and lower troposphere. It also uses nitrous oxide (N2O) spectral features in the same spectral interval to directly retrieve effective cloud parameters to mitigate errors in retrieved methane due to residual cloud and other geophysical variables. The scheme has been applied to analyse IASI measurements between 2007 and 2015. Results are compared to model fields from the MACC greenhouse gas inversion and independent measurements from satellite (GOSAT), airborne (HIPPO) and ground (TCCON) sensors. The estimated error on methane mixing ratio in the lower- and upper-tropospheric layers ranges from 20 to 100 and from 30 to 40 ppbv, respectively, and error on the derived column-average ranges from 20 to 40 ppbv. Vertical sensitivity extends through the lower troposphere, though it decreases near to the surface. Systematic differences with the other datasets are typically < 10 ppbv regionally and < 5 ppbv globally. In the Southern Hemisphere, a bias of around 20 ppbv is found with respect to MACC, which is not explained by vertical sensitivity or found in comparison of IASI to TCCON. Comparisons to HIPPO and MACC support the assertion that two layers can be independently retrieved and provide confirmation that the estimated random errors on the column- and layer-averaged amounts are realistic. The data have been made publicly available via the Centre for Environmental Data Analysis (CEDA) data archive (Siddans, 2016).
Feng, Tao; Wang, Jizhe; Tsui, Benjamin M W
2018-04-01
The goal of this study was to develop and evaluate four post-reconstruction respiratory and cardiac (R&C) motion vector field (MVF) estimation methods for cardiac 4D PET data. In Method 1, the dual R&C motions were estimated directly from the dual R&C gated images. In Method 2, respiratory motion (RM) and cardiac motion (CM) were separately estimated from the respiratory gated only and cardiac gated only images. The effects of RM on CM estimation were modeled in Method 3 by applying an image-based RM correction on the cardiac gated images before CM estimation, while the effects of CM on RM estimation were neglected. Method 4 iteratively models the mutual effects of RM and CM during dual R&C motion estimation. Realistic simulation data were generated for quantitative evaluation of the four methods. Almost noise-free PET projection data were generated from the 4D XCAT phantom with realistic R&C MVFs using Monte Carlo simulation. Poisson noise was added to the scaled projection data to generate additional datasets at two more noise levels. All the projection data were reconstructed using a 4D image reconstruction method to obtain dual R&C gated images. The four dual R&C MVF estimation methods were applied to the dual R&C gated images, and the accuracy of motion estimation was quantitatively evaluated using the root mean square error (RMSE) of the estimated MVFs. Results show that among the four estimation methods, Method 2 performed the worst for the noise-free case while Method 1 performed the worst for the noisy cases in terms of quantitative accuracy of the estimated MVF. Methods 4 and 3 showed comparable results and achieved RMSEs up to 35% lower than Method 1 for noisy cases. In conclusion, we have developed and evaluated four different post-reconstruction R&C MVF estimation methods for use in 4D PET imaging. Comparison of the performance of the four methods on simulated data indicates that separate R&C estimation with modeling of RM before CM estimation (Method 3) is the best option for accurate estimation of dual R&C motion in clinical situations. © 2018 American Association of Physicists in Medicine.
Using transfer functions to quantify El Niño Southern Oscillation dynamics in data and models.
MacMartin, Douglas G; Tziperman, Eli
2014-09-08
Transfer function tools commonly used in engineering control analysis can be used to better understand the dynamics of El Niño Southern Oscillation (ENSO), compare data with models and identify systematic model errors. The transfer function describes the frequency-dependent input-output relationship between any pair of causally related variables, and can be estimated from time series. This can be used first to assess whether the underlying relationship is or is not frequency dependent, and if so, to diagnose the underlying differential equations that relate the variables, and hence describe the dynamics of individual subsystem processes relevant to ENSO. Estimating process parameters allows the identification of compensating model errors that may lead to a seemingly realistic simulation in spite of incorrect model physics. This tool is applied here to the TAO array ocean data, the GFDL-CM2.1 and CCSM4 general circulation models, and to the Cane-Zebiak ENSO model. The delayed oscillator description is used to motivate a few relevant processes involved in the dynamics, although any other ENSO mechanism could be used instead. We identify several differences in the processes between the models and data that may be useful for model improvement. The transfer function methodology is also useful in understanding the dynamics and evaluating models of other climate processes.
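A minimal sketch of the estimation step, with illustrative variable names: the transfer function between an input and output time series is estimated from Welch cross- and power-spectral densities, and its gain and phase reveal any frequency dependence.

import numpy as np
from scipy import signal

def transfer_function(x, y, fs=1.0, nperseg=256):
    f, Pxy = signal.csd(x, y, fs=fs, nperseg=nperseg)   # cross-spectral density
    _, Pxx = signal.welch(x, fs=fs, nperseg=nperseg)    # input power spectral density
    H = Pxy / Pxx                                       # transfer function estimate (x -> y)
    return f, np.abs(H), np.angle(H)                    # frequency, gain, phase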
PRECISE TULLY-FISHER RELATIONS WITHOUT GALAXY INCLINATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obreschkow, D.; Meyer, M.
2013-11-10
Power-law relations between tracers of baryonic mass and rotational velocities of disk galaxies, so-called Tully-Fisher relations (TFRs), offer a wealth of applications in galaxy evolution and cosmology. However, measurements of rotational velocities require galaxy inclinations, which are difficult to measure, thus limiting the range of TFR studies. This work introduces a maximum likelihood estimation (MLE) method for recovering the TFR in galaxy samples with limited or no information on inclinations. The robustness and accuracy of this method is demonstrated using virtual and real galaxy samples. Intriguingly, the MLE reliably recovers the TFR of all test samples, even without using any inclination measurements—that is, assuming a random sin i-distribution for galaxy inclinations. Explicitly, this 'inclination-free MLE' recovers the three TFR parameters (zero-point, slope, scatter) with statistical errors only about 1.5 times larger than the best estimates based on perfectly known galaxy inclinations with zero uncertainty. Thus, given realistic uncertainties, the inclination-free MLE is highly competitive. If inclination measurements have mean errors larger than 10°, it is better not to use any inclinations than to consider the inclination measurements to be exact. The inclination-free MLE opens interesting perspectives for future H I surveys by the Square Kilometer Array and its pathfinders.
Darrington, Richard T; Jiao, Jim
2004-04-01
Rapid and accurate stability prediction is essential to pharmaceutical formulation development. Commonly used stability prediction methods include monitoring parent drug loss at intended storage conditions or initial rate determination of degradants under accelerated conditions. Monitoring parent drug loss at the intended storage condition does not provide a rapid and accurate stability assessment because often <0.5% drug loss is all that can be observed in a realistic time frame, while the accelerated initial rate method in conjunction with extrapolation of rate constants using the Arrhenius or Eyring equations often introduces large errors in shelf-life prediction. In this study, the shelf life prediction of a model pharmaceutical preparation utilizing sensitive high-performance liquid chromatography-mass spectrometry (LC/MS) to directly quantitate degradant formation rates at the intended storage condition is proposed. This method was compared to traditional shelf life prediction approaches in terms of time required to predict shelf life and associated error in shelf life estimation. Results demonstrated that the proposed LC/MS method using initial rates analysis provided significantly improved confidence intervals for the predicted shelf life and required less overall time and effort to obtain the stability estimation compared to the other methods evaluated. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association.
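An illustrative sketch of the initial-rate idea (not the authors' statistical treatment): fit a linear rate to degradant levels measured at the storage condition and extrapolate to the time at which the degradant reaches a specification limit, with crude bounds from the slope's standard error only.

import numpy as np
from scipy import stats

def shelf_life_from_initial_rate(t_days, degradant_pct, spec_limit_pct):
    res = stats.linregress(t_days, degradant_pct)
    t_shelf = (spec_limit_pct - res.intercept) / res.slope
    lo = (spec_limit_pct - res.intercept) / (res.slope + 1.96 * res.stderr)  # intercept error ignored
    hi = (spec_limit_pct - res.intercept) / (res.slope - 1.96 * res.stderr)
    return t_shelf, (lo, hi)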
Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses
Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy
2015-01-01
Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error; zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows realistic models of the SMBG error PDF to be derived. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
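A minimal sketch of the zone-wise fitting idea; the zone boundary below is a placeholder assumption, not the value identified in the paper.

import numpy as np
from scipy import stats

def fit_zone_models(reference, smbg, zone_edge=75.0):
    reference, smbg = np.asarray(reference, float), np.asarray(smbg, float)
    zone1 = reference <= zone_edge                    # assumed boundary, mg/dL
    abs_err = smbg[zone1] - reference[zone1]          # zone 1: absolute error
    rel_err = (smbg[~zone1] - reference[~zone1]) / reference[~zone1]  # zone 2: relative error
    params_zone1 = stats.skewnorm.fit(abs_err)        # (shape, loc, scale) by maximum likelihood
    params_zone2 = stats.skewnorm.fit(rel_err)
    return params_zone1, params_zone2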
Kim, Haksoo; Park, Samuel B; Monroe, James I; Traughber, Bryan J; Zheng, Yiran; Lo, Simon S; Yao, Min; Mansur, David; Ellis, Rodney; Machtay, Mitchell; Sohn, Jason W
2015-08-01
This article proposes quantitative analysis tools and digital phantoms to quantify the intrinsic errors of deformable image registration (DIR) systems and to establish quality assurance (QA) procedures for their clinical use, based on local and global error analysis methods with clinically realistic digital image phantoms. Landmark-based image registration verifications are suitable only for images with significant feature points. To address this shortfall, we adapted a deformation vector field (DVF) comparison approach with new analysis techniques to quantify the results. Digital image phantoms are derived from data sets of actual patient images (a reference image set, R, and a test image set, T). Image sets from the same patient taken at different times are registered with deformable methods, producing a reference DVFref. Applying DVFref to the original reference image deforms T into a new image R'. The data set (R', T, and DVFref) constitutes a realistic truth set and therefore can be used to analyze any DIR system and expose intrinsic errors by comparing DVFref and DVFtest. For quantitative error analysis, two methods were used to calculate and delineate differences between DVFs: (1) a local error analysis tool that displays deformation error magnitudes with color mapping on each image slice and (2) a global error analysis tool that calculates a deformation error histogram, which describes a cumulative probability function of errors for each anatomical structure. Three digital image phantoms were generated from three patients with head and neck, lung, and liver cancers. The DIR QA procedure was evaluated using the head and neck case. © The Author(s) 2014.
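A minimal sketch of the two analysis tools, assuming DVFs stored as (nx, ny, nz, 3) arrays in millimetres: a per-voxel error magnitude map for slice-wise color display, and a cumulative error histogram restricted to an anatomical structure mask.

import numpy as np

def dvf_error_magnitude(dvf_ref, dvf_test):
    return np.linalg.norm(dvf_test - dvf_ref, axis=-1)        # per-voxel error magnitude, mm

def cumulative_error_histogram(error_mag, structure_mask, bins=100):
    errs = error_mag[structure_mask]
    hist, edges = np.histogram(errs, bins=bins, range=(0.0, errs.max()))
    cdf = np.cumsum(hist) / errs.size                         # P(error <= edge) within the structure
    return edges[1:], cdf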
Mitigating leakage errors due to cavity modes in a superconducting quantum computer
NASA Astrophysics Data System (ADS)
McConkey, T. G.; Béjanin, J. H.; Earnest, C. T.; McRae, C. R. H.; Pagel, Z.; Rinehart, J. R.; Mariantoni, M.
2018-07-01
A practical quantum computer requires quantum bit (qubit) operations with low error probabilities in extensible architectures. We study a packaging method that makes it possible to address hundreds of superconducting qubits by means of coaxial Pogo pins. A qubit chip is housed in a superconducting box, where both box and chip dimensions lead to unwanted modes that can interfere with qubit operations. We analyze these interference effects in the context of qubit coherent leakage and qubit decoherence induced by damped modes. We propose two methods, half-wave fencing and antinode pinning, to mitigate the resulting errors by detuning the resonance frequency of the modes from the qubit frequency. We perform electromagnetic field simulations indicating that the resonance frequency of the modes increases with the number of installed pins and can be engineered to be significantly higher than the highest qubit frequency. We estimate that the error probabilities and decoherence rates due to suitably shifted modes in realistic scenarios can be up to two orders of magnitude lower than the state-of-the-art superconducting qubit error and decoherence rates. Our methods can be extended to different types of packages that do not rely on Pogo pins. Conductive bump bonds, for example, can serve the same purpose in qubit architectures based on flip chip technology. Metalized vias, instead, can be used to mitigate modes due to the increasing size of the dielectric substrate on which qubit arrays are patterned.
González, R C; Alvarez, D; López, A M; Alvarez, J C
2009-12-01
It has been reported that spatio-temporal gait parameters can be estimated using an accelerometer to calculate the vertical displacement of the body's centre of gravity. This method has the potential to produce realistic ambulatory estimates of those parameters during unconstrained walking. In this work, we evaluate the crude estimates of mean step length obtained in this way for their possible application in an ambulatory walking-distance measurement device. Two methods have been tested with a set of volunteers in 20 m excursions. Experimental results show that estimates of walking distance can be obtained with sufficient accuracy and precision for most practical applications (errors of 3.66 +/- 6.24 and 0.96 +/- 5.55%), the main difficulty being inter-individual variability (largest deviations of 19.70 and 15.09% for the two estimators). The results also indicate that an inverted pendulum model for the displacement during the single stance phase, together with a constant displacement per step during double stance, constitutes a valid model for the travelled distance with no need for further adjustment. This allows most of the erroneous distance estimates in different subjects to be explained by fundamental limitations of the simple inverted pendulum approach.
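For reference, a common form of the inverted-pendulum step-length model mentioned above can be sketched as follows; the formula, the constant double-stance term and the example values are generic assumptions, not parameters from this study.

```python
import numpy as np

def step_length(h, leg_length, double_stance_const=0.0):
    """Step length (m) from the vertical excursion h (m) of the centre of mass
    during single stance, modelled as an inverted pendulum of length leg_length (m),
    plus a constant contribution for the double-stance phase."""
    single_stance = 2.0 * np.sqrt(2.0 * leg_length * h - h ** 2)  # chord of the pendulum arc
    return single_stance + double_stance_const

# Example: a 3 cm vertical excursion with a 0.9 m leg gives ~0.46 m for the single-stance part.
print(step_length(h=0.03, leg_length=0.9))
```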
Estimated Viscosities and Thermal Conductivities of Gases at High Temperatures
NASA Technical Reports Server (NTRS)
Svehla, Roger A.
1962-01-01
Viscosities and thermal conductivities, suitable for heat-transfer calculations, were estimated for about 200 gases in the ground state from 100 to 5000 K and 1-atmosphere pressure. Free radicals were included, but excited states and ions were not. Calculations for the transport coefficients were based upon the Lennard-Jones (12-6) potential for all gases. This potential was selected because: (1) It is one of the most realistic models available and (2) intermolecular force constants can be estimated from physical properties or by other techniques when experimental data are not available; such methods for estimating force constants are not as readily available for other potentials. When experimental viscosity data were available, they were used to obtain the force constants; otherwise the constants were estimated. These constants were then used to calculate both the viscosities and thermal conductivities tabulated in this report. For thermal conductivities of polyatomic gases an Eucken-type correction was made to correct for exchange between internal and translational energies. Though this correction may be rather poor at low temperatures, it becomes more satisfactory with increasing temperature. It was not possible to obtain force constants from experimental thermal conductivity data except for the inert atoms, because most conductivity data are available at low temperatures only (200 to 400 K), the temperature range where the Eucken correction is probably most in error. However, if the same set of force constants is used for both viscosity and thermal conductivity, there is a large degree of cancellation of error when these properties are used in heat-transfer equations such as the Dittus-Boelter equation. It is therefore concluded that the properties tabulated in this report are suitable for heat-transfer calculations of gaseous systems.
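The Lennard-Jones (12-6) based transport estimates described above follow the Chapman-Enskog form; the sketch below shows a viscosity estimate of that kind, using a Neufeld-type empirical fit for the collision integral and illustrative force constants for N2 that are assumptions here, not values taken from the report.

```python
import math

def omega_22(t_star):
    """Reduced collision integral Omega(2,2)* for the LJ (12-6) potential (empirical fit)."""
    return (1.16145 / t_star ** 0.14874
            + 0.52487 * math.exp(-0.77320 * t_star)
            + 2.16178 * math.exp(-2.43787 * t_star))

def lj_viscosity(T, M, sigma, eps_over_k):
    """Dilute-gas viscosity in micropoise; T in K, M in g/mol, sigma in Angstrom,
    eps_over_k in K (the two LJ force constants)."""
    t_star = T / eps_over_k
    return 26.69 * math.sqrt(M * T) / (sigma ** 2 * omega_22(t_star))

# Illustrative call for N2 at 1000 K with commonly quoted force constants.
print(lj_viscosity(T=1000.0, M=28.0, sigma=3.68, eps_over_k=91.5))  # ~400 micropoise
```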
Lee, Yoojin; Callaghan, Martina F; Nagy, Zoltan
2017-01-01
In magnetic resonance imaging, precise measurement of the longitudinal relaxation time (T1) is crucial for acquiring information applicable to numerous clinical and neuroscience applications. In this work, we investigated the precision of the T1 relaxation time as measured using the variable flip angle method, with emphasis on the noise propagated from measurements of the radiofrequency transmit field (B1+). The analytical solution for T1 precision was derived by standard error propagation methods incorporating the noise from the three input sources: two spoiled gradient echo (SPGR) images and a B1+ map. Repeated in vivo experiments were performed to estimate the total variance in T1 maps, and we compared these experimentally obtained values with the theoretical predictions to validate the established theoretical framework. Both the analytical and experimental results showed that variance in the B1+ map propagated noise into the T1 maps at levels comparable to either of the two SPGR images. Improving the precision of the B1+ measurements significantly reduced the variance in the estimated T1 map. The variance estimated from the repeatedly measured in vivo T1 maps agreed well with the theoretically calculated variance in the T1 estimates, thus validating the analytical framework for realistic in vivo experiments. We conclude that for T1 mapping experiments, the error propagated from the B1+ map must be considered. Optimizing the SPGR signals while neglecting to improve the precision of the B1+ map may result in grossly overestimating the precision of the estimated T1 values.
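The sketch below illustrates, under stated assumptions, how the B1+ map enters a two-point variable flip angle T1 estimate (a DESPOT1-style linearisation); the array names and the simple flip-angle scaling by the B1+ map are illustrative choices, not the authors' implementation.

```python
import numpy as np

def vfa_t1(s1, s2, a1_deg, a2_deg, TR, b1):
    """T1 from two spoiled gradient-echo signals s1, s2 acquired at nominal flip
    angles a1_deg, a2_deg (degrees) with repetition time TR, given a relative
    B1+ map b1 (dimensionless scaling of the nominal flip angles)."""
    a1 = np.deg2rad(a1_deg) * b1  # actual flip angles
    a2 = np.deg2rad(a2_deg) * b1
    y1, y2 = s1 / np.sin(a1), s2 / np.sin(a2)
    x1, x2 = s1 / np.tan(a1), s2 / np.tan(a2)
    slope = (y2 - y1) / (x2 - x1)  # slope equals exp(-TR/T1)
    return -TR / np.log(slope)

# Noise in b1 perturbs the effective flip angles and hence the slope, which is
# why B1+ precision propagates directly into the T1 estimate.
```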
Solution of the weighted symmetric similarity transformations based on quaternions
NASA Astrophysics Data System (ADS)
Mercan, H.; Akyilmaz, O.; Aydin, C.
2017-12-01
A new method, based on the Gauss-Helmert model of adjustment, is presented for the solution of similarity transformations, either 3D or 2D, in the frame of the errors-in-variables (EIV) model. The EIV model assumes that all the variables in the mathematical model are contaminated by random errors. The total least squares estimation technique may be used to solve the EIV model. Accounting for heteroscedastic uncertainty in both the target and the source coordinates, which is the more common and general case in practice, leads to a more realistic estimation of the transformation parameters. The presented algorithm can handle heteroscedastic transformation problems, i.e., the positions of both the target and the source points may have full covariance matrices. Therefore, there is no limitation such as isotropic or homogeneous accuracy for the reference point coordinates. The developed algorithm takes advantage of the quaternion definition, which uniquely represents a 3D rotation matrix. The transformation parameters (scale, translations, and the quaternion, and hence the rotation matrix), along with their covariances, are iteratively estimated with rapid convergence. Moreover, a prior least squares (LS) estimate of the unknown transformation parameters is not required to start the iterations. We show that the developed method can also be used to estimate the 2D similarity transformation parameters by simply treating the problem as a 3D transformation with zeros assigned to the z-components of both the target and source points. The efficiency of the new algorithm is demonstrated with numerical examples and comparisons with the results of previous studies that use the same data set. Simulation experiments evaluating and comparing the proposed method and the conventional weighted LS (WLS) method are also presented.
Trainor, Patrick J; DeFilippis, Andrew P; Rai, Shesh N
2017-06-21
Statistical classification is a critical component of utilizing metabolomics data for examining the molecular determinants of phenotypes. Despite this, a comprehensive and rigorous evaluation of the accuracy of classification techniques for phenotype discrimination given metabolomics data has not been conducted. We conducted such an evaluation using both simulated and real metabolomics datasets, comparing Partial Least Squares-Discriminant Analysis (PLS-DA), Sparse PLS-DA, Random Forests, Support Vector Machines (SVM), Artificial Neural Network, k-Nearest Neighbors (k-NN), and Naïve Bayes classification techniques for discrimination. We evaluated the techniques on simulated data generated to mimic global untargeted metabolomics data by incorporating realistic block-wise correlation and partial correlation structures for mimicking the correlations and metabolite clustering generated by biological processes. Over the simulation studies, covariance structures, means, and effect sizes were stochastically varied to provide consistent estimates of classifier performance over a wide range of possible scenarios. The effects of the presence of non-normal error distributions, the introduction of biological and technical outliers, unbalanced phenotype allocation, missing values due to abundances below a limit of detection, and the effect of prior-significance filtering (dimension reduction) were evaluated via simulation. In each simulation, classifier parameters, such as the number of hidden nodes in a Neural Network, were optimized by cross-validation to minimize the probability of detecting spurious results due to poorly tuned classifiers. Classifier performance was then evaluated using real metabolomics datasets of varying sample medium, sample size, and experimental design. We report that in the most realistic simulation studies, which incorporated non-normal error distributions, unbalanced phenotype allocation, outliers, missing values, and dimension reduction, classifier performance (least to greatest error) was ranked as follows: SVM, Random Forest, Naïve Bayes, sPLS-DA, Neural Networks, PLS-DA, and k-NN. When non-normal error distributions were introduced, the performance of the PLS-DA and k-NN classifiers deteriorated further relative to the remaining techniques. Over the real datasets, a trend of better performance for the SVM and Random Forest classifiers was observed.
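A cross-validated comparison of this kind can be sketched with scikit-learn; the synthetic data, class imbalance and the small subset of classifiers below are stand-ins for the study's simulated metabolomics data and full tuning procedure.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# Wide, unbalanced data loosely mimicking an untargeted metabolomics matrix.
X, y = make_classification(n_samples=200, n_features=300, n_informative=20,
                           weights=[0.7, 0.3], random_state=1)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(C=1.0)),
    "Random Forest": RandomForestClassifier(n_estimators=500, random_state=1),
    "k-NN": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```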
An automated calibration method for non-see-through head mounted displays.
Gilson, Stuart J; Fitzgibbon, Andrew W; Glennerster, Andrew
2011-08-15
Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet, existing calibration methods are time consuming and depend on human judgements, making them error prone, and are often limited to optical see-through HMDs. Building on our existing approach to HMD calibration (Gilson et al., 2008), we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in multiple positions. The centroids of the markers on the calibration object are recovered and their locations re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the HMD display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors without the need for error-prone human judgements.
Pedestrian dead reckoning employing simultaneous activity recognition cues
NASA Astrophysics Data System (ADS)
Altun, Kerem; Barshan, Billur
2012-02-01
We consider the human localization problem using body-worn inertial/magnetic sensor units. Inertial sensors are characterized by a drift error caused by the integration of their rate output to obtain position information. Because of this drift, the position and orientation data obtained from inertial sensors are reliable over only short periods of time. Therefore, position updates from externally referenced sensors are essential. However, if the map of the environment is known, the activity context of the user can provide information about his position. In particular, the switches in the activity context correspond to discrete locations on the map. By performing localization simultaneously with activity recognition, we detect the activity context switches and use the corresponding position information as position updates in a localization filter. The localization filter also involves a smoother that combines the two estimates obtained by running the zero-velocity update algorithm both forward and backward in time. We performed experiments with eight subjects in indoor and outdoor environments involving walking, turning and standing activities. Using a spatial error criterion, we show that the position errors can be decreased by about 85% on the average. We also present the results of two 3D experiments performed in realistic indoor environments and demonstrate that it is possible to achieve over 90% error reduction in position by performing localization simultaneously with activity recognition.
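One simple way to realise the forward/backward combination mentioned above is an inverse-variance weighted average of the two ZUPT-aided position tracks; the sketch below is an assumption-laden illustration, not the authors' smoother.

```python
import numpy as np

def combine_forward_backward(p_fwd, var_fwd, p_bwd, var_bwd):
    """p_fwd, p_bwd: (N, d) position estimates from forward and backward passes;
    var_fwd, var_bwd: (N,) scalar error variances of those estimates."""
    w_fwd = (var_bwd / (var_fwd + var_bwd))[:, None]   # trust the lower-variance estimate more
    p_smooth = w_fwd * p_fwd + (1.0 - w_fwd) * p_bwd
    var_smooth = (var_fwd * var_bwd) / (var_fwd + var_bwd)
    return p_smooth, var_smooth
```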
Spatial abstraction for autonomous robot navigation.
Epstein, Susan L; Aroor, Anoop; Evanusa, Matthew; Sklar, Elizabeth I; Parsons, Simon
2015-09-01
Optimal navigation for a simulated robot relies on a detailed map and explicit path planning, an approach problematic for real-world robots that are subject to noise and error. This paper reports on autonomous robots that rely on local spatial perception, learning, and commonsense rationales instead. Despite realistic actuator error, learned spatial abstractions form a model that supports effective travel.
NASA Technical Reports Server (NTRS)
Barker, Howard W.; Kato, Serji; Wehr, T.
2012-01-01
The main point of this study was to use realistic representations of cloudy atmospheres to assess errors in solar flux estimates associated with 1D radiative transfer models. A scene construction algorithm, developed for the EarthCARE satellite mission, was applied to CloudSat, CALIPSO, and MODIS satellite data, thus producing 3D cloudy atmospheres measuring 60 km wide by 13,000 km long at 1 km grid-spacing. Broadband solar fluxes and radiances for each (1 km)² column were then produced by a Monte Carlo photon transfer model run in both full 3D and independent column approximation mode (i.e., a 1D model).
NASA Astrophysics Data System (ADS)
Köpke, Corinna; Irving, James; Elsheikh, Ahmed H.
2018-06-01
Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward model linking subsurface physical properties to measured data, which is typically assumed to be perfectly known in the inversion procedure. However, to make the stochastic solution of the inverse problem computationally tractable using methods such as Markov-chain-Monte-Carlo (MCMC), fast approximations of the forward model are commonly employed. This gives rise to model error, which has the potential to significantly bias posterior statistics if not properly accounted for. Here, we present a new methodology for dealing with the model error arising from the use of approximate forward solvers in Bayesian solutions to hydrogeophysical inverse problems. Our approach is geared towards the common case where this error cannot be (i) effectively characterized through some parametric statistical distribution; or (ii) estimated by interpolating between a small number of computed model-error realizations. To this end, we focus on identification and removal of the model-error component of the residual during MCMC using a projection-based approach, whereby the orthogonal basis employed for the projection is derived in each iteration from the K-nearest-neighboring entries in a model-error dictionary. The latter is constructed during the inversion and grows at a specified rate as the iterations proceed. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar travel-time data considering three different subsurface parameterizations of varying complexity. Synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed for their inversion. In each case, our developed approach enables us to remove posterior bias and obtain a more realistic characterization of uncertainty.
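A conceptual sketch of the projection-based removal of the model-error component of the residual is given below; the dictionary layout, the choice of K and the QR-based orthonormalisation are assumptions for illustration, not the authors' code.

```python
import numpy as np

def remove_model_error(residual, dictionary, k=10):
    """residual: (n_data,) current data residual; dictionary: (n_entries, n_data)
    array of stored model-error realizations accumulated during the inversion."""
    # K nearest neighbours of the current residual in the dictionary.
    dist = np.linalg.norm(dictionary - residual, axis=1)
    neighbours = dictionary[np.argsort(dist)[:k]]          # (k, n_data)
    # Orthonormal basis spanning the neighbouring model-error realizations.
    q, _ = np.linalg.qr(neighbours.T)                      # (n_data, k)
    # Subtract the component of the residual lying in that subspace.
    return residual - q @ (q.T @ residual)
```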
A statistical survey of heat input parameters into the cusp thermosphere
NASA Astrophysics Data System (ADS)
Moen, J. I.; Skjaeveland, A.; Carlson, H. C.
2017-12-01
Based on three winters of observational data, we present those ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements of up to a factor of two. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with a statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wiper scans of 1000 x 500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to the input data driving thermosphere models, enabling removal of the previous twofold drag errors.
NASA Astrophysics Data System (ADS)
Ding, Lei; Lai, Yuan; He, Bin
2005-01-01
It is important to localize neural sources from scalp-recorded EEG. Low resolution brain electromagnetic tomography (LORETA) has received considerable attention for localizing brain electrical sources. However, most such efforts have used spherical head models to represent the head volume conductor. Investigating the performance of LORETA in a realistic geometry head model, as compared with the spherical model, provides useful information for interpreting data obtained using the spherical head model. The performance of LORETA was evaluated by means of computer simulations. The boundary element method was used to solve the forward problem. A three-shell realistic geometry (RG) head model was constructed from MRI scans of a human subject. Dipole source configurations of a single dipole located in different regions of the brain at varying depths were used to assess the performance of LORETA in different regions of the brain. A three-sphere head model was also used to approximate the RG head model; similar simulations were performed, and the results were compared with those of the RG-LORETA with reference to the locations of the simulated sources. Multi-source localization was also discussed, with examples given in the RG head model. Localization errors employing the spherical LORETA, with reference to the source locations within the realistic geometry head, were about 20-30 mm for the four brain regions evaluated: frontal, parietal, temporal, and occipital. Localization errors employing the RG head model were about 10 mm over the same four brain regions. The present simulation results suggest that the use of the RG head model reduces the localization error of LORETA, and that the RG head model based LORETA is desirable if high localization accuracy is needed.
Correcting electrode modelling errors in EIT on realistic 3D head models.
Jehl, Markus; Avery, James; Malone, Emma; Holder, David; Betcke, Timo
2015-12-01
Electrical impedance tomography (EIT) is a promising medical imaging technique which could aid differentiation of haemorrhagic from ischaemic stroke in an ambulance. One challenge in EIT is the ill-posed nature of the image reconstruction, i.e., that small measurement or modelling errors can result in large image artefacts. It is therefore important that reconstruction algorithms are improved with regard to stability to modelling errors. We identify that wrongly modelled electrode positions constitute one of the biggest sources of image artefacts in head EIT. Therefore, the use of the Fréchet derivative on the electrode boundaries in a realistic three-dimensional head model is investigated, in order to reconstruct electrode movements simultaneously to conductivity changes. We show a fast implementation and analyse the performance of electrode position reconstructions in time-difference and absolute imaging for simulated and experimental voltages. Reconstructing the electrode positions and conductivities simultaneously increased the image quality significantly in the presence of electrode movement.
Saliba, Christopher M; Brandon, Scott C E; Deluzio, Kevin J
2017-05-24
Musculoskeletal models are increasingly used to estimate medial and lateral knee contact forces, which are difficult to measure in vivo. The sensitivity of contact force predictions to modeling parameters is important to the interpretation and implications of results generated by the model. The purpose of this study was to quantify the sensitivity of knee contact force predictions to simultaneous errors in frontal plane knee alignment and contact locations under different dynamic conditions. We scaled a generic musculoskeletal model for N=23 subjects' stature and radiographic knee alignment, then perturbed frontal plane alignment and mediolateral contact locations within experimentally possible ranges of ±10° and ±10 mm, respectively. The sensitivity of first peak, second peak, and mean medial and lateral knee contact forces to knee adduction angle and contact locations was modeled using linear regression. Medial loads increased, and lateral loads decreased, by between 3% and 6% bodyweight for each degree of varus perturbation. Shifting the medial contact point medially increased medial loads and decreased lateral loads by between 1% and 4% bodyweight per millimeter. This study demonstrates that realistic measurement errors of 5 mm (contact distance) or 5° (frontal plane alignment) could result in a combined 50% BW error in subject-specific contact force estimates. We also show that model sensitivity varies between subjects as a result of differences in gait dynamics. These results demonstrate that predicted knee joint contact forces should be considered as a range of possible values determined by model uncertainty.
Meta-analysis of multiple outcomes: a multilevel approach.
Van den Noortgate, Wim; López-López, José Antonio; Marín-Martínez, Fulgencio; Sánchez-Meca, Julio
2015-12-01
In meta-analysis, dependent effect sizes are very common. An example is where in one or more studies the effect of an intervention is evaluated on multiple outcome variables for the same sample of participants. In this paper, we evaluate a three-level meta-analytic model to account for this kind of dependence, extending the simulation results of Van den Noortgate, López-López, Marín-Martínez, and Sánchez-Meca (Behavior Research Methods, 45, 576-594, 2013) by allowing for variation in the number of effect sizes per study, in the between-study variance, in the correlations between pairs of outcomes, and in the sample size of the studies. At the same time, we explore the performance of the approach if the outcomes used in a study can be regarded as a random sample from a population of outcomes. We conclude that although this approach is relatively simple and does not require prior estimates of the sampling covariances between effect sizes, it gives appropriate mean effect size estimates, standard error estimates, and confidence interval coverage proportions in a variety of realistic situations.
NASA Astrophysics Data System (ADS)
Elangovan, Premkumar; Mackenzie, Alistair; Dance, David R.; Young, Kenneth C.; Cooke, Victoria; Wilkinson, Louise; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Wells, Kevin
2017-04-01
A novel method has been developed for generating quasi-realistic voxel phantoms which simulate the compressed breast in mammography and digital breast tomosynthesis (DBT). The models are suitable for use in virtual clinical trials requiring realistic anatomy that use the multiple-alternative forced choice (AFC) paradigm and patches from the complete breast image. The breast models are produced by extracting features of breast tissue components from DBT clinical images, including skin, adipose and fibro-glandular tissue, blood vessels and Cooper’s ligaments. A range of different breast models can then be generated by combining these components. Visual realism was validated using a receiver operating characteristic (ROC) study of patches from simulated images calculated using the breast models and from real patient images. Quantitative analysis was undertaken using fractal dimension and power spectrum analysis. Throughout, errors are quoted as one standard error of the mean. The average areas under the ROC curves for 2D and DBT images were 0.51 ± 0.06 and 0.54 ± 0.09, demonstrating that simulated and real images were statistically indistinguishable by expert breast readers (7 observers). The average fractal dimensions (2D, DBT) for real and simulated images were (2.72 ± 0.01, 2.75 ± 0.01) and (2.77 ± 0.03, 2.82 ± 0.04) respectively. Excellent agreement was found between the power spectrum curves of real and simulated images, with average β values (2D, DBT) of (3.10 ± 0.17, 3.21 ± 0.11) and (3.01 ± 0.32, 3.19 ± 0.07) respectively. These results demonstrate that radiological images of these breast models realistically represent the complexity of real breast structures and can be used to simulate patches from mammograms and DBT images that are indistinguishable from patches from the corresponding real breast images. The method can generate about 500 radiological patches (~30 mm × 30 mm) per day for AFC experiments on a single workstation. This is the first study to quantitatively validate the realism of simulated radiological breast images using direct blinded comparison with real data via the ROC paradigm with expert breast readers.
Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.
Schimpf, Paul H
2017-09-15
This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.
On the accuracy of personality judgment: a realistic approach.
Funder, D C
1995-10-01
The "accuracy paradigm" for the study of personality judgment provides an important, new complement to the "error paradigm" that dominated this area of research for almost 2 decades. The present article introduces a specific approach within the accuracy paradigm called the Realistic Accuracy Model (RAM). RAM begins with the assumption that personality traits are real attributes of individuals. This assumption entails the use of a broad array of criteria for the evaluation of personality judgment and leads to a model that describes accuracy as a function of the availability, detection, and utilization of relevant behavioral cues. RAM provides a common explanation for basic moderators of accuracy, sheds light on how these moderators interact, and outlines a research agenda that includes the reintegration of the study of error with the study of accuracy.
A Luenberger observer for reaction-diffusion models with front position data
NASA Astrophysics Data System (ADS)
Collin, Annabelle; Chapelle, Dominique; Moireau, Philippe
2015-11-01
We propose a Luenberger observer for reaction-diffusion models with propagating front features, and for data associated with the location of the front over time. Such models are considered in various application fields, such as electrophysiology, wild-land fire propagation and tumor growth modeling. Drawing our inspiration from image processing methods, we start by proposing an observer for the eikonal-curvature equation that can be derived from the reaction-diffusion model by an asymptotic expansion. We then carry over this observer to the underlying reaction-diffusion equation by an "inverse asymptotic analysis", and we show that the associated correction in the dynamics has a stabilizing effect for the linearized estimation error. We also discuss the extension to joint state-parameter estimation by using the earlier-proposed ROUKF strategy. We then illustrate and assess our proposed observer method with test problems pertaining to electrophysiology modeling, including with a realistic model of cardiac atria. Our numerical trials show that state estimation is directly very effective with the proposed Luenberger observer, while specific strategies are needed to accurately perform parameter estimation (as is usual with Kalman filtering used in a nonlinear setting), and we demonstrate two such successful strategies.
Delay Tracking of Spread-Spectrum Signals for Indoor Optical Ranging
Salido-Monzú, David; Martín-Gorostiza, Ernesto; Lázaro-Galilea, José Luis; Martos-Naya, Eduardo; Wieser, Andreas
2014-01-01
Delay tracking of spread-spectrum signals is widely used for ranging in radio frequency based navigation. Its use in non-coherent optical ranging, however, has not been extensively studied since optical channels are less subject to narrowband interference situations where these techniques become more useful. In this work, an early-late delay-locked loop adapted to indoor optical ranging is presented and analyzed. The specific constraints of free-space infrared channels in this context substantially differ from those typically considered in radio frequency applications. The tracking stage is part of an infrared differential range measuring system with application to mobile target indoor localization. Spread-spectrum signals are used in this context to provide accurate ranging while reducing the effect of multipath interferences. The performance of the stage regarding noise and dynamic errors is analyzed and validated, providing expressions that allow an adequate selection of the design parameters depending on the expected input signal characteristics. The behavior of the stage in a general multipath scenario is also addressed to estimate the multipath error bounds. The results, evaluated under realistic conditions corresponding to an 870 nm link with 25 MHz chip-rate, built with low-cost up-to-date devices, show that an overall error below 6% of a chip time can be achieved.
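To make the early-late tracking idea concrete, here is a heavily simplified, non-coherent early-late delay-locked loop sketch operating on a sampled spreading code; the half-chip spacing, the loop gain, and the assumption that the initial delay estimate lies within the discriminator's pull-in range are illustrative choices, not parameters from the paper.

```python
import numpy as np

def dll_track(rx, code, fs, chip_rate, tau0=0.0, spacing=0.5, gain=0.05, n_iter=200):
    """Estimate the code delay (in chips) of `rx` relative to the local replica `code`.
    rx and code are equal-length sampled sequences; fs is the sample rate in Hz."""
    samples_per_chip = fs / chip_rate
    tau = tau0
    for _ in range(n_iter):
        early = np.roll(code, int(round((tau - spacing) * samples_per_chip)))
        late = np.roll(code, int(round((tau + spacing) * samples_per_chip)))
        e = abs(np.dot(rx, early))
        l = abs(np.dot(rx, late))
        # If the late correlator is stronger, the local estimate lags the received
        # code, so increase tau (and vice versa): normalised early-late discriminator.
        tau += gain * (l - e) / (e + l)
    return tau
```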
Benchmarking homogenization algorithms for monthly data
NASA Astrophysics Data System (ADS)
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratiannil, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.; Willett, K.
2013-09-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies. The algorithms were validated against a realistic benchmark dataset. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including i) the centered root mean square error relative to the true homogeneous values at various averaging scales, ii) the error in linear trend estimates and iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data. Moreover, state-of-the-art relative homogenization algorithms developed to work with an inhomogeneous reference are shown to perform best. The study showed that currently automatic algorithms can perform as well as manual ones.
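Two of the performance metrics listed above can be written down compactly; the sketch below assumes `homogenized` and `truth` are numpy arrays holding the monthly series for one station and is offered only as an illustration of the metric definitions.

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centred root mean square error: RMS difference after removing each series' mean."""
    dh = homogenized - homogenized.mean()
    dt = truth - truth.mean()
    return np.sqrt(np.mean((dh - dt) ** 2))

def trend_error(homogenized, truth):
    """Difference in fitted linear trend (per time step) between the two series."""
    t = np.arange(len(truth))
    return np.polyfit(t, homogenized, 1)[0] - np.polyfit(t, truth, 1)[0]
```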
Zlotnik, Alexander; Gallardo-Antolín, Ascensión; Cuchí Alfaro, Miguel; Pérez Pérez, María Carmen; Montero Martínez, Juan Manuel
2015-08-01
Although emergency department visit forecasting can be of use for nurse staff planning, previous research has focused on models that lacked sufficient resolution and realistic error metrics for these predictions to be applied in practice. Using data from a 1100-bed specialized care hospital with 553,000 patients assigned to its healthcare area, forecasts with different prediction horizons, from 2 to 24 weeks ahead, with an 8-hour granularity, were generated using support vector regression, M5P, and stratified average time-series models with an open-source software package. As overstaffing and understaffing errors have different implications, error metrics and potential personnel monetary savings were calculated with a custom validation scheme, which simulated subsequent generation of predictions during a 4-year period. Results were then compared with a generalized estimating equation regression. Support vector regression and M5P models were found to be superior to the stratified average model with a 95% confidence interval. Our findings suggest that medium and severe understaffing situations could be reduced by more than an order of magnitude, and average yearly savings of up to €683,500 could be achieved, if dynamic nursing staff allocation were performed with support vector regression instead of the static staffing levels currently in use.
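Because over- and understaffing carry different costs, an asymmetric error metric is the natural scoring rule; the snippet below sketches such a metric alongside a support vector regression forecast, with the cost weights, lag features and SVR settings being assumptions for illustration rather than the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVR

def staffing_cost(y_true, y_pred, under_weight=3.0, over_weight=1.0):
    """Penalise under-forecasts (understaffing) more heavily than over-forecasts."""
    err = y_pred - y_true
    return np.mean(np.where(err < 0, under_weight * (-err), over_weight * err))

# Placeholder data: lagged 8-hour visit counts as features, next period as target.
rng = np.random.default_rng(0)
X = rng.poisson(40, size=(500, 6)).astype(float)
y = X.mean(axis=1) + rng.normal(0, 3, 500)

model = SVR(C=10.0, epsilon=1.0).fit(X[:400], y[:400])
print(staffing_cost(y[400:], model.predict(X[400:])))
```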
3D Face Modeling Using the Multi-Deformable Method
Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun
2012-01-01
In this paper, we focus on the problem of the accuracy of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, which is a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xing, Y; Macq, B; Bondar, L
Purpose: To quantify the accuracy in predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of errors. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for the nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitudes ranging, respectively, between −3 and 3 mm, −5 and 5 degrees, and −5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of PG profiles. The shifts were calculated between the planned (reference) position and the position given by the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for 1 case that combined translation and CT errors (p<0.0001, R ranged between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios the average BP shift ranged between −8±6.5 mm and 3±1.1 mm. Scenarios that considered combinations of the largest treatment errors were associated with large BP shifts. Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide reliable estimation of the Bragg peak position for complex error scenarios. Yafei Xing and Luiza Bondar are funded by BEWARE grants from the Walloon Region. The work presents simulation results for a prompt gamma camera prototype developed by IBA.
NASA Astrophysics Data System (ADS)
Straus, D. M.
2006-12-01
The transitions between portions of the state space of the large-scale flow are studied from daily wintertime data over the Pacific North America region using the NCEP reanalysis data set (54 winters) and very large suites of hindcasts made with the COLA atmospheric GCM with observed SST (55 members for each of 18 winters). The partition of the large-scale state space is guided by cluster analysis, whose statistical significance and relationship to SST are reviewed (Straus and Molteni, 2004; Straus, Corti and Molteni, 2006). The global nature of the flow through state space is studied using Markov chains (Crommelin, 2004). In particular, the non-diffusive part of the flow is contrasted between nature (small data sample) and the AGCM (large data sample). The intrinsic error growth associated with different portions of the state space is studied through sets of identical-twin AGCM simulations. The goal is to obtain realistic estimates of predictability times for large-scale transitions that should be useful in long-range forecasting.
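As a rough illustration of the Markov-chain diagnostic referred to above, the sketch below estimates a transition matrix from a daily sequence of cluster labels and inspects its antisymmetric part as one simple proxy for non-diffusive (preferred-direction) transitions; the random label sequence and the four-state setup are placeholders, and this is not the decomposition used by Crommelin (2004).

```python
import numpy as np

def transition_matrix(labels, n_states):
    """Row-stochastic transition matrix estimated from a sequence of state labels."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

labels = np.random.default_rng(0).integers(0, 4, size=5000)  # placeholder cluster sequence
P = transition_matrix(labels, n_states=4)
antisymmetric = 0.5 * (P - P.T)   # non-zero entries point to preferred transition directions
print(np.abs(antisymmetric).max())
```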
Continuous Indoor Positioning Fusing WiFi, Smartphone Sensors and Landmarks
Deng, Zhi-An; Wang, Guofeng; Qin, Danyang; Na, Zhenyu; Cui, Yang; Chen, Juan
2016-01-01
To exploit the complementary strengths of WiFi positioning, pedestrian dead reckoning (PDR), and landmarks, we propose a novel fusion approach based on an extended Kalman filter (EKF). For WiFi positioning, unlike previous fusion approaches setting measurement noise parameters empirically, we deploy a kernel density estimation-based model to adaptively measure the related measurement noise statistics. Furthermore, a trusted area of WiFi positioning defined by the fusion results of the previous step and WiFi signal outlier detection are exploited to reduce computational cost and improve WiFi positioning accuracy. For PDR, we integrate a gyroscope, an accelerometer, and a magnetometer to determine the user heading based on another EKF model. To reduce the accumulation error of PDR and enable continuous indoor positioning, not only the positioning results but also the heading estimates are recalibrated by indoor landmarks. Experimental results in a realistic indoor environment show that the proposed fusion approach achieves a substantial positioning accuracy improvement over individual positioning approaches, including PDR and WiFi positioning.
Effect of electrical coupling on ionic current and synaptic potential measurements.
Rabbah, Pascale; Golowasch, Jorge; Nadim, Farzan
2005-07-01
Recent studies have found electrical coupling to be more ubiquitous than previously thought, and coupling through gap junctions is known to play a crucial role in neuronal function and network output. In particular, current spread through gap junctions may affect the activation of voltage-dependent conductances as well as chemical synaptic release. Using voltage-clamp recordings of two strongly electrically coupled neurons of the lobster stomatogastric ganglion and conductance-based models of these neurons, we identified effects of electrical coupling on the measurement of leak and voltage-gated outward currents, as well as synaptic potentials. Experimental measurements showed that both leak and voltage-gated outward currents are recruited by gap junctions from neurons coupled to the clamped cell. Nevertheless, in spite of the strong coupling between these neurons, the errors made in estimating voltage-gated conductance parameters were relatively minor (<10%). Thus in many cases isolation of coupled neurons may not be required if a small degree of measurement error of the voltage-gated currents or the synaptic potentials is acceptable. Modeling results show, however, that such errors may be as high as 20% if the gap-junction position is near the recording site or as high as 90% when measuring smaller voltage-gated ionic currents. Paradoxically, improved space clamp increases the errors arising from electrical coupling because voltage control across gap junctions is poor for even the highest realistic coupling conductances. Furthermore, the common procedure of leak subtraction can add an extra error to the conductance measurement, the sign of which depends on the maximal conductance.
Energy-efficient quantum frequency estimation
NASA Astrophysics Data System (ADS)
Liuzzo-Scorpo, Pietro; Correa, Luis A.; Pollock, Felix A.; Górecka, Agnieszka; Modi, Kavan; Adesso, Gerardo
2018-06-01
The problem of estimating the frequency of a two-level atom in a noisy environment is studied. Our interest is to minimise both the energetic cost of the protocol and the statistical uncertainty of the estimate. In particular, we prepare a probe in a ‘GHZ-diagonal’ state by means of a sequence of qubit gates applied on an ensemble of n atoms in thermal equilibrium. Noise is introduced via a phenomenological time-non-local quantum master equation, which gives rise to a phase-covariant dissipative dynamics. After an interval of free evolution, the n-atom probe is globally measured at an interrogation time chosen to minimise the error bars of the final estimate. We model explicitly a measurement scheme which becomes optimal in a suitable parameter range, and are thus able to calculate the total energetic expenditure of the protocol. Interestingly, we observe that scaling up our multipartite entangled probes offers no precision enhancement when the total available energy E is limited. This is in stark contrast with standard frequency estimation, where larger probes, which are more sensitive but also more ‘expensive’ to prepare, are always preferred. Replacing E by the resource that places the most stringent limitation on each specific experimental setup would thus help to formulate more realistic metrological prescriptions.
In Search of Grid Converged Solutions
NASA Technical Reports Server (NTRS)
Lockard, David P.
2010-01-01
Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.
Simulation and mitigation of higher-order ionospheric errors in PPP
NASA Astrophysics Data System (ADS)
Zus, Florian; Deng, Zhiguo; Wickert, Jens
2017-04-01
We developed a rapid and precise algorithm to compute ionospheric phase advances in a realistic electron density field. The electron density field is derived from a plasmaspheric extension of the International Reference Ionosphere (Gulyaeva and Bilitza, 2012) and the magnetic field stems from the International Geomagnetic Reference Field. For specific station locations, elevation and azimuth angles, the ionospheric phase advances are stored in a look-up table. The higher-order ionospheric residuals are computed by forming the standard linear combination of the ionospheric phase advances. In a simulation study we examine how the higher-order ionospheric residuals leak into estimated station coordinates, clocks, zenith delays and tropospheric gradients in precise point positioning. The simulation study includes a few hundred globally distributed stations and covers the time period 1990-2015. We take a close look at the estimated zenith delays and tropospheric gradients, as they are considered a data source for meteorological and climate related research. We also show how the by-product of this simulation study, the look-up tables, can be used to mitigate higher-order ionospheric errors in practice. Gulyaeva, T.L., and Bilitza, D. Towards ISO Standard Earth Ionosphere and Plasmasphere Model. In: New Developments in the Standard Model, edited by R.J. Larsen, pp. 1-39, NOVA, Hauppauge, New York, 2012, available at https://www.novapublishers.com/catalog/product_info.php?products_id=35812
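For background, the "standard linear combination" referred to above is the first-order ionosphere-free combination of dual-frequency carrier phases, which removes only the 1/f² ionospheric term; the sketch below uses the GPS L1/L2 frequencies as an example and is not code from the paper.

```python
# GPS L1/L2 carrier frequencies used as an illustrative example.
f1, f2 = 1575.42e6, 1227.60e6  # Hz

def ionosphere_free(L1, L2):
    """First-order ionosphere-free combination of two carrier-phase observables
    expressed in units of length; only the 1/f^2 ionospheric term cancels."""
    return (f1**2 * L1 - f2**2 * L2) / (f1**2 - f2**2)

# Higher-order ionospheric terms scale as 1/f^3 and 1/f^4 and therefore survive
# this combination; look-up tables of modelled phase advances (as described
# above) allow such residuals to be subtracted from the observations.
```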
Zhang, Shengzhi; Yu, Shuai; Liu, Chaojun; Liu, Sheng
2016-06-01
Tracking the position of a pedestrian is urgently demanded when the most commonly used GPS (Global Positioning System) is unavailable. Benefiting from their small size, low power consumption, and relatively high reliability, micro-electro-mechanical system sensors are well suited for GPS-denied indoor pedestrian heading estimation. In this paper, a real-time miniature orientation determination system (MODS) was developed for indoor heading and trajectory tracking based on a novel dual-linear Kalman filter. The proposed filter prevents the geomagnetic distortions that affect the heading from corrupting the pitch and roll estimates. A robust calibration approach was designed to improve the accuracy of the sensor measurements based on a unified sensor model. Online tests were performed on the MODS with an improved turntable. The results demonstrate that the average RMSE (root-mean-square error) of heading estimation is less than 1°. Indoor heading experiments were carried out with the MODS mounted on the shoe of a pedestrian. In addition, we integrated the MODS into an indoor pedestrian dead-reckoning application as an example of its utility in realistic use. A human attitude-based walking model was developed to calculate the walking distance. Test results indicate that the mean percentage error of indoor trajectory tracking is 2% of the total walking distance.
RF transient analysis and stabilization of the phase and energy of the proposed PIP-II LINAC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, J. P.; Chase, B. E.
This paper describes a recent effort to develop and benchmark a simulation tool for the analysis of RF transients and their compensation in an H- linear accelerator. Existing tools in this area either focus on electron LINACs or lack fundamental details about the LLRF system that are necessary to provide realistic performance estimates. We begin with a discussion of our computational models, followed by benchmarking against existing beam-dynamics codes and measured data. We then analyze the effect of RF transients and their compensation in the PIP-II LINAC, followed by an analysis of calibration errors and how a Newton's-method-based feedback scheme can be used to regulate the beam energy to within the specified limits.
Zhang, Chun-Hui; Zhang, Chun-Mei; Guo, Guang-Can; Wang, Qin
2018-02-19
At present, most measurement-device-independent quantum key distribution (MDI-QKD) implementations are based on weak coherent sources (WCS) and are limited in transmission distance under realistic experimental conditions, e.g., when considering finite-size-key effects. Hence, in this paper, we propose a new biased decoy-state scheme using heralded single-photon sources (HSPS) for three-intensity MDI-QKD, where we prepare the decoy pulses only in the X basis and adopt both collective constraints and joint parameter estimation techniques. Compared with former schemes based on WCS or HSPS, after implementing full parameter optimization, our scheme gives a distinctly reduced quantum bit error rate in the X basis and thus shows excellent performance, especially when the data size is relatively small.
NASA Astrophysics Data System (ADS)
Bindschadler, Michael; Modgil, Dimple; Branch, Kelley R.; La Riviere, Patrick J.; Alessio, Adam M.
2014-04-01
Myocardial blood flow (MBF) can be estimated from dynamic contrast enhanced (DCE) cardiac CT acquisitions, leading to quantitative assessment of regional perfusion. The need for low radiation dose and the lack of consensus on MBF estimation methods motivate this study to refine the selection of acquisition protocols and models for CT-derived MBF. DCE cardiac CT acquisitions were simulated for a range of flow states (MBF = 0.5, 1, 2, 3 ml/(min g); cardiac output = 3, 5, 8 L/min). Patient kinetics were generated by a mathematical model of iodine exchange incorporating numerous physiological features including heterogeneous microvascular flow, permeability and capillary contrast gradients. CT acquisitions were simulated for multiple realizations of realistic x-ray flux levels. CT acquisitions that reduce radiation exposure were implemented by varying both temporal sampling (1, 2, and 3 s sampling intervals) and tube currents (140, 70, and 25 mAs). For all acquisitions, we compared three quantitative MBF estimation methods (a two-compartment model, an axially-distributed model, and the adiabatic approximation to the tissue homogeneous model) and a qualitative slope-based method. In total, over 11 000 time-attenuation curves were used to evaluate MBF estimation in multiple patient and imaging scenarios. After iodine-based beam hardening correction, the slope method consistently underestimated flow by on average 47.5%, and the quantitative models provided estimates with less than 6.5% average bias and increasing variance with increasing dose reductions. The three quantitative models performed equally well, offering estimates with essentially identical root mean squared error (RMSE) for matched acquisitions. MBF estimates using the qualitative slope method were inferior in terms of bias and RMSE compared to the quantitative methods. MBF estimation error was equal at matched dose reductions for all quantitative methods and the range of techniques evaluated. This suggests that there is no particular advantage of one quantitative estimation method over another, nor of dose reduction via tube current reduction over temporal sampling reduction. These data are important for optimizing implementation of cardiac dynamic CT in clinical practice and in prospective CT MBF trials.
Self-calibration method without joint iteration for distributed small satellite SAR systems
NASA Astrophysics Data System (ADS)
Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan
2013-12-01
The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions in the presence of large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In this modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, phase error estimation can be performed well independently of position errors. Finally, position errors are estimated based on a Taylor-series expansion. Meanwhile, joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence, which occurs in the conventional method, can be avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.
Experimental quantum verification in the presence of temporally correlated noise
NASA Astrophysics Data System (ADS)
Mavadia, S.; Edmunds, C. L.; Hempel, C.; Ball, H.; Roy, F.; Stace, T. M.; Biercuk, M. J.
2018-02-01
Growth in the capabilities of quantum information hardware mandates access to techniques for performance verification that function under realistic laboratory conditions. Here we experimentally characterise the impact of common temporally correlated noise processes on both randomised benchmarking (RB) and gate-set tomography (GST). Our analysis highlights the role of sequence structure in enhancing or suppressing the sensitivity of quantum verification protocols to either slowly or rapidly varying noise, which we treat in the limiting cases of quasi-DC miscalibration and white noise power spectra. We perform experiments with a single trapped 171Yb+ ion-qubit and inject engineered noise (∝ σ_z) to probe protocol performance. Experiments on RB validate predictions that measured fidelities over sequences are described by a gamma distribution varying between approximately Gaussian and a broad, highly skewed distribution for rapidly and slowly varying noise, respectively. Similarly, we find a strong gate-set dependence of default experimental GST procedures in the presence of correlated errors, leading to significant deviations between estimated and calculated diamond distances in the presence of correlated σ_z errors. Numerical simulations demonstrate that expansion of the gate set to include negative rotations can suppress these discrepancies and increase reported diamond distances by orders of magnitude for the same error processes. Similar effects do not occur for correlated σ_x or σ_y errors or depolarising noise processes, highlighting the critical role of the interplay between the selected gate set and the gauge optimisation process in determining the meaning of the reported diamond norm in correlated noise environments.
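The following deliberately crude numpy toy is only meant to illustrate why slowly varying (quasi-DC) noise produces a broad, skewed distribution of per-sequence fidelities while rapidly varying noise produces a narrow, near-Gaussian one; it ignores the actual gate sequences, randomisation and measurement model of the experiment, and all numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_seq, n_gates, n_shots, sigma = 500, 100, 200, 0.01

def sequence_infidelity(detuning_per_gate):
    # small-angle coherent accumulation of per-gate phase errors
    phase = detuning_per_gate.sum(axis=-1)
    return np.sin(phase / 2.0) ** 2

# Rapidly varying ("white") noise: fresh error every gate and every shot;
# averaging over shots concentrates the per-sequence fidelity.
fast = np.array([
    sequence_infidelity(rng.normal(0, sigma, (n_shots, n_gates))).mean()
    for _ in range(n_seq)
])

# Slowly varying (quasi-DC) noise: one miscalibration per sequence, common to all
# shots and gates, so errors add coherently and are not averaged away.
slow = np.array([
    sequence_infidelity(np.full((n_shots, n_gates), rng.normal(0, sigma))).mean()
    for _ in range(n_seq)
])

for name, x in [("fast", fast), ("slow", slow)]:
    skew = ((x - x.mean()) ** 3).mean() / x.std() ** 3
    print(f"{name}: mean={x.mean():.4f}  std={x.std():.4f}  skew={skew:.2f}")
```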
Improving Barotropic Tides by Two-way Nesting High and Low Resolution Domains
NASA Astrophysics Data System (ADS)
Jeon, C. H.; Buijsman, M. C.; Wallcraft, A. J.; Shriver, J. F.; Hogan, P. J.; Arbic, B. K.; Richman, J. G.
2017-12-01
In a realistically forced global ocean model, relatively large sea-surface-height root-mean-square (RMS) errors are observed in the North Atlantic near the Hudson Strait. These may be associated with large tidal resonances interacting with coastal bathymetry that are not correctly represented with a low resolution grid. This issue can be overcome by using high resolution grids, but at a high computational cost. In this paper we apply two-way nesting as an alternative solution. This approach applies high resolution to the area with large RMS errors and a lower resolution to the rest. It is expected to improve the tidal solution as well as reduce the computational cost. To minimize modification of the original source codes of the ocean circulation model (HYCOM), we apply the coupler OASIS3-MCT. This coupler is used to exchange barotropic pressures and velocity fields through its APIs (Application Programming Interface) between the parent and the child components. The developed two-way nesting framework has been validated with an idealized test case where the parent and the child domains have identical grid resolutions. The result of the idealized case shows very small RMS errors between the child and parent solutions. We plan to show results for a case with realistic tidal forcing in which the resolution of the child grid is three times that of the parent grid. The numerical results of this realistic case are compared to TPXO data.
Schweighofer, N; Spoelstra, J; Arbib, M A; Kawato, M
1998-01-01
The cerebellum is essential for the control of multijoint movements; when the cerebellum is lesioned, the performance error is more than the summed errors produced by single joints. In the companion paper (Schweighofer et al., 1998), a functional anatomical model for visually guided arm movement was proposed. The model comprised a basic feedforward/feedback controller with realistic transmission delays and was connected to a two-link, six-muscle, planar arm. In the present study, we examined the role of the cerebellum in reaching movements by embedding a novel, detailed cerebellar neural network in this functional control model. This allowed us to derive realistic cerebellar inputs and to assess the role of the cerebellum in learning to control the arm. This cerebellar network learned the part of the inverse dynamics of the arm not provided by the basic feedforward/feedback controller. Despite realistically low inferior olive firing rates and noisy mossy fibre inputs, the model could reduce the error between intended and planned movements. The responses of the different cell groups were comparable to those of biological cell groups. In particular, the modelled Purkinje cells exhibited directional tuning after learning, and the parallel fibres, due to their length, provided Purkinje cells with the input required for this coordination task. The inferior olive responses contained two different components; the earlier response, locked to movement onset, was always present and the later response disappeared after learning. These results support the theory that the cerebellum is involved in motor learning.
A fast analytical undulator model for realistic high-energy FEL simulations
NASA Astrophysics Data System (ADS)
Tatchyn, R.; Cremer, T.
1997-02-01
A number of leading FEL simulation codes used for modeling gain in the ultralong undulators required for SASE saturation in the <100 Å range employ simplified analytical models both for field and error representations. Although it is recognized that both the practical and theoretical validity of such codes could be enhanced by incorporating realistic undulator field calculations, the computational cost of doing this can be prohibitive, especially for point-to-point integration of the equations of motion through each undulator period. In this paper we describe a simple analytical model suitable for modeling realistic permanent magnet (PM), hybrid/PM, and non-PM undulator structures, and discuss selected techniques for minimizing computation time.
NASA Astrophysics Data System (ADS)
Xiong, Qiufen; Hu, Jianglin
2013-05-01
The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into a climatic mean and an anomaly component. A spatial interpolation scheme is developed that combines a 3D thin-plate spline scheme for the climatological mean with a 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained by the 3D thin-plate spline scheme because the decrease of Min/Max temperature with elevation is robust and reliable on long time-scales. The anomaly field is only weakly related to elevation variation, and the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybridized interpolation method, a daily Min/Max temperature dataset covering the domain from 99°E to 123°E and from 24°N to 36°N with 0.1° longitudinal and latitudinal resolution is obtained by utilizing daily Min/Max temperature data from three kinds of station observations, namely national reference climatological stations, basic meteorological observing stations and ordinary meteorological observing stations, in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. The error of the gridded dataset is assessed by examining cross-validation statistics. The results show that the daily Min/Max temperature interpolation not only has a high correlation coefficient (0.99) and interpolation efficiency (0.98) but also a mean bias error of 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C. For the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C. Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley as realistic, continuous gridded data with 0.1° × 0.1° spatial resolution and daily temporal resolution. The primary factors influencing the dataset precision are elevation and terrain complexity. In general, the gridded dataset has a relatively high precision in plains and flatlands and a relatively low precision in mountainous areas.
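A minimal sketch of the hybrid interpolation idea, assuming synthetic station data and using scipy's thin-plate-spline radial basis functions for the 3D climatological mean plus a single-pass Gaussian (Barnes-style) weighting for the 2D anomaly; the station values, the smoothing parameters and the single-pass simplification are assumptions for illustration only.

```python
import numpy as np
from scipy.interpolate import Rbf

# Hypothetical station data: lon, lat, elevation (km), plus a climatological-mean
# temperature and a daily anomaly at each station. Values are synthetic.
rng = np.random.default_rng(1)
lon, lat = rng.uniform(99, 123, 300), rng.uniform(24, 36, 300)
elev = rng.uniform(0, 3, 300)
clim = 25.0 - 6.0 * elev - 0.4 * (lat - 24)          # lapse-rate-like mean field
anom = rng.normal(0, 1.5, 300)                        # weakly elevation-dependent anomaly

# 3D thin-plate spline for the climatological mean (captures the elevation dependence).
mean_interp = Rbf(lon, lat, elev, clim, function="thin_plate", smooth=0.1)

# 2D Barnes-style analysis for the anomaly: Gaussian distance weighting, single pass.
def barnes(qlon, qlat, kappa=1.0):
    d2 = (lon - qlon) ** 2 + (lat - qlat) ** 2
    w = np.exp(-d2 / kappa)
    return np.sum(w * anom) / np.sum(w)

# Evaluate the hybrid estimate on one 0.1-degree grid cell (elevation from a DEM in practice).
glon, glat, gelev = 110.05, 30.05, 1.2
t_estimate = mean_interp(glon, glat, gelev) + barnes(glon, glat)
print(f"interpolated daily temperature: {float(t_estimate):.2f} °C")
```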
Ground truth seismic events and location capability at Degelen mountain, Kazakhstan
Trabant, C.; Thurber, C.; Leith, W.
2002-01-01
We utilized nuclear explosions from the Degelen Mountain sub-region of the Semipalatinsk Test Site (STS), Kazakhstan, to assess seismic location capability directly. Excellent ground truth information for these events was either known or was estimated from maps of the Degelen Mountain adit complex. Origin times were refined for events for which absolute origin time information was unknown using catalog arrival times, our ground truth location estimates, and a time baseline provided by fixing known origin times during a joint hypocenter determination (JHD). Precise arrival time picks were determined using a waveform cross-correlation process applied to the available digital data. These data were used in a JHD analysis. We found that very accurate locations were possible when high precision, waveform cross-correlation arrival times were combined with JHD. Relocation with our full digital data set resulted in a mean mislocation of 2 km and a mean 95% confidence ellipse (CE) area of 6.6 km² (90% CE: 5.1 km²); however, only 5 of the 18 computed error ellipses actually covered the associated ground truth location estimate. To test a more realistic nuclear test monitoring scenario, we applied our JHD analysis to a set of seven events (one fixed) using data only from seismic stations within 40° epicentral distance. Relocation with these data resulted in a mean mislocation of 7.4 km, with four of the 95% error ellipses covering less than 570 km² (90% CE: 438 km²), and the other two covering 1730 and 8869 km² (90% CE: 1331 and 6822 km²). Location uncertainties calculated using JHD often underestimated the true error, but a circular region with a radius equal to the mislocation covered less than 1000 km² for all events having more than three observations. © 2002 Elsevier Science B.V. All rights reserved.
Employing an Incentive Spirometer to Calibrate Tidal Volumes Estimated from a Smartphone Camera.
Reyes, Bersain A; Reljin, Natasa; Kong, Youngsun; Nam, Yunyoung; Ha, Sangho; Chon, Ki H
2016-03-18
A smartphone-based tidal volume (V(T)) estimator was recently introduced by our research group, where an Android application provides a chest movement signal whose peak-to-peak amplitude is highly correlated with reference V(T) measured by a spirometer. We found a Normalized Root Mean Squared Error (NRMSE) of 14.998% ± 5.171% (mean ± SD) when the smartphone measures were calibrated using spirometer data. However, the availability of a spirometer device for calibration is not realistic outside clinical or research environments. In order to be used by the general population on a daily basis, a simple calibration procedure not relying on specialized devices is required. In this study, we propose taking advantage of the linear correlation between smartphone measurements and V(T) to obtain a calibration model using information computed while the subject breathes through a commercially-available incentive spirometer (IS). Experiments were performed on twelve (N = 12) healthy subjects. In addition to corroborating findings from our previous study using a spirometer for calibration, we found that the calibration procedure using an IS resulted in a fixed bias of -0.051 L and a RMSE of 0.189 ± 0.074 L corresponding to 18.559% ± 6.579% when normalized. Although it has a small underestimation and slightly increased error, the proposed calibration procedure using an IS has the advantages of being simple, fast, and affordable. This study supports the feasibility of developing a portable smartphone-based breathing status monitor that provides information about breathing depth, in addition to the more commonly estimated respiratory rate, on a daily basis.
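A minimal sketch of the calibration step described above, assuming hypothetical paired amplitude/volume samples: a least-squares line maps the smartphone amplitude to tidal volume, and RMSE plus a range-normalised NRMSE are reported (the paper's exact normalisation may differ).

```python
import numpy as np

# Hypothetical paired data: smartphone chest-displacement amplitudes (a.u.) and
# reference tidal volumes (L) recorded while breathing through the incentive spirometer.
amplitude = np.array([0.8, 1.1, 1.5, 1.9, 2.4, 2.9, 3.3, 3.8])
vt_ref    = np.array([0.45, 0.60, 0.85, 1.05, 1.30, 1.55, 1.80, 2.05])

# Least-squares calibration line: VT ≈ slope * amplitude + intercept.
slope, intercept = np.polyfit(amplitude, vt_ref, 1)

def estimate_vt(a):
    return slope * a + intercept

# Error metrics in the style reported by the study (RMSE and range-normalised NRMSE).
pred = estimate_vt(amplitude)
rmse = np.sqrt(np.mean((pred - vt_ref) ** 2))
nrmse = 100.0 * rmse / (vt_ref.max() - vt_ref.min())
print(f"RMSE = {rmse:.3f} L, NRMSE = {nrmse:.1f} %")
```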
Computer-aided diagnosis with potential application to rapid detection of disease outbreaks.
Burr, Tom; Koster, Frederick; Picard, Rick; Forslund, Dave; Wokoun, Doug; Joyce, Ed; Brillman, Judith; Froman, Phil; Lee, Jack
2007-04-15
Our objectives are to quickly interpret symptoms of emergency patients to identify likely syndromes and to improve population-wide disease outbreak detection. We constructed a database of 248 syndromes, each syndrome having an estimated probability of producing any of 85 symptoms, with some two-way, three-way, and five-way probabilities reflecting correlations among symptoms. Using these multi-way probabilities in conjunction with an iterative proportional fitting algorithm allows estimation of full conditional probabilities. Combining these conditional probabilities with misdiagnosis error rates and incidence rates via Bayes theorem, the probability of each syndrome is estimated. We tested a prototype of computer-aided differential diagnosis (CADDY) on simulated data and on more than 100 real cases, including West Nile Virus, Q fever, SARS, anthrax, plague, tularaemia and toxic shock cases. We conclude that: (1) it is important to determine whether the unrecorded positive status of a symptom means that the status is negative or that the status is unknown; (2) inclusion of misdiagnosis error rates produces more realistic results; (3) the naive Bayes classifier, which assumes all symptoms behave independently, is slightly outperformed by CADDY, which includes available multi-symptom information on correlations; as more information regarding symptom correlations becomes available, the advantage of CADDY over the naive Bayes classifier should increase; (4) overlooking low-probability, high-consequence events is less likely if the standard output summary is augmented with a list of rare syndromes that are consistent with observed symptoms, and (5) accumulating patient-level probabilities across a larger population can aid in biosurveillance for disease outbreaks. © 2007 John Wiley & Sons, Ltd.
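The Bayes step described above can be sketched as follows for a toy problem with three hypothetical syndromes and three symptoms; it uses the naive (independent-symptom) simplification that CADDY improves upon, and the priors, symptom probabilities and misdiagnosis rate are invented for illustration.

```python
import numpy as np

# Toy version of the Bayes step: hypothetical syndromes with incidence rates (priors)
# and per-symptom conditional probabilities; symptoms are treated as conditionally
# independent here (the naive-Bayes simplification CADDY improves upon).
syndromes = ["flu-like", "anthrax", "q-fever"]
prior = np.array([0.90, 0.001, 0.01])                       # incidence-based priors (unnormalised)
#                 fever  cough  rash
p_symptom = np.array([[0.80, 0.70, 0.05],                   # flu-like
                      [0.95, 0.60, 0.20],                   # anthrax
                      [0.90, 0.30, 0.10]])                  # q-fever
misdiagnosis = 0.05                                         # chance a recorded symptom is wrong

observed = np.array([1, 1, 0])                              # fever yes, cough yes, rash no

# Likelihood of the observed symptom vector under each syndrome, blending the
# syndrome-specific probability with the misdiagnosis error rate.
p_eff = (1 - misdiagnosis) * p_symptom + misdiagnosis * (1 - p_symptom)
likelihood = np.prod(np.where(observed == 1, p_eff, 1 - p_eff), axis=1)

posterior = prior * likelihood
posterior /= posterior.sum()
for name, p in zip(syndromes, posterior):
    print(f"P({name} | symptoms) = {p:.4f}")
```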
Brain, Richard A; Teed, R Scott; Bang, JiSu; Thorbek, Pernille; Perine, Jeff; Peranginangin, Natalia; Kim, Myoungwoo; Valenti, Ted; Chen, Wenlin; Breton, Roger L; Rodney, Sara I; Moore, Dwayne R J
2015-01-01
Simple, deterministic screening-level assessments that are highly conservative by design facilitate a rapid initial screening to determine whether a pesticide active ingredient has the potential to adversely affect threatened or endangered species. If a worst-case estimate of pesticide exposure is below a very conservative effects metric (e.g., the no observed effects concentration of the most sensitive tested surrogate species) then the potential risks are considered de minimis and unlikely to jeopardize the existence of a threatened or endangered species. Thus by design, such compounded layers of conservatism are intended to minimize potential Type II errors (failure to reject a false null hypothesis of de minimis risk), but correspondingly increase Type I errors (falsely rejecting a true null hypothesis of de minimis risk). Because of the conservatism inherent in screening-level risk assessments, higher-tier scientific information and analyses that provide additional environmental realism can be applied in cases where a potential risk has been identified. This information includes community-level effects data, environmental fate and exposure data, monitoring data, geospatial location and proximity data, species biology data, and probabilistic exposure and population models. Given that the definition of "risk" includes likelihood and magnitude of effect, higher-tier risk assessments should use probabilistic techniques that more accurately and realistically characterize risk. Moreover, where possible and appropriate, risk assessments should focus on effects at the population and community levels of organization rather than the more traditional focus on the organism level. This document provides a review of some types of higher-tier data and assessment refinements available to more accurately and realistically evaluate potential risks of pesticide use to threatened and endangered species. © 2014 SETAC.
Karlsson, Kristin; Lax, Ingmar; Lindbäck, Elias; Poludniowski, Gavin
2017-09-01
Geometrical uncertainties can result in a delivered dose to the tumor different from that estimated in the static treatment plan. The purpose of this project was to investigate the accuracy of the dose calculated to the clinical target volume (CTV) with the dose-shift approximation, in stereotactic body radiation therapy (SBRT) of lung tumors, considering setup errors and breathing motion. The dose-shift method was compared with a beam-shift method with dose recalculation. Included were 10 patients (10 tumors) selected to represent a variety of SBRT-treated lung tumors in terms of tumor location, CTV volume, and tumor density. An in-house developed toolkit within a treatment planning system allowed shifting either the dose matrix or the beam isocenter (with dose recalculation) to simulate setup errors and breathing motion. Setup shifts of different magnitudes (up to 10 mm) and directions, as well as breathing with different peak-to-peak amplitudes (up to 10:5:5 mm), were modeled. The resulting dose-volume histograms (DVHs) were recorded and dose statistics were extracted. Generally, both the dose-shift and beam-shift methods resulted in calculated doses lower than the static planned dose, although the minimum dose (D98%) exceeded the prescribed dose in all cases for setup shifts up to 5 mm. The dose-shift method also generally underestimated the dose compared with the beam-shift method. For clinically realistic systematic displacements of less than 5 mm, the results demonstrated that in the minimum-dose region within the CTV, the dose-shift method was accurate to 2% (root-mean-square error). Breathing motion only marginally degraded the dose distributions. Averaged over the patients and shift directions, the dose-shift approximation was determined to be accurate to approximately 2% (RMS) within the CTV, for clinically relevant geometrical uncertainties in SBRT of lung tumors.
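A minimal sketch of the dose-shift approximation on a synthetic dose grid, assuming a spherical CTV and using scipy.ndimage.shift to translate the dose matrix; the dose model, voxel size and shift convention are illustrative assumptions, and a beam-shift comparison would require an actual dose recalculation engine.

```python
import numpy as np
from scipy.ndimage import shift

# Hypothetical static dose grid (Gy) with 2 mm voxels and a spherical CTV mask.
voxel_mm = 2.0
grid = np.indices((60, 60, 60)) * voxel_mm
center = np.array([60.0, 60.0, 60.0])
r = np.sqrt(((grid.transpose(1, 2, 3, 0) - center) ** 2).sum(axis=-1))
dose = 50.0 * np.exp(-(np.clip(r - 15.0, 0, None) / 20.0) ** 2)   # plateau plus falloff
ctv = r <= 15.0

def d98(dose_grid, mask):
    """Minimum-dose metric: dose received by 98% of the CTV volume."""
    return np.percentile(dose_grid[mask], 2)

# Dose-shift approximation: translate the dose matrix by the setup error (in voxels)
# while keeping the CTV fixed; a beam-shift method would instead recalculate the dose.
setup_error_mm = np.array([5.0, 0.0, 0.0])
shifted_dose = shift(dose, -setup_error_mm / voxel_mm, order=1, mode="nearest")

print(f"static D98%  = {d98(dose, ctv):.1f} Gy")
print(f"shifted D98% = {d98(shifted_dose, ctv):.1f} Gy")
```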
Constraints on a scale-dependent bias from galaxy clustering
NASA Astrophysics Data System (ADS)
Amendola, L.; Menegoni, E.; Di Porto, C.; Corsi, M.; Branchini, E.
2017-01-01
We forecast the future constraints on scale-dependent parametrizations of galaxy bias and their impact on the estimate of cosmological parameters from the power spectrum of galaxies measured in a spectroscopic redshift survey. For the latter we assume a wide survey at relatively large redshifts, similar to the planned Euclid survey, as the baseline for future experiments. To assess the impact of the bias we perform a Fisher matrix analysis, and we adopt two different parametrizations of scale-dependent bias. The fiducial models for galaxy bias are calibrated using mock catalogs of Hα-emitting galaxies mimicking the expected properties of the objects that will be targeted by the Euclid survey. In our analysis we have obtained two main results. First of all, allowing for a scale-dependent bias does not significantly increase the errors on the other cosmological parameters apart from the rms amplitude of density fluctuations, σ8, and the growth index γ, whose uncertainties increase by a factor up to 2, depending on the bias model adopted. Second, we find that the accuracy in the linear bias parameter b0 can be estimated to within 1%-2% at various redshifts regardless of the fiducial model. The nonlinear bias parameters have significantly larger errors that depend on the model adopted. Despite this, in the more realistic scenarios departures from the simple linear bias prescription can be detected with a ~2σ significance at each redshift explored. Finally, we use the Fisher matrix formalism to assess the impact of assuming an incorrect bias model and find that the systematic errors induced on the cosmological parameters are similar to or even larger than the statistical ones.
Jiřík, Miroslav; Bartoš, Martin; Tomášek, Petr; Malečková, Anna; Kural, Tomáš; Horáková, Jana; Lukáš, David; Suchý, Tomáš; Kochová, Petra; Hubálek Kalbáčová, Marie; Králíčková, Milena; Tonar, Zbyněk
2018-06-01
Quantification of the structure and composition of biomaterials using micro-CT requires image segmentation due to the low contrast and overlapping radioopacity of biological materials. The amount of bias introduced by segmentation procedures is generally unknown. We aim to develop software that generates three-dimensional models of fibrous and porous structures with known volumes, surfaces, lengths, and object counts in fibrous materials and to provide a software tool that calibrates quantitative micro-CT assessments. Virtual image stacks were generated using the newly developed software TeIGen, enabling the simulation of micro-CT scans of unconnected tubes, connected tubes, and porosities. A realistic noise generator was incorporated. Forty image stacks were evaluated using micro-CT, and the error between the true known and estimated data was quantified. Starting with geometric primitives, the error of the numerical estimation of surfaces and volumes was eliminated, thereby enabling the quantification of volumes and surfaces of colliding objects. Analysis of the sensitivity of the thresholding upon parameters of generated testing image sets revealed the effects of decreasing resolution and increasing noise on the accuracy of the micro-CT quantification. The size of the error increased with decreasing resolution when the voxel size exceeded 1/10 of the typical object size, which simulated the effect of the smallest details that could still be reliably quantified. Open-source software for calibrating quantitative micro-CT assessments by producing and saving virtually generated image data sets with known morphometric data was made freely available to researchers involved in morphometry of three-dimensional fibrillar and porous structures in micro-CT scans. © 2018 Wiley Periodicals, Inc.
Field evaluation of distance-estimation error during wetland-dependent bird surveys
Nadeau, Christopher P.; Conway, Courtney J.
2012-01-01
Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (x̄_error = −9 m, s.d._error = 47 m) and when estimating distances to real birds during field trials (x̄_error = 39 m, s.d._error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including the inclusion of such models into distance-analysis software.
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1997-01-01
We proposed a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and is required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and amplification or bias correction of forecast anomalies. Characterizing and decomposing forecast error in this way has two important applications, which we term the assessment application and the objective analysis application. For the assessment application, our approach results in new objective measures of forecast skill which are more in line with subjective measures of forecast skill and which are useful in validating models and diagnosing their shortcomings. With regard to the objective analysis application, meteorological analysis schemes balance forecast error and observational error to obtain an optimal analysis. Presently, representations of the error covariance matrix used to measure the forecast error are severely limited. For the objective analysis application our approach will improve analyses by providing a more realistic measure of the forecast error. We expect, a priori, that our approach should greatly improve the utility of remotely sensed data which have relatively high horizontal resolution, but which are indirectly related to the conventional atmospheric variables. In this project, we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically, we study the forecast errors of the sea level pressure (SLP) and 500 hPa geopotential height fields for forecasts of the short and medium range. Since the forecasts are generated by the GEOS (Goddard Earth Observing System) data assimilation system with and without ERS 1 scatterometer data, these preliminary studies serve several purposes. They (1) provide a testbed for the use of the distortion representation of forecast errors, (2) act as one means of validating the GEOS data assimilation system and (3) help to describe the impact of the ERS 1 scatterometer data.
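A one-dimensional toy of the decomposition idea, assuming a forecast that is a displaced copy of the analysis plus noise: the displacement (phase error) is estimated by cross-correlation and the remainder is reported as the residual error. The fields, shift and noise level below are invented for illustration.

```python
import numpy as np

# 1D toy of the distortion idea: a "forecast" field that is mostly a displaced copy of
# the "analysis" field. We estimate the displacement by cross-correlation, treat it as
# the phase-error part, and measure the residual error after removing it.
x = np.linspace(0, 4 * np.pi, 400)
analysis = np.sin(x) + 0.5 * np.sin(3 * x)
true_shift = 12                                     # grid points of phase error
forecast = np.roll(analysis, true_shift) + np.random.default_rng(2).normal(0, 0.05, x.size)

# Phase error: lag that maximises the (circular) cross-correlation.
lags = np.arange(-30, 31)
cc = [np.dot(np.roll(forecast, -lag), analysis) for lag in lags]
est_shift = lags[int(np.argmax(cc))]

raw_rmse = np.sqrt(np.mean((forecast - analysis) ** 2))
residual_rmse = np.sqrt(np.mean((np.roll(forecast, -est_shift) - analysis) ** 2))
print(f"estimated phase error: {est_shift} points (true {true_shift})")
print(f"RMSE before/after removing phase error: {raw_rmse:.3f} / {residual_rmse:.3f}")
```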
NASA Astrophysics Data System (ADS)
Zhang, Liangjing; Dobslaw, Henryk; Dahle, Christoph; Thomas, Maik; Neumayer, Karl-Hans; Flechtner, Frank
2017-04-01
By operating for more than one decade now, the GRACE satellite mission provides valuable information on total water storage (TWS) for hydrological and hydro-meteorological applications. The increasing interest in the use of GRACE-based TWS requires an in-depth assessment of the reliability of the outputs and also of their uncertainties. Through years of development, different post-processing methods have been suggested for TWS estimation. However, since GRACE offers a unique way to provide TWS at high spatial and temporal scales, there is no global ground truth data available to fully validate the results. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-type gravity field time-series based on realistic orbits and instrument error assumptions as well as background error assumptions out of the updated ESA Earth System Model. Three non-isotropic filter methods from Kusche (2007) and a combined filter from DDK1 and DDK3 based on the ground tracks are tested. Rescaling factors estimated from five different hydrological models and the ensemble median are applied to the post-processed simulated GRACE-type TWS estimates to correct for bias and leakage. Time-variant rescaling factors, i.e. monthly scaling factors and separate scaling factors for seasonal and long-term variations, are investigated as well. Since TWS anomalies out of the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment (Zhang et al., 2016) and will subsequently recommend a processing strategy that shall also be applied for planned GRACE and GRACE-FO Level-3 products for terrestrial applications provided by GFZ. Kusche, J., 2007: Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81 (11), 733-749, doi:10.1007/s00190-007-0143-3. Zhang, L., Dobslaw, H., Thomas, M., 2016: Globally gridded terrestrial water storage variations from GRACE satellite gravimetry for hydrometeorological applications. Geophysical Journal International, 206 (1), 368-378, doi:10.1093/gji/ggw153.
Application of empirical and dynamical closure methods to simple climate models
NASA Astrophysics Data System (ADS)
Padilla, Lauren Elizabeth
This dissertation applies empirically- and physically-based methods for closure of uncertain parameters and processes to three model systems that lie on the simple end of climate model complexity. Each model isolates one of three sources of closure uncertainty: uncertain observational data, large dimension, and wide-ranging length scales. They serve as efficient test systems toward extension of the methods to more realistic climate models. The empirical approach uses the Unscented Kalman Filter (UKF) to estimate the transient climate sensitivity (TCS) parameter in a globally-averaged energy balance model. Uncertainty in climate forcing and historical temperature make TCS difficult to determine. A range of probabilistic estimates of TCS computed for various assumptions about past forcing and natural variability corroborate ranges reported in the IPCC AR4 found by different means. Also computed are estimates of how quickly uncertainty in TCS may be expected to diminish in the future as additional observations become available. For higher system dimensions the UKF approach may become prohibitively expensive. A modified UKF algorithm is developed in which the error covariance is represented by a reduced-rank approximation, substantially reducing the number of model evaluations required to provide probability densities for unknown parameters. The method estimates the state and parameters of an abstract atmospheric model, known as Lorenz 96, with accuracy close to that of a full-order UKF for 30-60% rank reduction. The physical approach to closure uses the Multiscale Modeling Framework (MMF) to demonstrate closure of small-scale, nonlinear processes that would not be resolved directly in climate models. A one-dimensional, abstract test model with a broad spatial spectrum is developed. The test model couples the Kuramoto-Sivashinsky equation to a transport equation that includes cloud formation and precipitation-like processes. In the test model, three main sources of MMF error are evaluated independently. Loss of nonlinear multi-scale interactions and periodic boundary conditions in closure models were the dominant sources of error. Using a reduced-order modeling approach to maximize energy content allowed reduction of the closure model dimension by up to 75% without loss in accuracy. MMF and a comparable alternative model performed equally well compared to direct numerical simulation.
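As a stand-in for the UKF parameter estimation described above (not the dissertation's method), the sketch below performs a simple grid-based Bayesian update of the feedback parameter in a zero-dimensional energy balance model; the heat capacity, forcing ramp and noise level are illustrative assumptions.

```python
import numpy as np

# Zero-dimensional energy balance model: C dT/dt = F(t) - lambda * T.
rng = np.random.default_rng(3)
C, dt, years = 8.0, 1.0, 150                    # heat capacity (W yr m^-2 K^-1), 1-yr step
forcing = np.linspace(0.0, 3.0, years)          # ramped radiative forcing (W m^-2)
lam_true, obs_sigma = 1.2, 0.12                 # true feedback, observation noise (K)

def simulate(lam):
    T = np.zeros(years)
    for t in range(1, years):
        T[t] = T[t - 1] + dt * (forcing[t - 1] - lam * T[t - 1]) / C
    return T

obs = simulate(lam_true) + rng.normal(0, obs_sigma, years)

# Grid posterior over lambda given all observations (Gaussian likelihood, flat prior).
lam_grid = np.linspace(0.5, 2.5, 401)
log_post = np.array([-0.5 * np.sum((obs - simulate(l)) ** 2) / obs_sigma**2 for l in lam_grid])
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, lam_grid)

lam_mean = np.trapz(lam_grid * post, lam_grid)
lam_sd = np.sqrt(np.trapz((lam_grid - lam_mean) ** 2 * post, lam_grid))
print(f"estimated feedback lambda = {lam_mean:.2f} ± {lam_sd:.2f} (true {lam_true})")
# For this toy model, transient sensitivity scales as forcing / lambda.
```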
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Fujii, Keisuke; Nishimura, Harumichi
2017-04-01
The one-clean qubit model (or the DQC1 model) is a restricted model of quantum computing where only a single qubit of the initial state is pure and others are maximally mixed. Although the model is not universal, it can efficiently solve several problems whose classical efficient solutions are not known. Furthermore, it was recently shown that if the one-clean qubit model is classically efficiently simulated, the polynomial hierarchy collapses to the second level. A disadvantage of the one-clean qubit model is, however, that the clean qubit is too clean: for example, in realistic NMR experiments, polarizations are not high enough to have the perfectly pure qubit. In this paper, we consider a more realistic one-clean qubit model, where the clean qubit is not clean, but depolarized. We first show that, for any polarization, a multiplicative-error calculation of the output probability distribution of the model is possible in a classical polynomial time if we take an appropriately large multiplicative error. The result is in strong contrast with that of the ideal one-clean qubit model where the classical efficient multiplicative-error calculation (or even the sampling) with the same amount of error causes the collapse of the polynomial hierarchy. We next show that, for any polarization lower-bounded by an inverse polynomial, a classical efficient sampling (in terms of a sufficiently small multiplicative error or an exponentially small additive error) of the output probability distribution of the model is impossible unless BQP (bounded error quantum polynomial time) is contained in the second level of the polynomial hierarchy, which suggests the hardness of the classical efficient simulation of the one nonclean qubit model.
Geometric error analysis for shuttle imaging spectrometer experiment
NASA Technical Reports Server (NTRS)
Wang, S. J.; Ih, C. H.
1984-01-01
The demand for more powerful tools for remote sensing and management of Earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented to provide information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.
Bias in the Wagner-Nelson estimate of the fraction of drug absorbed.
Wang, Yibin; Nedelman, Jerry
2002-04-01
The aim was to examine and quantify bias in the Wagner-Nelson estimate of the fraction of drug absorbed resulting from the estimation error of the elimination rate constant (k), measurement error of the drug concentration, and the truncation error in the area under the curve (AUC). Bias in the Wagner-Nelson estimate was derived as a function of post-dosing time (t), k, the ratio of the absorption rate constant to k (r), and the coefficient of variation for estimates of k (CVk) or the CV% for the observed concentration (CVC), by assuming a one-compartment model and using an independent estimate of k. The derived functions were used for evaluating the bias with r = 0.5, 3, or 6; k = 0.1 or 0.2; CVk = 0.2 or 0.4; and CVC = 0.2 or 0.4; for t = 0 to 30 or 60. Estimation error of k resulted in an upward bias in the Wagner-Nelson estimate that could lead to the estimate of the fraction absorbed being greater than unity. The bias resulting from the estimation error of k inflates the fraction-of-drug-absorbed vs. time profiles mainly in the early post-dosing period. The magnitude of the bias in the Wagner-Nelson estimate resulting from estimation error of k was mainly determined by CVk. The bias in the Wagner-Nelson estimate resulting from estimation error in k can be dramatically reduced by use of the mean of several independent estimates of k, as in studies for development of an in vivo-in vitro correlation. The truncation error in the area under the curve can introduce a negative bias in the Wagner-Nelson estimate. This can partially offset the bias resulting from estimation error of k in the early post-dosing period. Measurement error of concentration does not introduce bias in the Wagner-Nelson estimate. In summary, estimation error of k results in an upward bias in the Wagner-Nelson estimate, mainly in the early drug absorption phase. The truncation error in AUC can result in a downward bias, which may partially offset the upward bias due to estimation error of k in the early absorption phase. Measurement error of concentration does not introduce bias. The joint effect of estimation error of k and truncation error in AUC can result in a non-monotonic fraction-of-drug-absorbed vs. time profile. However, only estimation error of k can lead to the Wagner-Nelson estimate of the fraction of drug absorbed being greater than unity.
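A small numpy sketch of the Wagner-Nelson calculation for a one-compartment model with first-order absorption, showing how a misestimated k biases the apparent fraction absorbed and can push it above unity; the rate constants and sampling schedule are illustrative assumptions.

```python
import numpy as np

# One-compartment model with first-order absorption (unit dose/volume).
ka, k_true = 0.6, 0.1                      # absorption and elimination rate constants (1/h)
t = np.arange(0.0, 30.5, 0.5)
conc = (ka / (ka - k_true)) * (np.exp(-k_true * t) - np.exp(-ka * t))

def wagner_nelson(c, t, k):
    """Fraction absorbed: (C(t) + k*AUC_0^t) / (k * AUC_0^inf)."""
    auc_t = np.concatenate(([0.0], np.cumsum(np.diff(t) * (c[1:] + c[:-1]) / 2.0)))
    auc_inf = auc_t[-1] + c[-1] / k        # tail extrapolation from the last sample
    return (c + k * auc_t) / (k * auc_inf)

true_fraction = 1.0 - np.exp(-ka * t)
for k_used in (k_true, 0.8 * k_true):      # exact k vs a 20% underestimate
    fa = wagner_nelson(conc, t, k_used)
    print(f"k = {k_used:.3f}: max estimated fraction = {fa.max():.3f} "
          f"(true max {true_fraction.max():.3f})")
```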
Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)
NASA Technical Reports Server (NTRS)
Adler, Robert; Gu, Guojun; Huffman, George
2012-01-01
A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
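A toy numpy version of the procedure, assuming a synthetic base field and hypothetical alternative products: products within ±50% of the base on a zonal-mean basis are retained, their standard deviation is taken as the bias error, and relative errors are area-averaged with cos(latitude) weights.

```python
import numpy as np

rng = np.random.default_rng(4)
lat = np.linspace(-89.5, 89.5, 180)
lon = np.linspace(0.5, 359.5, 360)
base = 2.0 + 1.5 * np.cos(np.deg2rad(lat))[:, None] * np.ones((180, 360))   # mm/day

# Hypothetical alternative products: the base field plus product-dependent perturbations.
products = np.stack([base * (1 + rng.normal(0, 0.08)) + rng.normal(0, 0.2, base.shape)
                     for _ in range(5)])

# Keep only products within +/-50% of the base on a zonal-mean basis, as in the procedure.
zonal_ratio = products.mean(axis=2) / base.mean(axis=1)
keep = np.all(np.abs(zonal_ratio - 1.0) < 0.5, axis=1)

s = products[keep].std(axis=0)                    # estimated bias error per grid box
relative_error = s / base                         # s/m

w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(base)   # area weights
area_avg_rel_error = np.sum(relative_error * w) / np.sum(w)
print(f"area-averaged relative bias error: {100 * area_avg_rel_error:.1f} %")
```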
Dynamic estimation of three-dimensional cerebrovascular deformation from rotational angiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Chong; Villa-Uriol, Maria-Cruz; De Craene, Mathieu
2011-03-15
Purpose: The objective of this study is to investigate the feasibility of detecting and quantifying 3D cerebrovascular wall motion from a single 3D rotational x-ray angiography (3DRA) acquisition within a clinically acceptable time and computing from the estimated motion field for the further biomechanical modeling of the cerebrovascular wall. Methods: The whole motion cycle of the cerebral vasculature is modeled using a 4D B-spline transformation, which is estimated from a 4D to 2D+t image registration framework. The registration is performed by optimizing a single similarity metric between the entire 2D+t measured projection sequence and the corresponding forward projections of the deformed volume at their exact time instants. The joint use of two acceleration strategies, together with their implementation on graphics processing units, is also proposed so as to reach computation times close to clinical requirements. For further characterizing vessel wall properties, an approximation of the wall thickness changes is obtained through a strain calculation. Results: Evaluation on in silico and in vitro pulsating phantom aneurysms demonstrated an accurate estimation of wall motion curves. In general, the error was below 10% of the maximum pulsation, even in the situation when substantial inhomogeneous intensity pattern was present. Experiments on in vivo data provided realistic aneurysm and vessel wall motion estimates, whereas in regions where motion was neither visible nor anatomically possible, no motion was detected. The use of the acceleration strategies enabled completing the estimation process for one entire cycle in 5-10 min without degrading the overall performance. The strain map extracted from our motion estimation provided a realistic deformation measure of the vessel wall. Conclusions: The authors' technique has demonstrated that it can provide accurate and robust 4D estimates of cerebrovascular wall motion within a clinically acceptable time, although it has to be applied to a larger patient population prior to possible wide application to routine endovascular procedures. In particular, for the first time, this feasibility study has shown that in vivo cerebrovascular motion can be obtained intraprocedurally from a 3DRA acquisition. Results have also shown the potential of performing strain analysis using this imaging modality, thus making possible for the future modeling of biomechanical properties of the vascular wall.
[Innovative training for enhancing patient safety. Safety culture and integrated concepts].
Rall, M; Schaedle, B; Zieger, J; Naef, W; Weinlich, M
2002-11-01
Patient safety is determined by the performance safety of the medical team. Errors in medicine are amongst the leading causes of death of hospitalized patients. These numbers call for action. Backgrounds, methods and new forms of training are introduced in this article. Concepts from safety research are transferred to the field of emergency medical treatment. Strategies from realistic patient simulator training sessions and innovative training concepts are discussed. The reasons for the high numbers of errors in medicine are not due to a lack of medical knowledge, but due to human factors and organisational circumstances. A first step towards improved patient safety is to accept this. We always need to be prepared that errors will occur. A next step would be to separate "error" from guilt (the culture of blame), allowing for a real analysis of accidents and the establishment of meaningful incident reporting systems. Concepts with a good success record in aviation, such as "crew resource management" (CRM) training, have been adapted by medicine and are ready to use. These concepts require theoretical education as well as practical training. Innovative team training sessions using realistic patient simulator systems with video taping (for self-reflection) and interactive debriefing following the sessions are very promising. As the need to reduce error rates in medicine is very high and the reasons, methods and training concepts are known, we are urged to implement these new training concepts widely and consistently. To err is human - not to counteract errors is not.
Bowhead whale localization using time-difference-of-arrival data from asynchronous recorders.
Warner, Graham A; Dosso, Stan E; Hannay, David E
2017-03-01
This paper estimates bowhead whale locations and uncertainties using nonlinear Bayesian inversion of the time-difference-of-arrival (TDOA) of low-frequency whale calls recorded on omni-directional asynchronous recorders in the shallow waters of the northeastern Chukchi Sea, Alaska. A Y-shaped cluster of seven autonomous ocean-bottom hydrophones, separated by 0.5-9.2 km, was deployed for several months, over which time their clocks drifted out of synchronization. Hundreds of recorded whale calls are manually associated between recorders. The TDOAs between hydrophone pairs are calculated from filtered waveform cross correlations and depend on the whale locations, hydrophone locations, relative recorder clock offsets, and effective waveguide sound speed. A nonlinear Bayesian inversion estimates all of these parameters and their uncertainties as well as data error statistics. The problem is highly nonlinear and a linearized inversion did not produce physically realistic results. Whale location uncertainties from nonlinear inversion can be low enough to allow accurate tracking of migrating whales that vocalize repeatedly over several minutes. Estimates of clock drift rates are obtained from inversions of TDOA data over two weeks and agree with corresponding estimates obtained from long-time averaged ambient noise cross correlations. The inversion is suitable for application to large data sets of manually or automatically detected whale calls.
Where can pixel counting area estimates meet user-defined accuracy requirements?
NASA Astrophysics Data System (ADS)
Waldner, François; Defourny, Pierre
2017-08-01
Pixel counting is probably the most popular way to estimate class areas from satellite-derived maps. It involves determining the number of pixels allocated to a specific thematic class and multiplying it by the pixel area. In the presence of asymmetric classification errors, the pixel counting estimator is biased. The overarching objective of this article is to define the applicability conditions of pixel counting so that the estimates are below a user-defined accuracy target. By reasoning in terms of landscape fragmentation and spatial resolution, the proposed framework decouples the resolution bias and the classifier bias from the overall classification bias. The consequence is that prior to any classification, part of the tolerated bias is already committed due to the choice of the spatial resolution of the imagery. How much classification bias is affordable depends on the joint interaction of spatial resolution and fragmentation. The method was implemented over South Africa for cropland mapping, demonstrating its operational applicability. Particular attention was paid to modeling a realistic sensor's spatial response by explicitly accounting for the effect of its point spread function. The diagnostic capabilities offered by this framework have multiple potential domains of application such as guiding users in their choice of imagery and providing guidelines for space agencies to elaborate the design specifications of future instruments.
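The bias mechanism can be made concrete with a few lines of Python; the omission/commission rates and area fractions below are hypothetical and simply show how the same error rates translate into very different pixel-counting biases on balanced versus fragmented landscapes.

```python
# Minimal illustration of pixel-counting bias under asymmetric classification errors.
# omission = fraction of true-class pixels missed,
# commission = fraction of other pixels wrongly assigned to the class.

def pixel_count_bias(true_area_frac, omission, commission):
    """Relative bias of the pixel-counting area estimate for a class covering
    `true_area_frac` of the image."""
    other = 1.0 - true_area_frac
    estimated = true_area_frac * (1.0 - omission) + other * commission
    return (estimated - true_area_frac) / true_area_frac

# Symmetric errors on a balanced landscape roughly cancel...
print(pixel_count_bias(0.5, omission=0.10, commission=0.10))   # ~0.0
# ...but the same error rates on a fragmented, cropland-scarce landscape do not.
print(pixel_count_bias(0.1, omission=0.10, commission=0.10))   # +0.8 (80% overestimate)
# A user-defined accuracy target then bounds the tolerable classifier bias.
print(pixel_count_bias(0.1, omission=0.12, commission=0.02))   # ~+0.06
```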
Estimating Building Age with 3d GIS
NASA Astrophysics Data System (ADS)
Biljecki, F.; Sindram, M.
2017-10-01
Building datasets (e.g. footprints in OpenStreetMap and 3D city models) are becoming increasingly available worldwide. However, the thematic (attribute) aspect is not always given attention, as many of such datasets are lacking in completeness of attributes. A prominent attribute of buildings is the year of construction, which is useful for some applications, but its availability may be scarce. This paper explores the potential of estimating the year of construction (or age) of buildings from other attributes using random forest regression. The developed method has a two-fold benefit: enriching datasets and quality control (verification of existing attributes). Experiments are carried out on a semantically rich LOD1 dataset of Rotterdam in the Netherlands using 9 attributes. The results are mixed: the accuracy in the estimation of building age depends on the available information used in the regression model. In the best scenario we have achieved predictions with an RMSE of 11 years, but in more realistic situations with limited knowledge about buildings the error is much larger (RMSE = 26 years). Hence the main conclusion of the paper is that inferring building age with 3D city models is possible to a certain extent because it reveals the approximate period of construction, but precise estimations remain a difficult task.
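A minimal sketch of the regression setup, assuming a synthetic building table with illustrative attribute names (not the Rotterdam LOD1 schema) and scikit-learn's RandomForestRegressor; the RMSE printed at the end is the same style of metric reported in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical building table: footprint area, height and number of dwellings are
# stand-ins for the kind of attributes available in a semantically rich LOD1 model.
rng = np.random.default_rng(5)
n = 2000
footprint = rng.uniform(40, 400, n)
height = rng.uniform(3, 30, n)
dwellings = rng.integers(1, 12, n)
year = (1900 + 0.15 * footprint - 1.5 * height + 4.0 * dwellings
        + rng.normal(0, 15, n)).clip(1850, 2015)

X = np.column_stack([footprint, height, dwellings])
X_tr, X_te, y_tr, y_te = train_test_split(X, year, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)

rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(f"RMSE of predicted construction year: {rmse:.1f} years")
```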
A new approach on seismic mortality estimations based on average population density
NASA Astrophysics Data System (ADS)
Zhu, Xiaoxin; Sun, Baiqing; Jin, Zhanyong
2016-12-01
This study examines a new methodology to predict the final seismic mortality from earthquakes in China. Most studies have established the association between mortality estimation and seismic intensity without considering population density. In China, however, the required data are not always available, especially in the very urgent relief situation following a disaster, and the population density varies greatly from region to region. This motivates the development of empirical models that use historical death data to provide a path to analyzing earthquake death tolls. The present paper employs the average population density to predict the final death tolls in earthquakes using a case-based reasoning model from a realistic perspective. To validate the forecasting results, historical data from 18 large-scale earthquakes that occurred in China are used to estimate the seismic mortality of each case. A typical earthquake case that occurred in the northwest of Sichuan Province is employed to demonstrate the estimation of the final death toll. The strength of this paper is that it provides scientific methods with overall forecast errors lower than 20%, and opens the door to conducting final death-toll forecasts with a qualitative and quantitative approach. Limitations and future research are also analyzed and discussed in the conclusion.
A Likelihood-Based Framework for Association Analysis of Allele-Specific Copy Numbers.
Hu, Y J; Lin, D Y; Sun, W; Zeng, D
2014-10-01
Copy number variants (CNVs) and single nucleotide polymorphisms (SNPs) co-exist throughout the human genome and jointly contribute to phenotypic variations. Thus, it is desirable to consider both types of variants, as characterized by allele-specific copy numbers (ASCNs), in association studies of complex human diseases. Current SNP genotyping technologies capture the CNV and SNP information simultaneously via fluorescent intensity measurements. The common practice of calling ASCNs from the intensity measurements and then using the ASCN calls in downstream association analysis has important limitations. First, the association tests are prone to false-positive findings when differential measurement errors between cases and controls arise from differences in DNA quality or handling. Second, the uncertainties in the ASCN calls are ignored. We present a general framework for the integrated analysis of CNVs and SNPs, including the analysis of total copy numbers as a special case. Our approach combines the ASCN calling and the association analysis into a single step while allowing for differential measurement errors. We construct likelihood functions that properly account for case-control sampling and measurement errors. We establish the asymptotic properties of the maximum likelihood estimators and develop EM algorithms to implement the corresponding inference procedures. The advantages of the proposed methods over the existing ones are demonstrated through realistic simulation studies and an application to a genome-wide association study of schizophrenia. Extensions to next-generation sequencing data are discussed.
NASA Astrophysics Data System (ADS)
Brandstetter, Gerd; Govindjee, Sanjay
2012-03-01
Existing analytical and numerical methodologies are discussed and then extended in order to calculate critical contamination-particle sizes that will result in deleterious effects during EUVL E-chucking, given an error budget on the image-placement error (IPE). The enhanced analytical models include a gap-dependent clamping pressure formulation, the consideration of a general material law for realistic particle crushing, and the influence of frictional contact. We present a discussion of the defects of the classical de-coupled modeling approach, where particle crushing and mask/chuck indentation are separated from the global computation of mask bending. To repair this defect we present a new analytic approach based on an exact Hankel transform method which allows a fully coupled solution. This captures the contribution of the mask indentation to the image-placement error (estimated IPE increase of 20%). A fully coupled finite element model is used to validate the analytical models and to further investigate the impact of a mask back-side CrN layer. The models are applied to existing experimental data with good agreement. For a standard material combination, a given IPE tolerance of 1 nm and a 15 kPa closing pressure, we derive bounds for single particles of cylindrical shape (radius × height < 44 μm) and spherical shape (diameter < 12 μm).
Preserving subject variability in group fMRI analysis: performance evaluation of GICA vs. IVA
Michael, Andrew M.; Anderson, Mathew; Miller, Robyn L.; Adalı, Tülay; Calhoun, Vince D.
2014-01-01
Independent component analysis (ICA) is a widely applied technique to derive functionally connected brain networks from fMRI data. Group ICA (GICA) and Independent Vector Analysis (IVA) are extensions of ICA that enable users to perform group fMRI analyses; however, a full comparison of the performance limits of GICA and IVA has not been undertaken. Recent interest in resting-state fMRI data, with its potentially higher degree of subject variability, makes the evaluation of the above techniques important. In this paper we compare component estimation accuracies of GICA and an improved version of IVA using simulated fMRI datasets. We systematically change the degree of inter-subject spatial variability of components and evaluate estimation accuracy over all spatial maps (SMs) and time courses (TCs) of the decomposition. Our results indicate the following: (1) at low levels of SM variability or when just one SM is varied, both GICA and IVA perform well; (2) at higher levels of SM variability or when more than one SM is varied, IVA continues to perform well but GICA yields SM estimates that are composites of other SMs, with errors in TCs; (3) both GICA and IVA remove spatial correlations of overlapping SMs and introduce artificial correlations in their TCs; (4) if the number of SMs is overestimated, IVA continues to perform well but GICA introduces artifacts in the varying and extra SMs, with artificial correlations in the TCs of the extra components; and (5) in the absence or presence of SMs unique to one subject, GICA produces errors in TCs while IVA estimates remain accurate. In summary, our simulation experiments (both simplistic and realistic) and our holistic analysis approach indicate that IVA produces results that are closer to ground truth and thereby better preserves subject variability. The improved version of IVA is now packaged into the GIFT toolbox (http://mialab.mrn.org/software/gift). PMID:25018704
Loukas, Constantinos; Lahanas, Vasileios; Georgiou, Evangelos
2013-12-01
Despite the popular use of virtual and physical reality simulators in laparoscopic training, the educational potential of augmented reality (AR) has not received much attention. A major challenge is the robust tracking and three-dimensional (3D) pose estimation of the endoscopic instrument, which are essential for achieving interaction with the virtual world and for realistic rendering when the virtual scene is occluded by the instrument. In this paper we propose a method that addresses these issues, based solely on visual information obtained from the endoscopic camera. Two different tracking algorithms are combined for estimating the 3D pose of the surgical instrument with respect to the camera. The first tracker creates an adaptive model of a colour strip attached to the distal part of the tool (close to the tip). The second algorithm tracks the endoscopic shaft, using a combined Hough-Kalman approach. The 3D pose is estimated with perspective geometry, using appropriate measurements extracted by the two trackers. The method has been validated on several complex image sequences for its tracking efficiency, pose estimation accuracy and applicability in AR-based training. Using a standard endoscopic camera, the absolute average error of the tip position was 2.5 mm for working distances commonly found in laparoscopic training. The average error of the instrument's angle with respect to the camera plane was approximately 2°. The results are also supplemented by video segments of laparoscopic training tasks performed in a physical and an AR environment. The experiments yielded promising results regarding the potential of applying AR technologies for laparoscopic skills training, based on a computer vision framework. The issue of occlusion handling was adequately addressed. The estimated trajectory of the instruments may also be used for surgical gesture interpretation and assessment. Copyright © 2013 John Wiley & Sons, Ltd.
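As an illustration of the filtering component described above, the following is a minimal sketch of a constant-velocity Kalman filter applied to a tracked 2D image point. It is not the authors' Hough-Kalman implementation; the state layout, noise levels, and measurements are assumed for demonstration.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for a 2D image point (e.g. a tracked
# marker on the instrument shaft). State: [x, y, vx, vy]; measurement: [x, y].
# Matrices and noise levels are illustrative, not taken from the paper.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 1e-2 * np.eye(4)   # process noise (assumed)
R = 4.0 * np.eye(2)    # measurement noise in pixels^2 (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle; z is the measured [x, y] position."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x = np.array([100.0, 120.0, 0.0, 0.0])   # initial state guess
P = 10.0 * np.eye(4)
for z in [np.array([101.0, 122.0]), np.array([103.0, 125.0])]:
    x, P = kalman_step(x, P, z)
print(x[:2])   # filtered position estimate
```

In the combined scheme described in the paper, the measurement z at each frame would come from the colour-strip and shaft detectors rather than from hand-coded points as here.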
Group Augmentation in Realistic Visual-Search Decisions via a Hybrid Brain-Computer Interface.
Valeriani, Davide; Cinel, Caterina; Poli, Riccardo
2017-08-10
Groups have increased sensing and cognition capabilities that typically allow them to make better decisions. However, factors such as communication biases and time constraints can lead to less-than-optimal group decisions. In this study, we use a hybrid Brain-Computer Interface (hBCI) to improve the performance of groups undertaking a realistic visual-search task. Our hBCI extracts neural information from EEG signals and combines it with response times to build an estimate of the decision confidence. This is used to weigh individual responses, resulting in improved group decisions. We compare the performance of hBCI-assisted groups with the performance of non-BCI groups using standard majority voting, and non-BCI groups using weighted voting based on reported decision confidence. We also investigate the impact on group performance of a computer-mediated form of communication between members. Results across three experiments suggest that the hBCI provides significant advantages over non-BCI decision methods in all cases. We also found that our form of communication increases individual error rates by almost 50% compared to non-communicating observers, which also results in worse group performance. Communication also makes reported confidence uncorrelated with the decision correctness, thereby nullifying its value in weighing votes. In summary, best decisions are achieved by hBCI-assisted, non-communicating groups.
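The confidence-weighted group decision rule compared above can be illustrated in a few lines; the confidence values here simply stand in for the hBCI-derived estimates and are assumed for the example.

```python
import numpy as np

# Toy comparison of majority voting vs confidence-weighted voting for a binary
# visual-search decision (+1 = "target present", -1 = "target absent").
decisions = np.array([+1, -1, +1, +1, -1])        # individual responses
confidence = np.array([0.9, 0.3, 0.7, 0.6, 0.4])  # per-member confidence in [0, 1]

majority_vote = np.sign(decisions.sum())
weighted_vote = np.sign(np.dot(confidence, decisions))

print("majority:", majority_vote, "confidence-weighted:", weighted_vote)
```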
NASA Astrophysics Data System (ADS)
Reniers, Jorn M.; Mulder, Grietus; Ober-Blöbaum, Sina; Howey, David A.
2018-03-01
The increased deployment of intermittent renewable energy generators opens up opportunities for grid-connected energy storage. Batteries offer significant flexibility but are relatively expensive at present. Battery lifetime is a key factor in the business case, and it depends on usage, but most techno-economic analyses do not account for this. For the first time, this paper quantifies the annual benefits of grid-connected batteries including realistic physical dynamics and nonlinear electrochemical degradation. Three lithium-ion battery models of increasing realism are formulated, and the predicted degradation of each is compared with a large-scale experimental degradation data set (Mat4Bat). A respective improvement in RMS capacity prediction error from 11% to 5% is found by increasing the model accuracy. The three models are then used within an optimal control algorithm to perform price arbitrage over one year, including degradation. Results show that the revenue can be increased substantially while degradation can be reduced by using more realistic models. The estimated best case profit using a sophisticated model is a 175% improvement compared with the simplest model. This illustrates that using a simplistic battery model in a techno-economic assessment of grid-connected batteries might substantially underestimate the business case and lead to erroneous conclusions.
A numerical comparison of discrete Kalman filtering algorithms: An orbit determination case study
NASA Technical Reports Server (NTRS)
Thornton, C. L.; Bierman, G. J.
1976-01-01
The numerical stability and accuracy of various Kalman filter algorithms are thoroughly studied. Numerical results and conclusions are based on a realistic planetary approach orbit determination study. The case study results of this report highlight the numerical instability of the conventional and stabilized Kalman algorithms. Numerical errors associated with these algorithms can be so large as to obscure important mismodeling effects and thus give misleading estimates of filter accuracy. The positive result of this study is that the Bierman-Thornton U-D covariance factorization algorithm is computationally efficient, with CPU costs that differ negligibly from the conventional Kalman costs. In addition, accuracy of the U-D filter using single-precision arithmetic consistently matches the double-precision reference results. Numerical stability of the U-D filter is further demonstrated by its insensitivity to variations in the a priori statistics.
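For readers unfamiliar with the U-D form, the sketch below factorizes a covariance matrix as P = U D Uᵀ with U unit upper triangular and D diagonal, which is the representation propagated by the Bierman-Thornton filter. It shows only the factorization step, not the measurement and time updates, and is written for clarity rather than for the numerical efficiency of the original algorithm.

```python
import numpy as np

def ud_factorize(P):
    """Factor a symmetric positive-definite P as P = U @ D @ U.T,
    with U unit upper triangular and D diagonal (Bierman/Thornton style)."""
    P = P.copy().astype(float)
    n = P.shape[0]
    U = np.eye(n)
    D = np.zeros(n)
    for j in range(n - 1, 0, -1):
        D[j] = P[j, j]
        alpha = 1.0 / D[j]
        for k in range(j):
            beta = P[k, j]
            U[k, j] = alpha * beta
            for i in range(k + 1):
                # Remove the contribution of column j from the leading submatrix.
                P[i, k] -= beta * U[i, j]
    D[0] = P[0, 0]
    return U, np.diag(D)

P = np.array([[4.0, 2.0, 0.5],
              [2.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])
U, D = ud_factorize(P)
print(np.allclose(U @ D @ U.T, P))   # True: factorization reproduces P
```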
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-01-01
Abstract Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
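As a hedged illustration of the estimator being discussed, the sketch below computes a linear two-stage residual inclusion estimate on simulated data and attaches a nonparametric bootstrap standard error, one of the modified standard errors compared in the paper. The data-generating model, effect size, and sample size are assumptions made for the example, not the study's real-data analyses.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
g = rng.binomial(2, 0.3, n)            # genotype used as instrument
u = rng.normal(size=n)                 # unobserved confounder
x = 0.5 * g + u + rng.normal(size=n)   # exposure
y = 0.3 * x + u + rng.normal(size=n)   # outcome, true causal effect 0.3

def tsri(y, x, g):
    # Stage 1: regress exposure on the instrument, keep residuals.
    s1 = sm.OLS(x, sm.add_constant(g)).fit()
    r = x - s1.fittedvalues
    # Stage 2: regress outcome on exposure plus stage-1 residuals.
    s2 = sm.OLS(y, sm.add_constant(np.column_stack([x, r]))).fit()
    return s2.params[1]                # coefficient on the exposure

beta = tsri(y, x, g)

# Nonparametric bootstrap standard error (one of the options compared above).
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(tsri(y[idx], x[idx], g[idx]))
print(beta, np.std(boot, ddof=1))
```

Reporting the naive stage-2 standard error instead of a corrected one is exactly the practice the paper advises against.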
Defense Acquisitions: Addressing Incentives is Key to Further Reform Efforts
2014-04-30
championed sound management practices, such as realistic cost estimating, prototyping, and systems engineering. While some progress has been made...other reforms have championed sound management practices, such as realistic cost estimating, prototyping, and systems engineering. DOD's declining...principles from disciplines such as systems engineering, as well as lessons learned and past reforms. The body of work we have done on benchmarking
The role of blood vessels in high-resolution volume conductor head modeling of EEG.
Fiederer, L D J; Vorwerk, J; Lucka, F; Dannhauer, M; Yang, S; Dümpelmann, M; Schulze-Bonhage, A; Aertsen, A; Speck, O; Wolters, C H; Ball, T
2016-03-01
Reconstruction of the electrical sources of human EEG activity at high spatio-temporal accuracy is an important aim in neuroscience and neurological diagnostics. Over the last decades, numerous studies have demonstrated that realistic modeling of head anatomy improves the accuracy of source reconstruction of EEG signals. For example, including a cerebro-spinal fluid compartment and the anisotropy of white matter electrical conductivity were both shown to significantly reduce modeling errors. Here, we for the first time quantify the role of detailed reconstructions of the cerebral blood vessels in volume conductor head modeling for EEG. To study the role of the highly arborized cerebral blood vessels, we created a submillimeter head model based on ultra-high-field-strength (7T) structural MRI datasets. Blood vessels (arteries and emissary/intraosseous veins) were segmented using Frangi multi-scale vesselness filtering. The final head model consisted of a geometry-adapted cubic mesh with over 17×10^6 nodes. We solved the forward model using a finite-element-method (FEM) transfer matrix approach, which allowed reducing computation times substantially and quantified the importance of the blood vessel compartment by computing forward and inverse errors resulting from ignoring the blood vessels. Our results show that ignoring emissary veins piercing the skull leads to focal localization errors of approx. 5 to 15 mm. Large errors (>2 cm) were observed due to the carotid arteries and the dense arterial vasculature in areas such as in the insula or in the medial temporal lobe. Thus, in such predisposed areas, errors caused by neglecting blood vessels can reach similar magnitudes as those previously reported for neglecting white matter anisotropy, the CSF or the dura - structures which are generally considered important components of realistic EEG head models. Our findings thus imply that including a realistic blood vessel compartment in EEG head models will be helpful to improve the accuracy of EEG source analyses particularly when high accuracies in brain areas with dense vasculature are required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Selb, Juliette; Ogden, Tyler M.; Dubb, Jay; Fang, Qianqian; Boas, David A.
2013-03-01
Time-domain near-infrared spectroscopy (TD-NIRS) offers the ability to measure the absolute baseline optical properties of a tissue. Specifically, for brain imaging, the robust assessment of cerebral blood volume and oxygenation based on measurement of cerebral hemoglobin concentrations is essential for reliable cross-sectional and longitudinal studies. In adult heads, these baseline measurements are complicated by the presence of thick extra-cerebral tissue (scalp, skull, CSF). A simple semi-infinite homogeneous model of the head has proven to have limited use because of the large errors it introduces in the recovered brain absorption. Analytical solutions for layered media have shown improved performance on Monte-Carlo simulated data and layered phantom experiments, but their validity on real adult head data has never been demonstrated. With the advance of fast Monte Carlo approaches based on GPU computation, numerical methods to solve the radiative transfer equation become viable alternatives to analytical solutions of the diffusion equation. Monte Carlo approaches provide the additional advantage of being adaptable to any geometry, in particular to more realistic head models. The goals of the present study were twofold: (1) to implement a fast and flexible Monte Carlo-based fitting routine to retrieve the brain optical properties; (2) to characterize the performances of this fitting method on realistic adult head data. We generated time-resolved data at various locations over the head, and fitted them with different models of light propagation: the homogeneous analytical model, and Monte Carlo simulations for three head models: a two-layer slab, the true subject's anatomy, and that of a generic atlas head. We found that the homogeneous model introduced a median 20 to 25% error on the recovered brain absorption, with large variations over the range of true optical properties. The two-layer slab model only moderately improved the results over the homogeneous one. On the other hand, using a generic atlas head registered to the subject's head surface decreased the error by a factor of 2. When the information is available, using the true subject anatomy offers the best performance.
NASA Technical Reports Server (NTRS)
Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.
2001-01-01
Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the χ² per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with sizes between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
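The inverse step described above can be sketched as a nonlinear least-squares (chi-square) fit of a single dipole's position and moment to electrode potentials. For brevity the example uses the potential of a current dipole in an infinite homogeneous conductor rather than the bounded spherical medium of the study; the conductivity, electrode geometry, true source, and noise level are all assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

sigma = 0.33  # S/m, illustrative conductivity

def dipole_potential(params, electrodes):
    """Potential of a current dipole in an infinite homogeneous medium
    (a simplification of the bounded spherical model used in the paper)."""
    r0, p = params[:3], params[3:]
    d = electrodes - r0                       # (n_elec, 3) vectors to electrodes
    return (d @ p) / (4 * np.pi * sigma * np.linalg.norm(d, axis=1) ** 3)

rng = np.random.default_rng(1)
electrodes = rng.normal(size=(64, 3))
electrodes /= np.linalg.norm(electrodes, axis=1, keepdims=True)  # on unit sphere
electrodes *= 0.1                                                # 10 cm radius

true = np.array([0.02, 0.0, 0.03, 1e-8, 0.0, 2e-8])  # [x, y, z, px, py, pz]
v = dipole_potential(true, electrodes) + 1e-9 * rng.normal(size=64)

# Fit the equivalent dipole by least squares (chi-square minimization).
fit = least_squares(lambda q: dipole_potential(q, electrodes) - v,
                    x0=np.array([0.0, 0.0, 0.0, 0.0, 0.0, 1e-8]))
print(fit.x[:3])   # recovered dipole location
```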
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1981-01-01
A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.
Model error estimation for distributed systems described by elliptic equations
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1983-01-01
A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.
NASA Astrophysics Data System (ADS)
Halsig, Sebastian; Artz, Thomas; Iddink, Andreas; Nothnagel, Axel
2016-12-01
On their way through the atmosphere, radio signals are delayed and affected by bending and attenuation effects relative to a theoretical path in vacuum. In particular, the neutral part of the atmosphere contributes considerably to the error budget of space-geodetic observations. At the same time, space-geodetic techniques become more and more important in the understanding of the Earth's atmosphere, because atmospheric parameters can be linked to the water vapor content in the atmosphere. The tropospheric delay is usually taken into account by applying an adequate model for the hydrostatic component and by additionally estimating zenith wet delays for the highly variable wet component. Sometimes, the Ordinary Least Squares (OLS) approach leads to negative estimates, which would be equivalent to negative water vapor in the atmosphere and, of course, do not reflect meteorological and physical conditions in a plausible way. To cope with this phenomenon, we introduce an Inequality Constrained Least Squares (ICLS) method from the field of convex optimization and use inequality constraints to force the tropospheric parameters to be non-negative, allowing for a more realistic tropospheric parameter estimation in a meteorological sense. Because deficiencies in the a priori hydrostatic modeling are almost fully compensated by the tropospheric estimates, the ICLS approach urgently requires suitable a priori hydrostatic delays. In this paper, we briefly describe the ICLS method and validate its impact with regard to station positions.
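A minimal version of the inequality-constrained estimation can be written with an off-the-shelf bounded least-squares solver, forcing the wet-delay parameters to be non-negative. The design matrix, parameter values, and noise level below are purely illustrative and are not the VLBI observation equations used in the paper.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy inequality-constrained least squares: estimate zenith wet delays (ZWD)
# that are forced to be non-negative. A maps ZWD parameters to observed delays.
rng = np.random.default_rng(2)
A = np.abs(rng.normal(size=(50, 4)))          # assumed mapping factors
x_true = np.array([0.12, 0.0, 0.05, 0.20])    # metres; one parameter is exactly zero
y = A @ x_true + 0.01 * rng.normal(size=50)

ols = np.linalg.lstsq(A, y, rcond=None)[0]       # unconstrained OLS, can go negative
icls = lsq_linear(A, y, bounds=(0.0, np.inf)).x  # constrained to be >= 0
print(ols, icls)
```

The bounded solution plays the role of the ICLS estimate: where OLS would dip below zero, the constrained estimate sits at the boundary instead.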
NASA Astrophysics Data System (ADS)
Määttä, A.; Laine, M.; Tamminen, J.; Veefkind, J. P.
2013-09-01
We study uncertainty quantification in remote sensing of aerosols in the atmosphere with top of the atmosphere reflectance measurements from the nadir-viewing Ozone Monitoring Instrument (OMI). Focus is on the uncertainty in aerosol model selection of pre-calculated aerosol models and on the statistical modelling of the model inadequacies. The aim is to apply statistical methodologies that improve the uncertainty estimates of the aerosol optical thickness (AOT) retrieval by propagating model selection and model error related uncertainties more realistically. We utilise Bayesian model selection and model averaging methods for the model selection problem and use Gaussian processes to model the smooth systematic discrepancies between the modelled and the observed reflectance. The systematic model error is learned from an ensemble of operational retrievals. The operational OMI multi-wavelength aerosol retrieval algorithm OMAERO is used for cloud free, over land pixels of the OMI instrument with the additional Bayesian model selection and model discrepancy techniques. The method is demonstrated with four examples with different aerosol properties: weakly absorbing aerosols, forest fires over Greece and Russia, and Sahara desert dust. The presented statistical methodology is general; it is not restricted to this particular satellite retrieval application.
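The model-discrepancy idea can be sketched as fitting a Gaussian process to the residuals between modelled and observed reflectance as a function of wavelength. The synthetic residuals, kernel choice, and length scale below are assumptions for illustration, not the settings of the operational OMAERO processing.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Learn a smooth systematic discrepancy between modelled and observed
# reflectance as a function of wavelength, with an uncertainty estimate.
rng = np.random.default_rng(3)
wavelength = np.linspace(340, 500, 40)[:, None]           # nm
discrepancy = 0.02 * np.sin(wavelength[:, 0] / 30.0)       # "true" smooth model error
residuals = discrepancy + 0.005 * rng.normal(size=40)      # observed minus modelled

kernel = RBF(length_scale=50.0) + WhiteKernel(noise_level=0.005 ** 2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(wavelength, residuals)

mean, std = gp.predict(wavelength, return_std=True)
print(mean[:3], std[:3])   # discrepancy estimate with its standard deviation
```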
NASA Astrophysics Data System (ADS)
Arneitz, P.; Leonhardt, R.; Fabian, K.; Egli, R.
2017-12-01
Historical and paleomagnetic data are the two main sources of information about the long-term geomagnetic field evolution. Historical observations extend to the late Middle Ages, and prior to the 19th century, they consisted mainly of pure declination measurements from navigation and orientation logs. Field reconstructions going back further in time rely solely on magnetization acquired by rocks, sediments, and archaeological artefacts. The combined dataset is characterized by a strongly inhomogeneous spatio-temporal distribution and highly variable data reliability and quality. Therefore, an adequate weighting of the data that correctly accounts for data density, type, and realistic error estimates represents the major challenge for an inversion approach. Until now, there has not been a fully self-consistent geomagnetic model that correctly recovers the variation of the geomagnetic dipole together with the higher-order spherical harmonics. Here we present a new geomagnetic field model for the last 4 kyrs based on historical, archeomagnetic and volcanic records. The iterative Bayesian inversion approach targets the implementation of reliable error treatment, which allows different record types to be combined in a fully self-consistent way. Modelling results will be presented along with a thorough analysis of model limitations, validity and sensitivity.
NASA Technical Reports Server (NTRS)
Blanchard, R. C.; Phillips, W. P.; Kelly, G. M.; Findlay, J. T.
1980-01-01
Subsonic drag coefficients have been obtained from flight data for the Pioneer Venus multiprobes. The technique used to extract the information from the data consisted of utilizing in situ pressure and temperature measurements. Analysis of the major model parameter error sources indicates overall error levels of five percent or less in the flight values of the drag coefficient. Comparisons of the flight coefficients with preflight wind-tunnel test data showed generally good agreement except for the Sounder descent probe configuration. To preclude atmospheric phenomena as a possible explanation of this difference, additional wind-tunnel tests were performed on the Sounder descent probe. Special attempts were made to duplicate the probe geometry for tests in a high Reynolds number environment in order to achieve as realistic model and flight conditions as practical. Preliminary results from this testing in the NASA LaRC Low Turbulence Pressure Tunnel produced a drag coefficient of 0.68 at 0 deg angle of attack which is within the expected accuracy limits of the flight derived drag coefficient value of 0.72 ± 0.04, thus eliminating atmospheric phenomena as the explanation for the initial difference.
An inverse finite-element model of heel-pad indentation.
Erdemir, Ahmet; Viveiros, Meredith L; Ulbrecht, Jan S; Cavanagh, Peter R
2006-01-01
A numerical-experimental approach has been developed to characterize heel-pad deformation at the material level. Left and right heels of 20 diabetic subjects and 20 nondiabetic subjects matched for age, gender and body mass index were indented using force-controlled ultrasound. Initial tissue thickness and deformation were measured using M-mode ultrasound; indentation forces were recorded simultaneously. An inverse finite-element analysis of the indentation protocol using axisymmetric models adjusted to reflect individual heel thickness was used to extract nonlinear material properties describing the hyperelastic behavior of each heel. Student's t-tests revealed that heel pads of diabetic subjects were not significantly different in initial thickness nor were they stiffer than those from nondiabetic subjects. Another heel-pad model with anatomically realistic surface representations of the calcaneus and soft tissue was developed to estimate peak pressure prediction errors when average rather than individualized material properties were used. Root-mean-square errors of up to 7% were calculated, indicating the importance of subject-specific modeling of the nonlinear elastic behavior of the heel pad. Indentation systems combined with the presented numerical approach can provide this information for further analysis of patient-specific foot pathologies and therapeutic footwear designs.
NASA Astrophysics Data System (ADS)
Lorenzi, Juan M.; Stecher, Thomas; Reuter, Karsten; Matera, Sebastian
2017-10-01
Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high-dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
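For orientation, the sketch below implements the classic isotropic Shepard (inverse-distance-weighted) interpolant that the paper builds on. The modified method adds local Taylor expansions, per-point anisotropic metrics, and error-based weights, none of which are reproduced here; the sample function and dimensionality are assumed.

```python
import numpy as np

def shepard_interpolate(x_query, x_data, f_data, p=2.0, eps=1e-12):
    """Classic (isotropic) Shepard inverse-distance-weighted interpolation.
    The paper's modified method adds local expansions and local metrics;
    this sketch shows only the basic weighting idea."""
    d = np.linalg.norm(x_data - x_query, axis=1)
    if np.any(d < eps):                       # query coincides with a data point
        return f_data[np.argmin(d)]
    w = 1.0 / d ** p
    return np.sum(w * f_data) / np.sum(w)

rng = np.random.default_rng(4)
X = rng.uniform(-1, 1, size=(200, 3))          # sample points in 3 dimensions
f = np.exp(-10 * np.sum(X ** 2, axis=1))       # a function with locally rapid change
print(shepard_interpolate(np.zeros(3), X, f))  # interpolated value near the peak
```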
Lorenzi, Juan M; Stecher, Thomas; Reuter, Karsten; Matera, Sebastian
2017-10-28
Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high-dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
Ion diffusion may introduce spurious current sources in current-source density (CSD) analysis.
Halnes, Geir; Mäki-Marttunen, Tuomo; Pettersen, Klas H; Andreassen, Ole A; Einevoll, Gaute T
2017-07-01
Current-source density (CSD) analysis is a well-established method for analyzing recorded local field potentials (LFPs), that is, the low-frequency part of extracellular potentials. Standard CSD theory is based on the assumption that all extracellular currents are purely ohmic, and thus neglects the possible impact from ionic diffusion on recorded potentials. However, it has previously been shown that in physiological conditions with large ion-concentration gradients, diffusive currents can evoke slow shifts in extracellular potentials. Using computer simulations, we here show that diffusion-evoked potential shifts can introduce errors in standard CSD analysis, and can lead to prediction of spurious current sources. Further, we here show that the diffusion-evoked prediction errors can be removed by using an improved CSD estimator which accounts for concentration-dependent effects. NEW & NOTEWORTHY Standard CSD analysis does not account for ionic diffusion. Using biophysically realistic computer simulations, we show that unaccounted-for diffusive currents can lead to the prediction of spurious current sources. This finding may be of strong interest for in vivo electrophysiologists doing extracellular recordings in general, and CSD analysis in particular. Copyright © 2017 the American Physiological Society.
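For reference, the standard (purely ohmic) one-dimensional CSD estimator discussed above is a scaled second spatial difference of the LFP along the probe, as in the sketch below. The conductivity, contact spacing, and LFP profile are assumed, and the concentration-dependent correction proposed in the paper is not included.

```python
import numpy as np

# Standard 1D CSD estimate: second spatial derivative of the LFP along a
# laminar probe, scaled by an assumed constant extracellular conductivity.
sigma = 0.3                        # S/m, assumed conductivity
h = 100e-6                         # electrode spacing in metres
z = np.arange(16) * h              # 16 contacts along the probe
lfp = 1e-4 * np.exp(-((z - z.mean()) / (3 * h)) ** 2)    # synthetic LFP profile (V)

csd = -sigma * np.diff(lfp, n=2) / h ** 2   # A/m^3, defined at interior contacts
print(csd)
```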
Model identification using stochastic differential equation grey-box models in diabetes.
Duun-Henriksen, Anne Katrine; Schmidt, Signe; Røge, Rikke Meldgaard; Møller, Jonas Bech; Nørgaard, Kirsten; Jørgensen, John Bagterp; Madsen, Henrik
2013-03-01
The acceptance of virtual preclinical testing of control algorithms is growing and thus also the need for robust and reliable models. Models based on ordinary differential equations (ODEs) can rarely be validated with standard statistical tools. Stochastic differential equations (SDEs) offer the possibility of building models that can be validated statistically and that are capable of predicting not only a realistic trajectory, but also the uncertainty of the prediction. In an SDE, the prediction error is split into two noise terms. This separation ensures that the errors are uncorrelated and provides the possibility to pinpoint model deficiencies. An identifiable model of the glucoregulatory system in a type 1 diabetes mellitus (T1DM) patient is used as the basis for development of a stochastic-differential-equation-based grey-box model (SDE-GB). The parameters are estimated on clinical data from four T1DM patients. The optimal SDE-GB is determined from likelihood-ratio tests. Finally, parameter tracking is used to track the variation in the "time to peak of meal response" parameter. We found that the transformation of the ODE model into an SDE-GB resulted in a significant improvement in the prediction and uncorrelated errors. Tracking of the "peak time of meal absorption" parameter showed that the absorption rate varied according to meal type. This study shows the potential of using SDE-GBs in diabetes modeling. Improved model predictions were obtained due to the separation of the prediction error. SDE-GBs offer a solid framework for using statistical tools for model validation and model development. © 2013 Diabetes Technology Society.
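The separation of system and measurement noise that defines an SDE-GB can be illustrated with a one-state toy model integrated by the Euler-Maruyama scheme. The model form, parameter values, and meal disturbance below are assumptions for demonstration, not the identified glucoregulatory model or the clinical parameter estimates.

```python
import numpy as np

# Euler-Maruyama simulation of a toy grey-box glucose model of the form
#   dG = (-k * (G - Gb) + u(t)) dt + sigma_w dW,   y_k = G(t_k) + e_k,
# which separates system noise (dW) from measurement noise (e_k), the key
# feature of an SDE-GB. All parameter values are illustrative.
rng = np.random.default_rng(5)
k, Gb, sigma_w, sigma_e = 0.05, 5.0, 0.1, 0.2    # 1/min, mmol/L, noise scales
dt, n_steps = 1.0, 300                            # minutes

def meal(t):
    return 0.3 * np.exp(-((t - 60) / 20.0) ** 2)  # assumed meal disturbance shape

G = np.empty(n_steps)
G[0] = Gb
for i in range(1, n_steps):
    t = i * dt
    dW = rng.normal(scale=np.sqrt(dt))            # system (diffusion) noise
    G[i] = G[i - 1] + (-k * (G[i - 1] - Gb) + meal(t)) * dt + sigma_w * dW

y = G + sigma_e * rng.normal(size=n_steps)        # noisy CGM-like observations
print(y[:5])
```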
Murado, M A; Prieto, M A
2013-09-01
NOEC and LOEC (no and lowest observed effect concentrations, respectively) are toxicological concepts derived from analysis of variance (ANOVA), a not very sensitive method that produces ambiguous results and does not provide confidence intervals (CI) of its estimates. For a long time, despite the abundant criticism that such concepts have raised, the field of ecotoxicology has been reluctant to abandon them (two possible reasons will be discussed), adducing the difficulty of clear alternatives. However, this work proves that a debugged dose-response (DR) modeling, through explicit algebraic equations, enables two simple options to accurately calculate the CI of substantially lower doses than NOEC. Both ANOVA and DR analyses are affected by the experimental error, response profile, number of observations and experimental design. The study of these effects--analytically complex and experimentally unfeasible--was carried out using systematic simulations with realistic data, including different error levels. Results revealed the weakness of the NOEC and LOEC notions, confirmed the feasibility of the proposed alternatives and allowed us to discuss the--often violated--conditions that minimize the CI of the parametric estimates from DR assays. In addition, a table was developed providing the experimental design that minimizes the parametric CI for a given set of working conditions. This makes it possible to reduce the experimental effort and to avoid the inconclusive results that are frequently obtained from intuitive experimental plans. Copyright © 2013 Elsevier B.V. All rights reserved.
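A minimal version of the dose-response alternative to NOEC/LOEC is a parametric fit with confidence intervals, for example a log-logistic model fitted by nonlinear least squares as sketched below. The model form, doses, and error level are assumed; the paper's analysis uses its own explicit algebraic DR equations and simulation design.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a log-logistic dose-response model and report approximate (Wald)
# confidence intervals for its parameters. Data are simulated with a
# realistic multiplicative error level.
def dr_model(dose, top, ec50, slope):
    return top / (1.0 + (dose / ec50) ** slope)

rng = np.random.default_rng(6)
dose = np.repeat(np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0]), 4)  # 4 replicates
resp = dr_model(dose, 1.0, 5.0, 1.5) * (1 + 0.05 * rng.normal(size=dose.size))

popt, pcov = curve_fit(dr_model, dose, resp, p0=[1.0, 1.0, 1.0],
                       bounds=(0, np.inf))        # keep parameters positive
se = np.sqrt(np.diag(pcov))
ci = np.column_stack([popt - 1.96 * se, popt + 1.96 * se])   # ~95% intervals
print(popt, ci)
```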
Short-term ensemble radar rainfall forecasts for hydrological applications
NASA Astrophysics Data System (ADS)
Codo de Oliveira, M.; Rico-Ramirez, M. A.
2016-12-01
Flooding is a very common natural disaster around the world, putting local populations and economies at risk. Forecasting floods several hours ahead and issuing warnings are of main importance to permit proper response in emergency situations. However, it is important to know the uncertainties related to rainfall forecasting in order to produce more reliable forecasts. Nowcasting models (short-term rainfall forecasts) are able to produce high spatial and temporal resolution predictions that are useful in hydrological applications. Nonetheless, they are subject to uncertainties mainly due to the nowcasting model used, errors in radar rainfall estimation, temporal development of the velocity field and to the fact that precipitation processes such as growth and decay are not taken into account. In this study an ensemble generation scheme using rain gauge data as a reference to estimate radar errors is used to produce forecasts with up to 3 h lead-time. The ensembles try to assess in a realistic way the residual uncertainties that remain even after correction algorithms are applied to the radar data. The ensembles produced are compared with a stochastic ensemble generator. Furthermore, the rainfall forecast output was used as an input in a hydrodynamic sewer network model and also in a hydrological model for catchments of different sizes in northern England. A comparative analysis was carried out to assess how the radar uncertainties propagate into these models. The first named author is grateful to CAPES - Ciencia sem Fronteiras for funding this PhD research.
Bias in error estimation when using cross-validation for model selection.
Varma, Sudhir; Simon, Richard
2006-02-23
Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids while Leave-One-Out-CV (LOOCV) was used for the SVM. Independent test data was created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, where an inner CV loop is used to perform the tuning of the parameters while an outer CV is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for the Shrunken Centroid with the optimal parameters was less than 30% on 18.5% of simulated training datasets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent testing set for both Shrunken Centroids and SVM classifiers for "null" and "non-null" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating the true error of a classifier developed using a well-defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
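The nested cross-validation procedure evaluated above can be reproduced in a few lines on a "null" dataset, where an honest error estimate should sit near 50%. The classifier, parameter grid, and fold counts below are illustrative choices, not the exact settings of the study.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, cross_val_score

# Nested cross-validation on a "null" dataset: features carry no class
# information, so an unbiased error estimate should be close to 50%.
rng = np.random.default_rng(7)
X = rng.normal(size=(60, 500))          # small-sample, high-dimensional data
y = np.array([0, 1] * 30)               # labels unrelated to X

inner = GridSearchCV(SVC(kernel="linear"),
                     param_grid={"C": [0.01, 0.1, 1, 10]}, cv=5)

# Outer CV evaluates the whole tuning procedure, removing the tuning bias.
outer_scores = cross_val_score(inner, X, y, cv=5)
print("nested CV error:", 1 - outer_scores.mean())   # ~0.5 for null data

# Naive (biased) alternative: report the best inner CV score directly.
inner.fit(X, y)
print("biased CV error:", 1 - inner.best_score_)     # typically optimistic
```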
Parallel computers - Estimate errors caused by imprecise data
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne
1991-01-01
A new approach to the problem of estimating errors caused by imprecise data is proposed in the context of software engineering. A software device is used to produce an ideal solution to the problem, in which the computer is capable of computing the errors of arbitrary programs. The software engineering aspect of this problem is to describe a device for computing the error estimates in software terms and then to provide precise numbers with error estimates to the user. The feasibility of a program capable of computing both a quantity and its error estimate over the range of possible measurement errors is demonstrated.
Nerot, A; Skalli, W; Wang, X
2016-10-03
Recent progress in 3D scanning technologies allows easy access to the 3D human body envelope. To create personalized human models with an articulated linkage for realistic re-posturing and motion analyses, an accurate estimation of internal skeleton points, including joint centers, from the external envelope is required. For this research project, 3D reconstructions of both internal skeleton and external envelope from low dose biplanar X-rays of 40 male adults were obtained. Using the principal component analysis (PCA) technique, a low-dimensional dataset was used to predict internal points of the upper body from the trunk envelope. A least squares method was used to find PC scores that fit the PCA-based model to the envelope of a new subject. To validate the proposed approach, estimated internal points were evaluated using a leave-one-out (LOO) procedure, i.e. successively considering each individual from our dataset as an extra-subject. In addition, different methods were proposed to reduce the variability in data and improve the performance of the PCA-based prediction. The best method was considered as the one providing the smallest errors between estimated and reference internal points, with an average error of 8.3 mm anterior-posteriorly, 6.7 mm laterally and 6.5 mm vertically. As the proposed approach relies on few or no bony landmarks, it could be easily applicable and generalizable to surface scans from any device. Combined with automatic body scanning techniques, this study could potentially constitute a new step towards automatic generation of external/internal subject-specific manikins. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Lal Shrestha, Durga; Solomatine, Dimitri
2010-05-01
This research presents an extension of UNEEC (Uncertainty Estimation based on Local Errors and Clustering, Shrestha and Solomatine, 2006, 2008 & Solomatine and Shrestha, 2009) method in the direction of explicit inclusion of parameter uncertainty. The UNEEC method assumes that there is an optimal model and the residuals of the model can be used to assess the uncertainty of the model prediction. It is assumed that all sources of uncertainty including input, parameter and model structure uncertainty are explicitly manifested in the model residuals. In this research, these assumptions are relaxed, and the UNEEC method is extended to consider parameter uncertainty as well (abbreviated as UNEEC-P). In UNEEC-P, first we use Monte Carlo (MC) sampling in parameter space to generate N model realizations (each of which is a time series), estimate the prediction quantiles based on the empirical distribution functions of the model residuals considering all the residual realizations, and only then apply the standard UNEEC method that encapsulates the uncertainty of a hydrologic model (expressed by quantiles of the error distribution) in a machine learning model (e.g., ANN). UNEEC-P is applied first to a linear regression model of synthetic data, and then to a real case study of forecasting inflow to Lake Lugano in northern Italy. The inflow forecasting model is a stochastic heteroscedastic model (Pianosi and Soncini-Sessa, 2009). The preliminary results show that the UNEEC-P method produces wider uncertainty bounds, which is consistent with the fact that the method considers also parameter uncertainty of the optimal model. In the future, the UNEEC method will be further extended to consider input and structure uncertainty, which will provide more realistic estimation of model predictions.
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-11-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
NASA Astrophysics Data System (ADS)
Le Pichon, Alexis; Ceranna, Lars; Taillepied, Doriane
2015-04-01
To monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), a dedicated network is being deployed. Multi-year observations recorded by the International Monitoring System (IMS) infrasound network confirm that its detection capability is highly variable in space and time. Today, numerical modeling techniques provide a basis to better understand the role of different factors describing the source and the atmosphere that influence propagation predictions. Previous studies estimated the radiated source energy from remote observations using a frequency-dependent attenuation relation and state-of-the-art specifications of the stratospheric wind. In order to account for a realistic description of the dynamic structure of the atmosphere, model predictions are further enhanced by wind and temperature error distributions as measured in the framework of the ARISE project (http://arise-project.eu/). In the context of the future verification of the CTBT, these predictions quantify uncertainties in the spatial and temporal variability of the IMS infrasound network performance in higher resolution, and will be helpful for the design and for prioritizing maintenance of any infrasound monitoring network.
NASA Astrophysics Data System (ADS)
Le Pichon, Alexis; Blanc, Elisabeth; Rüfenacht, Rolf; Kämpfer, Niklaus; Keckhut, Philippe; Hauchecorne, Alain; Ceranna, Lars; Pilger, Christoph; Ross, Ole
2014-05-01
To monitor compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT), a dedicated network is being deployed. Multi-year observations recorded by the International Monitoring System (IMS) infrasound network confirm that its detection capability is highly variable in space and time. Today, numerical modeling techniques provide a basis to better understand the role of different factors describing the source and the atmosphere that influence propagation predictions. Previous studies estimated the radiated source energy from remote observations using a frequency-dependent attenuation relation and state-of-the-art specifications of the stratospheric wind. In order to account for a realistic description of the dynamic structure of the atmosphere, model predictions are further enhanced by wind and temperature error distributions as measured in the framework of the ARISE project (http://arise-project.eu/). In the context of the future verification of the CTBT, these predictions quantify uncertainties in the spatial and temporal variability of the IMS infrasound network performance in higher resolution, and will be helpful for the design and for prioritizing maintenance of any infrasound monitoring network.
Office Skills: Measuring Typewriting Output.
ERIC Educational Resources Information Center
Petersen, Lois E.; Kruk, Leonard B.
1978-01-01
The advent of word processing centers has provided typewriting teachers with an alternative measurement system that, instead of penalizing errors, grades students according to Usable Lines Produced (ULP). The ULP system is job-oriented and promotes realistic office standards in typewriting productivity. (MF)
Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario
NASA Astrophysics Data System (ADS)
Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.
1997-06-01
In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets. The signals are contaminated by spatially correlated noise. The differential MUSIC method has been proposed to estimate the DOAs in such a scenario. This method estimates the 'noise subspace' in order to estimate the DOAs. However, the 'noise subspace' estimate has to be updated as and when new data become available. In order to save computational costs, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) Noise subspace estimation is done by QR decomposition of the difference matrix which is formed from the data covariance matrix. Thus, as compared to standard eigen-decomposition based methods which require O(N^3) computations, the proposed method requires only O(N^2) computations. (2) The noise subspace is updated by updating the QR decomposition. (3) The proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. In order to achieve this, the proposed nonlinear-system/linear-measurement extended Kalman filter is applied. Computer simulation results are also presented to support the theory.
Zollanvari, Amin; Dougherty, Edward R
2014-06-01
The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
NASA Astrophysics Data System (ADS)
Saide, P. E.; Steinhoff, D.; Kosovic, B.; Weil, J.; Smith, N.; Blewitt, D.; Delle Monache, L.
2017-12-01
A wide variety of methods have been proposed and used to estimate methane emissions from oil and gas production by using air composition and meteorology observations in conjunction with dispersion models. Although there has been some verification of these methodologies using controlled releases and concurrent atmospheric measurements, it is difficult to assess the accuracy of these methods for more realistic scenarios considering factors such as terrain, emissions from multiple components within a well pad, and time-varying emissions representative of typical operations. In this work we use a large-eddy simulation (LES) to generate controlled but realistic synthetic observations, which can be used to test multiple source term estimation methods, also known as an Observing System Simulation Experiment (OSSE). The LES is based on idealized simulations of the Weather Research & Forecasting (WRF) model at 10 m horizontal grid-spacing covering an 8 km by 7 km domain with terrain representative of a region located in the Barnett shale. Well pads are set up in the domain following a realistic distribution and emissions are prescribed every second for the components of each well pad (e.g., chemical injection pump, pneumatics, compressor, tanks, and dehydrator) using a simulator driven by oil and gas production volume, composition and realistic operational conditions. The system is set up to allow assessments under different scenarios such as normal operations, during liquids unloading events, or during other prescribed operational upset events. Methane and meteorology model output are sampled following the specifications of the emission estimation methodologies and considering typical instrument uncertainties, resulting in realistic observations (see Figure 1). We will show the evaluation of several emission estimation methods including the EPA Other Test Method 33A and estimates using the EPA AERMOD regulatory model. We will also show source estimation results from advanced methods such as variational inverse modeling, and Bayesian inference and stochastic sampling techniques. Future directions including other types of observations, other hydrocarbons being considered, and assessment of additional emission estimation methods will be discussed.
Scanning linear estimation: improvements over region of interest (ROI) methods
NASA Astrophysics Data System (ADS)
Kupinski, Meredith K.; Clarkson, Eric W.; Barrett, Harrison H.
2013-03-01
In tomographic medical imaging, a signal activity is typically estimated by summing voxels from a reconstructed image. We introduce an alternative estimation scheme that operates on the raw projection data and offers a substantial improvement, as measured by the ensemble mean-square error (EMSE), when compared to using voxel values from a maximum-likelihood expectation-maximization (MLEM) reconstruction. The scanning-linear (SL) estimator operates on the raw projection data and is derived as a special case of maximum-likelihood estimation with a series of approximations to make the calculation tractable. The approximated likelihood accounts for background randomness, measurement noise and variability in the parameters to be estimated. When signal size and location are known, the SL estimate of signal activity is unbiased, i.e. the average estimate equals the true value. By contrast, unpredictable bias arising from the null functions of the imaging system affect standard algorithms that operate on reconstructed data. The SL method is demonstrated for two different tasks: (1) simultaneously estimating a signal’s size, location and activity; (2) for a fixed signal size and location, estimating activity. Noisy projection data are realistically simulated using measured calibration data from the multi-module multi-resolution small-animal SPECT imaging system. For both tasks, the same set of images is reconstructed using the MLEM algorithm (80 iterations), and the average and maximum values within the region of interest (ROI) are calculated for comparison. This comparison shows dramatic improvements in EMSE for the SL estimates. To show that the bias in ROI estimates affects not only absolute values but also relative differences, such as those used to monitor the response to therapy, the activity estimation task is repeated for three different signal sizes.
Willem W.S. van Hees
2002-01-01
Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...
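A hedged sketch of the comparison described above: a ratio-of-means estimate with both a linearized (Taylor-series) standard error and a bootstrap approximation, computed on simulated plot data. The variable meanings and distributions are assumptions for illustration; they are not the southeast Alaska inventory data.

```python
import numpy as np

# Ratio-of-means (ROM) estimator with a linearized standard error and a
# bootstrap approximation of the same standard error.
rng = np.random.default_rng(8)
x = rng.gamma(shape=2.0, scale=10.0, size=120)   # e.g. plot-level attribute of interest
z = rng.uniform(0.5, 1.5, size=120)              # e.g. plot-level auxiliary/size variable

n = x.size
rom = x.mean() / z.mean()

# Traditional linearized (Taylor-series) variance estimate for the ratio.
resid = x - rom * z
se_lin = np.sqrt(np.sum(resid ** 2) / (n - 1) / n) / z.mean()

# Bootstrap approximation: resample plots with replacement.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(x[idx].mean() / z[idx].mean())
se_boot = np.std(boot, ddof=1)

print(rom, se_lin, se_boot)
```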
A-posteriori error estimation for second order mechanical systems
NASA Astrophysics Data System (ADS)
Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter
2012-06-01
One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended for error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that the estimator is independent of the reduction technique used. Therefore, it can be used for moment-matching-based, Gramian-matrix-based or modal-based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
Influence of measurement error on Maxwell's demon
NASA Astrophysics Data System (ADS)
Sørdal, Vegard; Bergli, Joakim; Galperin, Y. M.
2017-06-01
In any general cycle of measurement, feedback, and erasure, the measurement will reduce the entropy of the system when information about the state is obtained, while erasure, according to Landauer's principle, is accompanied by a corresponding increase in entropy due to the compression of logical and physical phase space. The total process can in principle be fully reversible. A measurement error reduces the information obtained and the entropy decrease in the system. The erasure still gives the same increase in entropy, and the total process is irreversible. Another consequence of measurement error is that a bad feedback is applied, which further increases the entropy production if the proper protocol adapted to the expected error rate is not applied. We consider the effect of measurement error on a realistic single-electron box Szilard engine, and we find the optimal protocol for the cycle as a function of the desired power P and error ε.
Quantifying errors in trace species transport modeling.
Prather, Michael J; Zhu, Xin; Strahan, Susan E; Steenrod, Stephen D; Rodriguez, Jose M
2008-12-16
One expectation when computationally solving an Earth system model is that a correct answer exists, that with adequate physical approximations and numerical methods our solutions will converge to that single answer. With such hubris, we performed a controlled numerical test of the atmospheric transport of CO2 using 2 models known for accurate transport of trace species. Resulting differences were unexpectedly large, indicating that in some cases, scientific conclusions may err because of lack of knowledge of the numerical errors in tracer transport models. By doubling the resolution, thereby reducing numerical error, both models show some convergence to the same answer. Now, under realistic conditions, we identify a practical approach for finding the correct answer and thus quantifying the advection error.
Optimal estimation of large structure model errors. [in Space Shuttle controller design
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1979-01-01
In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
NASA Technical Reports Server (NTRS)
Hartman, Brian Davis
1995-01-01
A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal resolution of solutions obtained from standard sequential filtering methods and process noise sequential filtering methods shows that the accuracy is significantly improved using process noise. The results show that the positional accuracy of the orbit is improved as well. The temporal resolution of the resulting solutions are detailed, and conclusions drawn about the results. Benefits and drawbacks of using process noise filtering in this type of scenario are also identified.
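A toy sketch of the contrast described here: the same scalar Kalman filter run once with zero process noise (the parameter treated as a constant) and once with a small random-walk process noise, so that the estimate can follow a temporal variation. The dynamics, noise levels and observation model are invented for illustration and are unrelated to the LAGEOS filter.

```python
# Minimal sketch (toy scalar example, not the LAGEOS filter): estimating a slowly
# drifting parameter from noisy observations with and without process noise.
import numpy as np

rng = np.random.default_rng(0)
T = 200
truth = 1.0 + 0.3 * np.sin(2 * np.pi * np.arange(T) / 100.0)  # time-varying "geodynamic" parameter
obs = truth + rng.normal(0.0, 0.05, T)                         # direct noisy observations

def kalman_constant_vs_process_noise(z, R=0.05 ** 2, Q=1e-4):
    """Run two scalar Kalman filters: Q = 0 (parameter treated as constant) and Q > 0."""
    out = {}
    for name, q in [("constant", 0.0), ("process_noise", Q)]:
        x, P = 0.0, 1.0
        est = np.empty_like(z)
        for k, zk in enumerate(z):
            P = P + q                   # prediction: random-walk model x_k = x_{k-1} + w, w ~ N(0, q)
            g = P / (P + R)             # Kalman gain for H = 1
            x = x + g * (zk - x)        # measurement update
            P = (1.0 - g) * P
            est[k] = x
        out[name] = est
    return out

est = kalman_constant_vs_process_noise(obs)
for name, e in est.items():
    print(name, "RMS error:", np.sqrt(np.mean((e - truth) ** 2)))
```

With Q = 0 the gain decays to zero and the estimate lags the true variation, mirroring the limited temporal resolution of the non-stochastic filter described above.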
Pinzon Morales, Ruben Dario; Hirata, Yutaka
2016-12-20
Motor learning in the cerebellum is believed to entail plastic changes at synapses between parallel fibers and Purkinje cells, induced by the teaching signal conveyed in the climbing fiber (CF) input. Despite the abundant research on the cerebellum, the nature of this signal is still a matter of debate. Two types of movement error information have been proposed to be plausible teaching signals: sensory error (SE) and motor command error (ME); however, their plausibility has not been tested in the real world. Here, we conducted a comparison of different types of CF teaching signals in real-world engineering applications by using a realistic neuronal network model of the cerebellum. We employed a direct current motor (simple task) and a two-wheeled balancing robot (difficult task). We demonstrate that SE, ME or a linear combination of the two is sufficient to yield comparable performance in a simple task. When the task is more difficult, although SE slightly outperformed ME, these types of error information are all able to adequately control the robot. We categorize granule cells according to their inputs and the error signal, revealing that different granule cells are preferentially engaged for SE, ME or their combination. Thus, unlike previous theoretical and simulation studies that support either SE or ME, it is demonstrated for the first time in a real-world engineering application that both SE and ME are adequate as the CF teaching signal in a realistic computational cerebellar model, even when the control task is as difficult as stabilizing a two-wheeled balancing robot.
Eppenhof, Koen A J; Pluim, Josien P W
2018-04-01
Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.
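The following PyTorch sketch shows the kind of patch-pair 3-D CNN regressor the abstract describes; the layer sizes, patch size and pooling scheme are assumptions for illustration and are not the authors' network.

```python
# Illustrative sketch only (assumed architecture, not the authors' network): a small
# 3-D CNN that regresses a scalar registration error from a pair of image patches.
import torch
import torch.nn as nn

class PatchErrorRegressor(nn.Module):
    def __init__(self, channels=(2, 16, 32, 64)):
        super().__init__()
        layers = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool3d(2)]
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                  nn.Linear(channels[-1], 1))

    def forward(self, fixed_patch, moving_patch):
        # Stack the two patches as channels: (N, 2, D, H, W)
        x = torch.cat([fixed_patch, moving_patch], dim=1)
        return self.head(self.features(x)).squeeze(1)  # predicted registration error (e.g. mm)

# Training would use patches from synthetically deformed image pairs, where the applied
# deformation provides the known voxel-wise registration error target.
model = PatchErrorRegressor()
fixed = torch.randn(4, 1, 32, 32, 32)
moving = torch.randn(4, 1, 32, 32, 32)
loss = nn.functional.mse_loss(model(fixed, moving), torch.rand(4))
loss.backward()
```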
Single neuron modeling and data assimilation in BNST neurons
NASA Astrophysics Data System (ADS)
Farsian, Reza
Neurons, although tiny in size, are vastly complicated systems, which are responsible for the most basic yet essential functions of any nervous system. Even the simplest models of single neurons are usually high dimensional, nonlinear, and contain many parameters and states which are unobservable in a typical neurophysiological experiment. One of the most fundamental problems in experimental neurophysiology is the estimation of these parameters and states, since knowing their values is essential in identification, model construction, and forward prediction of biological neurons. Common methods of parameter and state estimation do not perform well for neural models due to their high dimensionality and nonlinearity. In this dissertation, two alternative approaches for parameter and state estimation of biological neurons have been demonstrated: dynamical parameter estimation (DPE) and a Markov Chain Monte Carlo (MCMC) method. The first method uses elements of chaos control and synchronization theory for parameter and state estimation. MCMC is a statistical approach which uses a path integral formulation to evaluate a mean and an error bound for these unobserved parameters and states. These methods have been applied to biological neurons in the Bed Nucleus of the Stria Terminalis (BNST) of rats. States and parameters of the neurons were estimated with both approaches, and their values were used to recreate a realistic model and successfully predict the behavior of the neurons. The knowledge of biological parameters can ultimately provide a better understanding of the internal dynamics of a neuron in order to build robust models of neuron networks.
Balancing the stochastic description of uncertainties as a function of hydrologic model complexity
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.
2016-12-01
Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. The latter can include output uncertainty only, if the model is computationally-expensive, or, with simpler models, it can separately account for different sources of errors like in the inputs and the structure of the model.
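To make the autoregressive output-error description concrete, here is a minimal Python sketch of the Gaussian log-likelihood for residuals following a first-order autoregressive process; the parameterization is a common textbook form and not necessarily the exact formulation used across the applications cited above.

```python
# Minimal sketch (assumed formulation): Gaussian log-likelihood of observed runoff
# given model output, with residuals described as a first-order autoregressive process.
import numpy as np

def ar1_loglik(obs, sim, sigma, phi):
    """Residuals e_t = obs_t - sim_t with e_t = phi * e_{t-1} + eta_t, eta_t ~ N(0, sigma^2);
    the stationary marginal of the first residual is N(0, sigma^2 / (1 - phi^2))."""
    e = np.asarray(obs, float) - np.asarray(sim, float)
    var_inno = sigma ** 2
    var_stat = var_inno / (1.0 - phi ** 2)
    ll = -0.5 * (np.log(2 * np.pi * var_stat) + e[0] ** 2 / var_stat)
    innov = e[1:] - phi * e[:-1]
    ll += -0.5 * np.sum(np.log(2 * np.pi * var_inno) + innov ** 2 / var_inno)
    return ll

# In a Bayesian setup, sigma and phi would be inferred jointly with the parameters of the
# hydrological model that produces `sim`.
rng = np.random.default_rng(1)
sim = np.sin(np.linspace(0, 10, 500)) + 2.0
obs = sim + rng.normal(0, 0.1, 500)
print(ar1_loglik(obs, sim, sigma=0.1, phi=0.5))
```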
NASA Astrophysics Data System (ADS)
Gueddana, Amor; Attia, Moez; Chatta, Rihab
2015-03-01
In this work, we study the error sources behind the imperfect linear optical quantum components composing a non-deterministic quantum CNOT gate model, which performs the CNOT function with a success probability of 4/27 and uses a double encoding technique to represent photonic qubits at the control and the target. We generalize this model to an abstract probabilistic CNOT version and determine the realizability limits depending on a realistic range of the errors. Finally, we discuss physical constraints allowing the implementation of the Asymmetric Partially Polarizing Beam Splitter (APPBS), which is at the heart of correctly realizing the CNOT function.
BPM CALIBRATION INDEPENDENT LHC OPTICS CORRECTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
CALAGA,R.; TOMAS, R.; GIOVANNOZZI, M.
2007-06-25
The tight mechanical aperture for the LHC imposes severe constraints on both the beta and dispersion beating. Robust techniques to compensate these errors are critical for operation of high intensity beams in the LHC. We present simulations using realistic errors from magnet measurements and alignment tolerances in the presence of BPM noise. The correction study reveals that the use of BPM calibration-independent and model-independent observables is a key ingredient to accomplish optics correction. Experiments at RHIC to verify the algorithms for optics correction are also presented.
Charmed-meson decay constants in three-flavor lattice QCD.
Aubin, C; Bernard, C; Detar, C; Di Pierro, M; Freeland, E D; Gottlieb, Steven; Heller, U M; Hetrick, J E; El-Khadra, A X; Kronfeld, A S; Levkova, L; Mackenzie, P B; Menscher, D; Maresca, F; Nobes, M; Okamoto, M; Renner, D; Simone, J; Sugar, R; Toussaint, D; Trottier, H D
2005-09-16
We present the first lattice QCD calculation with realistic sea quark content of the D+-meson decay constant f(D+). We use the MILC Collaboration's publicly available ensembles of lattice gauge fields, which have a quark sea with two flavors (up and down) much lighter than a third (strange). We obtain f(D+) = 201 ± 3 ± 17 MeV, where the errors are statistical and a combination of systematic errors. We also obtain f(Ds) = 249 ± 3 ± 16 MeV for the Ds meson.
Implementation and simulations of the sphere solution in FAST
NASA Astrophysics Data System (ADS)
Murgolo, F. P.; Schirone, M. G.; Lattanzi, M.; Bernacca, P. L.
1989-06-01
The details of the implementation of the sphere solution software in the Fundamental Astronomy by Space Techniques (FAST) consortium are described. The simulation results for realistic data sets, both with and without grid-step errors, are given. Expected errors on the astrometric parameters of the primary stars and the precision of the reference great circle zero points are provided as a function of mission duration. The design matrix, the diagrams of the context processor, and the processors' experimental results are given.
Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe
2013-01-01
Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm² to 30 cm², regardless of the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.
Sokolenko, Stanislav; Aucoin, Marc G
2015-09-04
The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. The presented algorithm was able to identify systematic errors as small as 2.5 % under a wide range of conditions. Both the simulation framework and error correction method represent examples of time-course analysis that can be applied to further developments in ¹H-NMR methodology and the more general application of quantitative metabolomics.
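A simplified Python sketch of the core idea (smooth each metabolite trend, compute percent deviations, and flag time points where the median deviation across all metabolites is large); the polynomial smoother and the threshold value are assumptions and do not reproduce the published algorithm.

```python
# Simplified sketch of the idea: flag sample-wide systematic errors (e.g. dilution) as
# time points where the median percent deviation across all metabolites is large.
import numpy as np

def detect_systematic_error(time, conc, degree=3, threshold=2.5):
    """conc: (n_metabolites, n_timepoints) concentration matrix; threshold in percent."""
    n_met, _ = conc.shape
    pct_dev = np.empty_like(conc, dtype=float)
    for i in range(n_met):
        coef = np.polyfit(time, conc[i], degree)   # stand-in for nonparametric smoothing
        fit = np.polyval(coef, time)
        pct_dev[i] = 100.0 * (conc[i] - fit) / fit
    median_dev = np.median(pct_dev, axis=0)        # deviation shared by all metabolites
    return median_dev, np.abs(median_dev) > threshold

rng = np.random.default_rng(2)
t = np.linspace(0, 48, 15)                         # hours
conc = np.outer(rng.uniform(1, 10, 30), np.exp(-t / 30)) * (1 + rng.normal(0, 0.01, (30, 15)))
conc[:, 7] *= 0.95                                 # simulated 5 % dilution of one sample
dev, flagged = detect_systematic_error(t, conc)
print("flagged time points:", np.where(flagged)[0])
```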
MERLIN: a Franco-German LIDAR space mission for atmospheric methane
NASA Astrophysics Data System (ADS)
Bousquet, P.; Ehret, G.; Pierangelo, C.; Marshall, J.; Bacour, C.; Chevallier, F.; Gibert, F.; Armante, R.; Crevoisier, C. D.; Edouart, D.; Esteve, F.; Julien, E.; Kiemle, C.; Alpers, M.; Millet, B.
2017-12-01
The Methane Remote Sensing Lidar Mission (MERLIN), currently in phase C, is a joint cooperation between France and Germany on the development, launch and operation of a space LIDAR dedicated to the retrieval of total weighted methane (CH4) atmospheric columns. Atmospheric methane is the second most potent anthropogenic greenhouse gas, contributing 20% to climate radiative forcing but also playing an important role in atmospheric chemistry as a precursor of tropospheric ozone and low-stratosphere water vapour. Its short lifetime (about 9 years) and the nature and variety of its anthropogenic sources also offer interesting mitigation options with regard to the 2 °C objective of the Paris agreement. For the first time, measurements of atmospheric composition will be performed from space thanks to an IPDA (Integrated Path Differential Absorption) LIDAR (Light Detection And Ranging), with a precision (target ±27 ppb for a 50 km aggregation along the track) and accuracy (target <3.7 ppb at 68%) sufficient to significantly reduce the uncertainties on methane emissions. The very low systematic error target is particularly ambitious compared to current passive methane space missions. It is achievable because of the differential active measurements of MERLIN, which guarantee almost no contamination by aerosols or water vapour cross-sensitivity. As an active mission, MERLIN will deliver global methane weighted columns (XCH4) for all seasons and all latitudes, day and night. Here, we recall the MERLIN objectives and mission characteristics. We also propose an end-to-end error analysis, from the causes of random and systematic errors of the instrument, of the platform and of the data treatment, to the error on methane emissions. To do so, we propose an OSSE analysis (observing system simulation experiment) to estimate the uncertainty reduction on methane emissions brought by MERLIN XCH4. The originality of our inversion system is to transfer both random and systematic errors from the observation space to the flux space, thus providing more realistic error reductions than usually provided in OSSEs using only the random part of errors. Uncertainty reductions are presented using two different atmospheric transport models, TM3 and LMDZ, and compared with the error reduction achieved with the GOSAT passive mission.
Measuring the Lense-Thirring precession using a second Lageos satellite
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Ciufolini, I.
1989-01-01
A complete numerical simulation and error analysis was performed for the proposed experiment with the objective of establishing an accurate assessment of the feasibility and the potential accuracy of the measurement of the Lense-Thirring precession. Consideration was given to identifying the error sources which limit the accuracy of the experiment and proposing procedures for eliminating or reducing the effect of these errors. Analytic investigations were conducted to study the effects of major error sources with the objective of providing error bounds on the experiment. The analysis of realistic simulated data is used to demonstrate that satellite laser ranging of two Lageos satellites, orbiting with supplemental inclinations, collected for a period of 3 years or more, can be used to verify the Lense-Thirring precession. A comprehensive covariance analysis for the solution was also developed.
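As a reminder of the idea behind the two-satellite experiment (a textbook first-order picture, not the covariance analysis performed in the study): the classical nodal precession driven by the Earth's oblateness depends on cos i, so a second LAGEOS in a supplementary-inclination orbit has an equal and opposite classical drift, while the Lense-Thirring drift has the same sign for both satellites; summing the two measured node rates therefore cancels the dominant classical term.

```latex
% First-order nodal rates (oblateness and Lense-Thirring terms):
\dot{\Omega}_{J_2} = -\tfrac{3}{2}\, n\, J_2 \Bigl(\tfrac{R_E}{a(1-e^2)}\Bigr)^{2} \cos i ,
\qquad
\dot{\Omega}_{\mathrm{LT}} = \frac{2\, G J_{\oplus}}{c^{2} a^{3} (1-e^2)^{3/2}} .
% For equal a, e and supplementary inclinations (i_2 = 180^\circ - i_1), \cos i_2 = -\cos i_1, so
\dot{\Omega}_{1} + \dot{\Omega}_{2} \;\approx\; 2\, \dot{\Omega}_{\mathrm{LT}} .
```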
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of the lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations with differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
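A short Python sketch of the composite-estimate idea: blend forward forecasts and reverse-series backcasts across a gap of missing daily flows. The linearly varying weights are an assumption for illustration; the study's actual weighting scheme is not specified in the abstract.

```python
# Sketch of a composite estimate for a gap of missing daily flows: blend forward
# forecasts and reverse-series backcasts with weights that vary linearly across the gap.
import numpy as np

def composite_estimate(forecast, backcast):
    """forecast[k], backcast[k]: estimates of log-flow for day k of an L-day gap,
    produced from the record before and after the gap, respectively."""
    L = len(forecast)
    w = np.arange(1, L + 1) / (L + 1.0)     # weight on the backcast grows toward the end of the gap
    return (1.0 - w) * np.asarray(forecast) + w * np.asarray(backcast)

# Example: a 10-day gap; near the start the forecast dominates, near the end the backcast.
fc = np.linspace(2.0, 2.5, 10)              # e.g. TFN lead-1..lead-10 forecasts of log flow
bc = np.linspace(2.1, 2.4, 10)              # e.g. ARIMA backcasts of the reverse-ordered series
print(composite_estimate(fc, bc))
```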
NASA Technical Reports Server (NTRS)
Morris, A. Terry
1999-01-01
This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
Stochastic goal-oriented error estimation with memory
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Marotzke, Jochem; Korn, Peter
2017-11-01
We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
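The Python sketch below propagates a wind-speed measurement error through a turbine power curve; the tabulated curve and Weibull wind climate are generic illustrations (not one of the paper's 28 Lagrange-fitted curves), so the resulting percentage need not reproduce the 5% figure reported above.

```python
# Sketch: propagate a wind-speed measurement error through a turbine power curve.
import numpy as np

ws_tab = np.array([0.0, 3.0, 5.0, 7.0, 9.0, 11.0, 12.0, 25.0])               # wind speed, m/s
pw_tab = np.array([0.0, 0.0, 120.0, 400.0, 900.0, 1700.0, 2000.0, 2000.0])   # power, kW

def power(ws):
    """Interpolated power curve with cut-out above 25 m/s."""
    return np.where(ws > 25.0, 0.0, np.interp(ws, ws_tab, pw_tab))

rng = np.random.default_rng(3)
wind = rng.weibull(2.0, 8760) * 8.0     # one year of hourly wind speeds (assumed Weibull climate)
speed_err = 0.10                        # assumed 10 % wind-speed measurement error

annual = power(wind).sum()              # kWh, since hourly kW values are summed
low = power(wind * (1.0 - speed_err)).sum()
high = power(wind * (1.0 + speed_err)).sum()
print(f"annual energy: {annual:.0f} kWh, relative error band: "
      f"{(low - annual) / annual:+.1%} / {(high - annual) / annual:+.1%}")
```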
A new numerical benchmark of a freshwater lens
NASA Astrophysics Data System (ADS)
Stoeckl, L.; Walther, M.; Graf, T.
2016-04-01
A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses was lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.
Romps, David M.
2016-03-01
Convective entrainment is a process that is poorly represented in existing convective parameterizations. By many estimates, convective entrainment is the leading source of error in global climate models. As a potential remedy, an Eulerian implementation of the Stochastic Parcel Model (SPM) is presented here as a convective parameterization that treats entrainment in a physically realistic and computationally efficient way. Drawing on evidence that convecting clouds comprise air parcels subject to Poisson-process entrainment events, the SPM calculates the deterministic limit of an infinite number of such parcels. For computational efficiency, the SPM groups parcels at each height by their purity, which is a measure of their total entrainment up to that height. This reduces the calculation of convective fluxes to a sequence of matrix multiplications. The SPM is implemented in a single-column model and compared with a large-eddy simulation of deep convection.
A boundedness result for the direct heuristic dynamic programming.
Liu, Feng; Sun, Jian; Si, Jennie; Guo, Wentao; Mei, Shengwei
2012-08-01
Approximate/adaptive dynamic programming (ADP) has been studied extensively in recent years for its potential scalability to solve large state and control space problems, including those involving continuous states and continuous controls. The applicability of ADP algorithms, especially the adaptive critic designs, has been demonstrated in several case studies. Direct heuristic dynamic programming (direct HDP) is one of the ADP algorithms inspired by the adaptive critic designs. It has been shown applicable to industrial scale, realistic and complex control problems. In this paper, we provide a uniform ultimate boundedness (UUB) result for the direct HDP learning controller under mild and intuitive conditions. By using a Lyapunov approach we show that the estimation errors of the learning parameters or the weights in the action and critic networks remain UUB. This result provides a useful controller convergence guarantee for the first time for the direct HDP design. Copyright © 2012 Elsevier Ltd. All rights reserved.
Vanos, J K; Warland, J S; Gillespie, T J; Kenny, N A
2012-11-01
The purpose of this paper is to implement current and novel research techniques in human energy budget estimations to give more accurate and efficient application of models by a variety of users. Using the COMFA model, the conditioning level of an individual is incorporated into overall energy budget predictions, giving more realistic estimations of the metabolism experienced at various fitness levels. Through the use of VO2 reserve estimates, errors are found when an elite athlete is modelled as an unconditioned or a conditioned individual, giving budgets underpredicted significantly by -173 and -123 W m-2, respectively. Such underprediction can result in critical errors regarding heat stress, particularly in highly motivated individuals; thus this revision is critical for athletic individuals. A further improvement in the COMFA model involves improved adaptation of clothing insulation (I_cl), as well as clothing non-uniformity, with changing air temperature (T_a) and metabolic activity (M_act). Equivalent T_a values (for I_cl estimation) are calculated in order to lower the I_cl value with increasing M_act at equal T_a. Furthermore, threshold T_a values are calculated to predict the point at which an individual will change from a uniform I_cl to a segmented I_cl (full ensemble to shorts and a T-shirt). Lastly, improved relative velocity (v_r) estimates were found with a refined equation accounting for the angle of the wind to the direction of body movement. Differences between the original and improved v_r equations increased with higher wind and activity speeds, and as the wind-to-body angle moved away from 90°. Under moderate microclimate conditions, and wind from behind a person, the convective heat loss and skin temperature estimates were 47 W m-2 and 1.7°C higher when using the improved v_r equation. These model revisions improve the applicability and usability of the COMFA energy budget model for subjects performing physical activity in outdoor environments. Application is possible for other similar energy budget models, and within various urban and rural environments.
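A generic Python sketch of a relative air speed that depends on the wind-to-body angle, using the ordinary vector-difference relation; this is an illustration of the kind of refinement described above, not necessarily the exact COMFA revision.

```python
# Generic sketch: relative air speed over a moving person as the vector difference of the
# wind and body velocities, parameterized by the angle between wind direction and movement.
import numpy as np

def relative_velocity(v_wind, v_body, angle_deg):
    """angle_deg = 0 means the wind blows from directly behind, in the direction of movement."""
    theta = np.radians(angle_deg)
    return np.sqrt(v_wind ** 2 + v_body ** 2 - 2.0 * v_wind * v_body * np.cos(theta))

for ang in (0, 90, 180):
    print(ang, "deg:", round(relative_velocity(v_wind=3.0, v_body=1.5, angle_deg=ang), 2), "m/s")
```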
Decorrelation of the true and estimated classifier errors in high-dimensional settings.
Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R
2007-01-01
The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity which refers to the precision of error estimation is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three error estimators commonly used (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known-feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.
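The variance decomposition referred to above takes the standard form below (a textbook identity, written here with estimated error ε̂ and true error ε):

```latex
\operatorname{Var}(\hat{\varepsilon} - \varepsilon)
  = \operatorname{Var}(\hat{\varepsilon}) + \operatorname{Var}(\varepsilon)
    - 2\,\rho\,\sqrt{\operatorname{Var}(\hat{\varepsilon})\,\operatorname{Var}(\varepsilon)} ,
```

so for fixed marginal variances the precision of error estimation degrades directly as the correlation ρ between the true and estimated errors decreases, which is the decorrelation effect emphasized in the abstract.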
Error estimates for ice discharge calculated using the flux gate approach
NASA Astrophysics Data System (ADS)
Navarro, F. J.; Sánchez Gámez, P.
2017-12-01
Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While there are well-established procedures for estimating the errors in velocity, the calculation of the error in the cross-sectional area requires the availability of ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use Operation IceBridge GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice velocities calculated from Sentinel-1 SAR data to get the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
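A Python sketch of a flux-gate discharge estimate with first-order error propagation combining area and velocity errors; the gate geometry, error values and ice density are illustrative assumptions, not values from the study.

```python
# Sketch: flux-gate ice discharge with first-order propagation of thickness and velocity errors.
import numpy as np

RHO_ICE = 917.0  # kg m^-3

def gate_discharge(width_m, thickness_m, velocity_m_a, sigma_thickness_m, sigma_velocity_m_a):
    """Discharge through one gate segment and its 1-sigma uncertainty, in Gt per year."""
    area = width_m * thickness_m
    flux = RHO_ICE * area * velocity_m_a * 1e-12                    # kg -> Gt
    sigma_area = width_m * sigma_thickness_m                        # area error via thickness
    sigma_flux = RHO_ICE * 1e-12 * np.sqrt((velocity_m_a * sigma_area) ** 2 +
                                           (area * sigma_velocity_m_a) ** 2)
    return flux, sigma_flux

segments = [  # (width, thickness, velocity, sigma_thickness, sigma_velocity), all illustrative
    (500.0, 300.0, 120.0, 30.0, 10.0),
    (500.0, 250.0, 90.0, 30.0, 10.0),
]
fluxes, sigmas = zip(*(gate_discharge(*s) for s in segments))
total = sum(fluxes)
sigma_total = np.sqrt(sum(s ** 2 for s in sigmas))                  # assuming independent segments
print(f"ice discharge: {total:.4f} ± {sigma_total:.4f} Gt/yr")
```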
Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.
Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas
2016-11-14
Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
NASA Technical Reports Server (NTRS)
Lang, Christapher G.; Bey, Kim S. (Technical Monitor)
2002-01-01
This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.
Cros, David; Sánchez, Leopoldo; Cochard, Benoit; Samper, Patrick; Denis, Marie; Bouvet, Jean-Marc; Fernández, Jesús
2014-04-01
Explicit pedigree reconstruction by simulated annealing gave reliable estimates of genealogical coancestry in plant species, especially when selfing rate was lower than 0.6, using a realistic number of markers. Genealogical coancestry information is crucial in plant breeding to estimate genetic parameters and breeding values. The approach of Fernández and Toro (Mol Ecol 15:1657-1667, 2006) to estimate genealogical coancestries from molecular data through pedigree reconstruction was limited to species with separate sexes. In this study it was extended to plants, allowing hermaphroditism and monoecy, with possible selfing. Moreover, some improvements were made to take previous knowledge on the population demographic history into account. The new method was validated using simulated and real datasets. Simulations showed that accuracy of estimates was high with 30 microsatellites, with the best results obtained for selfing rates below 0.6. In these conditions, the root mean square error (RMSE) between the true and estimated genealogical coancestry was small (<0.07), although the number of ancestors was overestimated and the selfing rate could be biased. Simulations also showed that linkage disequilibrium between markers and departure from the Hardy-Weinberg equilibrium in the founder population did not affect the efficiency of the method. Real oil palm data confirmed the simulation results, with a high correlation between the true and estimated genealogical coancestry (>0.9) and a low RMSE (<0.08) using 38 markers. The method was applied to the Deli oil palm population for which pedigree data were scarce. The estimated genealogical coancestries were highly correlated (>0.9) with the molecular coancestries using 100 markers. Reconstructed pedigrees were used to estimate effective population sizes. In conclusion, this method gave reliable genealogical coancestry estimates. The strategy was implemented in the software MOLCOANC 3.0.
Kobler, Jan-Philipp; Nuelle, Kathrin; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lueder A; Kotlarski, Jens; Ortmaier, Tobias
2016-03-01
Minimally invasive cochlear implantation is a novel surgical technique which requires highly accurate guidance of a drilling tool along a trajectory from the mastoid surface toward the basal turn of the cochlea. The authors propose a passive, reconfigurable, parallel robot which can be directly attached to bone anchors implanted in a patient's skull, avoiding the need for surgical tracking systems. Prior to clinical trials, methods are necessary to patient specifically optimize the configuration of the mechanism with respect to accuracy and stability. Furthermore, the achievable accuracy has to be determined experimentally. A comprehensive error model of the proposed mechanism is established, taking into account all relevant error sources identified in previous studies. Two optimization criteria to exploit the given task redundancy and reconfigurability of the passive robot are derived from the model. The achievable accuracy of the optimized robot configurations is first estimated with the help of a Monte Carlo simulation approach and finally evaluated in drilling experiments using synthetic temporal bone specimen. Experimental results demonstrate that the bone-attached mechanism exhibits a mean targeting accuracy of [Formula: see text] mm under realistic conditions. A systematic targeting error is observed, which indicates that accurate identification of the passive robot's kinematic parameters could further reduce deviations from planned drill trajectories. The accuracy of the proposed mechanism demonstrates its suitability for minimally invasive cochlear implantation. Future work will focus on further evaluation experiments on temporal bone specimen.
NASA Technical Reports Server (NTRS)
Oreopoulos, L.; Chou, M.-D.; Khairoutdinov, M.; Barker, H. W.; Cahalan, R. F.
2003-01-01
We test the performance of the shortwave (SW) and longwave (LW) Column Radiation Models (CORAMs) of Chou and collaborators with heterogeneous cloud fields from a global single-day dataset produced by NCAR's Community Atmospheric Model with a 2-D CRM installed in each gridbox. The original SW version of the CORAM performs quite well compared to reference Independent Column Approximation (ICA) calculations for boundary fluxes, largely due to the success of a combined overlap and cloud scaling parameterization scheme. The absolute magnitude of errors relative to ICA are even smaller for the LW CORAM which applies similar overlap. The vertical distribution of heating and cooling within the atmosphere is also simulated quite well with daily-averaged zonal errors always below 0.3 K/d for SW heating rates and 0.6 K/d for LW cooling rates. The SW CORAM's performance improves by introducing a scheme that accounts for cloud inhomogeneity. These results suggest that previous studies demonstrating the inaccuracy of plane-parallel models may have unfairly focused on worst scenario cases, and that current radiative transfer algorithms of General Circulation Models (GCMs) may be more capable than previously thought in estimating realistic spatial and temporal averages of radiative fluxes, as long as they are provided with correct mean cloud profiles. However, even if the errors of the particular CORAMs are small, they seem to be systematic, and the impact of the biases can be fully assessed only with GCM climate simulations.
Estimation of Local Orientations in Fibrous Structures With Applications to the Purkinje System
Plank, Gernot; Trayanova, Natalia A.; Vidal, René
2011-01-01
The extraction of the cardiac Purkinje system (PS) from intensity images is a critical step toward the development of realistic structural models of the heart. Such models are important for uncovering the mechanisms of cardiac disease and improving its treatment and prevention. Unfortunately, the manual extraction of the PS is a challenging and error-prone task due to the presence of image noise and numerous fiber junctions. To deal with these challenges, we propose a framework that estimates local fiber orientations with high accuracy and reconstructs the fibers via tracking. Our key contribution is the development of a descriptor for estimating the orientation distribution function (ODF), a spherical function encoding the local geometry of the fibers at a point of interest. The fiber/branch orientations are identified as the modes of the ODFs via spherical clustering and guide the extraction of the fiber centerlines. Experiments on synthetic data evaluate the sensitivity of our approach to image noise, width of the fiber, and choice of the mode detection strategy, and show its superior performance compared to those of the existing descriptors. Experiments on the free-running PS in an MR image also demonstrate the accuracy of our method in reconstructing such sparse fibrous structures. PMID:21335301
Naff, R.L.; Haley, D.F.; Sudicky, E.A.
1998-01-01
In this, the second of two papers concerned with the use of numerical simulation to examine flow and transport parameters in heterogeneous porous media via Monte Carlo methods, results from the transport aspect of these simulations are reported on. Transport simulations contained herein assume a finite pulse input of conservative tracer, and the numerical technique endeavors to realistically simulate tracer spreading as the cloud moves through a heterogeneous medium. Medium heterogeneity is limited to the hydraulic conductivity field, and generation of this field assumes that the hydraulic-conductivity process is second-order stationary. Methods of estimating cloud moments, and the interpretation of these moments, are discussed. Techniques for estimation of large-time macrodispersivities from cloud second-moment data, and for the approximation of the standard errors associated with these macrodispersivities, are also presented. These moment and macrodispersivity estimation techniques were applied to tracer clouds resulting from transport scenarios generated by specific Monte Carlo simulations. Where feasible, moments and macrodispersivities resulting from the Monte Carlo simulations are compared with first- and second-order perturbation analyses. Some limited results concerning the possible ergodic nature of these simulations, and the presence of non-Gaussian behavior of the mean cloud, are reported on as well.
Transfer of uncertainty of space-borne high resolution rainfall products at ungauged regions
NASA Astrophysics Data System (ADS)
Tang, Ling
Hydrologically relevant characteristics of high resolution (˜ 0.25 degree, 3 hourly) satellite rainfall uncertainty were derived as a function of season and location using a six year (2002-2007) archive of National Aeronautics and Space Administration (NASA)'s Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) precipitation data. The Next Generation Radar (NEXRAD) Stage IV rainfall data over the continental United States was used as ground validation (GV) data. A geostatistical mapping scheme was developed and tested for transfer (i.e., spatial interpolation) of uncertainty information from GV regions to the vast non-GV regions by leveraging the error characterization work carried out in the earlier step. The open question explored here was, "If 'error' is defined on the basis of independent ground validation (GV) data, how are error metrics estimated for a satellite rainfall data product without the need for much extensive GV data?" After a quantitative analysis of the spatial and temporal structure of the satellite rainfall uncertainty, a proof-of-concept geostatistical mapping scheme (based on the kriging method) was evaluated. The idea was to understand how realistic the idea of 'transfer' is for the GPM era. It was found that it was indeed technically possible to transfer error metrics from a gauged to an ungauged location for certain error metrics and that a regionalized error metric scheme for GPM may be possible. The uncertainty transfer scheme based on a commonly used kriging method (ordinary kriging) was then assessed further at various timescales (climatologic, seasonal, monthly and weekly), and as a function of the density of GV coverage. The results indicated that if a transfer scheme for estimating uncertainty metrics was finer than seasonal scale (ranging from 3-6 hourly to weekly-monthly), the effectiveness for uncertainty transfer worsened significantly. Next, a comprehensive assessment of different kriging methods for spatial transfer (interpolation) of error metrics was performed. Three kriging methods for spatial interpolation are compared, which are: ordinary kriging (OK), indicator kriging (IK) and disjunctive kriging (DK). Additional comparison with the simple inverse distance weighting (IDW) method was also performed to quantify the added benefit (if any) of using geostatistical methods. The overall performance ranking of the kriging methods was found to be as follows: OK=DK > IDW > IK. Lastly, various metrics of satellite rainfall uncertainty were identified for two large continental landmasses that share many similar Koppen climate zones, United States and Australia. The dependence of uncertainty as a function of gauge density was then investigated. The investigation revealed that only the first and second ordered moments of error are most amenable to a Koppen-type climate type classification in different continental landmasses.
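A minimal ordinary-kriging sketch in Python for interpolating an error metric from gauged grid cells to an ungauged location, the core operation behind the transfer scheme discussed above; the exponential variogram and its parameters are assumptions for illustration, not the fitted variograms from this work.

```python
# Minimal ordinary-kriging sketch for interpolating a satellite rainfall error metric
# from gauged locations to an ungauged target location.
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, rang=200.0):
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / rang))

def ordinary_kriging(xy_obs, z_obs, xy_target, **vario_kw):
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # Ordinary-kriging system in semivariogram form, with the Lagrange-multiplier row/column
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d, **vario_kw)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy_obs - xy_target, axis=1), **vario_kw)
    w = np.linalg.solve(A, b)
    estimate = w[:n] @ z_obs
    kriging_var = w @ b                      # sum_i w_i * gamma_i0 + Lagrange multiplier
    return estimate, kriging_var

rng = np.random.default_rng(4)
xy = rng.uniform(0, 1000, (25, 2))           # gauged locations (km)
z = 0.002 * xy[:, 0] + rng.normal(0, 0.1, 25)   # synthetic error metric with a weak spatial trend
est, var = ordinary_kriging(xy, z, np.array([500.0, 500.0]), sill=0.15, rang=300.0)
print(f"interpolated error metric: {est:.3f} (kriging variance {var:.4f})")
```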
Garcia, Tanya P; Ma, Yanyuan
2017-10-01
We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.
NASA Astrophysics Data System (ADS)
Gherm, Vadim E.; Zernov, Nikolay N.; Strangeways, Hal J.
2011-06-01
It can be important to determine the correlation of different frequency signals in L band that have followed transionospheric paths. In the future, both GPS and the new Galileo satellite system will broadcast three frequencies, enabling more advanced three-frequency correction schemes, so that knowledge of the correlations of different frequency pairs under scintillation conditions is desirable. Even at present, it would be helpful to know how dual-frequency Global Navigation Satellite Systems positioning can be affected by lack of correlation between the L1 and L2 signals. To treat this problem of signal correlation for the case of strong scintillation, a previously constructed simulator program, based on the hybrid method, has been further modified to simulate the fields for both frequencies on the ground, taking account of their cross correlation. Then, the errors in the two-frequency range finding method caused by scintillation have been estimated for particular ionospheric conditions and for a realistic fully three-dimensional model of the ionospheric turbulence. The results, presented for five different frequency pairs (L1/L2, L1/L3, L1/L5, L2/L3, and L2/L5), show the dependence of diffractional errors on the scintillation index S4: the errors diverge further from a linear relationship as scintillation effects strengthen, and may reach ten centimeters or more. The correlation of the phases at spaced frequencies has also been studied; the correlation coefficients for different pairs of frequencies depend on the procedure of phase retrieval and decrease slowly as both the variance of the electron density fluctuations and the number of cycle slips increase.
Security proof of a three-state quantum-key-distribution protocol without rotational symmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fung, C.-H.F.; Lo, H.-K.
2006-10-15
Standard security proofs of quantum-key-distribution (QKD) protocols often rely on symmetry arguments. In this paper, we prove the security of a three-state protocol that does not possess rotational symmetry. The three-state QKD protocol we consider involves three qubit states, where the first two states |0_z> and |1_z> can contribute to key generation, and the third state |+> = (|0_z> + |1_z>)/√2 is for channel estimation. This protocol has been proposed and implemented experimentally in some frequency-based QKD systems where the three states can be prepared easily. Thus, by building on the security of this three-state protocol, we prove that these QKD schemes are, in fact, unconditionally secure against any attacks allowed by quantum mechanics. The main task in our proof is to upper bound the phase error rate of the qubits given the bit error rates observed. Unconditional security can then be proved not only for the ideal case of a single-photon source and perfect detectors, but also for the realistic case of a phase-randomized weak coherent light source and imperfect threshold detectors. Our upper bound on the phase error rate is independent of the loss in the channel. Also, we compare the three-state protocol with the Bennett-Brassard 1984 (BB84) protocol. For the single-photon source case, our result proves that the BB84 protocol strictly tolerates a higher quantum bit error rate than the three-state protocol, while for the coherent-source case, the BB84 protocol achieves a higher key generation rate and secure distance than the three-state protocol when a decoy-state method is used.
An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers.
Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan
2017-11-18
Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation of tracking error will decrease the signal tracking ability of signal tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or by a Kalman filter-based pre-filter. Pre-filters can be divided into two categories: coherent and non-coherent. This paper focuses on performance improvements of the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration-which are the basis of tracking error estimation-are analyzed in detail. After that, the probability distribution of the estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of the observation noise of the non-coherent pre-filter is studied through Monte Carlo simulation to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced to the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of the discriminator, the coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through a carefully designed experiment scenario. The pre-filter outperforms the traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with the coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when the carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when the carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when the carrier-to-noise density ratio lies in (15, 23) dB-Hz ∪ (26, 50) dB-Hz.
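The working-range difference discussed above is rooted in the discriminator: the two-quadrant arctangent is unambiguous only over (-0.25, 0.25) cycle, while the four-quadrant ATAN2 covers (-0.5, 0.5) cycle. The sketch below illustrates that difference on synthetic prompt correlator outputs; the signal amplitude, noise level, and true phase error are invented for demonstration and this is not the authors' pre-filter.

```python
import numpy as np

def atan_discriminator(i_p, q_p):
    """Two-quadrant arctangent: phase error in cycles, unambiguous only over (-0.25, 0.25)."""
    return np.arctan(q_p / i_p) / (2.0 * np.pi)

def atan2_discriminator(i_p, q_p):
    """Four-quadrant arctangent (ATAN2): phase error in cycles over (-0.5, 0.5)."""
    return np.arctan2(q_p, i_p) / (2.0 * np.pi)

# Simulate prompt correlator outputs for a true carrier phase error of 0.35 cycle.
rng = np.random.default_rng(0)
true_err_cycles = 0.35
amp, sigma = 1.0, 0.1                       # assumed signal amplitude and correlator noise
i_p = amp * np.cos(2 * np.pi * true_err_cycles) + sigma * rng.standard_normal(1000)
q_p = amp * np.sin(2 * np.pi * true_err_cycles) + sigma * rng.standard_normal(1000)

print("ATAN  mean estimate (cycles):", np.mean(atan_discriminator(i_p, q_p)).round(3))   # aliased
print("ATAN2 mean estimate (cycles):", np.mean(atan2_discriminator(i_p, q_p)).round(3))  # near 0.35
```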
Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters
Park, Chan Gook
2018-01-01
An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms. PMID:29690539
Quantitative evaluation of statistical errors in small-angle X-ray scattering measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sedlak, Steffen M.; Bruetzel, Linda K.; Lipfert, Jan
A new model is proposed for the measurement errors incurred in typical small-angle X-ray scattering (SAXS) experiments, which takes into account the setup geometry and physics of the measurement process. The model accurately captures the experimentally determined errors from a large range of synchrotron and in-house anode-based measurements. Its most general formulation gives the variance of the buffer-subtracted SAXS intensity as σ²(q) = [I(q) + const.]/(kq), where I(q) is the scattering intensity as a function of the momentum transfer q; k and const. are fitting parameters that are characteristic of the experimental setup. The model gives a concrete procedure for calculating realistic measurement errors for simulated SAXS profiles. In addition, the results provide guidelines for optimizing SAXS measurements, which are in line with established procedures for SAXS experiments, and enable a quantitative evaluation of measurement errors.
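The stated variance model translates directly into code. The sketch below applies σ²(q) = [I(q) + const.]/(kq) to attach realistic errors to a simulated profile; the values of k and const., and the toy I(q), are placeholders rather than fitted values for any particular instrument.

```python
import numpy as np

def saxs_sigma(q, i_q, k=5.0e4, const=1.0):
    """Standard deviation of buffer-subtracted SAXS intensity:
    sigma^2(q) = (I(q) + const.) / (k * q).
    k and const. are setup-specific fitting parameters; placeholder values here."""
    return np.sqrt((i_q + const) / (k * q))

# Example: attach errors to a simulated profile (toy Gaussian-like I(q)).
q = np.linspace(0.01, 0.5, 200)          # momentum transfer
i_q = 100.0 * np.exp(-(q * 25.0) ** 2)   # toy scattering profile
sigma = saxs_sigma(q, i_q)
noisy = i_q + np.random.default_rng(1).normal(0.0, sigma)
```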
Physical Validation of TRMM TMI and PR Monthly Rain Products Over Oklahoma
NASA Technical Reports Server (NTRS)
Fisher, Brad L.
2004-01-01
The Tropical Rainfall Measuring Mission (TRMM) provides monthly rainfall estimates using data collected by the TRMM satellite. These estimates cover a substantial fraction of the earth's surface. The physical validation of TRMM estimates involves corroborating the accuracy of spaceborne estimates of areal rainfall by inferring errors and biases from ground-based rain estimates. The TRMM error budget consists of two major sources of error: retrieval and sampling. Sampling errors are intrinsic to the process of estimating monthly rainfall and occur because the satellite extrapolates monthly rainfall from a small subset of measurements collected only during satellite overpasses. Retrieval errors, on the other hand, are related to the process of collecting measurements while the satellite is overhead. One of the big challenges confronting the TRMM validation effort is how to best estimate these two main components of the TRMM error budget, which are not easily decoupled. This four-year study computed bulk sampling and retrieval errors for the TRMM microwave imager (TMI) and the precipitation radar (PR) by applying a technique that sub-samples gauge data at TRMM overpass times. Gridded monthly rain estimates are then computed from the monthly bulk statistics of the collected samples, providing a sensor-dependent gauge rain estimate that is assumed to include a TRMM-equivalent sampling error. The sub-sampled gauge rain estimates are then used in conjunction with the monthly satellite and gauge (without sub-sampling) estimates to decouple retrieval and sampling errors. The computed mean sampling errors for the TMI and PR were 5.9% and 7.7%, respectively, in good agreement with theoretical predictions. The PR year-to-year retrieval biases exceeded corresponding TMI biases, but it was found that these differences were partially due to negative TMI biases during cold months and positive TMI biases during warm months.
Smooth empirical Bayes estimation of observation error variances in linear systems
NASA Technical Reports Server (NTRS)
Martz, H. F., Jr.; Lian, M. W.
1972-01-01
A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.
Elimination of Emergency Department Medication Errors Due To Estimated Weights.
Greenwalt, Mary; Griffen, David; Wilkerson, Jim
2017-01-01
From 7/2014 through 6/2015, 10 emergency department (ED) medication dosing errors were reported through the electronic incident reporting system of an urban academic medical center. Analysis of these medication errors identified inaccurate estimated weights on patients as the root cause. The goal of this project was to reduce weight-based dosing medication errors due to inaccurate estimated weights on patients presenting to the ED. Chart review revealed that 13.8% of estimated weights documented on admitted ED patients varied by more than 10% from the actual admission weights subsequently recorded. A random sample of 100 charts containing estimated weights revealed 2 previously unreported significant medication dosage errors (a significant error rate of 0.02). Key improvements included removing barriers to weighing ED patients, storytelling to engage staff and change culture, and removal of the estimated weight documentation field from the ED electronic health record (EHR) forms. With these improvements, estimated weights on ED patients, and the resulting medication errors, were eliminated.
An error-based micro-sensor capture system for real-time motion estimation
NASA Astrophysics Data System (ADS)
Yang, Lin; Ye, Shiwei; Wang, Zhibo; Huang, Zhipei; Wu, Jiankang; Kong, Yongmei; Zhang, Li
2017-10-01
A wearable micro-sensor motion capture system with 16 IMUs and an error-compensatory complementary filter algorithm for real-time motion estimation has been developed to acquire accurate 3D orientation and displacement in real-life activities. In the proposed filter algorithm, the gyroscope bias error, orientation error and magnetic disturbance error are estimated and compensated, significantly reducing the orientation estimation error due to sensor noise and drift. Displacement estimation, especially for activities such as jumping, has been a challenge in micro-sensor motion capture. An adaptive gait phase detection algorithm has been developed to accommodate accurate displacement estimation in different types of activities. The performance of this system is benchmarked against the results of the VICON optical capture system. The experimental results demonstrate the effectiveness of the system in daily activity tracking, with estimation errors of 0.16 ± 0.06 m for normal walking and 0.13 ± 0.11 m for jumping motions. Research supported by the National Natural Science Foundation of China (Nos. 61431017, 81272166).
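For readers unfamiliar with complementary filtering, the following sketch shows the basic idea of fusing integrated gyroscope rates (accurate short term, drifting long term) with a noisy but unbiased absolute reference (e.g., accelerometer/magnetometer angle). It is a generic single-axis filter with invented parameters, not the paper's 16-IMU error-compensatory algorithm.

```python
import numpy as np

def complementary_filter(gyro_rate, abs_angle, dt=0.01, alpha=0.98):
    """Generic single-axis complementary filter: high-pass the integrated gyro,
    low-pass the absolute angle reference. Conceptual sketch only."""
    angle = abs_angle[0]
    out = np.empty_like(abs_angle)
    for k in range(len(abs_angle)):
        angle = alpha * (angle + gyro_rate[k] * dt) + (1.0 - alpha) * abs_angle[k]
        out[k] = angle
    return out

# Toy data: constant 10 deg/s rotation, biased gyro, noisy absolute reference.
rng = np.random.default_rng(2)
t = np.arange(0, 10, 0.01)
true_angle = 10.0 * t
gyro = 10.0 + 0.5 + 0.2 * rng.standard_normal(t.size)       # 0.5 deg/s bias
abs_meas = true_angle + 2.0 * rng.standard_normal(t.size)    # noisy but unbiased
est = complementary_filter(gyro, abs_meas)
print("final error (deg):", round(est[-1] - true_angle[-1], 2))
```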
An operational retrieval algorithm for determining aerosol optical properties in the ultraviolet
NASA Astrophysics Data System (ADS)
Taylor, Thomas E.; L'Ecuyer, Tristan S.; Slusser, James R.; Stephens, Graeme L.; Goering, Christian D.
2008-02-01
This paper describes a number of practical considerations concerning the optimization and operational implementation of an algorithm used to characterize the optical properties of aerosols across part of the ultraviolet (UV) spectrum. The algorithm estimates values of aerosol optical depth (AOD) and aerosol single scattering albedo (SSA) at seven wavelengths in the UV, as well as total column ozone (TOC) and wavelength-independent asymmetry factor (g) using direct and diffuse irradiances measured with a UV multifilter rotating shadowband radiometer (UV-MFRSR). A novel method for cloud screening the irradiance data set is introduced, as well as several improvements and optimizations to the retrieval scheme which yield a more realistic physical model for the inversion and increase the efficiency of the algorithm. Introduction of a wavelength-dependent retrieval error budget generated from rigorous forward model analysis as well as broadened covariances on the a priori values of AOD, SSA and g and tightened covariances of TOC allows sufficient retrieval sensitivity and resolution to obtain unique solutions of aerosol optical properties as demonstrated by synthetic retrievals. Analysis of a cloud screened data set (May 2003) from Panther Junction, Texas, demonstrates that the algorithm produces realistic values of the optical properties that compare favorably with pseudo-independent methods for AOD, TOC and calculated Ångstrom exponents. Retrieval errors of all parameters (except TOC) are shown to be negatively correlated to AOD, while the Shannon information content is positively correlated, indicating that retrieval skill improves with increasing atmospheric turbidity. When implemented operationally on more than thirty instruments in the Ultraviolet Monitoring and Research Program's (UVMRP) network, this retrieval algorithm will provide a comprehensive and internally consistent climatology of ground-based aerosol properties in the UV spectral range that can be used for both validation of satellite measurements as well as regional aerosol and ultraviolet transmission studies.
NASA Astrophysics Data System (ADS)
Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.
2017-07-01
In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The validation of the assimilation results is performed according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system according to reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The obtained results show that (1) the IAU 50 scheme has the same performance as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in the relatively stable region, while the IAU 0 scheme outperforms the IAU 50/100 schemes in dynamical variable estimation in the dynamically active region; and (3) when there is a sufficient number of observations and a good error specification, the impact of the IAU schemes is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to the different model integration times and the different instabilities (density inversion, large vertical velocity, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on the one hand allows for better re-establishment of the equilibrium model state, and on the other hand smooths the strong gradients in the dynamically active region.
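The basic mechanics of incremental analysis update can be illustrated with a toy scalar model, as below: the analysis increment is applied as a constant forcing over a window within the assimilation cycle, and shifting the window's position changes the end-of-cycle state. The damped dynamics, window length, and the reading of the IAU 0/50/100 labels as window positions are assumptions made for illustration, not the paper's configuration.

```python
def iau_cycle(x, increment, start, length, total, dt=1.0, damping=0.05):
    """Toy scalar 'model': apply the analysis increment as a constant forcing over a
    window of `length` steps beginning at step `start` of the assimilation cycle."""
    for k in range(total):
        forcing = increment / length if start <= k < start + length else 0.0
        x += dt * (-damping * x + forcing)
    return x

x0, increment, total, length = 1.0, 0.4, 100, 50
for start in (0, 25, 50):   # crude analogues of shifting the update-window position
    print(f"window start = {start:2d} -> state after cycle: "
          f"{iau_cycle(x0, increment, start, length, total):.3f}")
```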
An Empirical State Error Covariance Matrix Orbit Determination Example
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2015-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the problem currently being studied, a truth model is used that includes gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random-walk components. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
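The contrast between the theoretical and an empirical, residual-informed covariance can be sketched on a toy weighted least-squares problem, as follows. Here the "average form of the weighted measurement residual variance performance index" is interpreted as scaling the normal-equation covariance by the average weighted residual; this interpretation, and the problem setup with deliberately understated measurement sigmas, are assumptions of the sketch rather than a transcription of the author's equations.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear estimation: y = H x + noise. The assumed sigma (0.1) understates
# the actual error (0.3), mimicking a poorly modeled system.
n_meas, n_state = 200, 3
H = rng.standard_normal((n_meas, n_state))
x_true = np.array([1.0, -2.0, 0.5])
assumed_sigma, actual_sigma = 0.1, 0.3
y = H @ x_true + actual_sigma * rng.standard_normal(n_meas)

W = np.eye(n_meas) / assumed_sigma**2            # weights from the *assumed* errors
P_theory = np.linalg.inv(H.T @ W @ H)            # traditional state error covariance
x_hat = P_theory @ H.T @ W @ y                   # weighted least-squares estimate

r = y - H @ x_hat                                # measurement residuals
avg_wrv = (r.T @ W @ r) / n_meas                 # average weighted residual variance
P_empirical = avg_wrv * P_theory                 # residual-scaled (empirical) covariance

print("true squared error :", np.sum((x_hat - x_true) ** 2).round(5))
print("trace P (theory)   :", np.trace(P_theory).round(5))
print("trace P (empirical):", np.trace(P_empirical).round(5))
```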
Developing a realistic-prototyping road user cost evaluation tool for FDOT.
DOT National Transportation Integrated Search
2008-12-31
The objective of this project is to develop a realistic-prototyping RUC (Road User Cost) calculation tool that is user-friendly and utilizes a limited number of data inputs that are easy to use. The tool can help engineers to estimate RUC on specif...
Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc
2013-12-01
The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications systems and high speed downlink packet access (UMTS-HSDPA) is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factors used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure by up to 5.7 dB for the considered example. In France, the values are the highest because of the higher population density. The results for the maximal realistic extrapolation factor on weekdays are similar to those on weekend days.
Error analysis of finite element method for Poisson–Nernst–Planck equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yuzhou; Sun, Pengtao; Zheng, Bin
A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H1) and L2(H1) norms and suboptimal error estimates in the L∞(L2) norm with linear elements, and optimal error estimates in the L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.
Improving time-delay cosmography with spatially resolved kinematics
NASA Astrophysics Data System (ADS)
Shajib, Anowar J.; Treu, Tommaso; Agnello, Adriano
2018-01-01
Strongly gravitationally lensed quasars can be used to measure the so-called time-delay distance DΔt, and thus the Hubble constant H0 and other cosmological parameters. Stellar kinematics of the deflector galaxy play an essential role in this measurement by: (i) helping break the mass-sheet degeneracy; (ii) determining in principle the angular diameter distance Dd to the deflector and thus further improving the cosmological constraints. In this paper we simulate observations of lensed quasars with integral field spectrographs and show that spatially resolved kinematics of the deflector enable further progress by helping break the mass-anisotropy degeneracy. Furthermore, we use our simulations to obtain realistic error estimates with current/upcoming instruments like OSIRIS on Keck and NIRSPEC on the James Webb Space Telescope for both distances (typically ∼6 per cent on DΔt and ∼10 per cent on Dd). We use the error estimates to compute cosmological forecasts for the sample of nine lenses that currently have well-measured time delays and deep Hubble Space Telescope images and for a sample of 40 lenses that is projected to be available in a few years through follow-up of candidates found in ongoing wide-field surveys. We find that H0 can be measured with 2 per cent (1 per cent) precision from nine (40) lenses in a flat Λ cold dark matter cosmology. We study several other cosmological models beyond the flat Λ cold dark matter model and find that time-delay lenses with spatially resolved kinematics can greatly improve the precision of the cosmological parameters measured by cosmic microwave background data.
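A back-of-envelope check of the quoted sample forecasts: assuming each lens contributes roughly the ~6 per cent DΔt precision stated above, that the H0 error is dominated by DΔt, and that independent lenses combine as 1/√N (all simplifying assumptions, not the paper's full forecast machinery), the scaling reproduces the 2 and 1 per cent figures.

```python
import numpy as np

def h0_precision(per_lens_ddt_err, n_lenses):
    """Approximate fractional H0 precision from N independent lenses, assuming the
    H0 error is dominated by each lens's time-delay distance error and that
    constraints combine as 1/sqrt(N) (a simplification)."""
    return per_lens_ddt_err / np.sqrt(n_lenses)

for n in (9, 40):
    print(f"{n:2d} lenses at ~6% each -> H0 to ~{100 * h0_precision(0.06, n):.1f}%")
```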
Incorporating measurement error in n = 1 psychological autoregressive modeling.
Schuurman, Noémi K; Houtveen, Jan H; Hamaker, Ellen L
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30-50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters.
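The attenuation bias described above is easy to reproduce in a minimal simulation: data generated from a latent AR(1) plus white measurement noise (the AR+WN process) and then fitted with a plain AR(1) yield an autoregressive estimate pulled toward zero. The parameter values below are illustrative only and are not taken from the empirical application.

```python
import numpy as np

rng = np.random.default_rng(4)
phi, n = 0.6, 200                 # true autoregressive parameter, series length (illustrative)
innov_sd, meas_sd = 1.0, 1.0      # chosen so a large share of variance is measurement error

# Latent AR(1) process plus white measurement noise (AR+WN data-generating process).
latent = np.zeros(n)
for t in range(1, n):
    latent[t] = phi * latent[t - 1] + innov_sd * rng.standard_normal()
observed = latent + meas_sd * rng.standard_normal(n)

def ar1_ols(y):
    """Naive AR(1) fit by least squares, ignoring measurement error."""
    y0, y1 = y[:-1], y[1:]
    return np.sum((y0 - y0.mean()) * (y1 - y1.mean())) / np.sum((y0 - y0.mean()) ** 2)

print("phi fitted to latent series  :", round(ar1_ols(latent), 2))    # close to 0.6
print("phi fitted to observed series:", round(ar1_ols(observed), 2))  # attenuated toward 0
```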
NASA Technical Reports Server (NTRS)
Callantine, Todd J.
2002-01-01
This report describes preliminary research on intelligent agents that make errors. Such agents are crucial to the development of novel agent-based techniques for assessing system safety. The agents extend an agent architecture derived from the Crew Activity Tracking System that has been used as the basis for air traffic controller agents. The report first reviews several error taxonomies. Next, it presents an overview of the air traffic controller agents, then details several mechanisms for causing the agents to err in realistic ways. The report presents a performance assessment of the error-generating agents, and identifies directions for further research. The research was supported by the System-Wide Accident Prevention element of the FAA/NASA Aviation Safety Program.
The influence of orbit selection on the accuracy of the Stanford Relativity gyroscope experiment
NASA Technical Reports Server (NTRS)
Vassar, R.; Everitt, C. W. F.; Vanpatten, R. A.; Breakwell, J. V.
1980-01-01
This paper discusses an error analysis for the Stanford Relativity experiment, designed to measure the precession of a gyroscope's spin-axis predicted by general relativity. Measurements will be made of the spin-axis orientations of 4 superconducting spherical gyroscopes carried by an earth-satellite. Two relativistic precessions are predicted: a 'geodetic' precession associated with the satellite's orbital motion and a 'motional' precession due to the earth's rotation. Using a Kalman filter covariance analysis with a realistic error model we have computed the error in determining the relativistic precession rates. Studies show that a slightly off-polar orbit is better than a polar orbit for determining the 'motional' drift.
Waller, Niels G; Feuerstahler, Leah
2017-01-01
In this study, we explored item and person parameter recovery of the four-parameter model (4PM) in over 24,000 real, realistic, and idealized data sets. In the first analyses, we fit the 4PM and three alternative models to data from three Minnesota Multiphasic Personality Inventory-Adolescent form factor scales using Bayesian modal estimation (BME). Our results indicated that the 4PM fits these scales better than simpler item response theory (IRT) models. Next, using the parameter estimates from these real data analyses, we estimated 4PM item parameters in 6,000 realistic data sets to establish minimum sample size requirements for accurate item and person parameter recovery. Using a factorial design that crossed discrete levels of item parameters, sample size, and test length, we also fit the 4PM to an additional 18,000 idealized data sets to extend our parameter recovery findings. Our combined results demonstrated that 4PM item parameters and parameter functions (e.g., item response functions) can be accurately estimated using BME in moderate to large samples (N ⩾ 5,000) and person parameters can be accurately estimated in smaller samples (N ⩾ 1,000). In the supplemental files, we report annotated R code that shows how to estimate 4PM item and person parameters in the mirt package (Chalmers, 2012).
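For context, the item response function underlying the 4PM is the standard four-parameter logistic curve, sketched below with illustrative parameter values; this is not the authors' estimation code, only the functional form the model fits.

```python
import numpy as np

def irf_4pl(theta, a, b, c, d):
    """Four-parameter logistic item response function:
    P(correct | theta) = c + (d - c) / (1 + exp(-a * (theta - b))).
    a: discrimination, b: difficulty, c: lower asymptote (guessing),
    d: upper asymptote (slipping); d < 1 is what distinguishes the 4PM from the 3PL."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)
print(np.round(irf_4pl(theta, a=1.5, b=0.0, c=0.15, d=0.90), 3))
```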
On the possibility of non-invasive multilayer temperature estimation using soft-computing methods.
Teixeira, C A; Pereira, W C A; Ruano, A E; Ruano, M Graça
2010-01-01
This work reports original results on the possibility of non-invasive temperature estimation (NITE) in a multilayered phantom by applying soft-computing methods. The existence of reliable non-invasive temperature estimator models would improve the security and efficacy of thermal therapies. These points would lead to a broader acceptance of this kind of therapy. Several approaches based on medical imaging technologies have been proposed, magnetic resonance imaging (MRI) being identified as the only one achieving acceptable temperature resolution for hyperthermia purposes. However, MRI intrinsic characteristics (e.g., high instrumentation cost) led us to use backscattered ultrasound (BSU). Among the different BSU features, temporal echo-shifts have received major attention. These shifts are due to changes of speed-of-sound and expansion of the medium. The originality of this work involves two aspects: the estimator model itself is original (based on soft-computing methods) and the application to temperature estimation in a three-layer phantom is also not reported in the literature. In this work a three-layer (non-homogeneous) phantom was developed. The two external layers were composed of (in % of weight): 86.5% degassed water, 11% glycerin and 2.5% agar-agar. The intermediate layer was obtained by adding graphite powder in the amount of 2% of the water weight to the above composition. The phantom was developed to have attenuation and speed-of-sound similar to in vivo muscle, according to the literature. BSU signals were collected and cumulative temporal echo-shifts computed. These shifts and the past temperature values were then considered as possible estimator inputs. A soft-computing methodology was applied to look for appropriate multilayered temperature estimators. The methodology involves radial-basis function neural networks (RBFNN) with structure optimized by the multi-objective genetic algorithm (MOGA). In this work 40 operating conditions were considered, i.e. five 5-mm spaced spatial points and eight therapeutic intensities (I(SATA)): 0.3, 0.5, 0.7, 1.0, 1.3, 1.5, 1.7 and 2.0 W/cm(2). Models were trained and selected to estimate temperature at only four intensities; then, during the validation phase, the best-fitted models were analyzed in data collected at all eight intensities. This procedure leads to a more realistic evaluation of the generalisation level of the best-obtained structures. At the end of the identification phase, 82 (preferable) estimator models were achieved. The majority of them present an average maximum absolute error (MAE) below 0.5 degrees C. The best-fitted estimator presents a MAE of only 0.4 degrees C across all 40 operating conditions. This means that the gold-standard maximum error (0.5 degrees C) specified for hyperthermia was met independently of the intensity and spatial position considered, showing the improved generalisation capacity of the identified estimator models. Like the majority of the preferable estimator models, the best one has 6 inputs and 11 neurons. In addition to the appropriate error performance, the estimator models also have reduced computational complexity and can therefore be applied in real time. A non-invasive temperature estimation model, based on soft-computing techniques, was proposed for a three-layered phantom. The best-achieved estimator models presented an appropriate error performance regardless of the spatial point considered (inside or at the interface of the layers) and of the intensity applied. Other methodologies published so far estimate temperature only in homogeneous media. The main drawback of the proposed methodology is the necessity of a priori knowledge of the temperature behavior. Data used for training and optimisation should be representative, i.e., they should cover all possible physical situations of the estimation environment.
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, allowing us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor increasing with the code distance.
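A minimal Gaussian-process regression sketch in the spirit of this protocol is shown below: noisy per-round error-rate estimates are smoothed and extrapolated a few rounds ahead. The squared-exponential kernel, its hyperparameters, and the synthetic drifting error rate are assumptions for illustration, not the protocol's actual settings.

```python
import numpy as np

def rbf_kernel(t1, t2, length=10.0, amp=0.02):
    """Squared-exponential covariance between rounds t1 and t2 (assumed hyperparameters)."""
    return amp**2 * np.exp(-0.5 * ((t1[:, None] - t2[None, :]) / length) ** 2)

def gp_predict(t_obs, y_obs, t_new, noise_sd=0.01):
    """Standard GP regression conditional mean and standard deviation."""
    K = rbf_kernel(t_obs, t_obs) + noise_sd**2 * np.eye(len(t_obs))
    Ks = rbf_kernel(t_new, t_obs)
    Kss = rbf_kernel(t_new, t_new)
    alpha = np.linalg.solve(K, y_obs - y_obs.mean())
    mean = y_obs.mean() + Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Synthetic drifting error rate observed with shot noise from error-correction data.
rng = np.random.default_rng(5)
t = np.arange(60.0)
true_rate = 0.05 + 0.01 * np.sin(t / 15.0)
observed = true_rate + 0.01 * rng.standard_normal(t.size)
mean, sd = gp_predict(t, observed, np.array([60.0, 61.0, 62.0]))  # predict upcoming rounds
print(np.round(mean, 4), np.round(sd, 4))
```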
Improvement in error propagation in the Shack-Hartmann-type zonal wavefront sensors.
Pathak, Biswajit; Boruah, Bosanta R
2017-12-01
Estimation of the wavefront from measured slope values is an essential step in a Shack-Hartmann-type wavefront sensor. Using an appropriate estimation algorithm, these measured slopes are converted into wavefront phase values. Hence, accuracy in wavefront estimation lies in proper interpretation of these measured slope values using the chosen estimation algorithm. There are two important sources of errors associated with the wavefront estimation process, namely, the slope measurement error and the algorithm discretization error. The former type is due to the noise in the slope measurements or to the detector centroiding error, and the latter is a consequence of solving equations of a basic estimation algorithm adopted onto a discrete geometry. These errors deserve particular attention, because they decide the preference of a specific estimation algorithm for wavefront estimation. In this paper, we investigate these two important sources of errors associated with the wavefront estimation algorithms of Shack-Hartmann-type wavefront sensors. We consider the widely used Southwell algorithm and the recently proposed Pathak-Boruah algorithm [J. Opt. 16, 055403 (2014); doi:10.1088/2040-8978/16/5/055403] and perform a comparative study between the two. We find that the latter algorithm is inherently superior to the Southwell algorithm in terms of the error propagation performance. We also conduct experiments that further establish the correctness of the comparative study between the said two estimation algorithms.
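The sketch below shows how a zonal estimator turns measured x/y slopes into phase values by least squares on a small grid, which is the step through which slope noise propagates into the reconstructed wavefront. It uses a generic finite-difference formulation with invented test data; it is not the exact Southwell or Pathak-Boruah discretization.

```python
import numpy as np

def zonal_reconstruct(sx, sy, h=1.0):
    """Least-squares phase estimate from x/y slopes on an N x N grid. Each equation
    relates the phase difference of neighbouring points to the average of their
    measured slopes (a generic zonal formulation)."""
    n = sx.shape[0]
    idx = np.arange(n * n).reshape(n, n)
    rows, cols, vals, rhs = [], [], [], []
    eq = 0
    for i in range(n):
        for j in range(n - 1):                      # x-direction differences
            rows += [eq, eq]; cols += [idx[i, j + 1], idx[i, j]]; vals += [1.0, -1.0]
            rhs.append(h * 0.5 * (sx[i, j] + sx[i, j + 1])); eq += 1
    for i in range(n - 1):
        for j in range(n):                          # y-direction differences
            rows += [eq, eq]; cols += [idx[i + 1, j], idx[i, j]]; vals += [1.0, -1.0]
            rhs.append(h * 0.5 * (sy[i, j] + sy[i + 1, j])); eq += 1
    A = np.zeros((eq, n * n))
    A[rows, cols] = vals
    phi, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return (phi - phi.mean()).reshape(n, n)         # remove the unobservable piston term

# Toy test: a tilted plane sensed with noisy slope measurements.
n = 8
yy, xx = np.mgrid[0:n, 0:n].astype(float)
true_phi = 0.3 * xx + 0.1 * yy
rng = np.random.default_rng(6)
sx = 0.3 + 0.02 * rng.standard_normal((n, n))
sy = 0.1 + 0.02 * rng.standard_normal((n, n))
rec = zonal_reconstruct(sx, sy)
err = rec - (true_phi - true_phi.mean())
print("rms reconstruction error:", np.sqrt(np.mean(err ** 2)).round(3))
```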
NASA Technical Reports Server (NTRS)
Chatterji, Gano
2011-01-01
Conclusions: The fuel estimation procedure was validated using flight test data. A good fuel model can be created if weight and fuel data are available. An error in the assumed takeoff weight results in a similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.
[Patient safety and errors in medicine: development, prevention and analyses of incidents].
Rall, M; Manser, T; Guggenberger, H; Gaba, D M; Unertl, K
2001-06-01
"Patient safety" and "errors in medicine" are issues gaining more and more prominence in the eyes of the public. According to newer studies, errors in medicine are among the ten major causes of death in association with the whole area of health care. A new era has begun incorporating attention to a "systems" approach to deal with errors and their causes in the health system. In other high-risk domains with a high demand for safety (such as the nuclear power industry and aviation) many strategies to enhance safety have been established. It is time to study these strategies, to adapt them if necessary and apply them to the field of medicine. These strategies include: to teach people how errors evolve in complex working domains and how types of errors are classified; the introduction of critical incident reporting systems that are free of negative consequences for the reporters; the promotion of continuous medical education; and the development of generic problem-solving skills incorporating the extensive use of realistic simulators wherever possible. Interestingly, the field of anesthesiology--within which realistic simulators were developed--is referred to as a model for the new patient safety movement. Despite this proud track record in recent times though, there is still much to be done even in the field of anesthesiology. Overall though, the most important strategy towards a long-term improvement in patient safety will be a change of "culture" throughout the entire health care system. The "culture of blame" focused on individuals should be replaced by a "safety culture", that sees errors and critical incidents as a problem of the whole organization. The acceptance of human fallability and an open-minded non-punitive analysis of errors in the sense of a "preventive and proactive safety culture" should lead to solutions at the systemic level. This change in culture can only be achieved with a strong commitment from the highest levels of an organization. Patient safety must have the highest priority in the goals of the institution: "Primum nihil nocere"--"First, do not harm".
Application and Evaluation of MODIS LAI, FPAR, and Albedo Products in the WRF/CMAQ System
MODIS vegetation and albedo products provide a more realistic representation of surface conditions for input to the WRF/CMAQ modeling system. However, the initial evaluation of ingesting MODIS data into the system showed mixed results, with increased bias and error for 2-m temper...
USDA-ARS?s Scientific Manuscript database
The reliability of common calibration practices for process based water quality models has recently been questioned. A so-called “adequately calibrated model” may contain input errors not readily identifiable by model users, or may not realistically represent intra-watershed responses. These short...
NASA Astrophysics Data System (ADS)
Collins, B. D.; Corbett, S. C.; Fairley, H. C.
2012-04-01
Erosion of archaeological sites within Grand Canyon National Park (GCNP) Arizona, located in the southwestern United States is a subject of continuing interest to land and resource managers. This is partly fueled by an ongoing debate about whether and to what degree controlled releases from Glen Canyon Dam, located immediately upstream of GCNP, are affecting the physical integrity of archaeological sites. Long-term topographic change due to natural sources is typical in the desert southwest region. However, continuing erosion, which may be related in-part to anthropogenic factors, threatens both the preservation of archaeological sites as well as our ability to study evidence of past human habitation in GCNP that dates back at least 8,000 years before present. To quantitatively identify changes to archaeological sites in this region, and with the broader intention of developing numerical models to predict how and under what circumstances dam-controlled flows influence archaeological sites, we undertook a detailed terrestrial-lidar based monitoring program at thirteen sites between 2006 and 2010. Our studies looked specifically at sites located along the Colorado River that are potentially subject to changes related to dam operations. This could occur, for example, by limited sediment supply to sand bars which in turn contribute aeolian sediment to archaeologic sites. Each site was several hundred to several thousand square meters in size and was surveyed multiple times during the 5-year period. Our monitoring program shows how various data registration and georeferencing techniques result in varying degrees of topographic surface model accuracy. For example, surveys performed between 2006 and 2007 used point cloud registration methods and resulted in estimated change detection thresholds of 8 cm between repeat surveys. In 2010, surveys at the same sites used control point registration methods and resulted in estimated change detection thresholds of 3 cm. Error thresholds were determined using two types of change detection error analyses. The first used the absolute errors inherent in each step of the lidar data collection process (i.e., directly combining laser, survey, and registration errors) and provides a conservative estimate of potential errors. The second used an empirical metric based on the closest point-to-point match between known fixed objects (e.g., large boulders) and results in a more realistic error bound. Our data indicate that some sites changed significantly during the monitored time period. These measurements provide much of the essential data required for developing an in-house, physically-based, numerical sediment transport model that can provide estimates on the likelihood for future archaeological site change in GCNP. Thus far, we are finding that the data provided by typical terrestrial lidar surveys is likely overly-dense for numerical model requirements with respect to computational efficiency. Despite this, we also find that high-resolution data is necessary to perform change detection at the accuracy required for model calibration and to document changes before they have progressed beyond the point when site integrity is compromised. The results of the study will provide land and resource managers with the pertinent information needed to oversee these archaeological resources in the best way possible.
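The two change-detection error analyses described above can be contrasted with a short sketch: one threshold built by combining per-survey component errors, and one derived empirically from closest-point offsets on objects known not to have moved. The component magnitudes, the root-sum-square combination (the abstract says only "directly combining", which may instead be a straight sum), and the synthetic fixed-object offsets are all assumptions for illustration.

```python
import numpy as np

def conservative_threshold(laser, survey, registration, quadrature=True):
    """Combine per-survey component errors (m) into a change-detection threshold for the
    difference of two surveys. Component values and the combination rule (root-sum-square
    vs. straight sum) are illustrative assumptions, not the study's numbers."""
    components = np.array([laser, survey, registration])
    per_survey = np.sqrt(np.sum(components**2)) if quadrature else np.sum(components)
    return np.sqrt(2.0) * per_survey            # two surveys contribute to each difference

def empirical_threshold(fixed_obj_pts_a, fixed_obj_pts_b):
    """Empirical threshold: RMS of closest-point offsets measured on objects
    (e.g. large boulders) assumed not to have moved between surveys."""
    d = np.linalg.norm(fixed_obj_pts_a - fixed_obj_pts_b, axis=1)
    return np.sqrt(np.mean(d**2))

print("conservative:", round(conservative_threshold(0.01, 0.03, 0.07), 3), "m")
rng = np.random.default_rng(7)
pts = rng.uniform(0, 5, (500, 3))
shifted = pts + 0.02 * rng.standard_normal((500, 3))
print("empirical   :", round(empirical_threshold(pts, shifted), 3), "m")
```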
Roon, David A.; Waits, L.P.; Kendall, K.C.
2005-01-01
Non-invasive genetic sampling (NGS) is becoming a popular tool for population estimation. However, multiple NGS studies have demonstrated that polymerase chain reaction (PCR) genotyping errors can bias demographic estimates. These errors can be detected by comprehensive data filters such as the multiple-tubes approach, but this approach is expensive and time consuming as it requires three to eight PCR replicates per locus. Thus, researchers have attempted to correct PCR errors in NGS datasets using non-comprehensive error checking methods, but these approaches have not been evaluated for reliability. We simulated NGS studies with and without PCR error and 'filtered' datasets using non-comprehensive approaches derived from published studies and calculated mark-recapture estimates using CAPTURE. In the absence of data filtering, simulated error resulted in serious inflation of CAPTURE estimates; some estimates exceeded N by 200% or more. When data filters were used, CAPTURE estimate reliability varied with the per-locus error rate. At a per-locus error rate of 0.01, CAPTURE estimates from filtered data displayed < 5% deviance from error-free estimates. When the per-locus error rate was 0.05 or 0.09, some CAPTURE estimates from filtered data displayed biases in excess of 10%. Biases were positive at high sampling intensities; negative biases were observed at low sampling intensities. We caution researchers against using non-comprehensive data filters in NGS studies, unless they can achieve baseline per-locus error rates below 0.05 and, ideally, near 0.01. However, we suggest that data filters can be combined with careful technique and thoughtful NGS study design to yield accurate demographic information. © 2005 The Zoological Society of London.
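The inflation mechanism can be illustrated with a toy two-session Lincoln-Petersen simulation in which genotyping errors create "ghost" individuals that are never recaptured. This is a deliberately simplified stand-in (not the CAPTURE models used in the study), and the capture probability and misidentification rates are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(8)

def lincoln_petersen(ids1, ids2):
    """Chapman-corrected Lincoln-Petersen abundance estimate from two capture sessions."""
    m = len(set(ids1) & set(ids2))              # recaptured individuals
    return (len(ids1) + 1) * (len(ids2) + 1) / (m + 1) - 1

def sample(true_ids, p_capture, p_misid):
    """Detect each individual with probability p_capture; with probability p_misid a
    genotyping error turns it into a unique 'ghost' identity (simplified error model)."""
    caught = [i for i in true_ids if rng.random() < p_capture]
    return [f"ghost{rng.integers(10**9)}" if rng.random() < p_misid else i for i in caught]

N = 200
true_ids = list(range(N))
for p_misid in (0.0, 0.05, 0.15):
    est = np.mean([lincoln_petersen(sample(true_ids, 0.4, p_misid),
                                    sample(true_ids, 0.4, p_misid)) for _ in range(300)])
    print(f"per-sample misidentification {p_misid:.2f} -> mean estimate {est:5.0f} (true N = {N})")
```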
Optimized pulses for the control of uncertain qubits
Grace, Matthew D.; Dominy, Jason M.; Witzel, Wayne M.; ...
2012-05-18
The construction of high-fidelity control fields that are robust to control, system, and/or surrounding environment uncertainties is a crucial objective for quantum information processing. Using the two-state Landau-Zener model for illustrative simulations of a controlled qubit, we generate optimal controls for π/2 and π pulses and investigate their inherent robustness to uncertainty in the magnitude of the drift Hamiltonian. Next, we construct a quantum-control protocol to improve system-drift robustness by combining environment-decoupling pulse criteria and optimal control theory for unitary operations. By perturbatively expanding the unitary time-evolution operator for an open quantum system, previous analysis of environment-decoupling control pulses has calculated explicit control-field criteria to suppress environment-induced errors up to (but not including) third order from π/2 and π pulses. We systematically integrate these criteria with optimal control theory, incorporating an estimate of the uncertain parameter to produce improvements in gate fidelity and robustness, demonstrated via a numerical example based on double quantum dot qubits. For the qubit model used in this work, postfacto analysis of the resulting controls suggests that realistic control-field fluctuations and noise may contribute just as significantly to gate errors as system and environment fluctuations.
NASA Astrophysics Data System (ADS)
Kunii, M.; Ito, K.; Wada, A.
2015-12-01
An ensemble Kalman filter (EnKF) using a regional mesoscale atmosphere-ocean coupled model was developed to represent the uncertainties of sea surface temperature (SST) in ensemble data assimilation strategies. The system was evaluated through data assimilation cycle experiments over a one-month period from July to August 2014, during which a tropical cyclone as well as severe rainfall events occurred. The results showed that the data assimilation cycle with the coupled model could reproduce SST distributions realistically even without updating SST and salinity during the data assimilation cycle. Therefore, atmospheric variables and radiation applied as a forcing to ocean models can control oceanic variables to some extent in the current data assimilation configuration. However, investigations of the forecast error covariance estimated in EnKF revealed that the correlation between atmospheric and oceanic variables could possibly lead to less flow-dependent error covariance for atmospheric variables owing to the difference in the time scales between atmospheric and oceanic variables. A verification of the analyses showed positive impacts of applying the ocean model to EnKF on precipitation forecasts. The use of EnKF with the coupled model system captured intensity changes of a tropical cyclone better than it did with an uncoupled atmosphere model, even though the impact on the track forecast was negligibly small.
Yang, Xiaochen; Clements, Logan W; Luo, Ma; Narasimhan, Saramati; Thompson, Reid C; Dawant, Benoit M; Miga, Michael I
2017-07-01
Intraoperative soft tissue deformation, referred to as brain shift, compromises the application of current image-guided surgery navigation systems in neurosurgery. A computational model driven by sparse data has been proposed as a cost-effective method to compensate for cortical surface and volumetric displacements. We present a mock environment developed to acquire stereoimages from a tracked operating microscope and to reconstruct three-dimensional point clouds from these images. A reconstruction error of 1 mm is estimated by using a phantom with a known geometry and independently measured deformation extent. The microscope is tracked via an attached tracking rigid body that facilitates the recording of the position of the microscope via a commercial optical tracking system as it moves during the procedure. Point clouds, reconstructed under different microscope positions, are registered into the same space to compute the feature displacements. Using our mock craniotomy device, realistic cortical deformations are generated. When comparing our tracked microscope stereo-pair measure of mock vessel displacements to that of the measurement determined by the independent optically tracked stylus marking, the displacement error was [Formula: see text] on average. These results demonstrate the practicality of using tracked stereoscopic microscope as an alternative to laser range scanners to collect sufficient intraoperative information for brain shift correction.
NASA Astrophysics Data System (ADS)
Psikuta, Agnes; Mert, Emel; Annaheim, Simon; Rossi, René M.
2018-02-01
To evaluate the quality of new energy-saving and performance-supporting building and urban settings, the thermal sensation and comfort models are often used. The accuracy of these models is related to accurate prediction of the human thermo-physiological response that, in turn, is highly sensitive to the local effect of clothing. This study aimed at the development of an empirical regression model of the air gap thickness and the contact area in clothing to accurately simulate human thermal and perceptual response. The statistical model predicted reliably both parameters for 14 body regions based on the clothing ease allowances. The effect of the standard error in air gap prediction on the thermo-physiological response was lower than the differences between healthy humans. It was demonstrated that currently used assumptions and methods for determination of the air gap thickness can produce a substantial error for all global, mean, and local physiological parameters, and hence, lead to false estimation of the resultant physiological state of the human body, thermal sensation, and comfort. Thus, this model may help researchers to strive for improvement of human thermal comfort, health, productivity, safety, and overall sense of well-being with simultaneous reduction of energy consumption and costs in built environment.
Precise Haptic Device Co-Location for Visuo-Haptic Augmented Reality.
Eck, Ulrich; Pankratz, Frieder; Sandor, Christian; Klinker, Gudrun; Laga, Hamid
2015-12-01
Visuo-haptic augmented reality systems enable users to see and touch digital information that is embedded in the real world. PHANToM haptic devices are often employed to provide haptic feedback. Precise co-location of computer-generated graphics and the haptic stylus is necessary to provide a realistic user experience. Previous work has focused on calibration procedures that compensate the non-linear position error caused by inaccuracies in the joint angle sensors. In this article we present a more complete procedure that additionally compensates for errors in the gimbal sensors and improves position calibration. The proposed procedure further includes software-based temporal alignment of sensor data and a method for the estimation of a reference for position calibration, resulting in increased robustness against haptic device initialization and external tracker noise. We designed our procedure to require minimal user input to maximize usability. We conducted an extensive evaluation with two different PHANToMs, two different optical trackers, and a mechanical tracker. Compared to state-of-the-art calibration procedures, our approach significantly improves the co-location of the haptic stylus. This results in higher fidelity visual and haptic augmentations, which are crucial for fine-motor tasks in areas such as medical training simulators, assembly planning tools, or rapid prototyping applications.
NASA Technical Reports Server (NTRS)
Stone, H. W.; Powell, R. W.
1985-01-01
A six-degree-of-freedom simulation analysis was performed for the space shuttle orbiter during entry from Mach 8 to Mach 1.5 with realistic off-nominal conditions by using the flight control systems defined by the shuttle contractor. The off-nominal conditions included aerodynamic uncertainties in extrapolating from wind-tunnel-derived characteristics to full-scale flight characteristics, uncertainties in the estimates of the reaction control system interaction with the orbiter aerodynamics, an error in deriving the angle of attack from onboard instrumentation, the failure of two of the four reaction control system thrusters on each side, and a lateral center-of-gravity offset coupled with vehicle and flow asymmetries. With combinations of these off-nominal conditions, the flight control system performed satisfactorily. At low hypersonic speeds, a few cases exhibited unacceptable performance when errors in deriving the angle of attack from the onboard instrumentation were modeled. The orbiter was unable to maintain lateral trim for some cases between Mach 5 and Mach 2 and exhibited limit-cycle tendencies or residual roll oscillations between Mach 3 and Mach 1. Piloting techniques and changes in some gains and switching times in the flight control system are suggested to help alleviate these problems.
Coggins, Lewis G.; Walters, Carl J.
2009-01-01
Mark-recapture methods have been used for the past two decades to assess trends in adult abundance and recruitment of the Little Colorado River (LCR) population of humpback chub. These methods indicate that the adult population declined through the 1980s and early 1990s but has been increasing for the past decade. Recruitment also appears to have increased, particularly in the 2003-04 period. Considering a range of assumed natural mortality rates and magnitudes of ageing error, it is unlikely that there are currently fewer than 6,000 adults or more than 10,000 adults. Our best estimate of the current adult (age 4 years or more) population is approximately 7,650 fish. Recent humpback chub assessments using the Age-Structured Mark-Recapture model (ASMR) and reported in 2006 (Melis and others, 2006) and 2008 (Coggins, 2008a,b) have provided abundance and recruitment trend estimates that have changed progressively over time as more data are considered by the model. The general pattern of change implies a less severe decline in adult abundance during the late 1980s through early 1990s, with attendant changes in recruitment supporting this demographic pattern. We have been concerned that these changes are not indicative of the true population and may be associated with a 'retrospective' bias as additional data are included in the ASMR model. To investigate this possibility, we developed a realistic individual-based simulation model (IBM) to generate replicate artificial data sets with characteristics similar to the true humpback chub data. The artificial data have known abundance trends, and we analyzed these data with ASMR. On the basis of these simulations, we believe that errors in assigning age (and therefore brood year) to fish based on their length are likely to have caused the retrospective bias pattern seen in the assessments, producing both less severe trends in the adult abundance estimates and a progressively more severe downward bias in estimates of adult mortality rates. This 'smearing', or assignment of fish from a single brood year into multiple incorrect brood years, is a result of variation in growth rates. The IBM simulations indicate that, as a result of this error source, the best estimates of abundance and recruitment for any calendar year are those obtained from data collected prior to and within a year or two after that calendar year.
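As an illustration of the 'smearing' mechanism described above (not the ASMR or IBM code), the short simulation below assigns ages from lengths using a mean growth curve while individual growth rates vary; a single true brood year spreads across several assigned brood years. All growth parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

true_brood_year = 2000
age_at_capture = 4                 # years since the brood year
mean_growth = 60.0                 # hypothetical mm of growth per year
growth_sd = 8.0                    # individual variation in growth rate

# Lengths at capture for one cohort, reflecting individual growth variation.
lengths = age_at_capture * rng.normal(mean_growth, growth_sd, size=1000)

# Ages (and hence brood years) assigned from length using the mean growth curve.
assigned_age = np.rint(lengths / mean_growth).astype(int)
assigned_brood_year = true_brood_year + age_at_capture - assigned_age

years, counts = np.unique(assigned_brood_year, return_counts=True)
for y, c in zip(years, counts):
    print(y, c)   # one true brood year smears into several assigned brood years
```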
Distilling entanglement with noisy operations
NASA Astrophysics Data System (ADS)
Chang, Jinho; Bae, Joonwoo; Kwon, Younghun
Entanglement distillation is a fundamental task in quantum information processing. It not only extracts entanglement from corrupted systems but also helps protect systems of interest against interaction with the environment. In this work, we consider a realistic scenario of entanglement distillation in which noisy quantum operations are applied. In particular, the two-way distillation protocol that tolerates the highest error rate is considered. We show that among all types of noise there are only four equivalence classes according to the distillability condition. Since the four classes are connected by local unitary transformations, our results can be used to improve entanglement distillability in practice when entanglement distillation is performed in a realistic setting.
The OSSE Framework at the NASA Global Modeling and Assimilation Office (GMAO)
NASA Astrophysics Data System (ADS)
Moradi, I.; Prive, N.; McCarty, W.; Errico, R. M.; Gelaro, R.
2017-12-01
This abstract summarizes the OSSE framework developed at the Global Modeling and Assimilation Office at the National Aeronautics and Space Administration (NASA/GMAO). Some of the OSSE techniques developed at GMAO, including the simulation of realistic observations (e.g., adding errors to simulated observations), are now widely used by the community to evaluate the impact of new observations on weather forecasts. This talk presents some of the recent progress and challenges in simulating realistic observations, radiative transfer modeling support for the GMAO OSSE activities, assimilation of OSSE observations into data assimilation systems, and evaluation of the impact of simulated observations on forecast skill.
The OSSE Framework at the NASA Global Modeling and Assimilation Office (GMAO)
NASA Technical Reports Server (NTRS)
Moradi, Isaac; Prive, Nikki; McCarty, Will; Errico, Ronald M.; Gelaro, Ron
2017-01-01
This abstract summarizes the OSSE framework developed at the Global Modeling and Assimilation Office at the National Aeronautics and Space Administration (NASA/GMAO). Some of the OSSE techniques developed at GMAO, including the simulation of realistic observations (e.g., adding errors to simulated observations), are now widely used by the community to evaluate the impact of new observations on weather forecasts. This talk presents some of the recent progress and challenges in simulating realistic observations, radiative transfer modeling support for the GMAO OSSE activities, assimilation of OSSE observations into data assimilation systems, and evaluation of the impact of simulated observations on forecast skill.
Toward Millimagnitude Photometric Calibration (Abstract)
NASA Astrophysics Data System (ADS)
Dose, E.
2014-12-01
(Abstract only) Asteroid rotation, exoplanet transits, and similar measurements will increasingly call for photometric precisions better than about 10 millimagnitudes, often between nights and ideally between distant observers. The present work applies detailed spectral simulations to test popular photometric calibration practices and to test new extensions of these practices. Using 107 synthetic spectra of stars of diverse colors, detailed atmospheric transmission spectra computed by solar-energy software, realistic spectra of popular astronomy gear, and the option of three sources of noise added at realistic millimagnitude levels, we find that certain adjustments to current calibration practices can help remove small systematic errors, especially for imperfect filters, high airmasses, and possibly passing thin cirrus clouds.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O’Shea, T; Bamber, J; Harris, E
Purpose: For ultrasound speckle tracking there is some evidence that the envelope-detected signal (the main step in B-mode image formation) may be more accurate than raw ultrasound data for tracking larger inter-frame tissue motion. This study investigates the accuracy of raw radio-frequency (RF) versus non-logarithmic compressed envelope-detected (B-mode) data for ultrasound speckle tracking in the context of image-guided radiation therapy. Methods: Transperineal ultrasound RF data was acquired (with a 7.5 MHz linear transducer operating at a 12 Hz frame rate) from a speckle phantom moving with realistic intra-fraction prostate motion derived from a commercial tracking system. A normalised cross-correlation template matching algorithm was used to track speckle motion at the focus using (i) the RF signal and (ii) the B-mode signal. A range of imaging rates (0.5 to 12 Hz) was simulated by decimating the imaging sequences, thereby simulating larger to smaller inter-frame displacements. Motion estimation accuracy was quantified by comparison with the known phantom motion. Results: The differences between RF and B-mode motion estimation accuracy (2D mean and 95% errors relative to ground truth displacements) were less than 0.01 mm for stable and persistent motion types and 0.2 mm for transient motion for imaging rates of 0.5 to 12 Hz. The mean correlation for all motion types and imaging rates was 0.851 and 0.845 for RF and B-mode data, respectively. Data type is expected to have most impact on axial (superior-inferior) motion estimation. Axial differences were <0.004 mm for stable and persistent motion and <0.3 mm for transient motion (axial mean errors were lowest for B-mode in all cases). Conclusions: Using the RF or B-mode signal for speckle motion estimation is comparable for translational prostate motion. B-mode image formation may involve other signal-processing steps which also influence motion estimation accuracy. A similar study for respiratory-induced motion would also be prudent. This work is supported by Cancer Research UK Programme Grant C33589/A19727.
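A minimal sketch of normalised cross-correlation template matching for displacement estimation, using scikit-image's match_template on synthetic speckle-like frames with a known shift; it illustrates only the matching step, not the study's RF/B-mode processing chain.

```python
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(2)

# Synthetic speckle-like frames: frame2 is frame1 shifted by a known displacement.
frame1 = rng.normal(size=(200, 200))
true_shift = (3, 5)                                  # rows, cols
frame2 = np.roll(frame1, true_shift, axis=(0, 1))

# Template taken around the "focus" in frame1, then located in frame2 by
# normalised cross-correlation.
r0, c0, half = 100, 100, 16
template = frame1[r0 - half:r0 + half, c0 - half:c0 + half]
ncc = match_template(frame2, template)
peak = np.unravel_index(np.argmax(ncc), ncc.shape)

estimated_shift = (peak[0] - (r0 - half), peak[1] - (c0 - half))
print("estimated shift:", estimated_shift, "true shift:", true_shift)
```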
3-D transient hydraulic tomography in unconfined aquifers with fast drainage response
NASA Astrophysics Data System (ADS)
Cardiff, M.; Barrash, W.
2011-12-01
We investigate, through numerical experiments, the viability of three-dimensional transient hydraulic tomography (3DTHT) for identifying the spatial distribution of groundwater flow parameters (primarily, hydraulic conductivity K) in permeable, unconfined aquifers. To invert the large amount of transient data collected from 3DTHT surveys, we utilize an iterative geostatistical inversion strategy in which outer iterations progressively increase the number of data points fitted and inner iterations solve the quasi-linear geostatistical formulas of Kitanidis. In order to base our numerical experiments around realistic scenarios, we utilize pumping rates, geometries, and test lengths similar to those attainable during 3DTHT field campaigns performed at the Boise Hydrogeophysical Research Site (BHRS). We also utilize hydrologic parameters that are similar to those observed at the BHRS and in other unconsolidated, unconfined fluvial aquifers. In addition to estimating K, we test the ability of 3DTHT to estimate both average storage values (specific storage Ss and specific yield Sy) as well as spatial variability in storage coefficients. The effects of model conceptualization errors during unconfined 3DTHT are investigated including: (1) assuming constant storage coefficients during inversion and (2) assuming stationary geostatistical parameter variability. Overall, our findings indicate that estimation of K is slightly degraded if storage parameters must be jointly estimated, but that this effect is quite small compared with the degradation of estimates due to violation of "structural" geostatistical assumptions. Practically, we find for our scenarios that assuming constant storage values during inversion does not appear to have a significant effect on K estimates or uncertainty bounds.
Infrasound Waveform Inversion and Mass Flux Validation from Sakurajima Volcano, Japan
NASA Astrophysics Data System (ADS)
Fee, D.; Kim, K.; Yokoo, A.; Izbekov, P. E.; Lopez, T. M.; Prata, F.; Ahonen, P.; Kazahaya, R.; Nakamichi, H.; Iguchi, M.
2015-12-01
Recent advances in numerical wave propagation modeling and station coverage have permitted robust inversion of infrasound data from volcanic explosions. Complex topography and crater morphology have been shown to substantially affect the infrasound waveform, suggesting that homogeneous acoustic propagation assumptions are invalid. Infrasound waveform inversion provides an exciting tool to accurately characterize emission volume and mass flux from both volcanic and non-volcanic explosions. Mass flux, arguably the most sought-after parameter from a volcanic eruption, can be determined from the volume flux using infrasound waveform inversion if the volcanic flow is well-characterized. Thus far, infrasound-based volume and mass flux estimates have yet to be validated. In February 2015 we deployed six infrasound stations around the explosive Sakurajima Volcano, Japan for 8 days. Here we present our full waveform inversion method and volume and mass flux estimates of numerous high amplitude explosions using a high resolution DEM and 3-D Finite Difference Time Domain modeling. Application of this technique to volcanic eruptions may produce realistic estimates of mass flux and plume height necessary for volcanic hazard mitigation. Several ground-based instruments and methods are used to independently determine the volume, composition, and mass flux of individual volcanic explosions. Specifically, we use ground-based ash sampling, multispectral infrared imagery, UV spectrometry, and multigas data to estimate the plume composition and flux. Unique tiltmeter data from underground tunnels at Sakurajima also provides a way to estimate the volume and mass of each explosion. In this presentation we compare the volume and mass flux estimates derived from the different methods and discuss sources of error and future improvements.
Adjusting for radiotelemetry error to improve estimates of habitat use.
Scott L. Findholt; Bruce K. Johnson; Lyman L. McDonald; John W. Kern; Alan Ager; Rosemary J. Stussy; Larry D. Bryant
2002-01-01
Animal locations estimated from radiotelemetry have traditionally been treated as error-free when analyzed in relation to habitat variables. Location error lowers the power of statistical tests of habitat selection. We describe a method that incorporates the error surrounding point estimates into measures of environmental variables determined from a geographic...
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.
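One plausible reading of the idea, sketched for a plain weighted least squares fit: compute the usual theoretical covariance and, alongside it, a covariance built from the post-fit residuals so that unmodeled error sources are reflected. The construction and names below are illustrative, not the author's exact formulation.

```python
import numpy as np

def wls_with_empirical_covariance(H, y, W):
    """Weighted least squares estimate plus two covariances: the theoretical
    (H^T W H)^-1 and an empirical one built from the post-fit residuals."""
    HtW = H.T @ W
    P_theory = np.linalg.inv(HtW @ H)
    x_hat = P_theory @ HtW @ y

    r = y - H @ x_hat                       # post-fit residuals
    # Map residual scatter back into state space (illustrative construction).
    A = P_theory @ HtW                      # estimator gain: x_hat = A @ y
    P_empirical = A @ np.diag(r**2) @ A.T
    return x_hat, P_theory, P_empirical

# Two observers measuring a scalar state, with the data noisier than assumed.
H = np.array([[1.0], [1.0]])
W = np.diag([1.0, 1.0])                     # assumed weights
y = np.array([1.0, 1.6])
print(wls_with_empirical_covariance(H, y, W))
```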
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.
Madsen, Thomas; Braun, Danielle; Peng, Gang; Parmigiani, Giovanni; Trippa, Lorenzo
2018-06-25
The Elston-Stewart peeling algorithm enables estimation of an individual's probability of harboring germline risk alleles based on pedigree data, and serves as the computational backbone of important genetic counseling tools. However, it remains limited to the analysis of risk alleles at a small number of genetic loci because its computing time grows exponentially with the number of loci considered. We propose a novel, approximate version of this algorithm, dubbed the peeling and paring algorithm, which scales polynomially in the number of loci. This allows peeling-based models to be extended to include many genetic loci. The algorithm creates a trade-off between accuracy and speed, and allows the user to control this trade-off. We provide exact bounds on the approximation error and evaluate it in realistic simulations. Results show that the loss of accuracy due to the approximation is negligible in important applications. This algorithm will improve genetic counseling tools by increasing the number of pathogenic risk alleles that can be addressed. To illustrate, we create an extended five-gene version of BRCAPRO, a widely used model for estimating the carrier probabilities of BRCA1 and BRCA2 risk alleles, and assess its computational properties. © 2018 WILEY PERIODICALS, INC.
A novel highly parallel algorithm for linearly unmixing hyperspectral images
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto
2014-10-01
Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and its abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations implemented. The algorithm estimates the endmembers as virtual pixels. In particular, the proposed algorithm uses the gradient descent method to iteratively refine the endmembers and the abundances, reducing the mean squared error according to the linear unmixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution; given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
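A minimal sketch of the kind of update described above: alternating gradient steps on endmembers E and abundances A to reduce the mean-squared reconstruction error under the linear mixing model, with simple projections for non-negativity and sum-to-one constraints. This is an illustration of the general technique, not the authors' parallel algorithm; sizes and the learning rate are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
bands, pixels, p = 50, 400, 3                # spectral bands, pixels, endmembers

# Synthetic image Y = E_true @ A_true with non-negative, sum-to-one abundances.
E_true = rng.random((bands, p))
A_true = rng.dirichlet(np.ones(p), size=pixels).T
Y = E_true @ A_true

E = rng.random((bands, p))                   # endmembers initialised as "virtual pixels"
A = np.full((p, pixels), 1.0 / p)
lr = 1e-3

for _ in range(2000):
    R = E @ A - Y                            # reconstruction error
    E -= lr * R @ A.T                        # gradient step on endmembers
    A -= lr * E.T @ R                        # gradient step on abundances
    E = np.clip(E, 0.0, None)                # non-negativity
    A = np.clip(A, 1e-9, None)
    A /= A.sum(axis=0, keepdims=True)        # sum-to-one projection

print("RMSE:", np.sqrt(np.mean((E @ A - Y) ** 2)))
```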
Kimura, Akatsuki; Celani, Antonio; Nagao, Hiromichi; Stasevich, Timothy; Nakamura, Kazuyuki
2015-01-01
Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize likelihood. A (local) maximum of likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
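As a concrete illustration of the SSE-minimisation step (not tied to any of the four biological examples above), here is a short least-squares fit of a two-parameter model with scipy; the model and data are made up.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)

# Hypothetical quantified data: exponential decay with noise.
t = np.linspace(0.0, 10.0, 50)
true_amp, true_rate = 2.0, 0.4
y = true_amp * np.exp(-true_rate * t) + rng.normal(scale=0.05, size=t.size)

def residuals(params):
    amp, rate = params
    return amp * np.exp(-rate * t) - y       # residuals whose squared sum is the SSE

fit = least_squares(residuals, x0=[1.0, 1.0])    # gradient-based local minimiser
print("estimated parameters:", fit.x)            # close to (2.0, 0.4)
```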
Using Reconstructed POD Modes as Turbulent Inflow for LES Wind Turbine Simulations
NASA Astrophysics Data System (ADS)
Nielson, Jordan; Bhaganagar, Kiran; Juttijudata, Vejapong; Sirisup, Sirod
2016-11-01
Currently, in order to capture realistic atmospheric turbulence effects, wind turbine LES simulations require computationally expensive precursor simulations. At times, the precursor simulation is more computationally expensive than the wind turbine simulation itself. The precursor simulations are important because they capture turbulence in the atmosphere, and this turbulence impacts the power production estimate. On the other hand, POD analysis has been shown to be capable of capturing turbulent structures. The current study was performed to determine the plausibility of using lower-dimension models from POD analysis of LES simulations as turbulent inflow to wind turbine LES simulations. The study will aid the wind energy community by lowering the computational cost of full-scale wind turbine LES simulations, while maintaining a high level of turbulent information and being able to quickly apply the turbulent inflow to multi-turbine wind farms. This is done by comparing a pure LES precursor wind turbine simulation with simulations that use reduced POD mode inflow conditions. The study shows the feasibility of using lower-dimension models as turbulent inflow for LES wind turbine simulations. Overall, the power production estimate and the velocity field of the wind turbine wake are well captured with small errors.
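A minimal sketch of extracting POD modes from a snapshot matrix via the SVD and reconstructing a reduced-order field, the basic operation behind the reduced inflow described above; the synthetic snapshots and mode count are placeholders, not the authors' LES data.

```python
import numpy as np

rng = np.random.default_rng(5)
n_points, n_snapshots = 500, 200

# Synthetic snapshot matrix: columns are inflow-plane velocity snapshots.
snapshots = rng.normal(size=(n_points, n_snapshots))
mean_flow = snapshots.mean(axis=1, keepdims=True)
fluctuations = snapshots - mean_flow

# POD modes from the thin SVD of the fluctuation matrix.
modes, sing_vals, time_coeffs = np.linalg.svd(fluctuations, full_matrices=False)

# Reduced-order reconstruction with the r leading modes, usable as inflow data.
r = 20
reduced = mean_flow + modes[:, :r] @ np.diag(sing_vals[:r]) @ time_coeffs[:r, :]

energy_captured = np.sum(sing_vals[:r] ** 2) / np.sum(sing_vals ** 2)
print(f"{r} modes capture {energy_captured:.1%} of the fluctuation energy")
```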
Attack Detection in Sensor Network Target Localization Systems With Quantized Data
NASA Astrophysics Data System (ADS)
Zhang, Jiangfan; Wang, Xiaodong; Blum, Rick S.; Kaplan, Lance M.
2018-04-01
We consider a sensor network focused on target localization, where sensors measure the signal strength emitted from the target. Each measurement is quantized to one bit and sent to the fusion center. A general attack is considered at some sensors that attempts to cause the fusion center to produce an inaccurate estimation of the target location with a large mean-square-error. The attack is a combination of man-in-the-middle, hacking, and spoofing attacks that can effectively change both signals going into and coming out of the sensor nodes in a realistic manner. We show that the essential effect of attacks is to alter the estimated distance between the target and each attacked sensor to a different extent, giving rise to a geometric inconsistency among the attacked and unattacked sensors. Hence, with the help of two secure sensors, a class of detectors are proposed to detect the attacked sensors by scrutinizing the existence of the geometric inconsistency. We show that the false alarm and miss probabilities of the proposed detectors decrease exponentially as the number of measurement samples increases, which implies that for sufficiently large number of samples, the proposed detectors can identify the attacked and unattacked sensors with any required accuracy.
Open quantum systems and error correction
NASA Astrophysics Data System (ADS)
Shabani Barzegar, Alireza
Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interactions with their surrounding environment and from inaccuracies in control forces. Engineering methods to combat errors in quantum devices is therefore in high demand. In this thesis, I focus on realistic formulations of quantum error correction methods. A realistic formulation is one that incorporates experimental challenges. This thesis is presented in two parts: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory. It is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The quantum error correction part is presented in chapters 4 through 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC) that applies to any linear map, in particular maps that are not completely positive (CP). This is complementary to the second chapter and is published in [Shabani and Lidar, 2007]. In chapter 7, the last chapter before the conclusion, a formulation for evaluating the performance of quantum error correcting codes for a general error model is presented, also published in [Shabani, 2005]. In this formulation, the correlation between errors is quantified by a Hamiltonian description of the noise process. In particular, we consider Calderbank-Shor-Steane codes and observe a better performance in the presence of correlated errors depending on the timing of the error recovery.
Software for Quantifying and Simulating Microsatellite Genotyping Error
Johnson, Paul C.D.; Haydon, Daniel T.
2007-01-01
Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
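As a small, self-contained illustration of the adjoint idea on a linear algebraic surrogate (not the paper's Euler/CFD implementation): for a linear problem A u = f and output functional J(u) = g·u, the error in the functional computed from an approximate solution u_h equals the adjoint-weighted residual ψ·(f - A u_h), where Aᵀψ = g. All matrices below are made up.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 20

# Linear surrogate for the discretised flow problem: A u = f, functional J(u) = g.u
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))
f = rng.normal(size=n)
g = rng.normal(size=n)

u_exact = np.linalg.solve(A, f)
u_h = u_exact + 1e-3 * rng.normal(size=n)     # stand-in for an inexact (coarse) solution

# Adjoint problem for the functional of interest.
psi = np.linalg.solve(A.T, g)

# Dual-weighted residual estimate of the functional error (exact in the linear case).
residual = f - A @ u_h
estimate = psi @ residual
true_error = g @ (u_exact - u_h)
print(f"estimated error {estimate:.3e} vs true error {true_error:.3e}")
```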
An hp-adaptivity and error estimation for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1995-01-01
This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
NASA Astrophysics Data System (ADS)
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of the numerical control (NC) machine tool are selected to avoid installation problems. A mathematical model for cutting error estimation is proposed that computes the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on gear cutting theory. In order to verify the effectiveness of the proposed model, it was tested in simulations and experiments on the gear generating grinding process. The cutting error of the gear was estimated and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the workpiece during the machining process.
Simulating realistic predator signatures in quantitative fatty acid signature analysis
Bromaghin, Jeffrey F.
2015-01-01
Diet estimation is an important field within quantitative ecology, providing critical insights into many aspects of ecology and community dynamics. Quantitative fatty acid signature analysis (QFASA) is a prominent method of diet estimation, particularly for marine mammal and bird species. Investigators using QFASA commonly use computer simulation to evaluate statistical characteristics of diet estimators for the populations they study. Similar computer simulations have been used to explore and compare the performance of different variations of the original QFASA diet estimator. In both cases, computer simulations involve bootstrap sampling prey signature data to construct pseudo-predator signatures with known properties. However, bootstrap sample sizes have been selected arbitrarily and pseudo-predator signatures therefore may not have realistic properties. I develop an algorithm to objectively establish bootstrap sample sizes that generates pseudo-predator signatures with realistic properties, thereby enhancing the utility of computer simulation for assessing QFASA estimator performance. The algorithm also appears to be computationally efficient, resulting in bootstrap sample sizes that are smaller than those commonly used. I illustrate the algorithm with an example using data from Chukchi Sea polar bears (Ursus maritimus) and their marine mammal prey. The concepts underlying the approach may have value in other areas of quantitative ecology in which bootstrap samples are post-processed prior to their use.
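To make the bootstrap construction concrete, the sketch below builds one pseudo-predator signature by bootstrap-sampling a (synthetic) prey signature library and mixing the resampled means according to a known diet. The prey types, sample sizes, and proportions are arbitrary placeholders; the paper's contribution, an objective rule for choosing the bootstrap sample sizes, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
n_fatty_acids = 30

def simulate_prey(n_samples):
    """Synthetic prey signature library: rows are individuals, columns fatty acids."""
    sig = rng.random((n_samples, n_fatty_acids))
    return sig / sig.sum(axis=1, keepdims=True)

prey = {"seal": simulate_prey(80), "walrus": simulate_prey(60), "whale": simulate_prey(40)}
diet = {"seal": 0.6, "walrus": 0.3, "whale": 0.1}      # known diet of the pseudo-predator
bootstrap_n = {"seal": 30, "walrus": 30, "whale": 30}  # bootstrap sample sizes (arbitrary)

# Pseudo-predator signature: bootstrap each prey type, average, and mix by diet.
pseudo = np.zeros(n_fatty_acids)
for species, proportion in diet.items():
    idx = rng.integers(0, prey[species].shape[0], size=bootstrap_n[species])
    pseudo += proportion * prey[species][idx].mean(axis=0)

pseudo /= pseudo.sum()                                 # renormalise to a signature
print(pseudo[:5])
```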
NASA Astrophysics Data System (ADS)
Pan, X. G.; Wang, J. Q.; Zhou, H. Y.
2013-05-01
A variance component estimation (VCE) method based on a semi-parametric estimator with a data-depth-weighted matrix is proposed, because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of combined space- and ground-based TT&C (Telemetry, Tracking and Command) technology. The uncertain model error is estimated with the semi-parametric estimator, and outliers are suppressed with the data-depth-weighted matrix. With the model error and outliers constrained in this way, the improved VCE can be used to estimate the weight matrix for observation data containing uncertain model errors or outliers. Simulation experiments were carried out under combined space- and ground-based TT&C conditions. The results show that the new VCE, based on model error compensation, can determine rational weights for the multi-source heterogeneous data and suppress outlier data.
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
NASA Technical Reports Server (NTRS)
Fisher, Brad; Wolff, David B.
2010-01-01
Passive and active microwave rain sensors onboard earth-orbiting satellites estimate monthly rainfall from the instantaneous rain statistics collected during satellite overpasses. It is well known that climate-scale rain estimates from meteorological satellites incur sampling errors resulting from the process of discrete temporal sampling and statistical averaging. Sampling and retrieval errors ultimately become entangled in the estimation of the mean monthly rain rate. The sampling component of the error budget effectively introduces statistical noise into climate-scale rain estimates that obscures the error component associated with the instantaneous rain retrieval. Estimating the accuracy of the retrievals on monthly scales therefore necessitates a decomposition of the total error budget into sampling and retrieval error quantities. This paper presents results from a statistical evaluation of the sampling and retrieval errors for five different space-borne rain sensors on board nine orbiting satellites. Using an error decomposition methodology developed by one of the authors, sampling and retrieval errors were estimated at 0.25° resolution within 150 km of ground-based weather radars located at Kwajalein, Marshall Islands and Melbourne, Florida. Error and bias statistics were calculated according to the land, ocean and coast classifications of the surface terrain mask developed for the Goddard Profiling (GPROF) rain algorithm. Variations in the comparative error statistics are attributed to various factors related to differences in the swath geometry of each rain sensor, the orbital and instrument characteristics of the satellite, and the regional climatology. The most significant result from this study is that each of the satellites incurred negative long-term oceanic retrieval biases of 10 to 30%.
Calibration of remotely sensed proportion or area estimates for misclassification error
Raymond L. Czaplewski; Glenn P. Catts
1992-01-01
Classifications of remotely sensed data contain misclassification errors that bias areal estimates. Monte Carlo techniques were used to compare two statistical methods that correct or calibrate remotely sensed areal estimates for misclassification bias using reference data from an error matrix. The inverse calibration estimator was consistently superior to the...
Passarge, Michelle; Fix, Michael K; Manser, Peter; Stampanoni, Marco F M; Siebers, Jeffrey V
2017-04-01
To develop a robust and efficient process that detects relevant dose errors (dose errors of ≥5%) in external beam radiation therapy and directly indicates the origin of the error. The process is illustrated in the context of electronic portal imaging device (EPID)-based angle-resolved volumetric-modulated arc therapy (VMAT) quality assurance (QA), particularly as would be implemented in a real-time monitoring program. A Swiss cheese error detection (SCED) method was created as a paradigm for a cine EPID-based during-treatment QA. For VMAT, the method compares a treatment plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The process utilizes a sequence of independent consecutively executed error detection tests: an aperture check that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment check to examine if rotation, scaling, and translation are within tolerances; pixel intensity check containing the standard gamma evaluation (3%, 3 mm) and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each check were determined. To test the SCED method, 12 different types of errors were selected to modify the original plan. A series of angle-resolved predicted EPID images were artificially generated for each test case, resulting in a sequence of precalculated frames for each modified treatment plan. The SCED method was applied multiple times for each test case to assess the ability to detect introduced plan variations. To compare the performance of the SCED process with that of a standard gamma analysis, both error detection methods were applied to the generated test cases with realistic noise variations. Averaged over ten test runs, 95.1% of all plan variations that resulted in relevant patient dose errors were detected within 2° and 100% within 14° (<4% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 89.1% were detected by the SCED method within 2°. Based on the type of check that detected the error, determination of error sources was achieved. With noise ranging from no random noise to four times the established noise value, the averaged relevant dose error detection rate of the SCED method was between 94.0% and 95.8% and that of gamma between 82.8% and 89.8%. An EPID-frame-based error detection process for VMAT deliveries was successfully designed and tested via simulations. The SCED method was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of relevant dose errors. Compared to a typical (3%, 3 mm) gamma analysis, the SCED method produced a higher detection rate for all introduced dose errors, identified errors in an earlier stage, displayed a higher robustness to noise variations, and indicated the error source. © 2017 American Association of Physicists in Medicine.
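A schematic sketch of the "Swiss cheese" idea: a sequence of independent checks is applied to each angle-resolved frame, and the first failing check names the error origin. The check functions, frame fields, and tolerances below are placeholders, not the published tolerances or EPID processing.

```python
from typing import Callable, List, Tuple

# Each check returns (passed, origin). The ordering mirrors the kind of
# pipeline described above; the functions and tolerances are illustrative.
def aperture_check(frame) -> Tuple[bool, str]:
    return frame.get("out_of_field_signal", 0.0) < 0.01, "out-of-field radiation"

def output_check(frame) -> Tuple[bool, str]:
    return abs(frame.get("output_ratio", 1.0) - 1.0) < 0.03, "output deviation"

def alignment_check(frame) -> Tuple[bool, str]:
    return frame.get("shift_mm", 0.0) < 2.0, "image misalignment"

def gamma_check(frame) -> Tuple[bool, str]:
    return frame.get("gamma_pass_rate", 1.0) > 0.95, "pixel-intensity/gamma failure"

CHECKS: List[Callable] = [aperture_check, output_check, alignment_check, gamma_check]

def swiss_cheese(frame) -> str:
    """Run the checks in order; the first failure indicates the error origin."""
    for check in CHECKS:
        passed, origin = check(frame)
        if not passed:
            return f"relevant error detected: {origin}"
    return "frame OK"

print(swiss_cheese({"output_ratio": 1.08, "gamma_pass_rate": 0.99}))
```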
System Measures Thermal Noise In A Microphone
NASA Technical Reports Server (NTRS)
Zuckerwar, Allan J.; Ngo, Kim Chi T.
1994-01-01
Vacuum provides acoustic isolation from the environment. System for measuring thermal noise of a microphone and its preamplifier eliminates some sources of error found in older systems. Includes isolation vessel and exterior suspension which, acting together, enable measurement of thermal noise under realistic conditions while providing superior vibrational and acoustical isolation. System yields more accurate measurements of thermal noise.
NASA Astrophysics Data System (ADS)
Wu, Heng
2000-10-01
In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator e_θ, which is based on the spatial derivative of the velocity direction field, is designed and constructed. The a-posteriori error estimator corresponds to the antisymmetric part of the deformation-rate tensor, and it is sensitive to the second derivative of the velocity angle field. Rationality discussions reveal that the velocity angle error estimator is a curvature error estimator, and its value reflects the accuracy of streamline curves. It is also found that the velocity angle error estimator contains the nonlinear convective term of the Navier-Stokes equations, and it identifies and computes the direction difference when the convective acceleration direction and the flow velocity direction differ. Through benchmarking of computed variables against the analytic solution of Kovasznay flow or the finest grid of cavity flow, it is demonstrated that the velocity angle error estimator performs better than the strain error estimator. The benchmarking work also shows that the computed profile obtained by using e_θ achieves the best match with the true θ field, and that it is asymptotic to the true θ variation field, with the promise of fewer unknowns. Unstructured grids are adapted by employing local cell division as well as unrefinement of transition cells. Using element and node classes efficiently constructs a hierarchical data structure which provides cell and node inter-reference at each adaptive level. Employing element pointers and node pointers dynamically maintains the connection of adjacent elements and adjacent nodes, and thus avoids time-consuming search processes. The adaptive scheme is applied to viscous incompressible flow at different Reynolds numbers. It is found that the velocity angle error estimator can detect most flow characteristics and produce dense grids in the regions where flow velocity directions change abruptly. In addition, the e_θ estimator distributes the derivative error dilutely over the whole computational domain and also allows the refinement to be concentrated in regions of high error. Through comparison of the velocity angle error across the interface with neighbouring cells, it is verified that the adaptive scheme using e_θ provides an optimum mesh which can clearly resolve local flow features in a precise way. The adaptive results justify the applicability of the e_θ estimator and prove that this error estimator is a valuable adaptive indicator for the automatic refinement of unstructured grids.
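A rough numerical analogue of the velocity-angle idea (the exact definition of e_θ in the thesis may differ): compute θ = arctan2(v, u) on a grid and form an indicator from its spatial derivatives, which grows where the streamline direction changes rapidly. The velocity field and differencing below are illustrative only.

```python
import numpy as np

# Velocity field on a uniform grid (an analytic swirl as a stand-in for a CFD solution).
n = 64
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing="ij")
u, v = -y, x                                   # solid-body-like rotation

theta = np.arctan2(v, u)                       # velocity angle field
# Unwrap before differencing to avoid the artificial 2*pi jump.
dtx = np.gradient(np.unwrap(theta, axis=0), axis=0)
dty = np.gradient(np.unwrap(theta, axis=1), axis=1)

# Simple cell-wise indicator from second differences of the angle field,
# large where streamline curvature changes rapidly (vortices, separation).
indicator = np.abs(np.gradient(dtx, axis=0)) + np.abs(np.gradient(dty, axis=1))
print("mean indicator:", indicator.mean())
```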
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2016-01-01
This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
NASA Astrophysics Data System (ADS)
Eppenhof, Koen A. J.; Pluim, Josien P. W.
2017-02-01
Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
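A minimal PyTorch sketch of the kind of patch-based regressor described above: a small CNN that takes a pair of registered patches (two channels) and outputs the residual deformation norm at the centre pixel. The architecture, patch size, and training snippet are placeholders, not the authors' network.

```python
import torch
import torch.nn as nn

class RegErrorCNN(nn.Module):
    """Regress the registration error (residual deformation norm) for the
    centre pixel of a pair of co-registered image patches (2 input channels)."""
    def __init__(self, patch: int = 33):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (patch // 4) * (patch // 4), 64), nn.ReLU(),
            nn.Linear(64, 1),                     # predicted error in pixels
        )

    def forward(self, x):
        return self.head(self.features(x))

# Training pairs come from artificially deformed images, so the residual
# deformation (the regression target) is known without manual labelling.
model = RegErrorCNN()
patches = torch.randn(8, 2, 33, 33)              # batch of fixed/moving patch pairs
target = torch.rand(8, 1) * 8.0                  # known errors up to 8 pixels
loss = nn.MSELoss()(model(patches), target)
loss.backward()
```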
Goal-oriented explicit residual-type error estimates in XFEM
NASA Astrophysics Data System (ADS)
Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin
2013-08-01
A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
Dwarf galaxy mass estimators versus cosmological simulations
NASA Astrophysics Data System (ADS)
González-Samaniego, Alejandro; Bullock, James S.; Boylan-Kolchin, Michael; Fitts, Alex; Elbert, Oliver D.; Hopkins, Philip F.; Kereš, Dušan; Faucher-Giguère, Claude-André
2017-12-01
We use a suite of high-resolution cosmological dwarf galaxy simulations to test the accuracy of commonly used mass estimators from Walker et al. (2009) and Wolf et al. (2010), both of which depend on the observed line-of-sight velocity dispersion and the 2D half-light radius of the galaxy, Re. The simulations are part of the Feedback in Realistic Environments (FIRE) project and include 12 systems with stellar masses spanning 10^5-10^7 M⊙ that have structural and kinematic properties similar to those of observed dispersion-supported dwarfs. Both estimators are found to be quite accurate: M_Wolf/M_true = 0.98^{+0.19}_{-0.12} and M_Walker/M_true = 1.07^{+0.21}_{-0.15}, with errors reflecting the 68 per cent range over all simulations. The excellent performance of these estimators is remarkable given that they each assume spherical symmetry, a supposition that is broken in our simulated galaxies. Though our dwarfs have negligible rotation support, their 3D stellar distributions are flattened, with short-to-long axis ratios c/a ≃ 0.4-0.7. The median accuracy of the estimators shows no trend with asphericity. Our simulated galaxies have sphericalized stellar profiles in 3D that follow a nearly universal form, one that transitions from a core at small radius to a steep fall-off ∝ r^{-4.2} at large r; they are well fit by Sérsic profiles in projection. We find that the most important empirical quantity affecting mass estimator accuracy is Re. Determining Re by an analytic fit to the surface density profile produces a better estimated mass than if the half-light radius is determined via direct summation.
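For reference, a small sketch of the two estimators in the forms commonly quoted for Walker et al. (2009) and Wolf et al. (2010); the numerical prefactors here are the widely cited versions and should be checked against the original papers rather than treated as exact.

```python
import astropy.units as u
from astropy.constants import G

def walker_mass(sigma_los, r_half):
    """Walker et al. (2009): M(r_half) ~ 2.5 sigma^2 r_half / G,
    with r_half the projected (2D) half-light radius."""
    return (2.5 * sigma_los**2 * r_half / G).to(u.Msun)

def wolf_mass(sigma_los, r_e):
    """Wolf et al. (2010): M_1/2 ~ 3 sigma^2 r_1/2 / G, with the deprojected
    half-light radius r_1/2 ~ (4/3) R_e, i.e. ~ 4 sigma^2 R_e / G."""
    return (4.0 * sigma_los**2 * r_e / G).to(u.Msun)

# Illustrative dwarf: sigma_los = 9 km/s, R_e = 300 pc (made-up values).
print(walker_mass(9 * u.km / u.s, 300 * u.pc))
print(wolf_mass(9 * u.km / u.s, 300 * u.pc))
```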
Regression-assisted deconvolution.
McIntyre, Julie; Stefanski, Leonard A
2011-06-30
We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun
2018-03-01
Aiming at reducing the estimation error of the sensor frequency response function (FRF) obtained with the commonly used window-based spectral estimation method, the error models for the interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can also be significantly eliminated by the cubic spline interpolation method when estimating the FRF from step response data, and that a window with a smaller front-end value suppresses more of the transient error. Thus, a new dual-cosine window with its non-zero discrete Fourier transform bins at -3, -1, 0, 1, and 3 is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation error suppression capability and better transient error suppression capability when estimating the FRF from the step response; specifically, it reduces the asymptotic order of the transient error from O(N^-2) for the Hanning window method to O(N^-4) while only slightly increasing the uncertainty (by about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, which is a high-order, lightly damped, non-minimum-phase system, is employed as the example for verifying the new dual-cosine window-based spectral estimation method. The model simulation results show that the new dual-cosine window method is better than the Hanning window method for FRF estimation and, compared with the Gans method and the LPM method, it has the advantages of simple computation, less time consumption, and short data requirements; the calculation results for the actual balance FRF data are consistent with the simulation results. Thus, the new dual-cosine window is effective and practical for FRF estimation.
Pedigree reconstruction from SNP data: parentage assignment, sibship clustering and beyond.
Huisman, Jisca
2017-09-01
Data on hundreds or thousands of single nucleotide polymorphisms (SNPs) provide detailed information about the relationships between individuals, but currently few tools can turn this information into a multigenerational pedigree. I present the R package sequoia, which assigns parents, clusters half-siblings sharing an unsampled parent and assigns grandparents to half-sibships. Assignments are made after consideration of the likelihoods of all possible first-, second- and third-degree relationships between the focal individuals, as well as the traditional alternative of being unrelated. This careful exploration of the local likelihood surface is implemented in a fast, heuristic hill-climbing algorithm. Distinction between the various categories of second-degree relatives is possible when likelihoods are calculated conditional on at least one parent of each focal individual. Performance was tested on simulated data sets with realistic genotyping error rates and missingness, based on three different large pedigrees (N = 1000-2000). This included a complex pedigree with overlapping generations, occasional close inbreeding and some unknown birth years. Parentage assignment was highly accurate down to about 100 independent SNPs (error rate <0.1%) and fast (<1 min), as most pairs can be excluded from being parent-offspring based on opposite homozygosity. For full pedigree reconstruction, 40% of parents were assumed nongenotyped. Reconstruction resulted in low error rates (<0.3%) and high assignment rates (>99%) in limited computation time (typically <1 h) when at least 200 independent SNPs were used. In three empirical data sets, relatedness estimated from the inferred pedigree was strongly correlated with genomic relatedness. © 2017 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
Hindcasting of decadal‐timescale estuarine bathymetric change with a tidal‐timescale model
Ganju, Neil K.; Schoellhamer, David H.; Jaffe, Bruce E.
2009-01-01
Hindcasting decadal-timescale bathymetric change in estuaries is prone to error due to limited data for initial conditions, boundary forcing, and calibration; computational limitations further hinder efforts. We developed and calibrated a tidal-timescale model to bathymetric change in Suisun Bay, California, over the 1867–1887 period. A general, multiple-timescale calibration ensured robustness over all timescales; two input reduction methods, the morphological hydrograph and the morphological acceleration factor, were applied at the decadal timescale. The model was calibrated to net bathymetric change in the entire basin; average error for bathymetric change over individual depth ranges was 37%. On a model cell-by-cell basis, performance for spatial amplitude correlation was poor over the majority of the domain, though spatial phase correlation was better, with 61% of the domain correctly indicated as erosional or depositional. Poor agreement was likely caused by the specification of initial bed composition, which was unknown during the 1867–1887 period. Cross-sectional bathymetric change between channels and flats, driven primarily by wind wave resuspension, was modeled with higher skill than longitudinal change, which is driven in part by gravitational circulation. The accelerated response of depth may have prevented gravitational circulation from being represented properly. As performance criteria became more stringent in a spatial sense, the error of the model increased. While these methods are useful for estimating basin-scale sedimentation changes, they may not be suitable for predicting specific locations of erosion or deposition. They do, however, provide a foundation for realistic estuarine geomorphic modeling applications.
NASA Astrophysics Data System (ADS)
Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil
2016-09-01
The primary purpose of the study was to determine how much detail deformable image registration (DIR) phantoms need in order to adequately simulate human anatomy and to accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu’s method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.
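The deformation-error metric used above (magnitude of the vector difference between known and predicted deformation) is straightforward to compute; the sketch below shows one plausible implementation on hypothetical vector fields, not the study's actual code.

```python
import numpy as np

def deformation_error(known_dvf, predicted_dvf, mask=None):
    """Voxel-wise deformation error: magnitude of the vector difference between a
    known (applied) and a predicted deformation vector field, shape (nz, ny, nx, 3), in mm."""
    err = np.linalg.norm(predicted_dvf - known_dvf, axis=-1)   # per-voxel error magnitude
    if mask is not None:
        err = err[mask]                                        # restrict to a region of interest
    return err.mean(), err.max()

# toy fields: a known smooth warp and a prediction with a small random residual
shape = (16, 64, 64)
known = np.zeros(shape + (3,))
known[..., 0] = 2.0                                            # 2 mm shift along x everywhere
predicted = known + np.random.default_rng(0).normal(0, 0.5, known.shape)
mean_err, max_err = deformation_error(known, predicted)
print(f"mean error = {mean_err:.2f} mm, max error = {max_err:.2f} mm")
```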
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error
Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G
2012-01-01
Objective: To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources: Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient-specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design: Adverse events were identified among the original and reabstracted records using ICD10-CA (Canadian adaptation of ICD10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings: Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions: Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908
NASA Astrophysics Data System (ADS)
Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu
2018-02-01
State of charge (SOC) estimation is generally acknowledged as one of the most important functions in battery management systems for lithium-ion batteries in new energy vehicles. Although much effort has been made to increase the accuracy of various online SOC estimation methods as much as possible within limited on-chip resources, little of the literature discusses the error sources of those SOC estimation methods. This paper first reviews the commonly studied SOC estimation methods from a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is proposed. SOC estimation methods are analyzed from the views of the measured values, models, algorithms and state parameters. Subsequently, error flow charts are proposed to analyze the error sources from the signal measurement to the models and algorithms for the widely used online SOC estimation methods in new energy vehicles. Finally, with consideration of the working conditions, choosing more reliable and applicable SOC estimation methods is discussed, and the future development of the promising online SOC estimation methods is suggested.
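To make one of the error flows concrete, the sketch below shows how a constant current-sensor bias propagates through plain Coulomb counting, one of the commonly reviewed SOC methods. The cell capacity, bias value and sampling setup are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def coulomb_count(current_a, dt_s, capacity_ah, soc0, bias_a=0.0):
    """Coulomb-counting SOC estimate (discharge current positive). A constant
    current-measurement bias (bias_a, amperes) is added to show how a signal
    measurement error accumulates in the integrated state."""
    measured = current_a + bias_a
    dsoc = -measured * dt_s / (capacity_ah * 3600.0)   # fraction of capacity per step
    return soc0 + np.cumsum(dsoc)

# one hour of a 10 A discharge on a 50 Ah cell, sampled at 1 s
t = np.arange(3600)
current = np.full_like(t, 10.0, dtype=float)
true_soc = coulomb_count(current, 1.0, 50.0, soc0=0.9)             # ideal current sensor
est_soc = coulomb_count(current, 1.0, 50.0, soc0=0.9, bias_a=0.2)  # 0.2 A sensor bias
print(f"SOC error after 1 h: {abs(est_soc[-1] - true_soc[-1]) * 100:.2f} %")
```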
A posteriori error estimates in voice source recovery
NASA Astrophysics Data System (ADS)
Leonov, A. S.; Sorokin, V. N.
2017-12-01
The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model that relates these quantities is used for the solution. A variational method of solving the inverse problem of voice source recovery for a new parametric class of sources, namely piecewise-linear sources (PWL-sources), is proposed. Also, a technique for a posteriori numerical error estimation for the obtained solutions is presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that the a posteriori error estimates can be used as a criterion of quality for the obtained voice source pulses in application to speaker recognition.
NASA Technical Reports Server (NTRS)
Todling, Ricardo
2015-01-01
Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and a fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models, and verifying the consistency between the sequential and variational formulations, the author found that the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.
Han, Lianghao; Dong, Hua; McClelland, Jamie R; Han, Liangxiu; Hawkes, David J; Barratt, Dean C
2017-07-01
This paper presents a new hybrid biomechanical model-based non-rigid image registration method for lung motion estimation. In the proposed method, a patient-specific biomechanical modelling process captures major physically realistic deformations with explicit physical modelling of sliding motion, whilst a subsequent non-rigid image registration process compensates for small residuals. The proposed algorithm was evaluated with 10 4D CT datasets of lung cancer patients. The target registration error (TRE), defined as the Euclidean distance of landmark pairs, was significantly lower with the proposed method (TRE = 1.37 mm) than with biomechanical modelling (TRE = 3.81 mm) and intensity-based image registration without specific considerations for sliding motion (TRE = 4.57 mm). The proposed method achieved accuracy comparable to that of several recently developed intensity-based registration algorithms with sliding handling on the same datasets. A detailed comparison of the distributions of TREs with three non-rigid intensity-based algorithms showed that the proposed method performed especially well on estimating the displacement field of lung surface regions (mean TRE = 1.33 mm, maximum TRE = 5.3 mm). The effects of biomechanical model parameters (such as Poisson's ratio, friction and tissue heterogeneity) on displacement estimation were investigated. The potential of the algorithm in optimising biomechanical models of lungs through analysing the pattern of displacement compensation from the image registration process has also been demonstrated. Copyright © 2017 Elsevier B.V. All rights reserved.
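The TRE figures quoted above are Euclidean distances between corresponding landmarks after applying the estimated displacement field; the short sketch below illustrates that computation on hypothetical landmark data (not the study's datasets or code).

```python
import numpy as np

def target_registration_error(landmarks_fixed, landmarks_moving, est_displacement):
    """Target registration error (TRE): Euclidean distance between each fixed-image
    landmark and the corresponding moving-image landmark mapped by the estimated
    displacement sampled at that landmark (all coordinates in mm)."""
    mapped = landmarks_moving + est_displacement
    return np.linalg.norm(mapped - landmarks_fixed, axis=1)

# toy example with 5 landmark pairs and an imperfect displacement estimate
rng = np.random.default_rng(3)
fixed = rng.uniform(0, 300, (5, 3))
true_disp = np.array([5.0, -3.0, 1.0])              # true motion of the landmarks
moving = fixed - true_disp
estimated_disp = true_disp + rng.normal(0, 1.0, (5, 3))
tre = target_registration_error(fixed, moving, estimated_disp)
print(f"mean TRE = {tre.mean():.2f} mm, max TRE = {tre.max():.2f} mm")
```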
Flight Test 4 Preliminary Results: NASA Ames SSI
NASA Technical Reports Server (NTRS)
Isaacson, Doug; Gong, Chester; Reardon, Scott; Santiago, Confesor
2016-01-01
Realization of the expected proliferation of Unmanned Aircraft System (UAS) operations in the National Airspace System (NAS) depends on the development and validation of performance standards for UAS Detect and Avoid (DAA) Systems. The RTCA Special Committee 228 is charged with leading the development of draft Minimum Operational Performance Standards (MOPS) for UAS DAA Systems. NASA, as a participating member of RTCA SC-228, is committed to supporting the development and validation of draft requirements as well as the safety substantiation and end-to-end assessment of DAA system performance. The Unmanned Aircraft System (UAS) Integration into the National Airspace System (NAS) Project conducted a flight test program, referred to as Flight Test 4, at Armstrong Flight Research Center from April to June 2016. Part of the test flights was dedicated to the NASA Ames-developed Detect and Avoid (DAA) System referred to as JADEM (Java Architecture for DAA Extensibility and Modeling). The encounter scenarios, which involved NASA's Ikhana UAS and a manned intruder aircraft, were designed to collect data on DAA system performance in real-world conditions and uncertainties with four different surveillance sensor systems. Flight Test 4 had four objectives: (1) validate DAA requirements in stressing cases that drive MOPS requirements, including: high-speed cooperative intruder, low-speed non-cooperative intruder, high vertical closure rate encounter, and Mode C/S-only intruder (i.e., without ADS-B), (2) validate the TCAS/DAA alerting and guidance interoperability concept in the presence of realistic sensor, tracking and navigational errors and in multiple-intruder encounters against both cooperative and non-cooperative intruders, (3) validate Well Clear Recovery guidance in the presence of realistic sensor, tracking and navigational errors, and (4) validate DAA alerting and guidance requirements in the presence of realistic sensor, tracking and navigational errors. The results will be presented at RTCA Special Committee 228 in support of final verification and validation of the DAA MOPS.
Distortion Representation of Forecast Errors for Model Skill Assessment and Objective Analysis
NASA Technical Reports Server (NTRS)
Hoffman, Ross N.; Nehrkorn, Thomas; Grassotti, Christopher
1996-01-01
We study a novel characterization of errors for numerical weather predictions. In its simplest form we decompose the error into a part attributable to phase errors and a remainder. The phase error is represented in the same fashion as a velocity field and will be required to vary slowly and smoothly with position. A general distortion representation allows for the displacement and a bias correction of forecast anomalies. In brief, the distortion is determined by minimizing the objective function by varying the displacement and bias correction fields. In the present project we use a global or hemispheric domain, and spherical harmonics to represent these fields. In this project we are initially focusing on the assessment application, restricted to a realistic but univariate 2-dimensional situation. Specifically we study the forecast errors of the 500 hPa geopotential height field for forecasts of the short and medium range. The forecasts are those of the Goddard Earth Observing System data assimilation system. Results presented show that the methodology works, that a large part of the total error may be explained by a distortion limited to triangular truncation at wavenumber 10, and that the remaining residual error contains mostly small spatial scales.
Incorporating measurement error in n = 1 psychological autoregressive modeling
Schuurman, Noémi K.; Houtveen, Jan H.; Hamaker, Ellen L.
2015-01-01
Measurement error is omnipresent in psychological data. However, the vast majority of applications of autoregressive time series analyses in psychology do not take measurement error into account. Disregarding measurement error when it is present in the data results in a bias of the autoregressive parameters. We discuss two models that take measurement error into account: An autoregressive model with a white noise term (AR+WN), and an autoregressive moving average (ARMA) model. In a simulation study we compare the parameter recovery performance of these models, and compare this performance for both a Bayesian and frequentist approach. We find that overall, the AR+WN model performs better. Furthermore, we find that for realistic (i.e., small) sample sizes, psychological research would benefit from a Bayesian approach in fitting these models. Finally, we illustrate the effect of disregarding measurement error in an AR(1) model by means of an empirical application on mood data in women. We find that, depending on the person, approximately 30–50% of the total variance was due to measurement error, and that disregarding this measurement error results in a substantial underestimation of the autoregressive parameters. PMID:26283988
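A small simulation makes the attenuation bias described above tangible: generating a latent AR(1) series, adding white measurement noise, and fitting a naive AR(1) model shows the autoregressive parameter pulled toward zero. The sketch below is an illustrative Monte Carlo, not the authors' simulation study; the sample size and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
phi, n = 0.5, 200            # true autoregressive parameter, realistic series length

def lag1_autoregression(y):
    """Naive AR(1) estimate: lag-1 regression slope of the centered series."""
    y = y - y.mean()
    return np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

estimates = []
for _ in range(1000):
    # latent AR(1) process plus white measurement noise (the AR+WN situation)
    e = rng.normal(0, 1, n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t]
    y = x + rng.normal(0, 1, n)   # measurement error with variance comparable to the signal
    estimates.append(lag1_autoregression(y))

print(f"true phi = {phi}, mean naive estimate = {np.mean(estimates):.2f}")
# the naive estimate is attenuated toward zero when measurement error is ignored
```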
Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks
NASA Astrophysics Data System (ADS)
Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.
2015-03-01
The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which is to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
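For the indirect approach, the component errors are combined via standard error propagation; the sketch below shows this for a single hypothetical cell, treating the SOC stock as the product of carbon concentration, bulk density and depth with independent first-order errors. The variable names and values are illustrative assumptions, not the study's models.

```python
import numpy as np

def soc_stock_error(conc, conc_se, bd, bd_se, depth_m):
    """SOC stock (kg m-2) from carbon concentration (kg C per kg soil) and bulk
    density (kg m-3) over a fixed depth, with the standard error of the product
    propagated from independent component errors (first-order approximation)."""
    stock = conc * bd * depth_m
    rel_se = np.sqrt((conc_se / conc) ** 2 + (bd_se / bd) ** 2)
    return stock, stock * rel_se

# hypothetical topsoil cell: 2.1% C, bulk density 1400 kg m-3, 0.3 m depth
stock, se = soc_stock_error(conc=0.021, conc_se=0.003, bd=1400.0, bd_se=100.0, depth_m=0.3)
print(f"SOC stock = {stock:.2f} kg m-2, propagated standard error = {se:.2f} kg m-2")
```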
Fully anisotropic goal-oriented mesh adaptation for 3D steady Euler equations
NASA Astrophysics Data System (ADS)
Loseille, A.; Dervieux, A.; Alauzet, F.
2010-04-01
This paper studies the coupling between anisotropic mesh adaptation and goal-oriented error estimate. The former is very well suited to the control of the interpolation error. It is generally interpreted as a local geometric error estimate. On the contrary, the latter is preferred when studying approximation errors for PDEs. It generally involves non-local error contributions. Consequently, a full and strong coupling between both is hard to achieve due to this apparent incompatibility. This paper shows how to achieve this coupling in three steps. First, a new a priori error estimate is proved in a formal framework adapted to goal-oriented mesh adaptation for output functionals. This estimate is based on a careful analysis of the contributions of the implicit error and of the interpolation error. Second, the error estimate is applied to the set of steady compressible Euler equations which are solved by a stabilized Galerkin finite element discretization. A goal-oriented error estimation is derived. It involves the interpolation error of the Euler fluxes weighted by the gradient of the adjoint state associated with the observed functional. Third, rewritten in the continuous mesh framework, the previous estimate is minimized on the set of continuous meshes thanks to a calculus of variations. The optimal continuous mesh is then derived analytically. Thus, it can be used as a metric tensor field to drive the mesh adaptation. From a numerical point of view, this method is completely automatic, intrinsically anisotropic, and does not depend on any a priori choice of variables to perform the adaptation. 3D examples of steady flows around supersonic and transonic jets are presented to validate the current approach and to demonstrate its efficiency.
A numerical study of some potential sources of error in side-by-side seismometer evaluations
Holcomb, L. Gary
1990-01-01
This report presents the results of a series of computer simulations of potential errors in test data, which might be obtained when conducting side-by-side comparisons of seismometers. These results can be used as guides in estimating potential sources and magnitudes of errors one might expect when analyzing real test data. First, the derivation of a direct method for calculating the noise levels of two sensors in a side-by-side evaluation is repeated and extended slightly herein. The bulk of this derivation was presented previously (see Holcomb 1989); it is repeated here for easy reference. This method is applied to the analysis of a simulated test of two sensors in a side-by-side test in which the outputs of both sensors consist of white noise spectra with known signal-to-noise ratios (SNRs). This report extends this analysis to high SNRs to determine the limitations of the direct method for calculating the noise levels at signal-to-noise levels which are much higher than presented previously (see Holcomb 1989). Next, the method is used to analyze a simulated test of two sensors in a side-by-side test in which the outputs of both sensors consist of band-shaped noise spectra with known signal-to-noise ratios. This is a much more realistic representation of real-world data because the earth's background spectrum is certainly not flat. Finally, the results of the analysis of simulated white and band-shaped side-by-side test data are used to assist in interpreting the analysis of the effects of simulated azimuthal misalignment in side-by-side sensor evaluations. A thorough understanding of azimuthal misalignment errors is important because of the physical impossibility of perfectly aligning two sensors in a real world situation. The analysis herein indicates that alignment errors place lower limits on the levels of system noise which can be resolved in a side-by-side measurement. It also indicates that alignment errors are the source of the fact that real-data noise spectra tend to follow the earth's background spectra in shape.
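The sketch below illustrates the general idea behind such two-sensor noise separation (not Holcomb's exact formulation): the cross-spectrum estimates the power common to both sensors, and the remainder of each auto-spectrum approximates that sensor's self-noise, under the assumption of identical unit responses and uncorrelated instrument noise. The signal model, noise levels and spectral settings are assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch, csd

fs, n = 100.0, 2 ** 16
rng = np.random.default_rng(7)
ground = np.cumsum(rng.normal(0, 1, n))        # common "ground" signal with a red spectrum
ground -= ground.mean()
out1 = ground + rng.normal(0, 0.5, n)          # sensor 1 = common signal + instrument noise
out2 = ground + rng.normal(0, 0.5, n)          # sensor 2 = common signal + instrument noise

f, p11 = welch(out1, fs, nperseg=4096)
_, p22 = welch(out2, fs, nperseg=4096)
_, p12 = csd(out1, out2, fs, nperseg=4096)

# the coherent cross-power approximates the common signal; the residual auto-power
# is a (noisy) estimate of each sensor's self-noise PSD
noise1 = p11 - np.abs(p12)
noise2 = p22 - np.abs(p12)
print(f"median estimated noise PSD, sensor 1: {np.median(noise1):.3f}")
print(f"true white-noise PSD level:           {0.5 ** 2 * 2 / fs:.3f}")
```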
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus-based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach
NASA Astrophysics Data System (ADS)
Bähr, Hermann; Hanssen, Ramon F.
2012-12-01
An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable grid-search method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and rigorous geometric modelling of the orbital error signal, but it does not consider interfering large-scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
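As a toy illustration of the network idea (much simplified relative to the paper's two-parameter geometric model), each interferogram constrains the difference of per-image errors; stacking these constraints into an overdetermined design matrix and solving with the minimum-norm least-squares (pseudoinverse) solution yields quasi-absolute per-image corrections. All numbers and the one-parameter-per-image simplification are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images = 6
true_err = rng.normal(0, 5.0, n_images)        # per-image baseline errors (mm), unknown
true_err -= true_err.mean()                    # only differences are observable

# linearly dependent network of interferometric pairs (master, slave)
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4), (4, 5), (1, 5)]
A = np.zeros((len(pairs), n_images))
obs = np.zeros(len(pairs))
for k, (i, j) in enumerate(pairs):
    A[k, i], A[k, j] = 1.0, -1.0               # each interferogram senses an error difference
    obs[k] = true_err[i] - true_err[j] + rng.normal(0, 0.5)   # plus per-interferogram noise

# minimum-norm least-squares solution (pseudoinverse) gives quasi-absolute errors
est = np.linalg.pinv(A) @ obs
print("true :", np.round(true_err, 2))
print("est  :", np.round(est, 2))
```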
Giovannelli, J; Loury, P; Lainé, M; Spaccaferri, G; Hubert, B; Chaud, P
2015-05-01
To describe and evaluate the forecasts of the load that the pandemic A(H1N1)2009 influenza would place on general practitioners (GPs) and the hospital care system, especially during its peak, in the Nord-Pas-de-Calais (NPDC) region, France. Modelling study. The epidemic curve was modelled using an assumption of a normal distribution of cases. The values for the forecast parameters were estimated from a literature review of observed data from the Southern hemisphere and French Overseas Territories, where the pandemic had already occurred. Two scenarios were considered, one realistic, the other pessimistic, enabling the authors to evaluate the 'reasonable worst case'. Forecasts were then assessed by comparing them with observed data in the NPDC region of 4 million people. The realistic scenario's forecasts estimated 300,000 cases, 1500 hospitalizations and 225 intensive care unit (ICU) admissions for the pandemic wave; 115 hospital beds and 45 ICU beds would be required per day during the peak. The pessimistic scenario's forecasts were 2-3 times higher than the realistic scenario's forecasts. Observed data were: 235,000 cases, 1585 hospitalizations, 58 ICU admissions, and a maximum of 11.6 ICU beds per day. The realistic scenario correctly estimated the temporal distribution of GP and hospitalized cases but overestimated the number of cases admitted to ICU. Obtaining more robust data for parameter estimation--particularly the rate of ICU admission among the population, which the authors recommend using--may provide better forecasts. Copyright © 2015 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
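The forecasting logic described above (cases spread over the wave following a normal distribution in time, then scaled by hospitalization and ICU admission rates) can be sketched as follows. The attack rate, per-case rates and curve parameters below are assumptions chosen only so the totals roughly match the realistic-scenario figures quoted in the abstract; they are not the study's actual parameter values.

```python
import numpy as np
from scipy.stats import norm

population = 4_000_000                 # NPDC region, order of magnitude
attack_rate = 0.075                    # assumed clinical attack rate
hosp_rate, icu_rate = 0.005, 0.00075   # assumed per-case hospitalization and ICU rates
peak_week, sd_weeks = 8.0, 2.5         # assumed normal epidemic curve

weeks = np.arange(0, 17)
# weekly incidence from the normal-curve assumption
weekly_frac = norm.cdf(weeks + 1, peak_week, sd_weeks) - norm.cdf(weeks, peak_week, sd_weeks)
weekly_cases = population * attack_rate * weekly_frac

total_cases = weekly_cases.sum()
print(f"total cases            : {total_cases:,.0f}")
print(f"total hospitalizations : {total_cases * hosp_rate:,.0f}")
print(f"total ICU admissions   : {total_cases * icu_rate:,.0f}")
print(f"peak weekly cases      : {weekly_cases.max():,.0f}")
```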
Two-species occupancy modeling accounting for species misidentification and nondetection
Chambert, Thierry; Grant, Evan H. Campbell; Miller, David A. W.; Nichols, James; Mulder, Kevin P.; Brand, Adrianne B.
2018-01-01
In occupancy studies, species misidentification can lead to false‐positive detections, which can cause severe estimator biases. Currently, all models that account for false‐positive errors only consider omnibus sources of false detections and are limited to single‐species occupancy. However, false detections for a given species often occur because of the misidentification with another, closely related species. To exploit this explicit source of false‐positive detection error, we develop a two‐species occupancy model that accounts for misidentifications between two species of interest. As with other false‐positive models, identifiability is greatly improved by the availability of unambiguous detections at a subset of site × occasions. Here, we consider the case where some of the field observations can be confirmed using laboratory or other independent identification methods (“confirmatory data”). We performed three simulation studies to (1) assess the model's performance under various realistic scenarios, (2) investigate the influence of the proportion of confirmatory data on estimator accuracy and (3) compare the performance of this two‐species model with that of the single‐species false‐positive model. The model shows good performance under all scenarios, even when only small proportions of detections are confirmed (e.g. 5%). It also clearly outperforms the single‐species model. We illustrate application of this model using a 4‐year dataset on two sympatric species of lungless salamanders: the US federally endangered Shenandoah salamander Plethodon shenandoah, and its presumed competitor, the red‐backed salamander Plethodon cinereus. Occupancy of red‐backed salamanders appeared very stable across the 4 years of study, whereas the Shenandoah salamander displayed substantial turnover in occupancy of forest habitats among years. Given the extent of species misidentification issues in occupancy studies, this modelling approach should help improve the reliability of estimates of species distribution, which is the goal of many studies and monitoring programmes. Further developments, to account for different forms of state uncertainty, can be readily undertaken under our general approach.
NASA Astrophysics Data System (ADS)
Raeder, K.; Anderson, J. L.; Lauritzen, P. H.; Hoar, T. J.; Collins, N.
2010-12-01
DART (www.image.ucar.edu/DAReS/DART) is a general-purpose, freely available ensemble Kalman filter data assimilation system, which is being used to generate state-of-the-art, partially coupled, ocean-atmosphere re-analyses in support of the decadal predictions planned for the next IPCC report. The resulting gridded product is directly comparable to the state variables output by POP and CAM (oceanic and atmospheric components of NCAR's Community Earth System Model climate model) because those are the assimilating models. Other models could also benefit from comparison against these reanalyses, since the ocean analyses are at the leading edge of ocean state estimation, and the atmospheric analyses are competitive with operational centers'. Such comparisons can reveal model biases and predictability characteristics, and do so in a quantitative way, since the ensemble nature of the analyses provides an objective estimate of the analysis error. The analyses will also be used as initial conditions for the decadal forecasts because they are the most realistic available. The generation of such analyses has revealed errors in model formulation for several versions of the finite volume core CAM, which has led to model improvements in each case. New models can be incorporated into DART in a matter of weeks, allowing them to be compared directly against available observations. The observations currently used in the assimilations include, for the ocean, temperature and salinity from the World Ocean Database (floats, drifters, moorings, autonomous pinnipeds, and others), and, for the atmosphere, temperature and winds from radiosondes, satellite drift winds, ACARS and aircraft. Observations of ocean currents and atmospheric moisture and pressure are also available. Global Positioning System profiles of atmospheric temperature and moisture are available for recent years. All that is required to add new observations to the suite is the forward operator, which generates an estimate of the observation from the model state. In summary, DART provides a flexible, convenient, rigorous environment for evaluating models in the context of real observations.
Fully probabilistic seismic source inversion - Part 2: Modelling errors and station covariances
NASA Astrophysics Data System (ADS)
Stähler, Simon C.; Sigloch, Karin
2016-11-01
Seismic source inversion, a central task in seismology, is concerned with the estimation of earthquake source parameters and their uncertainties. Estimating uncertainties is particularly challenging because source inversion is a non-linear problem. In a companion paper, Stähler and Sigloch (2014) developed a method of fully Bayesian inference for source parameters, based on measurements of waveform cross-correlation between broadband, teleseismic body-wave observations and their modelled counterparts. This approach yields not only depth and moment tensor estimates but also source time functions. A prerequisite for Bayesian inference is the proper characterisation of the noise afflicting the measurements, a problem we address here. We show that, for realistic broadband body-wave seismograms, the systematic error due to an incomplete physical model affects waveform misfits more strongly than random, ambient background noise. In this situation, the waveform cross-correlation coefficient CC, or rather its decorrelation D = 1 - CC, performs more robustly as a misfit criterion than ℓp norms, more commonly used as sample-by-sample measures of misfit based on distances between individual time samples. From a set of over 900 user-supervised, deterministic earthquake source solutions treated as a quality-controlled reference, we derive the noise distribution on signal decorrelation D = 1 - CC of the broadband seismogram fits between observed and modelled waveforms. The noise on D is found to approximately follow a log-normal distribution, a fortunate fact that readily accommodates the formulation of an empirical likelihood function for D for our multivariate problem. The first and second moments of this multivariate distribution are shown to depend mostly on the signal-to-noise ratio (SNR) of the CC measurements and on the back-azimuthal distances of seismic stations. By identifying and quantifying this likelihood function, we make D and thus waveform cross-correlation measurements usable for fully probabilistic sampling strategies, in source inversion and related applications such as seismic tomography.
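A minimal sketch of the misfit quantities discussed above: the decorrelation D = 1 - CC between an observed and a modelled waveform, and a log-normal log-likelihood evaluated on D. The toy waveforms, noise level and log-normal parameters are assumptions for illustration, not the empirically derived noise model of the paper.

```python
import numpy as np

def decorrelation(obs, synth):
    """Misfit D = 1 - CC, where CC is the zero-lag normalized cross-correlation
    coefficient between an observed and a modelled waveform."""
    cc = np.dot(obs, synth) / (np.linalg.norm(obs) * np.linalg.norm(synth))
    return 1.0 - cc

def log_likelihood_D(d_values, mu, sigma):
    """Log-likelihood of decorrelation misfits under an (assumed independent)
    log-normal noise model with parameters mu, sigma on ln D."""
    d = np.asarray(d_values)
    return np.sum(-np.log(d * sigma * np.sqrt(2 * np.pi))
                  - (np.log(d) - mu) ** 2 / (2 * sigma ** 2))

# toy example: one observed trace, a good and a poor candidate source model
t = np.linspace(0, 10, 1000)
obs = np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.2 * t)
good = obs + 0.05 * np.random.default_rng(0).normal(size=t.size)
poor = np.sin(2 * np.pi * 0.6 * t) * np.exp(-0.2 * t)
print(f"D(good) = {decorrelation(obs, good):.3f}, D(poor) = {decorrelation(obs, poor):.3f}")
print("logL(good), logL(poor):",
      log_likelihood_D([decorrelation(obs, good)], mu=np.log(0.05), sigma=0.8),
      log_likelihood_D([decorrelation(obs, poor)], mu=np.log(0.05), sigma=0.8))
```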
NASA Astrophysics Data System (ADS)
Kirchner, J. W.
2016-01-01
Environmental heterogeneity is ubiquitous, but environmental systems are often analyzed as if they were homogeneous instead, resulting in aggregation errors that are rarely explored and almost never quantified. Here I use simple benchmark tests to explore this general problem in one specific context: the use of seasonal cycles in chemical or isotopic tracers (such as Cl-, δ18O, or δ2H) to estimate timescales of storage in catchments. Timescales of catchment storage are typically quantified by the mean transit time, meaning the average time that elapses between parcels of water entering as precipitation and leaving again as streamflow. Longer mean transit times imply greater damping of seasonal tracer cycles. Thus, the amplitudes of tracer cycles in precipitation and streamflow are commonly used to calculate catchment mean transit times. Here I show that these calculations will typically be wrong by several hundred percent, when applied to catchments with realistic degrees of spatial heterogeneity. This aggregation bias arises from the strong nonlinearity in the relationship between tracer cycle amplitude and mean travel time. I propose an alternative storage metric, the young water fraction in streamflow, defined as the fraction of runoff with transit times of less than roughly 0.2 years. I show that this young water fraction (not to be confused with event-based "new water" in hydrograph separations) is accurately predicted by seasonal tracer cycles within a precision of a few percent, across the entire range of mean transit times from almost zero to almost infinity. Importantly, this relationship is also virtually free from aggregation error. That is, seasonal tracer cycles also accurately predict the young water fraction in runoff from highly heterogeneous mixtures of subcatchments with strongly contrasting transit-time distributions. Thus, although tracer cycle amplitudes yield biased and unreliable estimates of catchment mean travel times in heterogeneous catchments, they can be used to reliably estimate the fraction of young water in runoff.
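The aggregation bias described above can be reproduced numerically under a simple assumption: for an exponential transit-time distribution with mean transit time tau, the seasonal tracer cycle is damped by 1/sqrt(1 + (2*pi*f*tau)^2). Averaging the damped cycles of two contrasting subcatchments and inverting for a single apparent transit time gives a strongly biased estimate, while the young water fraction aggregates nearly linearly. The two subcatchment transit times, the exponential-TTD assumption and the approximation of the young water fraction by the amplitude ratio are illustrative assumptions.

```python
import numpy as np

f = 1.0        # seasonal frequency, cycles per year
T_yw = 0.2     # young-water threshold, roughly 0.2 years

def amplitude_ratio(tau):
    """Damping of the seasonal tracer cycle for an exponential TTD of mean tau (years)."""
    return 1.0 / np.sqrt(1.0 + (2.0 * np.pi * f * tau) ** 2)

def tau_from_ratio(r):
    """Invert the damping relation to recover an apparent mean transit time."""
    return np.sqrt(1.0 / r ** 2 - 1.0) / (2.0 * np.pi * f)

def young_water_fraction(tau):
    """Fraction of water younger than T_yw for an exponential TTD."""
    return 1.0 - np.exp(-T_yw / tau)

# two equal-area subcatchments with strongly contrasting mean transit times (years)
tau1, tau2 = 0.2, 5.0
mixed_ratio = 0.5 * amplitude_ratio(tau1) + 0.5 * amplitude_ratio(tau2)

print(f"true mean transit time    : {0.5 * (tau1 + tau2):.2f} yr")
print(f"apparent (aggregated) MTT : {tau_from_ratio(mixed_ratio):.2f} yr  <- biased low")

true_fyw = 0.5 * (young_water_fraction(tau1) + young_water_fraction(tau2))
print(f"true young water fraction : {true_fyw:.2f}")
print(f"amplitude-based estimate  : {mixed_ratio:.2f}  <- nearly free of aggregation error")
```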
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2013-01-01
A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.
Kinnamon, Daniel D; Lipsitz, Stuart R; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L
2010-04-01
The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
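The contrast described above can be illustrated with a small simulation (not the authors' estimator or data): with additive technical errors on both TBW and FFM, the mean of the individual ratios drifts away from the true hydration fraction, whereas an instrumental-variables style estimator that uses an error-free correlate of FFM as the instrument remains close to the truth. All distributions, error magnitudes and the choice of body weight as instrument are assumptions.

```python
import numpy as np

rng = np.random.default_rng(10)
n, hf_true = 5000, 0.732                 # assumed true hydration fraction of fat-free mass

ffm = rng.normal(45.0, 9.0, n)           # true fat-free mass (kg)
fat = rng.gamma(4.0, 3.0, n)             # fat mass (kg), independent of technical errors
tbw = hf_true * ffm                      # true total body water (kg)

# measured quantities with additive technical errors
tbw_m = tbw + rng.normal(0, 1.5, n)
ffm_m = ffm + rng.normal(0, 4.0, n)
weight = ffm + fat                       # instrument: correlated with FFM, assumed error-free

mean_of_ratios = np.mean(tbw_m / ffm_m)
# instrumental-variables style estimator: ratio of covariances with the instrument
iv_estimate = np.cov(tbw_m, weight)[0, 1] / np.cov(ffm_m, weight)[0, 1]

print(f"true HF          : {hf_true:.3f}")
print(f"mean of ratios   : {mean_of_ratios:.3f}")   # shifted by the FFM measurement error
print(f"IV-style estimate: {iv_estimate:.3f}")      # close to the true value
```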
NASA Astrophysics Data System (ADS)
Boé, Julien; Terray, Laurent
2014-05-01
Ensemble approaches for climate change projections have become ubiquitous. Because of large model-to-model variations and, generally, lack of rationale for the choice of a particular climate model against others, it is widely accepted that future climate change and its impacts should not be estimated based on a single climate model. Generally, as a default approach, the multi-model ensemble mean (MMEM) is considered to provide the best estimate of climate change signals. The MMEM approach is based on the implicit hypothesis that all the models provide equally credible projections of future climate change. This hypothesis is unlikely to be true, and ideally one would want to give more weight to more realistic models. A major issue with this alternative approach lies in the assessment of the relative credibility of future climate projections from different climate models, as they can only be evaluated against present-day observations: which present-day metric(s) should be used to decide which models are "good" and which models are "bad" in the future climate? Once a supposedly informative metric has been found, other issues arise. What is the best statistical method to combine multiple models' results taking into account their relative credibility measured by a given metric? How to be sure in the end that the metric-based estimate of future climate change is not in fact less realistic than the MMEM? It is impossible to provide strict answers to those questions in the climate change context. Yet, in this presentation, we propose a methodological approach based on a perfect model framework that could bring some useful elements of an answer to the questions previously mentioned. The basic idea is to take a random climate model in the ensemble and treat it as if it were the truth (results of this model, in both past and future climate, are called "synthetic observations"). Then, all the other members of the multi-model ensemble are used to derive, thanks to a metric-based approach, a posterior estimate of climate change based on the synthetic observation of the metric. Finally, it is possible to compare the posterior estimate to the synthetic observation of future climate change to evaluate the skill of the method. The main objective of this presentation is to describe and apply this perfect model framework to test different methodological issues associated with non-uniform model weighting and similar metric-based approaches. The methodology presented is general, but will be applied to the specific case of summer temperature change in France, for which previous works have suggested potentially useful metrics associated with soil-atmosphere and cloud-temperature interactions. The relative performances of different simple statistical approaches to combine multiple model results based on metrics will be tested. The impact of ensemble size, observational errors, internal variability, and model similarity will be characterized. The potential improvements associated with metric-based approaches compared to the MMEM in terms of errors and uncertainties will be quantified.
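A toy version of the perfect-model test described above: build a synthetic ensemble in which a present-day metric is partly predictive of future change, treat each model in turn as the synthetic truth, and compare the plain ensemble mean against a metric-weighted estimate. The ensemble size, the strength of the metric-change relationship and the Gaussian weighting kernel are all assumptions, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
n_models, sigma_w = 30, 0.5

# synthetic multi-model ensemble: future change partly explained by a present-day metric
metric = rng.normal(0.0, 1.0, n_models)
change = 2.0 + 1.0 * metric + rng.normal(0.0, 0.5, n_models)   # K of warming, toy numbers

err_mmem, err_weighted = [], []
for k in range(n_models):                        # treat model k as the synthetic truth
    others = np.delete(np.arange(n_models), k)
    mmem = change[others].mean()                 # multi-model ensemble mean estimate
    w = np.exp(-0.5 * ((metric[others] - metric[k]) / sigma_w) ** 2)
    weighted = np.sum(w * change[others]) / np.sum(w)   # metric-weighted estimate
    err_mmem.append(abs(mmem - change[k]))
    err_weighted.append(abs(weighted - change[k]))

print(f"mean abs error, MMEM            : {np.mean(err_mmem):.2f} K")
print(f"mean abs error, metric-weighted : {np.mean(err_weighted):.2f} K")
```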
Optimal post-experiment estimation of poorly modeled dynamic systems
NASA Technical Reports Server (NTRS)
Mook, D. Joseph
1988-01-01
Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.
Polarizable multipolar electrostatics for cholesterol
NASA Astrophysics Data System (ADS)
Fletcher, Timothy L.; Popelier, Paul L. A.
2016-08-01
FFLUX is a novel force field under development for biomolecular modelling, and is based on topological atoms and the machine learning method kriging. Successful kriging models have been obtained for realistic electrostatics of amino acids, small peptides, and some carbohydrates but here, for the first time, we construct kriging models for a sizeable ligand of great importance, which is cholesterol. Cholesterol's mean total (internal) electrostatic energy prediction error amounts to 3.9 kJ mol-1, which pleasingly falls below the threshold of 1 kcal mol-1 often cited for accurate biomolecular modelling. We present a detailed analysis of the error distributions.
Neural network decoder for quantum error correcting codes
NASA Astrophysics Data System (ADS)
Krastanov, Stefan; Jiang, Liang
Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
Performance Sensitivity Studies on the PIAA Implementation of the High-Contrast Imaging Testbed
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Lou, John; Shaklan, Stuart; Levine, Marie
2010-01-01
This slide presentation reviews the sensitivity studies on Phase-Induced Amplitude Apodization (PIAA), or pupil mapping, using the High-Contrast Imaging Testbed (HCIT). PIAA is a promising technique for high-dynamic-range stellar coronagraphy. This presentation reports on the investigation of the effects of the phase and rigid-body errors of various optics on the narrowband contrast performance of the PIAA/HCIT hybrid system. The results have shown that the 2-step wavefront control method utilizing two DMs is quite effective in compensating the effects of realistic phase and rigid-body errors of various optics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heid, Matthias; Luetkenhaus, Norbert
2006-05-15
We investigate the performance of a continuous-variable quantum key distribution scheme in a practical setting. More specifically, we take a nonideal error reconciliation procedure into account. The quantum channel connecting the two honest parties is assumed to be lossy but noiseless. Secret key rates are given for the case that the measurement outcomes are postselected or a reverse reconciliation scheme is applied. The reverse reconciliation scheme loses its initial advantage in the practical setting. If one combines postselection with reverse reconciliation, however, much of this advantage can be recovered.
A-posteriori error estimation for the finite point method with applications to compressible flow
NASA Astrophysics Data System (ADS)
Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio
2017-08-01
An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.
Zolgharni, M; Griffiths, H; Ledger, P D
2010-08-01
The feasibility of detecting a cerebral haemorrhage with a hemispherical magnetic induction tomography (MIT) coil array consisting of 56 exciter/sensor coils of 10 mm radius and operating at 1 and 10 MHz was investigated. A finite difference method combined with an anatomically realistic head model comprising 12 tissue types was used to simulate the strokes. Frequency-difference images were reconstructed from the modelled data with different levels of added phase noise and two types of a priori boundary errors: a displacement of the head and a size scaling error. The results revealed that a noise level of 3 millidegrees (standard deviation) was adequate for obtaining good visualization of a peripheral stroke (volume approximately 49 ml). The simulations further showed that the displacement error had to be within 3-4 mm and the scaling error within 3-4% so as not to cause unacceptably large artefacts on the images.
Model-based sphere localization (MBSL) in x-ray projections
NASA Astrophysics Data System (ADS)
Sawall, Stefan; Maier, Joscha; Leinweber, Carsten; Funck, Carsten; Kuntz, Jan; Kachelrieß, Marc
2017-08-01
The detection of spherical markers in x-ray projections is an important task in a variety of applications, e.g. geometric calibration and detector distortion correction. Therein, the projection of the sphere center on the detector is of particular interest as the spherical beads used are not ideal point-like objects. Only a few methods have been proposed to estimate this position on the detector with sufficient accuracy, and surrogate positions, e.g. the center of gravity, are used instead, impairing the results of subsequent algorithms. We propose to estimate the projection of the sphere center on the detector using a simulation-based method matching an artificial projection to the actual measurement. The proposed algorithm intrinsically corrects for all polychromatic effects included in the measurement and absent in the simulation by a polynomial which is estimated simultaneously. Furthermore, neither the acquisition geometry nor any object properties besides the fact that the object is of spherical shape need to be known to find the center of the bead. It is shown by simulations that the algorithm estimates the center projection with an error of less than 1% of the detector pixel size in case of realistic noise levels and that the method is robust to the sphere material, sphere size, and acquisition parameters. A comparison to three reference methods using simulations and measurements indicates that the proposed method is an order of magnitude more accurate than these algorithms. The proposed method is an accurate algorithm to estimate the center of spherical markers in CT projections in the presence of polychromatic effects and noise.
Zhang, Shengzhi; Yu, Shuai; Liu, Chaojun; Yuan, Xuebing; Liu, Sheng
2016-01-01
To provide a long-time reliable orientation, sensor fusion technologies are widely used to integrate available inertial sensors for the low-cost orientation estimation. In this paper, a novel dual-linear Kalman filter was designed for a multi-sensor system integrating MEMS gyros, an accelerometer, and a magnetometer. The proposed filter precludes the impacts of magnetic disturbances on the pitch and roll which the heading is subjected to. The filter can achieve robust orientation estimation for different statistical models of the sensors. The root mean square errors (RMSE) of the estimated attitude angles are reduced by 30.6% under magnetic disturbances. Owing to the reduction of system complexity achieved by smaller matrix operations, the mean total time consumption is reduced by 23.8%. Meanwhile, the separated filter offers greater flexibility for the system configuration, as it is possible to switch on or off the second stage filter to include or exclude the magnetometer compensation for the heading. Online experiments were performed on the homemade miniature orientation determination system (MODS) with the turntable. The average RMSE of estimated orientation are less than 0.4° and 1° during the static and low-dynamic tests, respectively. More realistic tests on two-wheel self-balancing vehicle driving and indoor pedestrian walking were carried out to evaluate the performance of the designed MODS when high accelerations and angular rates were introduced. Test results demonstrate that the MODS is applicable for the orientation estimation under various dynamic conditions. This paper provides a feasible alternative for low-cost orientation determination. PMID:26907294
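The sketch below is a deliberately simplified illustration of this kind of inertial fusion, not the paper's dual-linear Kalman filter: a scalar Kalman filter that predicts pitch from the gyro rate and corrects it with an accelerometer-derived angle. The simulated motion, gyro bias, noise levels and filter tuning are all assumptions.

```python
import numpy as np

def kalman_pitch(gyro_rate, accel_pitch, dt, q=0.01, r=4.0):
    """Scalar Kalman filter for pitch (deg): predict with the gyro rate, correct
    with the accelerometer-derived angle. q and r are the assumed process and
    measurement noise variances (r matches the simulated 2 deg accelerometer noise)."""
    x, p = accel_pitch[0], 1.0
    out = np.empty(len(gyro_rate))
    for k in range(len(gyro_rate)):
        x, p = x + gyro_rate[k] * dt, p + q                        # prediction step
        kgain = p / (p + r)                                        # Kalman gain
        x, p = x + kgain * (accel_pitch[k] - x), (1 - kgain) * p   # update step
        out[k] = x
    return out

# simulated slow oscillation with a gyro bias and a noisy accelerometer angle
dt, t = 0.01, np.arange(0, 20, 0.01)
true_pitch = 10.0 * np.sin(0.5 * t)
rng = np.random.default_rng(5)
gyro = np.gradient(true_pitch, dt) + 0.5 + rng.normal(0, 0.5, t.size)   # 0.5 deg/s bias
accel = true_pitch + rng.normal(0, 2.0, t.size)                          # noisy but unbiased
est = kalman_pitch(gyro, accel, dt)
print(f"pitch RMSE: {np.sqrt(np.mean((est - true_pitch) ** 2)):.2f} deg")
```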
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2014-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were contaminated with additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
An underestimated record breaking event: why summer 1540 was very likely warmer than 2003
NASA Astrophysics Data System (ADS)
Wetter, O.; Pfister, C.
2012-07-01
This paper challenges the argument obtained from the analysis of grape harvest date (GHD) and maximum latewood density (MXD) data that the 2003 heat-wave in Western Europe was the most extreme warm anomaly in the last millennium. We have evidence that the heat and drought in 1540, known from numerous contemporary narrative documentary reports, are not adequately reflected in these estimates. Vines suffered severely from the extreme heat and drought, which led vine-growers to postpone the harvest in the hope of a rain spell. At the time of harvest many grapes had already become raisins. Likewise, many trees suffered from premature leaf fall, probably as a result of decreased net photosynthesis, as was measured in 2003. To more realistically assess 1540's spring-summer (AMJJ) temperature we present a new Swiss series of critically evaluated GHD. Based on three different approaches that consider the drought effect on vines, temperatures were estimated to be between 4.3 °C and 6.3 °C (including the standard error of estimate (SEE) of 0.52 °C) above the 1901-2000 mean, which is significantly higher than the value of 2.9 °C measured in 2003. Considering the significance of soil moisture deficits for extreme heat-waves, this result still needs to be validated against estimated seasonal precipitation from independent evidence.
Remontet, Laurent; Uhry, Zoé; Bossard, Nadine; Iwaz, Jean; Belot, Aurélien; Danieli, Coraline; Charvat, Hadrien; Roche, Laurent
2018-01-01
Cancer survival trend analyses are essential to describe accurately the way medical practices impact patients' survival according to the year of diagnosis. To this end, survival models should be able to account simultaneously for non-linear and non-proportional effects and for complex interactions between continuous variables. However, in the statistical literature, there is no consensus yet on how to build such models that should be flexible but still provide smooth estimates of survival. In this article, we tackle this challenge by smoothing the complex hypersurface (time since diagnosis, age at diagnosis, year of diagnosis, and mortality hazard) using a multidimensional penalized spline built from the tensor product of the marginal bases of time, age, and year. Considering this penalized survival model as a Poisson model, we assess the performance of this approach in estimating the net survival with a comprehensive simulation study that reflects simple and complex realistic survival trends. The bias was generally small and the root mean squared error was good and often similar to that of the true model that generated the data. This parametric approach offers many advantages and interesting prospects (such as forecasting) that make it an attractive and efficient tool for survival trend analyses.
Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching
2010-06-01
Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. Forty-eight Taiwanese college students aged 20 to 24 years (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimates of smaller sizes were significantly higher than for larger sizes. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.
Objectified quantification of uncertainties in Bayesian atmospheric inversions
NASA Astrophysics Data System (ADS)
Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.
2015-05-01
Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator picturing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization over a large set of plausible errors that can be prescribed in the system. The marginalization consists in computing inversions for all possible error distributions, weighted by the probability of occurrence of the error distributions. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of the maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and the robustness of the method are tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia. Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices. An in-depth interpretation of the inversion results is then possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.
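The marginalization strategy can be illustrated on a toy linear-Gaussian inversion. In the hedged Python sketch below, each candidate pair of observation and prior error standard deviations yields an analytical posterior, and the posteriors are combined with weights proportional to the marginal likelihood of the observations, a stand-in for the maximum-likelihood-based approximation mentioned in the abstract; the dimensions, error grids and function names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Toy linear inversion: y = H x + e, with prior x ~ N(xb, B) and obs error ~ N(0, R)
nobs, nflux = 30, 5
H = rng.uniform(0, 1, (nobs, nflux))
x_true = np.array([1.0, 2.0, 0.5, 1.5, 1.0])
xb = np.ones(nflux)
y = H @ x_true + 0.2 * rng.standard_normal(nobs)

def posterior(sig_o, sig_b):
    """Analytical Gaussian posterior for one prescribed error configuration."""
    R, B = sig_o**2 * np.eye(nobs), sig_b**2 * np.eye(nflux)
    S = H @ B @ H.T + R
    K = B @ H.T @ np.linalg.solve(S, np.eye(nobs))
    xa = xb + K @ (y - H @ xb)
    Pa = B - K @ H @ B
    logw = multivariate_normal(H @ xb, S).logpdf(y)   # marginal likelihood of y
    return xa, Pa, logw

# Marginalize over a sampled set of plausible error statistics
samples = [posterior(so, sb) for so in (0.1, 0.2, 0.4) for sb in (0.5, 1.0, 2.0)]
logw = np.array([s[2] for s in samples])
w = np.exp(logw - logw.max()); w /= w.sum()
x_marg = sum(wi * s[0] for wi, s in zip(w, samples))
print(np.round(x_marg, 2), np.round(w, 3))
```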
Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error
NASA Astrophysics Data System (ADS)
Miller, Austin
In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
Characterizing the SWOT discharge error budget on the Sacramento River, CA
NASA Astrophysics Data System (ADS)
Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.
2013-12-01
The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite mission (planned launch in 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty is due to two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficient necessary for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly impact the discharge estimate accuracy. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope and width, and mass and momentum constraints. The algorithm is evaluated using both simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with the instrument error. This experiment addresses how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. In the experiment, the discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the error variance is explained by uncertainties in bathymetry and roughness. Second, we show how errors in the water-surface, slope, and width observations influence the accuracy of discharge estimates. Indeed, there is a significant sensitivity to water-surface, slope, and width errors due to the sensitivity of bathymetry and roughness to measurement errors. Increasing the water-surface error above 10 cm leads to a correspondingly sharp increase of errors in bathymetry and roughness. Increasing the slope error above 1.5 cm/km leads to a significant degradation due to direct error in the discharge estimates. As the width error increases past 20%, the discharge error budget is dominated by the width error. The above two experiments are based on AirSWOT scenarios. In addition, we explore the sensitivity of the algorithm to the SWOT scenarios.
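The sketch below is a strongly simplified, hypothetical stand-in for the discharge algorithm: a random-walk Metropolis sampler estimates two unknown base cross-sectional areas and a shared Manning roughness from noisy stage changes on two reaches, using only a mass-conservation (equal-discharge) misfit plus an assumed prior on roughness. The channel geometry, noise levels and priors are invented for illustration; the real algorithm also uses momentum constraints and SWOT/AirSWOT error models.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)

# Two river reaches assumed to carry the same discharge at every overpass
W = np.array([80.0, 100.0])                          # observed widths (m), taken constant here
S = np.array([2e-4, 1.5e-4])                         # observed slopes
n_true, A0_true = 0.03, np.array([400.0, 500.0])     # unknown roughness and base areas ("truth")

def manning_Q(A0, n, W, S, dh):
    A = A0 + W * dh                                  # wetted area = unobserved base area + W * stage change
    return (1.0 / n) * A * (A / W) ** (2.0 / 3.0) * np.sqrt(S)

# Synthetic observations: stage changes on reach 1, and the reach-2 stages matching its discharge
T = 40
dh1 = np.linspace(0.0, 3.0, T)
Q_true = manning_Q(A0_true[0], n_true, W[0], S[0], dh1)
dh2 = np.array([brentq(lambda d: manning_Q(A0_true[1], n_true, W[1], S[1], d) - q, -4.0, 10.0)
                for q in Q_true])
dh_obs = np.vstack([dh1, dh2]) + 0.05 * rng.standard_normal((2, T))   # ~5 cm height noise

def log_post(theta):
    A01, A02, n = theta
    if min(A01, A02, n) <= 0:
        return -np.inf
    Q1 = manning_Q(A01, n, W[0], S[0], dh_obs[0])
    Q2 = manning_Q(A02, n, W[1], S[1], dh_obs[1])
    prior_n = -0.5 * ((np.log(n) - np.log(0.035)) / 0.5) ** 2        # weak lognormal prior (assumed)
    return prior_n - 0.5 * np.sum(((Q1 - Q2) / 20.0) ** 2)           # mass-conservation misfit

# Random-walk Metropolis over (A0_1, A0_2, n)
theta, lp, chain = np.array([300.0, 400.0, 0.05]), -np.inf, []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [10.0, 10.0, 0.002])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
post = np.array(chain[5000:])
print(post.mean(axis=0), post.std(axis=0))
```

The broad posterior spread in base area and roughness produced by a constraint of this kind is consistent with the paper's finding that unknown bathymetry and roughness dominate the discharge error budget.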
Jonsen, Ian
2016-02-08
State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.
Economic measurement of medical errors using a hospital claims database.
David, Guy; Gunnarsson, Candace L; Waters, Heidi C; Horblyuk, Ruslan; Kaplan, Harold S
2013-01-01
The primary objective of this study was to estimate the occurrence and costs of medical errors from the hospital perspective. Methods from a recent actuarial study of medical errors were used to identify medical injuries. A visit qualified as an injury visit if at least 1 of 97 injury groupings occurred at that visit, and the percentage of injuries caused by medical error was estimated. Visits with more than four injuries were removed from the population to avoid overestimation of cost. Population estimates were extrapolated from the Premier hospital database to all US acute care hospitals. There were an estimated 161,655 medical errors in 2008 and 170,201 medical errors in 2009. Extrapolated to the entire US population, there were more than 4 million unique injury visits containing more than 1 million unique medical errors each year. This analysis estimated that the total annual cost of measurable medical errors in the United States was $985 million in 2008 and just over $1 billion in 2009. The median cost per error to hospitals was $892 for 2008 and rose to $939 in 2009. Nearly one third of all medical injuries were due to error in each year. Medical errors directly impact patient outcomes and hospitals' profitability, especially since 2008 when Medicare stopped reimbursing hospitals for care related to certain preventable medical errors. Hospitals must rigorously analyze causes of medical errors and implement comprehensive preventative programs to reduce their occurrence as the financial burden of medical errors shifts to hospitals. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
van den Tillaart-Haverkate, Maj; de Ronde-Brons, Inge; Dreschler, Wouter A; Houben, Rolph
2017-01-01
Single-microphone noise reduction leads to subjective benefit, but not to objective improvements in speech intelligibility. We investigated whether response times (RTs) provide an objective measure of the benefit of noise reduction and whether the effect of noise reduction is reflected in rated listening effort. Twelve normal-hearing participants listened to digit triplets that were either unprocessed or processed with one of two noise-reduction algorithms: an ideal binary mask (IBM) and a more realistic minimum mean square error estimator (MMSE). For each of these three processing conditions, we measured (a) speech intelligibility, (b) RTs on two different tasks (identification of the last digit and arithmetic summation of the first and last digit), and (c) subjective listening effort ratings. All measurements were performed at four signal-to-noise ratios (SNRs): -5, 0, +5, and +∞ dB. Speech intelligibility was high (>97% correct) for all conditions. A significant decrease in response time, relative to the unprocessed condition, was found for both IBM and MMSE for the arithmetic but not the identification task. Listening effort ratings were significantly lower for IBM than for MMSE and unprocessed speech in noise. We conclude that RT for an arithmetic task can provide an objective measure of the benefit of noise reduction. For young normal-hearing listeners, both ideal and realistic noise reduction can reduce RTs at SNRs where speech intelligibility is close to 100%. Ideal noise reduction can also reduce perceived listening effort.
NASA Astrophysics Data System (ADS)
Suryoputro, Nugroho; Suhardjono, Soetopo, Widandi; Suhartanto, Ery
2017-09-01
In calibrating hydrological models, there are generally two stages of activity: 1) determining realistic initial model parameters that represent the physical processes of the natural components, and 2) entering the initial parameter values, which are then refined by trial and error or automatically to obtain optimal values. Determining realistic initial values requires experience and user knowledge of the model, which is a problem for beginner model users. This paper presents another approach to estimating the infiltration parameters of the tank model. The parameters are approximated using the runoff coefficient of the rational method. The infiltration parameter is approximated simply as the difference between the percentage of total rainfall and the percentage of runoff. It is expected that the results of this research will accelerate the calibration of tank model parameters. The research was conducted on the Kali Bango sub-watershed in Malang Regency, with an area of 239.71 km2. Infiltration measurements were carried out from January 2017 to March 2017. Soil samples were analysed at the Soil Physics Laboratory, Department of Soil Science, Faculty of Agriculture, Universitas Brawijaya. Rainfall and discharge data were obtained from UPT PSAWS Bango Gedangan in Malang. Temperature, evaporation, relative humidity and wind speed data were obtained from the BMKG station of Karang Ploso, Malang. The results show that the initial value of the infiltration coefficient at the top tank outlet can be determined using the runoff coefficient of the rational method, with good results.
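The proposed approximation reduces to a one-line computation; the numbers below are hypothetical event totals used only to illustrate the arithmetic.

```python
# Hypothetical event totals (mm); C is the rational-method runoff coefficient
rainfall_total = 120.0
runoff_total = 42.0
C = runoff_total / rainfall_total          # ~0.35: fraction of rainfall that becomes runoff
infiltration_coefficient = 1.0 - C         # initial tank-model infiltration parameter, ~0.65
print(round(C, 2), round(infiltration_coefficient, 2))
```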
On-line estimation of error covariance parameters for atmospheric data assimilation
NASA Technical Reports Server (NTRS)
Dee, Dick P.
1995-01-01
A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
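A single-batch maximum-likelihood estimate of one or two covariance parameters can be written compactly. The Python sketch below is an illustrative stand-in, not the operational scheme: it assumes the innovations follow a zero-mean Gaussian whose covariance is a forecast-error term with a known correlation structure plus uncorrelated observation error, and it maximizes the likelihood over the two standard deviations; the correlation length, batch size and variable names are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Innovations d = y - H x_f for one batch of p simultaneous observations,
# assumed ~ N(0, sigma_f^2 * C_f + sigma_o^2 * I) with a known forecast correlation C_f.
p = 200
dist = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
C_f = np.exp(-dist / 10.0)                       # assumed forecast-error correlations
sig_f_true, sig_o_true = 1.5, 0.8
L = np.linalg.cholesky(sig_f_true**2 * C_f + sig_o_true**2 * np.eye(p))
d = L @ rng.standard_normal(p)                   # one synthetic innovation vector

def neg_log_likelihood(params):
    sf, so = np.exp(params)                      # work in log space to keep variances positive
    Sigma = sf**2 * C_f + so**2 * np.eye(p)
    sign, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (logdet + d @ np.linalg.solve(Sigma, d))

res = minimize(neg_log_likelihood, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print(np.exp(res.x))                             # ML estimates of (sigma_f, sigma_o)
```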
Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN
NASA Astrophysics Data System (ADS)
Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.
2016-12-01
In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 over 69 pixels in the Urmia Lake basin in northwest Iran. Different analytical approaches and indices are used to examine PERSIANN precision in the detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss and False Alarm (FA) estimation biases, while a continuous decomposition into systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability of PERSIANN estimations", is introduced, and the behaviour of the existing categorical/statistical measures and error components is also analyzed seasonally over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following: - The analyzed contingency table indices indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of the existence of systematic error. - A low level of reliability is observed in PERSIANN estimations at different categories, mostly associated with a high level of FA error. However, as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN's difficulty lies more in modeling the system and pattern of rainfall than in bias due to rainfall uncertainties. The level of systematic error also increases considerably for heavier rainfalls. It is also important to note that the PERSIANN error characteristics vary with season, owing to the conditions and rainfall patterns of each season, which shows the necessity of a seasonally differentiated approach for the calibration of this product. Overall, we believe that the different error component analyses performed in this study can substantially help further local studies in the post-calibration and bias reduction of PERSIANN estimations.
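A minimal Python sketch of the kind of decomposition described above is given below; the 0.1 mm/day rain/no-rain threshold, the linear-fit split into systematic and random parts, and the synthetic series are assumptions for illustration, not the exact definitions used in the study.

```python
import numpy as np

def error_components(sat, gauge, thresh=0.1):
    """Decompose satellite-gauge residuals into hit, miss and false-alarm biases,
    and split the total error into systematic and random parts (linear-fit form)."""
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    hit  = (sat >= thresh) & (gauge >= thresh)
    miss = (sat <  thresh) & (gauge >= thresh)
    fa   = (sat >= thresh) & (gauge <  thresh)
    hit_bias  = np.sum(sat[hit] - gauge[hit])
    miss_bias = -np.sum(gauge[miss])          # rain missed entirely
    fa_bias   = np.sum(sat[fa])               # rain reported where none was observed
    # systematic/random split from a linear fit of estimates on references
    a, b = np.polyfit(gauge, sat, 1)
    systematic = np.mean((a * gauge + b - gauge) ** 2)
    random_ = np.mean((sat - (a * gauge + b)) ** 2)
    pod = hit.sum() / max(hit.sum() + miss.sum(), 1)   # probability of detection
    far = fa.sum() / max(hit.sum() + fa.sum(), 1)      # false-alarm ratio
    return dict(hit=hit_bias, miss=miss_bias, fa=fa_bias,
                systematic=systematic, random=random_, POD=pod, FAR=far)

# Hypothetical daily series (mm/day)
rng = np.random.default_rng(4)
gauge = np.clip(rng.gamma(0.4, 6.0, 365) - 1.0, 0, None)
sat = np.clip(0.7 * gauge + rng.normal(0, 2.0, 365), 0, None)
print(error_components(sat, gauge))
```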
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)
2000-01-01
Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics arising from 1) the evolution of the official algorithms used to process the data, and 2) differences from other remote sensing systems such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.
NASA Astrophysics Data System (ADS)
Wurl, D.; Grainger, R. G.; McDonald, A. J.; Deshler, T.
2010-05-01
Stratospheric aerosol particles under non-volcanic conditions are typically smaller than 0.1 μm. Due to fundamental limitations of the scattering theory in the Rayleigh limit, these tiny particles are hard to measure by satellite instruments. As a consequence, current estimates of global aerosol properties retrieved from spectral aerosol extinction measurements tend to be strongly biased. Aerosol surface area densities, for instance, are observed to be about 40% smaller than those derived from correlative in situ measurements (Deshler et al., 2003). An accurate knowledge of the global distribution of aerosol properties is, however, essential to better understand and quantify the role they play in atmospheric chemistry, dynamics, radiation and climate. To address this need a new retrieval algorithm was developed, which employs a nonlinear Optimal Estimation (OE) method to iteratively solve for the monomodal size distribution parameters which are statistically most consistent with both the satellite-measured multi-wavelength aerosol extinction data and a priori information. By thus combining spectral extinction measurements (at visible to near infrared wavelengths) with prior knowledge of aerosol properties at background level, even the smallest particles are taken into account which are practically invisible to optical remote sensing instruments. The performance of the OE retrieval algorithm was assessed based on synthetic spectral extinction data generated from both monomodal and small-mode-dominant bimodal sulphuric acid aerosol size distributions. For monomodal background aerosol, the new algorithm was shown to fairly accurately retrieve the particle sizes and associated integrated properties (surface area and volume densities), even in the presence of large extinction uncertainty. The associated retrieved uncertainties are a good estimate of the true errors. In the case of bimodal background aerosol, where the retrieved (monomodal) size distributions naturally differ from the correct bimodal values, the associated surface area (A) and volume densities (V) are, nevertheless, fairly accurately retrieved, except at values larger than 1.0 μm2 cm-3 (A) and 0.05 μm3 cm-3 (V), where they tend to underestimate the true bimodal values. Due to the limited information content in the SAGE II spectral extinction measurements this kind of forward model error cannot be avoided here. Nevertheless, the retrieved uncertainties are a good estimate of the true errors in the retrieved integrated properties, except where the surface area density exceeds the 1.0 μm2 cm-3 threshold. When applied to near-global SAGE II satellite extinction measured in 1999 the retrieved OE surface area and volume densities are observed to be larger by, respectively, 20-50% and 10-40% compared to those estimates obtained by the SAGE II operational retrieval algorithm. An examination of the OE algorithm biases with in situ data indicates that the new OE aerosol property estimates tend to be more realistic than previous estimates obtained from remotely sensed data through other retrieval techniques. Based on the results of this study we therefore suggest that the new Optimal Estimation retrieval algorithm is able to contribute to an advancement in aerosol research by considerably improving current estimates of aerosol properties in the lower stratosphere under low aerosol loading conditions.
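The core of such a retrieval is the Gauss-Newton iteration of the nonlinear optimal estimation (maximum a posteriori) solution. The Python sketch below uses a made-up two-parameter forward model in place of Mie extinction calculations, so the wavelengths, a priori values and covariances are purely illustrative; only the iteration formula itself is standard.

```python
import numpy as np

# Toy forward model standing in for Mie extinction: k(lambda) = N * r^3 / (r^2 + lambda^2)
wl = np.array([0.385, 0.452, 0.525, 1.02])      # SAGE II-like wavelengths (micron), illustrative

def forward(x):
    r, N = np.exp(x)                             # state = log(median radius), log(number density)
    return N * r**3 / (r**2 + wl**2)

def jacobian(x, eps=1e-6):
    K = np.empty((len(wl), len(x)))
    for j in range(len(x)):
        dx = np.zeros_like(x); dx[j] = eps
        K[:, j] = (forward(x + dx) - forward(x - dx)) / (2 * eps)
    return K

x_true = np.log([0.08, 5.0])
Sy = np.diag((0.15 * forward(x_true)) ** 2)      # 15% extinction uncertainty (assumed)
y = forward(x_true) + np.linalg.cholesky(Sy) @ np.random.default_rng(5).standard_normal(len(wl))

xa = np.log([0.05, 3.0])                         # a priori background state (assumed)
Sa = np.diag([0.5**2, 1.0**2])

# Gauss-Newton iteration of the nonlinear optimal estimation (MAP) solution
x = xa.copy()
for _ in range(20):
    K = jacobian(x)
    Sx = np.linalg.inv(K.T @ np.linalg.solve(Sy, K) + np.linalg.inv(Sa))
    x = xa + Sx @ K.T @ np.linalg.solve(Sy, y - forward(x) + K @ (x - xa))
print(np.exp(x), np.sqrt(np.diag(Sx)))           # retrieved (r, N) and 1-sigma (log-space) errors
```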
NASA Astrophysics Data System (ADS)
Adebiyi, S. J.; Adebesin, B. O.; Ikubanni, S. O.; Joshua, B. W.
2017-05-01
Empirical models of the ionosphere, such as the International Reference Ionosphere (IRI) model, play a vital role in evaluating the environmental effect on the operation of space-based communication and navigation technologies. The IRI extended to Plasmasphere (IRI-Plas) model can be adjusted with external data to update its electron density profile while still maintaining the overall integrity of the model representations. In this paper, the performance of the total electron content (TEC) assimilation option of the IRI-Plas at two equatorial stations, Jicamarca, Peru (geographic: 12°S, 77°W, dip angle 0.8°) and Cachoeira Paulista, Brazil (geographic: 22.7°S, 45°W, dip angle -26°), is examined during quiet and disturbed conditions. TEC, F2 layer critical frequency (foF2), and peak height (hmF2) predicted when the model is operated without external input were used as a baseline in our model evaluation. Results indicate that TEC predicted by the assimilation option generally produced smaller estimation errors compared to the "no extra input" option during quiet and disturbed conditions. Generally, the error is smaller at the equatorial trough than near the crest for both quiet and disturbed days. With the assimilation option, there is a substantial improvement of storm-time estimations when compared with quiet-time predictions. The improvement is, however, independent of the storm's severity. Furthermore, the modeled foF2 and hmF2 are generally poor with TEC assimilation, particularly the hmF2 prediction, at the two locations during both quiet and disturbed conditions. Consequently, the IRI-Plas model assimilated with TEC values only may not be sufficient where more realistic instantaneous values of the peak parameters are required.
Sampling design for spatially distributed hydrogeologic and environmental processes
Christakos, G.; Olea, R.A.
1992-01-01
A methodology for the design of sampling networks over space is proposed. The methodology is based on spatial random field representations of nonhomogeneous natural processes, and on optimal spatial estimation techniques. One of the most important results of random field theory for physical sciences is its rationalization of correlations in spatial variability of natural processes. This correlation is extremely important both for interpreting spatially distributed observations and for predictive performance. The extent of site sampling and the types of data to be collected will depend on the relationship of subsurface variability to predictive uncertainty. While hypothesis formulation and initial identification of spatial variability characteristics are based on scientific understanding (such as knowledge of the physics of the underlying phenomena, geological interpretations, intuition and experience), the support offered by field data is statistically modelled. This model is not limited by the geometric nature of sampling and covers a wide range in subsurface uncertainties. A factorization scheme of the sampling error variance is derived, which possesses certain attractive properties allowing significant savings in computations. By means of this scheme, a practical sampling design procedure providing suitable indices of the sampling error variance is established. These indices can be used by way of multiobjective decision criteria to obtain the best sampling strategy. Neither the actual implementation of the in-situ sampling nor the solution of the large spatial estimation systems of equations is necessary. The required values of the accuracy parameters involved in the network design are derived using reference charts (readily available for various combinations of data configurations and spatial variability parameters) and certain simple yet accurate analytical formulas. Insight is gained by applying the proposed sampling procedure to realistic examples related to sampling problems in two dimensions. © 1992.
Protein surface roughness accounts for binding free energy of Plasmepsin II-ligand complexes.
Valdés-Tresanco, Mario E; Valdés-Tresanco, Mario S; Valiente, Pedro A; Cocho, Germinal; Mansilla, Ricardo; Nieto-Villar, J M
2018-01-01
The calculation of absolute binding affinities for protein-inhibitor complexes remains one of the main challenges in computational structure-based ligand design. The present work explored the calculations of surface fractal dimension (as a measure of surface roughness) and the relationship with experimental binding free energies of Plasmepsin II complexes. Plasmepsin II is an attractive target for novel therapeutic compounds to treat malaria. However, the structural flexibility of this enzyme is a drawback when searching for specific inhibitors. To address this, we performed separate explicitly solvated molecular dynamics simulations using the available high-resolution crystal structures of different Plasmepsin II complexes. Molecular dynamics simulations allowed a better approximation of the system dynamics and, therefore, a more reliable estimation of surface roughness. This constitutes a novel approximation in order to obtain more realistic values of fractal dimension, because previous works considered only x-ray structures. Binding site fractal dimension was calculated considering the ensemble of structures generated at different simulation times. A linear relationship between binding site fractal dimension and experimental binding free energies of the complexes was observed within 20 ns. Previous studies of the subject did not uncover this relationship. A regression model, coined the FD model, was built to estimate binding free energies from binding site fractal dimension values. Leave-one-out cross-validation showed that our model reproduced accurately the absolute binding free energies for our training set (R2 = 0.76; mean |error| = 0.55 kcal/mol; SD of error = 0.19 kcal/mol). The fact that such a simple model may be applied raises some questions that are addressed in the article. Copyright © 2017 John Wiley & Sons, Ltd.
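The FD model described above is an ordinary linear regression validated by leave-one-out cross-validation, which can be reproduced schematically as follows; the fractal-dimension and free-energy values below are hypothetical placeholders, not the article's data.

```python
import numpy as np

# Hypothetical training set: binding-site fractal dimension vs experimental dG (kcal/mol)
fd = np.array([2.18, 2.21, 2.25, 2.30, 2.34, 2.38, 2.41])
dg = np.array([-6.1, -6.8, -7.4, -8.3, -8.9, -9.6, -10.2])

def fit(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Leave-one-out cross-validation of the linear FD model
errors = []
for i in range(len(fd)):
    mask = np.arange(len(fd)) != i
    s, b = fit(fd[mask], dg[mask])
    errors.append(dg[i] - (s * fd[i] + b))
errors = np.abs(errors)

# Overall fit quality on the full (hypothetical) training set
s, b = fit(fd, dg)
ss_res = np.sum((dg - (s * fd + b)) ** 2)
r2 = 1 - ss_res / np.sum((dg - dg.mean()) ** 2)
print(f"R2={r2:.2f}  mean|error|={errors.mean():.2f}  SD={errors.std(ddof=1):.2f} kcal/mol")
```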
A Just Culture Approach to Managing Medication Errors.
Rogers, Erin; Griffin, Emily; Carnie, William; Melucci, Joseph; Weber, Robert J
2017-04-01
Medication errors continue to be a concern of health care providers and the public, in particular how to prevent harm from medication mistakes. Many health care workers are afraid to report errors for fear of retribution including the loss of professional licensure and even imprisonment. Most health care workers are silent, instead of admitting their mistake and discussing it openly with peers. This can result in further patient harm if the system causing the mistake is not identified and fixed; thus self-denial may have a negative impact on patient care outcomes. As a result, pharmacy leaders, in collaboration with others, must put systems in place that serve to prevent medication errors while promoting a "Just Culture" way of managing performance and outcomes. This culture must exist across disciplines and departments. Pharmacy leaders need to understand how to classify behaviors associated with errors, set realistic expectations, instill values for staff, and promote accountability within the workplace. This article reviews the concept of Just Culture and provides ways that pharmacy directors can use this concept to manage the degree of error in patient-centered pharmacy services.
Some economic benefits of a synchronous earth observatory satellite
NASA Technical Reports Server (NTRS)
Battacharyya, R. K.; Greenberg, J. S.; Lowe, D. S.; Sattinger, I. J.
1974-01-01
An analysis was made of the economic benefits which might be derived from reduced forecasting errors made possible by data obtained from a synchronous satellite system which can collect earth observation and meteorological data continuously and on demand. User costs directly associated with achieving benefits are included. In the analysis, benefits were evaluated which might be obtained as a result of improved thunderstorm forecasting, frost warning, and grain harvest forecasting capabilities. The anticipated system capabilities were used to arrive at realistic estimates of system performance on which to base the benefit analysis. Emphasis was placed on the benefits which result from system forecasting accuracies. Benefits from improved thunderstorm forecasts are indicated for the construction, air transportation, and agricultural industries. The effects of improved frost warning capability on the citrus crop are determined. The benefits from improved grain forecasting capability are evaluated in terms of both U.S. benefits resulting from domestic grain distribution and U.S. benefits from international grain distribution.
The impact of realistic source shape and flexibility on source mask optimization
NASA Astrophysics Data System (ADS)
Aoyama, Hajime; Mizuno, Yasushi; Hirayanagi, Noriyuki; Kita, Naonori; Matsui, Ryota; Izumi, Hirohiko; Tajima, Keiichi; Siebert, Joachim; Demmerle, Wolfgang; Matsuyama, Tomoyuki
2013-04-01
Source mask optimization (SMO) is widely used to make state-of-the-art semiconductor devices in high-volume manufacturing. To realize mature SMO solutions in production, the Intelligent Illuminator, an illumination system on Nikon scanners, is useful because it can generate freeform sources with high fidelity to the target. Proteus SMO, which employs a co-optimization method and inserts a validation step accounting for mask 3D effects and resist properties for an accurate prediction of wafer printing, can take into account the properties of the Intelligent Illuminator. We investigate the impact of the source properties on SMO for a static random-access memory pattern. The quality of a source generated on the scanner, compared to the SMO target, is evaluated with in-situ measurements and with aerial image simulation using those measurement data. Furthermore, we discuss an evaluation of the universality of the source for use in multiple scanners, with a validation against estimated scanner errors.
NASA Astrophysics Data System (ADS)
Farahani, Pooria; Lundberg, Marcus; Karlsson, Hans O.
2013-11-01
The SN2 substitution reactions at phosphorus play a key role in organic and biological processes. Quantum molecular dynamics simulations have been performed to study the prototype reaction Cl-+PH2Cl→ClPH2+Cl-, using one- and two-dimensional models. A potential energy surface, showing an energy well for a transition complex, was generated using ab initio electronic structure calculations. The one-dimensional model is essentially reflection free, whereas the more realistic two-dimensional model displays involved resonance structures in the reaction probability. The reaction rate is almost two orders of magnitude smaller for the two-dimensional than for the one-dimensional model. Energetic errors in the potential energy surface are estimated to affect the rate by only a factor of two. This shows that for these types of reactions it is more important to increase the dimensionality of the modeling than to increase the accuracy of the electronic structure calculation.
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
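The attenuation of a regression slope by input-data error can be demonstrated with a few lines of simulation; the rainfall-runoff relationship and error magnitudes below are invented for illustration and are not the USGS model of the study.

```python
import numpy as np

rng = np.random.default_rng(6)

# True storm rainfall x and runoff y = a + b*x + noise; observed rainfall has measurement error
n, a, b = 500, 5.0, 0.6
x = rng.gamma(2.0, 20.0, n)                      # true basin-average rainfall (mm)
y = a + b * x + rng.normal(0, 5.0, n)            # runoff (mm)
x_obs = x + rng.normal(0, 15.0, n)               # erroneous rainfall input

b_true_fit = np.polyfit(x, y, 1)[0]              # slope recovered from error-free input
b_err_fit = np.polyfit(x_obs, y, 1)[0]           # biased (attenuated) slope from erroneous input
# classical attenuation: E[b_hat] ~ b * var(x) / (var(x) + var(error))
expected = b * x.var() / (x.var() + 15.0**2)
print(b_true_fit, b_err_fit, expected)
```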
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
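For a concrete (and much simpler) special case than covariance structure analysis, the sketch below applies the infinitesimal jackknife to a Pearson correlation using its empirical influence values; the scaling convention and the comparison to the normal-theory standard error are standard textbook forms, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(7)

# Infinitesimal jackknife standard error of a Pearson correlation
n = 300
x = rng.standard_normal(n)
y = 0.6 * x + 0.8 * rng.standard_normal(n)

r = np.corrcoef(x, y)[0, 1]
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()
infl = zx * zy - 0.5 * r * (zx**2 + zy**2)     # empirical influence values for r
se_ij = np.sqrt(np.sum(infl**2)) / n           # infinitesimal jackknife standard error

se_asym = (1 - r**2) / np.sqrt(n)              # normal-theory SE, for comparison
print(r, se_ij, se_asym)
```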
Stress Recovery and Error Estimation for 3-D Shell Structures
NASA Technical Reports Server (NTRS)
Riggs, H. R.
2000-01-01
The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focussed on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).
Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J.
2017-01-01
Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In this paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric, are consistent without imposing a covariate or error distribution, and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses. PMID:29354018
NASA Astrophysics Data System (ADS)
Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.
2018-05-01
Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
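The histogram (binning) technique for an optimal estimator analysis, and the spurious contribution it can add, can be seen in a one-parameter toy example; the synthetic data, bin counts and true irreducible variance of 0.09 are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic data: quantity q depends on input parameter phi plus irreducible scatter
N = 200_000
phi = rng.uniform(0, 1, N)
q = np.sin(2 * np.pi * phi) + 0.3 * rng.standard_normal(N)   # true irreducible variance = 0.09

def irreducible_error_histogram(phi, q, nbins):
    """Histogram (binning) estimate of the optimal estimator <q|phi> and of the
    associated irreducible error E[(q - <q|phi>)^2]."""
    bins = np.clip((phi * nbins).astype(int), 0, nbins - 1)
    cond_mean = np.bincount(bins, weights=q, minlength=nbins) / \
                np.maximum(np.bincount(bins, minlength=nbins), 1)
    return np.mean((q - cond_mean[bins]) ** 2)

for nbins in (10, 100, 10_000):
    print(nbins, irreducible_error_histogram(phi, q, nbins))
# Too-coarse binning inflates the estimate (functional error leaks in), while very fine
# binning drives it below the true 0.09: the spurious contribution the paper warns about,
# which worsens rapidly as the number of input parameters grows.
```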
NASA Astrophysics Data System (ADS)
Borghi, Giacomo; Peet, Bart Jan; Tabacchini, Valerio; Schaart, Dennis R.
2016-07-01
New applications for positron emission tomography (PET) and combined PET/magnetic resonance imaging (MRI) are currently emerging, for example in the fields of neurological, breast, and pediatric imaging. Such applications require improved image quality, reduced dose, shorter scanning times, and more precise quantification. This can be achieved by means of dedicated scanners based on ultrahigh-performance detectors, which should provide excellent spatial resolution, precise depth-of-interaction (DOI) estimation, outstanding time-of-flight (TOF) capability, and high detection efficiency. Here, we introduce such an ultrahigh-performance TOF/DOI PET detector, based on a 32 mm × 32 mm × 22 mm monolithic LYSO:Ce crystal. The 32 mm × 32 mm front and back faces of the crystal are coupled to a digital photon counter (DPC) array, in so-called dual-sided readout (DSR) configuration. The fully digital detector offers a spatial resolution of ~1.1 mm full width at half maximum (FWHM)/~1.2 mm mean absolute error, together with a DOI resolution of ~2.4 mm FWHM, an energy resolution of 10.2% FWHM, and a coincidence resolving time of 147 ps FWHM. The time resolution closely approaches the best results (135 ps FWHM) obtained to date with small crystals made from the same material coupled to the same DPC arrays, illustrating the excellent correction for optical and electronic transit time spreads that can be achieved in monolithic scintillators using maximum-likelihood techniques for estimating the time of interaction. The performance barely degrades for events with missing data (up to 6 out of 32 DPC dies missing), permitting the use of almost all events registered under realistic acquisition conditions. Moreover, the calibration procedures and computational methods used for position and time estimation follow recently made improvements that make them fast and practical, opening up realistic perspectives for using DSR monolithic scintillator detectors in TOF-PET and TOF-PET/MRI systems.
Anderson, N G; Jolley, I J; Wells, J E
2007-08-01
To determine the major sources of error in ultrasonographic assessment of fetal weight and whether they have changed over the last decade. We performed a prospective observational study in 1991 and again in 2000 of a mixed-risk pregnancy population, estimating fetal weight within 7 days of delivery. In 1991, the Rose and McCallum formula was used for 72 deliveries. Inter- and intraobserver agreement was assessed within this group. Bland-Altman measures of agreement from log data were calculated as ratios. We repeated the study in 2000 in 208 consecutive deliveries, comparing predicted and actual weights for 12 published equations using Bland-Altman and percentage error methods. We compared bias (mean percentage error), precision (SD percentage error), and their consistency across the weight ranges. 95% limits of agreement ranged from -4.4% to +3.3% for inter- and intraobserver estimates, but were -18.0% to +24.0% for estimated and actual birth weight. There was no improvement in accuracy between 1991 and 2000. In 2000 only six of the 12 published formulae had overall bias within 7% and precision within 15%. There was greater bias and poorer precision in nearly all equations if the birth weight was < 1,000 g. Observer error is a relatively minor component of the error in estimating fetal weight; error due to the equation is a larger source of error. Improvements in ultrasound technology have not improved the accuracy of estimating fetal weight. Comparison of methods of estimating fetal weight requires statistical methods that can separate out bias, precision and consistency. Estimating fetal weight in the very low birth weight infant is subject to much greater error than it is in larger babies. Copyright (c) 2007 ISUOG. Published by John Wiley & Sons, Ltd.
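The bias/precision/limits-of-agreement bookkeeping used in such comparisons is straightforward to script; the sketch below uses hypothetical weights and a log-scale Bland-Altman ratio form, and is not tied to any of the 12 published formulae.

```python
import numpy as np

def weight_agreement(estimated, actual):
    """Bias (mean % error), precision (SD % error) and Bland-Altman 95% limits of
    agreement expressed as ratios, for estimated versus actual birth weight."""
    estimated, actual = np.asarray(estimated, float), np.asarray(actual, float)
    pct_err = 100.0 * (estimated - actual) / actual
    bias, precision = pct_err.mean(), pct_err.std(ddof=1)
    logdiff = np.log(estimated) - np.log(actual)
    loa = np.exp(logdiff.mean() + np.array([-1.96, 1.96]) * logdiff.std(ddof=1))
    return bias, precision, loa

# Hypothetical estimated (g) versus actual birth weights (g)
rng = np.random.default_rng(9)
actual = rng.uniform(800, 4500, 200)
estimated = actual * np.exp(rng.normal(0.01, 0.08, 200))
print(weight_agreement(estimated, actual))
```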
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
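A balanced one-factor random-effects (ANOVA) estimate of the systematic and random setup-error components can be written as follows; the array shape, the 2 mm/3 mm "true" components and the moment-based formulas are illustrative assumptions rather than the note's exact procedure for confidence intervals.

```python
import numpy as np

def variance_components(e):
    """ANOVA estimates of the systematic (between-patient) and random (within-patient)
    setup-error components from a balanced patients x fractions array (mm)."""
    e = np.asarray(e, float)
    k, m = e.shape                                   # patients, fractions per patient
    patient_means = e.mean(axis=1)
    msb = m * patient_means.var(ddof=1)              # between-patient mean square
    msw = ((e - patient_means[:, None]) ** 2).sum() / (k * (m - 1))
    Sigma = np.sqrt(max(msb - msw, 0.0) / m)         # systematic component
    sigma = np.sqrt(msw)                             # random component
    return e.mean(), Sigma, sigma

# Hypothetical setup errors: 20 patients x 5 fractions, true Sigma = 2 mm, sigma = 3 mm
rng = np.random.default_rng(10)
systematic = rng.normal(1.0, 2.0, (20, 1))
errors = systematic + rng.normal(0.0, 3.0, (20, 5))
print(variance_components(errors))
```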
Mukherjee, Joyeeta Mitra; Hutton, Brian F; Johnson, Karen L; Pretorius, P Hendrik; King, Michael A
2014-01-01
Motion estimation methods in single photon emission computed tomography (SPECT) can be classified into methods which depend on just the emission data (data-driven), or those that use some other source of information such as an external surrogate. The surrogate-based methods estimate the motion exhibited externally which may not correlate exactly with the movement of organs inside the body. The accuracy of data-driven strategies on the other hand is affected by the type and timing of motion occurrence during acquisition, the source distribution, and various degrading factors such as attenuation, scatter, and system spatial resolution. The goal of this paper is to investigate the performance of two data-driven motion estimation schemes based on the rigid-body registration of projections of motion-transformed source distributions to the acquired projection data for cardiac SPECT studies. Comparison is also made of six intensity based registration metrics to an external surrogate-based method. In the data-driven schemes, a partially reconstructed heart is used as the initial source distribution. The partially-reconstructed heart has inaccuracies due to limited angle artifacts resulting from using only a part of the SPECT projections acquired while the patient maintained the same pose. The performance of different cost functions in quantifying consistency with the SPECT projection data in the data-driven schemes was compared for clinically realistic patient motion occurring as discrete pose changes, one or two times during acquisition. The six intensity-based metrics studied were mean-squared difference (MSD), mutual information (MI), normalized mutual information (NMI), pattern intensity (PI), normalized cross-correlation (NCC) and entropy of the difference (EDI). Quantitative and qualitative analysis of the performance is reported using Monte-Carlo simulations of a realistic heart phantom including degradation factors such as attenuation, scatter and system spatial resolution. Further the visual appearance of motion-corrected images using data-driven motion estimates was compared to that obtained using the external motion-tracking system in patient studies. Pattern intensity and normalized mutual information cost functions were observed to have the best performance in terms of lowest average position error and stability with degradation of image quality of the partial reconstruction in simulations. In all patients, the visual quality of PI-based estimation was either significantly better or comparable to NMI-based estimation. Best visual quality was obtained with PI-based estimation in 1 of the 5 patient studies, and with external-surrogate based correction in 3 out of 5 patients. In the remaining patient study there was little motion and all methods yielded similar visual image quality. PMID:24107647
A stopping criterion for the iterative solution of partial differential equations
NASA Astrophysics Data System (ADS)
Rao, Kaustubh; Malan, Paul; Perot, J. Blair
2018-01-01
A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
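A minimal sketch of an error estimate built from successive solution changes is shown below for a damped Jacobi iteration; the geometric-series estimate and the 1e-8 tolerance are illustrative assumptions, and the fallback to a singular-value-based estimate mentioned in the abstract is not implemented here.

```python
import numpy as np

rng = np.random.default_rng(11)

# Small SPD system solved with damped Jacobi; the true solution is kept for checking
n = 50
A = np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, -1.0), 1) + np.diag(np.full(n - 1, -1.0), -1)
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)

D_inv = 1.0 / np.diag(A)
x = np.zeros(n)
prev_change = None
for k in range(200):
    x_new = x + 0.8 * D_inv * (b - A @ x)            # damped Jacobi sweep
    change = np.linalg.norm(x_new - x)
    if prev_change is not None and change < prev_change:
        rho = change / prev_change                    # estimated contraction factor
        err_est = change * rho / (1.0 - rho)          # geometric-series estimate of remaining error
        err_true = np.linalg.norm(x_new - x_true)
        if err_est < 1e-8:                            # stop on the estimated solution error
            print(k, err_est, err_true)
            break
    prev_change, x = change, x_new
```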
Consequences of Secondary Calibrations on Divergence Time Estimates.
Schenk, John J
2016-01-01
Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper than shallower nodes, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error, in which applying a normal, rather than uniform, prior distribution resulted in greater error. Secondary calibrations, in summary, lead to a false impression of precision and the distribution of age estimates shift away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates.
Comparing interval estimates for small sample ordinal CFA models
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
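The coverage-and-bias analysis the abstract calls for is easy to reproduce in simulation. The sketch below is a generic illustration (a Pearson correlation with Fisher-z confidence intervals, not the ordinal CFA estimators of the study): it counts how often the 95% interval covers the true value and whether misses fall above or below it.

```python
import numpy as np

rng = np.random.default_rng(1)
true_r, n, reps = 0.5, 30, 2000          # small-sample setting, illustrative values
cover = above = below = 0
for _ in range(reps):
    # draw a small bivariate-normal sample with correlation true_r
    x = rng.multivariate_normal([0.0, 0.0], [[1.0, true_r], [true_r, 1.0]], size=n)
    r = np.corrcoef(x.T)[0, 1]
    z, se = np.arctanh(r), 1.0 / np.sqrt(n - 3)          # Fisher z interval
    lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
    if lo <= true_r <= hi:
        cover += 1
    elif lo > true_r:
        above += 1        # interval entirely above the true value (positive bias)
    else:
        below += 1
print(f"coverage={cover/reps:.3f}  misses above={above}  misses below={below}")
```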
The mean sea surface height and geoid along the Geosat subtrack from Bermuda to Cape Cod
NASA Astrophysics Data System (ADS)
Kelly, Kathryn A.; Joyce, Terrence M.; Schubert, David M.; Caruso, Michael J.
1991-07-01
Measurements of near-surface velocity and concurrent sea level along an ascending Geosat subtrack were used to estimate the mean sea surface height and the Earth's gravitational geoid. Velocity measurements were made on three traverses of a Geosat subtrack within 10 days, using an acoustic Doppler current profiler (ADCP). A small bias in the ADCP velocity was removed by considering a mass balance for two pairs of triangles for which expendable bathythermograph measurements were also made. Because of the large curvature of the Gulf Stream, the gradient wind balance was used to estimate the cross-track component of geostrophic velocity from the ADCP vectors; this component was then integrated to obtain the sea surface height profile. The mean sea surface height was estimated as the difference between the instantaneous sea surface height from ADCP and the Geosat residual sea level, with mesoscale errors reduced by low-pass filtering. The error estimates were divided into a bias, tilt, and mesoscale residual; the bias was ignored because profiles were only determined within a constant of integration. The calculated mean sea surface height estimate agreed with an independent estimate of the mean sea surface height from Geosat, obtained by modeling the Gulf Stream as a Gaussian jet, within the expected errors in the estimates: the tilt error was 0.10 m, and the mesoscale error was 0.044 m. To minimize mesoscale errors in the estimate, the along-track geoid estimate was computed as the difference between the mean sea level from the Geosat Exact Repeat Mission and an estimate of the mean sea surface height, rather than as the difference between instantaneous profiles of sea level and sea surface height. In the critical region near the Gulf Stream the estimated error reduction using this method was about 0.07 m. Differences between the geoid estimate and a gravimetric geoid were not within the expected errors: the rms mesoscale difference was 0.24 m.
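The step of converting the cross-track velocity component into a height profile amounts to integrating a momentum balance along the track. A minimal sketch of the purely geostrophic version is below (the study used the gradient-wind balance, which adds a curvature term, and the constant of integration remains undetermined); function and variable names are illustrative.

```python
import numpy as np

def height_from_velocity(x_m, v_cross, lat_deg=36.0, g=9.81):
    """Integrate the geostrophic balance f*v = g*dh/dx along the subtrack.

    x_m:     along-track distance (m), monotonically increasing.
    v_cross: cross-track geostrophic velocity (m/s) at those points.
    Returns sea surface height (m) relative to the first point.
    """
    x = np.asarray(x_m, dtype=float)
    v = np.asarray(v_cross, dtype=float)
    f = 2.0 * 7.2921e-5 * np.sin(np.radians(lat_deg))   # Coriolis parameter
    dhdx = f * v / g
    # trapezoidal integration; h(0) = 0 stands in for the unknown constant
    return np.concatenate(([0.0], np.cumsum(0.5 * (dhdx[1:] + dhdx[:-1]) * np.diff(x))))
```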
Space-Time Error Representation and Estimation in Navier-Stokes Calculations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2006-01-01
The mathematical framework for a posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas is presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.
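For orientation, the basic dual-weighted-residual identity underlying the Becker-Rannacher framework can be sketched as follows; this is the generic form, which the paper's space-time hierarchy refines and extends.

```latex
% Generic dual-weighted-residual error representation.
% J: target functional, u: exact solution, u_h: discrete solution,
% z: adjoint (dual) solution, \varphi_h: any discrete test function,
% R_h: remainder term (vanishes for linear problems).
J(u) - J(u_h) \;=\; \rho(u_h)(z - \varphi_h) \;+\; R_h ,
\qquad
\rho(u_h)(\varphi) \;:=\; F(\varphi) - a(u_h;\varphi).
```

Localizing the weighted residual over space-time cells then yields the element indicators used to drive mesh adaptivity.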
NASA Astrophysics Data System (ADS)
Skaugen, Thomas; Mengistu, Zelalem
2016-12-01
In this study, we propose a new formulation of subsurface water storage dynamics for use in rainfall-runoff models. Under the assumption of a strong relationship between storage and runoff, the temporal distribution of catchment-scale storage is considered to have the same shape as the distribution of observed recessions (measured as the difference between the log of runoff values). The mean subsurface storage is estimated as the storage at steady state, where moisture input equals the mean annual runoff. An important contribution of the new formulation is that its parameters are derived directly from observed recession data and the mean annual runoff. The parameters are hence estimated prior to model calibration against runoff. The new storage routine is implemented in the parameter-parsimonious distance distribution dynamics (DDD) model and has been tested for 73 catchments in Norway of varying size, mean elevation and landscape type. Runoff simulations for the 73 catchments from two model structures (DDD with calibrated subsurface storage and DDD with the new estimated subsurface storage) were compared. Little loss in precision of runoff simulations was found using the new estimated storage routine. For the 73 catchments, an average Nash-Sutcliffe efficiency criterion of 0.73 was obtained using the new estimated storage routine compared with 0.75 using the calibrated storage routine. The average Kling-Gupta efficiency criterion was 0.80 and 0.81 for the new and old storage routines, respectively. Runoff recessions are more realistically modelled using the new approach, since the root mean square error between the mean of observed and simulated recession characteristics was reduced by almost 50 % using the new storage routine. The parameters of the proposed storage routine are found to be significantly correlated to catchment characteristics, which is potentially useful for predictions in ungauged basins.
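The key observable in the new routine is the distribution of recession steps. A minimal sketch of extracting those characteristics from a daily runoff series is below (illustrative only; the paper's exact definition of recession events and the mapping to the storage distribution are not reproduced).

```python
import numpy as np

def recession_characteristics(q):
    """Distribution of recession steps from a daily runoff series q (e.g. mm/day).

    Following the idea in the abstract, a recession step is measured as the
    difference between the log of consecutive runoff values during declining
    flow; the resulting sample characterises the storage dynamics.
    Variable names and the simple summary returned here are illustrative.
    """
    q = np.asarray(q, dtype=float)
    declining = (q[1:] < q[:-1]) & (q[1:] > 0) & (q[:-1] > 0)
    lam = np.log(q[:-1][declining]) - np.log(q[1:][declining])   # positive steps
    return {"mean": float(lam.mean()), "std": float(lam.std(ddof=1)), "samples": lam}
```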
Informed spectral analysis: audio signal parameter estimation using side information
NASA Astrophysics Data System (ADS)
Fourer, Dominique; Marchand, Sylvain
2013-12-01
Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach, which consists of directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by the estimation from the signal and may require a larger bitrate and entail a loss of compatibility with existing file formats. The purpose of this article is to propose a compromise approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and is used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling, which is a well-known model with practical applications and for which theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.
Method of estimating natural recharge to the Edwards Aquifer in the San Antonio area, Texas
Puente, Celso
1978-01-01
The principal errors in the estimates of annual recharge are related to errors in estimating runoff in ungaged areas, which represent about 30 percent of the infiltration area. The estimated long-term average annual recharge in each basin, however, is probably representative of the actual recharge because the averaging procedure tends to cancel out the major errors.
NASA Technical Reports Server (NTRS)
Redemann, Jens; Russell, Philip B.; Hamill, Patrick
2001-01-01
Atmospheric aerosols frequently contain hygroscopic sulfate species and black carbon (soot) inclusions. In this paper we report results of a modeling study to determine the change in aerosol absorption due to increases in ambient relative humidity (RH), for three common sulfate species, assuming that the soot mass fraction is present as a single concentric core within each particle. Because of the lack of detailed knowledge about various input parameters to models describing internally mixed aerosol particle optics, we focus on results that were aimed at determining the maximum effect that particle humidification may have on aerosol light absorption. In the wavelength range from 450 to 750 nm, maximum absorption humidification factors (ratio of wet to 'dry=30% RH' absorption) for single aerosol particles are found to be as large as 1.75 when the RH changes from 30 to 99.5%. Upon lesser humidification from 30 to 80% RH, absorption humidification for single particles is only as much as 1.2, even for the most favorable combination of initial ('dry') soot mass fraction and particle size. Integrated over monomodal lognormal particle size distributions, maximum absorption humidification factors range between 1.07 and 1.15 for humidification from 30 to 80% and between 1.1 and 1.35 for humidification from 30 to 95% RH for all species considered. The largest humidification factors at a wavelength of 450 nm are obtained for 'dry' particle size distributions that peak at a radius of 0.05 microns, while the absorption humidification factors at 700 nm are largest for 'dry' size distributions that are dominated by particles in the radius range of 0.06 to 0.08 microns. Single-scattering albedo estimates at ambient conditions are often based on absorption measurements at low RH (approx. 30%) and the assumption that aerosol absorption does not change upon humidification (i.e., absorption humidification equal to unity). Our modeling study suggests that this assumption alone can introduce absolute errors in estimates of the midvisible single-scattering albedo of up to 0.05 for realistic dry particle size distributions. Our study also indicates that this error increases with increasing wavelength. The potential errors in aerosol single-scattering albedo derived here are comparable in magnitude and in addition to uncertainties in single-scattering albedo estimates that are based on measurements of aerosol light absorption and scattering.
Adaptive Error Estimation in Linearized Ocean General Circulation Models
NASA Technical Reports Server (NTRS)
Chechelnitsky, Michael Y.
1999-01-01
Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not resolved by the 2° by 1° GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.
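The covariance matching idea can be illustrated with a generic linear least-squares fit: express the theoretical residual covariance as a linear combination of known structure matrices and solve for the coefficients. The sketch below is only that generic form, not the thesis's specific parameterization; the structure matrices and their interpretation are assumptions of the example.

```python
import numpy as np

def covariance_matching(S_obs, structures):
    """Fit error-variance parameters by covariance matching.

    S_obs:      sample covariance matrix of model-data residuals.
    structures: list of known covariance 'shape' matrices F_i, so that the
                theoretical residual covariance is sum_i alpha_i * F_i
                (e.g. one F_i per model-error mode, one for measurement noise).
    Returns the least-squares coefficients alpha_i.
    """
    A = np.column_stack([F.ravel() for F in structures])
    alpha, *_ = np.linalg.lstsq(A, S_obs.ravel(), rcond=None)
    return alpha
```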
Eisele, Thomas P; Rhoda, Dale A; Cutts, Felicity T; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J D; Arnold, Fred
2013-01-01
Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.
New dimension analyses with error analysis for quaking aspen and black spruce
NASA Technical Reports Server (NTRS)
Woods, K. D.; Botkin, D. B.; Feiveson, A. H.
1987-01-01
Dimension analyses for black spruce in wetland stands and for trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost-effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.
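Predictive allometric equations of the kind described are usually fit on a log-log scale, and a per-tree prediction error can be read off the residual variance. The sketch below is a generic example of that idea; the variable names, bias-corrected back-transform, and relative-error approximation are illustrative assumptions, not the estimators published in the paper.

```python
import numpy as np

def fit_allometric(dbh_cm, biomass_kg):
    """Fit ln(biomass) = a + b*ln(dbh) and return a predictor with a rough
    per-tree relative prediction error (assumes more than two sample trees)."""
    x, y = np.log(dbh_cm), np.log(biomass_kg)
    X = np.column_stack([np.ones_like(x), x])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    n, p = len(y), 2
    s2 = float(res[0]) / (n - p)                 # residual variance on log scale

    def predict(d):
        mean_log = beta[0] + beta[1] * np.log(d)
        biomass = np.exp(mean_log + 0.5 * s2)    # bias-corrected back-transform
        rel_err = np.sqrt(np.exp(s2) - 1.0)      # approximate relative prediction error
        return biomass, rel_err
    return predict
```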
Measuring rapid ocean tidal earth orientation variations with very long baseline interferometry
NASA Astrophysics Data System (ADS)
Sovers, O. J.; Jacobs, C. S.; Gross, R. S.
1993-11-01
Ocean tidal effects on universal time and polar motion (UTPM) are investigated at four nearly diurnal (K1, P1, O1, and Q1) and four nearly semidiurnal (K2, S2, M2, and N2) frequencies by analyzing very long baseline interferometry (VLBI) data extending from 1978 to 1992. We discuss limitations of comparisons between experiment and theory for the retrograde nearly diurnal polar motion components due to their degeneracy with prograde components of the nutation model. Estimating amplitudes of contributions to the modeled VLBI observables at these eight frequencies produces a statistically highly significant improvement of 7 mm to the residuals of a fit to the observed delays. Use of such an improved UTPM model also reduces the 14-30 mm scatter of baseline lengths about a time-linear model of tectonic motion by 3-14 mm, also with high significance levels. A total of 28 UTPM ocean tidal amplitudes can be unambiguously estimated from the data, with resulting UT1 and PM magnitudes as large as 21 μs and 270 microarc seconds (μas) and formal uncertainties of the order of 0.3 μs and 5 μas for UT1 and PM, respectively. Empirically determined UTPM amplitudes and phases are compared to values calculated theoretically by Gross from Seiler's global ocean tide model. The discrepancy between theory and experiment is larger by a factor of 3 for UT1 amplitudes (9 μs) than for prograde PM amplitudes (42 μas). The 14-year VLBI data span strongly attenuates the influence of mismodeled effects on estimated UTPM amplitudes and phases that are not coherent with the eight frequencies of interest. Magnitudes of coherent and quasi-coherent systematic errors are quantified by means of internal consistency tests. We conclude that coherent systematic effects are many times larger than the formal uncertainties and can be as large as 4 μs for UT1 and 60 μas for polar motion. On the basis of such realistic error estimates, 22 of the 31 fitted UTPM ocean tidal amplitudes differ from zero by more than 2σ.
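Estimating amplitudes and phases at a fixed set of tidal frequencies is, at its core, a harmonic least-squares fit. The sketch below shows that generic step for a single residual time series (illustrative only; the actual VLBI analysis estimates the tidal terms directly in the delay model together with many other parameters).

```python
import numpy as np

def fit_tidal_terms(t_days, series, periods_days):
    """Least-squares fit of in-phase/quadrature amplitudes at fixed tidal periods.

    t_days:       observation epochs (days).
    series:       residual time series, e.g. UT1 or a polar-motion component.
    periods_days: known tidal periods (K1, O1, M2, ...).
    Returns a list of (period, amplitude, phase in degrees).
    """
    t = np.asarray(t_days, dtype=float)
    cols = [np.ones_like(t)]                       # constant offset term
    for P in periods_days:
        w = 2.0 * np.pi / P
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)
    x, *_ = np.linalg.lstsq(A, np.asarray(series, dtype=float), rcond=None)
    out = []
    for i, P in enumerate(periods_days):
        c, s = x[1 + 2 * i], x[2 + 2 * i]
        out.append((P, float(np.hypot(c, s)), float(np.degrees(np.arctan2(s, c)))))
    return out
```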
Using MODIS and GLAS Data to Develop Timber Volume Estimates in Central Siberia
NASA Technical Reports Server (NTRS)
Ranson, K. Jon; Kimes, Daniel; Sun, Guoqing; Kharuk, Viatcheslav; Hyde, Peter; Nelson, Ross
2007-01-01
The boreal forest is the Earth's largest terrestrial biome, covering some 12 million km2 and accounting for about one third of this planet's total forest area. Mapping of the boreal forest's type, structural parameters and biomass is critical for understanding the boreal forest's significance in the carbon cycle and its response to and impact on global climate change. Ground-based forest inventories have much uncertainty in the inventory data, particularly in remote areas of Siberia where sampling is sparse and/or lacking. In addition, many of the forest inventories that do exist for Siberia are now a decade or more old. Thus, available forest inventories fail to capture the current conditions. Forest structure in a particular forest type and region can change significantly due to changing environmental conditions and natural and anthropogenic disturbance. Remote sensing methods can potentially overcome these problems. Multispectral sensors can be used to provide vegetation cover maps that show a timely and accurate geographic distribution of vegetation types rather than decade-old ground-based maps. Lidar sensors can be used to directly obtain measurements that can be used to derive critical forest structure information (e.g., height, density, and volume). These in turn can be used to estimate biomass components using allometric equations without having to rely on outdated forest inventories. Finally, remote sensing data are ideally suited to provide a sampling basis for a rigorous statistical estimate of the variance and error bound on forest structure measures. In this study, new remote sensing methods were applied to develop estimates of timber volume using NASA's MODerate resolution Imaging Spectroradiometer (MODIS) and unique waveform data of the Geoscience Laser Altimeter System (GLAS) for a 10 deg x 10 deg area in central Siberia. Using MODIS and GLAS data, maps were produced for cover type and timber volume for 2003, and a realistic variance (error bound) for timber volume was calculated for the study area. In this study we used only GLAS footprints that had a slope value of less than 10 deg. This was done to avoid large errors due to the effect of slope on the GLAS models. The method requires the integration of new remote sensing methods with available ground studies of forest timber volume conducted in Russian forests. The results were compared to traditional ground forest inventory methods reported in the literature and to ground truth collected in the study area.
Doubková, Marcela; Van Dijk, Albert I.J.M.; Sabel, Daniel; Wagner, Wolfgang; Blöschl, Günter
2012-01-01
Sentinel-1 will carry onboard a C-band radar instrument that will map the European continent once every four days and the global land surface at least once every twelve days, with a spatial resolution as fine as 5 × 20 m. The high temporal sampling rate and operational configuration make Sentinel-1 of interest for operational soil moisture monitoring. Currently, updated soil moisture data are made available at 1 km spatial resolution as a demonstration service using Global Mode (GM) measurements from the Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT. The service demonstrates the potential of the C-band observations to monitor variations in soil moisture. Importantly, a retrieval error estimate is also available; such estimates are needed to assimilate observations into models. The retrieval error is estimated by propagating sensor errors through the retrieval model. In this work, the existing ASAR GM retrieval error product is evaluated using independent top soil moisture estimates produced by the grid-based landscape hydrological model (AWRA-L) developed within the Australian Water Resources Assessment system (AWRA). The ASAR GM retrieval error estimate, an assumed prior AWRA-L error estimate and the variance in the respective datasets were used to spatially predict the root mean square error (RMSE) and the Pearson's correlation coefficient R between the two datasets. These were compared with the RMSE calculated directly from the two datasets. The predicted and computed RMSE showed a very high level of agreement in spatial patterns as well as good quantitative agreement; the RMSE was predicted to within an accuracy of 4% of saturated soil moisture over 89% of the Australian land mass. Predicted and calculated R maps corresponded to within an accuracy of 10% over 61% of the continent. The strong correspondence between the predicted and calculated RMSE and R builds confidence in the retrieval error model and derived ASAR GM error estimates. The ASAR GM and Sentinel-1 have the same basic physical measurement characteristics, and therefore a very similar retrieval error estimation method can be applied. Because of the expected improvements in radiometric resolution of the Sentinel-1 backscatter measurements, soil moisture estimation errors can be expected to be an order of magnitude less than those for ASAR GM. This opens the possibility for operationally available medium resolution soil moisture estimates with very well-specified errors that can be assimilated into hydrological or crop yield models, with potentially large benefits for land-atmosphere fluxes, crop growth, and water balance monitoring and modelling. PMID:23483015
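Predicting the RMSE and R between two independent estimates of the same field from their error variances follows directly from standard error-propagation identities. The sketch below shows those generic relations, assuming unbiased and mutually independent errors; it is not the exact formulation used in the paper.

```python
import numpy as np

def predicted_rmse_and_r(var_signal, var_err1, var_err2):
    """Predicted RMSE and Pearson R between two datasets observing the same signal.

    var_signal: variance of the common (true) soil moisture signal.
    var_err1:   error variance of dataset 1 (e.g. the ASAR GM retrieval error).
    var_err2:   error variance of dataset 2 (e.g. the assumed model error).
    """
    rmse = np.sqrt(var_err1 + var_err2)
    r = var_signal / np.sqrt((var_signal + var_err1) * (var_signal + var_err2))
    return rmse, r
```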
Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian
2016-08-01
In this paper, we study the problem of cardiac conduction velocity (CCV) estimation for sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model as zero-mean white Gaussian noise with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator for the case in which the synchronization times between the various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
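For a single, synchronized recording the planar-wavefront assumption reduces to fitting AT ≈ t0 + s·x, where s is the slowness vector and the conduction velocity is 1/||s||; with Gaussian AT errors of known variance, weighted least squares coincides with maximum likelihood. The sketch below shows this simplified case only (the paper's estimator additionally handles unknown synchronization times across sequential recordings); names and units are illustrative.

```python
import numpy as np

def estimate_ccv(positions_mm, activation_times_ms, noise_var_ms2=None):
    """Planar-wavefront fit AT_i ~ t0 + s . x_i; returns conduction velocity.

    positions_mm:        electrode positions, array of shape (n, 2) or (n, 3).
    activation_times_ms: extracted activation times, shape (n,).
    noise_var_ms2:       optional per-AT noise variances for weighting.
    """
    at = np.asarray(activation_times_ms, dtype=float)
    X = np.column_stack([np.ones(len(at)), np.asarray(positions_mm, dtype=float)])
    w = np.ones(len(at)) if noise_var_ms2 is None else 1.0 / np.asarray(noise_var_ms2)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ at)
    slowness = beta[1:]                       # ms per mm
    return 1.0 / np.linalg.norm(slowness)     # conduction velocity, mm/ms (= m/s)
```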
The pros and cons of code validation
NASA Technical Reports Server (NTRS)
Bobbitt, Percy J.
1988-01-01
Computational and wind tunnel error sources are examined and quantified using specific calculations of experimental data, and a substantial comparison of theoretical and experimental results, or a code validation, is discussed. Wind tunnel error sources considered include wall interference, sting effects, Reynolds number effects, flow quality and transition, and instrumentation such as strain gage balances, electronically scanned pressure systems, hot film gages, hot wire anemometers, and laser velocimeters. Computational error sources include math model equation sets, the solution algorithm, artificial viscosity/dissipation, boundary conditions, the uniqueness of solutions, grid resolution, turbulence modeling, and Reynolds number effects. It is concluded that, although improvements in theory are being made more quickly than in experiments, wind tunnel research has the advantage of the more realistic transition process and the right turbulence model inherent in a free-transition test.
Observations of fallibility in applications of modern programming methodologies
NASA Technical Reports Server (NTRS)
Gerhart, S. L.; Yelowitz, L.
1976-01-01
Errors, inconsistencies, or confusing points are noted in a variety of published algorithms, many of which are being used as examples in formulating or teaching principles of such modern programming methodologies as formal specification, systematic construction, and correctness proving. Common properties of these points of contention are abstracted. These properties are then used to pinpoint possible causes of the errors and to formulate general guidelines which might help to avoid further errors. The common characteristic of mathematical rigor and reasoning in these examples is noted, leading to some discussion about fallibility in mathematics, and its relationship to fallibility in these programming methodologies. The overriding goal is to cast a more realistic perspective on the methodologies, particularly with respect to older methodologies, such as testing, and to provide constructive recommendations for their improvement.
Tailored Codes for Small Quantum Memories
NASA Astrophysics Data System (ADS)
Robertson, Alan; Granade, Christopher; Bartlett, Stephen D.; Flammia, Steven T.
2017-12-01
We demonstrate that small quantum memories, realized via quantum error correction in multiqubit devices, can benefit substantially by choosing a quantum code that is tailored to the relevant error model of the system. For a biased noise model, with independent bit and phase flips occurring at different rates, we show that a single code greatly outperforms the well-studied Steane code across the full range of parameters of the noise model, including for unbiased noise. In fact, this tailored code performs almost optimally when compared with 10 000 randomly selected stabilizer codes of comparable experimental complexity. Tailored codes can even outperform the Steane code with realistic experimental noise, and without any increase in the experimental complexity, as we demonstrate by comparison using the error model observed in a recent seven-qubit trapped-ion experiment.
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.;
2006-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
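The compositing step has the familiar Bayesian form: each database profile is weighted by its radiative consistency with the observed brightness temperatures. The sketch below shows only that generic weighting (the full algorithm also composites latent-heating profiles and convective fraction, and handles many practical details not shown); all names are illustrative.

```python
import numpy as np

def bayesian_retrieval(obs_tb, db_tb, db_rainrate, obs_error_cov):
    """Posterior-mean rain rate from a database of simulated profiles.

    obs_tb:        observed brightness temperatures, shape (n_channels,).
    db_tb:         simulated brightness temperatures, shape (n_profiles, n_channels).
    db_rainrate:   surface rain rate of each database profile, shape (n_profiles,).
    obs_error_cov: observation/modelling error covariance, (n_channels, n_channels).
    """
    d = db_tb - obs_tb
    Cinv = np.linalg.inv(obs_error_cov)
    # weight = exp(-0.5 * d^T C^-1 d) for each database profile
    w = np.exp(-0.5 * np.einsum("ij,jk,ik->i", d, Cinv, d))
    w /= w.sum()                               # assumes at least one consistent profile
    return float(np.dot(w, db_rainrate))
```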
Decay in blood loss estimation skills after web-based didactic training.
Toledo, Paloma; Eosakul, Stanley T; Goetz, Kristopher; Wong, Cynthia A; Grobman, William A
2012-02-01
Accuracy in blood loss estimation has been shown to improve immediately after didactic training. The objective of this study was to evaluate retention of blood loss estimation skills 9 months after a didactic web-based training. Forty-four participants were recruited from a cohort that had undergone web-based training and testing in blood loss estimation. The web-based posttraining test, consisting of pictures of simulated blood loss, was repeated 9 months after the initial training and testing. The primary outcome was the difference in accuracy of estimated blood loss (percent error) at 9 months compared with immediately posttraining. At the 9-month follow-up, the median error in estimation worsened to -34.6%. Although better than the pretraining error of -47.8% (P = 0.003), the 9-month error was significantly less accurate than the immediate posttraining error of -13.5% (P = 0.01). Decay in blood loss estimation skills occurs by 9 months after didactic training.
NASA Technical Reports Server (NTRS)
Berg, Wesley; Avery, Susan K.
1995-01-01
Estimates of monthly rainfall have been computed over the tropical Pacific using passive microwave satellite observations from the special sensor microwave/imager (SSM/I) for the period from July 1987 through December 1990. These monthly estimates are calibrated using data from a network of Pacific atoll rain gauges in order to account for systematic biases and are then compared with several visible and infrared satellite-based rainfall estimation techniques for the purpose of evaluating the performance of the microwave-based estimates. Although several key differences among the various techniques are observed, the general features of the monthly rainfall time series agree very well. Finally, the significant error sources contributing to uncertainties in the monthly estimates are examined and an estimate of the total error is produced. The sampling error characteristics are investigated using data from two SSM/I sensors and a detailed analysis of the characteristics of the diurnal cycle of rainfall over the oceans and its contribution to sampling errors in the monthly SSM/I estimates is made using geosynchronous satellite data. Based on the analysis of the sampling and other error sources the total error was estimated to be of the order of 30 to 50% of the monthly rainfall for estimates averaged over 2.5 deg x 2.5 deg latitude/longitude boxes, with a contribution due to diurnal variability of the order of 10%.
Klonoff, David C.; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B.; Kirkman, M. Sue; Kovatchev, Boris
2014-01-01
Introduction: Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG), as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. Methods: A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. Results: SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified into 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments, when plotted on the SEG and compared with the same data plotted on the CEG and PEG, produced risk estimates that were more granular and reflective of a continuously increasing risk scale. Discussion: The SEG is a modern metric for clinical risk assessments of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision for quantifying risk, especially when the risks are low. This tool will be useful to allow regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. PMID:25562886
NASA Astrophysics Data System (ADS)
Honnell, Kevin; Burnett, Sarah; Yorke, Chloe'; Howard, April; Ramsey, Scott
2017-06-01
The Noh problem is a classic verification problem in the field of compressible flows. Simple to conceptualize, it is nonetheless difficult for numerical codes to predict correctly, making it an ideal code-verification test bed. In its original incarnation, the fluid is a simple ideal gas; once validated, however, these codes are often used to study highly non-ideal fluids and solids. In this work the classic Noh problem is extended beyond the commonly studied polytropic ideal gas to more realistic equations of state (EOS), including the stiff gas, the Noble-Abel gas, and the Carnahan-Starling hard-sphere fluid, thus enabling verification studies to be performed on more physically realistic fluids. Exact solutions are compared with numerical results obtained from the Lagrangian hydrocode FLAG, developed at Los Alamos. For these more realistic EOSs, the simulation errors decreased in magnitude both at the origin and at the shock, but also spread more broadly about these points compared to the ideal EOS. The overall spatial convergence rate remained first order.
Realistic Affective Forecasting: The Role of Personality
Hoerger, Michael; Chapman, Ben; Duberstein, Paul
2016-01-01
Affective forecasting often drives decision making. Although affective forecasting research has often focused on identifying sources of error at the event level, the present investigation draws upon the ‘realistic paradigm’ in seeking to identify factors that similarly influence predicted and actual emotions, explaining their concordance across individuals. We hypothesized that the personality traits neuroticism and extraversion would account for variation in both predicted and actual emotional reactions to a wide array of stimuli and events (football games, an election, Valentine’s Day, birthdays, happy/sad film clips, and an intrusive interview). As hypothesized, individuals who were more introverted and neurotic anticipated, correctly, that they would experience relatively more unpleasant emotional reactions, and those who were more extraverted and less neurotic anticipated, correctly, that they would experience relatively more pleasant emotional reactions. Personality explained 30% of the concordance between predicted and actual emotional reactions. Findings suggest three purported personality processes implicated in affective forecasting, highlight the importance of individual-differences research in this domain, and call for more research on realistic affective forecasts. PMID:26212463
Impact and quantification of the sources of error in DNA pooling designs.
Jawaid, A; Sham, P
2009-01-01
The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
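The correction for differential allelic amplification mentioned in the abstract is commonly written as p = A / (A + kB), with k the relative amplification of the two alleles, typically estimated from heterozygous individuals. The one-line sketch below is that generic correction (names illustrative), not the full variance decomposition developed in the paper.

```python
def pooled_allele_frequency(signal_a, signal_b, k_amplification):
    """Estimate the frequency of allele A from pooled-DNA signal intensities.

    signal_a, signal_b: raw intensities (e.g. peak heights) for alleles A and B.
    k_amplification:    ratio of allele-A to allele-B amplification efficiency,
                        typically estimated from heterozygote measurements.
    """
    return signal_a / (signal_a + k_amplification * signal_b)
```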
Bayes Error Rate Estimation Using Classifier Ensembles
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2003-01-01
The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information-theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
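The first (averaging-based) estimator is simple enough to sketch: average the members' posterior estimates and take the expected top-class error on held-out data. This is a generic plug-in version of the idea; the correlation-based and disagreement-based refinements described in the article are not shown.

```python
import numpy as np

def bayes_error_from_ensemble(posteriors):
    """Plug-in Bayes-error estimate from an ensemble of probabilistic classifiers.

    posteriors: array of shape (n_classifiers, n_samples, n_classes) holding each
                member's a posteriori class-probability estimates on held-out data.
    """
    p_bar = posteriors.mean(axis=0)                 # combine members by averaging
    return float(np.mean(1.0 - p_bar.max(axis=1)))  # expected top-class error
```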
NASA Technical Reports Server (NTRS)
Safren, H. G.
1987-01-01
The effect of atmospheric turbulence on the bit error rate of a space-to-ground near-infrared laser communications link is investigated, for a link using binary pulse position modulation and an avalanche photodiode detector. Formulas are presented for the mean and variance of the bit error rate as a function of signal strength. Because these formulas require numerical integration, they are of limited practical use. Approximate formulas are derived which are easy to compute and sufficiently accurate for system feasibility studies, as shown by numerical comparison with the exact formulas. A very simple formula is derived for the bit error rate as a function of signal strength, which requires only the evaluation of an error function. It is shown by numerical calculations that, for realistic values of the system parameters, the increase in the bit error rate due to turbulence does not exceed about thirty percent for signal strengths of four hundred photons per bit or less. The increase in signal strength required to maintain an error rate of one in 10 million is about one or two tenths of a dB.
Inspection error and its adverse effects - A model with implications for practitioners
NASA Technical Reports Server (NTRS)
Collins, R. D., Jr.; Case, K. E.; Bennett, G. K.
1978-01-01
Inspection error has clearly been shown to have adverse effects upon the results desired from a quality assurance sampling plan. These effects upon performance measures have been well documented from a statistical point of view. However, little work has been presented to convince the QC manager of the unfavorable cost consequences resulting from inspection error. This paper develops a very general, yet easily used, mathematical cost model. The basic format of the well-known Guthrie-Johns model is used. However, it is modified as required to assess the effects of attributes sampling errors of the first and second kind. The economic results, under different yet realistic conditions, will no doubt be of interest to QC practitioners who face similar problems daily. Sampling inspection plans are optimized to minimize economic losses due to inspection error. Unfortunately, any error at all results in some economic loss which cannot be compensated for by sampling plan design; however, improvements over plans which neglect the presence of inspection error are possible. Implications for human performance betterment programs are apparent, as are trade-offs between modifying the sampling plan and investing in inspection and training improvements.
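A standard building block for this kind of model is the apparent (error-distorted) fraction defective and the resulting lot acceptance probability of an attributes plan. The sketch below shows only that textbook piece under binomial sampling; the Guthrie-Johns cost terms themselves are not reproduced, and the variable names are illustrative.

```python
from math import comb

def apparent_defect_rate(p, e1, e2):
    """Fraction of items *classified* defective when the true fraction defective is p,
    with type I error e1 (good item rejected) and type II error e2 (defective accepted)."""
    return p * (1.0 - e2) + (1.0 - p) * e1

def acceptance_probability(n, c, p, e1=0.0, e2=0.0):
    """Probability that an (n, c) attributes sampling plan accepts a lot, computed
    with the apparent defect rate seen by an error-prone inspector."""
    pe = apparent_defect_rate(p, e1, e2)
    return sum(comb(n, k) * pe**k * (1.0 - pe)**(n - k) for k in range(c + 1))
```

Comparing acceptance_probability(n, c, p) with and without e1, e2 makes the economic consequences of inspection error directly visible for a given plan.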
NASA Astrophysics Data System (ADS)
Cao, Lu; Li, Hengnian
2016-10-01
For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low-precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is proposed based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noise; therefore, it has the advantage of dealing with various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low-precision sensors than the traditional unscented Kalman filter (UKF).
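The UT sampling step that the filter builds on is standard and can be sketched generically: generate scaled sigma points from the state mean and covariance, then propagate them through the nonlinear dynamics or measurement model. The code below shows only that standard step (Julier-Uhlmann scaled form with conventional parameter values); the predictive variable-structure correction itself is not sketched.

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled unscented-transform sigma points and their mean/covariance weights."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # matrix square root
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    return np.array(pts), wm, wc
```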
NASA Technical Reports Server (NTRS)
Jaggi, S.; Quattrochi, D.; Baskin, R.
1992-01-01
The effective flux incident upon the detectors of a thermal sensor, after it has been corrected for atmospheric effects, is a function of a non-linear combination of the emissivity of the target for that channel and the temperature of the target. The sensor system cannot separate the contributions of emissivity and temperature to the flux value. A method that estimates the bounds on these temperatures and emissivities from thermal data is described. This method is then tested with remotely sensed data obtained from NASA's Thermal Infrared Multispectral Scanner (TIMS), a 6-channel thermal sensor. Since this is an under-determined set of equations, i.e. there are 7 unknowns (6 emissivities and 1 temperature) and 6 equations (corresponding to the 6 channel fluxes), there exists in theory an infinite number of combinations of emissivities and temperature that satisfy these equations. Using realistic bounds on the emissivities, bounds on the temperature are calculated. These temperature bounds are then refined to estimate a tighter bound on the emissivity of the source. An error analysis is also carried out to quantitatively determine the extent of uncertainty introduced in the estimates of these parameters. The method is useful only when a realistic set of bounds can be obtained for the emissivities of the data; in the case of water, the lower and upper bounds were set at 0.97 and 1.00, respectively. Five flights were flown in succession at altitudes of 2 km (low), 6 km (mid), 12 km (high), and then back again at 6 km and 2 km. The area selected was the Ross Barnett Reservoir near Jackson, Mississippi, and the mission was flown during the predawn hours of 1 Feb. 1992. Radiosonde data were collected for that duration to profile the characteristics of the atmosphere, and ground-truth temperatures using thermometers and radiometers were also obtained over an area of the reservoir. Two independent runs of the radiometer data averaged 7.03 ± 0.70 and 7.31 ± 0.88, respectively, while the algorithm yields temperatures ranging from 7.68 for the low-altitude data to 8.73 for the high-altitude data.
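The core of the bounding idea can be illustrated for a single channel by inverting Planck's law under assumed emissivity bounds; the wavelength, the 280 K example, and the single-channel simplification below are illustrative assumptions (TIMS uses the six channels jointly).

```python
import numpy as np

# Planck radiation constants (SI, spectral radiance per unit wavelength)
C1 = 1.191042e-16   # 2*h*c**2  [W m^2 sr^-1]
C2 = 1.438777e-2    # h*c/k     [m K]

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance [W m^-3 sr^-1]."""
    return C1 / (wavelength_m**5 * (np.exp(C2 / (wavelength_m * temp_k)) - 1.0))

def brightness_temperature(wavelength_m, radiance):
    """Invert Planck's law for temperature given spectral radiance."""
    return C2 / (wavelength_m * np.log(1.0 + C1 / (wavelength_m**5 * radiance)))

def temperature_bounds(wavelength_m, measured_radiance, eps_lo=0.97, eps_hi=1.00):
    """Bounds on surface temperature for one channel, given an
    atmospherically corrected surface-leaving radiance and assumed
    emissivity bounds (single-channel illustration only)."""
    # A lower emissivity implies a larger blackbody-equivalent radiance,
    # hence a higher temperature bound, and vice versa.
    t_lo = brightness_temperature(wavelength_m, measured_radiance / eps_hi)
    t_hi = brightness_temperature(wavelength_m, measured_radiance / eps_lo)
    return t_lo, t_hi

# Example: a hypothetical 10.5-micron channel observing water near 280 K
wl = 10.5e-6
radiance = 0.98 * planck_radiance(wl, 280.0)
print(temperature_bounds(wl, radiance))
```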
Tissue resistivity estimation in the presence of positional and geometrical uncertainties.
Baysal, U; Eyüboğlu, B M
2000-08-01
Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating the electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with those of the conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3%, compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
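A minimal sketch of the generic constrained linear MMSE inverse that underlies this kind of estimator is shown below; the matrix sizes, covariances, and forward model are hypothetical, and the paper's MiMSEE additionally folds geometrical-uncertainty statistics into the noise covariance.

```python
import numpy as np

def mmse_inverse(S, cov_x, cov_n):
    """Linear minimum mean squared error inverse for y = S x + n: maps
    measurements y to an estimate of x using a priori covariances of
    the unknowns (cov_x) and of the noise/uncertainty terms (cov_n)."""
    return cov_x @ S.T @ np.linalg.inv(S @ cov_x @ S.T + cov_n)

rng = np.random.default_rng(0)
n_meas, n_regions = 64, 8                 # hypothetical problem sizes
S = rng.normal(size=(n_meas, n_regions))  # sensitivity (forward) matrix
cov_x = np.diag(rng.uniform(0.5, 2.0, n_regions))  # resistivity-range prior
cov_n = 0.01 * np.eye(n_meas)                      # instrumentation noise

x_true = rng.multivariate_normal(np.zeros(n_regions), cov_x)
y = S @ x_true + rng.multivariate_normal(np.zeros(n_meas), cov_n)

B = mmse_inverse(S, cov_x, cov_n)   # precomputed optimum inverse matrix
x_hat = B @ y
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Because the inverse matrix B is computed once from the covariances, it can be applied to each new measurement vector without re-solving the inverse problem.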
Pilots strategically compensate for display enlargements in surveillance and flight control tasks.
Stelzer, Emily Muthard; Wickens, Christopher D
2006-01-01
Experiments were conducted to assess the impact of display size on flight control, airspace surveillance, and goal-directed target search. Research on 3-D displays has shown that display scale compression influences the perception of flight path deviation, though less is known about the causes that drive this effect. In addition, research on attention-based tasks has shown that information displaced to large eccentricities can increase effort, but it is unclear whether this effect produces a performance difference in complex displays. In Experiment 1, 16 pilots completed a low-fidelity flight control task under single- and dual-axis control. In Experiment 2, the control task from Experiment 1 was scaled up to a more realistic flight environment, and pilots performed hazard surveillance and target search tasks. For flight control, pilots exhibited less path error and greater stick activity with a large display, which was attributed both to enhanced resolution and to the fact that larger depictions of error lead to greater urgency in correcting deviations. Size did not affect hazard surveillance or search, as pilots adapted their scanning patterns in response to the enlargement of the displays. Although pilots were adaptive to display changes in search and surveillance, display size reduction diminished estimates of flight path deviation and control performance because of lowered resolution and reduced control urgency. Care should therefore be taken when manipulating display size, as size reduction can diminish control performance.
New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction
NASA Astrophysics Data System (ADS)
Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.
2017-12-01
Mass change estimates from GRACE spherical harmonic solutions have north/south stripes and east/west banded errors due to random noise and modeling errors. Low-pass filters such as decorrelation and Gaussian smoothing are typically applied to reduce noise and errors; however, these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (the JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes than filtered GRACE spherical harmonics (SH) results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both the JPL and CSR mascon solutions also contain leakage errors. We developed a new post-processing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation or Gaussian smoothing, the two main sources of leakage errors. We found that, without any post-processing, the noise and errors in the spherical harmonic solutions introduce very clear high-frequency components in the spatial domain. By removing these high-frequency components while preserving the overall pattern of the signal, we obtained better mass estimates with minimal leakage errors. The new global mass change estimates captured all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, Northwestern India, the Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes that are in good agreement with the leakage-corrected (forward modeled) SH results.
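A 1-D sketch of the underlying idea, removing the highest-frequency intrinsic mode functions and keeping the large-scale pattern, is given below. It assumes the PyEMD package is available and uses a synthetic signal; the paper applies its extraction to 2-D GRACE mass-change fields, which this sketch does not reproduce.

```python
import numpy as np
from PyEMD import EMD  # assumes the PyEMD package is installed

def remove_high_frequency_imfs(signal, n_drop=2):
    """Decompose a 1-D signal into intrinsic mode functions (IMFs),
    ordered from high to low frequency, and reconstruct it without the
    first n_drop modes (1-D illustration only)."""
    imfs = EMD()(signal)
    n_drop = min(n_drop, len(imfs) - 1)
    return imfs[n_drop:].sum(axis=0)   # keep lower-frequency modes and trend

# Example: a smooth signal contaminated with stripe-like high-frequency noise
x = np.linspace(0, 4 * np.pi, 512)
clean = np.sin(x) + 0.3 * np.cos(0.25 * x)
noisy = clean + 0.2 * np.sin(40 * x) + 0.05 * np.random.randn(x.size)
smoothed = remove_high_frequency_imfs(noisy)
print(np.abs(smoothed - clean).mean())
```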
The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates
Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin
2011-01-01
A neural network data mining model is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). Using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors in GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K, with a mean absolute percentage error (MAPE) of 1.03%. For the hourly mean SST estimate, the RMSE is reduced from 0.66 K to 0.44 K, with a MAPE of 1.3%. PMID:22164030
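A minimal sketch of this kind of correction is shown below, using scikit-learn's MLPRegressor as a stand-in for the BPN and a synthetic match-up data set; the predictor relationships, data, and network size are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
n = 2000

# Hypothetical match-up data: satellite-minus-buoy SST error as a function
# of air temperature, relative humidity, and wind speed (synthetic).
air_t = rng.uniform(20, 32, n)   # deg C
rh = rng.uniform(40, 100, n)     # percent
wind = rng.uniform(0, 15, n)     # m/s
sst_error = (0.02 * (air_t - 26) + 0.004 * (rh - 70)
             - 0.015 * wind + 0.1 * rng.standard_normal(n))  # K

X = np.column_stack([air_t, rh, wind])
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X[:1500], sst_error[:1500])

# Predicting the error lets it be subtracted from the satellite SST;
# the residual RMSE after correction is the RMSE of (error - prediction).
pred = model.predict(X[1500:])
rmse_before = np.sqrt(np.mean(sst_error[1500:] ** 2))
rmse_after = np.sqrt(mean_squared_error(sst_error[1500:], pred))
print(f"RMSE before correction: {rmse_before:.3f} K, after: {rmse_after:.3f} K")
```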
NASA Astrophysics Data System (ADS)
Debchoudhury, Shantanab; Earle, Gregory
2017-04-01
Retarding Potential Analyzers (RPAs) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables of the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on estimation uncertainties; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in the estimation of thermal plasma parameters by adding noise to simulated data derived from existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits, since a number of nano-satellite missions focus on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.
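The Monte Carlo structure of such an error characterization can be sketched as follows: simulate a retarding curve, add noise, refit, and tabulate the parameter errors. The erf-shaped curve below is a simplified placeholder, not the full RPA current-voltage model or the ionospheric models used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def retarding_curve(v, i0, v_half, width):
    """Simplified stand-in for an RPA characteristic: collected current
    falls off around a retarding potential v_half with a width set by
    the thermal spread (placeholder model only)."""
    return 0.5 * i0 * (1.0 - erf((v - v_half) / width))

rng = np.random.default_rng(2)
v = np.linspace(0, 10, 200)                     # sweep voltage (V)
true = dict(i0=1e-6, v_half=4.0, width=0.8)     # hypothetical "truth"
clean = retarding_curve(v, **true)

# Monte Carlo: add additive instrument noise, refit, collect parameter errors
errors = []
for _ in range(500):
    noisy = clean + rng.normal(0.0, 0.02 * true["i0"], v.size)
    popt, _ = curve_fit(retarding_curve, v, noisy, p0=[8e-7, 3.0, 1.0])
    errors.append(popt - np.array(list(true.values())))
errors = np.array(errors)
print("parameter bias:", errors.mean(axis=0))
print("parameter std: ", errors.std(axis=0))
```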
Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials
Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.
2013-01-01
Background: Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data are generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias.
Purpose: In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest.
Methods: We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions, including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of the estimators is studied in simulations.
Results: When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors, and in settings where treatment is not randomized, standard treatment effect estimates will be biased. In all settings, parameter estimates for the original, error-prone covariates will be biased. Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases.
Limitations: The extent of bias and the performance of the methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different from those considered here need further study.
Conclusions: In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
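A toy illustration of the basic idea, using an audited subsample to estimate the measurement-error variance of an error-prone covariate and correct the attenuated regression slope, is sketched below; this simple reliability-ratio correction is an assumption for illustration and is not the paper's moment-based or multiple imputation estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_audit = 5000, 500

# Simulated trial: outcome depends on a covariate recorded with error
x_true = rng.normal(0, 1, n)
y = 2.0 * x_true + rng.normal(0, 1, n)
x_obs = x_true + rng.normal(0, 0.7, n)   # error-prone database values

# Naive slope from the error-prone covariate is attenuated toward zero
naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

# Audit a random subsample: source documents reveal the true values,
# giving an estimate of the measurement-error variance.
idx = rng.choice(n, n_audit, replace=False)
err_var = np.var(x_obs[idx] - x_true[idx], ddof=1)

# Correct the slope by the estimated reliability ratio
reliability = 1.0 - err_var / np.var(x_obs)
corrected = naive / reliability
print(f"naive {naive:.2f}, corrected {corrected:.2f} (target 2.00)")
```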
NASA Technical Reports Server (NTRS)
Pavlis, Nikolaos K.
1991-01-01
An error analysis study was conducted to assess the current accuracies and the anticipated future improvements in the estimation of geopotential differences over intercontinental locations. An observation/estimation scheme was proposed and studied, whereby gravity disturbance measurements on the Earth's surface, in caps surrounding the estimation points, are combined with corresponding data in caps directly over these points at the altitude of a low-orbiting satellite, for the estimation of the geopotential difference between the terrestrial stations. The mathematical modeling required to relate the primary observables to the parameters to be estimated was studied for both the terrestrial data and the data at altitude. Emphasis was placed on the examination of systematic effects and on the corresponding reductions that need to be applied to the measurements to avoid systematic errors. The error estimation for the geopotential differences was performed using both truncation theory and least squares collocation with ring averages in the case where only observations on the Earth's surface are used. The error analysis indicated that, with the currently available global geopotential model OSU89B and with gravity disturbance data in 2 deg caps surrounding the estimation points, the error of the geopotential difference arising from errors in the reference model and the cap data is about 23 kgal cm for 30 deg station separation.